Camel Spring Boot Reference Guide 3.18
Abstract
Preface
Making open source more inclusive
Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.
Chapter 1. AMQP
Since Camel 1.2
Both producer and consumer are supported
The AMQP component supports the AMQP 1.0 protocol using the JMS Client API of the Qpid project.
Maven users will need to add the following dependency to their pom.xml for this component:

<dependency>
    <groupId>org.apache.camel</groupId>
    <artifactId>camel-amqp</artifactId>
    <version>${camel.version}</version>
    <!-- use the same version as your Camel core version -->
</dependency>
1.1. URI format
amqp:[queue:|topic:]destinationName[?options]
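For example, a route can consume from a queue and publish to a topic using this URI format. The following Java DSL route is a minimal sketch; the destination names (incomingOrders, processedOrders) are placeholders chosen for illustration:

```java
import org.apache.camel.builder.RouteBuilder;

public class AmqpRouteExample extends RouteBuilder {
    @Override
    public void configure() {
        // Consume from a queue ("queue:" is assumed when no prefix is given)
        from("amqp:queue:incomingOrders")
            .log("Received order: ${body}")
            // Publish the processed message to a topic
            .to("amqp:topic:processedOrders");
    }
}
```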
1.2. Configuring Options
Camel components are configured on two separate levels:
- component level
- endpoint level
1.2.1. Configuring Component Options
The component level is the highest level; it holds the general and common configurations that are inherited by the endpoints. For example, a component may have security settings, credentials for authentication, URLs for network connections, and so forth.
Some components have only a few options, while others may have many. Because components typically have pre-configured defaults that are commonly used, you often only need to configure a few options on a component, or none at all.
Configuring components can be done with the Component DSL, in a configuration file (application.properties|yaml), or directly with Java code.
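As an illustration, the sketch below configures the AMQP component in Java with a Qpid JMS connection factory; the broker URL and credentials are placeholder values, not defaults. With the Spring Boot starter, the same component options can typically also be set in application.properties under the camel.component.amqp prefix.

```java
import org.apache.camel.CamelContext;
import org.apache.camel.component.amqp.AMQPComponent;
import org.apache.qpid.jms.JmsConnectionFactory;

public class AmqpComponentConfig {

    public static void configure(CamelContext camelContext) {
        // Placeholder credentials and broker URL, for illustration only
        JmsConnectionFactory connectionFactory =
                new JmsConnectionFactory("admin", "admin", "amqp://localhost:5672");

        // Configure the shared component; endpoints created from it inherit these settings
        AMQPComponent amqp = camelContext.getComponent("amqp", AMQPComponent.class);
        amqp.setConnectionFactory(connectionFactory);
        amqp.setTestConnectionOnStartup(true);
    }
}
```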
1.2.2. Configuring Endpoint Options
You will find yourself configuring endpoints the most, as endpoints often have many options that let you configure exactly what you need the endpoint to do. The options are also categorized according to whether the endpoint is used as a consumer (from), as a producer (to), or both.
Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type-safe way of configuring endpoints.
A good practice when configuring options is to use Property Placeholders, which allow you to avoid hardcoding URLs, port numbers, sensitive information, and other settings. In other words, placeholders let you externalize the configuration from your code, giving you more flexibility and reuse, as shown in the sketch below.
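As a minimal sketch, the route below configures the endpoint with query parameters and resolves the queue name from a property placeholder; the property key orders.queue and the option values are examples only:

```java
import org.apache.camel.builder.RouteBuilder;

public class AmqpEndpointOptionsExample extends RouteBuilder {
    @Override
    public void configure() {
        // {{orders.queue}} is resolved from the configuration, e.g.
        // orders.queue=incomingOrders in application.properties
        from("amqp:queue:{{orders.queue}}?concurrentConsumers=5&asyncConsumer=true")
            .to("log:orders");
    }
}
```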
The following two sections list all the options, first for the component and then for the endpoint.
1.3. Component Options
The AMQP component supports 100 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
clientId (common) | Sets the JMS client ID to use. Note that this value, if specified, must be unique and can only be used by a single JMS connection instance. It is typically only required for durable topic subscriptions. If using Apache ActiveMQ you may prefer to use Virtual Topics instead. | String | |
connectionFactory (common) | The connection factory to be used. A connection factory must be configured either on the component or endpoint. | ConnectionFactory | |
disableReplyTo (common) | Specifies whether Camel ignores the JMSReplyTo header in messages. If true, Camel does not send a reply back to the destination specified in the JMSReplyTo header. You can use this option if you want Camel to consume from a route and you do not want Camel to automatically send back a reply message because another component in your code handles the reply message. You can also use this option if you want to use Camel as a proxy between different message brokers and you want to route message from one system to another. | false | boolean |
durableSubscriptionName (common) | The durable subscriber name for specifying durable topic subscriptions. The clientId option must be configured as well. | String | |
includeAmqpAnnotations (common) | Whether to include AMQP annotations when mapping from AMQP to Camel Message. Setting this to true maps AMQP message annotations that contain a JMS_AMQP_MA_ prefix to message headers. Due to limitations in Apache Qpid JMS API, currently delivery annotations are ignored. | false | boolean |
jmsMessageType (common) | Allows you to force the use of a specific javax.jms.Message implementation for sending JMS messages. Possible values are: Bytes, Map, Object, Stream, Text. By default, Camel would determine which JMS message type to use from the In body type. This option allows you to specify it. | JmsMessageType | |
replyTo (common) | Provides an explicit ReplyTo destination (overrides any incoming value of Message.getJMSReplyTo() in consumer). | String | |
testConnectionOnStartup (common) | Specifies whether to test the connection on startup. This ensures that when Camel starts, all the JMS consumers have a valid connection to the JMS broker. If a connection cannot be granted, then Camel throws an exception on startup. This ensures that Camel is not started with failed connections. The JMS producers are tested as well. | false | boolean |
acknowledgementModeName (consumer) | The JMS acknowledgement name, which is one of: SESSION_TRANSACTED, CLIENT_ACKNOWLEDGE, AUTO_ACKNOWLEDGE, DUPS_OK_ACKNOWLEDGE. | AUTO_ACKNOWLEDGE | String |
artemisConsumerPriority (consumer) | Consumer priorities allow you to ensure that high priority consumers receive messages while they are active. Normally, active consumers connected to a queue receive messages from it in a round-robin fashion. When consumer priorities are in use, messages are delivered round-robin if multiple active consumers exist with the same high priority. Messages will only go to lower priority consumers when the high priority consumers do not have credit available to consume the message, or those high priority consumers have declined to accept the message (for instance because it does not meet the criteria of any selectors associated with the consumer). | int | |
asyncConsumer (consumer) | Whether the JmsConsumer processes the Exchange asynchronously. If enabled then the JmsConsumer may pickup the next message from the JMS queue, while the previous message is being processed asynchronously (by the Asynchronous Routing Engine). This means that messages may be processed not 100% strictly in order. If disabled (as default) then the Exchange is fully processed before the JmsConsumer will pickup the next message from the JMS queue. Note if transacted has been enabled, then asyncConsumer=true does not run asynchronously, as transaction must be executed synchronously (Camel 3.0 may support async transactions). | false | boolean |
autoStartup (consumer) | Specifies whether the consumer container should auto-startup. | true | boolean |
cacheLevel (consumer) | Sets the cache level by ID for the underlying JMS resources. See cacheLevelName option for more details. | int | |
cacheLevelName (consumer) | Sets the cache level by name for the underlying JMS resources. Possible values are: CACHE_AUTO, CACHE_CONNECTION, CACHE_CONSUMER, CACHE_NONE, and CACHE_SESSION. The default setting is CACHE_AUTO. See the Spring documentation and Transactions Cache Levels for more information. | CACHE_AUTO | String |
concurrentConsumers (consumer) | Specifies the default number of concurrent consumers when consuming from JMS (not for request/reply over JMS). See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. When doing request/reply over JMS then the option replyToConcurrentConsumers is used to control number of concurrent consumers on the reply message listener. | 1 | int |
maxConcurrentConsumers (consumer) | Specifies the maximum number of concurrent consumers when consuming from JMS (not for request/reply over JMS). See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. When doing request/reply over JMS then the option replyToMaxConcurrentConsumers is used to control number of concurrent consumers on the reply message listener. | int | |
replyToDeliveryPersistent (consumer) | Specifies whether to use persistent delivery by default for replies. | true | boolean |
selector (consumer) | Sets the JMS selector to use. | String | |
subscriptionDurable (consumer) | Set whether to make the subscription durable. The durable subscription name to be used can be specified through the subscriptionName property. Default is false. Set this to true to register a durable subscription, typically in combination with a subscriptionName value (unless your message listener class name is good enough as subscription name). Only makes sense when listening to a topic (pub-sub domain), therefore this method switches the pubSubDomain flag as well. | false | boolean |
subscriptionName (consumer) | Set the name of a subscription to create. To be applied in case of a topic (pub-sub domain) with a shared or durable subscription. The subscription name needs to be unique within this client’s JMS client id. Default is the class name of the specified message listener. Note: Only 1 concurrent consumer (which is the default of this message listener container) is allowed for each subscription, except for a shared subscription (which requires JMS 2.0). | String | |
subscriptionShared (consumer) | Set whether to make the subscription shared. The shared subscription name to be used can be specified through the subscriptionName property. Default is false. Set this to true to register a shared subscription, typically in combination with a subscriptionName value (unless your message listener class name is good enough as subscription name). Note that shared subscriptions may also be durable, so this flag can (and often will) be combined with subscriptionDurable as well. Only makes sense when listening to a topic (pub-sub domain), therefore this method switches the pubSubDomain flag as well. Requires a JMS 2.0 compatible message broker. | false | boolean |
acceptMessagesWhileStopping (consumer (advanced)) | Specifies whether the consumer accepts messages while it is stopping. You may consider enabling this option if you start and stop JMS routes at runtime while there are still messages enqueued on the queue. If this option is false and you stop the JMS route, then messages may be rejected, and the JMS broker would have to attempt redeliveries, which yet again may be rejected, and eventually the message may be moved to a dead letter queue on the JMS broker. To avoid this, it is recommended to enable this option. | false | boolean |
allowReplyManagerQuickStop (consumer (advanced)) | Whether the DefaultMessageListenerContainer used in the reply managers for request-reply messaging allow the DefaultMessageListenerContainer.runningAllowed flag to quick stop in case JmsConfiguration#isAcceptMessagesWhileStopping is enabled, and org.apache.camel.CamelContext is currently being stopped. This quick stop ability is enabled by default in the regular JMS consumers but to enable for reply managers you must enable this flag. | false | boolean |
consumerType (consumer (advanced)) | The consumer type to use, which can be one of: Simple, Default, or Custom. The consumer type determines which Spring JMS listener to use. Default will use org.springframework.jms.listener.DefaultMessageListenerContainer, Simple will use org.springframework.jms.listener.SimpleMessageListenerContainer. When Custom is specified, the MessageListenerContainerFactory defined by the messageListenerContainerFactory option will determine what org.springframework.jms.listener.AbstractMessageListenerContainer to use. | Default | ConsumerType |
defaultTaskExecutorType (consumer (advanced)) | Specifies what default TaskExecutor type to use in the DefaultMessageListenerContainer, for both consumer endpoints and the ReplyTo consumer of producer endpoints. Possible values: SimpleAsync (uses Spring’s SimpleAsyncTaskExecutor) or ThreadPool (uses Spring’s ThreadPoolTaskExecutor with optimal values - cached threadpool-like). If not set, it defaults to the previous behaviour, which uses a cached thread pool for consumer endpoints and SimpleAsync for reply consumers. The use of ThreadPool is recommended to reduce thread trash in elastic configurations with dynamically increasing and decreasing concurrent consumers. | DefaultTaskExecutorType | |
eagerLoadingOfProperties (consumer (advanced)) | Enables eager loading of JMS properties and payload as soon as a message is loaded which generally is inefficient as the JMS properties may not be required but sometimes can catch early any issues with the underlying JMS provider and the use of JMS properties. See also the option eagerPoisonBody. | false | boolean |
eagerPoisonBody (consumer (advanced)) | If eagerLoadingOfProperties is enabled and the JMS message payload (JMS body or JMS properties) is poison (cannot be read/mapped), then set this text as the message body instead so the message can be processed (the cause of the poison is already stored as an exception on the Exchange). This can be turned off by setting eagerPoisonBody=false. See also the option eagerLoadingOfProperties. | Poison JMS message due to ${exception.message} | String |
exposeListenerSession (consumer (advanced)) | Specifies whether the listener session should be exposed when consuming messages. | false | boolean |
replyToConsumerType (consumer (advanced)) | The consumer type of the reply consumer (when doing request/reply), which can be one of: Simple, Default, or Custom. The consumer type determines which Spring JMS listener to use. Default will use org.springframework.jms.listener.DefaultMessageListenerContainer, Simple will use org.springframework.jms.listener.SimpleMessageListenerContainer. When Custom is specified, the MessageListenerContainerFactory defined by the messageListenerContainerFactory option will determine what org.springframework.jms.listener.AbstractMessageListenerContainer to use. | Default | ConsumerType |
replyToSameDestinationAllowed (consumer (advanced)) | Whether a JMS consumer is allowed to send a reply message to the same destination that the consumer is using to consume from. This prevents an endless loop by consuming and sending back the same message to itself. | false | boolean |
taskExecutor (consumer (advanced)) | Allows you to specify a custom task executor for consuming messages. | TaskExecutor | |
deliveryDelay (producer) | Sets delivery delay to use for send calls for JMS. This option requires JMS 2.0 compliant broker. | -1 | long |
deliveryMode (producer) | Specifies the delivery mode to be used. Possible values are those defined by javax.jms.DeliveryMode. NON_PERSISTENT = 1 and PERSISTENT = 2. | Integer | |
deliveryPersistent (producer) | Specifies whether persistent delivery is used by default. | true | boolean |
explicitQosEnabled (producer) | Set if the deliveryMode, priority or timeToLive qualities of service should be used when sending messages. This option is based on Spring’s JmsTemplate. The deliveryMode, priority and timeToLive options are applied to the current endpoint. This contrasts with the preserveMessageQos option, which operates at message granularity, reading QoS properties exclusively from the Camel In message headers. | false | Boolean |
formatDateHeadersToIso8601 (producer) | Sets whether JMS date properties should be formatted according to the ISO 8601 standard. | false | boolean |
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
preserveMessageQos (producer) | Set to true, if you want to send message using the QoS settings specified on the message, instead of the QoS settings on the JMS endpoint. The following three headers are considered JMSPriority, JMSDeliveryMode, and JMSExpiration. You can provide all or only some of them. If not provided, Camel will fall back to use the values from the endpoint instead. So, when using this option, the headers override the values from the endpoint. The explicitQosEnabled option, by contrast, will only use options set on the endpoint, and not values from the message header. | false | boolean |
priority (producer) | Values greater than 1 specify the message priority when sending (where 1 is the lowest priority and 9 is the highest). The explicitQosEnabled option must also be enabled in order for this option to have any effect. | 4 | int |
replyToConcurrentConsumers (producer) | Specifies the default number of concurrent consumers when doing request/reply over JMS. See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. | 1 | int |
replyToMaxConcurrentConsumers (producer) | Specifies the maximum number of concurrent consumers when using request/reply over JMS. See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. | int | |
replyToOnTimeoutMaxConcurrentConsumers (producer) | Specifies the maximum number of concurrent consumers for continue routing when timeout occurred when using request/reply over JMS. | 1 | int |
replyToOverride (producer) | Provides an explicit ReplyTo destination in the JMS message, which overrides the setting of replyTo. It is useful if you want to forward the message to a remote Queue and receive the reply message from the ReplyTo destination. | String | |
replyToType (producer) | Allows for explicitly specifying which kind of strategy to use for replyTo queues when doing request/reply over JMS. Possible values are: Temporary, Shared, or Exclusive. By default Camel will use temporary queues. However if replyTo has been configured, then Shared is used by default. This option allows you to use exclusive queues instead of shared ones. See Camel JMS documentation for more details, and especially the notes about the implications if running in a clustered environment, and the fact that Shared reply queues have lower performance than the alternatives Temporary and Exclusive. | ReplyToType | |
requestTimeout (producer) | The timeout for waiting for a reply when using the InOut Exchange Pattern (in milliseconds). The default is 20 seconds. You can include the header CamelJmsRequestTimeout to override this endpoint configured timeout value, and thus have per message individual timeout values. See also the requestTimeoutCheckerInterval option. | 20000 | long |
timeToLive (producer) | When sending messages, specifies the time-to-live of the message (in milliseconds). | -1 | long |
allowAdditionalHeaders (producer (advanced)) | This option is used to allow additional headers which may have values that are invalid according to JMS specification. For example some message systems such as WMQ do this with header names using prefix JMS_IBM_MQMD_ containing values with byte array or other invalid types. You can specify multiple header names separated by comma, and use as suffix for wildcard matching. | String | |
allowNullBody (producer (advanced)) | Whether to allow sending messages with no body. If this option is false and the message body is null, then a JMSException is thrown. | true | boolean |
alwaysCopyMessage (producer (advanced)) | If true, Camel will always make a JMS message copy of the message when it is passed to the producer for sending. Copying the message is needed in some situations, such as when a replyToDestinationSelectorName is set (incidentally, Camel will set the alwaysCopyMessage option to true, if a replyToDestinationSelectorName is set). | false | boolean |
correlationProperty (producer (advanced)) | When using the InOut exchange pattern, use this JMS property instead of the JMSCorrelationID JMS property to correlate messages. If set, messages will be correlated solely on the value of this property; the JMSCorrelationID property will be ignored and not set by Camel. | String | |
disableTimeToLive (producer (advanced)) | Use this option to force disabling time to live. For example, when you do request/reply over JMS, Camel will by default use the requestTimeout value as time to live on the message being sent. The problem is that the sender and receiver systems have to have their clocks synchronized, which is not always easy to achieve. So you can use disableTimeToLive=true to not set a time to live value on the sent message. Then the message will not expire on the receiver system. See below in section About time to live for more details. | false | boolean |
forceSendOriginalMessage (producer (advanced)) | When using mapJmsMessage=false Camel will create a new JMS message to send to a new JMS destination if you touch the headers (get or set) during the route. Set this option to true to force Camel to send the original JMS message that was received. | false | boolean |
includeSentJMSMessageID (producer (advanced)) | Only applicable when sending to JMS destination using InOnly (eg fire and forget). Enabling this option will enrich the Camel Exchange with the actual JMSMessageID that was used by the JMS client when the message was sent to the JMS destination. | false | boolean |
replyToCacheLevelName (producer (advanced)) | Sets the cache level by name for the reply consumer when doing request/reply over JMS. This option only applies when using fixed reply queues (not temporary). Camel will by default use: CACHE_CONSUMER for exclusive or shared w/ replyToSelectorName. And CACHE_SESSION for shared without replyToSelectorName. Some JMS brokers such as IBM WebSphere may require to set the replyToCacheLevelName=CACHE_NONE to work. Note: If using temporary queues then CACHE_NONE is not allowed, and you must use a higher value such as CACHE_CONSUMER or CACHE_SESSION. | String | |
replyToDestinationSelectorName (producer (advanced)) | Sets the JMS Selector using the fixed name to be used so you can filter out your own replies from the others when using a shared queue (that is, if you are not using a temporary reply queue). | String | |
streamMessageTypeEnabled (producer (advanced)) | Sets whether the StreamMessage type is enabled or not. Message payloads of streaming kind such as files, InputStream, etc. will either be sent as BytesMessage or StreamMessage. This option controls which kind will be used. By default BytesMessage is used, which forces the entire message payload to be read into memory. By enabling this option the message payload is read into memory in chunks, and each chunk is then written to the StreamMessage until there is no more data. | false | boolean |
allowAutoWiredConnectionFactory (advanced) | Whether to auto-discover ConnectionFactory from the registry, if no connection factory has been configured. If only one instance of ConnectionFactory is found then it will be used. This is enabled by default. | true | boolean |
allowAutoWiredDestinationResolver (advanced) | Whether to auto-discover DestinationResolver from the registry, if no destination resolver has been configured. If only one instance of DestinationResolver is found then it will be used. This is enabled by default. | true | boolean |
allowSerializedHeaders (advanced) | Controls whether or not to include serialized headers. Applies only when transferExchange is true. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level. | false | boolean |
artemisStreamingEnabled (advanced) | Whether optimizing for Apache Artemis streaming mode. This can reduce memory overhead when using Artemis with JMS StreamMessage types. This option must only be enabled if Apache Artemis is being used. | false | boolean |
asyncStartListener (advanced) | Whether to startup the JmsConsumer message listener asynchronously, when starting a route. For example if a JmsConsumer cannot get a connection to a remote JMS broker, then it may block while retrying and/or failover. This will cause Camel to block while starting routes. By setting this option to true, you will let routes startup, while the JmsConsumer connects to the JMS broker using a dedicated thread in asynchronous mode. If this option is used, then beware that if the connection could not be established, then an exception is logged at WARN level, and the consumer will not be able to receive messages; You can then restart the route to retry. | false | boolean |
asyncStopListener (advanced) | Whether to stop the JmsConsumer message listener asynchronously, when stopping a route. | false | boolean |
autowiredEnabled (advanced) | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | boolean |
configuration (advanced) | To use a shared JMS configuration. | JmsConfiguration | |
destinationResolver (advanced) | A pluggable org.springframework.jms.support.destination.DestinationResolver that allows you to use your own resolver (for example, to lookup the real destination in a JNDI registry). | DestinationResolver | |
errorHandler (advanced) | Specifies a org.springframework.util.ErrorHandler to be invoked in case of any uncaught exceptions thrown while processing a Message. By default these exceptions will be logged at the WARN level, if no errorHandler has been configured. You can configure logging level and whether stack traces should be logged using errorHandlerLoggingLevel and errorHandlerLogStackTrace options. This makes it much easier to configure, than having to code a custom errorHandler. | ErrorHandler | |
exceptionListener (advanced) | Specifies the JMS Exception Listener that is to be notified of any underlying JMS exceptions. | ExceptionListener | |
idleConsumerLimit (advanced) | Specify the limit for the number of consumers that are allowed to be idle at any given time. | 1 | int |
idleTaskExecutionLimit (advanced) | Specifies the limit for idle executions of a receive task, not having received any message within its execution. If this limit is reached, the task will shut down and leave receiving to other executing tasks (in the case of dynamic scheduling; see the maxConcurrentConsumers setting). There is additional doc available from Spring. | 1 | int |
includeAllJMSXProperties (advanced) | Whether to include all JMSXxxx properties when mapping from JMS to Camel Message. Setting this to true will include properties such as JMSXAppID, and JMSXUserID etc. Note: If you are using a custom headerFilterStrategy then this option does not apply. | false | boolean |
jmsKeyFormatStrategy (advanced) | Pluggable strategy for encoding and decoding JMS keys so they can be compliant with the JMS specification. Camel provides two implementations out of the box: default and passthrough. The default strategy will safely marshal dots and hyphens (. and -). The passthrough strategy leaves the key as is. Can be used for JMS brokers which do not care whether JMS header keys contain illegal characters. You can provide your own implementation of the org.apache.camel.component.jms.JmsKeyFormatStrategy and refer to it using the # notation. | JmsKeyFormatStrategy | |
mapJmsMessage (advanced) | Specifies whether Camel should auto map the received JMS message to a suited payload type, such as javax.jms.TextMessage to a String etc. | true | boolean |
maxMessagesPerTask (advanced) | The number of messages per task. -1 is unlimited. If you use a range for concurrent consumers (eg min max), then this option can be used to set a value to eg 100 to control how fast the consumers will shrink when less work is required. | -1 | int |
messageConverter (advanced) | To use a custom Spring org.springframework.jms.support.converter.MessageConverter so you can be in control how to map to/from a javax.jms.Message. | MessageConverter | |
messageCreatedStrategy (advanced) | To use the given MessageCreatedStrategy which are invoked when Camel creates new instances of javax.jms.Message objects when Camel is sending a JMS message. | MessageCreatedStrategy | |
messageIdEnabled (advanced) | When sending, specifies whether message IDs should be added. This is just a hint to the JMS broker. If the JMS provider accepts this hint, these messages must have the message ID set to null; if the provider ignores the hint, the message ID must be set to its normal unique value. | true | boolean |
messageListenerContainerFactory (advanced) | Registry ID of the MessageListenerContainerFactory used to determine what org.springframework.jms.listener.AbstractMessageListenerContainer to use to consume messages. Setting this will automatically set consumerType to Custom. | MessageListenerContainerFactory | |
messageTimestampEnabled (advanced) | Specifies whether timestamps should be enabled by default on sending messages. This is just a hint to the JMS broker. If the JMS provider accepts this hint, these messages must have the timestamp set to zero; if the provider ignores the hint, the timestamp must be set to its normal value. | true | boolean |
pubSubNoLocal (advanced) | Specifies whether to inhibit the delivery of messages published by its own connection. | false | boolean |
queueBrowseStrategy (advanced) | To use a custom QueueBrowseStrategy when browsing queues. | QueueBrowseStrategy | |
receiveTimeout (advanced) | The timeout for receiving messages (in milliseconds). | 1000 | long |
recoveryInterval (advanced) | Specifies the interval between recovery attempts, i.e. when a connection is being refreshed, in milliseconds. The default is 5000 ms, that is, 5 seconds. | 5000 | long |
requestTimeoutCheckerInterval (advanced) | Configures how often Camel should check for timed out Exchanges when doing request/reply over JMS. By default Camel checks once per second. But if you must react faster when a timeout occurs, then you can lower this interval, to check more frequently. The timeout is determined by the option requestTimeout. | 1000 | long |
synchronous (advanced) | Sets whether synchronous processing should be strictly used. | false | boolean |
transferException (advanced) | If enabled and you are using Request Reply messaging (InOut) and an Exchange failed on the consumer side, then the caused Exception will be sent back in the response as a javax.jms.ObjectMessage. If the client is Camel, the returned Exception is rethrown. This allows you to use Camel JMS as a bridge in your routing - for example, using persistent queues to enable robust routing. Notice that if you also have transferExchange enabled, this option takes precedence. The caught exception is required to be serializable. The original Exception on the consumer side can be wrapped in an outer exception such as org.apache.camel.RuntimeCamelException when returned to the producer. Use this with caution as the data is using Java Object serialization and requires the receiver to be able to deserialize the data at Class level, which forces a strong coupling between the producer and consumer! | false | boolean |
transferExchange (advanced) | You can transfer the exchange over the wire instead of just the body and headers. The following fields are transferred: In body, Out body, Fault body, In headers, Out headers, Fault headers, exchange properties, exchange exception. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level. You must enable this option on both the producer and consumer side, so Camel knows the payload is an Exchange and not a regular payload. Use this with caution as the data is using Java Object serialization and requires the receiver to be able to deserialize the data at Class level, which forces a strong coupling between the producers and consumers, which must use compatible Camel versions! | false | boolean |
useMessageIDAsCorrelationID (advanced) | Specifies whether JMSMessageID should always be used as JMSCorrelationID for InOut messages. | false | boolean |
waitForProvisionCorrelationToBeUpdatedCounter (advanced) | Number of times to wait for provisional correlation id to be updated to the actual correlation id when doing request/reply over JMS and when the option useMessageIDAsCorrelationID is enabled. | 50 | int |
waitForProvisionCorrelationToBeUpdatedThreadSleepingTime (advanced) | Interval in millis to sleep each time while waiting for provisional correlation id to be updated. | 100 | long |
headerFilterStrategy (filter) | To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter header to and from Camel message. | HeaderFilterStrategy | |
errorHandlerLoggingLevel (logging) | Allows you to configure the default errorHandler logging level for logging uncaught exceptions. | WARN | LoggingLevel |
errorHandlerLogStackTrace (logging) | Allows you to control whether stack traces are logged by the default errorHandler. | true | boolean |
password (security) | Password to use with the ConnectionFactory. You can also configure username/password directly on the ConnectionFactory. | String | |
username (security) | Username to use with the ConnectionFactory. You can also configure username/password directly on the ConnectionFactory. | String | |
transacted (transaction) | Specifies whether to use transacted mode. | false | boolean |
transactedInOut (transaction) | Specifies whether InOut operations (request reply) default to using transacted mode. If this flag is set to true, then Spring JmsTemplate will have sessionTransacted set to true, and the acknowledgeMode as transacted on the JmsTemplate used for InOut operations. Note from Spring JMS: within a JTA transaction, the parameters passed to the createQueue and createTopic methods are not taken into account. Depending on the Java EE transaction context, the container makes its own decisions on these values. Analogously, these parameters are not taken into account within a locally managed transaction either, since Spring JMS operates on an existing JMS Session in this case. Setting this flag to true will use a short local JMS transaction when running outside of a managed transaction, and a synchronized local JMS transaction in case of a managed transaction (other than an XA transaction) being present. This has the effect of a local JMS transaction being managed alongside the main transaction (which might be a native JDBC transaction), with the JMS transaction committing right after the main transaction. | false | boolean |
lazyCreateTransactionManager (transaction (advanced)) | If true, Camel will create a JmsTransactionManager, if there is no transactionManager injected when option transacted=true. | true | boolean |
transactionManager (transaction (advanced)) | The Spring transaction manager to use. | PlatformTransactionManager | |
transactionName (transaction (advanced)) | The name of the transaction to use. | String | |
transactionTimeout (transaction (advanced)) | The timeout value of the transaction (in seconds), if using transacted mode. | -1 | int |
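Several of the producer options above (requestTimeout, replyTo, replyToType) control request/reply over AMQP. As a minimal sketch, the route below performs request/reply when the incoming exchange uses the InOut pattern (for example, when triggered with ProducerTemplate.requestBody); the queue name and timeout value are examples only:

```java
import org.apache.camel.builder.RouteBuilder;

public class AmqpRequestReplyExample extends RouteBuilder {
    @Override
    public void configure() {
        // For an InOut exchange, Camel sends the message to the queue and
        // waits up to requestTimeout milliseconds for the reply message.
        from("direct:quote")
            .to("amqp:queue:quoteRequests?requestTimeout=10000");
    }
}
```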
1.4. Endpoint Options
The AMQP endpoint is configured using URI syntax:
amqp:destinationType:destinationName
with the following path and query parameters:
1.4.1. Path Parameters (2 parameters)
Name | Description | Default | Type |
---|---|---|---|
destinationType (common) | The kind of destination to use. | queue | String |
destinationName (common) | Required Name of the queue or topic to use as destination. | String |
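To illustrate how the path parameters map onto a concrete endpoint, the sketch below sends a message to a topic with a ProducerTemplate; the destination name priceUpdates is a placeholder:

```java
import org.apache.camel.CamelContext;
import org.apache.camel.ProducerTemplate;

public class AmqpSendExample {

    public static void send(CamelContext camelContext) {
        // destinationType = topic, destinationName = priceUpdates (example name)
        ProducerTemplate template = camelContext.createProducerTemplate();
        template.sendBody("amqp:topic:priceUpdates", "Sample price update");
    }
}
```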
1.4.2. Query Parameters (96 parameters)
Name | Description | Default | Type |
---|---|---|---|
clientId (common) | Sets the JMS client ID to use. Note that this value, if specified, must be unique and can only be used by a single JMS connection instance. It is typically only required for durable topic subscriptions. If using Apache ActiveMQ you may prefer to use Virtual Topics instead. | String | |
connectionFactory (common) | The connection factory to be used. A connection factory must be configured either on the component or endpoint. | ConnectionFactory | |
disableReplyTo (common) | Specifies whether Camel ignores the JMSReplyTo header in messages. If true, Camel does not send a reply back to the destination specified in the JMSReplyTo header. You can use this option if you want Camel to consume from a route and you do not want Camel to automatically send back a reply message because another component in your code handles the reply message. You can also use this option if you want to use Camel as a proxy between different message brokers and you want to route message from one system to another. | false | boolean |
durableSubscriptionName (common) | The durable subscriber name for specifying durable topic subscriptions. The clientId option must be configured as well. | String | |
jmsMessageType (common) | Allows you to force the use of a specific javax.jms.Message implementation for sending JMS messages. Possible values are: Bytes, Map, Object, Stream, Text. By default, Camel would determine which JMS message type to use from the In body type. This option allows you to specify it. | JmsMessageType | |
replyTo (common) | Provides an explicit ReplyTo destination (overrides any incoming value of Message.getJMSReplyTo() in consumer). | String | |
testConnectionOnStartup (common) | Specifies whether to test the connection on startup. This ensures that when Camel starts, all the JMS consumers have a valid connection to the JMS broker. If a connection cannot be granted, then Camel throws an exception on startup. This ensures that Camel is not started with failed connections. The JMS producers are tested as well. | false | boolean |
acknowledgementModeName (consumer) | The JMS acknowledgement name, which is one of: SESSION_TRANSACTED, CLIENT_ACKNOWLEDGE, AUTO_ACKNOWLEDGE, DUPS_OK_ACKNOWLEDGE. | AUTO_ACKNOWLEDGE | String |
artemisConsumerPriority (consumer) | Consumer priorities allow you to ensure that high priority consumers receive messages while they are active. Normally, active consumers connected to a queue receive messages from it in a round-robin fashion. When consumer priorities are in use, messages are delivered round-robin if multiple active consumers exist with the same high priority. Messages will only go to lower priority consumers when the high priority consumers do not have credit available to consume the message, or those high priority consumers have declined to accept the message (for instance because it does not meet the criteria of any selectors associated with the consumer). | int | |
asyncConsumer (consumer) | Whether the JmsConsumer processes the Exchange asynchronously. If enabled then the JmsConsumer may pickup the next message from the JMS queue, while the previous message is being processed asynchronously (by the Asynchronous Routing Engine). This means that messages may be processed not 100% strictly in order. If disabled (as default) then the Exchange is fully processed before the JmsConsumer will pickup the next message from the JMS queue. Note if transacted has been enabled, then asyncConsumer=true does not run asynchronously, as transaction must be executed synchronously (Camel 3.0 may support async transactions). | false | boolean |
autoStartup (consumer) | Specifies whether the consumer container should auto-startup. | true | boolean |
cacheLevel (consumer) | Sets the cache level by ID for the underlying JMS resources. See cacheLevelName option for more details. | int | |
cacheLevelName (consumer) | Sets the cache level by name for the underlying JMS resources. Possible values are: CACHE_AUTO, CACHE_CONNECTION, CACHE_CONSUMER, CACHE_NONE, and CACHE_SESSION. The default setting is CACHE_AUTO. See the Spring documentation and Transactions Cache Levels for more information. | CACHE_AUTO | String |
concurrentConsumers (consumer) | Specifies the default number of concurrent consumers when consuming from JMS (not for request/reply over JMS). See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. When doing request/reply over JMS then the option replyToConcurrentConsumers is used to control number of concurrent consumers on the reply message listener. | 1 | int |
maxConcurrentConsumers (consumer) | Specifies the maximum number of concurrent consumers when consuming from JMS (not for request/reply over JMS). See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. When doing request/reply over JMS then the option replyToMaxConcurrentConsumers is used to control number of concurrent consumers on the reply message listener. | int | |
replyToDeliveryPersistent (consumer) | Specifies whether to use persistent delivery by default for replies. | true | boolean |
selector (consumer) | Sets the JMS selector to use. | String | |
subscriptionDurable (consumer) | Set whether to make the subscription durable. The durable subscription name to be used can be specified through the subscriptionName property. Default is false. Set this to true to register a durable subscription, typically in combination with a subscriptionName value (unless your message listener class name is good enough as subscription name). Only makes sense when listening to a topic (pub-sub domain), therefore this method switches the pubSubDomain flag as well. | false | boolean |
subscriptionName (consumer) | Set the name of a subscription to create. To be applied in case of a topic (pub-sub domain) with a shared or durable subscription. The subscription name needs to be unique within this client’s JMS client id. Default is the class name of the specified message listener. Note: Only 1 concurrent consumer (which is the default of this message listener container) is allowed for each subscription, except for a shared subscription (which requires JMS 2.0). | String | |
subscriptionShared (consumer) | Set whether to make the subscription shared. The shared subscription name to be used can be specified through the subscriptionName property. Default is false. Set this to true to register a shared subscription, typically in combination with a subscriptionName value (unless your message listener class name is good enough as subscription name). Note that shared subscriptions may also be durable, so this flag can (and often will) be combined with subscriptionDurable as well. Only makes sense when listening to a topic (pub-sub domain), therefore this method switches the pubSubDomain flag as well. Requires a JMS 2.0 compatible message broker. | false | boolean |
acceptMessagesWhileStopping (consumer (advanced)) | Specifies whether the consumer accepts messages while it is stopping. You may consider enabling this option if you start and stop JMS routes at runtime while there are still messages enqueued on the queue. If this option is false and you stop the JMS route, then messages may be rejected, and the JMS broker would have to attempt redeliveries, which yet again may be rejected, and eventually the message may be moved to a dead letter queue on the JMS broker. To avoid this, it is recommended to enable this option. | false | boolean |
allowReplyManagerQuickStop (consumer (advanced)) | Whether the DefaultMessageListenerContainer used in the reply managers for request-reply messaging allow the DefaultMessageListenerContainer.runningAllowed flag to quick stop in case JmsConfiguration#isAcceptMessagesWhileStopping is enabled, and org.apache.camel.CamelContext is currently being stopped. This quick stop ability is enabled by default in the regular JMS consumers but to enable for reply managers you must enable this flag. | false | boolean |
consumerType (consumer (advanced)) | The consumer type to use, which can be one of: Simple, Default, or Custom. The consumer type determines which Spring JMS listener to use. Default will use org.springframework.jms.listener.DefaultMessageListenerContainer, Simple will use org.springframework.jms.listener.SimpleMessageListenerContainer. When Custom is specified, the MessageListenerContainerFactory defined by the messageListenerContainerFactory option will determine what org.springframework.jms.listener.AbstractMessageListenerContainer to use. | Default | ConsumerType |
defaultTaskExecutorType (consumer (advanced)) | Specifies what default TaskExecutor type to use in the DefaultMessageListenerContainer, for both consumer endpoints and the ReplyTo consumer of producer endpoints. Possible values: SimpleAsync (uses Spring’s SimpleAsyncTaskExecutor) or ThreadPool (uses Spring’s ThreadPoolTaskExecutor with optimal values - cached threadpool-like). If not set, it defaults to the previous behaviour, which uses a cached thread pool for consumer endpoints and SimpleAsync for reply consumers. The use of ThreadPool is recommended to reduce thread trash in elastic configurations with dynamically increasing and decreasing concurrent consumers. | DefaultTaskExecutorType | |
eagerLoadingOfProperties (consumer (advanced)) | Enables eager loading of JMS properties and payload as soon as a message is loaded which generally is inefficient as the JMS properties may not be required but sometimes can catch early any issues with the underlying JMS provider and the use of JMS properties. See also the option eagerPoisonBody. | false | boolean |
eagerPoisonBody (consumer (advanced)) | If eagerLoadingOfProperties is enabled and the JMS message payload (JMS body or JMS properties) is poison (cannot be read/mapped), then set this text as the message body instead so the message can be processed (the cause of the poison is already stored as an exception on the Exchange). This can be turned off by setting eagerPoisonBody=false. See also the option eagerLoadingOfProperties. | Poison JMS message due to ${exception.message} | String |
exceptionHandler (consumer (advanced)) | To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. | ExceptionHandler | |
exchangePattern (consumer (advanced)) | Sets the exchange pattern when the consumer creates an exchange. | ExchangePattern | |
exposeListenerSession (consumer (advanced)) | Specifies whether the listener session should be exposed when consuming messages. | false | boolean |
replyToConsumerType (consumer (advanced)) | The consumer type of the reply consumer (when doing request/reply), which can be one of: Simple, Default, or Custom. The consumer type determines which Spring JMS listener to use. Default will use org.springframework.jms.listener.DefaultMessageListenerContainer, Simple will use org.springframework.jms.listener.SimpleMessageListenerContainer. When Custom is specified, the MessageListenerContainerFactory defined by the messageListenerContainerFactory option will determine what org.springframework.jms.listener.AbstractMessageListenerContainer to use. | Default | ConsumerType |
replyToSameDestinationAllowed (consumer (advanced)) | Whether a JMS consumer is allowed to send a reply message to the same destination that the consumer is using to consume from. This prevents an endless loop by consuming and sending back the same message to itself. | false | boolean |
taskExecutor (consumer (advanced)) | Allows you to specify a custom task executor for consuming messages. | TaskExecutor | |
deliveryDelay (producer) | Sets delivery delay to use for send calls for JMS. This option requires JMS 2.0 compliant broker. | -1 | long |
deliveryMode (producer) | Specifies the delivery mode to be used. Possible values are those defined by javax.jms.DeliveryMode. NON_PERSISTENT = 1 and PERSISTENT = 2. | Integer | |
deliveryPersistent (producer) | Specifies whether persistent delivery is used by default. | true | boolean |
explicitQosEnabled (producer) | Set if the deliveryMode, priority or timeToLive qualities of service should be used when sending messages. This option is based on Spring’s JmsTemplate. The deliveryMode, priority and timeToLive options are applied to the current endpoint. This contrasts with the preserveMessageQos option, which operates at message granularity, reading QoS properties exclusively from the Camel In message headers. | false | Boolean |
formatDateHeadersToIso8601 (producer) | Sets whether JMS date properties should be formatted according to the ISO 8601 standard. | false | boolean |
preserveMessageQos (producer) | Set to true, if you want to send message using the QoS settings specified on the message, instead of the QoS settings on the JMS endpoint. The following three headers are considered JMSPriority, JMSDeliveryMode, and JMSExpiration. You can provide all or only some of them. If not provided, Camel will fall back to use the values from the endpoint instead. So, when using this option, the headers override the values from the endpoint. The explicitQosEnabled option, by contrast, will only use options set on the endpoint, and not values from the message header. | false | boolean |
priority (producer) | Values greater than 1 specify the message priority when sending (where 1 is the lowest priority and 9 is the highest). The explicitQosEnabled option must also be enabled in order for this option to have any effect. | 4 | int |
replyToConcurrentConsumers (producer) | Specifies the default number of concurrent consumers when doing request/reply over JMS. See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. | 1 | int |
replyToMaxConcurrentConsumers (producer) | Specifies the maximum number of concurrent consumers when using request/reply over JMS. See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. | int | |
replyToOnTimeoutMaxConcurrentConsumers (producer) | Specifies the maximum number of concurrent consumers for continue routing when timeout occurred when using request/reply over JMS. | 1 | int |
replyToOverride (producer) | Provides an explicit ReplyTo destination in the JMS message, which overrides the setting of replyTo. It is useful if you want to forward the message to a remote Queue and receive the reply message from the ReplyTo destination. | String | |
replyToType (producer) | Allows for explicitly specifying which kind of strategy to use for replyTo queues when doing request/reply over JMS. Possible values are: Temporary, Shared, or Exclusive. By default Camel will use temporary queues. However if replyTo has been configured, then Shared is used by default. This option allows you to use exclusive queues instead of shared ones. See Camel JMS documentation for more details, and especially the notes about the implications if running in a clustered environment, and the fact that Shared reply queues have lower performance than the alternatives Temporary and Exclusive. | ReplyToType | |
requestTimeout (producer) | The timeout for waiting for a reply when using the InOut Exchange Pattern (in milliseconds). The default is 20 seconds. You can include the header CamelJmsRequestTimeout to override this endpoint configured timeout value, and thus have per message individual timeout values. See also the requestTimeoutCheckerInterval option. | 20000 | long |
timeToLive (producer) | When sending messages, specifies the time-to-live of the message (in milliseconds). | -1 | long |
allowAdditionalHeaders (producer (advanced)) | This option is used to allow additional headers which may have values that are invalid according to JMS specification. For example some message systems such as WMQ do this with header names using prefix JMS_IBM_MQMD_ containing values with byte array or other invalid types. You can specify multiple header names separated by comma, and use as suffix for wildcard matching. | String | |
allowNullBody (producer (advanced)) | Whether to allow sending messages with no body. If this option is false and the message body is null, then a JMSException is thrown. | true | boolean |
alwaysCopyMessage (producer (advanced)) | If true, Camel will always make a JMS message copy of the message when it is passed to the producer for sending. Copying the message is needed in some situations, such as when a replyToDestinationSelectorName is set (incidentally, Camel will set the alwaysCopyMessage option to true, if a replyToDestinationSelectorName is set). | false | boolean |
correlationProperty (producer (advanced)) | When using the InOut exchange pattern, use this JMS property instead of the JMSCorrelationID JMS property to correlate messages. If set, messages will be correlated solely on the value of this property; the JMSCorrelationID property will be ignored and not set by Camel. | String | |
disableTimeToLive (producer (advanced)) | Use this option to force disabling time to live. For example, when you do request/reply over JMS, Camel will by default use the requestTimeout value as time to live on the message being sent. The problem is that the sender and receiver systems have to have their clocks synchronized, which is not always easy to achieve. So you can use disableTimeToLive=true to not set a time to live value on the sent message. Then the message will not expire on the receiver system. See below in section About time to live for more details. | false | boolean |
forceSendOriginalMessage (producer (advanced)) | When using mapJmsMessage=false Camel will create a new JMS message to send to a new JMS destination if you touch the headers (get or set) during the route. Set this option to true to force Camel to send the original JMS message that was received. | false | boolean |
includeSentJMSMessageID (producer (advanced)) | Only applicable when sending to JMS destination using InOnly (eg fire and forget). Enabling this option will enrich the Camel Exchange with the actual JMSMessageID that was used by the JMS client when the message was sent to the JMS destination. | false | boolean |
lazyStartProducer (producer (advanced)) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
replyToCacheLevelName (producer (advanced)) | Sets the cache level by name for the reply consumer when doing request/reply over JMS. This option only applies when using fixed reply queues (not temporary). Camel will by default use: CACHE_CONSUMER for exclusive or shared w/ replyToSelectorName. And CACHE_SESSION for shared without replyToSelectorName. Some JMS brokers such as IBM WebSphere may require to set the replyToCacheLevelName=CACHE_NONE to work. Note: If using temporary queues then CACHE_NONE is not allowed, and you must use a higher value such as CACHE_CONSUMER or CACHE_SESSION. | String | |
replyToDestinationSelectorName (producer (advanced)) | Sets the JMS Selector using the fixed name to be used so you can filter out your own replies from the others when using a shared queue (that is, if you are not using a temporary reply queue). | String | |
streamMessageTypeEnabled (producer (advanced)) | Sets whether the StreamMessage type is enabled or not. Message payloads of streaming kind such as files, InputStream, etc. will either be sent as BytesMessage or StreamMessage. This option controls which kind will be used. By default BytesMessage is used, which forces the entire message payload to be read into memory. By enabling this option the message payload is read into memory in chunks, and each chunk is then written to the StreamMessage until there is no more data. | false | boolean |
allowSerializedHeaders (advanced) | Controls whether or not to include serialized headers. Applies only when transferExchange is true. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level. | false | boolean |
artemisStreamingEnabled (advanced) | Whether optimizing for Apache Artemis streaming mode. This can reduce memory overhead when using Artemis with JMS StreamMessage types. This option must only be enabled if Apache Artemis is being used. | false | boolean |
asyncStartListener (advanced) | Whether to startup the JmsConsumer message listener asynchronously, when starting a route. For example if a JmsConsumer cannot get a connection to a remote JMS broker, then it may block while retrying and/or failover. This will cause Camel to block while starting routes. By setting this option to true, you will let routes startup, while the JmsConsumer connects to the JMS broker using a dedicated thread in asynchronous mode. If this option is used, then beware that if the connection could not be established, then an exception is logged at WARN level, and the consumer will not be able to receive messages; You can then restart the route to retry. | false | boolean |
asyncStopListener (advanced) | Whether to stop the JmsConsumer message listener asynchronously, when stopping a route. | false | boolean |
destinationResolver (advanced) | A pluggable org.springframework.jms.support.destination.DestinationResolver that allows you to use your own resolver (for example, to lookup the real destination in a JNDI registry). | DestinationResolver | |
errorHandler (advanced) | Specifies a org.springframework.util.ErrorHandler to be invoked in case of any uncaught exceptions thrown while processing a Message. By default these exceptions will be logged at the WARN level, if no errorHandler has been configured. You can configure logging level and whether stack traces should be logged using errorHandlerLoggingLevel and errorHandlerLogStackTrace options. This makes it much easier to configure, than having to code a custom errorHandler. | ErrorHandler | |
exceptionListener (advanced) | Specifies the JMS Exception Listener that is to be notified of any underlying JMS exceptions. | ExceptionListener | |
headerFilterStrategy (advanced) | To use a custom HeaderFilterStrategy to filter header to and from Camel message. | HeaderFilterStrategy | |
idleConsumerLimit (advanced) | Specify the limit for the number of consumers that are allowed to be idle at any given time. | 1 | int |
idleTaskExecutionLimit (advanced) | Specifies the limit for idle executions of a receive task, not having received any message within its execution. If this limit is reached, the task will shut down and leave receiving to other executing tasks (in the case of dynamic scheduling; see the maxConcurrentConsumers setting). There is additional doc available from Spring. | 1 | int |
includeAllJMSXProperties (advanced) | Whether to include all JMSXxxx properties when mapping from JMS to Camel Message. Setting this to true will include properties such as JMSXAppID, and JMSXUserID etc. Note: If you are using a custom headerFilterStrategy then this option does not apply. | false | boolean |
jmsKeyFormatStrategy (advanced) | Pluggable strategy for encoding and decoding JMS keys so they can be compliant with the JMS specification. Camel provides two implementations out of the box: default and passthrough. The default strategy will safely marshal dots and hyphens (. and -). The passthrough strategy leaves the key as is. Can be used for JMS brokers which do not care whether JMS header keys contain illegal characters. You can provide your own implementation of the org.apache.camel.component.jms.JmsKeyFormatStrategy and refer to it using the # notation. Enum values: default, passthrough | JmsKeyFormatStrategy | |
mapJmsMessage (advanced) | Specifies whether Camel should auto map the received JMS message to a suited payload type, such as javax.jms.TextMessage to a String etc. | true | boolean |
maxMessagesPerTask (advanced) | The number of messages per task. -1 is unlimited. If you use a range for concurrent consumers (eg min max), then this option can be used to set a value to eg 100 to control how fast the consumers will shrink when less work is required. | -1 | int |
messageConverter (advanced) | To use a custom Spring org.springframework.jms.support.converter.MessageConverter so you can be in control how to map to/from a javax.jms.Message. | MessageConverter | |
messageCreatedStrategy (advanced) | To use the given MessageCreatedStrategy which are invoked when Camel creates new instances of javax.jms.Message objects when Camel is sending a JMS message. | MessageCreatedStrategy | |
messageIdEnabled (advanced) | When sending, specifies whether message IDs should be added. This is just a hint to the JMS broker. If the JMS provider accepts this hint, these messages must have the message ID set to null; if the provider ignores the hint, the message ID must be set to its normal unique value. | true | boolean |
messageListenerContainerFactory (advanced) | Registry ID of the MessageListenerContainerFactory used to determine what org.springframework.jms.listener.AbstractMessageListenerContainer to use to consume messages. Setting this will automatically set consumerType to Custom. | MessageListenerContainerFactory | |
messageTimestampEnabled (advanced) | Specifies whether timestamps should be enabled by default on sending messages. This is just a hint to the JMS broker. If the JMS provider accepts this hint, these messages must have the timestamp set to zero; if the provider ignores the hint, the timestamp must be set to its normal value. | true | boolean |
pubSubNoLocal (advanced) | Specifies whether to inhibit the delivery of messages published by its own connection. | false | boolean |
receiveTimeout (advanced) | The timeout for receiving messages (in milliseconds). | 1000 | long |
recoveryInterval (advanced) | Specifies the interval between recovery attempts, i.e. when a connection is being refreshed, in milliseconds. The default is 5000 ms, that is, 5 seconds. | 5000 | long |
requestTimeoutCheckerInterval (advanced) | Configures how often Camel should check for timed out Exchanges when doing request/reply over JMS. By default Camel checks once per second. But if you must react faster when a timeout occurs, then you can lower this interval, to check more frequently. The timeout is determined by the option requestTimeout. | 1000 | long |
synchronous (advanced) | Sets whether synchronous processing should be strictly used. | false | boolean |
transferException (advanced) | If enabled and you are using Request Reply messaging (InOut) and an Exchange failed on the consumer side, then the caused Exception will be sent back in the response as a javax.jms.ObjectMessage. If the client is Camel, the returned Exception is rethrown. This allows you to use Camel JMS as a bridge in your routing - for example, using persistent queues to enable robust routing. Notice that if you also have transferExchange enabled, this option takes precedence. The caught exception is required to be serializable. The original Exception on the consumer side can be wrapped in an outer exception such as org.apache.camel.RuntimeCamelException when returned to the producer. Use this with caution as the data is using Java Object serialization and requires the receiver to be able to deserialize the data at Class level, which forces a strong coupling between the producer and consumer. | false | boolean |
transferExchange (advanced) | You can transfer the exchange over the wire instead of just the body and headers. The following fields are transferred: In body, Out body, Fault body, In headers, Out headers, Fault headers, exchange properties, exchange exception. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level. You must enable this option on both the producer and consumer side, so Camel knows the payload is an Exchange and not a regular payload. Use this with caution as the data is using Java Object serialization and requires the receiver to be able to deserialize the data at Class level, which forces a strong coupling between producers and consumers, which must use compatible Camel versions. | false | boolean |
useMessageIDAsCorrelationID (advanced) | Specifies whether JMSMessageID should always be used as JMSCorrelationID for InOut messages. | false | boolean |
waitForProvisionCorrelationToBeUpdatedCounter (advanced) | Number of times to wait for provisional correlation id to be updated to the actual correlation id when doing request/reply over JMS and when the option useMessageIDAsCorrelationID is enabled. | 50 | int |
waitForProvisionCorrelationToBeUpdatedThreadSleepingTime (advanced) | Interval in millis to sleep each time while waiting for provisional correlation id to be updated. | 100 | long |
errorHandlerLoggingLevel (logging) | Allows to configure the default errorHandler logging level for logging uncaught exceptions. Enum values: TRACE, DEBUG, INFO, WARN, ERROR, OFF | WARN | LoggingLevel |
errorHandlerLogStackTrace (logging) | Allows to control whether stacktraces should be logged or not, by the default errorHandler. | true | boolean |
password (security) | Password to use with the ConnectionFactory. You can also configure username/password directly on the ConnectionFactory. | String | |
username (security) | Username to use with the ConnectionFactory. You can also configure username/password directly on the ConnectionFactory. | String | |
transacted (transaction) | Specifies whether to use transacted mode. | false | boolean |
transactedInOut (transaction) | Specifies whether InOut operations (request reply) default to using transacted mode. If this flag is set to true, then Spring JmsTemplate will have sessionTransacted set to true, and the acknowledgeMode as transacted on the JmsTemplate used for InOut operations. Note from Spring JMS: that within a JTA transaction, the parameters passed to createQueue, createTopic methods are not taken into account. Depending on the Java EE transaction context, the container makes its own decisions on these values. Analogously, these parameters are not taken into account within a locally managed transaction either, since Spring JMS operates on an existing JMS Session in this case. Setting this flag to true will use a short local JMS transaction when running outside of a managed transaction, and a synchronized local JMS transaction in case of a managed transaction (other than an XA transaction) being present. This has the effect of a local JMS transaction being managed alongside the main transaction (which might be a native JDBC transaction), with the JMS transaction committing right after the main transaction. | false | boolean |
lazyCreateTransactionManager (transaction (advanced)) | If true, Camel will create a JmsTransactionManager, if there is no transactionManager injected when option transacted=true. | true | boolean |
transactionManager (transaction (advanced)) | The Spring transaction manager to use. | PlatformTransactionManager | |
transactionName (transaction (advanced)) | The name of the transaction to use. | String | |
transactionTimeout (transaction (advanced)) | The timeout value of the transaction (in seconds), if using transacted mode. | -1 | int |
1.5. Usage
Because the AMQP component inherits from the JMS component, its usage is almost identical to that of the JMS component:
Using AMQP component
// Consuming from an AMQP queue
from("amqp:queue:incoming")
    .to(...);

// Sending a message to an AMQP topic
from(...)
    .to("amqp:topic:notify");
1.6. Configuring AMQP component
Creating AMQP 1.0 component
AMQPComponent amqp = AMQPComponent.amqpComponent("amqp://localhost:5672");

AMQPComponent authorizedAmqp = AMQPComponent.amqpComponent("amqp://localhost:5672", "user", "password");
You can also add an instance of org.apache.camel.component.amqp.AMQPConnectionDetails
to the registry in order to automatically configure the AMQP component. For example, for Spring Boot you just have to define a bean:
AMQP connection details auto-configuration
@Bean
AMQPConnectionDetails amqpConnection() {
    return new AMQPConnectionDetails("amqp://localhost:5672");
}

@Bean
AMQPConnectionDetails securedAmqpConnection() {
    return new AMQPConnectionDetails("amqp://localhost:5672", "username", "password");
}
Likewise, you can also use CDI producer methods when using Camel-CDI
AMQP connection details auto-configuration for CDI
@Produces
AMQPConnectionDetails amqpConnection() {
    return new AMQPConnectionDetails("amqp://localhost:5672");
}
You can also rely on Camel properties to read the AMQP connection details. The factory method AMQPConnectionDetails.discoverAMQP()
attempts to read Camel properties in a Kubernetes-like convention, as demonstrated in the snippet below:
AMQP connection details auto-configuration
export AMQP_SERVICE_HOST="mybroker.com"
export AMQP_SERVICE_PORT="6666"
export AMQP_SERVICE_USERNAME="username"
export AMQP_SERVICE_PASSWORD="password"
...
@Bean
AMQPConnectionDetails amqpConnection() {
    return AMQPConnectionDetails.discoverAMQP();
}
Enabling AMQP specific options
If you, for example, need to enable amqp.traceFrames
you can do that by appending the option to your URI, as in the following example:
AMQPComponent amqp = AMQPComponent.amqpComponent("amqp://localhost:5672?amqp.traceFrames=true");
For more details, refer to the Qpid JMS client configuration documentation.
1.7. Using topics
To use topics with camel-amqp, you need to configure the component to use topic:// as the topic prefix, as shown below:
<bean id="amqp" class="org.apache.camel.component.amqp.AmqpComponent">
  <property name="connectionFactory">
    <bean class="org.apache.qpid.jms.JmsConnectionFactory" factory-method="createFromURL">
      <property name="remoteURI" value="amqp://localhost:5672" />
      <!-- only necessary when connecting to ActiveMQ over AMQP 1.0 -->
      <property name="topicPrefix" value="topic://" />
    </bean>
  </property>
</bean>
Keep in mind that both AMQPComponent#amqpComponent()
methods and AMQPConnectionDetails
pre-configure the component with the topic prefix, so you don’t have to configure it explicitly.
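If you build the connection factory yourself in Java rather than relying on those helpers, the same topic prefix can be set programmatically. The following is a minimal sketch, assuming Qpid JMS is on the classpath and a broker is reachable at localhost:5672 (the URL, and whether the prefix is needed at all, depend on your broker):

import org.apache.camel.component.amqp.AMQPComponent;
import org.apache.qpid.jms.JmsConnectionFactory;

// Qpid JMS connection factory pointing at the broker
JmsConnectionFactory connectionFactory = new JmsConnectionFactory("amqp://localhost:5672");
// Only necessary for brokers, such as ActiveMQ over AMQP 1.0, that expect the topic:// prefix
connectionFactory.setTopicPrefix("topic://");

// Plug the connection factory into the AMQP component
AMQPComponent amqp = new AMQPComponent();
amqp.setConnectionFactory(connectionFactory);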
1.8. Spring Boot Auto-Configuration
When using amqp with Spring Boot make sure to use the following Maven dependency to have support for auto configuration:
<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-amqp-starter</artifactId> </dependency>
The component supports 101 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
camel.component.amqp.accept-messages-while-stopping | Specifies whether the consumer accepts messages while it is stopping. You may consider enabling this option, if you start and stop JMS routes at runtime, while there are still messages enqueued on the queue. If this option is false, and you stop the JMS route, then messages may be rejected, and the JMS broker would have to attempt redeliveries, which yet again may be rejected, and eventually the message may be moved to a dead letter queue on the JMS broker. To avoid this, it is recommended to enable this option. | false | Boolean |
camel.component.amqp.acknowledgement-mode-name | The JMS acknowledgement name, which is one of: SESSION_TRANSACTED, CLIENT_ACKNOWLEDGE, AUTO_ACKNOWLEDGE, DUPS_OK_ACKNOWLEDGE. | AUTO_ACKNOWLEDGE | String |
camel.component.amqp.allow-additional-headers | This option is used to allow additional headers which may have values that are invalid according to the JMS specification. For example, some message systems, such as WMQ, do this with header names using the prefix JMS_IBM_MQMD_ containing values with byte array or other invalid types. You can specify multiple header names separated by comma, and use * as suffix for wildcard matching. | String | |
camel.component.amqp.allow-auto-wired-connection-factory | Whether to auto-discover ConnectionFactory from the registry, if no connection factory has been configured. If only one instance of ConnectionFactory is found then it will be used. This is enabled by default. | true | Boolean |
camel.component.amqp.allow-auto-wired-destination-resolver | Whether to auto-discover DestinationResolver from the registry, if no destination resolver has been configured. If only one instance of DestinationResolver is found then it will be used. This is enabled by default. | true | Boolean |
camel.component.amqp.allow-null-body | Whether to allow sending messages with no body. If this option is false and the message body is null, then a JMSException is thrown. | true | Boolean |
camel.component.amqp.allow-reply-manager-quick-stop | Whether the DefaultMessageListenerContainer used in the reply managers for request-reply messaging allow the DefaultMessageListenerContainer.runningAllowed flag to quick stop in case JmsConfiguration#isAcceptMessagesWhileStopping is enabled, and org.apache.camel.CamelContext is currently being stopped. This quick stop ability is enabled by default in the regular JMS consumers but to enable for reply managers you must enable this flag. | false | Boolean |
camel.component.amqp.allow-serialized-headers | Controls whether or not to include serialized headers. Applies only when transferExchange is true. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level. | false | Boolean |
camel.component.amqp.always-copy-message | If true, Camel will always make a JMS message copy of the message when it is passed to the producer for sending. Copying the message is needed in some situations, such as when a replyToDestinationSelectorName is set (incidentally, Camel will set the alwaysCopyMessage option to true, if a replyToDestinationSelectorName is set). | false | Boolean |
camel.component.amqp.artemis-consumer-priority | Consumer priorities allow you to ensure that high priority consumers receive messages while they are active. Normally, active consumers connected to a queue receive messages from it in a round-robin fashion. When consumer priorities are in use, messages are delivered round-robin if multiple active consumers exist with the same high priority. Messages will only go to lower priority consumers when the high priority consumers do not have credit available to consume the message, or those high priority consumers have declined to accept the message (for instance because it does not meet the criteria of any selectors associated with the consumer). | Integer | |
camel.component.amqp.artemis-streaming-enabled | Whether optimizing for Apache Artemis streaming mode. This can reduce memory overhead when using Artemis with JMS StreamMessage types. This option must only be enabled if Apache Artemis is being used. | false | Boolean |
camel.component.amqp.async-consumer | Whether the JmsConsumer processes the Exchange asynchronously. If enabled then the JmsConsumer may pickup the next message from the JMS queue, while the previous message is being processed asynchronously (by the Asynchronous Routing Engine). This means that messages may be processed not 100% strictly in order. If disabled (as default) then the Exchange is fully processed before the JmsConsumer will pickup the next message from the JMS queue. Note if transacted has been enabled, then asyncConsumer=true does not run asynchronously, as transaction must be executed synchronously (Camel 3.0 may support async transactions). | false | Boolean |
camel.component.amqp.async-start-listener | Whether to startup the JmsConsumer message listener asynchronously, when starting a route. For example if a JmsConsumer cannot get a connection to a remote JMS broker, then it may block while retrying and/or failover. This will cause Camel to block while starting routes. By setting this option to true, you will let routes startup, while the JmsConsumer connects to the JMS broker using a dedicated thread in asynchronous mode. If this option is used, then beware that if the connection could not be established, then an exception is logged at WARN level, and the consumer will not be able to receive messages; You can then restart the route to retry. | false | Boolean |
camel.component.amqp.async-stop-listener | Whether to stop the JmsConsumer message listener asynchronously, when stopping a route. | false | Boolean |
camel.component.amqp.auto-startup | Specifies whether the consumer container should auto-startup. | true | Boolean |
camel.component.amqp.autowired-enabled | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | Boolean |
camel.component.amqp.cache-level | Sets the cache level by ID for the underlying JMS resources. See cacheLevelName option for more details. | Integer | |
camel.component.amqp.cache-level-name | Sets the cache level by name for the underlying JMS resources. Possible values are: CACHE_AUTO, CACHE_CONNECTION, CACHE_CONSUMER, CACHE_NONE, and CACHE_SESSION. The default setting is CACHE_AUTO. See the Spring documentation and Transactions Cache Levels for more information. | CACHE_AUTO | String |
camel.component.amqp.client-id | Sets the JMS client ID to use. Note that this value, if specified, must be unique and can only be used by a single JMS connection instance. It is typically only required for durable topic subscriptions. If using Apache ActiveMQ you may prefer to use Virtual Topics instead. | String | |
camel.component.amqp.concurrent-consumers | Specifies the default number of concurrent consumers when consuming from JMS (not for request/reply over JMS). See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. When doing request/reply over JMS then the option replyToConcurrentConsumers is used to control number of concurrent consumers on the reply message listener. | 1 | Integer |
camel.component.amqp.configuration | To use a shared JMS configuration. The option is a org.apache.camel.component.jms.JmsConfiguration type. | JmsConfiguration | |
camel.component.amqp.connection-factory | The connection factory to be use. A connection factory must be configured either on the component or endpoint. The option is a javax.jms.ConnectionFactory type. | ConnectionFactory | |
camel.component.amqp.consumer-type | The consumer type to use, which can be one of: Simple, Default, or Custom. The consumer type determines which Spring JMS listener to use. Default will use org.springframework.jms.listener.DefaultMessageListenerContainer, Simple will use org.springframework.jms.listener.SimpleMessageListenerContainer. When Custom is specified, the MessageListenerContainerFactory defined by the messageListenerContainerFactory option will determine what org.springframework.jms.listener.AbstractMessageListenerContainer to use. | ConsumerType | |
camel.component.amqp.correlation-property | When using the InOut exchange pattern, use this JMS property instead of the JMSCorrelationID JMS property to correlate messages. If set, messages will be correlated solely on the value of this property; the JMSCorrelationID property will be ignored and not set by Camel. | String | |
camel.component.amqp.default-task-executor-type | Specifies what default TaskExecutor type to use in the DefaultMessageListenerContainer, for both consumer endpoints and the ReplyTo consumer of producer endpoints. Possible values: SimpleAsync (uses Spring’s SimpleAsyncTaskExecutor) or ThreadPool (uses Spring’s ThreadPoolTaskExecutor with optimal values - cached threadpool-like). If not set, it defaults to the previous behaviour, which uses a cached thread pool for consumer endpoints and SimpleAsync for reply consumers. The use of ThreadPool is recommended to reduce thread trash in elastic configurations with dynamically increasing and decreasing concurrent consumers. | DefaultTaskExecutorType | |
camel.component.amqp.delivery-delay | Sets delivery delay to use for send calls for JMS. This option requires JMS 2.0 compliant broker. | -1 | Long |
camel.component.amqp.delivery-mode | Specifies the delivery mode to be used. Possible values are those defined by javax.jms.DeliveryMode. NON_PERSISTENT = 1 and PERSISTENT = 2. | Integer | |
camel.component.amqp.delivery-persistent | Specifies whether persistent delivery is used by default. | true | Boolean |
camel.component.amqp.destination-resolver | A pluggable org.springframework.jms.support.destination.DestinationResolver that allows you to use your own resolver (for example, to lookup the real destination in a JNDI registry). The option is a org.springframework.jms.support.destination.DestinationResolver type. | DestinationResolver | |
camel.component.amqp.disable-reply-to | Specifies whether Camel ignores the JMSReplyTo header in messages. If true, Camel does not send a reply back to the destination specified in the JMSReplyTo header. You can use this option if you want Camel to consume from a route and you do not want Camel to automatically send back a reply message because another component in your code handles the reply message. You can also use this option if you want to use Camel as a proxy between different message brokers and you want to route message from one system to another. | false | Boolean |
camel.component.amqp.disable-time-to-live | Use this option to force disabling time to live. For example, when you do request/reply over JMS, Camel will by default use the requestTimeout value as the time to live on the message being sent. The problem is that the sender and receiver systems have to have their clocks synchronized, which is not always easy to achieve. So you can use disableTimeToLive=true to not set a time to live value on the sent message. Then the message will not expire on the receiver system. See below in section About time to live for more details. | false | Boolean |
camel.component.amqp.durable-subscription-name | The durable subscriber name for specifying durable topic subscriptions. The clientId option must be configured as well. | String | |
camel.component.amqp.eager-loading-of-properties | Enables eager loading of JMS properties and payload as soon as a message is loaded which generally is inefficient as the JMS properties may not be required but sometimes can catch early any issues with the underlying JMS provider and the use of JMS properties. See also the option eagerPoisonBody. | false | Boolean |
camel.component.amqp.eager-poison-body | If eagerLoadingOfProperties is enabled and the JMS message payload (JMS body or JMS properties) is poison (cannot be read/mapped), then set this text as the message body instead so the message can be processed (the cause of the poison is already stored as an exception on the Exchange). This can be turned off by setting eagerPoisonBody=false. See also the option eagerLoadingOfProperties. | Poison JMS message due to ${exception.message} | String |
camel.component.amqp.enabled | Whether to enable auto configuration of the amqp component. This is enabled by default. | Boolean | |
camel.component.amqp.error-handler | Specifies a org.springframework.util.ErrorHandler to be invoked in case of any uncaught exceptions thrown while processing a Message. By default these exceptions will be logged at the WARN level, if no errorHandler has been configured. You can configure logging level and whether stack traces should be logged using errorHandlerLoggingLevel and errorHandlerLogStackTrace options. This makes it much easier to configure, than having to code a custom errorHandler. The option is a org.springframework.util.ErrorHandler type. | ErrorHandler | |
camel.component.amqp.error-handler-log-stack-trace | Allows to control whether stacktraces should be logged or not, by the default errorHandler. | true | Boolean |
camel.component.amqp.error-handler-logging-level | Allows to configure the default errorHandler logging level for logging uncaught exceptions. | LoggingLevel | |
camel.component.amqp.exception-listener | Specifies the JMS Exception Listener that is to be notified of any underlying JMS exceptions. The option is a javax.jms.ExceptionListener type. | ExceptionListener | |
camel.component.amqp.explicit-qos-enabled | Set if the deliveryMode, priority or timeToLive qualities of service should be used when sending messages. This option is based on Spring’s JmsTemplate. The deliveryMode, priority and timeToLive options are applied to the current endpoint. This contrasts with the preserveMessageQos option, which operates at message granularity, reading QoS properties exclusively from the Camel In message headers. | false | Boolean |
camel.component.amqp.expose-listener-session | Specifies whether the listener session should be exposed when consuming messages. | false | Boolean |
camel.component.amqp.force-send-original-message | When using mapJmsMessage=false Camel will create a new JMS message to send to a new JMS destination if you touch the headers (get or set) during the route. Set this option to true to force Camel to send the original JMS message that was received. | false | Boolean |
camel.component.amqp.format-date-headers-to-iso8601 | Sets whether JMS date properties should be formatted according to the ISO 8601 standard. | false | Boolean |
camel.component.amqp.header-filter-strategy | To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter header to and from Camel message. The option is a org.apache.camel.spi.HeaderFilterStrategy type. | HeaderFilterStrategy | |
camel.component.amqp.idle-consumer-limit | Specify the limit for the number of consumers that are allowed to be idle at any given time. | 1 | Integer |
camel.component.amqp.idle-task-execution-limit | Specifies the limit for idle executions of a receive task, not having received any message within its execution. If this limit is reached, the task will shut down and leave receiving to other executing tasks (in the case of dynamic scheduling; see the maxConcurrentConsumers setting). There is additional doc available from Spring. | 1 | Integer |
camel.component.amqp.include-all-jmsx-properties | Whether to include all JMSXxxx properties when mapping from JMS to Camel Message. Setting this to true will include properties such as JMSXAppID, and JMSXUserID etc. Note: If you are using a custom headerFilterStrategy then this option does not apply. | false | Boolean |
camel.component.amqp.include-amqp-annotations | Whether to include AMQP annotations when mapping from AMQP to Camel Message. Setting this to true maps AMQP message annotations that contain a JMS_AMQP_MA_ prefix to message headers. Due to limitations in Apache Qpid JMS API, currently delivery annotations are ignored. | false | Boolean |
camel.component.amqp.include-sent-jms-message-id | Only applicable when sending to JMS destination using InOnly (eg fire and forget). Enabling this option will enrich the Camel Exchange with the actual JMSMessageID that was used by the JMS client when the message was sent to the JMS destination. | false | Boolean |
camel.component.amqp.jms-key-format-strategy | Pluggable strategy for encoding and decoding JMS keys so they can be compliant with the JMS specification. Camel provides two implementations out of the box: default and passthrough. The default strategy will safely marshal dots and hyphens (. and -). The passthrough strategy leaves the key as is. Can be used for JMS brokers which do not care whether JMS header keys contain illegal characters. You can provide your own implementation of the org.apache.camel.component.jms.JmsKeyFormatStrategy and refer to it using the # notation. | JmsKeyFormatStrategy | |
camel.component.amqp.jms-message-type | Allows you to force the use of a specific javax.jms.Message implementation for sending JMS messages. Possible values are: Bytes, Map, Object, Stream, Text. By default, Camel would determine which JMS message type to use from the In body type. This option allows you to specify it. | JmsMessageType | |
camel.component.amqp.lazy-create-transaction-manager | If true, Camel will create a JmsTransactionManager, if there is no transactionManager injected when option transacted=true. | true | Boolean |
camel.component.amqp.lazy-start-producer | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | Boolean |
camel.component.amqp.map-jms-message | Specifies whether Camel should auto map the received JMS message to a suited payload type, such as javax.jms.TextMessage to a String etc. | true | Boolean |
camel.component.amqp.max-concurrent-consumers | Specifies the maximum number of concurrent consumers when consuming from JMS (not for request/reply over JMS). See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. When doing request/reply over JMS then the option replyToMaxConcurrentConsumers is used to control number of concurrent consumers on the reply message listener. | Integer | |
camel.component.amqp.max-messages-per-task | The number of messages per task. -1 is unlimited. If you use a range for concurrent consumers (eg min max), then this option can be used to set a value to eg 100 to control how fast the consumers will shrink when less work is required. | -1 | Integer |
camel.component.amqp.message-converter | To use a custom Spring org.springframework.jms.support.converter.MessageConverter so you can be in control how to map to/from a javax.jms.Message. The option is a org.springframework.jms.support.converter.MessageConverter type. | MessageConverter | |
camel.component.amqp.message-created-strategy | To use the given MessageCreatedStrategy which are invoked when Camel creates new instances of javax.jms.Message objects when Camel is sending a JMS message. The option is a org.apache.camel.component.jms.MessageCreatedStrategy type. | MessageCreatedStrategy | |
camel.component.amqp.message-id-enabled | When sending, specifies whether message IDs should be added. This is just a hint to the JMS broker. If the JMS provider accepts this hint, these messages must have the message ID set to null; if the provider ignores the hint, the message ID must be set to its normal unique value. | true | Boolean |
camel.component.amqp.message-listener-container-factory | Registry ID of the MessageListenerContainerFactory used to determine what org.springframework.jms.listener.AbstractMessageListenerContainer to use to consume messages. Setting this will automatically set consumerType to Custom. The option is a org.apache.camel.component.jms.MessageListenerContainerFactory type. | MessageListenerContainerFactory | |
camel.component.amqp.message-timestamp-enabled | Specifies whether timestamps should be enabled by default on sending messages. This is just a hint to the JMS broker. If the JMS provider accepts this hint, these messages must have the timestamp set to zero; if the provider ignores the hint, the timestamp must be set to its normal value. | true | Boolean |
camel.component.amqp.password | Password to use with the ConnectionFactory. You can also configure username/password directly on the ConnectionFactory. | String | |
camel.component.amqp.preserve-message-qos | Set to true, if you want to send messages using the QoS settings specified on the message, instead of the QoS settings on the JMS endpoint. The following three headers are considered: JMSPriority, JMSDeliveryMode, and JMSExpiration. You can provide all or only some of them. If not provided, Camel will fall back to use the values from the endpoint instead. So, when using this option, the headers override the values from the endpoint. The explicitQosEnabled option, by contrast, will only use options set on the endpoint, and not values from the message header. | false | Boolean |
camel.component.amqp.priority | Values greater than 1 specify the message priority when sending (where 1 is the lowest priority and 9 is the highest). The explicitQosEnabled option must also be enabled in order for this option to have any effect. | 4 | Integer |
camel.component.amqp.pub-sub-no-local | Specifies whether to inhibit the delivery of messages published by its own connection. | false | Boolean |
camel.component.amqp.queue-browse-strategy | To use a custom QueueBrowseStrategy when browsing queues. The option is a org.apache.camel.component.jms.QueueBrowseStrategy type. | QueueBrowseStrategy | |
camel.component.amqp.receive-timeout | The timeout for receiving messages (in milliseconds). The option is a long type. | 1000 | Long |
camel.component.amqp.recovery-interval | Specifies the interval between recovery attempts, i.e. when a connection is being refreshed, in milliseconds. The default is 5000 ms, that is, 5 seconds. The option is a long type. | 5000 | Long |
camel.component.amqp.reply-to | Provides an explicit ReplyTo destination (overrides any incoming value of Message.getJMSReplyTo() in consumer). | String | |
camel.component.amqp.reply-to-cache-level-name | Sets the cache level by name for the reply consumer when doing request/reply over JMS. This option only applies when using fixed reply queues (not temporary). Camel will by default use: CACHE_CONSUMER for exclusive or shared w/ replyToSelectorName. And CACHE_SESSION for shared without replyToSelectorName. Some JMS brokers such as IBM WebSphere may require to set the replyToCacheLevelName=CACHE_NONE to work. Note: If using temporary queues then CACHE_NONE is not allowed, and you must use a higher value such as CACHE_CONSUMER or CACHE_SESSION. | String | |
camel.component.amqp.reply-to-concurrent-consumers | Specifies the default number of concurrent consumers when doing request/reply over JMS. See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. | 1 | Integer |
camel.component.amqp.reply-to-consumer-type | The consumer type of the reply consumer (when doing request/reply), which can be one of: Simple, Default, or Custom. The consumer type determines which Spring JMS listener to use. Default will use org.springframework.jms.listener.DefaultMessageListenerContainer, Simple will use org.springframework.jms.listener.SimpleMessageListenerContainer. When Custom is specified, the MessageListenerContainerFactory defined by the messageListenerContainerFactory option will determine what org.springframework.jms.listener.AbstractMessageListenerContainer to use. | ConsumerType | |
camel.component.amqp.reply-to-delivery-persistent | Specifies whether to use persistent delivery by default for replies. | true | Boolean |
camel.component.amqp.reply-to-destination-selector-name | Sets the JMS Selector using the fixed name to be used so you can filter out your own replies from the others when using a shared queue (that is, if you are not using a temporary reply queue). | String | |
camel.component.amqp.reply-to-max-concurrent-consumers | Specifies the maximum number of concurrent consumers when using request/reply over JMS. See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. | Integer | |
camel.component.amqp.reply-to-on-timeout-max-concurrent-consumers | Specifies the maximum number of concurrent consumers for continue routing when timeout occurred when using request/reply over JMS. | 1 | Integer |
camel.component.amqp.reply-to-override | Provides an explicit ReplyTo destination in the JMS message, which overrides the setting of replyTo. It is useful if you want to forward the message to a remote Queue and receive the reply message from the ReplyTo destination. | String | |
camel.component.amqp.reply-to-same-destination-allowed | Whether a JMS consumer is allowed to send a reply message to the same destination that the consumer is using to consume from. This prevents an endless loop by consuming and sending back the same message to itself. | false | Boolean |
camel.component.amqp.reply-to-type | Allows for explicitly specifying which kind of strategy to use for replyTo queues when doing request/reply over JMS. Possible values are: Temporary, Shared, or Exclusive. By default Camel will use temporary queues. However if replyTo has been configured, then Shared is used by default. This option allows you to use exclusive queues instead of shared ones. See Camel JMS documentation for more details, and especially the notes about the implications if running in a clustered environment, and the fact that Shared reply queues has lower performance than its alternatives Temporary and Exclusive. | ReplyToType | |
camel.component.amqp.request-timeout | The timeout for waiting for a reply when using the InOut Exchange Pattern (in milliseconds). The default is 20 seconds. You can include the header CamelJmsRequestTimeout to override this endpoint configured timeout value, and thus have per message individual timeout values. See also the requestTimeoutCheckerInterval option. The option is a long type. | 20000 | Long |
camel.component.amqp.request-timeout-checker-interval | Configures how often Camel should check for timed out Exchanges when doing request/reply over JMS. By default Camel checks once per second. But if you must react faster when a timeout occurs, then you can lower this interval, to check more frequently. The timeout is determined by the option requestTimeout. The option is a long type. | 1000 | Long |
camel.component.amqp.selector | Sets the JMS selector to use. | String | |
camel.component.amqp.stream-message-type-enabled | Sets whether StreamMessage type is enabled or not. Message payloads of streaming kind such as files, InputStream, etc will either be sent as BytesMessage or StreamMessage. This option controls which kind will be used. By default BytesMessage is used which enforces the entire message payload to be read into memory. By enabling this option the message payload is read into memory in chunks and each chunk is then written to the StreamMessage until there is no more data. | false | Boolean |
camel.component.amqp.subscription-durable | Set whether to make the subscription durable. The durable subscription name to be used can be specified through the subscriptionName property. Default is false. Set this to true to register a durable subscription, typically in combination with a subscriptionName value (unless your message listener class name is good enough as subscription name). Only makes sense when listening to a topic (pub-sub domain), therefore this method switches the pubSubDomain flag as well. | false | Boolean |
camel.component.amqp.subscription-name | Set the name of a subscription to create. To be applied in case of a topic (pub-sub domain) with a shared or durable subscription. The subscription name needs to be unique within this client’s JMS client id. Default is the class name of the specified message listener. Note: Only 1 concurrent consumer (which is the default of this message listener container) is allowed for each subscription, except for a shared subscription (which requires JMS 2.0). | String | |
camel.component.amqp.subscription-shared | Set whether to make the subscription shared. The shared subscription name to be used can be specified through the subscriptionName property. Default is false. Set this to true to register a shared subscription, typically in combination with a subscriptionName value (unless your message listener class name is good enough as subscription name). Note that shared subscriptions may also be durable, so this flag can (and often will) be combined with subscriptionDurable as well. Only makes sense when listening to a topic (pub-sub domain), therefore this method switches the pubSubDomain flag as well. Requires a JMS 2.0 compatible message broker. | false | Boolean |
camel.component.amqp.synchronous | Sets whether synchronous processing should be strictly used. | false | Boolean |
camel.component.amqp.task-executor | Allows you to specify a custom task executor for consuming messages. The option is a org.springframework.core.task.TaskExecutor type. | TaskExecutor | |
camel.component.amqp.test-connection-on-startup | Specifies whether to test the connection on startup. This ensures that when Camel starts, all the JMS consumers have a valid connection to the JMS broker. If a connection cannot be granted then Camel throws an exception on startup. This ensures that Camel is not started with failed connections. The JMS producers are tested as well. | false | Boolean |
camel.component.amqp.time-to-live | When sending messages, specifies the time-to-live of the message (in milliseconds). | -1 | Long |
camel.component.amqp.transacted | Specifies whether to use transacted mode. | false | Boolean |
camel.component.amqp.transacted-in-out | Specifies whether InOut operations (request reply) default to using transacted mode. If this flag is set to true, then Spring JmsTemplate will have sessionTransacted set to true, and the acknowledgeMode as transacted on the JmsTemplate used for InOut operations. Note from Spring JMS: that within a JTA transaction, the parameters passed to createQueue, createTopic methods are not taken into account. Depending on the Java EE transaction context, the container makes its own decisions on these values. Analogously, these parameters are not taken into account within a locally managed transaction either, since Spring JMS operates on an existing JMS Session in this case. Setting this flag to true will use a short local JMS transaction when running outside of a managed transaction, and a synchronized local JMS transaction in case of a managed transaction (other than an XA transaction) being present. This has the effect of a local JMS transaction being managed alongside the main transaction (which might be a native JDBC transaction), with the JMS transaction committing right after the main transaction. | false | Boolean |
camel.component.amqp.transaction-manager | The Spring transaction manager to use. The option is a org.springframework.transaction.PlatformTransactionManager type. | PlatformTransactionManager | |
camel.component.amqp.transaction-name | The name of the transaction to use. | String | |
camel.component.amqp.transaction-timeout | The timeout value of the transaction (in seconds), if using transacted mode. | -1 | Integer |
camel.component.amqp.transfer-exception | If enabled and you are using Request Reply messaging (InOut) and an Exchange failed on the consumer side, then the caused Exception will be sent back in the response as a javax.jms.ObjectMessage. If the client is Camel, the returned Exception is rethrown. This allows you to use Camel JMS as a bridge in your routing - for example, using persistent queues to enable robust routing. Notice that if you also have transferExchange enabled, this option takes precedence. The caught exception is required to be serializable. The original Exception on the consumer side can be wrapped in an outer exception such as org.apache.camel.RuntimeCamelException when returned to the producer. Use this with caution as the data is using Java Object serialization and requires the receiver to be able to deserialize the data at Class level, which forces a strong coupling between the producer and consumer. | false | Boolean |
camel.component.amqp.transfer-exchange | You can transfer the exchange over the wire instead of just the body and headers. The following fields are transferred: In body, Out body, Fault body, In headers, Out headers, Fault headers, exchange properties, exchange exception. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level. You must enable this option on both the producer and consumer side, so Camel knows the payload is an Exchange and not a regular payload. Use this with caution as the data is using Java Object serialization and requires the receiver to be able to deserialize the data at Class level, which forces a strong coupling between producers and consumers, which must use compatible Camel versions. | false | Boolean |
camel.component.amqp.use-message-id-as-correlation-id | Specifies whether JMSMessageID should always be used as JMSCorrelationID for InOut messages. | false | Boolean |
camel.component.amqp.username | Username to use with the ConnectionFactory. You can also configure username/password directly on the ConnectionFactory. | String | |
camel.component.amqp.wait-for-provision-correlation-to-be-updated-counter | Number of times to wait for provisional correlation id to be updated to the actual correlation id when doing request/reply over JMS and when the option useMessageIDAsCorrelationID is enabled. | 50 | Integer |
camel.component.amqp.wait-for-provision-correlation-to-be-updated-thread-sleeping-time | Interval in millis to sleep each time while waiting for provisional correlation id to be updated. The option is a long type. | 100 | Long |
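As a brief illustration, a few of the options above could be set in application.properties as follows. This is a minimal sketch: the values are placeholders, and the connection factory itself is typically supplied as a bean (for example via AMQPConnectionDetails, as shown in Section 1.6):

camel.component.amqp.username = myuser
camel.component.amqp.password = mysecret
camel.component.amqp.transacted = false
camel.component.amqp.test-connection-on-startup = true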
Chapter 2. AWS CloudWatch
Only producer is supported
The AWS2 CloudWatch component allows messages to be sent to Amazon CloudWatch metrics. The implementation of the Amazon API is provided by the AWS SDK.
Prerequisites
You must have a valid Amazon Web Services developer account, and be signed up to use Amazon CloudWatch. More information is available at Amazon CloudWatch.
2.1. URI Format
aws2-cw://namespace[?options]
The metrics will be created if they don't already exist. You can append query options to the URI in the following format, ?options=value&option2=value&…
2.2. Configuring Options
Camel components are configured on two separate levels:
- component level
- endpoint level
2.2.1. Configuring Component Options
The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth.
Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all.
Configuring components can be done with the Component DSL, in a configuration file (application.properties|yaml), or directly with Java code.
2.2.2. Configuring Endpoint Options
Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both.
Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints.
A good practice when configuring options is to use Property Placeholders, which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse.
The following two sections lists all the options, firstly for the component followed by the endpoint.
2.3. Component Options
The AWS CloudWatch component supports 18 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
amazonCwClient (producer) | Autowired To use the AmazonCloudWatch as the client. | CloudWatchClient | |
configuration (producer) | The component configuration. | Cw2Configuration | |
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
name (producer) | The metric name. | String | |
overrideEndpoint (producer) | Set the need for overriding the endpoint. This option needs to be used in combination with the uriEndpointOverride option. | false | boolean |
proxyHost (producer) | To define a proxy host when instantiating the CW client. | String | |
proxyPort (producer) | To define a proxy port when instantiating the CW client. | Integer | |
proxyProtocol (producer) | To define a proxy protocol when instantiating the CW client. Enum values: HTTP, HTTPS | HTTPS | Protocol |
region (producer) | The region in which the CW client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example ap-east-1). You'll need to use the name Region.EU_WEST_1.id(). | String | |
timestamp (producer) | The metric timestamp. | Instant | |
trustAllCertificates (producer) | If we want to trust all certificates in case of overriding the endpoint. | false | boolean |
unit (producer) | The metric unit. | String | |
uriEndpointOverride (producer) | Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option. | String | |
useDefaultCredentialsProvider (producer) | Set whether the CloudWatch client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in. | false | boolean |
value (producer) | The metric value. | Double | |
autowiredEnabled (advanced) | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | boolean |
accessKey (security) | Amazon AWS Access Key. | String | |
secretKey (security) | Amazon AWS Secret Key. | String |
2.4. Endpoint Options
The AWS CloudWatch endpoint is configured using URI syntax:
aws2-cw:namespace
with the following path and query parameters:
2.4.1. Path Parameters (1 parameters)
Name | Description | Default | Type |
---|---|---|---|
namespace (producer) | Required The metric namespace. | String |
2.4.2. Query Parameters (16 parameters)
Name | Description | Default | Type |
---|---|---|---|
amazonCwClient (producer) | Autowired To use the AmazonCloudWatch as the client. | CloudWatchClient | |
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
name (producer) | The metric name. | String | |
overrideEndpoint (producer) | Set the need for overriding the endpoint. This option needs to be used in combination with the uriEndpointOverride option. | false | boolean |
proxyHost (producer) | To define a proxy host when instantiating the CW client. | String | |
proxyPort (producer) | To define a proxy port when instantiating the CW client. | Integer | |
proxyProtocol (producer) | To define a proxy protocol when instantiating the CW client. Enum values: HTTP, HTTPS | HTTPS | Protocol |
region (producer) | The region in which the CW client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example ap-east-1). You'll need to use the name Region.EU_WEST_1.id(). | String | |
timestamp (producer) | The metric timestamp. | Instant | |
trustAllCertificates (producer) | If we want to trust all certificates in case of overriding the endpoint. | false | boolean |
unit (producer) | The metric unit. | String | |
uriEndpointOverride (producer) | Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option. | String | |
useDefaultCredentialsProvider (producer) | Set whether the CloudWatch client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in. | false | boolean |
value (producer) | The metric value. | Double | |
accessKey (security) | Amazon AWS Access Key. | String | |
secretKey (security) | Amazon AWS Secret Key. | String |
Required CW component options
You have to provide the amazonCwClient in the Registry or your accessKey and secretKey to access Amazon CloudWatch.
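For example, a minimal producer route that passes static credentials as URI options could look like the following sketch; the namespace, region and credential values are placeholders:
from("direct:start") .to("aws2-cw://myNamespace?accessKey=myAccessKey&secretKey=mySecretKey&region=eu-west-1");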
2.5. Usage
2.5.1. Static credentials vs Default Credential Provider
You can avoid the use of explicit static credentials by setting the useDefaultCredentialsProvider option to true (see the sketch after this list). In that case the AWS SDK looks for credentials in the following order:
- Java system properties - aws.accessKeyId and aws.secretKey
- Environment variables - AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
- Web Identity Token from AWS STS.
- The shared credentials and config files.
- Amazon ECS container credentials - loaded from the Amazon ECS if the environment variable AWS_CONTAINER_CREDENTIALS_RELATIVE_URI is set.
- Amazon EC2 Instance profile credentials.
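The following sketch shows the option in use on a CloudWatch producer endpoint; the namespace and region values are placeholders:
from("direct:start") .to("aws2-cw://myNamespace?useDefaultCredentialsProvider=true&region=eu-west-1");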
For more information, see the AWS credentials documentation.
2.5.2. Message headers evaluated by the CW producer
Header | Type | Description |
---|---|---|
| | The Amazon CW metric name. |
| | The Amazon CW metric value. |
| | The Amazon CW metric unit. |
| | The Amazon CW metric namespace. |
| | The Amazon CW metric timestamp. |
| | The Amazon CW metric dimension name. |
| | The Amazon CW metric dimension value. |
| | A map of dimension names and dimension values. |
2.5.3. Advanced CloudWatchClient configuration
If you need more control over the CloudWatchClient instance configuration, you can create your own instance and refer to it from the URI:
from("direct:start") .to("aws2-cw://namespace?amazonCwClient=#client");
The #client refers to a CloudWatchClient in the Registry.
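When running on Spring Boot, one possible way to make such a client available in the Registry is to declare it as a bean. The sketch below is illustrative only: the bean name, region and the use of the SDK's default credentials resolution are assumptions, and any other mechanism that binds a CloudWatchClient into the Registry works equally well.
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.cloudwatch.CloudWatchClient;

@Configuration
public class CwClientConfiguration {

    // Bound into the registry as "client" so that amazonCwClient=#client resolves to it;
    // credentials are resolved by the SDK's default provider chain in this sketch
    @Bean(name = "client")
    public CloudWatchClient cloudWatchClient() {
        return CloudWatchClient.builder()
                .region(Region.EU_WEST_1)
                .build();
    }
}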
2.6. Dependencies
Maven users will need to add the following dependency to their pom.xml.
pom.xml
<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-aws2-cw</artifactId> <version>${camel-version}</version> </dependency>
where ${camel-version} must be replaced by the actual version of Camel.
2.7. Examples
2.7.1. Producer Example
from("direct:start") .to("aws2-cw://http://camel.apache.org/aws-cw");
and sends a message with headers such as:
exchange.getIn().setHeader(Cw2Constants.METRIC_NAME, "ExchangesCompleted"); exchange.getIn().setHeader(Cw2Constants.METRIC_VALUE, "2.0"); exchange.getIn().setHeader(Cw2Constants.METRIC_UNIT, "Count");
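Put together, a complete send could be performed with a ProducerTemplate, as in the following sketch; the camelContext reference and the metric values are assumptions used for illustration:
ProducerTemplate template = camelContext.createProducerTemplate();
template.send("direct:start", exchange -> {
    // header names come from Cw2Constants, as in the snippet above
    exchange.getIn().setHeader(Cw2Constants.METRIC_NAME, "ExchangesCompleted");
    exchange.getIn().setHeader(Cw2Constants.METRIC_VALUE, "2.0");
    exchange.getIn().setHeader(Cw2Constants.METRIC_UNIT, "Count");
});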
2.8. Spring Boot Auto-Configuration
When using aws2-cw with Spring Boot make sure to use the following Maven dependency to have support for auto configuration:
<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-aws2-cw-starter</artifactId> </dependency>
The component supports 19 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
camel.component.aws2-cw.access-key | Amazon AWS Access Key. | String | |
camel.component.aws2-cw.amazon-cw-client | To use the AmazonCloudWatch as the client. The option is a software.amazon.awssdk.services.cloudwatch.CloudWatchClient type. | CloudWatchClient | |
camel.component.aws2-cw.autowired-enabled | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | Boolean |
camel.component.aws2-cw.configuration | The component configuration. The option is a org.apache.camel.component.aws2.cw.Cw2Configuration type. | Cw2Configuration | |
camel.component.aws2-cw.enabled | Whether to enable auto configuration of the aws2-cw component. This is enabled by default. | Boolean | |
camel.component.aws2-cw.lazy-start-producer | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | Boolean |
camel.component.aws2-cw.name | The metric name. | String | |
camel.component.aws2-cw.override-endpoint | Set the need for overriding the endpoint. This option needs to be used in combination with uriEndpointOverride option. | false | Boolean |
camel.component.aws2-cw.proxy-host | To define a proxy host when instantiating the CW client. | String | |
camel.component.aws2-cw.proxy-port | To define a proxy port when instantiating the CW client. | Integer | |
camel.component.aws2-cw.proxy-protocol | To define a proxy protocol when instantiating the CW client. | Protocol | |
camel.component.aws2-cw.region | The region in which the CW client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example ap-east-1). You’ll need to use the name Region.EU_WEST_1.id(). | String | |
camel.component.aws2-cw.secret-key | Amazon AWS Secret Key. | String | |
camel.component.aws2-cw.timestamp | The metric timestamp. The option is a java.time.Instant type. | Instant | |
camel.component.aws2-cw.trust-all-certificates | If we want to trust all certificates in case of overriding the endpoint. | false | Boolean |
camel.component.aws2-cw.unit | The metric unit. | String | |
camel.component.aws2-cw.uri-endpoint-override | Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option. | String | |
camel.component.aws2-cw.use-default-credentials-provider | Set whether the CW client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in. | false | Boolean |
camel.component.aws2-cw.value | The metric value. | Double |
Chapter 3. AWS DynamoDB
Only producer is supported
The AWS2 DynamoDB component supports storing and retrieving data from/to the Amazon DynamoDB service.
Prerequisites
You must have a valid Amazon Web Services developer account, and be signed up to use Amazon DynamoDB. More information is available at Amazon DynamoDB.
3.1. URI Format
aws2-ddb://tableName[?options]
You can append query options to the URI in the following format: ?option=value&option2=value&…
3.2. Configuring Options
Camel components are configured on two separate levels:
- component level
- endpoint level
3.2.1. Configuring Component Options
The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth.
Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all.
Configuring components can be done with the Component DSL, in a configuration file (application.properties|yaml), or directly with Java code.
3.2.2. Configuring Endpoint Options
Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both.
Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints.
A good practice when configuring options is to use Property Placeholders, which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse.
The following two sections lists all the options, firstly for the component followed by the endpoint.
3.3. Component Options
The AWS DynamoDB component supports 22 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
amazonDDBClient (producer) | Autowired To use the AmazonDynamoDB as the client. | DynamoDbClient | |
configuration (producer) | The component configuration. | Ddb2Configuration | |
consistentRead (producer) | Determines whether or not strong consistency should be enforced when data is read. | false | boolean |
enabledInitialDescribeTable (producer) | Set whether the initial Describe table operation in the DDB Endpoint must be done, or not. | true | boolean |
keyAttributeName (producer) | Attribute name when creating table. | String | |
keyAttributeType (producer) | Attribute type when creating table. | String | |
keyScalarType (producer) | The key scalar type, it can be S (String), N (Number) and B (Bytes). | String | |
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
operation (producer) | What operation to perform. Enum values: BatchGetItems, DeleteItem, DeleteTable, DescribeTable, GetItem, PutItem, Query, Scan, UpdateItem, UpdateTable | PutItem | Ddb2Operations |
overrideEndpoint (producer) | Set the need for overriding the endpoint. This option needs to be used in combination with uriEndpointOverride option. | false | boolean |
proxyHost (producer) | To define a proxy host when instantiating the DDB client. | String | |
proxyPort (producer) | To define a proxy port when instantiating the DDB client. | Integer | |
proxyProtocol (producer) | To define a proxy protocol when instantiating the DDB client. Enum values: HTTP, HTTPS | HTTPS | Protocol |
readCapacity (producer) | The provisioned throughput to reserve for reading resources from your table. | Long | |
region (producer) | The region in which DDB client needs to work. | String | |
trustAllCertificates (producer) | If we want to trust all certificates in case of overriding the endpoint. | false | boolean |
uriEndpointOverride (producer) | Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option. | String | |
useDefaultCredentialsProvider (producer) | Set whether the DDB client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in. | false | boolean |
writeCapacity (producer) | The provisioned throughput to reserve for writing resources to your table. | Long | |
autowiredEnabled (advanced) | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | boolean |
accessKey (security) | Amazon AWS Access Key. | String | |
secretKey (security) | Amazon AWS Secret Key. | String |
3.4. Endpoint Options
The AWS DynamoDB endpoint is configured using URI syntax:
aws2-ddb:tableName
with the following path and query parameters:
3.4.1. Path Parameters (1 parameters)
Name | Description | Default | Type |
---|---|---|---|
tableName (producer) | Required The name of the table currently worked with. | String |
3.4.2. Query Parameters (20 parameters)
Name | Description | Default | Type |
---|---|---|---|
amazonDDBClient (producer) | Autowired To use the AmazonDynamoDB as the client. | DynamoDbClient | |
consistentRead (producer) | Determines whether or not strong consistency should be enforced when data is read. | false | boolean |
enabledInitialDescribeTable (producer) | Set whether the initial Describe table operation in the DDB Endpoint must be done, or not. | true | boolean |
keyAttributeName (producer) | Attribute name when creating table. | String | |
keyAttributeType (producer) | Attribute type when creating table. | String | |
keyScalarType (producer) | The key scalar type, it can be S (String), N (Number) and B (Bytes). | String | |
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
operation (producer) | What operation to perform. Enum values: BatchGetItems, DeleteItem, DeleteTable, DescribeTable, GetItem, PutItem, Query, Scan, UpdateItem, UpdateTable | PutItem | Ddb2Operations |
overrideEndpoint (producer) | Set the need for overriding the endpoint. This option needs to be used in combination with uriEndpointOverride option. | false | boolean |
proxyHost (producer) | To define a proxy host when instantiating the DDB client. | String | |
proxyPort (producer) | To define a proxy port when instantiating the DDB client. | Integer | |
proxyProtocol (producer) | To define a proxy protocol when instantiating the DDB client. Enum values: HTTP, HTTPS | HTTPS | Protocol |
readCapacity (producer) | The provisioned throughput to reserve for reading resources from your table. | Long | |
region (producer) | The region in which DDB client needs to work. | String | |
trustAllCertificates (producer) | If we want to trust all certificates in case of overriding the endpoint. | false | boolean |
uriEndpointOverride (producer) | Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option. | String | |
useDefaultCredentialsProvider (producer) | Set whether the DDB client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in. | false | boolean |
writeCapacity (producer) | The provisioned throughput to reserve for writing resources to your table. | Long | |
accessKey (security) | Amazon AWS Access Key. | String | |
secretKey (security) | Amazon AWS Secret Key. | String |
Required DDB component options
You have to provide the amazonDDBClient in the Registry or your accessKey and secretKey to access Amazon DynamoDB.
3.5. Usage
3.5.1. Static credentials vs Default Credential Provider
You can avoid the use of explicit static credentials by setting the useDefaultCredentialsProvider option to true. In that case the AWS SDK looks for credentials in the following order:
- Java system properties - aws.accessKeyId and aws.secretKey
- Environment variables - AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
- Web Identity Token from AWS STS.
- The shared credentials and config files.
- Amazon ECS container credentials - loaded from the Amazon ECS if the environment variable AWS_CONTAINER_CREDENTIALS_RELATIVE_URI is set.
- Amazon EC2 Instance profile credentials.
For more information, see the AWS credentials documentation.
3.5.2. Message headers evaluated by the DDB producer
Header | Type | Description |
---|---|---|
| | A map of the table name and corresponding items to get by primary key. |
| | Table Name for this operation. |
| | The primary key that uniquely identifies each item in a table. |
| | Use this parameter if you want to get the attribute name-value pairs before or after they are modified (NONE, ALL_OLD, UPDATED_OLD, ALL_NEW, UPDATED_NEW). |
| | Designates an attribute for a conditional modification. |
| | If attribute names are not specified then all attributes will be returned. |
| | If set to true, then a consistent read is issued, otherwise eventually consistent is used. |
| | If set, will be used as Secondary Index for the Query operation. |
| | A map of the attributes for the item, and must include the primary key values that define the item. |
| | If set to true, Amazon DynamoDB returns a total number of items that match the query parameters, instead of a list of the matching items and their attributes. |
| | This header specifies the selection criteria for the query, and merges together the two old headers CamelAwsDdbHashKeyValue and CamelAwsDdbScanRangeKeyCondition. |
| | Primary key of the item from which to continue an earlier query. |
| | Value of the hash component of the composite primary key. |
| | The maximum number of items to return. |
| | A container for the attribute values and comparison operators to use for the query. |
| | Specifies forward or backward traversal of the index. |
| | Evaluates the scan results and returns only the desired values. |
| | Map of attribute name to the new value and action for the update. |
3.5.3. Message headers set during BatchGetItems operation
Header | Type | Description |
---|---|---|
| | Table names and the respective item attributes from the tables. |
| | Contains a map of tables and their respective keys that were not processed with the current response. |
3.5.4. Message headers set during DeleteItem operation
Header | Type | Description |
---|---|---|
| | The list of attributes returned by the operation. |
3.5.5. Message headers set during DeleteTable operation
Header | Type | Description |
---|---|---|
| | The value of the ProvisionedThroughput property for this table. |
| | Creation DateTime of this table. |
| | Item count for this table. |
| | The KeySchema that identifies the primary key for this table. From Camel 2.16.0 the type of this header is List<KeySchemaElement> and not KeySchema. |
| | The table name. |
| | The table size in bytes. |
| | The status of the table: CREATING, UPDATING, DELETING, ACTIVE. |
3.5.6. Message headers set during DescribeTable operation
Header | Type | Description |
---|---|---|
| ProvisionedThroughputDescription | The value of the ProvisionedThroughput property for this table. |
| | Creation DateTime of this table. |
| | Item count for this table. |
| KeySchema | The KeySchema that identifies the primary key for this table. |
| | The table name. |
| | The table size in bytes. |
| | The status of the table: CREATING, UPDATING, DELETING, ACTIVE. |
| | ReadCapacityUnits property of this table. |
| | WriteCapacityUnits property of this table. |
3.5.7. Message headers set during GetItem operation
Header | Type | Description |
---|---|---|
| | The list of attributes returned by the operation. |
3.5.8. Message headers set during PutItem operation
Header | Type | Description |
---|---|---|
| | The list of attributes returned by the operation. |
3.5.9. Message headers set during Query operation
Header | Type | Description |
---|---|---|
| | The list of attributes returned by the operation. |
| | Primary key of the item where the query operation stopped, inclusive of the previous result set. |
| | The number of Capacity Units of the provisioned throughput of the table consumed during the operation. |
| | Number of items in the response. |
3.5.10. Message headers set during Scan operation
Header | Type | Description |
---|---|---|
| | The list of attributes returned by the operation. |
| | Primary key of the item where the query operation stopped, inclusive of the previous result set. |
| | The number of Capacity Units of the provisioned throughput of the table consumed during the operation. |
| | Number of items in the response. |
| | Number of items in the complete scan before any filters are applied. |
3.5.11. Message headers set during UpdateItem operation
Header | Type | Description |
---|---|---|
| | The list of attributes returned by the operation. |
3.5.12. Advanced AmazonDynamoDB configuration
If you need more control over the AmazonDynamoDB instance configuration, you can create your own instance and refer to it from the URI:
from("direct:start") .to("aws2-ddb://tableName?amazonDDBClient=#client");
The #client refers to a DynamoDbClient in the Registry.
3.6. Supported producer operations
- BatchGetItems
- DeleteItem
- DeleteTable
- DescribeTable
- GetItem
- PutItem
- Query
- Scan
- UpdateItem
- UpdateTable
3.7. Examples
3.7.1. Producer Examples
- PutItem: this operation will create an entry into DynamoDB
from("direct:start") .setHeader(Ddb2Constants.OPERATION, Ddb2Operations.PutItem) .setHeader(Ddb2Constants.CONSISTENT_READ, "true") .setHeader(Ddb2Constants.RETURN_VALUES, "ALL_OLD") .setHeader(Ddb2Constants.ITEM, attributeMap) .setHeader(Ddb2Constants.ATTRIBUTE_NAMES, attributeMap.keySet()); .to("aws2-ddb://" + tableName + "?keyAttributeName=" + attributeName + "&keyAttributeType=" + KeyType.HASH + "&keyScalarType=" + ScalarAttributeType.S + "&readCapacity=1&writeCapacity=1");
Maven users will need to add the following dependency to their pom.xml.
pom.xml
<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-aws2-ddb</artifactId> <version>${camel-version}</version> </dependency>
where ${camel-version} must be replaced by the actual version of Camel.
3.8. Spring Boot Auto-Configuration
When using aws2-ddb with Spring Boot make sure to use the following Maven dependency to have support for auto configuration:
<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-aws2-ddb-starter</artifactId> </dependency>
The component supports 40 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
camel.component.aws2-ddb.access-key | Amazon AWS Access Key. | String | |
camel.component.aws2-ddb.amazon-d-d-b-client | To use the AmazonDynamoDB as the client. The option is a software.amazon.awssdk.services.dynamodb.DynamoDbClient type. | DynamoDbClient | |
camel.component.aws2-ddb.autowired-enabled | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | Boolean |
camel.component.aws2-ddb.configuration | The component configuration. The option is a org.apache.camel.component.aws2.ddb.Ddb2Configuration type. | Ddb2Configuration | |
camel.component.aws2-ddb.consistent-read | Determines whether or not strong consistency should be enforced when data is read. | false | Boolean |
camel.component.aws2-ddb.enabled | Whether to enable auto configuration of the aws2-ddb component. This is enabled by default. | Boolean | |
camel.component.aws2-ddb.enabled-initial-describe-table | Set whether the initial Describe table operation in the DDB Endpoint must be done, or not. | true | Boolean |
camel.component.aws2-ddb.key-attribute-name | Attribute name when creating table. | String | |
camel.component.aws2-ddb.key-attribute-type | Attribute type when creating table. | String | |
camel.component.aws2-ddb.key-scalar-type | The key scalar type, it can be S (String), N (Number) and B (Bytes). | String | |
camel.component.aws2-ddb.lazy-start-producer | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | Boolean |
camel.component.aws2-ddb.operation | What operation to perform. | Ddb2Operations | |
camel.component.aws2-ddb.override-endpoint | Set the need for overriding the endpoint. This option needs to be used in combination with uriEndpointOverride option. | false | Boolean |
camel.component.aws2-ddb.proxy-host | To define a proxy host when instantiating the DDB client. | String | |
camel.component.aws2-ddb.proxy-port | To define a proxy port when instantiating the DDB client. | Integer | |
camel.component.aws2-ddb.proxy-protocol | To define a proxy protocol when instantiating the DDB client. | Protocol | |
camel.component.aws2-ddb.read-capacity | The provisioned throughput to reserve for reading resources from your table. | Long | |
camel.component.aws2-ddb.region | The region in which DDB client needs to work. | String | |
camel.component.aws2-ddb.secret-key | Amazon AWS Secret Key. | String | |
camel.component.aws2-ddb.trust-all-certificates | If we want to trust all certificates in case of overriding the endpoint. | false | Boolean |
camel.component.aws2-ddb.uri-endpoint-override | Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option. | String | |
camel.component.aws2-ddb.use-default-credentials-provider | Set whether the DDB client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in. | false | Boolean |
camel.component.aws2-ddb.write-capacity | The provisioned throughput to reserve for writing resources to your table. | Long | |
camel.component.aws2-ddbstream.access-key | Amazon AWS Access Key. | String | |
camel.component.aws2-ddbstream.amazon-dynamo-db-streams-client | Amazon DynamoDB client to use for all requests for this endpoint. The option is a software.amazon.awssdk.services.dynamodb.streams.DynamoDbStreamsClient type. | DynamoDbStreamsClient | |
camel.component.aws2-ddbstream.autowired-enabled | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | Boolean |
camel.component.aws2-ddbstream.bridge-error-handler | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | Boolean |
camel.component.aws2-ddbstream.configuration | The component configuration. The option is a org.apache.camel.component.aws2.ddbstream.Ddb2StreamConfiguration type. | Ddb2StreamConfiguration | |
camel.component.aws2-ddbstream.enabled | Whether to enable auto configuration of the aws2-ddbstream component. This is enabled by default. | Boolean | |
camel.component.aws2-ddbstream.max-results-per-request | Maximum number of records that will be fetched in each poll. | Integer | |
camel.component.aws2-ddbstream.override-endpoint | Set the need for overriding the endpoint. This option needs to be used in combination with uriEndpointOverride option. | false | Boolean |
camel.component.aws2-ddbstream.proxy-host | To define a proxy host when instantiating the DDBStreams client. | String | |
camel.component.aws2-ddbstream.proxy-port | To define a proxy port when instantiating the DDBStreams client. | Integer | |
camel.component.aws2-ddbstream.proxy-protocol | To define a proxy protocol when instantiating the DDBStreams client. | Protocol | |
camel.component.aws2-ddbstream.region | The region in which DDBStreams client needs to work. | String | |
camel.component.aws2-ddbstream.secret-key | Amazon AWS Secret Key. | String | |
camel.component.aws2-ddbstream.stream-iterator-type | Defines where in the DynamoDB stream to start getting records. Note that using FROM_START can cause a significant delay before the stream has caught up to real-time. | Ddb2StreamConfiguration$StreamIteratorType | |
camel.component.aws2-ddbstream.trust-all-certificates | If we want to trust all certificates in case of overriding the endpoint. | false | Boolean |
camel.component.aws2-ddbstream.uri-endpoint-override | Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option. | String | |
camel.component.aws2-ddbstream.use-default-credentials-provider | Set whether the DynamoDB Streams client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in. | false | Boolean |
Chapter 4. AWS Kinesis
Both producer and consumer are supported
The AWS2 Kinesis component supports receiving messages from and sending messages to the Amazon Kinesis service (Batch is not supported).
Prerequisites
You must have a valid Amazon Web Services developer account, and be signed up to use Amazon Kinesis. More information is available at AWS Kinesis.
4.1. URI Format
aws2-kinesis://stream-name[?options]
The stream needs to be created before it is used. You can append query options to the URI in the following format: ?option=value&option2=value&…
4.2. Configuring Options
Camel components are configured on two separate levels:
- component level
- endpoint level
4.2.1. Configuring Component Options
The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth.
Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all.
Configuring components can be done with the Component DSL, in a configuration file (application.properties|yaml), or directly with Java code.
4.2.2. Configuring Endpoint Options
Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both.
Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints.
A good practice when configuring options is to use Property Placeholders, which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse.
The following two sections lists all the options, firstly for the component followed by the endpoint.
4.3. Component Options
The AWS Kinesis component supports 22 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
amazonKinesisClient (common) | Autowired Amazon Kinesis client to use for all requests for this endpoint. | KinesisClient | |
cborEnabled (common) | This option will set the CBOR_ENABLED property during the execution. | true | boolean |
configuration (common) | Component configuration. | Kinesis2Configuration | |
overrideEndpoint (common) | Set the need for overriding the endpoint. This option needs to be used in combination with uriEndpointOverride option. | false | boolean |
proxyHost (common) | To define a proxy host when instantiating the Kinesis client. | String | |
proxyPort (common) | To define a proxy port when instantiating the Kinesis client. | Integer | |
proxyProtocol (common) | To define a proxy protocol when instantiating the Kinesis client. Enum values: HTTP, HTTPS | HTTPS | Protocol |
region (common) | The region in which the Kinesis client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example ap-east-1). You’ll need to use the name Region.EU_WEST_1.id(). | String | |
trustAllCertificates (common) | If we want to trust all certificates in case of overriding the endpoint. | false | boolean |
uriEndpointOverride (common) | Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option. | String | |
useDefaultCredentialsProvider (common) | Set whether the Kinesis client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in. | false | boolean |
bridgeErrorHandler (consumer) | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | boolean |
iteratorType (consumer) | Defines where in the Kinesis stream to start getting records. Enum values: AT_SEQUENCE_NUMBER, AFTER_SEQUENCE_NUMBER, TRIM_HORIZON, LATEST, AT_TIMESTAMP | TRIM_HORIZON | ShardIteratorType |
maxResultsPerRequest (consumer) | Maximum number of records that will be fetched in each poll. | 1 | int |
resumeStrategy (consumer) | Defines a resume strategy for AWS Kinesis. The default strategy reads the sequenceNumber if provided. | KinesisUserConfigurationResumeStrategy | KinesisResumeStrategy |
sequenceNumber (consumer) | The sequence number to start polling from. Required if iteratorType is set to AFTER_SEQUENCE_NUMBER or AT_SEQUENCE_NUMBER. | String | |
shardClosed (consumer) | Define what will be the behavior in case of shard closed. Possible values are ignore, silent and fail. In case of ignore, a message will be logged and the consumer will restart from the beginning; in case of silent, there will be no logging and the consumer will start from the beginning; in case of fail, a ReachedClosedStateException will be raised. Enum values: ignore, silent, fail | ignore | Kinesis2ShardClosedStrategyEnum |
shardId (consumer) | Defines which shardId in the Kinesis stream to get records from. | String | |
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
autowiredEnabled (advanced) | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | boolean |
accessKey (security) | Amazon AWS Access Key. | String | |
secretKey (security) | Amazon AWS Secret Key. | String |
4.4. Endpoint Options
The AWS Kinesis endpoint is configured using URI syntax:
aws2-kinesis:streamName
with the following path and query parameters:
4.4.1. Path Parameters (1 parameters)
Name | Description | Default | Type |
---|---|---|---|
streamName (common) | Required Name of the stream. | String |
4.4.2. Query Parameters (38 parameters)
Name | Description | Default | Type |
---|---|---|---|
amazonKinesisClient (common) | Autowired Amazon Kinesis client to use for all requests for this endpoint. | KinesisClient | |
cborEnabled (common) | This option will set the CBOR_ENABLED property during the execution. | true | boolean |
overrideEndpoint (common) | Set the need for overriding the endpoint. This option needs to be used in combination with uriEndpointOverride option. | false | boolean |
proxyHost (common) | To define a proxy host when instantiating the Kinesis client. | String | |
proxyPort (common) | To define a proxy port when instantiating the Kinesis client. | Integer | |
proxyProtocol (common) | To define a proxy protocol when instantiating the Kinesis client. Enum values: HTTP, HTTPS | HTTPS | Protocol |
region (common) | The region in which the Kinesis client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example ap-east-1). You’ll need to use the name Region.EU_WEST_1.id(). | String | |
trustAllCertificates (common) | If we want to trust all certificates in case of overriding the endpoint. | false | boolean |
uriEndpointOverride (common) | Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option. | String | |
useDefaultCredentialsProvider (common) | Set whether the Kinesis client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in. | false | boolean |
bridgeErrorHandler (consumer) | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | boolean |
iteratorType (consumer) | Defines where in the Kinesis stream to start getting records. Enum values: AT_SEQUENCE_NUMBER, AFTER_SEQUENCE_NUMBER, TRIM_HORIZON, LATEST, AT_TIMESTAMP | TRIM_HORIZON | ShardIteratorType |
maxResultsPerRequest (consumer) | Maximum number of records that will be fetched in each poll. | 1 | int |
resumeStrategy (consumer) | Defines a resume strategy for AWS Kinesis. The default strategy reads the sequenceNumber if provided. | KinesisUserConfigurationResumeStrategy | KinesisResumeStrategy |
sendEmptyMessageWhenIdle (consumer) | If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead. | false | boolean |
sequenceNumber (consumer) | The sequence number to start polling from. Required if iteratorType is set to AFTER_SEQUENCE_NUMBER or AT_SEQUENCE_NUMBER. | String | |
shardClosed (consumer) | Define what will be the behavior in case of shard closed. Possible values are ignore, silent and fail. In case of ignore, a message will be logged and the consumer will restart from the beginning; in case of silent, there will be no logging and the consumer will start from the beginning; in case of fail, a ReachedClosedStateException will be raised. Enum values: ignore, silent, fail | ignore | Kinesis2ShardClosedStrategyEnum |
shardId (consumer) | Defines which shardId in the Kinesis stream to get records from. | String | |
exceptionHandler (consumer (advanced)) | To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. | ExceptionHandler | |
exchangePattern (consumer (advanced)) | Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly, InOut, InOptionalOut | ExchangePattern |
pollStrategy (consumer (advanced)) | A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel. | PollingConsumerPollStrategy | |
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
backoffErrorThreshold (scheduler) | The number of subsequent error polls (failed due to some error) that should happen before the backoffMultiplier should kick in. | int | |
backoffIdleThreshold (scheduler) | The number of subsequent idle polls that should happen before the backoffMultiplier should kick in. | int | |
backoffMultiplier (scheduler) | To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the next actual attempt is happening again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured. | int | |
delay (scheduler) | Milliseconds before the next poll. | 500 | long |
greedy (scheduler) | If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the previous run polled 1 or more messages. | false | boolean |
initialDelay (scheduler) | Milliseconds before the first poll starts. | 1000 | long |
repeatCount (scheduler) | Specifies a maximum limit of number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever. | 0 | long |
runLoggingLevel (scheduler) | The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that. Enum values: TRACE, DEBUG, INFO, WARN, ERROR, OFF | TRACE | LoggingLevel |
scheduledExecutorService (scheduler) | Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool. | ScheduledExecutorService | |
scheduler (scheduler) | To use a cron scheduler from either camel-spring or camel-quartz component. Use value spring or quartz for built in scheduler. | none | Object |
schedulerProperties (scheduler) | To configure additional properties when using a custom scheduler or any of the Quartz, Spring based scheduler. | Map | |
startScheduler (scheduler) | Whether the scheduler should be auto started. | true | boolean |
timeUnit (scheduler) | Time unit for initialDelay and delay options. Enum values: NANOSECONDS, MICROSECONDS, MILLISECONDS, SECONDS, MINUTES, HOURS, DAYS | MILLISECONDS | TimeUnit |
useFixedDelay (scheduler) | Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details. | true | boolean |
accessKey (security) | Amazon AWS Access Key. | String | |
secretKey (security) | Amazon AWS Secret Key. | String |
Required Kinesis component options
You have to provide the KinesisClient in the Registry with proxies and relevant credentials configured.
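One possible way to register such a client is sketched below; the bean name, region, credentials and proxy address are placeholder assumptions, and the proxy part additionally requires the AWS SDK Apache HTTP client module on the classpath:
import java.net.URI;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider;
import software.amazon.awssdk.http.apache.ApacheHttpClient;
import software.amazon.awssdk.http.apache.ProxyConfiguration;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.kinesis.KinesisClient;

@Configuration
public class KinesisClientConfiguration {

    // Registered under the name "kinesisClient" so it can be referenced as #kinesisClient in the URI
    @Bean(name = "kinesisClient")
    public KinesisClient kinesisClient() {
        return KinesisClient.builder()
                .region(Region.EU_WEST_1)
                .credentialsProvider(StaticCredentialsProvider.create(
                        AwsBasicCredentials.create("myAccessKey", "mySecretKey")))
                .httpClientBuilder(ApacheHttpClient.builder()
                        .proxyConfiguration(ProxyConfiguration.builder()
                                .endpoint(URI.create("http://myproxy:8080"))
                                .build()))
                .build();
    }
}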
4.5. Batch Consumer
This component implements the Batch Consumer.
This allows you, for instance, to know how many messages exist in this batch and lets the Aggregator aggregate this number of messages.
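For illustration, the batch information is exposed through Camel's generic batch consumer exchange properties (CamelBatchIndex, CamelBatchSize, CamelBatchComplete), which are not specific to this component; a route could log them like this sketch:
from("aws2-kinesis://mykinesisstream?amazonKinesisClient=#kinesisClient")
    .log("Record ${exchangeProperty.CamelBatchIndex} of ${exchangeProperty.CamelBatchSize} in this poll")
    .to("log:out?showAll=true");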
4.6. Usage
4.6.1. Static credentials vs Default Credential Provider
You can avoid the use of explicit static credentials by setting the useDefaultCredentialsProvider option to true (see the sketch after this list). In that case the AWS SDK looks for credentials in the following order:
- Java system properties - aws.accessKeyId and aws.secretKey
- Environment variables - AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
- Web Identity Token from AWS STS.
- The shared credentials and config files.
- Amazon ECS container credentials - loaded from the Amazon ECS if the environment variable AWS_CONTAINER_CREDENTIALS_RELATIVE_URI is set.
- Amazon EC2 Instance profile credentials.
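A consumer sketch using the option; the stream name and region are placeholders:
from("aws2-kinesis://mykinesisstream?useDefaultCredentialsProvider=true&region=eu-west-1") .to("log:out?showAll=true");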
For more information, see the AWS credentials documentation.
4.6.2. Message headers set by the Kinesis consumer
Header | Type | Description |
---|---|---|
| | The sequence number of the record. This is represented as a String as its size is not defined by the API. If it is to be used as a numerical type then use |
| | The time AWS assigned as the arrival time of the record. |
| | Identifies which shard in the stream the data record is assigned to. |
4.6.3. AmazonKinesis configuration
You have to reference the KinesisClient in the amazonKinesisClient URI option.
from("aws2-kinesis://mykinesisstream?amazonKinesisClient=#kinesisClient") .to("log:out?showAll=true");
4.6.4. Providing AWS Credentials
It is recommended to obtain the credentials by using the DefaultAWSCredentialsProviderChain, which is the default when creating a new ClientConfiguration instance; however, a different AWSCredentialsProvider can be specified when calling createClient(…).
4.6.5. Message headers used by the Kinesis producer to write to Kinesis. The producer expects that the message body is a byte[].
Header | Type | Description |
---|---|---|
| | The PartitionKey to pass to Kinesis to store this record. |
| | Optional parameter to indicate the sequence number of this record. |
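A producer sketch that sets the partition key header could look like the following; the Kinesis2Constants.PARTITION_KEY constant name, the stream name and the payload are assumptions used for illustration:
from("direct:producer")
    .setHeader(Kinesis2Constants.PARTITION_KEY, constant("myPartitionKey"))
    .setBody(constant("my payload".getBytes()))
    .to("aws2-kinesis://mykinesisstream?amazonKinesisClient=#kinesisClient");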
4.6.6. Message headers set by the Kinesis producer on successful storage of a Record
Header | Type | Description |
---|---|---|
| | The sequence number of the record, as defined in Response Syntax. |
| | The shard ID of where the Record was stored. |
4.7. Dependencies
Maven users will need to add the following dependency to their pom.xml.
pom.xml
<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-aws2-kinesis</artifactId> <version>${camel-version}</version> </dependency>
where ${camel-version} must be replaced by the actual version of Camel.
4.8. Spring Boot Auto-Configuration
When using aws2-kinesis with Spring Boot make sure to use the following Maven dependency to have support for auto configuration:
<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-aws2-kinesis-starter</artifactId> </dependency>
The component supports 40 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
camel.component.aws2-kinesis-firehose.access-key | Amazon AWS Access Key. | String | |
camel.component.aws2-kinesis-firehose.amazon-kinesis-firehose-client | Amazon Kinesis Firehose client to use for all requests for this endpoint. The option is a software.amazon.awssdk.services.firehose.FirehoseClient type. | FirehoseClient | |
camel.component.aws2-kinesis-firehose.autowired-enabled | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | Boolean |
camel.component.aws2-kinesis-firehose.cbor-enabled | This option will set the CBOR_ENABLED property during the execution. | true | Boolean |
camel.component.aws2-kinesis-firehose.configuration | Component configuration. The option is a org.apache.camel.component.aws2.firehose.KinesisFirehose2Configuration type. | KinesisFirehose2Configuration | |
camel.component.aws2-kinesis-firehose.enabled | Whether to enable auto configuration of the aws2-kinesis-firehose component. This is enabled by default. | Boolean | |
camel.component.aws2-kinesis-firehose.lazy-start-producer | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | Boolean |
camel.component.aws2-kinesis-firehose.operation | The operation to perform in case the user doesn’t want to send only a single record. | KinesisFirehose2Operations | |
camel.component.aws2-kinesis-firehose.override-endpoint | Set the need for overriding the endpoint. This option needs to be used in combination with uriEndpointOverride option. | false | Boolean |
camel.component.aws2-kinesis-firehose.proxy-host | To define a proxy host when instantiating the Kinesis Firehose client. | String | |
camel.component.aws2-kinesis-firehose.proxy-port | To define a proxy port when instantiating the Kinesis Firehose client. | Integer | |
camel.component.aws2-kinesis-firehose.proxy-protocol | To define a proxy protocol when instantiating the Kinesis Firehose client. | Protocol | |
camel.component.aws2-kinesis-firehose.region | The region in which the Kinesis Firehose client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example ap-east-1). You’ll need to use the name Region.EU_WEST_1.id(). | String | |
camel.component.aws2-kinesis-firehose.secret-key | Amazon AWS Secret Key. | String | |
camel.component.aws2-kinesis-firehose.trust-all-certificates | If we want to trust all certificates in case of overriding the endpoint. | false | Boolean |
camel.component.aws2-kinesis-firehose.uri-endpoint-override | Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option. | String | |
camel.component.aws2-kinesis-firehose.use-default-credentials-provider | Set whether the Kinesis Firehose client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in. | false | Boolean |
camel.component.aws2-kinesis.access-key | Amazon AWS Access Key. | String | |
camel.component.aws2-kinesis.amazon-kinesis-client | Amazon Kinesis client to use for all requests for this endpoint. The option is a software.amazon.awssdk.services.kinesis.KinesisClient type. | KinesisClient | |
camel.component.aws2-kinesis.autowired-enabled | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | Boolean |
camel.component.aws2-kinesis.bridge-error-handler | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | Boolean |
camel.component.aws2-kinesis.cbor-enabled | This option will set the CBOR_ENABLED property during the execution. | true | Boolean |
camel.component.aws2-kinesis.configuration | Component configuration. The option is a org.apache.camel.component.aws2.kinesis.Kinesis2Configuration type. | Kinesis2Configuration | |
camel.component.aws2-kinesis.enabled | Whether to enable auto configuration of the aws2-kinesis component. This is enabled by default. | Boolean | |
camel.component.aws2-kinesis.iterator-type | Defines where in the Kinesis stream to start getting records. | ShardIteratorType | |
camel.component.aws2-kinesis.lazy-start-producer | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | Boolean |
camel.component.aws2-kinesis.max-results-per-request | Maximum number of records that will be fetched in each poll. | 1 | Integer |
camel.component.aws2-kinesis.override-endpoint | Set the need for overriding the endpoint. This option needs to be used in combination with uriEndpointOverride option. | false | Boolean |
camel.component.aws2-kinesis.proxy-host | To define a proxy host when instantiating the Kinesis client. | String | |
camel.component.aws2-kinesis.proxy-port | To define a proxy port when instantiating the Kinesis client. | Integer | |
camel.component.aws2-kinesis.proxy-protocol | To define a proxy protocol when instantiating the Kinesis client. | Protocol | |
camel.component.aws2-kinesis.region | The region in which the Kinesis client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example ap-east-1). You’ll need to use the name Region.EU_WEST_1.id(). | String | |
camel.component.aws2-kinesis.resume-strategy | Defines a resume strategy for AWS Kinesis. The default strategy reads the sequenceNumber if provided. The option is a org.apache.camel.component.aws2.kinesis.consumer.KinesisResumeStrategy type. | KinesisResumeStrategy | |
camel.component.aws2-kinesis.secret-key | Amazon AWS Secret Key. | String | |
camel.component.aws2-kinesis.sequence-number | The sequence number to start polling from. Required if iteratorType is set to AFTER_SEQUENCE_NUMBER or AT_SEQUENCE_NUMBER. | String | |
camel.component.aws2-kinesis.shard-closed | Define what will be the behavior in case of shard closed. Possible value are ignore, silent and fail. In case of ignore a message will be logged and the consumer will restart from the beginning,in case of silent there will be no logging and the consumer will start from the beginning,in case of fail a ReachedClosedStateException will be raised. | Kinesis2ShardClosedStrategyEnum | |
camel.component.aws2-kinesis.shard-id | Defines which shardId in the Kinesis stream to get records from. | String | |
camel.component.aws2-kinesis.trust-all-certificates | If we want to trust all certificates in case of overriding the endpoint. | false | Boolean |
camel.component.aws2-kinesis.uri-endpoint-override | Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option. | String | |
camel.component.aws2-kinesis.use-default-credentials-provider | Set whether the Kinesis client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in. | false | Boolean |
Chapter 5. AWS 2 Lambda
Only producer is supported
The AWS2 Lambda component supports creating, getting, listing, deleting and invoking AWS Lambda functions.
Prerequisites
You must have a valid Amazon Web Services developer account, and be signed up to use Amazon Lambda. More information is available at AWS Lambda.
When creating a Lambda function, you need to specify an IAM role which has at least the AWSLambdaBasicExecutionRole policy attached.
5.1. URI Format
aws2-lambda://functionName[?options]
You can append query options to the URI in the following format: ?option=value&option2=value&…
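For instance, invoking an existing function could look like the following sketch; the function name and JSON payload are placeholders, and credentials and region still need to be configured through the options described below:
from("direct:invoke") .setBody(constant("{ \"name\": \"Camel\" }")) .to("aws2-lambda://myFunction?operation=invokeFunction");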
5.2. Configuring Options
Camel components are configured on two separate levels:
- component level
- endpoint level
5.2.1. Configuring Component Options
The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth.
Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all.
Configuring components can be done with the Component DSL, in a configuration file (application.properties|yaml), or directly with Java code.
5.2.2. Configuring Endpoint Options
Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both.
Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints.
A good practice when configuring options is to use Property Placeholders, which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse.
The following two sections lists all the options, firstly for the component followed by the endpoint.
5.3. Component Options
The AWS Lambda component supports 16 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
configuration (producer) | Component configuration. | Lambda2Configuration | |
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
operation (producer) | The operation to perform. It can be listFunctions, getFunction, createFunction, deleteFunction or invokeFunction. | invokeFunction | Lambda2Operations |
overrideEndpoint (producer) | Set the need for overriding the endpoint. This option needs to be used in combination with uriEndpointOverride option. | false | boolean |
pojoRequest (producer) | If we want to use a POJO request as body or not. | false | boolean |
region (producer) | The region in which Lambda client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example ap-east-1) You’ll need to use the name Region.EU_WEST_1.id(). | String | |
trustAllCertificates (producer) | If we want to trust all certificates in case of overriding the endpoint. | false | boolean |
uriEndpointOverride (producer) | Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option. | String | |
useDefaultCredentialsProvider (producer) | Set whether the Lambda client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in. | false | boolean |
autowiredEnabled (advanced) | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | boolean |
awsLambdaClient (advanced) | Autowired To use a existing configured AwsLambdaClient as client. | LambdaClient | |
proxyHost (proxy) | To define a proxy host when instantiating the Lambda client. | String | |
proxyPort (proxy) | To define a proxy port when instantiating the Lambda client. | Integer | |
proxyProtocol (proxy) | To define a proxy protocol when instantiating the Lambda client. | HTTPS | Protocol |
accessKey (security) | Amazon AWS Access Key. | String | |
secretKey (security) | Amazon AWS Secret Key. | String |
5.4. Endpoint Options
The AWS Lambda endpoint is configured using URI syntax:
aws2-lambda:function
with the following path and query parameters:
5.4.1. Path Parameters (1 parameters)
Name | Description | Default | Type |
---|---|---|---|
function (producer) | Required Name of the Lambda function. | String |
5.4.2. Query Parameters (14 parameters)
Name | Description | Default | Type |
---|---|---|---|
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
operation (producer) | The operation to perform. It can be listFunctions, getFunction, createFunction, deleteFunction or invokeFunction. | invokeFunction | Lambda2Operations |
overrideEndpoint (producer) | Set the need for overriding the endpoint. This option needs to be used in combination with uriEndpointOverride option. | false | boolean |
pojoRequest (producer) | If we want to use a POJO request as body or not. | false | boolean |
region (producer) | The region in which Lambda client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example ap-east-1) You’ll need to use the name Region.EU_WEST_1.id(). | String | |
trustAllCertificates (producer) | If we want to trust all certificates in case of overriding the endpoint. | false | boolean |
uriEndpointOverride (producer) | Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option. | String | |
useDefaultCredentialsProvider (producer) | Set whether the Lambda client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in. | false | boolean |
awsLambdaClient (advanced) | Autowired To use a existing configured AwsLambdaClient as client. | LambdaClient | |
proxyHost (proxy) | To define a proxy host when instantiating the Lambda client. | String | |
proxyPort (proxy) | To define a proxy port when instantiating the Lambda client. | Integer | |
proxyProtocol (proxy) | To define a proxy protocol when instantiating the Lambda client. | HTTPS | Protocol |
accessKey (security) | Amazon AWS Access Key. | String | |
secretKey (security) | Amazon AWS Secret Key. | String |
Required Lambda component options
You have to provide the awsLambdaClient in the Registry or your accessKey and secretKey to access the Amazon Lambda service.
5.5. Usage
5.5.1. Static credentials vs Default Credential Provider
You can avoid the usage of explicit static credentials by specifying the useDefaultCredentialsProvider option and setting it to true. In this case the AWS SDK will search for credentials in the following order (a short route example follows the list):
- Java system properties - aws.accessKeyId and aws.secretKey
- Environment variables - AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
- Web Identity Token from AWS STS.
- The shared credentials and config files.
- Amazon ECS container credentials - loaded from the Amazon ECS if the environment variable AWS_CONTAINER_CREDENTIALS_RELATIVE_URI is set.
- Amazon EC2 Instance profile credentials.
For more information about this, see the AWS credentials documentation.
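As a minimal sketch (function name and endpoints are placeholders), a route relying on the default credentials provider chain instead of static keys could look like this:
from("direct:invokeFunction")
    .to("aws2-lambda://GetHelloWithName?useDefaultCredentialsProvider=true&region=eu-west-1&operation=invokeFunction")
    .to("mock:result");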
5.5.2. Message headers evaluated by the Lambda producer
Operation | Header | Type | Description | Required |
---|---|---|---|---|
All | | | The operation we want to perform. Override operation passed as query parameter | Yes |
createFunction | | | Amazon S3 bucket name where the .zip file containing your deployment package is stored. This bucket must reside in the same AWS region where you are creating the Lambda function. | No |
createFunction | | | The Amazon S3 object (the deployment package) key name you want to upload. | No |
createFunction | | String | The Amazon S3 object (the deployment package) version you want to upload. | No |
createFunction | | | The local path of the zip file (the deployment package). Content of zip file can also be put in Message body. | No |
createFunction | | | The Amazon Resource Name (ARN) of the IAM role that Lambda assumes when it executes your function to access any other Amazon Web Services (AWS) resources. | Yes |
createFunction | | String | The runtime environment for the Lambda function you are uploading. (nodejs, nodejs4.3, nodejs6.10, java8, python2.7, python3.6, dotnetcore1.0, nodejs4.3-edge) | Yes |
createFunction | | | The function within your code that Lambda calls to begin execution. For Node.js, it is the module-name.export value in your function. For Java, it can be package.class-name::handler or package.class-name. | Yes |
createFunction | | | The user-provided description. | No |
createFunction | | | The parent object that contains the target ARN (Amazon Resource Name) of an Amazon SQS queue or Amazon SNS topic. | No |
createFunction | | | The memory size, in MB, you configured for the function. Must be a multiple of 64 MB. | No |
createFunction | | | The Amazon Resource Name (ARN) of the KMS key used to encrypt your function’s environment variables. If not provided, AWS Lambda will use a default service key. | No |
createFunction | | | This boolean parameter can be used to request AWS Lambda to create the Lambda function and publish a version as an atomic operation. | No |
createFunction | | | The function execution time at which Lambda should terminate the function. The default is 3 seconds. | No |
createFunction | | | Your function’s tracing settings (Active or PassThrough). | No |
createFunction | | | The key-value pairs that represent your environment’s configuration settings. | No |
createFunction | | | The list of tags (key-value pairs) assigned to the new function. | No |
createFunction | | | If your Lambda function accesses resources in a VPC, a list of one or more security group IDs in your VPC. | No |
createFunction | | | If your Lambda function accesses resources in a VPC, a list of one or more subnet IDs in your VPC. | No |
createAlias | | | The function version to set in the alias | Yes |
createAlias | | | The function name to set in the alias | Yes |
createAlias | | | The function description to set in the alias | No |
deleteAlias | | | The function name of the alias | Yes |
getAlias | | | The function name of the alias | Yes |
listAliases | | | The function version to set in the alias | No |
5.6. List of Available Operations
- listFunctions
- getFunction
- createFunction
- deleteFunction
- invokeFunction
- updateFunction
- createEventSourceMapping
- deleteEventSourceMapping
- listEventSourceMapping
- listTags
- tagResource
- untagResource
- publishVersion
- listVersions
- createAlias
- deleteAlias
- getAlias
- listAliases
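The operation configured on the endpoint can also be overridden per exchange through the operation header (see the table above). A minimal sketch, assuming the Lambda2Constants.OPERATION constant resolves to that operation header (function name, credentials and endpoints are placeholders):
from("direct:operations")
    .setHeader(Lambda2Constants.OPERATION, constant("listFunctions"))
    .to("aws2-lambda://GetHelloWithName?accessKey=xxx&secretKey=yyy&region=eu-west-1")
    .to("mock:result");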
5.7. Examples
5.7.1. Producer Example
To have a full understanding of how the component works, you may have a look at these integration tests.
5.7.2. Producer Examples
- CreateFunction: this operation will create a function for you in AWS Lambda
from("direct:createFunction").to("aws2-lambda://GetHelloWithName?operation=createFunction").to("mock:result");
and by sending
template.send("direct:createFunction", ExchangePattern.InOut, new Processor() { @Override public void process(Exchange exchange) throws Exception { exchange.getIn().setHeader(Lambda2Constants.RUNTIME, "nodejs6.10"); exchange.getIn().setHeader(Lambda2Constants.HANDLER, "GetHelloWithName.handler"); exchange.getIn().setHeader(Lambda2Constants.DESCRIPTION, "Hello with node.js on Lambda"); exchange.getIn().setHeader(Lambda2Constants.ROLE, "arn:aws:iam::643534317684:role/lambda-execution-role"); ClassLoader classLoader = getClass().getClassLoader(); File file = new File( classLoader .getResource("org/apache/camel/component/aws2/lambda/function/node/GetHelloWithName.zip") .getFile()); FileInputStream inputStream = new FileInputStream(file); exchange.getIn().setBody(inputStream); } });
5.8. Using a POJO as body
Sometimes building an AWS request can be complex because of multiple options. For this reason it is possible to use a POJO as the message body. In AWS Lambda there are multiple operations you can submit; as an example, for a GetFunction request, you can do something like:
from("direct:getFunction") .setBody(GetFunctionRequest.builder().functionName("test").build()) .to("aws2-lambda://GetHelloWithName?awsLambdaClient=#awsLambdaClient&operation=getFunction&pojoRequest=true")
In this way you pass the request directly, without the need to set headers and options specifically related to this operation.
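As a usage sketch (the direct endpoint matches the route above; the null body is just a trigger, since the route sets the request POJO itself):
// trigger the route; the GetFunctionRequest POJO is set as the body inside the route
Object reply = template.requestBody("direct:getFunction", (Object) null);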
5.9. Dependencies
Maven users will need to add the following dependency to their pom.xml.
pom.xml
<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-aws2-lambda</artifactId> <version>${camel-version}</version> </dependency>
where ${camel-version} must be replaced by the actual version of Camel.
5.10. Spring Boot Auto-Configuration
When using aws2-lambda with Spring Boot make sure to use the following Maven dependency to have support for auto configuration:
<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-aws2-lambda-starter</artifactId> </dependency>
The component supports 17 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
camel.component.aws2-lambda.access-key | Amazon AWS Access Key. | String | |
camel.component.aws2-lambda.autowired-enabled | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | Boolean |
camel.component.aws2-lambda.aws-lambda-client | To use a existing configured AwsLambdaClient as client. The option is a software.amazon.awssdk.services.lambda.LambdaClient type. | LambdaClient | |
camel.component.aws2-lambda.configuration | Component configuration. The option is a org.apache.camel.component.aws2.lambda.Lambda2Configuration type. | Lambda2Configuration | |
camel.component.aws2-lambda.enabled | Whether to enable auto configuration of the aws2-lambda component. This is enabled by default. | Boolean | |
camel.component.aws2-lambda.lazy-start-producer | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | Boolean |
camel.component.aws2-lambda.operation | The operation to perform. It can be listFunctions, getFunction, createFunction, deleteFunction or invokeFunction. | Lambda2Operations | |
camel.component.aws2-lambda.override-endpoint | Set the need for overriding the endpoint. This option needs to be used in combination with uriEndpointOverride option. | false | Boolean |
camel.component.aws2-lambda.pojo-request | If we want to use a POJO request as body or not. | false | Boolean |
camel.component.aws2-lambda.proxy-host | To define a proxy host when instantiating the Lambda client. | String | |
camel.component.aws2-lambda.proxy-port | To define a proxy port when instantiating the Lambda client. | Integer | |
camel.component.aws2-lambda.proxy-protocol | To define a proxy protocol when instantiating the Lambda client. | Protocol | |
camel.component.aws2-lambda.region | The region in which Lambda client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example ap-east-1) You’ll need to use the name Region.EU_WEST_1.id(). | String | |
camel.component.aws2-lambda.secret-key | Amazon AWS Secret Key. | String | |
camel.component.aws2-lambda.trust-all-certificates | If we want to trust all certificates in case of overriding the endpoint. | false | Boolean |
camel.component.aws2-lambda.uri-endpoint-override | Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option. | String | |
camel.component.aws2-lambda.use-default-credentials-provider | Set whether the Lambda client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in. | false | Boolean |
Chapter 6. AWS S3 Storage Service
Both producer and consumer are supported
The AWS2 S3 component supports storing and retrieving objects from/to Amazon’s S3 service.
Prerequisites
You must have a valid Amazon Web Services developer account, and be signed up to use Amazon S3. More information is available at Amazon S3 (https://aws.amazon.com/s3).
6.1. URI Format
aws2-s3://bucketNameOrArn[?options]
The bucket will be created if it doesn’t already exist. You can append query options to the URI in the following format,
options=value&option2=value&…
6.2. Configuring Options
Camel components are configured on two separate levels:
- component level
- endpoint level
6.2.1. Configuring Component Options
The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth.
Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all.
Configuring components can be done with the Component DSL, in a configuration file (application.properties|yaml), or directly with Java code.
6.2.2. Configuring Endpoint Options
Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both.
Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints.
A good practice when configuring options is to use Property Placeholders, which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse.
The following two sections lists all the options, firstly for the component followed by the endpoint.
6.3. Component Options
The AWS S3 Storage Service component supports 50 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
amazonS3Client (common) | Autowired Reference to a com.amazonaws.services.s3.AmazonS3 in the registry. | S3Client | |
amazonS3Presigner (common) | Autowired An S3 Presigner for Request, used mainly in createDownloadLink operation. | S3Presigner | |
autoCreateBucket (common) | Setting the autocreation of the S3 bucket bucketName. This will apply also in case of moveAfterRead option enabled and it will create the destinationBucket if it doesn’t exist already. | false | boolean |
configuration (common) | The component configuration. | AWS2S3Configuration | |
overrideEndpoint (common) | Set the need for overriding the endpoint. This option needs to be used in combination with uriEndpointOverride option. | false | boolean |
pojoRequest (common) | If we want to use a POJO request as body or not. | false | boolean |
policy (common) | The policy for this queue to set in the com.amazonaws.services.s3.AmazonS3#setBucketPolicy() method. | String | |
proxyHost (common) | To define a proxy host when instantiating the SQS client. | String | |
proxyPort (common) | Specify a proxy port to be used inside the client definition. | Integer | |
proxyProtocol (common) | To define a proxy protocol when instantiating the S3 client. | HTTPS | Protocol |
region (common) | The region in which S3 client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example ap-east-1) You’ll need to use the name Region.EU_WEST_1.id(). | String | |
trustAllCertificates (common) | If we want to trust all certificates in case of overriding the endpoint. | false | boolean |
uriEndpointOverride (common) | Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option. | String | |
useDefaultCredentialsProvider (common) | Set whether the S3 client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in. | false | boolean |
customerAlgorithm (common (advanced)) | Define the customer algorithm to use in case CustomerKey is enabled. | String | |
customerKeyId (common (advanced)) | Define the id of Customer key to use in case CustomerKey is enabled. | String | |
customerKeyMD5 (common (advanced)) | Define the MD5 of Customer key to use in case CustomerKey is enabled. | String | |
bridgeErrorHandler (consumer) | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | boolean |
deleteAfterRead (consumer) | Delete objects from S3 after they have been retrieved. The delete is only performed if the Exchange is committed. If a rollback occurs, the object is not deleted. If this option is false, then the same objects will be retrieved over and over again on the polls. Therefore you need to use the Idempotent Consumer EIP in the route to filter out duplicates. You can filter using the AWS2S3Constants#BUCKET_NAME and AWS2S3Constants#KEY headers, or only the AWS2S3Constants#KEY header. | true | boolean |
delimiter (consumer) | The delimiter which is used in the com.amazonaws.services.s3.model.ListObjectsRequest to only consume objects we are interested in. | String | |
destinationBucket (consumer) | Define the destination bucket where an object must be moved when moveAfterRead is set to true. | String | |
destinationBucketPrefix (consumer) | Define the destination bucket prefix to use when an object must be moved and moveAfterRead is set to true. | String | |
destinationBucketSuffix (consumer) | Define the destination bucket suffix to use when an object must be moved and moveAfterRead is set to true. | String | |
doneFileName (consumer) | If provided, Camel will only consume files if a done file exists. | String | |
fileName (consumer) | To get the object from the bucket with the given file name. | String | |
ignoreBody (consumer) | If it is true, the S3 Object Body will be ignored completely, if it is set to false the S3 Object will be put in the body. Setting this to true, will override any behavior defined by includeBody option. | false | boolean |
includeBody (consumer) | If it is true, the S3Object exchange will be consumed and put into the body and closed. If false the S3Object stream will be put raw into the body and the headers will be set with the S3 object metadata. This option is strongly related to autocloseBody option. In case of setting includeBody to true because the S3Object stream will be consumed then it will also be closed, while in case of includeBody false then it will be up to the caller to close the S3Object stream. However setting autocloseBody to true when includeBody is false it will schedule to close the S3Object stream automatically on exchange completion. | true | boolean |
includeFolders (consumer) | If it is true, the folders/directories will be consumed. If it is false, they will be ignored, and Exchanges will not be created for those. | true | boolean |
moveAfterRead (consumer) | Move objects from S3 bucket to a different bucket after they have been retrieved. To accomplish the operation the destinationBucket option must be set. The copy bucket operation is only performed if the Exchange is committed. If a rollback occurs, the object is not moved. | false | boolean |
prefix (consumer) | The prefix which is used in the com.amazonaws.services.s3.model.ListObjectsRequest to only consume objects we are interested in. | String | |
autocloseBody (consumer (advanced)) | If this option is true and includeBody is false, then the S3Object.close() method will be called on exchange completion. This option is strongly related to includeBody option. In case of setting includeBody to false and autocloseBody to false, it will be up to the caller to close the S3Object stream. Setting autocloseBody to true, will close the S3Object stream automatically. | true | boolean |
batchMessageNumber (producer) | The number of messages composing a batch in streaming upload mode. | 10 | int |
batchSize (producer) | The batch size (in bytes) in streaming upload mode. | 1000000 | int |
deleteAfterWrite (producer) | Delete file object after the S3 file has been uploaded. | false | boolean |
keyName (producer) | Setting the key name for an element in the bucket through endpoint parameter. | String | |
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
multiPartUpload (producer) | If it is true, camel will upload the file with multi part format, the part size is decided by the option of partSize. | false | boolean |
namingStrategy (producer) | The naming strategy to use in streaming upload mode. | progressive | AWSS3NamingStrategyEnum |
operation (producer) | The operation to do in case the user doesn’t want to do only an upload. | | AWS2S3Operations |
partSize (producer) | Setup the partSize which is used in multi part upload, the default size is 25M. | 26214400 | long |
restartingPolicy (producer) | The restarting policy to use in streaming upload mode. | override | AWSS3RestartingPolicyEnum |
storageClass (producer) | The storage class to set in the com.amazonaws.services.s3.model.PutObjectRequest request. | String | |
streamingUploadMode (producer) | When stream mode is true the upload to bucket will be done in streaming. | false | boolean |
streamingUploadTimeout (producer) | While streaming upload mode is true, this option sets the timeout to complete the upload. | long |
awsKMSKeyId (producer (advanced)) | Define the id of KMS key to use in case KMS is enabled. | String | |
useAwsKMS (producer (advanced)) | Define if KMS must be used or not. | false | boolean |
useCustomerKey (producer (advanced)) | Define if Customer Key must be used or not. | false | boolean |
autowiredEnabled (advanced) | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | boolean |
accessKey (security) | Amazon AWS Access Key. | String | |
secretKey (security) | Amazon AWS Secret Key. | String |
6.4. Endpoint Options
The AWS S3 Storage Service endpoint is configured using URI syntax:
aws2-s3://bucketNameOrArn
with the following path and query parameters:
6.4.1. Path Parameters (1 parameters)
Name | Description | Default | Type |
---|---|---|---|
bucketNameOrArn (common) | Required Bucket name or ARN. | String |
6.4.2. Query Parameters (68 parameters)
Name | Description | Default | Type |
---|---|---|---|
amazonS3Client (common) | Autowired Reference to a com.amazonaws.services.s3.AmazonS3 in the registry. | S3Client | |
amazonS3Presigner (common) | Autowired An S3 Presigner for Request, used mainly in createDownloadLink operation. | S3Presigner | |
autoCreateBucket (common) | Setting the autocreation of the S3 bucket bucketName. This will apply also in case of moveAfterRead option enabled and it will create the destinationBucket if it doesn’t exist already. | false | boolean |
overrideEndpoint (common) | Set the need for overriding the endpoint. This option needs to be used in combination with uriEndpointOverride option. | false | boolean |
pojoRequest (common) | If we want to use a POJO request as body or not. | false | boolean |
policy (common) | The policy for this queue to set in the com.amazonaws.services.s3.AmazonS3#setBucketPolicy() method. | String | |
proxyHost (common) | To define a proxy host when instantiating the SQS client. | String | |
proxyPort (common) | Specify a proxy port to be used inside the client definition. | Integer | |
proxyProtocol (common) | To define a proxy protocol when instantiating the S3 client. | HTTPS | Protocol |
region (common) | The region in which S3 client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example ap-east-1) You’ll need to use the name Region.EU_WEST_1.id(). | String | |
trustAllCertificates (common) | If we want to trust all certificates in case of overriding the endpoint. | false | boolean |
uriEndpointOverride (common) | Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option. | String | |
useDefaultCredentialsProvider (common) | Set whether the S3 client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in. | false | boolean |
customerAlgorithm (common (advanced)) | Define the customer algorithm to use in case CustomerKey is enabled. | String | |
customerKeyId (common (advanced)) | Define the id of Customer key to use in case CustomerKey is enabled. | String | |
customerKeyMD5 (common (advanced)) | Define the MD5 of Customer key to use in case CustomerKey is enabled. | String | |
bridgeErrorHandler (consumer) | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | boolean |
deleteAfterRead (consumer) | Delete objects from S3 after they have been retrieved. The delete is only performed if the Exchange is committed. If a rollback occurs, the object is not deleted. If this option is false, then the same objects will be retrieved over and over again on the polls. Therefore you need to use the Idempotent Consumer EIP in the route to filter out duplicates. You can filter using the AWS2S3Constants#BUCKET_NAME and AWS2S3Constants#KEY headers, or only the AWS2S3Constants#KEY header. | true | boolean |
delimiter (consumer) | The delimiter which is used in the com.amazonaws.services.s3.model.ListObjectsRequest to only consume objects we are interested in. | String | |
destinationBucket (consumer) | Define the destination bucket where an object must be moved when moveAfterRead is set to true. | String | |
destinationBucketPrefix (consumer) | Define the destination bucket prefix to use when an object must be moved and moveAfterRead is set to true. | String | |
destinationBucketSuffix (consumer) | Define the destination bucket suffix to use when an object must be moved and moveAfterRead is set to true. | String | |
doneFileName (consumer) | If provided, Camel will only consume files if a done file exists. | String | |
fileName (consumer) | To get the object from the bucket with the given file name. | String | |
ignoreBody (consumer) | If it is true, the S3 Object Body will be ignored completely, if it is set to false the S3 Object will be put in the body. Setting this to true, will override any behavior defined by includeBody option. | false | boolean |
includeBody (consumer) | If it is true, the S3Object exchange will be consumed and put into the body and closed. If false the S3Object stream will be put raw into the body and the headers will be set with the S3 object metadata. This option is strongly related to autocloseBody option. In case of setting includeBody to true because the S3Object stream will be consumed then it will also be closed, while in case of includeBody false then it will be up to the caller to close the S3Object stream. However setting autocloseBody to true when includeBody is false it will schedule to close the S3Object stream automatically on exchange completion. | true | boolean |
includeFolders (consumer) | If it is true, the folders/directories will be consumed. If it is false, they will be ignored, and Exchanges will not be created for those. | true | boolean |
maxConnections (consumer) | Set the maxConnections parameter in the S3 client configuration. | 60 | int |
maxMessagesPerPoll (consumer) | Gets the maximum number of messages as a limit to poll at each polling. The default value is 10. Use 0 or a negative number to set it as unlimited. | 10 | int |
moveAfterRead (consumer) | Move objects from S3 bucket to a different bucket after they have been retrieved. To accomplish the operation the destinationBucket option must be set. The copy bucket operation is only performed if the Exchange is committed. If a rollback occurs, the object is not moved. | false | boolean |
prefix (consumer) | The prefix which is used in the com.amazonaws.services.s3.model.ListObjectsRequest to only consume objects we are interested in. | String | |
sendEmptyMessageWhenIdle (consumer) | If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead. | false | boolean |
autocloseBody (consumer (advanced)) | If this option is true and includeBody is false, then the S3Object.close() method will be called on exchange completion. This option is strongly related to includeBody option. In case of setting includeBody to false and autocloseBody to false, it will be up to the caller to close the S3Object stream. Setting autocloseBody to true, will close the S3Object stream automatically. | true | boolean |
exceptionHandler (consumer (advanced)) | To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. | ExceptionHandler | |
exchangePattern (consumer (advanced)) | Sets the exchange pattern when the consumer creates an exchange. | | ExchangePattern |
pollStrategy (consumer (advanced)) | A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel. | PollingConsumerPollStrategy | |
batchMessageNumber (producer) | The number of messages composing a batch in streaming upload mode. | 10 | int |
batchSize (producer) | The batch size (in bytes) in streaming upload mode. | 1000000 | int |
deleteAfterWrite (producer) | Delete file object after the S3 file has been uploaded. | false | boolean |
keyName (producer) | Setting the key name for an element in the bucket through endpoint parameter. | String | |
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
multiPartUpload (producer) | If it is true, camel will upload the file with multi part format, the part size is decided by the option of partSize. | false | boolean |
namingStrategy (producer) | The naming strategy to use in streaming upload mode. | progressive | AWSS3NamingStrategyEnum |
operation (producer) | The operation to do in case the user doesn’t want to do only an upload. | | AWS2S3Operations |
partSize (producer) | Setup the partSize which is used in multi part upload, the default size is 25M. | 26214400 | long |
restartingPolicy (producer) | The restarting policy to use in streaming upload mode. | override | AWSS3RestartingPolicyEnum |
storageClass (producer) | The storage class to set in the com.amazonaws.services.s3.model.PutObjectRequest request. | String | |
streamingUploadMode (producer) | When stream mode is true the upload to bucket will be done in streaming. | false | boolean |
streamingUploadTimeout (producer) | While streaming upload mode is true, this option sets the timeout to complete the upload. | long |
awsKMSKeyId (producer (advanced)) | Define the id of KMS key to use in case KMS is enabled. | String | |
useAwsKMS (producer (advanced)) | Define if KMS must be used or not. | false | boolean |
useCustomerKey (producer (advanced)) | Define if Customer Key must be used or not. | false | boolean |
backoffErrorThreshold (scheduler) | The number of subsequent error polls (failed due some error) that should happen before the backoffMultipler should kick-in. | int | |
backoffIdleThreshold (scheduler) | The number of subsequent idle polls that should happen before the backoffMultipler should kick-in. | int | |
backoffMultiplier (scheduler) | To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the next actual attempt is happening again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured. | int | |
delay (scheduler) | Milliseconds before the next poll. | 500 | long |
greedy (scheduler) | If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the previous run polled 1 or more messages. | false | boolean |
initialDelay (scheduler) | Milliseconds before the first poll starts. | 1000 | long |
repeatCount (scheduler) | Specifies a maximum limit of number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever. | 0 | long |
runLoggingLevel (scheduler) | The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that. | TRACE | LoggingLevel |
scheduledExecutorService (scheduler) | Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool. | ScheduledExecutorService | |
scheduler (scheduler) | To use a cron scheduler from either camel-spring or camel-quartz component. Use value spring or quartz for built in scheduler. | none | Object |
schedulerProperties (scheduler) | To configure additional properties when using a custom scheduler or any of the Quartz, Spring based scheduler. | Map | |
startScheduler (scheduler) | Whether the scheduler should be auto started. | true | boolean |
timeUnit (scheduler) | Time unit for initialDelay and delay options. | MILLISECONDS | TimeUnit |
useFixedDelay (scheduler) | Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details. | true | boolean |
accessKey (security) | Amazon AWS Access Key. | String | |
secretKey (security) | Amazon AWS Secret Key. | String |
Required S3 component options
You have to provide the amazonS3Client in the Registry or your accessKey and secretKey to access Amazon’s S3.
6.5. Batch Consumer
This component implements the Batch Consumer.
This allows you, for instance, to know how many messages exist in this batch and to let the Aggregator aggregate this number of messages.
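A minimal sketch (bucket name and client reference are placeholders) that logs the standard batch exchange properties for each polled object:
from("aws2-s3://mycamelbucket?amazonS3Client=#amazonS3Client&maxMessagesPerPoll=10")
    .log("Polled ${header.CamelAwsS3Key} - index ${exchangeProperty.CamelBatchIndex} of ${exchangeProperty.CamelBatchSize}")
    .to("mock:result");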
6.6. Usage
For example, in order to read file hello.txt from bucket helloBucket, use the following snippet:
from("aws2-s3://helloBucket?accessKey=yourAccessKey&secretKey=yourSecretKey&prefix=hello.txt") .to("file:/var/downloaded");
6.6.1. Message headers evaluated by the S3 producer
Header | Type | Description |
---|---|---|
| | The bucket Name which this object will be stored or which will be used for the current operation |
| | The bucket Destination Name which will be used for the current operation |
| | The content length of this object. |
| | The content type of this object. |
| | The content control of this object. |
| | The content disposition of this object. |
| | The content encoding of this object. |
| | The md5 checksum of this object. |
| | The Destination key which will be used for the current operation |
| | The key under which this object will be stored or which will be used for the current operation |
| | The last modified timestamp of this object. |
| | The operation to perform. Permitted values are copyObject, deleteObject, listBuckets, deleteBucket, listObjects |
| | The storage class of this object. |
| | The canned acl that will be applied to the object. |
| | A well constructed Amazon S3 Access Control List object. |
| String | Sets the server-side encryption algorithm when encrypting the object using AWS-managed keys. For example use AES256. |
| | The version Id of the object to be stored or returned from the current operation |
| | A map of metadata to be stored with the object in S3. More details about metadata. |
6.6.2. Message headers set by the S3 producer
Header | Type | Description |
---|---|---|
| | The ETag value for the newly uploaded object. |
| | The optional version ID of the newly uploaded object. |
6.6.3. Message headers set by the S3 consumer
Header | Type | Description |
---|---|---|
| | The key under which this object is stored. |
| | The name of the bucket in which this object is contained. |
| | The hex encoded 128-bit MD5 digest of the associated object according to RFC 1864. This data is used as an integrity check to verify that the data received by the caller is the same data that was sent by Amazon S3. |
| | The value of the Last-Modified header, indicating the date and time at which Amazon S3 last recorded a modification to the associated object. |
| | The version ID of the associated Amazon S3 object if available. Version IDs are only assigned to objects when an object is uploaded to an Amazon S3 bucket that has object versioning enabled. |
| | The Content-Type HTTP header, which indicates the type of content stored in the associated object. The value of this header is a standard MIME type. |
| | The base64 encoded 128-bit MD5 digest of the associated object (content - not including headers) according to RFC 1864. This data is used as a message integrity check to verify that the data received by Amazon S3 is the same data that the caller sent. |
| | The Content-Length HTTP header indicating the size of the associated object in bytes. |
| | The optional Content-Encoding HTTP header specifying what content encodings have been applied to the object and what decoding mechanisms must be applied in order to obtain the media-type referenced by the Content-Type field. |
| | The optional Content-Disposition HTTP header, which specifies presentational information such as the recommended filename for the object to be saved as. |
| | The optional Cache-Control HTTP header which allows the user to specify caching behavior along the HTTP request/reply chain. |
| String | The server-side encryption algorithm when encrypting the object using AWS-managed keys. |
| | A map of metadata stored with the object in S3. More details about metadata. |
6.6.4. S3 Producer operations
Camel-AWS2-S3 component provides the following operations on the producer side:
- copyObject
- deleteObject
- listBuckets
- deleteBucket
- listObjects
- getObject (this will return an S3Object instance)
- getObjectRange (this will return an S3Object instance)
- createDownloadLink
If you don’t specify an operation explicitly the producer will do:
- a single file upload
- a multipart upload if the multiPartUpload option is enabled
6.6.5. Advanced AmazonS3 configuration
If your Camel Application is running behind a firewall or if you need to have more control over the S3Client instance configuration, you can create your own instance and refer to it in your Camel aws2-s3 component configuration:
from("aws2-s3://MyBucket?amazonS3Client=#client&delay=5000&maxMessagesPerPoll=5") .to("mock:result");
6.6.6. Use KMS with the S3 component
To use AWS KMS to encrypt/decrypt data by using AWS infrastructure you can use the options introduced in 2.21.x like in the following example
from("file:tmp/test?fileName=test.txt") .setHeader(S3Constants.KEY, constant("testFile")) .to("aws2-s3://mybucket?amazonS3Client=#client&useAwsKMS=true&awsKMSKeyId=3f0637ad-296a-3dfe-a796-e60654fb128c");
In this way you ask S3 to use the KMS key 3f0637ad-296a-3dfe-a796-e60654fb128c to encrypt the file test.txt. When you download this file, the decryption will be done directly before the download.
6.6.7. Static credentials vs Default Credential Provider
You can avoid the usage of explicit static credentials by specifying the useDefaultCredentialsProvider option and setting it to true. In this case the AWS SDK will search for credentials in the following order:
- Java system properties - aws.accessKeyId and aws.secretKey
- Environment variables - AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
- Web Identity Token from AWS STS.
- The shared credentials and config files.
- Amazon ECS container credentials - loaded from the Amazon ECS if the environment variable AWS_CONTAINER_CREDENTIALS_RELATIVE_URI is set.
- Amazon EC2 Instance profile credentials.
For more information about this, see the AWS credentials documentation.
6.6.8. S3 Producer Operation examples
- Single Upload: This operation will upload a file to S3 based on the body content
from("direct:start").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { exchange.getIn().setHeader(S3Constants.KEY, "camel.txt"); exchange.getIn().setBody("Camel rocks!"); } }) .to("aws2-s3://mycamelbucket?amazonS3Client=#amazonS3Client") .to("mock:result");
This operation will upload the file camel.txt with the content "Camel rocks!" in the mycamelbucket bucket
- Multipart Upload: This operation will perform a multipart upload of a file to S3 based on the body content
from("direct:start").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { exchange.getIn().setHeader(AWS2S3Constants.KEY, "empty.txt"); exchange.getIn().setBody(new File("src/empty.txt")); } }) .to("aws2-s3://mycamelbucket?amazonS3Client=#amazonS3Client&multiPartUpload=true&autoCreateBucket=true&partSize=1048576") .to("mock:result");
This operation will perform a multipart upload of the file empty.txt, based on the content of the file src/empty.txt, in the mycamelbucket bucket
- CopyObject: this operation copies an object from one bucket to a different one
from("direct:start").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { exchange.getIn().setHeader(S3Constants.BUCKET_DESTINATION_NAME, "camelDestinationBucket"); exchange.getIn().setHeader(S3Constants.KEY, "camelKey"); exchange.getIn().setHeader(S3Constants.DESTINATION_KEY, "camelDestinationKey"); } }) .to("aws2-s3://mycamelbucket?amazonS3Client=#amazonS3Client&operation=copyObject") .to("mock:result");
This operation will copy the object with the name expressed in the header camelDestinationKey to the camelDestinationBucket bucket, from the bucket mycamelbucket.
- DeleteObject: this operation deletes an object from a bucket
from("direct:start").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { exchange.getIn().setHeader(S3Constants.KEY, "camelKey"); } }) .to("aws2-s3://mycamelbucket?amazonS3Client=#amazonS3Client&operation=deleteObject") .to("mock:result");
This operation will delete the object camelKey from the bucket mycamelbucket.
- ListBuckets: this operation lists the buckets for this account in this region
from("direct:start") .to("aws2-s3://mycamelbucket?amazonS3Client=#amazonS3Client&operation=listBuckets") .to("mock:result");
This operation will list the buckets for this account
- DeleteBucket: this operation deletes the bucket specified as URI parameter or header
from("direct:start") .to("aws2-s3://mycamelbucket?amazonS3Client=#amazonS3Client&operation=deleteBucket") .to("mock:result");
This operation will delete the bucket mycamelbucket
- ListObjects: this operation lists the objects in a specific bucket
from("direct:start") .to("aws2-s3://mycamelbucket?amazonS3Client=#amazonS3Client&operation=listObjects") .to("mock:result");
This operation will list the objects in the mycamelbucket bucket
- GetObject: this operation gets a single object in a specific bucket
from("direct:start").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { exchange.getIn().setHeader(S3Constants.KEY, "camelKey"); } }) .to("aws2-s3://mycamelbucket?amazonS3Client=#amazonS3Client&operation=getObject") .to("mock:result");
This operation will return an S3Object instance related to the camelKey object in mycamelbucket bucket.
- GetObjectRange: this operation gets a single object range in a specific bucket
from("direct:start").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { exchange.getIn().setHeader(S3Constants.KEY, "camelKey"); exchange.getIn().setHeader(S3Constants.RANGE_START, "0"); exchange.getIn().setHeader(S3Constants.RANGE_END, "9"); } }) .to("aws2-s3://mycamelbucket?amazonS3Client=#amazonS3Client&operation=getObjectRange") .to("mock:result");
This operation will return an S3Object instance related to the camelKey object in the mycamelbucket bucket, containing the bytes from 0 to 9.
- CreateDownloadLink: this operation will return a download link through S3 Presigner
from("direct:start").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { exchange.getIn().setHeader(S3Constants.KEY, "camelKey"); } }) .to("aws2-s3://mycamelbucket?accessKey=xxx&secretKey=yyy®ion=region&operation=createDownloadLink") .to("mock:result");
This operation will return a download link url for the file camelKey in the bucket mycamelbucket in the given region
6.7. Streaming Upload mode
With the stream mode enabled users will be able to upload data to S3 without knowing ahead of time the dimension of the data, by leveraging multipart upload. The upload will be completed when the batchSize or the batchMessageNumber has been reached. There are two possible naming strategies:
progressive
With the progressive strategy each file will have a name composed of the keyName option and a progressive counter, plus the file extension (if any).
random
With the random strategy a UUID will be added after the keyName, plus the file extension (if any).
As an example:
from(kafka("topic1").brokers("localhost:9092")) .log("Kafka Message is: ${body}") .to(aws2S3("camel-bucket").streamingUploadMode(true).batchMessageNumber(25).namingStrategy(AWS2S3EndpointBuilderFactory.AWSS3NamingStrategyEnum.progressive).keyName("{{kafkaTopic1}}/{{kafkaTopic1}}.txt")); from(kafka("topic2").brokers("localhost:9092")) .log("Kafka Message is: ${body}") .to(aws2S3("camel-bucket").streamingUploadMode(true).batchMessageNumber(25).namingStrategy(AWS2S3EndpointBuilderFactory.AWSS3NamingStrategyEnum.progressive).keyName("{{kafkaTopic2}}/{{kafkaTopic2}}.txt"));
The default size for a batch is 1 MB, but you can adjust it according to your requirements.
When you stop your producer route, the producer will take care of flushing the remaining buffered messages and completing the upload.
In streaming upload mode you will be able to restart the producer from the point where it left off. Note that this feature is relevant only when using the progressive naming strategy.
By setting the restartingPolicy to lastPart, you will restart uploading files and contents from the last part number the producer left, as sketched below.
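A sketch of such a route (bucket, topic and key names are placeholders; it assumes the restartingPolicy builder method and the AWSS3RestartingPolicyEnum are available in the Endpoint DSL, as with the naming strategy above):
from(kafka("topic1").brokers("localhost:9092"))
    .to(aws2S3("camel-bucket").streamingUploadMode(true).batchMessageNumber(20)
        .namingStrategy(AWS2S3EndpointBuilderFactory.AWSS3NamingStrategyEnum.progressive)
        .restartingPolicy(AWS2S3EndpointBuilderFactory.AWSS3RestartingPolicyEnum.lastPart)
        .keyName("camel.txt"));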
Example
- Start the route with the progressive naming strategy, keyName equal to camel.txt, batchMessageNumber equal to 20, and restartingPolicy equal to lastPart.
- Send 70 messages.
- Stop the route.
On your S3 bucket you should now see 4 files:
- camel.txt
- camel-1.txt
- camel-2.txt
- camel-3.txt
The first three will have 20 messages, while the last one only 10.
- Restart the route.
- Send 25 messages.
- Stop the route.
- You’ll now have 2 other files in your bucket: camel-5.txt and camel-6.txt, the first with 20 messages and the second with 5 messages.
- Go ahead
This won’t be needed when using the random naming strategy.
Otherwise, you can specify the override restartingPolicy. In that case you’ll be able to override whatever you wrote before (for that particular keyName) on your bucket.
In streaming upload mode the only keyName that will be taken into account is the endpoint option. Using the header will throw an NPE and this is done by design. Setting the header would mean potentially changing the file name on each exchange, and this is against the aim of the streaming upload producer. The keyName needs to be fixed and static. The selected naming strategy will do the rest of the work.
Another possibility is specifying a streamingUploadTimeout together with the batchMessageNumber and batchSize options. With this option the user will be able to complete the upload of a file after a certain time has passed. In this way the upload completion will depend on three conditions: the timeout, the number of messages and the batch size.
As an example:
from(kafka("topic1").brokers("localhost:9092")) .log("Kafka Message is: ${body}") .to(aws2S3("camel-bucket").streamingUploadMode(true).batchMessageNumber(25).streamingUploadTimeout(10000).namingStrategy(AWS2S3EndpointBuilderFactory.AWSS3NamingStrategyEnum.progressive).keyName("{{kafkaTopic1}}/{{kafkaTopic1}}.txt"));
In this case the upload will be completed after 10 seconds.
6.8. Bucket Autocreation
With the option autoCreateBucket users can enable the autocreation of an S3 bucket in case it doesn’t exist. The default for this option is false. If set to false, any operation on a non-existent bucket in AWS won’t be successful and an error will be returned.
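For example (bucket name and client reference are placeholders), to fail fast instead of silently creating a missing bucket:
from("direct:listObjects")
    .to("aws2-s3://mycamelbucket?amazonS3Client=#amazonS3Client&autoCreateBucket=false&operation=listObjects")
    .to("mock:result");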
6.9. Moving stuff between a bucket and another bucket
Some users like to consume content from a bucket and move it to a different one without using the copyObject feature of this component. If this is the case for you, don’t forget to remove the bucketName header from the incoming exchange of the consumer, otherwise the file will always be overwritten on the same original bucket.
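A minimal sketch of this pattern (bucket names are placeholders; it assumes the AWS2S3Constants.BUCKET_NAME constant, which resolves to the bucketName header):
from("aws2-s3://mycamelbucket?amazonS3Client=#amazonS3Client")
    .removeHeader(AWS2S3Constants.BUCKET_NAME)
    .to("aws2-s3://myothercamelbucket?amazonS3Client=#amazonS3Client");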
6.10. MoveAfterRead consumer option
In addition to deleteAfterRead, another option has been added: moveAfterRead. With this option enabled the consumed object will be moved to a target destinationBucket instead of being only deleted. This will require specifying the destinationBucket option. As an example:
from("aws2-s3://mycamelbucket?amazonS3Client=#amazonS3Client&moveAfterRead=true&destinationBucket=myothercamelbucket") .to("mock:result");
In this case the objects consumed will be moved to myothercamelbucket bucket and deleted from the original one (because of deleteAfterRead set to true as default).
You have also the possibility of using a key prefix/suffix while moving the file to a different bucket. The options are destinationBucketPrefix and destinationBucketSuffix.
Taking the above example, you could do something like:
from("aws2-s3://mycamelbucket?amazonS3Client=#amazonS3Client&moveAfterRead=true&destinationBucket=myothercamelbucket&destinationBucketPrefix=RAW(pre-)&destinationBucketSuffix=RAW(-suff)") .to("mock:result");
In this case the objects consumed will be moved to myothercamelbucket bucket and deleted from the original one (because of deleteAfterRead set to true as default).
So if the file name is test, in the myothercamelbucket you should see a file called pre-test-suff.
6.11. Using customer key as encryption
We also introduced customer key support (an alternative to using KMS). The following code shows an example.
String key = UUID.randomUUID().toString();
byte[] secretKey = generateSecretKey();
String b64Key = Base64.getEncoder().encodeToString(secretKey);
String b64KeyMd5 = Md5Utils.md5AsBase64(secretKey);

String awsEndpoint = "aws2-s3://mycamel?autoCreateBucket=false&useCustomerKey=true&customerKeyId=RAW(" + b64Key
        + ")&customerKeyMD5=RAW(" + b64KeyMd5 + ")&customerAlgorithm=" + AES256.name();

from("direct:putObject")
    .setHeader(AWS2S3Constants.KEY, constant("test.txt"))
    .setBody(constant("Test"))
    .to(awsEndpoint);
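The generateSecretKey() helper used above is not provided by the component; one possible implementation, assuming a 256-bit AES key to match the AES256 customer algorithm, is sketched below:

import javax.crypto.KeyGenerator;

byte[] generateSecretKey() {
    try {
        // generate a 256-bit AES key to be used as the customer-provided encryption key
        KeyGenerator generator = KeyGenerator.getInstance("AES");
        generator.init(256);
        return generator.generateKey().getEncoded();
    } catch (java.security.NoSuchAlgorithmException e) {
        throw new IllegalStateException("AES key generator not available", e);
    }
}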
6.12. Using a POJO as body
Sometimes building an AWS request can be complex because of multiple options. We introduced the possibility to use a POJO as the body. In AWS S3 there are multiple operations you can submit; as an example, for a ListObjects request you can do something like:
from("direct:aws2-s3") .setBody(ListObjectsRequest.builder().bucket(bucketName).build()) .to("aws2-s3://test?amazonS3Client=#amazonS3Client&operation=listObjects&pojoRequest=true")
In this way you’ll pass the request directly without the need of passing headers and options specifically related to this operation.
6.13. Create S3 client and add component to registry
Sometimes you may want to perform some advanced configuration using AWS2S3Configuration, which also allows you to set the S3 client. You can create and set the S3 client in the component configuration as shown in the following example:
String awsBucketAccessKey = "your_access_key";
String awsBucketSecretKey = "your_secret_key";

S3Client s3Client = S3Client.builder()
        .credentialsProvider(StaticCredentialsProvider.create(
                AwsBasicCredentials.create(awsBucketAccessKey, awsBucketSecretKey)))
        .region(Region.US_EAST_1)
        .build();

AWS2S3Configuration configuration = new AWS2S3Configuration();
configuration.setAmazonS3Client(s3Client);
configuration.setAutoDiscoverClient(true);
configuration.setBucketName("s3bucket2020");
configuration.setRegion("us-east-1");
Now you can configure the S3 component (using the configuration object created above) and add it to the registry in the configure method before initialization of routes.
AWS2S3Component s3Component = new AWS2S3Component(getContext());
s3Component.setConfiguration(configuration);
s3Component.setLazyStartProducer(true);
camelContext.addComponent("aws2-s3", s3Component);
Now your component will be used for all the operations implemented in camel routes.
6.14. Dependencies
Maven users will need to add the following dependency to their pom.xml.
pom.xml
<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-aws2-s3</artifactId> <version>${camel-version}</version> </dependency>
where ${camel-version} must be replaced by the actual version of Camel.
6.15. Spring Boot Auto-Configuration
When using aws2-s3 with Spring Boot make sure to use the following Maven dependency to have support for auto configuration:
<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-aws2-s3-starter</artifactId> </dependency>
The component supports 51 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
camel.component.aws2-s3.access-key | Amazon AWS Access Key. | String | |
camel.component.aws2-s3.amazon-s3-client | Reference to a com.amazonaws.services.s3.AmazonS3 in the registry. The option is a software.amazon.awssdk.services.s3.S3Client type. | S3Client | |
camel.component.aws2-s3.amazon-s3-presigner | An S3 Presigner for Request, used mainly in createDownloadLink operation. The option is a software.amazon.awssdk.services.s3.presigner.S3Presigner type. | S3Presigner | |
camel.component.aws2-s3.auto-create-bucket | Setting the autocreation of the S3 bucket bucketName. This will apply also in case of moveAfterRead option enabled and it will create the destinationBucket if it doesn’t exist already. | false | Boolean |
camel.component.aws2-s3.autoclose-body | If this option is true and includeBody is false, then the S3Object.close() method will be called on exchange completion. This option is strongly related to includeBody option. In case of setting includeBody to false and autocloseBody to false, it will be up to the caller to close the S3Object stream. Setting autocloseBody to true, will close the S3Object stream automatically. | true | Boolean |
camel.component.aws2-s3.autowired-enabled | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | Boolean |
camel.component.aws2-s3.aws-k-m-s-key-id | Define the id of KMS key to use in case KMS is enabled. | String | |
camel.component.aws2-s3.batch-message-number | The number of messages composing a batch in streaming upload mode. | 10 | Integer |
camel.component.aws2-s3.batch-size | The batch size (in bytes) in streaming upload mode. | 1000000 | Integer |
camel.component.aws2-s3.bridge-error-handler | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | Boolean |
camel.component.aws2-s3.configuration | The component configuration. The option is a org.apache.camel.component.aws2.s3.AWS2S3Configuration type. | AWS2S3Configuration | |
camel.component.aws2-s3.customer-algorithm | Define the customer algorithm to use in case CustomerKey is enabled. | String | |
camel.component.aws2-s3.customer-key-id | Define the id of Customer key to use in case CustomerKey is enabled. | String | |
camel.component.aws2-s3.customer-key-m-d5 | Define the MD5 of Customer key to use in case CustomerKey is enabled. | String | |
camel.component.aws2-s3.delete-after-read | Delete objects from S3 after they have been retrieved. The delete is only performed if the Exchange is committed. If a rollback occurs, the object is not deleted. If this option is false, then the same objects will be retrieved over and over again on the polls. Therefore you need to use the Idempotent Consumer EIP in the route to filter out duplicates. You can filter using the AWS2S3Constants#BUCKET_NAME and AWS2S3Constants#KEY headers, or only the AWS2S3Constants#KEY header. | true | Boolean |
camel.component.aws2-s3.delete-after-write | Delete file object after the S3 file has been uploaded. | false | Boolean |
camel.component.aws2-s3.delimiter | The delimiter which is used in the com.amazonaws.services.s3.model.ListObjectsRequest to only consume objects we are interested in. | String | |
camel.component.aws2-s3.destination-bucket | Define the destination bucket where an object must be moved when moveAfterRead is set to true. | String | |
camel.component.aws2-s3.destination-bucket-prefix | Define the destination bucket prefix to use when an object must be moved and moveAfterRead is set to true. | String | |
camel.component.aws2-s3.destination-bucket-suffix | Define the destination bucket suffix to use when an object must be moved and moveAfterRead is set to true. | String | |
camel.component.aws2-s3.done-file-name | If provided, Camel will only consume files if a done file exists. | String | |
camel.component.aws2-s3.enabled | Whether to enable auto configuration of the aws2-s3 component. This is enabled by default. | Boolean | |
camel.component.aws2-s3.file-name | To get the object from the bucket with the given file name. | String | |
camel.component.aws2-s3.ignore-body | If it is true, the S3 Object Body will be ignored completely, if it is set to false the S3 Object will be put in the body. Setting this to true, will override any behavior defined by includeBody option. | false | Boolean |
camel.component.aws2-s3.include-body | If it is true, the S3Object exchange will be consumed and put into the body and closed. If false the S3Object stream will be put raw into the body and the headers will be set with the S3 object metadata. This option is strongly related to autocloseBody option. In case of setting includeBody to true because the S3Object stream will be consumed then it will also be closed, while in case of includeBody false then it will be up to the caller to close the S3Object stream. However setting autocloseBody to true when includeBody is false it will schedule to close the S3Object stream automatically on exchange completion. | true | Boolean |
camel.component.aws2-s3.include-folders | If it is true, the folders/directories will be consumed. If it is false, they will be ignored, and Exchanges will not be created for those. | true | Boolean |
camel.component.aws2-s3.key-name | Setting the key name for an element in the bucket through endpoint parameter. | String | |
camel.component.aws2-s3.lazy-start-producer | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | Boolean |
camel.component.aws2-s3.move-after-read | Move objects from S3 bucket to a different bucket after they have been retrieved. To accomplish the operation the destinationBucket option must be set. The copy bucket operation is only performed if the Exchange is committed. If a rollback occurs, the object is not moved. | false | Boolean |
camel.component.aws2-s3.multi-part-upload | If it is true, camel will upload the file with multi part format, the part size is decided by the option of partSize. | false | Boolean |
camel.component.aws2-s3.naming-strategy | The naming strategy to use in streaming upload mode. | AWSS3NamingStrategyEnum | |
camel.component.aws2-s3.operation | The operation to do in case the user doesn’t want to do only an upload. | AWS2S3Operations |
camel.component.aws2-s3.override-endpoint | Set the need for overidding the endpoint. This option needs to be used in combination with uriEndpointOverride option. | false | Boolean |
camel.component.aws2-s3.part-size | Setup the partSize which is used in multi part upload, the default size is 25M. | 26214400 | Long |
camel.component.aws2-s3.pojo-request | If we want to use a POJO request as body or not. | false | Boolean |
camel.component.aws2-s3.policy | The policy for this queue to set in the com.amazonaws.services.s3.AmazonS3#setBucketPolicy() method. | String | |
camel.component.aws2-s3.prefix | The prefix which is used in the com.amazonaws.services.s3.model.ListObjectsRequest to only consume objects we are interested in. | String | |
camel.component.aws2-s3.proxy-host | To define a proxy host when instantiating the S3 client. | String | |
camel.component.aws2-s3.proxy-port | Specify a proxy port to be used inside the client definition. | Integer | |
camel.component.aws2-s3.proxy-protocol | To define a proxy protocol when instantiating the S3 client. | Protocol | |
camel.component.aws2-s3.region | The region in which S3 client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example ap-east-1) You’ll need to use the name Region.EU_WEST_1.id(). | String | |
camel.component.aws2-s3.restarting-policy | The restarting policy to use in streaming upload mode. | AWSS3RestartingPolicyEnum | |
camel.component.aws2-s3.secret-key | Amazon AWS Secret Key. | String | |
camel.component.aws2-s3.storage-class | The storage class to set in the com.amazonaws.services.s3.model.PutObjectRequest request. | String | |
camel.component.aws2-s3.streaming-upload-mode | When stream mode is true the upload to bucket will be done in streaming. | false | Boolean |
camel.component.aws2-s3.streaming-upload-timeout | When streaming upload mode is true, this option sets the timeout to complete the upload. | Long |
camel.component.aws2-s3.trust-all-certificates | If we want to trust all certificates in case of overriding the endpoint. | false | Boolean |
camel.component.aws2-s3.uri-endpoint-override | Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option. | String | |
camel.component.aws2-s3.use-aws-k-m-s | Define if KMS must be used or not. | false | Boolean |
camel.component.aws2-s3.use-customer-key | Define if Customer Key must be used or not. | false | Boolean |
camel.component.aws2-s3.use-default-credentials-provider | Set whether the S3 client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in. | false | Boolean |
Chapter 7. AWS Simple Notification System (SNS)
Only producer is supported
The AWS2 SNS component allows messages to be sent to an Amazon Simple Notification Topic. The implementation of the Amazon API is provided by the AWS SDK.
Prerequisites
You must have a valid Amazon Web Services developer account, and be signed up to use Amazon SNS. More information is available at Amazon SNS.
7.1. URI Format
aws2-sns://topicNameOrArn[?options]
The topic will be created if it doesn’t already exist. You can append query options to the URI in the following format: ?options=value&option2=value&…
7.2. URI Options
7.2.1. Configuring Options
Camel components are configured on two separate levels:
- component level
- endpoint level
7.2.1.1. Configuring Component Options
The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth.
Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all.
Configuring components can be done with the Component DSL, in a configuration file (application.properties|yaml), or directly with Java code.
7.2.1.2. Configuring Endpoint Options
Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both.
Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints.
A good practice when configuring options is to use Property Placeholders, which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse.
The following two sections lists all the options, firstly for the component followed by the endpoint.
7.3. Component Options
The AWS Simple Notification System (SNS) component supports 24 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
amazonSNSClient (producer) | Autowired To use the AmazonSNS as the client. | SnsClient | |
autoCreateTopic (producer) | Setting the autocreation of the topic. | false | boolean |
configuration (producer) | Component configuration. | Sns2Configuration | |
kmsMasterKeyId (producer) | The ID of an AWS-managed customer master key (CMK) for Amazon SNS or a custom CMK. | String | |
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
messageDeduplicationIdStrategy (producer) | Only for FIFO Topic. Strategy for setting the messageDeduplicationId on the message. Can be one of the following options: useExchangeId, useContentBasedDeduplication. For the useContentBasedDeduplication option, no messageDeduplicationId will be set on the message. Enum values: useExchangeId, useContentBasedDeduplication | useExchangeId | String |
messageGroupIdStrategy (producer) | Only for FIFO Topic. Strategy for setting the messageGroupId on the message. Can be one of the following options: useConstant, useExchangeId, usePropertyValue. For the usePropertyValue option, the value of property CamelAwsMessageGroupId will be used. Enum values: useConstant, useExchangeId, usePropertyValue | String | |
messageStructure (producer) | The message structure to use such as json. | String | |
overrideEndpoint (producer) | Set the need for overidding the endpoint. This option needs to be used in combination with uriEndpointOverride option. | false | boolean |
policy (producer) | The policy for this topic. Is loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems. | String | |
proxyHost (producer) | To define a proxy host when instantiating the SNS client. | String | |
proxyPort (producer) | To define a proxy port when instantiating the SNS client. | Integer | |
proxyProtocol (producer) | To define a proxy protocol when instantiating the SNS client. Enum values: HTTP, HTTPS | HTTPS | Protocol |
queueUrl (producer) | The queueUrl to subscribe to. | String | |
region (producer) | The region in which SNS client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example ap-east-1) You’ll need to use the name Region.EU_WEST_1.id(). | String | |
serverSideEncryptionEnabled (producer) | Define if Server Side Encryption is enabled or not on the topic. | false | boolean |
subject (producer) | The subject which is used if the message header 'CamelAwsSnsSubject' is not present. | String | |
subscribeSNStoSQS (producer) | Define if the subscription between SNS Topic and SQS must be done or not. | false | boolean |
trustAllCertificates (producer) | If we want to trust all certificates in case of overriding the endpoint. | false | boolean |
uriEndpointOverride (producer) | Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option. | String | |
useDefaultCredentialsProvider (producer) | Set whether the SNS client should expect to load credentials on an AWS infra instance or to expect static credentials to be passed in. | false | boolean |
autowiredEnabled (advanced) | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | boolean |
accessKey (security) | Amazon AWS Access Key. | String | |
secretKey (security) | Amazon AWS Secret Key. | String |
7.4. Endpoint Options
The AWS Simple Notification System (SNS) endpoint is configured using URI syntax:
aws2-sns:topicNameOrArn
with the following path and query parameters:
7.4.1. Path Parameters (1 parameters)
Name | Description | Default | Type |
---|---|---|---|
topicNameOrArn (producer) | Required Topic name or ARN. | String |
7.4.2. Query Parameters (23 parameters)
Name | Description | Default | Type |
---|---|---|---|
amazonSNSClient (producer) | Autowired To use the AmazonSNS as the client. | SnsClient | |
autoCreateTopic (producer) | Setting the autocreation of the topic. | false | boolean |
headerFilterStrategy (producer) | To use a custom HeaderFilterStrategy to map headers to/from Camel. | HeaderFilterStrategy | |
kmsMasterKeyId (producer) | The ID of an AWS-managed customer master key (CMK) for Amazon SNS or a custom CMK. | String | |
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
messageDeduplicationIdStrategy (producer) | Only for FIFO Topic. Strategy for setting the messageDeduplicationId on the message. Can be one of the following options: useExchangeId, useContentBasedDeduplication. For the useContentBasedDeduplication option, no messageDeduplicationId will be set on the message. Enum values: useExchangeId, useContentBasedDeduplication | useExchangeId | String |
messageGroupIdStrategy (producer) | Only for FIFO Topic. Strategy for setting the messageGroupId on the message. Can be one of the following options: useConstant, useExchangeId, usePropertyValue. For the usePropertyValue option, the value of property CamelAwsMessageGroupId will be used. Enum values: useConstant, useExchangeId, usePropertyValue | String | |
messageStructure (producer) | The message structure to use such as json. | String | |
overrideEndpoint (producer) | Set the need for overidding the endpoint. This option needs to be used in combination with uriEndpointOverride option. | false | boolean |
policy (producer) | The policy for this topic. Is loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems. | String | |
proxyHost (producer) | To define a proxy host when instantiating the SNS client. | String | |
proxyPort (producer) | To define a proxy port when instantiating the SNS client. | Integer | |
proxyProtocol (producer) | To define a proxy protocol when instantiating the SNS client. Enum values: HTTP, HTTPS | HTTPS | Protocol |
queueUrl (producer) | The queueUrl to subscribe to. | String | |
region (producer) | The region in which SNS client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example ap-east-1) You’ll need to use the name Region.EU_WEST_1.id(). | String | |
serverSideEncryptionEnabled (producer) | Define if Server Side Encryption is enabled or not on the topic. | false | boolean |
subject (producer) | The subject which is used if the message header 'CamelAwsSnsSubject' is not present. | String | |
subscribeSNStoSQS (producer) | Define if the subscription between SNS Topic and SQS must be done or not. | false | boolean |
trustAllCertificates (producer) | If we want to trust all certificates in case of overriding the endpoint. | false | boolean |
uriEndpointOverride (producer) | Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option. | String | |
useDefaultCredentialsProvider (producer) | Set whether the SNS client should expect to load credentials on an AWS infra instance or to expect static credentials to be passed in. | false | boolean |
accessKey (security) | Amazon AWS Access Key. | String | |
secretKey (security) | Amazon AWS Secret Key. | String |
Required SNS component options
You have to provide the amazonSNSClient in the Registry or your accessKey and secretKey to access Amazon’s SNS.
7.5. Usage
7.5.1. Static credentials vs Default Credential Provider
You can avoid using explicit static credentials by setting the useDefaultCredentialsProvider option to true. In that case the SNS client will look for credentials in the following order:
- Java system properties - aws.accessKeyId and aws.secretKey
- Environment variables - AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
- Web Identity Token from AWS STS.
- The shared credentials and config files.
- Amazon ECS container credentials - loaded from the Amazon ECS if the environment variable AWS_CONTAINER_CREDENTIALS_RELATIVE_URI is set.
- Amazon EC2 Instance profile credentials.
For more information about this you can look at AWS credentials documentation.
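As a sketch, a producer endpoint relying on the default credentials provider chain could be configured like this (the topic name and region are illustrative):

from("direct:start")
    // no accessKey/secretKey on the endpoint; credentials come from the default provider chain
    .to("aws2-sns://camel-topic?useDefaultCredentialsProvider=true&region=eu-west-1");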
7.5.2. Message headers evaluated by the SNS producer
Header | Type | Description |
---|---|---|
CamelAwsSnsSubject | String | The Amazon SNS message subject. If not set, the subject from the configuration is used. |
7.5.3. Message headers set by the SNS producer
Header | Type | Description |
---|---|---|
CamelAwsSnsMessageId | String | The Amazon SNS message ID. |
7.5.4. Advanced AmazonSNS configuration
If you need more control over the SnsClient
instance configuration you can create your own instance and refer to it from the URI:
from("direct:start") .to("aws2-sns://MyTopic?amazonSNSClient=#client");
The #client refers to an SnsClient in the Registry.
7.5.5. Create a subscription between an AWS SNS Topic and an AWS SQS Queue
You can create a subscription of an SQS Queue to an SNS Topic in this way:
from("direct:start") .to("aws2-sns://test-camel-sns1?amazonSNSClient=#amazonSNSClient&subscribeSNStoSQS=true&queueUrl=https://sqs.eu-central-1.amazonaws.com/780410022472/test-camel");
The #amazonSNSClient refers to an SnsClient in the Registry. By setting subscribeSNStoSQS to true and providing the queueUrl of an existing SQS Queue, you’ll be able to subscribe your SQS Queue to your SNS Topic.
At this point you can consume messages coming from the SNS Topic through your SQS Queue:
from("aws2-sqs://test-camel?amazonSQSClient=#amazonSQSClient&delay=50&maxMessagesPerPoll=5") .to(...);
7.6. Topic Autocreation
With the option autoCreateTopic users can enable or disable the automatic creation of an SNS Topic in case it doesn’t exist. The default for this option is false. If set to false, any operation on a non-existent topic in AWS won’t be successful and an error will be returned.
7.7. SNS FIFO
SNS FIFO topics are supported. When creating the SQS queue that you will subscribe to the SNS topic, there is an important point to remember: you’ll need to make it possible for the SNS Topic to send messages to the SQS Queue.
Example
Suppose you created an SNS FIFO Topic called Order.fifo and an SQS Queue called QueueSub.fifo. In the access Policy of QueueSub.fifo you should submit something like this:
{ "Version": "2008-10-17", "Id": "__default_policy_ID", "Statement": [ { "Sid": "__owner_statement", "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::780560123482:root" }, "Action": "SQS:*", "Resource": "arn:aws:sqs:eu-west-1:780560123482:QueueSub.fifo" }, { "Effect": "Allow", "Principal": { "Service": "sns.amazonaws.com" }, "Action": "SQS:SendMessage", "Resource": "arn:aws:sqs:eu-west-1:780560123482:QueueSub.fifo", "Condition": { "ArnLike": { "aws:SourceArn": "arn:aws:sns:eu-west-1:780410022472:Order.fifo" } } } ] }
This is a critical step to make the subscription work correctly.
7.7.1. SNS Fifo Topic Message group Id Strategy and message Deduplication Id Strategy
When sending something to the FIFO topic you’ll always need to set up a message group ID strategy. If content-based message deduplication has been enabled on the SNS FIFO topic, there is no need to set a message deduplication ID strategy; otherwise, you’ll have to set it.
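For example, a producer sending to the Order.fifo topic could configure both strategies on the endpoint as sketched below (the registered client bean name is an assumption):

from("direct:start")
    // group all messages under one constant group id and derive the deduplication id from the exchange id
    .to("aws2-sns://Order.fifo?amazonSNSClient=#amazonSNSClient&messageGroupIdStrategy=useConstant&messageDeduplicationIdStrategy=useExchangeId");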
7.8. Examples
7.8.1. Producer Examples
Sending to a topic
from("direct:start") .to("aws2-sns://camel-topic?subject=The+subject+message&autoCreateTopic=true");
7.9. Dependencies
Maven users will need to add the following dependency to their pom.xml.
pom.xml
<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-aws2-sns</artifactId> <version>${camel-version}</version> </dependency>
where ${camel-version} must be replaced by the actual version of Camel.
7.10. Spring Boot Auto-Configuration
When using aws2-sns with Spring Boot make sure to use the following Maven dependency to have support for auto configuration:
<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-aws2-sns-starter</artifactId> </dependency>
The component supports 25 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
camel.component.aws2-sns.access-key | Amazon AWS Access Key. | String | |
camel.component.aws2-sns.amazon-s-n-s-client | To use the AmazonSNS as the client. The option is a software.amazon.awssdk.services.sns.SnsClient type. | SnsClient | |
camel.component.aws2-sns.auto-create-topic | Setting the autocreation of the topic. | false | Boolean |
camel.component.aws2-sns.autowired-enabled | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | Boolean |
camel.component.aws2-sns.configuration | Component configuration. The option is a org.apache.camel.component.aws2.sns.Sns2Configuration type. | Sns2Configuration | |
camel.component.aws2-sns.enabled | Whether to enable auto configuration of the aws2-sns component. This is enabled by default. | Boolean | |
camel.component.aws2-sns.kms-master-key-id | The ID of an AWS-managed customer master key (CMK) for Amazon SNS or a custom CMK. | String | |
camel.component.aws2-sns.lazy-start-producer | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | Boolean |
camel.component.aws2-sns.message-deduplication-id-strategy | Only for FIFO Topic. Strategy for setting the messageDeduplicationId on the message. Can be one of the following options: useExchangeId, useContentBasedDeduplication. For the useContentBasedDeduplication option, no messageDeduplicationId will be set on the message. | useExchangeId | String |
camel.component.aws2-sns.message-group-id-strategy | Only for FIFO Topic. Strategy for setting the messageGroupId on the message. Can be one of the following options: useConstant, useExchangeId, usePropertyValue. For the usePropertyValue option, the value of property CamelAwsMessageGroupId will be used. | String | |
camel.component.aws2-sns.message-structure | The message structure to use such as json. | String | |
camel.component.aws2-sns.override-endpoint | Set the need for overidding the endpoint. This option needs to be used in combination with uriEndpointOverride option. | false | Boolean |
camel.component.aws2-sns.policy | The policy for this topic. Is loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems. | String | |
camel.component.aws2-sns.proxy-host | To define a proxy host when instantiating the SNS client. | String | |
camel.component.aws2-sns.proxy-port | To define a proxy port when instantiating the SNS client. | Integer | |
camel.component.aws2-sns.proxy-protocol | To define a proxy protocol when instantiating the SNS client. | Protocol | |
camel.component.aws2-sns.queue-url | The queueUrl to subscribe to. | String | |
camel.component.aws2-sns.region | The region in which SNS client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example ap-east-1) You’ll need to use the name Region.EU_WEST_1.id(). | String | |
camel.component.aws2-sns.secret-key | Amazon AWS Secret Key. | String | |
camel.component.aws2-sns.server-side-encryption-enabled | Define if Server Side Encryption is enabled or not on the topic. | false | Boolean |
camel.component.aws2-sns.subject | The subject which is used if the message header 'CamelAwsSnsSubject' is not present. | String | |
camel.component.aws2-sns.subscribe-s-n-sto-s-q-s | Define if the subscription between SNS Topic and SQS must be done or not. | false | Boolean |
camel.component.aws2-sns.trust-all-certificates | If we want to trust all certificates in case of overriding the endpoint. | false | Boolean |
camel.component.aws2-sns.uri-endpoint-override | Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option. | String | |
camel.component.aws2-sns.use-default-credentials-provider | Set whether the SNS client should expect to load credentials on an AWS infra instance or to expect static credentials to be passed in. | false | Boolean |
Chapter 8. AWS Simple Queue Service (SQS)
Both producer and consumer are supported
The AWS2 SQS component supports sending and receiving messages to Amazon’s SQS service.
Prerequisites
You must have a valid Amazon Web Services developer account, and be signed up to use Amazon SQS. More information is available at Amazon SQS.
8.1. URI Format
aws2-sqs://queueNameOrArn[?options]
The queue will be created if it doesn’t already exist. You can append query options to the URI in the following format:
?options=value&option2=value&…
8.2. Configuring Options
Camel components are configured on two separate levels:
- component level
- endpoint level
8.2.1. Configuring Component Options
The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth.
Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all.
Configuring components can be done with the Component DSL, in a configuration file (application.properties|yaml), or directly with Java code.
8.2.2. Configuring Endpoint Options
Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both.
Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints.
A good practice when configuring options is to use Property Placeholders, which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse.
The following two sections lists all the options, firstly for the component followed by the endpoint.
8.3. Component Options
The AWS Simple Queue Service (SQS) component supports 43 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
amazonAWSHost (common) | The hostname of the Amazon AWS cloud. | amazonaws.com | String |
amazonSQSClient (common) | Autowired To use the AmazonSQS as client. | SqsClient | |
autoCreateQueue (common) | Setting the autocreation of the queue. | false | boolean |
configuration (common) | The AWS SQS default configuration. | Sqs2Configuration | |
overrideEndpoint (common) | Set the need for overidding the endpoint. This option needs to be used in combination with uriEndpointOverride option. | false | boolean |
protocol (common) | The underlying protocol used to communicate with SQS. | https | String |
proxyProtocol (common) | To define a proxy protocol when instantiating the SQS client. Enum values: HTTP, HTTPS | HTTPS | Protocol |
queueOwnerAWSAccountId (common) | Specify the queue owner aws account id when you need to connect the queue with different account owner. | String | |
region (common) | The region in which SQS client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example ap-east-1) You’ll need to use the name Region.EU_WEST_1.id(). | String | |
trustAllCertificates (common) | If we want to trust all certificates in case of overriding the endpoint. | false | boolean |
uriEndpointOverride (common) | Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option. | String | |
useDefaultCredentialsProvider (common) | Set whether the SQS client should expect to load credentials on an AWS infra instance or to expect static credentials to be passed in. | false | boolean |
attributeNames (consumer) | A list of attribute names to receive when consuming. Multiple names can be separated by comma. | String | |
bridgeErrorHandler (consumer) | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | boolean |
concurrentConsumers (consumer) | Allows you to use multiple threads to poll the sqs queue to increase throughput. | 1 | int |
defaultVisibilityTimeout (consumer) | The default visibility timeout (in seconds). | Integer | |
deleteAfterRead (consumer) | Delete message from SQS after it has been read. | true | boolean |
deleteIfFiltered (consumer) | Whether or not to send the DeleteMessage to the SQS queue if the exchange has property with key Sqs2Constants#SQS_DELETE_FILTERED (CamelAwsSqsDeleteFiltered) set to true. | true | boolean |
extendMessageVisibility (consumer) | If enabled then a scheduled background task will keep extending the message visibility on SQS. This is needed if it takes a long time to process the message. If set to true defaultVisibilityTimeout must be set. | false | boolean |
kmsDataKeyReusePeriodSeconds (consumer) | The length of time, in seconds, for which Amazon SQS can reuse a data key to encrypt or decrypt messages before calling AWS KMS again. An integer representing seconds, between 60 seconds (1 minute) and 86,400 seconds (24 hours). Default: 300 (5 minutes). | Integer | |
kmsMasterKeyId (consumer) | The ID of an AWS-managed customer master key (CMK) for Amazon SQS or a custom CMK. | String | |
messageAttributeNames (consumer) | A list of message attribute names to receive when consuming. Multiple names can be separated by comma. | String | |
serverSideEncryptionEnabled (consumer) | Define if Server Side Encryption is enabled or not on the queue. | false | boolean |
visibilityTimeout (consumer) | The duration (in seconds) that the received messages are hidden from subsequent retrieve requests after being retrieved by a ReceiveMessage request to set in the com.amazonaws.services.sqs.model.SetQueueAttributesRequest. This only make sense if its different from defaultVisibilityTimeout. It changes the queue visibility timeout attribute permanently. | Integer | |
waitTimeSeconds (consumer) | Duration in seconds (0 to 20) that the ReceiveMessage action call will wait until a message is in the queue to include in the response. | Integer | |
batchSeparator (producer) | Set the separator when passing a String to send batch message operation. | , | String |
delaySeconds (producer) | Delay sending messages for a number of seconds. | Integer | |
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
messageDeduplicationIdStrategy (producer) | Only for FIFO queues. Strategy for setting the messageDeduplicationId on the message. Can be one of the following options: useExchangeId, useContentBasedDeduplication. For the useContentBasedDeduplication option, no messageDeduplicationId will be set on the message. Enum values: useExchangeId, useContentBasedDeduplication | useExchangeId | String |
messageGroupIdStrategy (producer) | Only for FIFO queues. Strategy for setting the messageGroupId on the message. Can be one of the following options: useConstant, useExchangeId, usePropertyValue. For the usePropertyValue option, the value of property CamelAwsMessageGroupId will be used. Enum values: useConstant, useExchangeId, usePropertyValue | String | |
operation (producer) | The operation to do in case the user doesn’t want to send only a message. | Sqs2Operations |
autowiredEnabled (advanced) | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | boolean |
delayQueue (advanced) | Define if you want to apply delaySeconds option to the queue or on single messages. | false | boolean |
queueUrl (advanced) | To define the queueUrl explicitly. All other parameters, which would influence the queueUrl, are ignored. This parameter is intended to be used, to connect to a mock implementation of SQS, for testing purposes. | String | |
proxyHost (proxy) | To define a proxy host when instantiating the SQS client. | String | |
proxyPort (proxy) | To define a proxy port when instantiating the SQS client. | Integer | |
maximumMessageSize (queue) | The maximumMessageSize (in bytes) an SQS message can contain for this queue. | Integer | |
messageRetentionPeriod (queue) | The messageRetentionPeriod (in seconds) a message will be retained by SQS for this queue. | Integer | |
policy (queue) | The policy for this queue. It can be loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems. | String | |
receiveMessageWaitTimeSeconds (queue) | If you do not specify WaitTimeSeconds in the request, the queue attribute ReceiveMessageWaitTimeSeconds is used to determine how long to wait. | Integer | |
redrivePolicy (queue) | Specify the policy that send message to DeadLetter queue. See detail at Amazon docs. | String | |
accessKey (security) | Amazon AWS Access Key. | String | |
secretKey (security) | Amazon AWS Secret Key. | String |
8.4. Endpoint Options
The AWS Simple Queue Service (SQS) endpoint is configured using URI syntax:
aws2-sqs:queueNameOrArn
with the following path and query parameters:
8.4.1. Path Parameters (1 parameters)
Name | Description | Default | Type |
---|---|---|---|
queueNameOrArn (common) | Required Queue name or ARN. | String |
8.4.2. Query Parameters (61 parameters)
Name | Description | Default | Type |
---|---|---|---|
amazonAWSHost (common) | The hostname of the Amazon AWS cloud. | amazonaws.com | String |
amazonSQSClient (common) | Autowired To use the AmazonSQS as client. | SqsClient | |
autoCreateQueue (common) | Setting the autocreation of the queue. | false | boolean |
headerFilterStrategy (common) | To use a custom HeaderFilterStrategy to map headers to/from Camel. | HeaderFilterStrategy | |
overrideEndpoint (common) | Set the need for overidding the endpoint. This option needs to be used in combination with uriEndpointOverride option. | false | boolean |
protocol (common) | The underlying protocol used to communicate with SQS. | https | String |
proxyProtocol (common) | To define a proxy protocol when instantiating the SQS client. Enum values:
| HTTPS | Protocol |
queueOwnerAWSAccountId (common) | Specify the queue owner aws account id when you need to connect the queue with different account owner. | String | |
region (common) | The region in which SQS client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example ap-east-1) You’ll need to use the name Region.EU_WEST_1.id(). | String | |
trustAllCertificates (common) | If we want to trust all certificates in case of overriding the endpoint. | false | boolean |
uriEndpointOverride (common) | Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option. | String | |
useDefaultCredentialsProvider (common) | Set whether the SQS client should expect to load credentials on an AWS infra instance or to expect static credentials to be passed in. | false | boolean |
attributeNames (consumer) | A list of attribute names to receive when consuming. Multiple names can be separated by comma. | String | |
bridgeErrorHandler (consumer) | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | boolean |
concurrentConsumers (consumer) | Allows you to use multiple threads to poll the sqs queue to increase throughput. | 1 | int |
defaultVisibilityTimeout (consumer) | The default visibility timeout (in seconds). | Integer | |
deleteAfterRead (consumer) | Delete message from SQS after it has been read. | true | boolean |
deleteIfFiltered (consumer) | Whether or not to send the DeleteMessage to the SQS queue if the exchange has property with key Sqs2Constants#SQS_DELETE_FILTERED (CamelAwsSqsDeleteFiltered) set to true. | true | boolean |
extendMessageVisibility (consumer) | If enabled then a scheduled background task will keep extending the message visibility on SQS. This is needed if it takes a long time to process the message. If set to true defaultVisibilityTimeout must be set. See details at Amazon docs. | false | boolean |
kmsDataKeyReusePeriodSeconds (consumer) | The length of time, in seconds, for which Amazon SQS can reuse a data key to encrypt or decrypt messages before calling AWS KMS again. An integer representing seconds, between 60 seconds (1 minute) and 86,400 seconds (24 hours). Default: 300 (5 minutes). | Integer | |
kmsMasterKeyId (consumer) | The ID of an AWS-managed customer master key (CMK) for Amazon SQS or a custom CMK. | String | |
maxMessagesPerPoll (consumer) | Gets the maximum number of messages as a limit to poll at each polling. Is default unlimited, but use 0 or negative number to disable it as unlimited. | int | |
messageAttributeNames (consumer) | A list of message attribute names to receive when consuming. Multiple names can be separated by comma. | String | |
sendEmptyMessageWhenIdle (consumer) | If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead. | false | boolean |
serverSideEncryptionEnabled (consumer) | Define if Server Side Encryption is enabled or not on the queue. | false | boolean |
visibilityTimeout (consumer) | The duration (in seconds) that the received messages are hidden from subsequent retrieve requests after being retrieved by a ReceiveMessage request to set in the com.amazonaws.services.sqs.model.SetQueueAttributesRequest. This only make sense if its different from defaultVisibilityTimeout. It changes the queue visibility timeout attribute permanently. | Integer | |
waitTimeSeconds (consumer) | Duration in seconds (0 to 20) that the ReceiveMessage action call will wait until a message is in the queue to include in the response. | Integer | |
exceptionHandler (consumer (advanced)) | To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. | ExceptionHandler | |
exchangePattern (consumer (advanced)) | Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly, InOut, InOptionalOut | ExchangePattern | |
pollStrategy (consumer (advanced)) | A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel. | PollingConsumerPollStrategy | |
batchSeparator (producer) | Set the separator when passing a String to send batch message operation. | , | String |
delaySeconds (producer) | Delay sending messages for a number of seconds. | Integer | |
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
messageDeduplicationIdStrategy (producer) | Only for FIFO queues. Strategy for setting the messageDeduplicationId on the message. Can be one of the following options: useExchangeId, useContentBasedDeduplication. For the useContentBasedDeduplication option, no messageDeduplicationId will be set on the message. Enum values: useExchangeId, useContentBasedDeduplication | useExchangeId | String |
messageGroupIdStrategy (producer) | Only for FIFO queues. Strategy for setting the messageGroupId on the message. Can be one of the following options: useConstant, useExchangeId, usePropertyValue. For the usePropertyValue option, the value of property CamelAwsMessageGroupId will be used. Enum values: useConstant, useExchangeId, usePropertyValue | String | |
operation (producer) | The operation to do in case the user doesn’t want to send only a message. | Sqs2Operations | |
delayQueue (advanced) | Define if you want to apply delaySeconds option to the queue or on single messages. | false | boolean |
queueUrl (advanced) | To define the queueUrl explicitly. All other parameters, which would influence the queueUrl, are ignored. This parameter is intended to be used, to connect to a mock implementation of SQS, for testing purposes. | String | |
proxyHost (proxy) | To define a proxy host when instantiating the SQS client. | String | |
proxyPort (proxy) | To define a proxy port when instantiating the SQS client. | Integer | |
maximumMessageSize (queue) | The maximumMessageSize (in bytes) an SQS message can contain for this queue. | Integer | |
messageRetentionPeriod (queue) | The messageRetentionPeriod (in seconds) a message will be retained by SQS for this queue. | Integer | |
policy (queue) | The policy for this queue. It can be loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems. | String | |
receiveMessageWaitTimeSeconds (queue) | If you do not specify WaitTimeSeconds in the request, the queue attribute ReceiveMessageWaitTimeSeconds is used to determine how long to wait. | Integer | |
redrivePolicy (queue) | Specify the policy that send message to DeadLetter queue. See detail at Amazon docs. | String | |
backoffErrorThreshold (scheduler) | The number of subsequent error polls (failed due some error) that should happen before the backoffMultipler should kick-in. | int | |
backoffIdleThreshold (scheduler) | The number of subsequent idle polls that should happen before the backoffMultipler should kick-in. | int | |
backoffMultiplier (scheduler) | To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the next actual attempt is happening again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured. | int | |
delay (scheduler) | Milliseconds before the next poll. | 500 | long |
greedy (scheduler) | If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the previous run polled 1 or more messages. | false | boolean |
initialDelay (scheduler) | Milliseconds before the first poll starts. | 1000 | long |
repeatCount (scheduler) | Specifies a maximum limit of number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever. | 0 | long |
runLoggingLevel (scheduler) | The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that. Enum values: TRACE, DEBUG, INFO, WARN, ERROR, OFF | TRACE | LoggingLevel |
scheduledExecutorService (scheduler) | Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool. | ScheduledExecutorService | |
scheduler (scheduler) | To use a cron scheduler from either camel-spring or camel-quartz component. Use value spring or quartz for built in scheduler. | none | Object |
schedulerProperties (scheduler) | To configure additional properties when using a custom scheduler or any of the Quartz, Spring based scheduler. | Map | |
startScheduler (scheduler) | Whether the scheduler should be auto started. | true | boolean |
timeUnit (scheduler) | Time unit for initialDelay and delay options. Enum values: NANOSECONDS, MICROSECONDS, MILLISECONDS, SECONDS, MINUTES, HOURS, DAYS | MILLISECONDS | TimeUnit |
useFixedDelay (scheduler) | Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details. | true | boolean |
accessKey (security) | Amazon AWS Access Key. | String | |
secretKey (security) | Amazon AWS Secret Key. | String |
Required SQS component options
You have to provide the amazonSQSClient in the Registry or your accessKey and secretKey to access Amazon’s SQS.
8.5. Batch Consumer
This component implements the Batch Consumer.
This allows you, for instance, to know how many messages exist in this batch and to let the Aggregator aggregate this number of messages.
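As a sketch, the batch information reported by the consumer can be used to drive the Aggregator through completionFromBatchConsumer(); the queue name and client bean are illustrative assumptions.

from("aws2-sqs://camel-queue?amazonSQSClient=#amazonSQSClient&maxMessagesPerPoll=10")
    // org.apache.camel.processor.aggregate.GroupedBodyAggregationStrategy collects the bodies into a List;
    // the aggregation completes when all messages of the current poll (batch) have arrived
    .aggregate(constant(true), new GroupedBodyAggregationStrategy())
        .completionFromBatchConsumer()
    .to("mock:result");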
8.6. Usage
8.6.1. Static credentials vs Default Credential Provider
You can avoid using explicit static credentials by setting the useDefaultCredentialsProvider option to true. In that case the SQS client will look for credentials in the following order:
- Java system properties - aws.accessKeyId and aws.secretKey.
- Environment variables - AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
- Web Identity Token from AWS STS.
- The shared credentials and config files.
- Amazon ECS container credentials - loaded from the Amazon ECS if the environment variable AWS_CONTAINER_CREDENTIALS_RELATIVE_URI is set.
- Amazon EC2 Instance profile credentials.
For more information about this you can look at the AWS credentials documentation.
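A consumer endpoint relying on the default credentials provider chain could be configured as in this sketch (the queue name and region are illustrative):

from("aws2-sqs://camel-queue?useDefaultCredentialsProvider=true&region=eu-west-1")
    // no accessKey/secretKey on the endpoint; credentials come from the default provider chain
    .to("mock:result");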
8.6.2. Message headers set by the SQS producer
Header | Type | Description |
---|---|---|
CamelAwsSqsMD5OfBody | String | The MD5 checksum of the Amazon SQS message. |
CamelAwsSqsMessageId | String | The Amazon SQS message ID. |
CamelAwsSqsDelaySeconds | Integer | The delay seconds that the Amazon SQS message can be seen by others. |
8.6.3. Message headers set by the SQS consumer
Header | Type | Description |
---|---|---|
CamelAwsSqsMD5OfBody | String | The MD5 checksum of the Amazon SQS message. |
CamelAwsSqsMessageId | String | The Amazon SQS message ID. |
CamelAwsSqsReceiptHandle | String | The Amazon SQS message receipt handle. |
CamelAwsSqsAttributes | Map<String, String> | The Amazon SQS message attributes. |
8.6.4. Advanced AmazonSQS configuration
If your Camel Application is running behind a firewall or if you need to have more control over the SqsClient
instance configuration, you can create your own instance:
from("aws2-sqs://MyQueue?amazonSQSClient=#client&delay=5000&maxMessagesPerPoll=5") .to("mock:result");
8.6.5. Creating or updating an SQS Queue
In the SQS Component, when an endpoint is started, a check is executed to obtain information about whether the queue exists or not. You are able to customize the creation through the QueueAttributeName mapping with the SQSConfiguration option.
from("aws2-sqs://MyQueue?amazonSQSClient=#client&delay=5000&maxMessagesPerPoll=5") .to("mock:result");
In this example, if the MyQueue queue is not already created on AWS (and the autoCreateQueue option is set to true), it will be created with default parameters from the SQS configuration. If it is already up on AWS, the SQS configuration options will be used to override the existing AWS configuration.
8.6.6. DelayQueue vs Delay for Single message
When the option delayQueue is set to true, the SQS queue will be a DelayQueue with the DelaySeconds option as delay. For more information about DelayQueue you can read the AWS SQS documentation. One important point to take into account is the following:
- For standard queues, the per-queue delay setting is not retroactive—changing the setting doesn’t affect the delay of messages already in the queue.
- For FIFO queues, the per-queue delay setting is retroactive—changing the setting affects the delay of messages already in the queue.
as stated in the official documentation. If you want to specify a delay on single messages, you can ignore the delayQueue option; set this option to true only if you need to add a fixed delay to all messages enqueued.
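As an illustration, a sketch of both approaches using the documented delayQueue and delaySeconds options (queue name and client bean are placeholders):
// queue-level delay: every message in the queue gets a fixed 10 second delay
from("direct:queueDelay")
    .to("aws2-sqs://MyQueue?amazonSQSClient=#client&delayQueue=true&delaySeconds=10");

// per-message delay: delayQueue stays at its default (false), so only messages sent by this producer are delayed
from("direct:messageDelay")
    .to("aws2-sqs://MyQueue?amazonSQSClient=#client&delaySeconds=10");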
8.6.7. Server Side Encryption
There is a set of Server Side Encryption attributes for a queue. The related options are serverSideEncryptionEnabled, kmsMasterKeyId and kmsDataKeyReusePeriodSeconds. SSE is disabled by default. You need to explicitly set serverSideEncryptionEnabled to true and set the related parameters as queue attributes.
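A possible endpoint configuration enabling SSE with a customer managed KMS key (the KMS key alias is a placeholder, and the option names follow the component options listed in this chapter):
from("direct:start")
    .to("aws2-sqs://MyQueue?amazonSQSClient=#client"
        + "&serverSideEncryptionEnabled=true"
        + "&kmsMasterKeyId=alias/my-sqs-key"
        + "&kmsDataKeyReusePeriodSeconds=300");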
8.7. JMS-style Selectors
SQS does not allow selectors, but you can effectively achieve this by using the Camel Filter EIP and setting an appropriate visibilityTimeout. When SQS dispatches a message, it will wait up to the visibility timeout before it tries to dispatch the message to a different consumer, unless a DeleteMessage is received. By default, Camel will always send the DeleteMessage at the end of the route, unless the route ended in failure. To achieve appropriate filtering and not send the DeleteMessage even on successful completion of the route, use a Filter:
from("aws2-sqs://MyQueue?amazonSQSClient=#client&defaultVisibilityTimeout=5000&deleteIfFiltered=false&deleteAfterRead=false") .filter("${header.login} == true") .setProperty(Sqs2Constants.SQS_DELETE_FILTERED, constant(true)) .to("mock:filter");
In the above code, if an exchange doesn’t have an appropriate header, it will not make it through the filter and will also not be deleted from the SQS queue. Once the visibility timeout expires, the message will become visible to other consumers.
Note that we must set the property Sqs2Constants.SQS_DELETE_FILTERED to true to instruct Camel to send the DeleteMessage if the exchange is filtered.
8.8. Available Producer Operations
- single message (default)
- sendBatchMessage
- deleteMessage
- listQueues
- purgeQueue
8.9. Send Message
To send a single message (the default operation), set the exchange body to the message content:
from("direct:start") .setBody(constant("Camel rocks!")) .to("aws2-sqs://camel-1?accessKey=RAW(xxx)&secretKey=RAW(xxx)®ion=eu-west-1");
8.10. Send Batch Message
You can set a SendMessageBatchRequest or an Iterable (for example a Collection of strings) as the message body:
from("direct:start") .setHeader(SqsConstants.SQS_OPERATION, constant("sendBatchMessage")) .process(new Processor() { @Override public void process(Exchange exchange) throws Exception { Collection c = new ArrayList(); c.add("team1"); c.add("team2"); c.add("team3"); c.add("team4"); exchange.getIn().setBody(c); } }) .to("aws2-sqs://camel-1?accessKey=RAW(xxx)&secretKey=RAW(xxx)®ion=eu-west-1");
As result you’ll get an exchange containing a SendMessageBatchResponse instance, which you can examine to check which messages were successful and which were not. The id set on each message of the batch will be a random UUID.
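For instance, a processor sketch that inspects the response; the AWS SDK v2 SendMessageBatchResponse exposes the successful and failed entries (queue name and credentials are placeholders):
from("direct:start")
    .setHeader(SqsConstants.SQS_OPERATION, constant("sendBatchMessage"))
    .setBody(constant("team1,team2,team3"))
    .to("aws2-sqs://camel-1?accessKey=RAW(xxx)&secretKey=RAW(xxx)&region=eu-west-1")
    .process(exchange -> {
        SendMessageBatchResponse response =
                exchange.getMessage().getBody(SendMessageBatchResponse.class);
        // entries accepted by SQS
        response.successful().forEach(entry -> System.out.println("sent: " + entry.messageId()));
        // entries rejected by SQS, with the reported error code
        response.failed().forEach(entry -> System.out.println("failed: " + entry.id() + " " + entry.code()));
    });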
8.11. Delete single Message
Use the deleteMessage operation to delete a single message. You’ll need to set a receipt handle header for the message you want to delete.
from("direct:start") .setHeader(SqsConstants.SQS_OPERATION, constant("deleteMessage")) .setHeader(SqsConstants.RECEIPT_HANDLE, constant("123456")) .to("aws2-sqs://camel-1?accessKey=RAW(xxx)&secretKey=RAW(xxx)®ion=eu-west-1");
As result you’ll get an exchange containing a DeleteMessageResponse instance, which you can use to check whether the message was deleted or not.
8.12. List Queues
Use the listQueues operation to list queues.
from("direct:start") .setHeader(SqsConstants.SQS_OPERATION, constant("listQueues")) .to("aws2-sqs://camel-1?accessKey=RAW(xxx)&secretKey=RAW(xxx)®ion=eu-west-1");
As result you’ll get an exchange containing a ListQueuesResponse instance, which you can examine to check the actual queues.
8.13. Purge Queue
Use the purgeQueue operation to purge the queue.
from("direct:start") .setHeader(SqsConstants.SQS_OPERATION, constant("purgeQueue")) .to("aws2-sqs://camel-1?accessKey=RAW(xxx)&secretKey=RAW(xxx)®ion=eu-west-1");
As result you’ll get an exchange containing a PurgeQueueResponse instance.
8.14. Queue Autocreation
With the autoCreateQueue option, users can control whether an SQS queue is created automatically when it does not exist. The default for this option is false. If set to false, any operation on a non-existent queue in AWS won’t be successful and an error will be returned.
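For example, letting the component create the queue on first use (queue name and client bean are placeholders):
from("direct:start")
    .to("aws2-sqs://MyNewQueue?amazonSQSClient=#client&autoCreateQueue=true");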
8.15. Send Batch Message and Message Deduplication Strategy
In case you’re using a sendBatchMessage operation, you can set two different kinds of Message Deduplication Strategy:
- useExchangeId
- useContentBasedDeduplication
The first one uses an ExchangeIdMessageDeduplicationIdStrategy, which uses the Exchange ID as the deduplication parameter. The second one uses a NullMessageDeduplicationIdStrategy, which sets no deduplication ID so that the message body is used as the deduplication element.
In case of the send batch message operation, you’ll need to use useContentBasedDeduplication, and on the queue you’re pointing to you’ll need to enable the content based deduplication option.
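A sketch of a batch send to a FIFO queue using content based deduplication, mirroring the batch example above (queue name, credentials and the choice of message group id strategy are placeholders):
from("direct:start")
    .setHeader(SqsConstants.SQS_OPERATION, constant("sendBatchMessage"))
    // a String body is split into batch entries using the batchSeparator option (default ',')
    .setBody(constant("team1,team2,team3"))
    .to("aws2-sqs://camel-1.fifo?accessKey=RAW(xxx)&secretKey=RAW(xxx)&region=eu-west-1"
        + "&messageGroupIdStrategy=useConstant"
        + "&messageDeduplicationIdStrategy=useContentBasedDeduplication");
Remember that content based deduplication must also be enabled on the FIFO queue itself.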
8.16. Dependencies
Maven users will need to add the following dependency to their pom.xml.
pom.xml
<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-aws2-sqs</artifactId> <version>${camel-version}</version> </dependency>
where ${camel-version} must be replaced by the actual version of Camel.
8.17. Spring Boot Auto-Configuration
When using aws2-sqs with Spring Boot make sure to use the following Maven dependency to have support for auto configuration:
<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-aws2-sqs-starter</artifactId> </dependency>
The component supports 44 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
camel.component.aws2-sqs.access-key | Amazon AWS Access Key. | String | |
camel.component.aws2-sqs.amazon-a-w-s-host | The hostname of the Amazon AWS cloud. | amazonaws.com | String |
camel.component.aws2-sqs.amazon-s-q-s-client | To use the AmazonSQS as client. The option is a software.amazon.awssdk.services.sqs.SqsClient type. | SqsClient | |
camel.component.aws2-sqs.attribute-names | A list of attribute names to receive when consuming. Multiple names can be separated by comma. | String | |
camel.component.aws2-sqs.auto-create-queue | Setting the autocreation of the queue. | false | Boolean |
camel.component.aws2-sqs.autowired-enabled | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | Boolean |
camel.component.aws2-sqs.batch-separator | Set the separator when passing a String to send batch message operation. | , | String |
camel.component.aws2-sqs.bridge-error-handler | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | Boolean |
camel.component.aws2-sqs.concurrent-consumers | Allows you to use multiple threads to poll the sqs queue to increase throughput. | 1 | Integer |
camel.component.aws2-sqs.configuration | The AWS SQS default configuration. The option is a org.apache.camel.component.aws2.sqs.Sqs2Configuration type. | Sqs2Configuration | |
camel.component.aws2-sqs.default-visibility-timeout | The default visibility timeout (in seconds). | Integer | |
camel.component.aws2-sqs.delay-queue | Define if you want to apply delaySeconds option to the queue or on single messages. | false | Boolean |
camel.component.aws2-sqs.delay-seconds | Delay sending messages for a number of seconds. | Integer | |
camel.component.aws2-sqs.delete-after-read | Delete message from SQS after it has been read. | true | Boolean |
camel.component.aws2-sqs.delete-if-filtered | Whether or not to send the DeleteMessage to the SQS queue if the exchange has property with key Sqs2Constants#SQS_DELETE_FILTERED (CamelAwsSqsDeleteFiltered) set to true. | true | Boolean |
camel.component.aws2-sqs.enabled | Whether to enable auto configuration of the aws2-sqs component. This is enabled by default. | Boolean | |
camel.component.aws2-sqs.extend-message-visibility | If enabled then a scheduled background task will keep extending the message visibility on SQS. This is needed if it takes a long time to process the message. If set to true defaultVisibilityTimeout must be set. See details at Amazon docs. | false | Boolean |
camel.component.aws2-sqs.kms-data-key-reuse-period-seconds | The length of time, in seconds, for which Amazon SQS can reuse a data key to encrypt or decrypt messages before calling AWS KMS again. An integer representing seconds, between 60 seconds (1 minute) and 86,400 seconds (24 hours). Default: 300 (5 minutes). | Integer | |
camel.component.aws2-sqs.kms-master-key-id | The ID of an AWS-managed customer master key (CMK) for Amazon SQS or a custom CMK. | String | |
camel.component.aws2-sqs.lazy-start-producer | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | Boolean |
camel.component.aws2-sqs.maximum-message-size | The maximumMessageSize (in bytes) an SQS message can contain for this queue. | Integer | |
camel.component.aws2-sqs.message-attribute-names | A list of message attribute names to receive when consuming. Multiple names can be separated by comma. | String | |
camel.component.aws2-sqs.message-deduplication-id-strategy | Only for FIFO queues. Strategy for setting the messageDeduplicationId on the message. Can be one of the following options: useExchangeId, useContentBasedDeduplication. For the useContentBasedDeduplication option, no messageDeduplicationId will be set on the message. | useExchangeId | String |
camel.component.aws2-sqs.message-group-id-strategy | Only for FIFO queues. Strategy for setting the messageGroupId on the message. Can be one of the following options: useConstant, useExchangeId, usePropertyValue. For the usePropertyValue option, the value of property CamelAwsMessageGroupId will be used. | String | |
camel.component.aws2-sqs.message-retention-period | The messageRetentionPeriod (in seconds) a message will be retained by SQS for this queue. | Integer | |
camel.component.aws2-sqs.operation | The operation to do in case the user doesn’t want to send only a message. | Sqs2Operations | |
camel.component.aws2-sqs.override-endpoint | Set the need for overriding the endpoint. This option needs to be used in combination with the uriEndpointOverride option. | false | Boolean |
camel.component.aws2-sqs.policy | The policy for this queue. It can be loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems. | String | |
camel.component.aws2-sqs.protocol | The underlying protocol used to communicate with SQS. | https | String |
camel.component.aws2-sqs.proxy-host | To define a proxy host when instantiating the SQS client. | String | |
camel.component.aws2-sqs.proxy-port | To define a proxy port when instantiating the SQS client. | Integer | |
camel.component.aws2-sqs.proxy-protocol | To define a proxy protocol when instantiating the SQS client. | Protocol | |
camel.component.aws2-sqs.queue-owner-a-w-s-account-id | Specify the queue owner aws account id when you need to connect the queue with different account owner. | String | |
camel.component.aws2-sqs.queue-url | To define the queueUrl explicitly. All other parameters, which would influence the queueUrl, are ignored. This parameter is intended to be used, to connect to a mock implementation of SQS, for testing purposes. | String | |
camel.component.aws2-sqs.receive-message-wait-time-seconds | If you do not specify WaitTimeSeconds in the request, the queue attribute ReceiveMessageWaitTimeSeconds is used to determine how long to wait. | Integer | |
camel.component.aws2-sqs.redrive-policy | Specify the policy that send message to DeadLetter queue. See detail at Amazon docs. | String | |
camel.component.aws2-sqs.region | The region in which the SQS client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example ap-east-1). You’ll need to use the name Region.EU_WEST_1.id(). | String | |
camel.component.aws2-sqs.secret-key | Amazon AWS Secret Key. | String | |
camel.component.aws2-sqs.server-side-encryption-enabled | Define if Server Side Encryption is enabled or not on the queue. | false | Boolean |
camel.component.aws2-sqs.trust-all-certificates | If we want to trust all certificates in case of overriding the endpoint. | false | Boolean |
camel.component.aws2-sqs.uri-endpoint-override | Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option. | String | |
camel.component.aws2-sqs.use-default-credentials-provider | Set whether the SQS client should expect to load credentials on an AWS infra instance or to expect static credentials to be passed in. | false | Boolean |
camel.component.aws2-sqs.visibility-timeout | The duration (in seconds) that the received messages are hidden from subsequent retrieve requests after being retrieved by a ReceiveMessage request to set in the com.amazonaws.services.sqs.model.SetQueueAttributesRequest. This only makes sense if it’s different from defaultVisibilityTimeout. It changes the queue visibility timeout attribute permanently. | Integer | |
camel.component.aws2-sqs.wait-time-seconds | Duration in seconds (0 to 20) that the ReceiveMessage action call will wait until a message is in the queue to include in the response. | Integer |
Chapter 9. Azure Storage Blob Service
Both producer and consumer are supported
The Azure Storage Blob component is used for storing and retrieving blobs from Azure Storage Blob Service using Azure APIs v12. For versions above v12, whether this component can adopt the changes will depend on how many breaking changes result.
Prerequisites
You must have a valid Windows Azure Storage account. More information is available at Azure Documentation Portal .
Maven users will need to add the following dependency to their pom.xml
for this component:
<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-azure-storage-blob</artifactId> <version>{CamelSBVersion}</version> <!-- use the same version as your Camel core version --> </dependency>
9.1. URI Format
azure-storage-blob://accountName[/containerName][?options]
For the consumer, accountName and containerName are required. For the producer, it depends on the operation being requested: for container-level operations, for example createContainer, only accountName and containerName are required, but for blob-level operations, for example getBlob, accountName, containerName and blobName are required.
The blob will be created if it does not already exist. You can append query options to the URI in the following format: ?options=value&option2=value&…
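For example, route sketches for the two producer cases described above (account, container and blob names and the access key are placeholders):
// container-level operation: accountName and containerName are enough
from("direct:createContainer")
    .to("azure-storage-blob://camelazure/container1?operation=createBlobContainer&accessKey=RAW(yourAccessKey)");

// blob-level operation: blobName is also required
from("direct:readBlob")
    .to("azure-storage-blob://camelazure/container1?blobName=hello.txt&operation=getBlob&accessKey=RAW(yourAccessKey)");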
9.2. Configuring Options
Camel components are configured on two separate levels:
- component level
- endpoint level
9.2.1. Configuring Component Options
The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth.
Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all.
Configuring components can be done with the Component DSL, in a configuration file (application.properties|yaml), or directly with Java code.
9.2.2. Configuring Endpoint Options
Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both.
Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints.
A good practice when configuring options is to use Property Placeholders, which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse.
The following two sections lists all the options, firstly for the component followed by the endpoint.
9.3. Component Options
The Azure Storage Blob Service component supports 31 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
blobName (common) | The blob name, to consume specific blob from a container. However on producer, is only required for the operations on the blob level. | String | |
blobOffset (common) | Set the blob offset for the upload or download operations, default is 0. | 0 | long |
blobType (common) | The blob type in order to initiate the appropriate settings for each blob type. | blockblob | BlobType |
closeStreamAfterRead (common) | Close the stream after read or keep it open, default is true. | true | boolean |
configuration (common) | The component configurations. | BlobConfiguration | |
credentials (common) | StorageSharedKeyCredential can be injected to create the azure client, this holds the important authentication information. | StorageSharedKeyCredential | |
dataCount (common) | How many bytes to include in the range. Must be greater than or equal to 0 if specified. | Long | |
fileDir (common) | The file directory where the downloaded blobs will be saved to, this can be used in both, producer and consumer. | String | |
maxResultsPerPage (common) | Specifies the maximum number of blobs to return, including all BlobPrefix elements. If the request does not specify maxResultsPerPage or specifies a value greater than 5,000, the server will return up to 5,000 items. | Integer | |
maxRetryRequests (common) | Specifies the maximum number of additional HTTP Get requests that will be made while reading the data from a response body. | 0 | int |
prefix (common) | Filters the results to return only blobs whose names begin with the specified prefix. May be null to return all blobs. | String | |
regex (common) | Filters the results to return only blobs whose names match the specified regular expression. May be null to return all. If both prefix and regex are set, regex takes the priority and prefix is ignored. | String | |
serviceClient (common) | Autowired Client to a storage account. This client does not hold any state about a particular storage account but is instead a convenient way of sending off appropriate requests to the resource on the service. It may also be used to construct URLs to blobs and containers. This client contains operations on a service account. Operations on a container are available on BlobContainerClient through BlobServiceClient#getBlobContainerClient(String), and operations on a blob are available on BlobClient through BlobContainerClient#getBlobClient(String). | BlobServiceClient | |
timeout (common) | An optional timeout value beyond which a RuntimeException will be raised. | Duration | |
bridgeErrorHandler (consumer) | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | boolean |
blobSequenceNumber (producer) | A user-controlled value that you can use to track requests. The value of the sequence number must be between 0 and 2^63 - 1. The default value is 0. | 0 | Long |
blockListType (producer) | Specifies which type of blocks to return. | COMMITTED | BlockListType |
changeFeedContext (producer) | When using getChangeFeed producer operation, this gives additional context that is passed through the Http pipeline during the service call. | Context | |
changeFeedEndTime (producer) | When using getChangeFeed producer operation, this filters the results to return events approximately before the end time. Note: A few events belonging to the next hour can also be returned. A few events belonging to this hour can be missing; to ensure all events from the hour are returned, round the end time up by an hour. | OffsetDateTime | |
changeFeedStartTime (producer) | When using getChangeFeed producer operation, this filters the results to return events approximately after the start time. Note: A few events belonging to the previous hour can also be returned. A few events belonging to this hour can be missing; to ensure all events from the hour are returned, round the start time down by an hour. | OffsetDateTime | |
closeStreamAfterWrite (producer) | Close the stream after write or keep it open, default is true. | true | boolean |
commitBlockListLater (producer) | When set to true, the staged blocks will not be committed directly. | true | boolean |
createAppendBlob (producer) | When set to true, the append blob will be created when committing append blocks. | true | boolean |
createPageBlob (producer) | When set to true, the page blob will be created when uploading a page blob. | true | boolean |
downloadLinkExpiration (producer) | Override the default expiration (millis) of URL download link. | Long | |
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
operation (producer) | The blob operation that can be used with this component on the producer. | listBlobContainers | BlobOperationsDefinition |
pageBlobSize (producer) | Specifies the maximum size for the page blob, up to 8 TB. The page blob size must be aligned to a 512-byte boundary. | 512 | Long |
autowiredEnabled (advanced) | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | boolean |
accessKey (security) | Access key for the associated azure account name to be used for authentication with azure blob services. | String | |
sourceBlobAccessKey (security) | Source Blob Access Key: for the copyBlob operation we need an accessKey for the source blob we want to copy. Passing an accessKey as a header is unsafe, so it can be set as an endpoint option instead. | String |
9.4. Endpoint Options
The Azure Storage Blob Service endpoint is configured using URI syntax:
azure-storage-blob:accountName/containerName
with the following path and query parameters:
9.4.1. Path Parameters (2 parameters)
Name | Description | Default | Type |
---|---|---|---|
accountName (common) | Azure account name to be used for authentication with azure blob services. | String | |
containerName (common) | The blob container name. | String |
9.4.2. Query Parameters (48 parameters)
Name | Description | Default | Type |
---|---|---|---|
blobName (common) | The blob name, to consume specific blob from a container. However on producer, is only required for the operations on the blob level. | String | |
blobOffset (common) | Set the blob offset for the upload or download operations, default is 0. | 0 | long |
blobServiceClient (common) | Client to a storage account. This client does not hold any state about a particular storage account but is instead a convenient way of sending off appropriate requests to the resource on the service. It may also be used to construct URLs to blobs and containers. This client contains operations on a service account. Operations on a container are available on BlobContainerClient through getBlobContainerClient(String), and operations on a blob are available on BlobClient through getBlobContainerClient(String).getBlobClient(String). | BlobServiceClient | |
blobType (common) | The blob type in order to initiate the appropriate settings for each blob type. | blockblob | BlobType |
closeStreamAfterRead (common) | Close the stream after read or keep it open, default is true. | true | boolean |
credentials (common) | StorageSharedKeyCredential can be injected to create the azure client, this holds the important authentication information. | StorageSharedKeyCredential | |
dataCount (common) | How many bytes to include in the range. Must be greater than or equal to 0 if specified. | Long | |
fileDir (common) | The file directory where the downloaded blobs will be saved to, this can be used in both, producer and consumer. | String | |
maxResultsPerPage (common) | Specifies the maximum number of blobs to return, including all BlobPrefix elements. If the request does not specify maxResultsPerPage or specifies a value greater than 5,000, the server will return up to 5,000 items. | Integer | |
maxRetryRequests (common) | Specifies the maximum number of additional HTTP Get requests that will be made while reading the data from a response body. | 0 | int |
prefix (common) | Filters the results to return only blobs whose names begin with the specified prefix. May be null to return all blobs. | String | |
regex (common) | Filters the results to return only blobs whose names match the specified regular expression. May be null to return all. If both prefix and regex are set, regex takes the priority and prefix is ignored. | String | |
serviceClient (common) | Autowired Client to a storage account. This client does not hold any state about a particular storage account but is instead a convenient way of sending off appropriate requests to the resource on the service. It may also be used to construct URLs to blobs and containers. This client contains operations on a service account. Operations on a container are available on BlobContainerClient through BlobServiceClient#getBlobContainerClient(String), and operations on a blob are available on BlobClient through BlobContainerClient#getBlobClient(String). | BlobServiceClient | |
timeout (common) | An optional timeout value beyond which a RuntimeException will be raised. | Duration | |
bridgeErrorHandler (consumer) | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | boolean |
sendEmptyMessageWhenIdle (consumer) | If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead. | false | boolean |
exceptionHandler (consumer (advanced)) | To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. | ExceptionHandler | |
exchangePattern (consumer (advanced)) | Sets the exchange pattern when the consumer creates an exchange. | ExchangePattern | |
pollStrategy (consumer (advanced)) | A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel. | PollingConsumerPollStrategy | |
blobSequenceNumber (producer) | A user-controlled value that you can use to track requests. The value of the sequence number must be between 0 and 2^63 - 1. The default value is 0. | 0 | Long |
blockListType (producer) | Specifies which type of blocks to return. | COMMITTED | BlockListType |
changeFeedContext (producer) | When using getChangeFeed producer operation, this gives additional context that is passed through the Http pipeline during the service call. | Context | |
changeFeedEndTime (producer) | When using getChangeFeed producer operation, this filters the results to return events approximately before the end time. Note: A few events belonging to the next hour can also be returned. A few events belonging to this hour can be missing; to ensure all events from the hour are returned, round the end time up by an hour. | OffsetDateTime | |
changeFeedStartTime (producer) | When using getChangeFeed producer operation, this filters the results to return events approximately after the start time. Note: A few events belonging to the previous hour can also be returned. A few events belonging to this hour can be missing; to ensure all events from the hour are returned, round the start time down by an hour. | OffsetDateTime | |
closeStreamAfterWrite (producer) | Close the stream after write or keep it open, default is true. | true | boolean |
commitBlockListLater (producer) | When set to true, the staged blocks will not be committed directly. | true | boolean |
createAppendBlob (producer) | When set to true, the append blob will be created when committing append blocks. | true | boolean |
createPageBlob (producer) | When set to true, the page blob will be created when uploading a page blob. | true | boolean |
downloadLinkExpiration (producer) | Override the default expiration (millis) of URL download link. | Long | |
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
operation (producer) | The blob operation that can be used with this component on the producer. | listBlobContainers | BlobOperationsDefinition |
pageBlobSize (producer) | Specifies the maximum size for the page blob, up to 8 TB. The page blob size must be aligned to a 512-byte boundary. | 512 | Long |
backoffErrorThreshold (scheduler) | The number of subsequent error polls (failed due some error) that should happen before the backoffMultipler should kick-in. | int | |
backoffIdleThreshold (scheduler) | The number of subsequent idle polls that should happen before the backoffMultipler should kick-in. | int | |
backoffMultiplier (scheduler) | To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the next actual attempt is happening again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured. | int | |
delay (scheduler) | Milliseconds before the next poll. | 500 | long |
greedy (scheduler) | If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the previous run polled 1 or more messages. | false | boolean |
initialDelay (scheduler) | Milliseconds before the first poll starts. | 1000 | long |
repeatCount (scheduler) | Specifies a maximum limit of number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever. | 0 | long |
runLoggingLevel (scheduler) | The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that. | TRACE | LoggingLevel |
scheduledExecutorService (scheduler) | Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool. | ScheduledExecutorService | |
scheduler (scheduler) | To use a cron scheduler from either camel-spring or camel-quartz component. Use value spring or quartz for built in scheduler. | none | Object |
schedulerProperties (scheduler) | To configure additional properties when using a custom scheduler or any of the Quartz, Spring based scheduler. | Map | |
startScheduler (scheduler) | Whether the scheduler should be auto started. | true | boolean |
timeUnit (scheduler) | Time unit for initialDelay and delay options. | MILLISECONDS | TimeUnit |
useFixedDelay (scheduler) | Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details. | true | boolean |
accessKey (security) | Access key for the associated azure account name to be used for authentication with azure blob services. | String | |
sourceBlobAccessKey (security) | Source Blob Access Key: for the copyBlob operation we need an accessKey for the source blob we want to copy. Passing an accessKey as a header is unsafe, so it can be set as an endpoint option instead. | String |
Required information options
To use this component, you have three options for providing the required Azure authentication information:
- Provide accountName and accessKey for your Azure account. This is the simplest way to get started. The accessKey can be generated through your Azure portal.
- Provide a StorageSharedKeyCredential instance, which can be provided via the credentials option.
- Provide a BlobServiceClient instance, which can be provided via the blobServiceClient option. Note: You don’t need to create a specific client, e.g. BlockBlobClient; the BlobServiceClient represents the upper level, which can be used to retrieve lower level clients.
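For instance, the second option can be wired up as follows (the bean name and credential values are placeholders, and context is your CamelContext):
StorageSharedKeyCredential credential =
        new StorageSharedKeyCredential("yourAccountName", "yourAccessKey");

// make the credential available in the Camel registry
context.getRegistry().bind("azureCredentials", credential);

from("direct:start")
    .to("azure-storage-blob://yourAccountName/container1?blobName=hello.txt&operation=getBlob&credentials=#azureCredentials");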
9.5. Usage
For example, in order to download blob content from the block blob hello.txt located in container1 in the camelazure storage account, use the following snippet:
from("azure-storage-blob://camelazure/container1?blobName=hello.txt&accessKey=yourAccessKey"). to("file://blobdirectory");
9.5.1. Message headers evaluated by the component producer
Header | Variable Name | Type | Operations | Description |
---|---|---|---|---|
| | | | All | An optional timeout value beyond which a {@link RuntimeException} will be raised. |
| | | | Operations related to container and blob | Metadata to associate with the container or blob. |
| | | | | Specifies how the data in this container is available to the public. |
| | | | Operations related to container and blob | This contains values which will restrict the successful operation of a variety of requests to the conditions present. These conditions are entirely optional. |
| | | | | The details for listing specific blobs. |
| | | | | Filters the results to return only blobs whose names begin with the specified prefix. May be null to return all blobs. |
| | | | | Specifies the maximum number of blobs to return, including all BlobPrefix elements. If the request does not specify maxResultsPerPage or specifies a value greater than 5,000, the server will return up to 5,000 items. |
| | | | | Defines options available to configure the behavior of a call to listBlobsFlatSegment on a {@link BlobContainerClient} object. |
| | | | | Additional parameters for a set of operations. |
| | | | | Defines values for AccessTier. |
| | | | Most operations related to upload blob | An MD5 hash of the block content. This hash is used to verify the integrity of the block during transport. When this header is specified, the storage service compares the hash of the content that has arrived with this header value. Note that this MD5 hash is not stored with the blob. If the two hashes do not match, the operation will fail. |
| | | | Operations related to page blob | A {@link PageRange} object. Given that pages must be aligned with 512-byte boundaries, the start offset must be a modulus of 512 and the end offset must be a modulus of 512 - 1. Examples of valid byte ranges are 0-511, 512-1023, etc. |
| | | | | When is set to |
| | | | | When is set to |
| | | | | When is set to |
| | | | | Specifies which type of blocks to return. |
| | | | | Specifies the maximum size for the page blob, up to 8 TB. The page blob size must be aligned to a 512-byte boundary. |
| | | | | A user-controlled value that you can use to track requests. The value of the sequence number must be between 0 and 2^63 - 1. The default value is 0. |
| | | | | Specifies the behavior for deleting the snapshots on this blob. {@code Include} will delete the base blob and all snapshots. {@code Only} will delete only the snapshots. If a snapshot is being deleted, you must pass null. |
| | | | | A {@link ListBlobContainersOptions} which specifies what data should be returned by the service. |
| | | | | {@link ParallelTransferOptions} to use to download to file. Number of parallel transfers parameter is ignored. |
| | | | | The file directory where the downloaded blobs will be saved to. |
| | | | | Override the default expiration (millis) of URL download link. |
| | | | Operations related to blob | Override/set the blob name on the exchange headers. |
| | | | Operations related to container and blob | Override/set the container name on the exchange headers. |
| | | | All | Specify the producer operation to execute, please see the doc on this page related to producer operation. |
| | | | | Filters the results to return only blobs whose names match the specified regular expression. May be null to return all. If both prefix and regex are set, regex takes the priority and prefix is ignored. |
| | | | | It filters the results to return events approximately after the start time. Note: A few events belonging to the previous hour can also be returned. A few events belonging to this hour can be missing; to ensure all events from the hour are returned, round the start time down by an hour. |
| | | | | It filters the results to return events approximately before the end time. Note: A few events belonging to the next hour can also be returned. A few events belonging to this hour can be missing; to ensure all events from the hour are returned, round the end time up by an hour. |
| | | | | This gives additional context that is passed through the Http pipeline during the service call. |
| | | | | The source blob account name to be used as source account name in a copy blob operation. |
| | | | | The source blob container name to be used as source container name in a copy blob operation. |
9.5.2. Message headers set by either component producer or consumer
Header | Variable Name | Type | Description |
---|---|---|---|
| | | | Access tier of the blob. |
| | | | Datetime when the access tier of the blob last changed. |
| | | | Archive status of the blob. |
| | | | Creation time of the blob. |
| | | | The current sequence number for a page blob. |
| | | | The size of the blob. |
| | | | The type of the blob. |
| | | | Cache control specified for the blob. |
| | | | Number of blocks committed to an append blob. |
| | | | Content disposition specified for the blob. |
| | | | Content encoding specified for the blob. |
| | | | Content language specified for the blob. |
| | | | Content MD5 specified for the blob. |
| | | | Content type specified for the blob. |
| | | | Datetime when the last copy operation on the blob completed. |
| | | | Snapshot identifier of the last incremental copy snapshot for the blob. |
| | | | Identifier of the last copy operation performed on the blob. |
| | | | Progress of the last copy operation performed on the blob. |
| | | | Source of the last copy operation performed on the blob. |
| | | | Status of the last copy operation performed on the blob. |
| | | | Description of the last copy operation on the blob. |
| | | | The E Tag of the blob. |
| | | | Flag indicating if the access tier of the blob was inferred from properties of the blob. |
| | | | Flag indicating if the blob was incrementally copied. |
| | | | Flag indicating if the blob’s content is encrypted on the server. |
| | | | Datetime when the blob was last modified. |
| | | | Type of lease on the blob. |
| | | | State of the lease on the blob. |
| | | | Status of the lease on the blob. |
| | | | Additional metadata associated with the blob. |
| | | | The offset at which the block was committed to the block blob. |
| | | | The downloaded filename from the operation. |
| | | | The download link generated by the downloadLink operation. |
| | | | Returns non-parsed httpHeaders that can be used by the user. |
9.5.3. Advanced Azure Storage Blob configuration
If your Camel application is running behind a firewall, or if you need to have more control over the BlobServiceClient instance configuration, you can create your own instance:
StorageSharedKeyCredential credential = new StorageSharedKeyCredential("yourAccountName", "yourAccessKey"); String uri = String.format("https://%s.blob.core.windows.net", "yourAccountName"); BlobServiceClient client = new BlobServiceClientBuilder() .endpoint(uri) .credential(credential) .buildClient(); // This is camel context context.getRegistry().bind("client", client);
Then refer to this instance in your Camel azure-storage-blob component configuration:
from("azure-storage-blob://cameldev/container1?blobName=myblob&serviceClient=#client") .to("mock:result");
9.5.4. Automatic detection of BlobServiceClient client in registry
The component is capable of detecting the presence of a BlobServiceClient bean in the registry. If it’s the only instance of that type, it will be used as the client and you won’t have to define it as a uri parameter, as in the example above. This can be very useful for smarter configuration of the endpoint, as shown below.
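For instance, assuming a single BlobServiceClient bean is already bound in the registry as in the previous section, the endpoint can omit the serviceClient parameter:
from("azure-storage-blob://cameldev/container1?blobName=myblob")
    .to("mock:result");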
9.5.5. Azure Storage Blob Producer operations
The Camel Azure Storage Blob component provides a wide range of operations on the producer side:
Operations on the service level
For these operations, accountName is required.
Operation | Description |
---|---|
| Get the content of the blob. You can restrict the output of this operation to a blob range. |
| Returns transaction logs of all the changes that occur to the blobs and the blob metadata in your storage account. The change feed provides ordered, guaranteed, durable, immutable, read-only log of these changes. |
Operations on the container level
For these operations, accountName and containerName are required.
Operation | Description |
---|---|
| Creates a new container within a storage account. If a container with the same name already exists, the producer will ignore it. |
| Deletes the specified container in the storage account. If the container doesn’t exist the operation fails. |
| Returns a list of blobs in this container, with folder structures flattened. |
Operations on the blob level
For these operations, accountName, containerName and blobName are required.
Operation | Blob Type | Description |
---|---|---|
| Common | Get the content of the blob. You can restrict the output of this operation to a blob range. |
| Common | Delete a blob. |
| Common | Downloads the entire blob into a file specified by the path.The file will be created and must not exist, if the file already exists a {@link FileAlreadyExistsException} will be thrown. |
| Common | Generates the download link for the specified blob using shared access signatures (SAS). This by default only limit to 1hour of allowed access. However, you can override the default expiration duration through the headers. |
| BlockBlob | Creates a new block blob, or updates the content of an existing block blob. Updating an existing block blob overwrites any existing metadata on the blob. Partial updates are not supported with PutBlob; the content of the existing blob is overwritten with the new content. |
| | Uploads the specified block to the block blob’s "staging area" to be later committed by a call to commitBlobBlockList. |
| | Writes a blob by specifying the list of block IDs that are to make up the blob. In order to be written as part of a blob, a block must have been successfully written to the server in a prior stageBlockBlobList operation. |
| | Returns the list of blocks that have been uploaded as part of a block blob using the specified block list filter. |
| | Creates a 0-length append blob. Call the commitAppendBlob operation to append data to an append blob. |
| | Commits a new block of data to the end of the existing append blob. |
| | Creates a page blob of the specified length. Call the uploadPageBlob operation to upload data to a page blob. |
| | Writes one or more pages to the page blob. The write size must be a multiple of 512. |
| | Resizes the page blob to the specified size (which must be a multiple of 512). |
| | Frees the specified pages from the page blob. The size of the range must be a multiple of 512. |
| | Returns the list of valid page ranges for a page blob or snapshot of a page blob. |
| | Copy a blob from one container to another one, even from different accounts. |
Refer to the examples section below to learn how to use these operations in your Camel application.
9.5.6. Consumer Examples
To consume a blob into a file using the file component, this can be done like this:
from("azure-storage-blob://camelazure/container1?blobName=hello.txt&accountName=yourAccountName&accessKey=yourAccessKey"). to("file://blobdirectory");
However, you can also write to a file directly without using the file component; you will need to specify the fileDir folder path in order to save your blob on your machine.
from("azure-storage-blob://camelazure/container1?blobName=hello.txt&accountName=yourAccountName&accessKey=yourAccessKey&fileDir=/var/to/awesome/dir"). to("mock:results");
Also, the component supports batch consumer, hence you can consume multiple blobs by only specifying the container name; the consumer will return multiple exchanges depending on the number of blobs in the container.
Example
from("azure-storage-blob://camelazure/container1?accountName=yourAccountName&accessKey=yourAccessKey&fileDir=/var/to/awesome/dir"). to("mock:results");
9.5.7. Producer Operations Examples
-
listBlobContainers
from("direct:start") .process(exchange -> { // set the header you want the producer to evaluate, refer to the previous // section to learn about the headers that can be set // e.g: exchange.getIn().setHeader(BlobConstants.LIST_BLOB_CONTAINERS_OPTIONS, new ListBlobContainersOptions().setMaxResultsPerPage(10)); }) .to("azure-storage-blob://camelazure?operation=listBlobContainers&client&serviceClient=#client") .to("mock:result");
-
createBlobContainer
from("direct:start") .process(exchange -> { // set the header you want the producer to evaluate, refer to the previous // section to learn about the headers that can be set // e.g: exchange.getIn().setHeader(BlobConstants.BLOB_CONTAINER_NAME, "newContainerName"); }) .to("azure-storage-blob://camelazure/container1?operation=createBlobContainer&serviceClient=#client") .to("mock:result");
-
deleteBlobContainer
:
from("direct:start") .process(exchange -> { // set the header you want the producer to evaluate, refer to the previous // section to learn about the headers that can be set // e.g: exchange.getIn().setHeader(BlobConstants.BLOB_CONTAINER_NAME, "overridenName"); }) .to("azure-storage-blob://camelazure/container1?operation=deleteBlobContainer&serviceClient=#client") .to("mock:result");
-
listBlobs
:
from("direct:start") .process(exchange -> { // set the header you want the producer to evaluate, refer to the previous // section to learn about the headers that can be set // e.g: exchange.getIn().setHeader(BlobConstants.BLOB_CONTAINER_NAME, "overridenName"); }) .to("azure-storage-blob://camelazure/container1?operation=listBlobs&serviceClient=#client") .to("mock:result");
-
getBlob
:
We can either set an outputStream in the exchange body and write the data to it, for example:
from("direct:start") .process(exchange -> { // set the header you want the producer to evaluate, refer to the previous // section to learn about the headers that can be set // e.g: exchange.getIn().setHeader(BlobConstants.BLOB_CONTAINER_NAME, "overridenName"); // set our body exchange.getIn().setBody(outputStream); }) .to("azure-storage-blob://camelazure/container1?blobName=blob&operation=getBlob&serviceClient=#client") .to("mock:result");
If we don’t set a body, then this operation will give us an InputStream instance which can be processed further downstream:
from("direct:start") .to("azure-storage-blob://camelazure/container1?blobName=blob&operation=getBlob&serviceClient=#client") .process(exchange -> { InputStream inputStream = exchange.getMessage().getBody(InputStream.class); // We use Apache common IO for simplicity, but you are free to do whatever dealing // with inputStream System.out.println(IOUtils.toString(inputStream, StandardCharsets.UTF_8.name())); }) .to("mock:result");
-
deleteBlob
:
from("direct:start") .process(exchange -> { // set the header you want the producer to evaluate, refer to the previous // section to learn about the headers that can be set // e.g: exchange.getIn().setHeader(BlobConstants.BLOB_NAME, "overridenName"); }) .to("azure-storage-blob://camelazure/container1?blobName=blob&operation=deleteBlob&serviceClient=#client") .to("mock:result");
-
downloadBlobToFile
:
from("direct:start") .process(exchange -> { // set the header you want the producer to evaluate, refer to the previous // section to learn about the headers that can be set // e.g: exchange.getIn().setHeader(BlobConstants.BLOB_NAME, "overridenName"); }) .to("azure-storage-blob://camelazure/container1?blobName=blob&operation=downloadBlobToFile&fileDir=/var/mydir&serviceClient=#client") .to("mock:result");
-
downloadLink
from("direct:start") .to("azure-storage-blob://camelazure/container1?blobName=blob&operation=downloadLink&serviceClient=#client") .process(exchange -> { String link = exchange.getMessage().getHeader(BlobConstants.DOWNLOAD_LINK, String.class); System.out.println("My link " + link); }) .to("mock:result");
-
uploadBlockBlob
from("direct:start") .process(exchange -> { // set the header you want the producer to evaluate, refer to the previous // section to learn about the headers that can be set // e.g: exchange.getIn().setHeader(BlobConstants.BLOB_NAME, "overridenName"); exchange.getIn().setBody("Block Blob"); }) .to("azure-storage-blob://camelazure/container1?blobName=blob&operation=uploadBlockBlob&serviceClient=#client") .to("mock:result");
-
stageBlockBlobList
from("direct:start") .process(exchange -> { final List<BlobBlock> blocks = new LinkedList<>(); blocks.add(BlobBlock.createBlobBlock(new ByteArrayInputStream("Hello".getBytes()))); blocks.add(BlobBlock.createBlobBlock(new ByteArrayInputStream("From".getBytes()))); blocks.add(BlobBlock.createBlobBlock(new ByteArrayInputStream("Camel".getBytes()))); exchange.getIn().setBody(blocks); }) .to("azure-storage-blob://camelazure/container1?blobName=blob&operation=stageBlockBlobList&serviceClient=#client") .to("mock:result");
-
commitBlockBlobList
from("direct:start") .process(exchange -> { // We assume here you have the knowledge of these blocks you want to commit final List<Block> blocksIds = new LinkedList<>(); blocksIds.add(new Block().setName("id-1")); blocksIds.add(new Block().setName("id-2")); blocksIds.add(new Block().setName("id-3")); exchange.getIn().setBody(blocksIds); }) .to("azure-storage-blob://camelazure/container1?blobName=blob&operation=commitBlockBlobList&serviceClient=#client") .to("mock:result");
-
getBlobBlockList
from("direct:start") .to("azure-storage-blob://camelazure/container1?blobName=blob&operation=getBlobBlockList&serviceClient=#client") .log("${body}") .to("mock:result");
-
createAppendBlob
from("direct:start") .to("azure-storage-blob://camelazure/container1?blobName=blob&operation=createAppendBlob&serviceClient=#client") .to("mock:result");
-
commitAppendBlob
from("direct:start") .process(exchange -> { final String data = "Hello world from my awesome tests!"; final InputStream dataStream = new ByteArrayInputStream(data.getBytes(StandardCharsets.UTF_8)); exchange.getIn().setBody(dataStream); // of course you can set whatever headers you like, refer to the headers section to learn more }) .to("azure-storage-blob://camelazure/container1?blobName=blob&operation=commitAppendBlob&serviceClient=#client") .to("mock:result");
-
createPageBlob
from("direct:start") .to("azure-storage-blob://camelazure/container1?blobName=blob&operation=createPageBlob&serviceClient=#client") .to("mock:result");
-
uploadPageBlob
from("direct:start") .process(exchange -> { byte[] dataBytes = new byte[512]; // we set range for the page from 0-511 new Random().nextBytes(dataBytes); final InputStream dataStream = new ByteArrayInputStream(dataBytes); final PageRange pageRange = new PageRange().setStart(0).setEnd(511); exchange.getIn().setHeader(BlobConstants.PAGE_BLOB_RANGE, pageRange); exchange.getIn().setBody(dataStream); }) .to("azure-storage-blob://camelazure/container1?blobName=blob&operation=uploadPageBlob&serviceClient=#client") .to("mock:result");
-
resizePageBlob
from("direct:start") .process(exchange -> { final PageRange pageRange = new PageRange().setStart(0).setEnd(511); exchange.getIn().setHeader(BlobConstants.PAGE_BLOB_RANGE, pageRange); }) .to("azure-storage-blob://camelazure/container1?blobName=blob&operation=resizePageBlob&serviceClient=#client") .to("mock:result");
-
clearPageBlob
from("direct:start") .process(exchange -> { final PageRange pageRange = new PageRange().setStart(0).setEnd(511); exchange.getIn().setHeader(BlobConstants.PAGE_BLOB_RANGE, pageRange); }) .to("azure-storage-blob://camelazure/container1?blobName=blob&operation=clearPageBlob&serviceClient=#client") .to("mock:result");
-
getPageBlobRanges
from("direct:start") .process(exchange -> { final PageRange pageRange = new PageRange().setStart(0).setEnd(511); exchange.getIn().setHeader(BlobConstants.PAGE_BLOB_RANGE, pageRange); }) .to("azure-storage-blob://camelazure/container1?blobName=blob&operation=getPageBlobRanges&serviceClient=#client") .log("${body}") .to("mock:result");
-
copyBlob
from("direct:copyBlob") .process(exchange -> { exchange.getIn().setHeader(BlobConstants.BLOB_NAME, "file.txt"); exchange.getMessage().setHeader(BlobConstants.SOURCE_BLOB_CONTAINER_NAME, "containerblob1"); exchange.getMessage().setHeader(BlobConstants.SOURCE_BLOB_ACCOUNT_NAME, "account"); }) .to("azure-storage-blob://account/containerblob2?operation=copyBlob&sourceBlobAccessKey=RAW(accessKey)") .to("mock:result");
In this way, the blob file.txt in the container containerblob1 of the account 'account' will be copied to the container containerblob2 of the same account.
9.5.8. Development Notes (Important)
All integration tests use Testcontainers and run by default. To run the integration tests against the real Azure services you also need an Azure accountName and accessKey. In addition to the mocked unit tests, you should run the integration tests with every change you make, and even on a client upgrade, as the Azure client can break things even on minor version upgrades. To run the integration tests, run the following Maven command in this component's directory:
mvn verify -PfullTests -DaccountName=myacc -DaccessKey=mykey
Whereby accountName
is your Azure account name and accessKey
is the access key generated from the Azure portal.
9.6. Spring Boot Auto-Configuration
When using azure-storage-blob with Spring Boot make sure to use the following Maven dependency to have support for auto configuration:
<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-azure-storage-blob-starter</artifactId> </dependency>
The component supports 32 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
camel.component.azure-storage-blob.access-key | Access key for the associated azure account name to be used for authentication with azure blob services. | String | |
camel.component.azure-storage-blob.autowired-enabled | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | Boolean |
camel.component.azure-storage-blob.blob-name | The blob name, to consume a specific blob from a container. However, on the producer it is only required for operations on the blob level. | String | |
camel.component.azure-storage-blob.blob-offset | Set the blob offset for the upload or download operations, default is 0. | 0 | Long |
camel.component.azure-storage-blob.blob-sequence-number | A user-controlled value that you can use to track requests. The value of the sequence number must be between 0 and 2^63 - 1. The default value is 0. | 0 | Long |
camel.component.azure-storage-blob.blob-type | The blob type in order to initiate the appropriate settings for each blob type. | BlobType | |
camel.component.azure-storage-blob.block-list-type | Specifies which type of blocks to return. | BlockListType | |
camel.component.azure-storage-blob.bridge-error-handler | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | Boolean |
camel.component.azure-storage-blob.change-feed-context | When using getChangeFeed producer operation, this gives additional context that is passed through the Http pipeline during the service call. The option is a com.azure.core.util.Context type. | Context | |
camel.component.azure-storage-blob.change-feed-end-time | When using getChangeFeed producer operation, this filters the results to return events approximately before the end time. Note: A few events belonging to the next hour can also be returned. A few events belonging to this hour can be missing; to ensure all events from the hour are returned, round the end time up by an hour. The option is a java.time.OffsetDateTime type. | OffsetDateTime | |
camel.component.azure-storage-blob.change-feed-start-time | When using getChangeFeed producer operation, this filters the results to return events approximately after the start time. Note: A few events belonging to the previous hour can also be returned. A few events belonging to this hour can be missing; to ensure all events from the hour are returned, round the start time down by an hour. The option is a java.time.OffsetDateTime type. | OffsetDateTime | |
camel.component.azure-storage-blob.close-stream-after-read | Close the stream after read or keep it open, default is true. | true | Boolean |
camel.component.azure-storage-blob.close-stream-after-write | Close the stream after write or keep it open, default is true. | true | Boolean |
camel.component.azure-storage-blob.commit-block-list-later | When set to true, the staged blocks will not be committed directly. | true | Boolean |
camel.component.azure-storage-blob.configuration | The component configurations. The option is a org.apache.camel.component.azure.storage.blob.BlobConfiguration type. | BlobConfiguration | |
camel.component.azure-storage-blob.create-append-blob | When set to true, the append blocks will be created when committing append blocks. | true | Boolean |
camel.component.azure-storage-blob.create-page-blob | When set to true, the page blob will be created when uploading a page blob. | true | Boolean |
camel.component.azure-storage-blob.credentials | StorageSharedKeyCredential can be injected to create the azure client, this holds the important authentication information. The option is a com.azure.storage.common.StorageSharedKeyCredential type. | StorageSharedKeyCredential | |
camel.component.azure-storage-blob.data-count | How many bytes to include in the range. Must be greater than or equal to 0 if specified. | Long | |
camel.component.azure-storage-blob.download-link-expiration | Override the default expiration (millis) of URL download link. | Long | |
camel.component.azure-storage-blob.enabled | Whether to enable auto configuration of the azure-storage-blob component. This is enabled by default. | Boolean | |
camel.component.azure-storage-blob.file-dir | The file directory where the downloaded blobs will be saved to; this can be used by both producer and consumer. | String | |
camel.component.azure-storage-blob.lazy-start-producer | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | Boolean |
camel.component.azure-storage-blob.max-results-per-page | Specifies the maximum number of blobs to return, including all BlobPrefix elements. If the request does not specify maxResultsPerPage or specifies a value greater than 5,000, the server will return up to 5,000 items. | Integer | |
camel.component.azure-storage-blob.max-retry-requests | Specifies the maximum number of additional HTTP Get requests that will be made while reading the data from a response body. | 0 | Integer |
camel.component.azure-storage-blob.operation | The blob operation that can be used with this component on the producer. | BlobOperationsDefinition | |
camel.component.azure-storage-blob.page-blob-size | Specifies the maximum size for the page blob, up to 8 TB. The page blob size must be aligned to a 512-byte boundary. | 512 | Long |
camel.component.azure-storage-blob.prefix | Filters the results to return only blobs whose names begin with the specified prefix. May be null to return all blobs. | String | |
camel.component.azure-storage-blob.regex | Filters the results to return only blobs whose names match the specified regular expression. May be null to return all blobs. If both prefix and regex are set, regex takes priority and prefix is ignored. | String | |
camel.component.azure-storage-blob.service-client | Client to a storage account. This client does not hold any state about a particular storage account but is instead a convenient way of sending off appropriate requests to the resource on the service. It may also be used to construct URLs to blobs and containers. This client contains operations on a service account. Operations on a container are available on BlobContainerClient through BlobServiceClient#getBlobContainerClient(String), and operations on a blob are available on BlobClient through BlobContainerClient#getBlobClient(String). The option is a com.azure.storage.blob.BlobServiceClient type. | BlobServiceClient | |
camel.component.azure-storage-blob.source-blob-access-key | Source Blob Access Key: for the copyBlob operation, an accessKey for the source blob to be copied is required. Since passing an accessKey as a header is unsafe, it can be set through this option instead. | String | |
camel.component.azure-storage-blob.timeout | An optional timeout value beyond which a RuntimeException will be raised. The option is a java.time.Duration type. | Duration |
Chapter 10. Azure Storage Queue Service
Both producer and consumer are supported
The Azure Storage Queue component supports storing and retrieving messages to/from the Azure Storage Queue service using Azure APIs v12. For versions above v12, whether this component can adopt the changes will depend on how many breaking changes they introduce.
Prerequisites
You must have a valid Windows Azure Storage account. More information is available at Azure Documentation Portal.
Maven users will need to add the following dependency to their pom.xml
for this component:
<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-azure-storage-queue</artifactId> <version>{CamelSBVersion}</version> <!-- use the same version as your Camel core version --> </dependency>
10.1. URI Format
azure-storage-queue://accountName[/queueName][?options]
For the consumer, accountName and queueName are required. For the producer, it depends on the operation being requested: if the operation is at the service level, e.g. listQueues, only accountName is required; if the operation is at the queue level, e.g. createQueue, sendMessage, etc., both accountName and queueName are required.
The queue will be created if it does not already exist. You can append query options to the URI in the following format,
?option1=value&option2=value&…
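For illustration, a hedged sketch contrasting a service level operation (only accountName in the URI path) with a queue level operation (accountName and queueName), assuming an account named cameldev, a queue named queue1 and a QueueServiceClient bean registered as client:

// service level operation: only the account name is needed in the path
from("direct:listQueues")
    .to("azure-storage-queue://cameldev?serviceClient=#client&operation=listQueues");

// queue level operation: both account name and queue name are needed in the path
from("direct:createQueue")
    .to("azure-storage-queue://cameldev/queue1?serviceClient=#client&operation=createQueue");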
10.2. Configuring Options
Camel components are configured on two separate levels:
- component level
- endpoint level
10.2.1. Configuring Component Options
The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth.
Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all.
Configuring components can be done with the Component DSL, in a configuration file (application.properties|yaml), or directly with Java code.
10.2.2. Configuring Endpoint Options
Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both.
Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints.
A good practice when configuring options is to use Property Placeholders, which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse.
The following two sections lists all the options, firstly for the component followed by the endpoint.
10.3. Component Options
The Azure Storage Queue Service component supports 15 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
configuration (common) | The component configurations. | QueueConfiguration | |
serviceClient (common) | Autowired Service client to a storage account to interact with the queue service. This client does not hold any state about a particular storage account but is instead a convenient way of sending off appropriate requests to the resource on the service. This client contains all the operations for interacting with a queue account in Azure Storage. Operations allowed by the client are creating, listing, and deleting queues, retrieving and updating properties of the account, and retrieving statistics of the account. | QueueServiceClient | |
bridgeErrorHandler (consumer) | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | boolean |
createQueue (producer) | When set to true, the queue will be automatically created when sending messages to the queue. | false | boolean |
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
operation (producer) | Queue service operation hint to the producer. Enum values: listQueues, createQueue, deleteQueue, clearQueue, sendMessage, deleteMessage, receiveMessages, peekMessages, updateMessage | QueueOperationDefinition | |
autowiredEnabled (advanced) | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | boolean |
maxMessages (queue) | Maximum number of messages to get. If fewer messages exist in the queue than requested, all the messages will be returned. If left empty, only 1 message will be retrieved; the allowed range is 1 to 32 messages. | 1 | Integer |
messageId (queue) | The ID of the message to be deleted or updated. | String | |
popReceipt (queue) | Unique identifier that must match for the message to be deleted or updated. | String | |
timeout (queue) | An optional timeout applied to the operation. If a response is not returned before the timeout concludes a RuntimeException will be thrown. | Duration | |
timeToLive (queue) | How long the message will stay alive in the queue. If unset, the value will default to 7 days; if -1 is passed, the message will not expire. The time to live must be -1 or any positive number. The format should be in this form: PnDTnHnMn.nS, e.g. PT20.345S parses as 20.345 seconds and P2D parses as 2 days. However, in case you are using EndpointDsl/ComponentDsl, you can do something like Duration.ofSeconds() since these Java APIs are typesafe. | Duration |
visibilityTimeout (queue) | The timeout period for how long the message is invisible in the queue. The timeout must be between 1 second and 7 days. The format should be in this form: PnDTnHnMn.nS, e.g. PT20.345S parses as 20.345 seconds and P2D parses as 2 days. However, in case you are using EndpointDsl/ComponentDsl, you can do something like Duration.ofSeconds() since these Java APIs are typesafe. | Duration |
accessKey (security) | Access key for the associated azure account name to be used for authentication with azure queue services. | String | |
credentials (security) | StorageSharedKeyCredential can be injected to create the azure client, this holds the important authentication information. | StorageSharedKeyCredential |
10.4. Endpoint Options
The Azure Storage Queue Service endpoint is configured using URI syntax:
azure-storage-queue:accountName/queueName
with the following path and query parameters:
10.4.1. Path Parameters (2 parameters)
Name | Description | Default | Type |
---|---|---|---|
accountName (common) | Azure account name to be used for authentication with azure queue services. | String | |
queueName (common) | The queue resource name. | String |
10.4.2. Query Parameters (31 parameters)
Name | Description | Default | Type |
---|---|---|---|
serviceClient (common) | Autowired Service client to a storage account to interact with the queue service. This client does not hold any state about a particular storage account but is instead a convenient way of sending off appropriate requests to the resource on the service. This client contains all the operations for interacting with a queue account in Azure Storage. Operations allowed by the client are creating, listing, and deleting queues, retrieving and updating properties of the account, and retrieving statistics of the account. | QueueServiceClient | |
bridgeErrorHandler (consumer) | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | boolean |
sendEmptyMessageWhenIdle (consumer) | If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead. | false | boolean |
exceptionHandler (consumer (advanced)) | To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. | ExceptionHandler | |
exchangePattern (consumer (advanced)) | Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly, InOut, InOptionalOut | ExchangePattern | |
pollStrategy (consumer (advanced)) | A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel. | PollingConsumerPollStrategy | |
createQueue (producer) | When set to true, the queue will be automatically created when sending messages to the queue. | false | boolean |
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
operation (producer) | Queue service operation hint to the producer. Enum values: listQueues, createQueue, deleteQueue, clearQueue, sendMessage, deleteMessage, receiveMessages, peekMessages, updateMessage | QueueOperationDefinition | |
maxMessages (queue) | Maximum number of messages to get. If fewer messages exist in the queue than requested, all the messages will be returned. If left empty, only 1 message will be retrieved; the allowed range is 1 to 32 messages. | 1 | Integer |
messageId (queue) | The ID of the message to be deleted or updated. | String | |
popReceipt (queue) | Unique identifier that must match for the message to be deleted or updated. | String | |
timeout (queue) | An optional timeout applied to the operation. If a response is not returned before the timeout concludes a RuntimeException will be thrown. | Duration | |
timeToLive (queue) | How long the message will stay alive in the queue. If unset, the value will default to 7 days; if -1 is passed, the message will not expire. The time to live must be -1 or any positive number. The format should be in this form: PnDTnHnMn.nS, e.g. PT20.345S parses as 20.345 seconds and P2D parses as 2 days. However, in case you are using EndpointDsl/ComponentDsl, you can do something like Duration.ofSeconds() since these Java APIs are typesafe. | Duration |
visibilityTimeout (queue) | The timeout period for how long the message is invisible in the queue. The timeout must be between 1 second and 7 days. The format should be in this form: PnDTnHnMn.nS, e.g. PT20.345S parses as 20.345 seconds and P2D parses as 2 days. However, in case you are using EndpointDsl/ComponentDsl, you can do something like Duration.ofSeconds() since these Java APIs are typesafe. | Duration |
backoffErrorThreshold (scheduler) | The number of subsequent error polls (failed due to some error) that should happen before the backoffMultiplier should kick-in. | int |
backoffIdleThreshold (scheduler) | The number of subsequent idle polls that should happen before the backoffMultiplier should kick-in. | int |
backoffMultiplier (scheduler) | To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the next actual attempt is happening again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured. | int | |
delay (scheduler) | Milliseconds before the next poll. | 500 | long |
greedy (scheduler) | If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the previous run polled 1 or more messages. | false | boolean |
initialDelay (scheduler) | Milliseconds before the first poll starts. | 1000 | long |
repeatCount (scheduler) | Specifies a maximum limit of number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever. | 0 | long |
runLoggingLevel (scheduler) | The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that. Enum values: TRACE, DEBUG, INFO, WARN, ERROR, OFF | TRACE | LoggingLevel |
scheduledExecutorService (scheduler) | Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool. | ScheduledExecutorService | |
scheduler (scheduler) | To use a cron scheduler from either camel-spring or camel-quartz component. Use value spring or quartz for built in scheduler. | none | Object |
schedulerProperties (scheduler) | To configure additional properties when using a custom scheduler or any of the Quartz, Spring based scheduler. | Map | |
startScheduler (scheduler) | Whether the scheduler should be auto started. | true | boolean |
timeUnit (scheduler) | Time unit for initialDelay and delay options. Enum values: NANOSECONDS, MICROSECONDS, MILLISECONDS, SECONDS, MINUTES, HOURS, DAYS | MILLISECONDS | TimeUnit |
useFixedDelay (scheduler) | Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details. | true | boolean |
accessKey (security) | Access key for the associated azure account name to be used for authentication with azure queue services. | String | |
credentials (security) | StorageSharedKeyCredential can be injected to create the azure client, this holds the important authentication information. | StorageSharedKeyCredential |
Required information options
To use this component, you have 3 options in order to provide the required Azure authentication information:
- Provide accountName and accessKey for your Azure account; this is the simplest way to get started (see the sketch after this list). The accessKey can be generated through your Azure portal.
- Provide a StorageSharedKeyCredential instance, which can be provided via the credentials option.
- Provide a QueueServiceClient instance, which can be provided via the serviceClient option. Note: You don’t need to create a specific client, e.g. QueueClient; the QueueServiceClient represents the upper level, which can be used to retrieve lower level clients.
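For the first option above, a minimal sketch that passes accountName in the URI path and accessKey as a query parameter; the account name, queue name and key shown here are placeholders:

// simplest setup: account name in the URI path, access key as a query parameter
from("azure-storage-queue://yourAccountName/yourQueueName?accessKey=yourAccessKey")
    .to("mock:result");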
10.5. Usage
For example, in order to get the message content from the queue messageQueue
in the storageAccount
storage account, use the following snippet:
from("azure-storage-queue://storageAccount/messageQueue?accessKey=yourAccessKey"). to("file://queuedirectory");
10.5.1. Message headers evaluated by the component producer
Header | Variable Name | Type | Operations | Description |
---|---|---|---|---|
| | | | Options for listing queues |
| | | All | An optional timeout value beyond which a RuntimeException will be raised. |
| | | | Metadata to associate with the queue |
| | | | How long the message will stay alive in the queue. If unset the value will default to 7 days, if -1 is passed the message will not expire. The time to live must be -1 or any positive number. |
| | | | The timeout period for how long the message is invisible in the queue. If unset the value will default to 0 and the message will be instantly visible. The timeout must be between 0 seconds and 7 days. |
| | | | When set to true, the queue will be automatically created when sending messages to the queue. |
| | | | Unique identifier that must match for the message to be deleted or updated. |
| | | | The ID of the message to be deleted or updated. |
| | | | Maximum number of messages to get. If fewer messages exist in the queue than requested, all the messages will be returned. If left empty, only 1 message will be retrieved; the allowed range is 1 to 32 messages. |
| | | All | Specify the producer operation to execute, please see the doc on this page related to producer operation. |
| | | All | Override the queue name. |
10.5.2. Message headers set by either component producer or consumer
Header | Variable Name | Type | Description |
---|---|---|---|
| | | The ID of the message being sent to the queue. |
| | | The time the Message was inserted into the Queue. |
| | | The time that the Message will expire and be automatically deleted. |
| | | This value is required to delete/update the Message. If deletion fails using this popreceipt then the message has been dequeued by another client. |
| | | The time that the message will again become visible in the Queue. |
| | | The number of times the message has been dequeued. |
| | | Returns non-parsed httpHeaders that can be used by the user. |
10.5.3. Advanced Azure Storage Queue configuration
If your Camel Application is running behind a firewall or if you need to have more control over the QueueServiceClient
instance configuration, you can create your own instance:
StorageSharedKeyCredential credential = new StorageSharedKeyCredential("yourAccountName", "yourAccessKey"); String uri = String.format("https://%s.queue.core.windows.net", "yourAccountName"); QueueServiceClient client = new QueueServiceClientBuilder() .endpoint(uri) .credential(credential) .buildClient(); // This is camel context context.getRegistry().bind("client", client);
Then refer to this instance in your Camel azure-storage-queue
component configuration:
from("azure-storage-queue://cameldev/queue1?serviceClient=#client") .to("file://outputFolder?fileName=output.txt&fileExist=Append");
10.5.4. Automatic detection of QueueServiceClient client in registry
The component is capable of detecting the presence of a QueueServiceClient bean in the registry. If it is the only instance of that type, it will be used as the client and you won’t have to define it as a URI parameter, as in the example above. This can be very useful for smarter configuration of the endpoint.
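A minimal sketch of relying on this detection, assuming the QueueServiceClient from the previous snippet is the only bean of that type in the registry; the serviceClient URI parameter can then be omitted:

// the single QueueServiceClient bean in the registry is picked up automatically,
// so serviceClient=#client is not needed on the endpoint
from("azure-storage-queue://cameldev/queue1")
    .to("file://outputFolder?fileName=output.txt&fileExist=Append");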
10.5.5. Azure Storage Queue Producer operations
The Camel Azure Storage Queue component provides a wide range of operations on the producer side:
Operations on the service level
For these operations, accountName
is required.
Operation | Description |
---|---|
listQueues | Lists the queues in the storage account that pass the filter starting at the specified marker. |
Operations on the queue level
For these operations, accountName
and queueName
are required.
Operation | Description |
---|---|
createQueue | Creates a new queue. |
deleteQueue | Permanently deletes the queue. |
clearQueue | Deletes all messages in the queue. |
sendMessage | Default Producer Operation. Sends a message with a given time-to-live and a timeout period where the message is invisible in the queue. The message text is evaluated from the exchange message body. By default, if the queue doesn’t exist, it will create an empty queue first. If you want to disable this, set the createQueue option to false. |
deleteMessage | Deletes the specified message in the queue. |
receiveMessages | Retrieves up to the maximum number of messages from the queue and hides them from other operations for the timeout period. However, it will not dequeue the message from the queue due to reliability reasons. |
peekMessages | Peek messages from the front of the queue up to the maximum number of messages. |
updateMessage | Updates the specific message in the queue with a new message and resets the visibility timeout. The message text is evaluated from the exchange message body. |
Refer to the examples section on this page to learn how to use these operations in your Camel application.
10.5.6. Consumer Examples
To consume messages from a queue into a file component, with a maximum of 5 messages per batch, you can do the following:
from("azure-storage-queue://cameldev/queue1?serviceClient=#client&maxMessages=5") .to("file://outputFolder?fileName=output.txt&fileExist=Append");
10.5.7. Producer Operations Examples
-
listQueues
:
from("direct:start") .process(exchange -> { // set the header you want the producer to evaluate, refer to the previous // section to learn about the headers that can be set // e.g, to only returns list of queues with 'awesome' prefix: exchange.getIn().setHeader(QueueConstants.QUEUES_SEGMENT_OPTIONS, new QueuesSegmentOptions().setPrefix("awesome")); }) .to("azure-storage-queue://cameldev?serviceClient=#client&operation=listQueues") .log("${body}") .to("mock:result");
-
createQueue
:
from("direct:start") .process(exchange -> { // set the header you want the producer to evaluate, refer to the previous // section to learn about the headers that can be set // e.g: exchange.getIn().setHeader(QueueConstants.QUEUE_NAME, "overrideName"); }) .to("azure-storage-queue://cameldev/test?serviceClient=#client&operation=createQueue");
-
deleteQueue
:
from("direct:start") .process(exchange -> { // set the header you want the producer to evaluate, refer to the previous // section to learn about the headers that can be set // e.g: exchange.getIn().setHeader(QueueConstants.QUEUE_NAME, "overrideName"); }) .to("azure-storage-queue://cameldev/test?serviceClient=#client&operation=deleteQueue");
-
clearQueue
:
from("direct:start") .process(exchange -> { // set the header you want the producer to evaluate, refer to the previous // section to learn about the headers that can be set // e.g: exchange.getIn().setHeader(QueueConstants.QUEUE_NAME, "overrideName"); }) .to("azure-storage-queue://cameldev/test?serviceClient=#client&operation=clearQueue");
-
sendMessage
:
from("direct:start") .process(exchange -> { // set the header you want the producer to evaluate, refer to the previous // section to learn about the headers that can be set // e.g: exchange.getIn().setBody("message to send"); // we set a visibility of 1min exchange.getIn().setHeader(QueueConstants.VISIBILITY_TIMEOUT, Duration.ofMinutes(1)); }) .to("azure-storage-queue://cameldev/test?serviceClient=#client");
-
deleteMessage
:
from("direct:start") .process(exchange -> { // set the header you want the producer to evaluate, refer to the previous // section to learn about the headers that can be set // e.g: // Mandatory header: exchange.getIn().setHeader(QueueConstants.MESSAGE_ID, "1"); // Mandatory header: exchange.getIn().setHeader(QueueConstants.POP_RECEIPT, "PAAAAHEEERXXX-1"); }) .to("azure-storage-queue://cameldev/test?serviceClient=#client&operation=deleteMessage");
-
receiveMessages
:
from("direct:start") .to("azure-storage-queue://cameldev/test?serviceClient=#client&operation=receiveMessages") .process(exchange -> { final List<QueueMessageItem> messageItems = exchange.getMessage().getBody(List.class); messageItems.forEach(messageItem -> System.out.println(messageItem.getMessageText())); }) .to("mock:result");
-
peekMessages
:
from("direct:start") .to("azure-storage-queue://cameldev/test?serviceClient=#client&operation=peekMessages") .process(exchange -> { final List<PeekedMessageItem> messageItems = exchange.getMessage().getBody(List.class); messageItems.forEach(messageItem -> System.out.println(messageItem.getMessageText())); }) .to("mock:result");
-
updateMessage
:
from("direct:start") .process(exchange -> { // set the header you want the producer to evaluate, refer to the previous // section to learn about the headers that can be set // e.g: exchange.getIn().setBody("new message text"); // Mandatory header: exchange.getIn().setHeader(QueueConstants.MESSAGE_ID, "1"); // Mandatory header: exchange.getIn().setHeader(QueueConstants.POP_RECEIPT, "PAAAAHEEERXXX-1"); // Mandatory header: exchange.getIn().setHeader(QueueConstants.VISIBILITY_TIMEOUT, Duration.ofMinutes(1)); }) .to("azure-storage-queue://cameldev/test?serviceClient=#client&operation=updateMessage");
10.5.8. Development Notes (Important)
When developing on this component, you will need to obtain your Azure accessKey in order to run the integration tests. In addition to the mocked unit tests, you should run the integration tests with every change you make, and even on a client upgrade, as the Azure client can break things even on minor version upgrades. To run the integration tests, run the following Maven command in this component's directory:
mvn verify -PfullTests -DaccountName=myacc -DaccessKey=mykey
Whereby accountName
is your Azure account name and accessKey
is the access key generated from the Azure portal.
10.6. Spring Boot Auto-Configuration
When using azure-storage-queue with Spring Boot make sure to use the following Maven dependency to have support for auto configuration:
<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-azure-storage-queue-starter</artifactId> </dependency>
The component supports 16 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
camel.component.azure-storage-queue.access-key | Access key for the associated azure account name to be used for authentication with azure queue services. | String | |
camel.component.azure-storage-queue.autowired-enabled | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | Boolean |
camel.component.azure-storage-queue.bridge-error-handler | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | Boolean |
camel.component.azure-storage-queue.configuration | The component configurations. The option is a org.apache.camel.component.azure.storage.queue.QueueConfiguration type. | QueueConfiguration | |
camel.component.azure-storage-queue.create-queue | When set to true, the queue will be automatically created when sending messages to the queue. | false | Boolean |
camel.component.azure-storage-queue.credentials | StorageSharedKeyCredential can be injected to create the azure client, this holds the important authentication information. The option is a com.azure.storage.common.StorageSharedKeyCredential type. | StorageSharedKeyCredential | |
camel.component.azure-storage-queue.enabled | Whether to enable auto configuration of the azure-storage-queue component. This is enabled by default. | Boolean | |
camel.component.azure-storage-queue.lazy-start-producer | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | Boolean |
camel.component.azure-storage-queue.max-messages | Maximum number of messages to get. If fewer messages exist in the queue than requested, all the messages will be returned. If left empty, only 1 message will be retrieved; the allowed range is 1 to 32 messages. | 1 | Integer |
camel.component.azure-storage-queue.message-id | The ID of the message to be deleted or updated. | String | |
camel.component.azure-storage-queue.operation | Queue service operation hint to the producer. | QueueOperationDefinition | |
camel.component.azure-storage-queue.pop-receipt | Unique identifier that must match for the message to be deleted or updated. | String | |
camel.component.azure-storage-queue.service-client | Service client to a storage account to interact with the queue service. This client does not hold any state about a particular storage account but is instead a convenient way of sending off appropriate requests to the resource on the service. This client contains all the operations for interacting with a queue account in Azure Storage. Operations allowed by the client are creating, listing, and deleting queues, retrieving and updating properties of the account, and retrieving statistics of the account. The option is a com.azure.storage.queue.QueueServiceClient type. | QueueServiceClient | |
camel.component.azure-storage-queue.time-to-live | How long the message will stay alive in the queue. If unset, the value will default to 7 days; if -1 is passed, the message will not expire. The time to live must be -1 or any positive number. The format should be in this form: PnDTnHnMn.nS, e.g. PT20.345S parses as 20.345 seconds and P2D parses as 2 days. However, in case you are using EndpointDsl/ComponentDsl, you can do something like Duration.ofSeconds() since these Java APIs are typesafe. The option is a java.time.Duration type. | Duration |
camel.component.azure-storage-queue.timeout | An optional timeout applied to the operation. If a response is not returned before the timeout concludes a RuntimeException will be thrown. The option is a java.time.Duration type. | Duration | |
camel.component.azure-storage-queue.visibility-timeout | The timeout period for how long the message is invisible in the queue. The timeout must be between 1 second and 7 days. The format should be in this form: PnDTnHnMn.nS, e.g. PT20.345S parses as 20.345 seconds and P2D parses as 2 days. However, in case you are using EndpointDsl/ComponentDsl, you can do something like Duration.ofSeconds() since these Java APIs are typesafe. The option is a java.time.Duration type. | Duration |
Chapter 11. Bean
Only producer is supported
The Bean component binds beans to Camel message exchanges.
11.1. URI format
bean:beanName[?options]
Where beanName can be any string, which is used to look up the bean in the Registry.
11.2. Configuring Options
Camel components are configured on two separate levels:
- component level
- endpoint level
11.2.1. Configuring Component Options
The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth.
Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all.
Configuring components can be done with the Component DSL, in a configuration file (application.properties|yaml), or directly with Java code.
11.2.2. Configuring Endpoint Options
Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both.
Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints.
A good practice when configuring options is to use Property Placeholders, which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse.
The following two sections lists all the options, firstly for the component followed by the endpoint.
11.3. Component Options
The Bean component supports 4 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
cache (producer) | Deprecated Use singleton option instead. | true | Boolean |
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
scope (producer) | Scope of bean. When using singleton scope (default) the bean is created or looked up only once and reused for the lifetime of the endpoint. The bean should be thread-safe in case concurrent threads are calling the bean at the same time. When using request scope the bean is created or looked up once per request (exchange). This can be used if you want to store state on a bean while processing a request and you want to call the same bean instance multiple times while processing the request. The bean does not have to be thread-safe as the instance is only called from the same request. When using delegate scope, the bean will be looked up or created per call. However, in case of lookup this is delegated to the bean registry such as Spring or CDI (if in use), which depending on its configuration can act as either singleton or prototype scope. So when using prototype, this depends on the delegated registry. Enum values: Singleton, Request, Prototype | Singleton | BeanScope |
autowiredEnabled (advanced) | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | boolean |
11.4. Endpoint Options
The Bean endpoint is configured using URI syntax:
bean:beanName
with the following path and query parameters:
11.4.1. Path Parameters (1 parameters)
Name | Description | Default | Type |
---|---|---|---|
beanName (common) | Required Sets the name of the bean to invoke. | String |
11.4.2. Query Parameters (5 parameters)
Name | Description | Default | Type |
---|---|---|---|
cache (common) | Deprecated Use scope option instead. | Boolean | |
method (common) | Sets the name of the method to invoke on the bean. | String | |
scope (common) | Scope of bean. When using singleton scope (default) the bean is created or looked up only once and reused for the lifetime of the endpoint. The bean should be thread-safe in case concurrent threads are calling the bean at the same time. When using request scope the bean is created or looked up once per request (exchange). This can be used if you want to store state on a bean while processing a request and you want to call the same bean instance multiple times while processing the request. The bean does not have to be thread-safe as the instance is only called from the same request. When using prototype scope, the bean will be looked up or created per call. However, in case of lookup this is delegated to the bean registry such as Spring or CDI (if in use), which depending on its configuration can act as either singleton or prototype scope. So when using prototype, this depends on the delegated registry. Enum values: Singleton, Request, Prototype | Singleton | BeanScope |
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
parameters (advanced) | Used for configuring additional properties on the bean. | Map |
11.5. Using
The object instance that is used to consume messages must be explicitly registered with the Registry. For example, if you are using Spring you must define the bean in the Spring configuration XML file.
You can also register beans manually via Camel’s Registry
with the bind
method.
Once an endpoint has been registered, you can build Camel routes that use it to process exchanges.
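For example, a minimal sketch of registering a bean manually with the bind method; the ByeService class and the bean name bye are illustrative only:

// an illustrative POJO
public class ByeService {
    public String bye(String name) {
        return "Bye " + name;
    }
}

// bind an instance into the Camel registry under the name "bye"
camelContext.getRegistry().bind("bye", new ByeService());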
A bean: endpoint cannot be defined as the input to the route; i.e. you cannot consume from it, you can only route from some inbound message Endpoint to the bean endpoint as output. So consider using a direct: or queue: endpoint as the input.
You can use the createProxy()
methods on ProxyHelper to create a proxy that will generate exchanges and send them to any endpoint:
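A hedged sketch of such a proxy, assuming an illustrative SayService interface and the bean registered above; the Java DSL form of the route follows, and the same route in XML DSL is shown after it:

// an illustrative interface; calls on the proxy are turned into exchanges
public interface SayService {
    String say(String input);
}

// create a proxy bound to the direct:hello endpoint and invoke it
Endpoint endpoint = camelContext.getEndpoint("direct:hello");
SayService proxy = ProxyHelper.createProxy(endpoint, SayService.class);
String reply = proxy.say("Camel");

// the route that receives the generated exchanges, in Java DSL
from("direct:hello")
    .to("bean:bye");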
And the same route using XML DSL:
<route> <from uri="direct:hello"/> <to uri="bean:bye"/> </route>
11.6. Bean as endpoint
Camel also supports invoking Bean as an Endpoint. When the exchange is routed to the myBean endpoint, Camel will use the Bean Binding to invoke the bean. The source for the bean is just a plain POJO.
Camel will use Bean Binding to invoke the sayHello
method, by converting the Exchange’s In body to the String
type and storing the output of the method on the Exchange Out body.
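A sketch of what this could look like, assuming a route that sends the exchange to a bean:myBean endpoint and a plain POJO registered under that name; the class and names are illustrative:

// route the exchange to the bean endpoint
from("direct:start")
    .to("bean:myBean");

// a plain POJO; Bean Binding converts the In body to String and
// stores the method's return value on the Out body
public class MyBean {
    public String sayHello(String name) {
        return "Hello " + name;
    }
}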
11.7. Java DSL bean syntax
Java DSL comes with syntactic sugar for the component. Instead of specifying the bean explicitly as the endpoint (i.e. to("bean:beanName")
) you can use the following syntax:
// Send message to the bean endpoint // and invoke method resolved using Bean Binding. from("direct:start").bean("beanName"); // Send message to the bean endpoint // and invoke given method. from("direct:start").bean("beanName", "methodName");
Instead of passing the name of the bean reference (so that Camel will look it up in the registry), you can specify the bean itself:
// Send message to the given bean instance. from("direct:start").bean(new ExampleBean()); // Explicit selection of bean method to be invoked. from("direct:start").bean(new ExampleBean(), "methodName"); // Camel will create the instance of bean and cache it for you. from("direct:start").bean(ExampleBean.class);
11.8. Bean Binding
How bean methods to be invoked are chosen (if they are not specified explicitly through the method parameter) and how parameter values are constructed from the Message are all defined by the Bean Binding mechanism which is used throughout all of the various Bean Integration mechanisms in Camel.
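As an illustration, a hedged sketch of how binding annotations can map message content onto method parameters; the bean class and header name are illustrative:

import org.apache.camel.Body;
import org.apache.camel.Header;

public class OrderService {
    // the message body is bound to the first parameter,
    // the named header to the second parameter
    public String process(@Body String order, @Header("customerId") String customerId) {
        return "Processed " + order + " for customer " + customerId;
    }
}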
11.9. Spring Boot Auto-Configuration
When using bean with Spring Boot make sure to use the following Maven dependency to have support for auto configuration:
<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-bean-starter</artifactId> </dependency>
The component supports 13 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
camel.component.bean.autowired-enabled | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | Boolean |
camel.component.bean.enabled | Whether to enable auto configuration of the bean component. This is enabled by default. | Boolean | |
camel.component.bean.lazy-start-producer | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | Boolean |
camel.component.bean.scope | Scope of bean. When using singleton scope (default) the bean is created or looked up only once and reused for the lifetime of the endpoint. The bean should be thread-safe in case concurrent threads is calling the bean at the same time. When using request scope the bean is created or looked up once per request (exchange). This can be used if you want to store state on a bean while processing a request and you want to call the same bean instance multiple times while processing the request. The bean does not have to be thread-safe as the instance is only called from the same request. When using delegate scope, then the bean will be looked up or created per call. However in case of lookup then this is delegated to the bean registry such as Spring or CDI (if in use), which depends on their configuration can act as either singleton or prototype scope. so when using prototype then this depends on the delegated registry. | BeanScope | |
camel.component.class.autowired-enabled | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | Boolean |
camel.component.class.enabled | Whether to enable auto configuration of the class component. This is enabled by default. | Boolean | |
camel.component.class.lazy-start-producer | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | Boolean |
camel.component.class.scope | Scope of bean. When using singleton scope (default) the bean is created or looked up only once and reused for the lifetime of the endpoint. The bean should be thread-safe in case concurrent threads is calling the bean at the same time. When using request scope the bean is created or looked up once per request (exchange). This can be used if you want to store state on a bean while processing a request and you want to call the same bean instance multiple times while processing the request. The bean does not have to be thread-safe as the instance is only called from the same request. When using delegate scope, then the bean will be looked up or created per call. However in case of lookup then this is delegated to the bean registry such as Spring or CDI (if in use), which depends on their configuration can act as either singleton or prototype scope. so when using prototype then this depends on the delegated registry. | BeanScope | |
camel.language.bean.enabled | Whether to enable auto configuration of the bean language. This is enabled by default. | Boolean | |
camel.language.bean.scope | Scope of bean. When using singleton scope (default) the bean is created or looked up only once and reused for the lifetime of the endpoint. The bean should be thread-safe in case concurrent threads is calling the bean at the same time. When using request scope the bean is created or looked up once per request (exchange). This can be used if you want to store state on a bean while processing a request and you want to call the same bean instance multiple times while processing the request. The bean does not have to be thread-safe as the instance is only called from the same request. When using prototype scope, then the bean will be looked up or created per call. However in case of lookup then this is delegated to the bean registry such as Spring or CDI (if in use), which depends on their configuration can act as either singleton or prototype scope. So when using prototype scope then this depends on the bean registry implementation. | Singleton | String |
camel.language.bean.trim | Whether to trim the value to remove leading and trailing whitespaces and line breaks. | true | Boolean |
camel.component.bean.cache | Deprecated Use singleton option instead. | true | Boolean |
camel.component.class.cache | Deprecated Use singleton option instead. | true | Boolean |
Chapter 12. Bean Validator
Only producer is supported
The Bean Validator component performs bean validation of the message body using the Java Bean Validation API. Camel uses the reference implementation, which is Hibernate Validator.
Maven users will need to add the following dependency to their pom.xml
for this component:
<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-bean-validator</artifactId> <version>{CamelSBVersion}</version> <!-- use the same version as your Camel core version --> </dependency>
12.1. URI format
bean-validator:label[?options]
Where label is an arbitrary text value describing the endpoint. You can append query options to the URI in the following format,
?option=value&option=value&…
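A minimal sketch of using the endpoint, assuming a POJO annotated with standard Bean Validation constraints; the class, constraint and endpoint label are illustrative:

import javax.validation.constraints.NotNull;

public class Car {
    @NotNull
    private String manufacturer;

    public String getManufacturer() {
        return manufacturer;
    }

    public void setManufacturer(String manufacturer) {
        this.manufacturer = manufacturer;
    }
}

// validate the incoming message body; a constraint violation fails the exchange
from("direct:validate")
    .to("bean-validator:myValidator");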
12.2. Configuring Options
Camel components are configured on two separate levels:
- component level
- endpoint level
12.2.1. Configuring Component Options
The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth.
Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all.
Configuring components can be done with the Component DSL, in a configuration file (application.properties|yaml), or directly with Java code.
12.2.2. Configuring Endpoint Options
Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both.
Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints.
A good practice when configuring options is to use Property Placeholders, which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse.
The following two sections lists all the options, firstly for the component followed by the endpoint.
12.3. Component Options
The Bean Validator component supports 8 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
ignoreXmlConfiguration (producer) | Whether to ignore data from the META-INF/validation.xml file. | false | boolean |
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
autowiredEnabled (advanced) | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | boolean |
constraintValidatorFactory (advanced) | To use a custom ConstraintValidatorFactory. | ConstraintValidatorFactory | |
messageInterpolator (advanced) | To use a custom MessageInterpolator. | MessageInterpolator | |
traversableResolver (advanced) | To use a custom TraversableResolver. | TraversableResolver | |
validationProviderResolver (advanced) | To use a custom ValidationProviderResolver. | ValidationProviderResolver | |
validatorFactory (advanced) | Autowired To use a custom ValidatorFactory. | ValidatorFactory |
12.4. Endpoint Options
The Bean Validator endpoint is configured using URI syntax:
bean-validator:label
with the following path and query parameters:
12.4.1. Path Parameters (1 parameters)
Name | Description | Default | Type |
---|---|---|---|
label (producer) | Required An arbitrary text value describing the endpoint. | String |
12.4.2. Query Parameters (8 parameters)
Name | Description | Default | Type |
---|---|---|---|
group (producer) | To use a custom validation group. | javax.validation.groups.Default | String |
ignoreXmlConfiguration (producer) | Whether to ignore data from the META-INF/validation.xml file. | false | boolean |
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
constraintValidatorFactory (advanced) | To use a custom ConstraintValidatorFactory. | ConstraintValidatorFactory | |
messageInterpolator (advanced) | To use a custom MessageInterpolator. | MessageInterpolator | |
traversableResolver (advanced) | To use a custom TraversableResolver. | TraversableResolver | |
validationProviderResolver (advanced) | To use a custom ValidationProviderResolver. | ValidationProviderResolver | |
validatorFactory (advanced) | To use a custom ValidatorFactory. | ValidatorFactory |
12.5. OSGi deployment
To use Hibernate Validator in an OSGi environment, use a dedicated ValidationProviderResolver implementation, such as org.apache.camel.component.bean.validator.HibernateValidationProviderResolver. The snippet below demonstrates this approach.
Using HibernateValidationProviderResolver
from("direct:test"). to("bean-validator://ValidationProviderResolverTest?validationProviderResolver=#myValidationProviderResolver");
<bean id="myValidationProviderResolver" class="org.apache.camel.component.bean.validator.HibernateValidationProviderResolver"/>
If no custom ValidationProviderResolver
is defined and the validator component has been deployed into the OSGi environment, the HibernateValidationProviderResolver
will be automatically used.
12.6. Example
Assume we have a Java bean with the following annotations:
Car.java
public class Car { @NotNull private String manufacturer; @NotNull @Size(min = 5, max = 14, groups = OptionalChecks.class) private String licensePlate; // getter and setter }
and an interface definition for our custom validation group
OptionalChecks.java
public interface OptionalChecks { }
With the following Camel route, only the @NotNull constraints on the attributes manufacturer and licensePlate will be validated (Camel uses the default group javax.validation.groups.Default).
from("direct:start") .to("bean-validator://x") .to("mock:end")
If you want to check the constraints from the group OptionalChecks
, you have to define the route like this
from("direct:start") .to("bean-validator://x?group=OptionalChecks") .to("mock:end")
If you want to check the constraints from both groups, you have to define a new interface first
AllChecks.java
@GroupSequence({Default.class, OptionalChecks.class}) public interface AllChecks { }
and then your route definition should look like this:
from("direct:start") .to("bean-validator://x?group=AllChecks") .to("mock:end")
If you want to provide your own message interpolator, traversable resolver, and constraint validator factory, declare them as beans and reference them from the route like this:
<bean id="myMessageInterpolator" class="my.MessageInterpolator" /> <bean id="myTraversableResolver" class="my.TraversableResolver" /> <bean id="myConstraintValidatorFactory" class="my.ConstraintValidatorFactory" />
from("direct:start") .to("bean-validator://x?group=AllChecks&messageInterpolator=#myMessageInterpolator &traversableResolver=#myTraversableResolver&constraintValidatorFactory=#myConstraintValidatorFactory") .to("mock:end")
It’s also possible to describe your constraints as XML and not as Java annotations. In this case, you have to provide the file META-INF/validation.xml
which could look like this:
validation.xml
<validation-config xmlns="http://jboss.org/xml/ns/javax/validation/configuration" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://jboss.org/xml/ns/javax/validation/configuration"> <default-provider>org.hibernate.validator.HibernateValidator</default-provider> <message-interpolator>org.hibernate.validator.engine.ResourceBundleMessageInterpolator</message-interpolator> <traversable-resolver>org.hibernate.validator.engine.resolver.DefaultTraversableResolver</traversable-resolver> <constraint-validator-factory>org.hibernate.validator.engine.ConstraintValidatorFactoryImpl</constraint-validator-factory> <constraint-mapping>/constraints-car.xml</constraint-mapping> </validation-config>
and the constraints-car.xml
file
constraints-car.xml
<constraint-mappings xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://jboss.org/xml/ns/javax/validation/mapping validation-mapping-1.0.xsd" xmlns="http://jboss.org/xml/ns/javax/validation/mapping"> <default-package>org.apache.camel.component.bean.validator</default-package> <bean class="CarWithoutAnnotations" ignore-annotations="true"> <field name="manufacturer"> <constraint annotation="javax.validation.constraints.NotNull" /> </field> <field name="licensePlate"> <constraint annotation="javax.validation.constraints.NotNull" /> <constraint annotation="javax.validation.constraints.Size"> <groups> <value>org.apache.camel.component.bean.validator.OptionalChecks</value> </groups> <element name="min">5</element> <element name="max">14</element> </constraint> </field> </bean> </constraint-mappings>
Here is the XML syntax for the example route definition for OrderedChecks.
Note that the body should include an instance of a class to validate.
<beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation=" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd"> <camelContext id="camel" xmlns="http://camel.apache.org/schema/spring"> <route> <from uri="direct:start"/> <to uri="bean-validator://x?group=org.apache.camel.component.bean.validator.OrderedChecks"/> </route> </camelContext> </beans>
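When a constraint is violated, the bean-validator endpoint throws an org.apache.camel.component.bean.validator.BeanValidationException. The following is a minimal sketch of handling such failures with doTry/doCatch; the route reuses the endpoint from the examples above, and the log messages and class name are illustrative only.
CarValidationRoute.java
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.bean.validator.BeanValidationException;

public class CarValidationRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("direct:start")
            .doTry()
                .to("bean-validator://x?group=AllChecks")
                .log("Car is valid")
            .doCatch(BeanValidationException.class)
                // the exception message lists the violated constraints
                .log("Validation failed: ${exception.message}")
            .end()
            .to("mock:end");
    }
}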
12.7. Spring Boot Auto-Configuration
When using bean-validator with Spring Boot make sure to use the following Maven dependency to have support for auto configuration:
<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-bean-validator-starter</artifactId> </dependency>
The component supports 9 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
camel.component.bean-validator.autowired-enabled | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | Boolean |
camel.component.bean-validator.constraint-validator-factory | To use a custom ConstraintValidatorFactory. The option is a javax.validation.ConstraintValidatorFactory type. | ConstraintValidatorFactory | |
camel.component.bean-validator.enabled | Whether to enable auto configuration of the bean-validator component. This is enabled by default. | Boolean | |
camel.component.bean-validator.ignore-xml-configuration | Whether to ignore data from the META-INF/validation.xml file. | false | Boolean |
camel.component.bean-validator.lazy-start-producer | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | Boolean |
camel.component.bean-validator.message-interpolator | To use a custom MessageInterpolator. The option is a javax.validation.MessageInterpolator type. | MessageInterpolator | |
camel.component.bean-validator.traversable-resolver | To use a custom TraversableResolver. The option is a javax.validation.TraversableResolver type. | TraversableResolver | |
camel.component.bean-validator.validation-provider-resolver | To use a custom ValidationProviderResolver. The option is a javax.validation.ValidationProviderResolver type. | ValidationProviderResolver | |
camel.component.bean-validator.validator-factory | To use a custom ValidatorFactory. The option is a javax.validation.ValidatorFactory type. | ValidatorFactory |
Chapter 13. Browse
Both producer and consumer are supported
The Browse component provides a simple BrowsableEndpoint which can be useful for testing, visualisation tools or debugging. The exchanges sent to the endpoint are all available to be browsed.
13.1. URI format
browse:someName[?options]
Where someName can be any string to uniquely identify the endpoint.
13.2. Configuring Options
Camel components are configured on two separate levels:
- component level
- endpoint level
13.2.1. Configuring Component Options
The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth.
Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all.
Configuring components can be done with the Component DSL, in a configuration file (application.properties|yaml), or directly with Java code.
13.2.2. Configuring Endpoint Options
Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both.
Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints.
A good practice when configuring options is to use Property Placeholders, which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse.
The following two sections lists all the options, firstly for the component followed by the endpoint.
13.3. Component Options
The Browse component supports 3 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
bridgeErrorHandler (consumer) | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | boolean |
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
autowiredEnabled (advanced) | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | boolean |
13.4. Endpoint Options
The Browse endpoint is configured using URI syntax:
browse:name
with the following path and query parameters:
13.4.1. Path Parameters (1 parameters)
Name | Description | Default | Type |
---|---|---|---|
name (common) | Required A name which can be any string to uniquely identify the endpoint. | String |
13.4.2. Query Parameters (4 parameters)
Name | Description | Default | Type |
---|---|---|---|
bridgeErrorHandler (consumer) | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | boolean |
exceptionHandler (consumer (advanced)) | To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. | ExceptionHandler | |
exchangePattern (consumer (advanced)) | Sets the exchange pattern when the consumer creates an exchange. Enum values:
| ExchangePattern | |
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
13.5. Sample
In the route below, we insert a browse:
component to be able to browse the Exchanges that are passing through:
from("activemq:order.in").to("browse:orderReceived").to("bean:processOrder");
We can now inspect the received exchanges from within the Java code:
private CamelContext context; public void inspectReceivedOrders() { BrowsableEndpoint browse = context.getEndpoint("browse:orderReceived", BrowsableEndpoint.class); List<Exchange> exchanges = browse.getExchanges(); // then we can inspect the list of received exchanges from Java for (Exchange exchange : exchanges) { String payload = exchange.getIn().getBody(String.class); // do something with payload } }
13.6. Spring Boot Auto-Configuration
When using browse with Spring Boot make sure to use the following Maven dependency to have support for auto configuration:
<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-browse-starter</artifactId> </dependency>
The component supports 4 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
camel.component.browse.autowired-enabled | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | Boolean |
camel.component.browse.bridge-error-handler | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | Boolean |
camel.component.browse.enabled | Whether to enable auto configuration of the browse component. This is enabled by default. | Boolean | |
camel.component.browse.lazy-start-producer | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | Boolean |
Chapter 14. Cassandra CQL
Both producer and consumer are supported
Apache Cassandra is an open source NoSQL database designed to handle large amounts of data on commodity hardware. Like Amazon’s DynamoDB, Cassandra has a peer-to-peer architecture with no single point of failure, which guarantees high availability. Like Google’s BigTable, Cassandra data is structured using column families which can be accessed through the Thrift RPC API or an SQL-like API called CQL.
This component aims at integrating Cassandra 2.0+ using the CQL3 API (not the Thrift API). It is based on the Cassandra Java Driver provided by DataStax.
14.1. Configuring Options
Camel components are configured on two separate levels:
- component level
- endpoint level
14.1.1. Configuring Component Options
The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth.
Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all.
Configuring components can be done with the Component DSL, in a configuration file (application.properties|yaml), or directly with Java code.
14.1.2. Configuring Endpoint Options
Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both.
Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints.
A good practice when configuring options is to use Property Placeholders, which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse.
The following two sections lists all the options, firstly for the component followed by the endpoint.
14.2. Component Options
The Cassandra CQL component supports 3 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
bridgeErrorHandler (consumer) | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | boolean |
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
autowiredEnabled (advanced) | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | boolean |
14.3. Endpoint Options
The Cassandra CQL endpoint is configured using URI syntax:
cql:beanRef:hosts:port/keyspace
with the following path and query parameters:
14.3.1. Path Parameters (4 parameters)
Name | Description | Default | Type |
---|---|---|---|
beanRef (common) | beanRef is defined using bean:id. | String | |
hosts (common) | Hostname(s) Cassandra server(s). Multiple hosts can be separated by comma. | String | |
port (common) | Port number of Cassandra server(s). | Integer | |
keyspace (common) | Keyspace to use. | String |
14.3.2. Query Parameters (30 parameters)
Name | Description | Default | Type |
---|---|---|---|
clusterName (common) | Cluster name. | String | |
consistencyLevel (common) | Consistency level to use. Enum values:
| DefaultConsistencyLevel | |
cql (common) | CQL query to perform. Can be overridden with the message header with key CamelCqlQuery. | String | |
datacenter (common) | Datacenter to use. | datacenter1 | String |
loadBalancingPolicyClass (common) | To use a specific LoadBalancingPolicyClass. | String | |
password (common) | Password for session authentication. | String | |
prepareStatements (common) | Whether to use PreparedStatements or regular Statements. | true | boolean |
resultSetConversionStrategy (common) | To use a custom class that implements logic for converting ResultSet into message body ALL, ONE, LIMIT_10, LIMIT_100… | ResultSetConversionStrategy | |
session (common) | To use the Session instance (you would normally not use this option). | CqlSession | |
username (common) | Username for session authentication. | String | |
bridgeErrorHandler (consumer) | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | boolean |
sendEmptyMessageWhenIdle (consumer) | If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead. | false | boolean |
exceptionHandler (consumer (advanced)) | To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. | ExceptionHandler | |
exchangePattern (consumer (advanced)) | Sets the exchange pattern when the consumer creates an exchange. Enum values:
| ExchangePattern | |
pollStrategy (consumer (advanced)) | A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel. | PollingConsumerPollStrategy | |
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
backoffErrorThreshold (scheduler) | The number of subsequent error polls (failed due to some error) that should happen before the backoffMultiplier should kick in. | int | |
backoffIdleThreshold (scheduler) | The number of subsequent idle polls that should happen before the backoffMultiplier should kick in. | int | |
backoffMultiplier (scheduler) | To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the next actual attempt is happening again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured. | int | |
delay (scheduler) | Milliseconds before the next poll. | 500 | long |
greedy (scheduler) | If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the previous run polled 1 or more messages. | false | boolean |
initialDelay (scheduler) | Milliseconds before the first poll starts. | 1000 | long |
repeatCount (scheduler) | Specifies a maximum limit of number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever. | 0 | long |
runLoggingLevel (scheduler) | The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that. Enum values:
| TRACE | LoggingLevel |
scheduledExecutorService (scheduler) | Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool. | ScheduledExecutorService | |
scheduler (scheduler) | To use a cron scheduler from either camel-spring or camel-quartz component. Use value spring or quartz for built in scheduler. | none | Object |
schedulerProperties (scheduler) | To configure additional properties when using a custom scheduler or any of the Quartz, Spring based scheduler. | Map | |
startScheduler (scheduler) | Whether the scheduler should be auto started. | true | boolean |
timeUnit (scheduler) | Time unit for initialDelay and delay options. Enum values:
| MILLISECONDS | TimeUnit |
useFixedDelay (scheduler) | Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details. | true | boolean |
14.4. Endpoint Connection Syntax
The endpoint can initiate the Cassandra connection or use an existing one.
URI | Description |
---|---|
cql:localhost/keyspace | Single host, default port, usual for testing |
cql:host1,host2/keyspace | Multi host, default port |
cql:host1,host2:9042/keyspace | Multi host, custom port |
cql:host1,host2 | Default port and keyspace |
cql:bean:sessionRef | Provided Session reference |
cql:bean:clusterRef/keyspace | Provided Cluster reference |
To fine-tune the Cassandra connection (SSL options, pooling options, load balancing policy, retry policy, reconnection policy, and so on), create your own cluster or session instance and give it to the Camel endpoint.
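A minimal sketch of that approach is shown below, assuming the DataStax Java driver 4 CqlSession API; the bean name customSession, contact point, datacenter, and keyspace are illustrative. The session is registered as a Spring bean and then referenced from the endpoint through the bean: prefix.
CassandraSessionConfig.java
import java.net.InetSocketAddress;

import com.datastax.oss.driver.api.core.CqlSession;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class CassandraSessionConfig {

    // pre-configured session; SSL, pooling, retry and reconnection policies
    // can be tuned here or in the driver configuration file
    @Bean("customSession")
    public CqlSession customSession() {
        return CqlSession.builder()
                .addContactPoint(new InetSocketAddress("host1", 9042))
                .withLocalDatacenter("datacenter1")
                .withKeyspace("camel_ks")
                .build();
    }
}
The endpoint can then reference the bean, for example to("cql:bean:customSession?cql=" + CQL).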
14.5. Messages
14.5.1. Incoming Message
The Camel Cassandra endpoint expects a set of simple objects (Object, Object[] or Collection<Object>) which will be bound to the CQL statement as query parameters. If the message body is null or empty, then the CQL query will be executed without binding parameters.
Headers
- CamelCqlQuery (optional, String or RegularStatement): the CQL query, either as a plain String or built using the QueryBuilder.
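For instance (a minimal sketch; the route name, endpoint URI and query are illustrative), the query can be supplied per message through the CamelCqlQuery header while the body carries the bind parameters:
from("direct:query")
    // the header value overrides any cql option set on the endpoint URI
    .setHeader("CamelCqlQuery",
        constant("select login, first_name, last_name from camel_user where login = ?"))
    .to("cql://localhost/camel_ks");

// elsewhere, for example from a test:
// template.sendBody("direct:query", Arrays.asList("davsclaus"));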
14.5.2. Outgoing Message
The Camel Cassandra endpoint produces one or more Cassandra Row objects depending on the resultSetConversionStrategy:
- List<Row> if resultSetConversionStrategy is ALL or LIMIT_[0-9]+
- A single Row if resultSetConversionStrategy is ONE
- Anything else, if resultSetConversionStrategy is a custom implementation of ResultSetConversionStrategy
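For the custom case, a minimal sketch is shown below. It assumes the org.apache.camel.component.cassandra.ResultSetConversionStrategy interface exposes a single getBody(ResultSet) method (check the interface in your camel-cassandraql version); the class name is illustrative.
RowCountConversionStrategy.java
import com.datastax.oss.driver.api.core.cql.ResultSet;
import com.datastax.oss.driver.api.core.cql.Row;
import org.apache.camel.component.cassandra.ResultSetConversionStrategy;

// converts the ResultSet into a plain row count instead of Row objects
public class RowCountConversionStrategy implements ResultSetConversionStrategy {
    @Override
    public Object getBody(ResultSet resultSet) {
        int count = 0;
        for (Row row : resultSet) {
            count++;
        }
        return count;
    }
}
The strategy can then be registered as a bean and referenced from the endpoint with resultSetConversionStrategy=#myStrategy.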
14.6. Repositories
Cassandra can be used to store message keys or messages for the idempotent and aggregation EIP.
Cassandra might not be the best tool for queuing use cases yet; read Cassandra anti-patterns: queues and queue-like datasets. It is advised to use LeveledCompaction and a small GC grace setting for these tables to allow tombstoned rows to be removed quickly.
14.7. Idempotent repository
The NamedCassandraIdempotentRepository
stores message keys in a Cassandra table like this:
CAMEL_IDEMPOTENT.cql
CREATE TABLE CAMEL_IDEMPOTENT ( NAME varchar, -- Repository name KEY varchar, -- Message key PRIMARY KEY (NAME, KEY) ) WITH compaction = {'class':'LeveledCompactionStrategy'} AND gc_grace_seconds = 86400;
This repository implementation uses lightweight transactions (also known as Compare and Set) and requires Cassandra 2.0.7+.
Alternatively, the CassandraIdempotentRepository
does not have a NAME
column and can be extended to use a different data model.
Option | Default | Description |
---|---|---|
table | CAMEL_IDEMPOTENT | Table name |
pkColumns | NAME, KEY | Primary key columns |
name | | Repository name, value used for the NAME column |
ttl | | Key time to live |
writeConsistencyLevel | | Consistency level used to insert/delete key: ANY, ONE, TWO, QUORUM, LOCAL_QUORUM… |
readConsistencyLevel | | Consistency level used to read/check key: ONE, TWO, QUORUM, LOCAL_QUORUM… |
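A minimal usage sketch with the Idempotent Consumer EIP is shown below. It assumes a CqlSession bean named customSession and the package and constructor (session plus repository name) as found in camel-cassandraql 3.x; the header, endpoint and repository names are illustrative.
DeduplicationRoute.java
import com.datastax.oss.driver.api.core.CqlSession;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.processor.idempotent.cassandra.NamedCassandraIdempotentRepository;

public class DeduplicationRoute extends RouteBuilder {
    @Override
    public void configure() {
        CqlSession session = getContext().getRegistry()
                .lookupByNameAndType("customSession", CqlSession.class);
        // "orders" becomes the NAME column value in the CAMEL_IDEMPOTENT table
        NamedCassandraIdempotentRepository repository =
                new NamedCassandraIdempotentRepository(session, "orders");

        from("direct:orders")
            .idempotentConsumer(header("orderId"), repository)
            .to("mock:unique");
    }
}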
14.8. Aggregation repository
The NamedCassandraAggregationRepository
stores exchanges by correlation key in a Cassandra table like this:
CAMEL_AGGREGATION.cql
CREATE TABLE CAMEL_AGGREGATION ( NAME varchar, -- Repository name KEY varchar, -- Correlation id EXCHANGE_ID varchar, -- Exchange id EXCHANGE blob, -- Serialized exchange PRIMARY KEY (NAME, KEY) ) WITH compaction = {'class':'LeveledCompactionStrategy'} AND gc_grace_seconds = 86400;
Alternatively, the CassandraAggregationRepository
does not have a NAME
column and can be extended to use a different data model.
Option | Default | Description |
---|---|---|
table | CAMEL_AGGREGATION | Table name |
pkColumns | NAME, KEY | Primary key columns |
exchangeIdColumn | EXCHANGE_ID | Exchange Id column |
exchangeColumn | EXCHANGE | Exchange content column |
name | | Repository name, value used for the NAME column |
ttl | | Exchange time to live |
writeConsistencyLevel | | Consistency level used to insert/delete exchange: ANY, ONE, TWO, QUORUM, LOCAL_QUORUM… |
readConsistencyLevel | | Consistency level used to read/check exchange: ONE, TWO, QUORUM, LOCAL_QUORUM… |
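Similarly, a minimal sketch with the Aggregate EIP, under the same assumptions (illustrative customSession bean; package and constructor as found in camel-cassandraql 3.x):
OrderAggregationRoute.java
import com.datastax.oss.driver.api.core.CqlSession;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.processor.aggregate.cassandra.NamedCassandraAggregationRepository;

public class OrderAggregationRoute extends RouteBuilder {
    @Override
    public void configure() {
        CqlSession session = getContext().getRegistry()
                .lookupByNameAndType("customSession", CqlSession.class);
        NamedCassandraAggregationRepository repository =
                new NamedCassandraAggregationRepository(session, "orders");

        from("direct:items")
            // keep the newest exchange per correlation key (illustrative strategy)
            .aggregate(header("orderId"), (oldExchange, newExchange) -> newExchange)
                .completionSize(10)
                .aggregationRepository(repository)
            .to("mock:aggregated");
    }
}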
14.9. Examples
To insert something on a table you can use the following code:
String CQL = "insert into camel_user(login, first_name, last_name) values (?, ?, ?)"; from("direct:input") .to("cql://localhost/camel_ks?cql=" + CQL);
At this point you should be able to insert data by using a list as the body:
Arrays.asList("davsclaus", "Claus", "Ibsen")
The same approach can be used for updating or querying the table.
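For instance (a minimal sketch; camelContext refers to the running CamelContext and direct:input is the route from the snippet above), the three values are bound, in order, to the ? placeholders:
import java.util.Arrays;
import org.apache.camel.ProducerTemplate;

ProducerTemplate template = camelContext.createProducerTemplate();
template.sendBody("direct:input", Arrays.asList("davsclaus", "Claus", "Ibsen"));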
14.10. Spring Boot Auto-Configuration
When using cql with Spring Boot make sure to use the following Maven dependency to have support for auto configuration:
<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-cassandraql-starter</artifactId> </dependency>
The component supports 4 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
camel.component.cql.autowired-enabled | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | Boolean |
camel.component.cql.bridge-error-handler | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | Boolean |
camel.component.cql.enabled | Whether to enable auto configuration of the cql component. This is enabled by default. | Boolean | |
camel.component.cql.lazy-start-producer | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | Boolean |
Chapter 15. Control Bus
Only producer is supported
The Control Bus from the EIP patterns allows for the integration system to be monitored and managed from within the framework.
Use a Control Bus to manage an enterprise integration system. The Control Bus uses the same messaging mechanism used by the application data, but uses separate channels to transmit data that is relevant to the management of components involved in the message flow.
In Camel you can manage and monitor using JMX, or by using a Java API from the CamelContext
, or from the org.apache.camel.api.management
package, or by using the event notifier.
The ControlBus component provides easy management of Camel applications based on the Control Bus EIP pattern. For example, by sending a message to an Endpoint you can control the lifecycle of routes, or gather performance statistics.
controlbus:command[?options]
Where command can be any string to identify which type of command to use.
15.1. Commands
Command | Description |
---|---|
route | To control routes using the routeId and action parameters. |
language | Allows you to specify a Language to use for evaluating the message body. If there is any result from the evaluation, then the result is put in the message body. |
15.2. Configuring Options
Camel components are configured on two separate levels:
- component level
- endpoint level
15.2.1. Configuring Component Options
The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth.
Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all.
Configuring components can be done with the Component DSL, in a configuration file (application.properties|yaml), or directly with Java code.
15.2.2. Configuring Endpoint Options
Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both.
Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints.
A good practice when configuring options is to use Property Placeholders, which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse.
The following two sections lists all the options, firstly for the component followed by the endpoint.
15.3. Component Options
The Control Bus component supports 2 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
autowiredEnabled (advanced) | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | boolean |
15.4. Endpoint Options
The Control Bus endpoint is configured using URI syntax:
controlbus:command:language
with the following path and query parameters:
15.4.1. Path Parameters (2 parameters)
Name | Description | Default | Type |
---|---|---|---|
command (producer) | Required Command can be either route or language. Enum values: route, language
| String | |
language (producer) | Allows you to specify the name of a Language to use for evaluating the message body. If there is any result from the evaluation, then the result is put in the message body. Enum values:
| Language |
15.4.2. Query Parameters (6 parameters)
Name | Description | Default | Type |
---|---|---|---|
action (producer) | To denote an action that can be either: start, stop, or status. To either start or stop a route, or to get the status of the route as output in the message body. From Camel 2.11.1 onwards you can use suspend and resume to either suspend or resume a route, and stats to get performance statistics returned in XML format; the routeId option can be used to define which route to get the performance stats for. If routeId is not defined, then you get statistics for the entire CamelContext. The restart action will restart the route. Enum values:
| String | |
async (producer) | Whether to execute the control bus task asynchronously. Important: If this option is enabled, then any result from the task is not set on the Exchange. This is only possible if executing tasks synchronously. | false | boolean |
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
loggingLevel (producer) | Logging level used for logging when task is done, or if any exceptions occurred during processing the task. Enum values:
| INFO | LoggingLevel |
restartDelay (producer) | The delay in millis to use when restarting a route. | 1000 | int |
routeId (producer) | To specify a route by its id. The special keyword current indicates the current route. | String |
15.5. Using route command
The route command allows you to do common tasks on a given route very easily. For example, to start a route, you can send an empty message to this endpoint:
template.sendBody("controlbus:route?routeId=foo&action=start", null);
To get the status of the route, you can do:
String status = template.requestBody("controlbus:route?routeId=foo&action=status", null, String.class);
15.6. Getting performance statistics
This requires JMX to be enabled (it is by default). You can then get the performance statistics per route, or for the CamelContext. For example, to get the statistics for a route named foo, we can do:
String xml = template.requestBody("controlbus:route?routeId=foo&action=stats", null, String.class);
The returned statistics are in XML format. It is the same data you can get from JMX with the dumpRouteStatsAsXml operation on the ManagedRouteMBean.
To get statistics for the entire CamelContext you just omit the routeId parameter as shown below:
String xml = template.requestBody("controlbus:route?action=stats", null, String.class);
15.7. Using Simple language
You can use the Simple language with the control bus, for example to stop a specific route, you can send a message to the "controlbus:language:simple"
endpoint containing the following message:
template.sendBody("controlbus:language:simple", "${camelContext.getRouteController().stopRoute('myRoute')}");
As this is a void operation, no result is returned. However, if you want the route status you can do:
String status = template.requestBody("controlbus:language:simple", "${camelContext.getRouteStatus('myRoute')}", String.class);
It’s easier to use the route command to control the lifecycle of routes. The language command allows you to execute a language script that has stronger powers, such as Groovy, or to some extent the Simple language.
For example to shutdown Camel itself you can do:
template.sendBody("controlbus:language:simple?async=true", "${camelContext.stop()}");
We use async=true
to stop Camel asynchronously as otherwise we would be trying to stop Camel while it was in-flight processing the message we sent to the control bus component.
You can also use other languages such as Groovy, etc.
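For instance, a Groovy script can be sent the same way (a minimal sketch, assuming the camel-groovy-starter dependency is on the classpath and that a route with id myRoute exists):
// the Groovy language binds the current exchange as the variable "exchange"
template.sendBody("controlbus:language:groovy",
    "exchange.getContext().getRouteController().stopRoute('myRoute')");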
15.8. Spring Boot Auto-Configuration
When using controlbus with Spring Boot make sure to use the following Maven dependency to have support for auto configuration:
<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-controlbus-starter</artifactId> </dependency>
The component supports 3 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
camel.component.controlbus.autowired-enabled | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | Boolean |
camel.component.controlbus.enabled | Whether to enable auto configuration of the controlbus component. This is enabled by default. | Boolean | |
camel.component.controlbus.lazy-start-producer | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | Boolean |
Chapter 16. Cron
Only consumer is supported
The Cron component is a generic interface component that allows triggering events at specific time intervals specified using the Unix cron syntax (e.g. 0/2 * * * * ? to trigger an event every two seconds).
Being an interface component, the Cron component does not contain a default implementation; instead, it requires that users plug in the implementation of their choice.
The following standard Camel components support the Cron endpoints:
- Camel-quartz
- Camel-spring
The Cron component is also supported in Camel K, which can use the Kubernetes scheduler to trigger the routes when required by the cron expression. Camel K does not require additional libraries to be plugged when using cron expressions compatible with Kubernetes cron syntax.
Maven users will need to add the following dependency to their pom.xml
for this component:
<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-cron</artifactId> <version>{CamelSBVersion}</version> <!-- use the same version as your Camel core version --> </dependency>
Additional libraries may be needed in order to plug a specific implementation.
16.1. Configuring Options
Camel components are configured on two separate levels:
- component level
- endpoint level
16.1.1. Configuring Component Options
The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth.
Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all.
Configuring components can be done with the Component DSL, in a configuration file (application.properties|yaml), or directly with Java code.
16.1.2. Configuring Endpoint Options
Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both.
Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints.
A good practice when configuring options is to use Property Placeholders, which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse.
The following two sections lists all the options, firstly for the component followed by the endpoint.
16.2. Component Options
The Cron component supports 3 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
bridgeErrorHandler (consumer) | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | boolean |
autowiredEnabled (advanced) | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | boolean |
cronService (advanced) | The id of the CamelCronService to use when multiple implementations are provided. | String |
16.3. Endpoint Options
The Cron endpoint is configured using URI syntax:
cron:name
with the following path and query parameters:
16.3.1. Path Parameters (1 parameters)
Name | Description | Default | Type |
---|---|---|---|
name (consumer) | Required The name of the cron trigger. | String |
16.3.2. Query Parameters (4 parameters)
Name | Description | Default | Type |
---|---|---|---|
bridgeErrorHandler (consumer) | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | boolean |
schedule (consumer) | Required A cron expression that will be used to generate events. | String | |
exceptionHandler (consumer (advanced)) | To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. | ExceptionHandler | |
exchangePattern (consumer (advanced)) | Sets the exchange pattern when the consumer creates an exchange. Enum values:
| ExchangePattern |
16.4. Usage
The component can be used to trigger events at specified times, as in the following example:
from("cron:tab?schedule=0/1+*+*+*+*+?") .setBody().constant("event") .log("${body}");
The schedule expression 0/3+10+*+*+*+? can also be written as 0/3 10 * * * ? and triggers an event every three seconds only in the tenth minute of each hour.
The parts in the schedule expression mean (in order):
- Seconds (optional)
- Minutes
- Hours
- Day of month
- Month
- Day of week
- Year (optional)
Schedule expressions can be made of 5 to 7 parts. When expressions are composed of 6 parts, the first item is the "seconds" part (and the year is considered missing).
Other valid examples of schedule expressions are:
-
0/2 * * * ?
(5 parts, an event every two minutes) -
0 0/2 * * * MON-FRI 2030
(7 parts, an event every two minutes only in year 2030)
Routes can also be written using the XML DSL.
<route> <from uri="cron:tab?schedule=0/1+*+*+*+*+?"/> <setBody> <constant>event</constant> </setBody> <to uri="log:info"/> </route>
16.5. Spring Boot Auto-Configuration
When using cron with Spring Boot make sure to use the following Maven dependency to have support for auto configuration:
<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-cron-starter</artifactId> </dependency>
The component supports 4 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
camel.component.cron.autowired-enabled | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | Boolean |
camel.component.cron.bridge-error-handler | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | Boolean |
camel.component.cron.cron-service | The id of the CamelCronService to use when multiple implementations are provided. | String | |
camel.component.cron.enabled | Whether to enable auto configuration of the cron component. This is enabled by default. | Boolean |
Chapter 17. CXF
Both producer and consumer are supported
The CXF component provides integration with Apache CXF for connecting to JAX-WS services hosted in CXF.
When using CXF in streaming modes (see DataFormat option), then also read about Stream caching.
Maven users must add the following dependency to their pom.xml
for this component:
<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-cxf-soap</artifactId> <version>{CamelSBVersion}</version> <!-- use the same version as your Camel core version --> </dependency>
17.1. URI format
There are two URI formats for this endpoint: cxfEndpoint and someAddress.
cxf:bean:cxfEndpoint[?options]
Where cxfEndpoint represents a bean ID that references a bean in the Spring bean registry. With this URI format, most of the endpoint details are specified in the bean definition.
cxf://someAddress[?options]
Where someAddress specifies the CXF endpoint’s address. With this URI format, most of the endpoint details are specified using options.
For either style above, you can append options to the URI as follows:
cxf:bean:cxfEndpoint?wsdlURL=wsdl/hello_world.wsdl&dataFormat=PAYLOAD
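For example (a minimal sketch; the cxfEndpoint bean is assumed to be a CxfEndpoint configured elsewhere, for instance in Spring, and the direct endpoint name is illustrative), a route can expose the service in PAYLOAD mode and hand the SOAP payload on for further processing:
SoapServiceRoute.java
import org.apache.camel.builder.RouteBuilder;

public class SoapServiceRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("cxf:bean:cxfEndpoint?dataFormat=PAYLOAD")
            // CXF sets the invoked operation name as a header on the exchange
            .log("Invoked operation: ${header.operationName}")
            .to("direct:processPayload");
    }
}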
17.2. Configuring Options
Camel components are configured on two separate levels:
- component level
- endpoint level
17.2.1. Configuring Component Options
The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth.
Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all.
Configuring components can be done with the Component DSL, in a configuration file (application.properties|yaml), or directly with Java code.
17.2.2. Configuring Endpoint Options
Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both.
Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints.
A good practice when configuring options is to use Property Placeholders, which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse.
The following two sections lists all the options, firstly for the component followed by the endpoint.
17.3. Component Options
The CXF component supports 6 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
bridgeErrorHandler (consumer) | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | boolean |
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
allowStreaming (advanced) | This option controls whether the CXF component, when running in PAYLOAD mode, will DOM parse the incoming messages into DOM Elements or keep the payload as a javax.xml.transform.Source object that would allow streaming in some cases. | Boolean | |
autowiredEnabled (advanced) | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | boolean |
headerFilterStrategy (filter) | To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter header to and from Camel message. | HeaderFilterStrategy | |
useGlobalSslContextParameters (security) | Enable usage of global SSL context parameters. | false | boolean |
17.4. Endpoint Options
The CXF endpoint is configured using URI syntax:
cxf:beanId:address
with the following path and query parameters:
17.4.1. Path Parameters (2 parameters)
Name | Description | Default | Type |
---|---|---|---|
beanId (common) | To lookup an existing configured CxfEndpoint. Must use bean: as the prefix. | String | |
address (service) | The service publish address. | String |
17.4.2. Query Parameters (35 parameters)
Name | Description | Default | Type |
---|---|---|---|
dataFormat (common) | The data type messages supported by the CXF endpoint. Enum values: POJO, PAYLOAD, RAW, CXF_MESSAGE. | POJO | DataFormat |
wrappedStyle (common) | The WSDL style that describes how parameters are represented in the SOAP body. If the value is false, CXF will choose the document-literal unwrapped style. If the value is true, CXF will choose the document-literal wrapped style. | Boolean | |
bridgeErrorHandler (consumer) | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | boolean |
exceptionHandler (consumer (advanced)) | To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. | ExceptionHandler | |
exchangePattern (consumer (advanced)) | Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly, InOut, InOptionalOut. | ExchangePattern | |
cookieHandler (producer) | Configure a cookie handler to maintain a HTTP session. | CookieHandler | |
defaultOperationName (producer) | This option will set the default operationName that will be used by the CxfProducer which invokes the remote service. | String | |
defaultOperationNamespace (producer) | This option will set the default operationNamespace that will be used by the CxfProducer which invokes the remote service. | String | |
hostnameVerifier (producer) | The hostname verifier to be used. Use the # notation to reference a HostnameVerifier from the registry. | HostnameVerifier | |
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
sslContextParameters (producer) | The Camel SSL setting reference. Use the # notation to reference the SSL Context. | SSLContextParameters | |
wrapped (producer) | Which kind of operation the CXF endpoint producer will invoke. | false | boolean |
synchronous (producer (advanced)) | Sets whether synchronous processing should be strictly used. | false | boolean |
allowStreaming (advanced) | This option controls whether the CXF component, when running in PAYLOAD mode, will DOM parse the incoming messages into DOM Elements or keep the payload as a javax.xml.transform.Source object that would allow streaming in some cases. | Boolean | |
bus (advanced) | To use a custom configured CXF Bus. | Bus | |
continuationTimeout (advanced) | This option is used to set the CXF continuation timeout which could be used in CxfConsumer by default when the CXF server is using Jetty or Servlet transport. | 30000 | long |
cxfBinding (advanced) | To use a custom CxfBinding to control the binding between Camel Message and CXF Message. | CxfBinding | |
cxfConfigurer (advanced) | This option applies an implementation of org.apache.camel.component.cxf.CxfEndpointConfigurer, which supports configuring the CXF endpoint programmatically. Users can configure the CXF server and client by implementing the configure{ServerClient} method of CxfEndpointConfigurer. | CxfConfigurer | |
defaultBus (advanced) | Will set the default bus when the CXF endpoint creates a bus by itself. | false | boolean |
headerFilterStrategy (advanced) | To use a custom HeaderFilterStrategy to filter header to and from Camel message. | HeaderFilterStrategy | |
mergeProtocolHeaders (advanced) | Whether to merge protocol headers. If enabled then propagating headers between Camel and CXF becomes more consistent and similar. For more details see CAMEL-6393. | false | boolean |
mtomEnabled (advanced) | To enable MTOM (attachments). This requires to use POJO or PAYLOAD data format mode. | false | boolean |
properties (advanced) | To set additional CXF options using the key/value pairs from the Map. For example to turn on stacktraces in SOAP faults, properties.faultStackTraceEnabled=true. | Map | |
skipPayloadMessagePartCheck (advanced) | Sets whether SOAP message validation should be disabled. | false | boolean |
loggingFeatureEnabled (logging) | This option enables CXF Logging Feature which writes inbound and outbound SOAP messages to log. | false | boolean |
loggingSizeLimit (logging) | Limits the total number of bytes the logger will output when the logging feature has been enabled. Use -1 for no limit. | 49152 | int |
skipFaultLogging (logging) | This option controls whether the PhaseInterceptorChain skips logging the Fault that it catches. | false | boolean |
password (security) | This option is used to set the basic authentication information of password for the CXF client. | String | |
username (security) | This option is used to set the basic authentication information of username for the CXF client. | String | |
bindingId (service) | The bindingId for the service model to use. | String | |
portName (service) | The endpoint name this service is implementing, it maps to the wsdl:portname. In the format of ns:PORT_NAME where ns is a namespace prefix valid at this scope. | String | |
publishedEndpointUrl (service) | This option can override the endpointUrl that is published from the WSDL, which can be accessed with the service address url plus ?wsdl. | String | |
serviceClass (service) | The class name of the SEI (Service Endpoint Interface) class which could have JSR181 annotation or not. | Class | |
serviceName (service) | The service name this service is implementing, it maps to the wsdl:servicename. | String | |
wsdlURL (service) | The location of the WSDL. Can be on the classpath, file system, or be hosted remotely. | String |
The serviceName and portName options are QNames, so if you provide them, be sure to prefix them with their {namespace}, as shown in the example below.
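A hedged illustration (the address, namespace, and service/port names are assumptions based on the hello_world sample used elsewhere in this chapter):
cxf://http://localhost:9000/SoapContext/SoapPort?serviceName={http://apache.org/hello_world_soap_http}SOAPService&portName={http://apache.org/hello_world_soap_http}SoapPort&dataFormat=PAYLOAD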
17.4.3. Descriptions of the dataformats
In Apache Camel, the Camel CXF component is the key to integrating routes with Web services. You can use the Camel CXF component to create a CXF endpoint, which can be used in either of the following ways:
- Consumer — (at the start of a route) represents a Web service instance, which integrates with the route. The type of payload injected into the route depends on the value of the endpoint’s dataFormat option.
- Producer — (at other points in the route) represents a WS client proxy, which converts the current exchange object into an operation invocation on a remote Web service. The format of the current exchange must match the endpoint’s dataFormat setting.
DataFormat | Description |
---|---|
POJO | POJOs (Plain old Java objects) are the Java parameters to the method being invoked on the target server. Both Protocol and Logical JAX-WS handlers are supported. |
PAYLOAD | PAYLOAD is the message payload (the contents of the soap:body) after message configuration in the CXF endpoint is applied. Only Protocol JAX-WS handlers are supported; Logical JAX-WS handlers are not supported. |
RAW | RAW mode provides the raw message stream that is received from the transport layer. SOAP processing is skipped, so SOAP headers are not available after the camel-cxf consumer, and JAX-WS handlers are not supported. |
CXF_MESSAGE | CXF_MESSAGE allows invoking the full capabilities of CXF interceptors by converting the message from the transport layer into a raw SOAP message. |
You can determine the data format mode of an exchange by retrieving the exchange property, CamelCXFDataFormat. The exchange key constant is defined in org.apache.camel.component.cxf.common.message.CxfConstants.DATA_FORMAT_PROPERTY.
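For example, the following minimal sketch (the processor class and log output are illustrative, not part of the component) reads that property from inside a route:
import org.apache.camel.Exchange;
import org.apache.camel.Processor;
import org.apache.camel.component.cxf.common.message.CxfConstants;

public class DataFormatLoggingProcessor implements Processor {
    @Override
    public void process(Exchange exchange) throws Exception {
        // CxfConstants.DATA_FORMAT_PROPERTY is the "CamelCXFDataFormat" exchange property key
        Object dataFormat = exchange.getProperty(CxfConstants.DATA_FORMAT_PROPERTY);
        System.out.println("camel-cxf data format mode: " + dataFormat);
    }
}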
17.4.4. How to enable CXF’s LoggingOutInterceptor in RAW mode
CXF’s LoggingOutInterceptor outputs the outbound message that goes on the wire to the logging system (Java Util Logging). Because the LoggingOutInterceptor is in the PRE_STREAM phase (and the PRE_STREAM phase is removed in RAW mode), you have to configure the LoggingOutInterceptor to run during the WRITE phase. The following is an example.
@Bean public CxfEndpoint serviceEndpoint(LoggingOutInterceptor loggingOutInterceptor) { CxfSpringEndpoint cxfEndpoint = new CxfSpringEndpoint(); cxfEndpoint.setAddress("http://localhost:" + port + "/services" + SERVICE_ADDRESS); cxfEndpoint.setServiceClass(org.apache.camel.component.cxf.HelloService.class); Map<String, Object> properties = new HashMap<String, Object>(); properties.put("dataFormat", "RAW"); cxfEndpoint.setProperties(properties); cxfEndpoint.getOutInterceptors().add(loggingOutInterceptor); return cxfEndpoint; } @Bean public LoggingOutInterceptor loggingOutInterceptor() { LoggingOutInterceptor logger = new LoggingOutInterceptor("write"); return logger; }
17.4.5. Description of relayHeaders option
There are in-band and out-of-band on-the-wire headers from the perspective of a JAXWS WSDL-first developer.
The in-band headers are headers that are explicitly defined as part of the WSDL binding contract for an endpoint such as SOAP headers.
The out-of-band headers are headers that are serialized over the wire, but are not explicitly part of the WSDL binding contract.
Headers relaying/filtering is bi-directional.
When a route has a CXF endpoint and the developer needs to have on-the-wire headers, such as SOAP headers, relayed along the route to be consumed, say, by another JAXWS endpoint, then relayHeaders should be set to true, which is the default value.
17.4.6. Available only in POJO mode
The relayHeaders=true setting expresses an intent to relay the headers. The actual decision on whether a given header is relayed is delegated to a pluggable instance that implements the MessageHeadersRelay interface. A concrete implementation of MessageHeadersRelay will be consulted to decide if a header needs to be relayed or not. There is already an implementation of SoapMessageHeadersRelay which binds itself to well-known SOAP namespaces. Currently only out-of-band headers are filtered, and in-band headers will always be relayed when relayHeaders=true. If there is a header on the wire whose namespace is unknown to the runtime, then a fall-back DefaultMessageHeadersRelay will be used, which simply allows all headers to be relayed.
The relayHeaders=false setting specifies that all headers, in-band and out-of-band, will be dropped.
You can plug in your own MessageHeadersRelay implementations, overriding or adding additional ones to the list of relays. In order to override a preloaded relay instance, just make sure that your MessageHeadersRelay implementation services the same namespaces as the one you are looking to override. Also note that the overriding relay has to service all of the namespaces of the one you are looking to override, or else a runtime exception will be thrown on route startup, as this would introduce an ambiguity in the namespace-to-relay-instance mappings.
<cxf:cxfEndpoint ...> <cxf:properties> <entry key="org.apache.camel.cxf.message.headers.relays"> <list> <ref bean="customHeadersRelay"/> </list> </entry> </cxf:properties> </cxf:cxfEndpoint> <bean id="customHeadersRelay" class="org.apache.camel.component.cxf.soap.headers.CustomHeadersRelay"/>
Take a look at the tests that show how you’d be able to relay/drop headers here:
- POJO and PAYLOAD modes are supported. In POJO mode, only out-of-band message headers are available for filtering, as the in-band headers have already been processed and removed from the header list by CXF. The in-band headers are incorporated into the MessageContentList in POJO mode. The camel-cxf component does not make any attempt to remove the in-band headers from the MessageContentList. If filtering of in-band headers is required, please use PAYLOAD mode or plug in a (pretty straightforward) CXF interceptor/JAXWS Handler to the CXF endpoint.
- The Message Header Relay mechanism has been merged into CxfHeaderFilterStrategy. The relayHeaders option, its semantics, and default value remain the same, but it is a property of CxfHeaderFilterStrategy. Here is an example of configuring it.
@Bean public HeaderFilterStrategy dropAllMessageHeadersStrategy() { CxfHeaderFilterStrategy headerFilterStrategy = new CxfHeaderFilterStrategy(); headerFilterStrategy.setRelayHeaders(false); return headerFilterStrategy; }
Then, your endpoint can reference the CxfHeaderFilterStrategy.
@Bean public CxfEndpoint routerNoRelayEndpoint(HeaderFilterStrategy dropAllMessageHeadersStrategy) { CxfSpringEndpoint cxfEndpoint = new CxfSpringEndpoint(); cxfEndpoint.setServiceClass(org.apache.camel.component.cxf.soap.headers.HeaderTester.class); cxfEndpoint.setAddress("/CxfMessageHeadersRelayTest/HeaderService/routerNoRelayEndpoint"); cxfEndpoint.setWsdlURL("soap_header.wsdl"); cxfEndpoint.setEndpointNameAsQName( QName.valueOf("{http://apache.org/camel/component/cxf/soap/headers}SoapPortNoRelay")); cxfEndpoint.setServiceNameAsQName(SERVICENAME); Map<String, Object> properties = new HashMap<String, Object>(); properties.put("dataFormat", "PAYLOAD"); cxfEndpoint.setProperties(properties); cxfEndpoint.setHeaderFilterStrategy(dropAllMessageHeadersStrategy); return cxfEndpoint; } @Bean public CxfEndpoint serviceNoRelayEndpoint(HeaderFilterStrategy dropAllMessageHeadersStrategy) { CxfSpringEndpoint cxfEndpoint = new CxfSpringEndpoint(); cxfEndpoint.setServiceClass(org.apache.camel.component.cxf.soap.headers.HeaderTester.class); cxfEndpoint.setAddress("http://localhost:" + port + "/services/CxfMessageHeadersRelayTest/HeaderService/routerNoRelayEndpointBackend"); cxfEndpoint.setWsdlURL("soap_header.wsdl"); cxfEndpoint.setEndpointNameAsQName( QName.valueOf("{http://apache.org/camel/component/cxf/soap/headers}SoapPortNoRelay")); cxfEndpoint.setServiceNameAsQName(SERVICENAME); Map<String, Object> properties = new HashMap<String, Object>(); properties.put("dataFormat", "PAYLOAD"); cxfEndpoint.setProperties(properties); cxfEndpoint.setHeaderFilterStrategy(dropAllMessageHeadersStrategy); return cxfEndpoint; }
Then configure the route as follows:
from("cxf:bean:routerNoRelayEndpoint") .to("cxf:bean:serviceNoRelayEndpoint");
- The MessageHeadersRelay interface has changed slightly and has been renamed to MessageHeaderFilter. It is a property of CxfHeaderFilterStrategy. Here is an example of configuring user defined Message Header Filters:
@Bean public HeaderFilterStrategy customMessageFilterStrategy() { CxfHeaderFilterStrategy headerFilterStrategy = new CxfHeaderFilterStrategy(); List<MessageHeaderFilter> headerFilterList = new ArrayList<MessageHeaderFilter>(); headerFilterList.add(new SoapMessageHeaderFilter()); headerFilterList.add(new CustomHeaderFilter()); headerFilterStrategy.setMessageHeaderFilters(headerFilterList); return headerFilterStrategy; }
- In addition to relayHeaders, the following properties can be configured in CxfHeaderFilterStrategy.
Name | Required | Description |
---|---|---|
relayHeaders | No | All message headers will be processed by Message Header Filters. Type: boolean. Default: true. |
relayAllMessageHeaders | No | All message headers will be propagated (without processing by Message Header Filters). Type: boolean. Default: false. |
allowFilterNamespaceClash | No | If two filters overlap in activation namespace, this property controls how it should be handled. If the value is true, the last one wins; if it is false, an exception is thrown. Type: boolean. Default: false. |
17.5. Configure the CXF endpoints with Spring
You can configure the CXF endpoint with the Spring configuration file shown below, and you can also embed the endpoint into the camelContext tags. When you are invoking the service endpoint, you can set the operationName and operationNamespace headers to explicitly state which operation you are calling.
<beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:cxf="http://camel.apache.org/schema/cxf" xsi:schemaLocation=" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://camel.apache.org/schema/cxf http://camel.apache.org/schema/cxf/camel-cxf.xsd http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd"> <cxf:cxfEndpoint id="routerEndpoint" address="http://localhost:9003/CamelContext/RouterPort" serviceClass="org.apache.hello_world_soap_http.GreeterImpl"/> <cxf:cxfEndpoint id="serviceEndpoint" address="http://localhost:9000/SoapContext/SoapPort" wsdlURL="testutils/hello_world.wsdl" serviceClass="org.apache.hello_world_soap_http.Greeter" endpointName="s:SoapPort" serviceName="s:SOAPService" xmlns:s="http://apache.org/hello_world_soap_http" /> <camelContext id="camel" xmlns="http://camel.apache.org/schema/spring"> <route> <from uri="cxf:bean:routerEndpoint" /> <to uri="cxf:bean:serviceEndpoint" /> </route> </camelContext> </beans>
Be sure to include the JAX-WS schemaLocation attribute specified on the root beans element. This allows CXF to validate the file and is required. Also note the namespace declarations at the end of the <cxf:cxfEndpoint/> tag. These declarations are required because the combined {namespace}localName syntax is presently not supported for this tag’s attribute values.
The cxf:cxfEndpoint element supports many additional attributes:
Name | Value |
---|---|
endpointName | The endpoint name this service is implementing, it maps to the wsdl:portname. In the format of ns:PORT_NAME where ns is a namespace prefix valid at this scope. |
serviceName | The service name this service is implementing, it maps to the wsdl:servicename. In the format of ns:SERVICE_NAME where ns is a namespace prefix valid at this scope. |
wsdlURL | The location of the WSDL. Can be on the classpath, file system, or be hosted remotely. |
bindingId | The bindingId for the service model to use. |
address | The service publish address. |
bus | The bus name that will be used in the JAX-WS endpoint. |
serviceClass | The class name of the SEI (Service Endpoint Interface) class which could have JSR181 annotation or not. |
It also supports many child elements:
Name | Value |
---|---|
cxf:inInterceptors | The incoming interceptors for this endpoint. A list of <bean> or <ref>. |
cxf:inFaultInterceptors | The incoming fault interceptors for this endpoint. A list of <bean> or <ref>. |
cxf:outInterceptors | The outgoing interceptors for this endpoint. A list of <bean> or <ref>. |
cxf:outFaultInterceptors | The outgoing fault interceptors for this endpoint. A list of <bean> or <ref>. |
cxf:properties | A properties map which should be supplied to the JAX-WS endpoint. See below. |
cxf:handlers | A JAX-WS handler list which should be supplied to the JAX-WS endpoint. See below. |
cxf:dataBinding | You can specify which DataBinding will be used in the endpoint. This can be supplied using the Spring <bean class="MyDataBinding"/> syntax. |
cxf:binding | You can specify the BindingFactory for this endpoint to use. This can be supplied using the Spring <bean class="MyBindingFactory"/> syntax. |
cxf:features | The features that hold the interceptors for this endpoint. A list of beans or refs. |
cxf:schemaLocations | The schema locations for the endpoint to use. A list of schemaLocations. |
cxf:serviceFactory | The service factory for this endpoint to use. This can be supplied using the Spring <bean class="MyServiceFactory"/> syntax. |
You can find more advanced examples that show how to provide interceptors, properties and handlers on the CXF JAX-WS Configuration page.
You can use cxf:properties to set the camel-cxf endpoint’s dataFormat and setDefaultBus properties from the Spring configuration file.
<cxf:cxfEndpoint id="testEndpoint" address="http://localhost:9000/router" serviceClass="org.apache.camel.component.cxf.HelloService" endpointName="s:PortName" serviceName="s:ServiceName" xmlns:s="http://www.example.com/test"> <cxf:properties> <entry key="dataFormat" value="RAW"/> <entry key="setDefaultBus" value="true"/> </cxf:properties> </cxf:cxfEndpoint>
In Spring Boot, you can use Spring XML files to configure camel-cxf and use code similar to the following example to create XML-configured beans:
@ImportResource({ "classpath:spring-configuration.xml" })
However, the use of Java-configured beans (as shown in other examples) is best practice in Spring Boot.
17.6. How to make the camel-cxf component use log4j instead of java.util.logging
CXF’s default logger is java.util.logging. If you want to change it to log4j, proceed as follows. Create a file named META-INF/cxf/org.apache.cxf.logger on the classpath. This file should contain the fully-qualified name of the class org.apache.cxf.common.logging.Log4jLogger, with no comments, on a single line.
17.7. How to let camel-cxf response start with xml processing instruction
If you are using a SOAP client such as PHP, you will get an error like the one below, because CXF doesn’t add the XML processing instruction <?xml version="1.0" encoding="utf-8"?>:
Error:sendSms: SoapFault exception: [Client] looks like we got no XML document in [...]
To resolve this issue, you just need to tell StaxOutInterceptor to write the XML start document for you, as in the WriteXmlDeclarationInterceptor below:
public class WriteXmlDeclarationInterceptor extends AbstractPhaseInterceptor<SoapMessage> { public WriteXmlDeclarationInterceptor() { super(Phase.PRE_STREAM); addBefore(StaxOutInterceptor.class.getName()); } public void handleMessage(SoapMessage message) throws Fault { message.put("org.apache.cxf.stax.force-start-document", Boolean.TRUE); } }
As an alternative you can add a message header for it as demonstrated in CxfConsumerTest:
// set up the response context which force start document Map<String, Object> map = new HashMap<String, Object>(); map.put("org.apache.cxf.stax.force-start-document", Boolean.TRUE); exchange.getOut().setHeader(Client.RESPONSE_CONTEXT, map);
17.8. How to override the CXF producer address from message header
The camel-cxf producer supports overriding the target service address by setting the message header CamelDestinationOverrideUrl.
// set up the service address from the message header to override the setting of CXF endpoint exchange.getIn().setHeader(Exchange.DESTINATION_OVERRIDE_URL, constant(getServiceAddress()));
17.9. How to consume a message from a camel-cxf endpoint in POJO data format
The camel-cxf endpoint consumer POJO data format is based on the CXF invoker, so the message header has a property with the name of CxfConstants.OPERATION_NAME and the message body is a list of the SEI method parameters.
Consider the PersonProcessor example code:
public class PersonProcessor implements Processor { private static final Logger LOG = LoggerFactory.getLogger(PersonProcessor.class); @Override @SuppressWarnings("unchecked") public void process(Exchange exchange) throws Exception { LOG.info("processing exchange in camel"); BindingOperationInfo boi = (BindingOperationInfo) exchange.getProperty(BindingOperationInfo.class.getName()); if (boi != null) { LOG.info("boi.isUnwrapped" + boi.isUnwrapped()); } // Get the parameters list which element is the holder. MessageContentsList msgList = (MessageContentsList) exchange.getIn().getBody(); Holder<String> personId = (Holder<String>) msgList.get(0); Holder<String> ssn = (Holder<String>) msgList.get(1); Holder<String> name = (Holder<String>) msgList.get(2); if (personId.value == null || personId.value.length() == 0) { LOG.info("person id 123, so throwing exception"); // Try to throw out the soap fault message org.apache.camel.wsdl_first.types.UnknownPersonFault personFault = new org.apache.camel.wsdl_first.types.UnknownPersonFault(); personFault.setPersonId(""); org.apache.camel.wsdl_first.UnknownPersonFault fault = new org.apache.camel.wsdl_first.UnknownPersonFault("Get the null value of person name", personFault); exchange.getMessage().setBody(fault); return; } name.value = "Bonjour"; ssn.value = "123"; LOG.info("setting Bonjour as the response"); // Set the response message, first element is the return value of the operation, // the others are the holders of method parameters exchange.getMessage().setBody(new Object[] { null, personId, ssn, name }); } }
17.10. How to prepare the message for the camel-cxf endpoint in POJO data format
The camel-cxf endpoint producer is based on the CXF client API. First you need to specify the operation name in the message header, then add the method parameters to a list, and initialize the message with this parameter list. The response message’s body is a MessageContentsList; you can get the result from that list.
If you don’t specify the operation name in the message header, CxfProducer will try to use the defaultOperationName from CxfEndpoint. If there is no defaultOperationName set on CxfEndpoint, it will pick up the first operationName from the Operation list.
If you want to get the object array from the message body, you can get the body using message.getBody(Object[].class), as shown in CxfProducerRouterTest.testInvokingSimpleServerWithParams:
Exchange senderExchange = new DefaultExchange(context, ExchangePattern.InOut); final List<String> params = new ArrayList<>(); // Prepare the request message for the camel-cxf procedure params.add(TEST_MESSAGE); senderExchange.getIn().setBody(params); senderExchange.getIn().setHeader(CxfConstants.OPERATION_NAME, ECHO_OPERATION); Exchange exchange = template.send("direct:EndpointA", senderExchange); org.apache.camel.Message out = exchange.getMessage(); // The response message's body is an MessageContentsList which first element is the return value of the operation, // If there are some holder parameters, the holder parameter will be filled in the reset of List. // The result will be extract from the MessageContentsList with the String class type MessageContentsList result = (MessageContentsList) out.getBody(); LOG.info("Received output text: " + result.get(0)); Map<String, Object> responseContext = CastUtils.cast((Map<?, ?>) out.getHeader(Client.RESPONSE_CONTEXT)); assertNotNull(responseContext); assertEquals("UTF-8", responseContext.get(org.apache.cxf.message.Message.ENCODING), "We should get the response context here"); assertEquals("echo " + TEST_MESSAGE, result.get(0), "Reply body on Camel is wrong");
17.11. How to deal with the message for a camel-cxf endpoint in PAYLOAD data format
PAYLOAD means that you process the payload from the SOAP envelope as a native CxfPayload. Message.getBody() will return a org.apache.camel.component.cxf.CxfPayload object, with getters for SOAP message headers and the SOAP body.
protected RouteBuilder createRouteBuilder() { return new RouteBuilder() { public void configure() { from(simpleEndpointURI + "&dataFormat=PAYLOAD").to("log:info").process(new Processor() { @SuppressWarnings("unchecked") public void process(final Exchange exchange) throws Exception { CxfPayload<SoapHeader> requestPayload = exchange.getIn().getBody(CxfPayload.class); List<Source> inElements = requestPayload.getBodySources(); List<Source> outElements = new ArrayList<>(); // You can use a customer toStringConverter to turn a CxfPayLoad message into String as you want String request = exchange.getIn().getBody(String.class); XmlConverter converter = new XmlConverter(); String documentString = ECHO_RESPONSE; Element in = new XmlConverter().toDOMElement(inElements.get(0)); // Just check the element namespace if (!in.getNamespaceURI().equals(ELEMENT_NAMESPACE)) { throw new IllegalArgumentException("Wrong element namespace"); } if (in.getLocalName().equals("echoBoolean")) { documentString = ECHO_BOOLEAN_RESPONSE; checkRequest("ECHO_BOOLEAN_REQUEST", request); } else { documentString = ECHO_RESPONSE; checkRequest("ECHO_REQUEST", request); } Document outDocument = converter.toDOMDocument(documentString, exchange); outElements.add(new DOMSource(outDocument.getDocumentElement())); // set the payload header with null CxfPayload<SoapHeader> responsePayload = new CxfPayload<>(null, outElements, null); exchange.getMessage().setBody(responsePayload); } }); } }; }
17.12. How to get and set SOAP headers in POJO mode
POJO means that the data format is a "list of Java objects" when the camel-cxf endpoint produces or consumes Camel exchanges. Even though Camel exposes the message body as POJOs in this mode, camel-cxf still provides access to read and write SOAP headers. However, since CXF interceptors remove in-band SOAP headers from the header list after they have been processed, only out-of-band SOAP headers are available to camel-cxf in POJO mode.
The following example illustrates how to get/set SOAP headers. Suppose we have a route that forwards from one Camel-cxf endpoint to another. That is, SOAP Client → Camel → CXF service. We can attach two processors to obtain/insert SOAP headers at (1) before a request goes out to the CXF service and (2) before the response comes back to the SOAP Client. Processor (1) and (2) in this example are InsertRequestOutHeaderProcessor and InsertResponseOutHeaderProcessor. Our route looks like this:
from("cxf:bean:routerRelayEndpointWithInsertion") .process(new InsertRequestOutHeaderProcessor()) .to("cxf:bean:serviceRelayEndpointWithInsertion") .process(new InsertResponseOutHeaderProcessor());
The beans routerRelayEndpointWithInsertion and serviceRelayEndpointWithInsertion are defined as follows:
@Bean public CxfEndpoint routerRelayEndpointWithInsertion() { CxfSpringEndpoint cxfEndpoint = new CxfSpringEndpoint(); cxfEndpoint.setServiceClass(org.apache.camel.component.cxf.soap.headers.HeaderTester.class); cxfEndpoint.setAddress("/CxfMessageHeadersRelayTest/HeaderService/routerRelayEndpointWithInsertion"); cxfEndpoint.setWsdlURL("soap_header.wsdl"); cxfEndpoint.setEndpointNameAsQName( QName.valueOf("{http://apache.org/camel/component/cxf/soap/headers}SoapPortRelayWithInsertion")); cxfEndpoint.setServiceNameAsQName(SERVICENAME); cxfEndpoint.getFeatures().add(new LoggingFeature()); return cxfEndpoint; } @Bean public CxfEndpoint serviceRelayEndpointWithInsertion() { CxfSpringEndpoint cxfEndpoint = new CxfSpringEndpoint(); cxfEndpoint.setServiceClass(org.apache.camel.component.cxf.soap.headers.HeaderTester.class); cxfEndpoint.setAddress("http://localhost:" + port + "/services/CxfMessageHeadersRelayTest/HeaderService/routerRelayEndpointWithInsertionBackend"); cxfEndpoint.setWsdlURL("soap_header.wsdl"); cxfEndpoint.setEndpointNameAsQName( QName.valueOf("{http://apache.org/camel/component/cxf/soap/headers}SoapPortRelayWithInsertion")); cxfEndpoint.setServiceNameAsQName(SERVICENAME); cxfEndpoint.getFeatures().add(new LoggingFeature()); return cxfEndpoint; }
SOAP headers are propagated to and from Camel Message headers. The Camel message header name is "org.apache.cxf.headers.Header.list", which is a constant defined in CXF (org.apache.cxf.headers.Header.HEADER_LIST). The header value is a List of CXF SoapHeader objects (org.apache.cxf.binding.soap.SoapHeader). The following snippet is the InsertResponseOutHeaderProcessor (which inserts a new SOAP header in the response message). The way to access SOAP headers in both InsertResponseOutHeaderProcessor and InsertRequestOutHeaderProcessor is actually the same. The only difference between the two processors is setting the direction of the inserted SOAP header.
You can find the InsertResponseOutHeaderProcessor example in CxfMessageHeadersRelayTest:
public static class InsertResponseOutHeaderProcessor implements Processor { public void process(Exchange exchange) throws Exception { List<SoapHeader> soapHeaders = CastUtils.cast((List<?>)exchange.getIn().getHeader(Header.HEADER_LIST)); // Insert a new header String xml = "<?xml version=\"1.0\" encoding=\"utf-8\"?><outofbandHeader " + "xmlns=\"http://cxf.apache.org/outofband/Header\" hdrAttribute=\"testHdrAttribute\" " + "xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\" soap:mustUnderstand=\"1\">" + "<name>New_testOobHeader</name><value>New_testOobHeaderValue</value></outofbandHeader>"; SoapHeader newHeader = new SoapHeader(soapHeaders.get(0).getName(), DOMUtils.readXml(new StringReader(xml)).getDocumentElement()); // make sure direction is OUT since it is a response message. newHeader.setDirection(Direction.DIRECTION_OUT); //newHeader.setMustUnderstand(false); soapHeaders.add(newHeader); } }
17.13. How to get and set SOAP headers in PAYLOAD mode
We’ve already shown how to access the SOAP message as a CxfPayload object in PAYLOAD mode in the section How to deal with the message for a camel-cxf endpoint in PAYLOAD data format.
Once you obtain a CxfPayload object, you can invoke the CxfPayload.getHeaders() method that returns a List of DOM Elements (SOAP headers).
For an example see CxfPayLoadSoapHeaderTest:
from(getRouterEndpointURI()).process(new Processor() { @SuppressWarnings("unchecked") public void process(Exchange exchange) throws Exception { CxfPayload<SoapHeader> payload = exchange.getIn().getBody(CxfPayload.class); List<Source> elements = payload.getBodySources(); assertNotNull(elements, "We should get the elements here"); assertEquals(1, elements.size(), "Get the wrong elements size"); Element el = new XmlConverter().toDOMElement(elements.get(0)); elements.set(0, new DOMSource(el)); assertEquals("http://camel.apache.org/pizza/types", el.getNamespaceURI(), "Get the wrong namespace URI"); List<SoapHeader> headers = payload.getHeaders(); assertNotNull(headers, "We should get the headers here"); assertEquals(1, headers.size(), "Get the wrong headers size"); assertEquals("http://camel.apache.org/pizza/types", ((Element) (headers.get(0).getObject())).getNamespaceURI(), "Get the wrong namespace URI"); // alternatively you can also get the SOAP header via the camel header: headers = exchange.getIn().getHeader(Header.HEADER_LIST, List.class); assertNotNull(headers, "We should get the headers here"); assertEquals(1, headers.size(), "Get the wrong headers size"); assertEquals("http://camel.apache.org/pizza/types", ((Element) (headers.get(0).getObject())).getNamespaceURI(), "Get the wrong namespace URI"); } }) .to(getServiceEndpointURI());
You can also use the same approach as described in the section "How to get and set SOAP headers in POJO mode" to set or get the SOAP headers. That is, you can use the header "org.apache.cxf.headers.Header.list" to get and set a list of SOAP headers. This also means that if you have a route that forwards from one camel-cxf endpoint to another (SOAP Client → Camel → CXF service), the SOAP headers sent by the SOAP client are now also forwarded to the CXF service. If you do not want these headers to be forwarded, you have to remove them from the Camel header "org.apache.cxf.headers.Header.list", as sketched below.
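A minimal sketch (the endpoint bean names are illustrative) of dropping the relayed SOAP headers before forwarding to the target service:
// Remove the CXF SOAP header list so the client's SOAP headers are not
// propagated to the target CXF service.
from("cxf:bean:routerEndpoint")
    .removeHeader("org.apache.cxf.headers.Header.list")
    .to("cxf:bean:serviceEndpoint");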
17.14. SOAP headers are not available in RAW mode
SOAP headers are not available in RAW mode as SOAP processing is skipped.
17.15. How to throw a SOAP Fault from Camel
If you are using a camel-cxf endpoint to consume the SOAP request, you may need to throw the SOAP Fault from the Camel context. Basically, you can use the throwFault DSL to do that; it works for the POJO, PAYLOAD and MESSAGE data formats.
You can define the soap fault as shown in CxfCustomizedExceptionTest:
SOAP_FAULT = new SoapFault(EXCEPTION_MESSAGE, SoapFault.FAULT_CODE_CLIENT); Element detail = SOAP_FAULT.getOrCreateDetail(); Document doc = detail.getOwnerDocument(); Text tn = doc.createTextNode(DETAIL_TEXT); detail.appendChild(tn);
Then throw it as you like:
from(routerEndpointURI).setFaultBody(constant(SOAP_FAULT));
If your CXF endpoint is working in the MESSAGE data format, you could set the SOAP Fault message in the message body and set the response code in the message header, as demonstrated by CxfMessageStreamExceptionTest:
from(routerEndpointURI).process(new Processor() { public void process(Exchange exchange) throws Exception { Message out = exchange.getOut(); // Set the message body with the out.setBody(this.getClass().getResourceAsStream("SoapFaultMessage.xml")); // Set the response code here out.setHeader(org.apache.cxf.message.Message.RESPONSE_CODE, new Integer(500)); } });
The same applies when using the POJO data format: you can set the SOAP Fault on the out body.
17.16. How to propagate a camel-cxf endpoint’s request and response context
The CXF client API provides a way to invoke the operation with request and response context. If you are using a camel-cxf endpoint producer to invoke the external web service, you can set the request context and get the response context with the following code:
CxfExchange exchange = (CxfExchange)template.send(getJaxwsEndpointUri(), new Processor() { public void process(final Exchange exchange) { final List<String> params = new ArrayList<String>(); params.add(TEST_MESSAGE); // Set the request context to the inMessage Map<String, Object> requestContext = new HashMap<String, Object>(); requestContext.put(BindingProvider.ENDPOINT_ADDRESS_PROPERTY, JAXWS_SERVER_ADDRESS); exchange.getIn().setBody(params); exchange.getIn().setHeader(Client.REQUEST_CONTEXT, requestContext); exchange.getIn().setHeader(CxfConstants.OPERATION_NAME, GREET_ME_OPERATION); } }); org.apache.camel.Message out = exchange.getOut(); // The output is an object array, the first element of the array is the return value Object[] output = out.getBody(Object[].class); LOG.info("Received output text: " + output[0]); // Get the response context from outMessage Map<String, Object> responseContext = CastUtils.cast((Map)out.getHeader(Client.RESPONSE_CONTEXT)); assertNotNull(responseContext); assertEquals("Get the wrong wsdl operation name", "{http://apache.org/hello_world_soap_http}greetMe", responseContext.get("javax.xml.ws.wsdl.operation").toString());
17.17. Attachment Support
POJO Mode: Both SOAP with Attachment and MTOM are supported (see the example in Payload Mode for enabling MTOM). However, SOAP with Attachment is not tested. Since attachments are marshalled and unmarshalled into POJOs, users typically do not need to deal with the attachments themselves. Attachments are propagated to the Camel message’s attachments if MTOM is not enabled. So, it is possible to retrieve attachments with the Camel Message API:
DataHandler Message.getAttachment(String id)
Payload Mode: MTOM is supported by the component. Attachments can be retrieved by Camel Message APIs mentioned above. SOAP with Attachment (SwA) is supported and attachments can be retrieved. SwA is the default (same as setting the CXF endpoint property "mtom-enabled" to false).
To enable MTOM, set the CXF endpoint property "mtom-enabled" to true.
@Bean public CxfEndpoint routerEndpoint() { CxfSpringEndpoint cxfEndpoint = new CxfSpringEndpoint(); cxfEndpoint.setServiceNameAsQName(SERVICE_QNAME); cxfEndpoint.setEndpointNameAsQName(PORT_QNAME); cxfEndpoint.setAddress("/" + getClass().getSimpleName()+ "/jaxws-mtom/hello"); cxfEndpoint.setWsdlURL("mtom.wsdl"); Map<String, Object> properties = new HashMap<String, Object>(); properties.put("dataFormat", "PAYLOAD"); properties.put("mtom-enabled", true); cxfEndpoint.setProperties(properties); return cxfEndpoint; }
You can produce a Camel message with attachment to send to a CXF endpoint in Payload mode.
Exchange exchange = context.createProducerTemplate().send("direct:testEndpoint", new Processor() { public void process(Exchange exchange) throws Exception { exchange.setPattern(ExchangePattern.InOut); List<Source> elements = new ArrayList<Source>(); elements.add(new DOMSource(DOMUtils.readXml(new StringReader(MtomTestHelper.REQ_MESSAGE)).getDocumentElement())); CxfPayload<SoapHeader> body = new CxfPayload<SoapHeader>(new ArrayList<SoapHeader>(), elements, null); exchange.getIn().setBody(body); exchange.getIn().addAttachment(MtomTestHelper.REQ_PHOTO_CID, new DataHandler(new ByteArrayDataSource(MtomTestHelper.REQ_PHOTO_DATA, "application/octet-stream"))); exchange.getIn().addAttachment(MtomTestHelper.REQ_IMAGE_CID, new DataHandler(new ByteArrayDataSource(MtomTestHelper.requestJpeg, "image/jpeg"))); } }); // process response CxfPayload<SoapHeader> out = exchange.getOut().getBody(CxfPayload.class); Assert.assertEquals(1, out.getBody().size()); Map<String, String> ns = new HashMap<String, String>(); ns.put("ns", MtomTestHelper.SERVICE_TYPES_NS); ns.put("xop", MtomTestHelper.XOP_NS); XPathUtils xu = new XPathUtils(ns); Element oute = new XmlConverter().toDOMElement(out.getBody().get(0)); Element ele = (Element)xu.getValue("//ns:DetailResponse/ns:photo/xop:Include", oute, XPathConstants.NODE); String photoId = ele.getAttribute("href").substring(4); // skip "cid:" ele = (Element)xu.getValue("//ns:DetailResponse/ns:image/xop:Include", oute, XPathConstants.NODE); String imageId = ele.getAttribute("href").substring(4); // skip "cid:" DataHandler dr = exchange.getOut().getAttachment(photoId); Assert.assertEquals("application/octet-stream", dr.getContentType()); MtomTestHelper.assertEquals(MtomTestHelper.RESP_PHOTO_DATA, IOUtils.readBytesFromStream(dr.getInputStream())); dr = exchange.getOut().getAttachment(imageId); Assert.assertEquals("image/jpeg", dr.getContentType()); BufferedImage image = ImageIO.read(dr.getInputStream()); Assert.assertEquals(560, image.getWidth()); Assert.assertEquals(300, image.getHeight());
You can also consume a Camel message received from a CXF endpoint in Payload mode. The CxfMtomConsumerPayloadModeTest illustrates how this works:
public static class MyProcessor implements Processor { @SuppressWarnings("unchecked") public void process(Exchange exchange) throws Exception { CxfPayload<SoapHeader> in = exchange.getIn().getBody(CxfPayload.class); // verify request Assert.assertEquals(1, in.getBody().size()); Map<String, String> ns = new HashMap<String, String>(); ns.put("ns", MtomTestHelper.SERVICE_TYPES_NS); ns.put("xop", MtomTestHelper.XOP_NS); XPathUtils xu = new XPathUtils(ns); Element body = new XmlConverter().toDOMElement(in.getBody().get(0)); Element ele = (Element)xu.getValue("//ns:Detail/ns:photo/xop:Include", body, XPathConstants.NODE); String photoId = ele.getAttribute("href").substring(4); // skip "cid:" Assert.assertEquals(MtomTestHelper.REQ_PHOTO_CID, photoId); ele = (Element)xu.getValue("//ns:Detail/ns:image/xop:Include", body, XPathConstants.NODE); String imageId = ele.getAttribute("href").substring(4); // skip "cid:" Assert.assertEquals(MtomTestHelper.REQ_IMAGE_CID, imageId); DataHandler dr = exchange.getIn().getAttachment(photoId); Assert.assertEquals("application/octet-stream", dr.getContentType()); MtomTestHelper.assertEquals(MtomTestHelper.REQ_PHOTO_DATA, IOUtils.readBytesFromStream(dr.getInputStream())); dr = exchange.getIn().getAttachment(imageId); Assert.assertEquals("image/jpeg", dr.getContentType()); MtomTestHelper.assertEquals(MtomTestHelper.requestJpeg, IOUtils.readBytesFromStream(dr.getInputStream())); // create response List<Source> elements = new ArrayList<Source>(); elements.add(new DOMSource(DOMUtils.readXml(new StringReader(MtomTestHelper.RESP_MESSAGE)).getDocumentElement())); CxfPayload<SoapHeader> sbody = new CxfPayload<SoapHeader>(new ArrayList<SoapHeader>(), elements, null); exchange.getOut().setBody(sbody); exchange.getOut().addAttachment(MtomTestHelper.RESP_PHOTO_CID, new DataHandler(new ByteArrayDataSource(MtomTestHelper.RESP_PHOTO_DATA, "application/octet-stream"))); exchange.getOut().addAttachment(MtomTestHelper.RESP_IMAGE_CID, new DataHandler(new ByteArrayDataSource(MtomTestHelper.responseJpeg, "image/jpeg"))); } }
Raw Mode: Attachments are not supported as it does not process the message at all.
CXF_RAW Mode: MTOM is supported, and attachments can be retrieved by the Camel Message APIs mentioned above. Note that when receiving a multipart (that is, MTOM) message, the default SOAPMessage-to-String converter will provide the complete multipart payload on the body. If you require just the SOAP XML as a String, you can set the message body with message.getSOAPPart(), and Camel type converters can do the rest of the work for you.
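A hedged sketch (assuming the in-message body is a javax.xml.soap.SOAPMessage, as implied by the converter mentioned above) of keeping only the SOAP part:
import javax.xml.soap.SOAPMessage;
import org.apache.camel.Exchange;
import org.apache.camel.Processor;

public class KeepSoapPartProcessor implements Processor {
    @Override
    public void process(Exchange exchange) throws Exception {
        // Replace the multipart SOAPMessage body with just its SOAP part so that
        // a later conversion to String yields only the SOAP XML.
        SOAPMessage soapMessage = exchange.getIn().getBody(SOAPMessage.class);
        exchange.getIn().setBody(soapMessage.getSOAPPart());
    }
}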
17.18. Streaming Support in PAYLOAD mode
The camel-cxf component now supports streaming of incoming messages when using PAYLOAD mode. Previously, the incoming messages would have been completely DOM parsed. For large messages, this is time consuming and uses a significant amount of memory. The incoming messages can remain as a javax.xml.transform.Source while being routed and, if nothing modifies the payload, can then be directly streamed out to the target destination. For common "simple proxy" use cases (example: from("cxf:…").to("cxf:…")), this can provide very significant performance increases as well as significantly lowered memory requirements.
However, there are cases where streaming may not be appropriate or desired. Due to the streaming nature, invalid incoming XML may not be caught until later in the processing chain. Also, certain actions may require the message to be DOM parsed anyway (like WS-Security or message tracing and such), in which case the advantages of streaming are limited. At this point, there are a few ways to control the streaming:
- Endpoint property: you can add "allowStreaming=false" as an endpoint property to turn the streaming on/off.
- Component property: the CxfComponent object also has an allowStreaming property that can set the default for endpoints created from that component.
- Global system property: you can set the system property "org.apache.camel.component.cxf.streaming" to "false" to turn streaming off. That sets the global default, but setting the endpoint property above will override this value for that endpoint. A sketch of these switches follows below.
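The following sketch (the endpoint bean names are illustrative) shows the endpoint-level switch in the Java DSL, with the system property noted in a comment:
// Endpoint property: turn streaming off for this endpoint only.
from("cxf:bean:routerEndpoint?dataFormat=PAYLOAD&allowStreaming=false")
    .to("cxf:bean:serviceEndpoint");

// Global system property: sets the default for every CXF endpoint; an explicit
// allowStreaming endpoint option still overrides it.
// System.setProperty("org.apache.camel.component.cxf.streaming", "false");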
17.19. Using the generic CXF Dispatch mode
The camel-cxf component supports the generic CXF dispatch mode that can transport messages of arbitrary structures (i.e., not bound to a specific XML schema). To use this mode, you simply omit specifying the wsdlURL and serviceClass attributes of the CXF endpoint.
<cxf:cxfEndpoint id="testEndpoint" address="http://localhost:9000/SoapContext/SoapAnyPort"> <cxf:properties> <entry key="dataFormat" value="PAYLOAD"/> </cxf:properties> </cxf:cxfEndpoint>
Note that the default CXF dispatch client does not send a specific SOAPAction header. Therefore, when the target service requires a specific SOAPAction value, supply it in the Camel header using the key SOAPAction (case-insensitive), for example:
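A minimal sketch (the endpoint bean and the SOAPAction value are illustrative) of supplying the SOAPAction from a route:
// Set the SOAPAction required by the target service before calling the
// dispatch-mode CXF endpoint.
from("direct:dispatch")
    .setHeader("SOAPAction", constant("http://apache.org/hello_world_soap_http/greetMe"))
    .to("cxf:bean:testEndpoint");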
17.20. Spring Boot Auto-Configuration
When using cxf with Spring Boot make sure to use the following Maven dependency to have support for auto configuration:
<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-cxf-soap-starter</artifactId> </dependency>
The component supports 13 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
camel.component.cxf.allow-streaming | This option controls whether the CXF component, when running in PAYLOAD mode, will DOM parse the incoming messages into DOM Elements or keep the payload as a javax.xml.transform.Source object that would allow streaming in some cases. | Boolean | |
camel.component.cxf.autowired-enabled | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | Boolean |
camel.component.cxf.bridge-error-handler | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | Boolean |
camel.component.cxf.enabled | Whether to enable auto configuration of the cxf component. This is enabled by default. | Boolean | |
camel.component.cxf.header-filter-strategy | To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter header to and from Camel message. The option is a org.apache.camel.spi.HeaderFilterStrategy type. | HeaderFilterStrategy | |
camel.component.cxf.lazy-start-producer | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | Boolean |
camel.component.cxf.use-global-ssl-context-parameters | Enable usage of global SSL context parameters. | false | Boolean |
camel.component.cxfrs.autowired-enabled | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | Boolean |
camel.component.cxfrs.bridge-error-handler | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | Boolean |
camel.component.cxfrs.enabled | Whether to enable auto configuration of the cxfrs component. This is enabled by default. | Boolean | |
camel.component.cxfrs.header-filter-strategy | To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter header to and from Camel message. The option is a org.apache.camel.spi.HeaderFilterStrategy type. | HeaderFilterStrategy | |
camel.component.cxfrs.lazy-start-producer | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | Boolean |
camel.component.cxfrs.use-global-ssl-context-parameters | Enable usage of global SSL context parameters. | false | Boolean |
Chapter 18. Data Format
Only producer is supported
The Dataformat component allows you to use a Data Format as a Camel component.
18.1. URI format
dataformat:name:(marshal|unmarshal)[?options]
Where name is the name of the Data Format, followed by the operation, which must be either marshal or unmarshal. The options are used for configuring the Data Format in use. See the Data Format documentation for which options it supports.
18.2. DataFormat Options
18.2.1. Configuring Options
Camel components are configured on two separate levels:
- component level
- endpoint level
18.2.1.1. Configuring Component Options
The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth.
Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all.
Configuring components can be done with the Component DSL, in a configuration file (application.properties|yaml), or directly with Java code.
18.2.1.2. Configuring Endpoint Options
Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both.
Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints.
A good practice when configuring options is to use Property Placeholders, which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse.
The following two sections lists all the options, firstly for the component followed by the endpoint.
18.3. Component Options
The Data Format component supports 2 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
autowiredEnabled (advanced) | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | boolean |
18.4. Endpoint Options
The Data Format endpoint is configured using URI syntax:
dataformat:name:operation
with the following path and query parameters:
18.4.1. Path Parameters (2 parameters)
Name | Description | Default | Type |
---|---|---|---|
name (producer) | Required Name of data format. | String | |
operation (producer) | Required Operation to use either marshal or unmarshal. Enum values: marshal, unmarshal. | String | |
18.4.2. Query Parameters (1 parameter)
Name | Description | Default | Type |
---|---|---|---|
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
18.5. Samples
For example to use the JAXB Data Format we can do as follows:
from("activemq:My.Queue"). to("dataformat:jaxb:unmarshal?contextPath=com.acme.model"). to("mqseries:Another.Queue");
And in XML DSL you do:
<camelContext id="camel" xmlns="http://camel.apache.org/schema/spring"> <route> <from uri="activemq:My.Queue"/> <to uri="dataformat:jaxb:unmarshal?contextPath=com.acme.model"/> <to uri="mqseries:Another.Queue"/> </route> </camelContext>
18.6. Spring Boot Auto-Configuration
When using dataformat with Spring Boot make sure to use the following Maven dependency to have support for auto configuration:
<dependency>
  <groupId>org.apache.camel.springboot</groupId>
  <artifactId>camel-dataformat-starter</artifactId>
</dependency>
The component supports 3 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
camel.component.dataformat.autowired-enabled | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | Boolean |
camel.component.dataformat.enabled | Whether to enable auto configuration of the dataformat component. This is enabled by default. | Boolean | |
camel.component.dataformat.lazy-start-producer | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | Boolean |
Chapter 19. Dataset
Both producer and consumer are supported
Testing of distributed and asynchronous processing is notoriously difficult. The Mock, Test and DataSet endpoints work great with the Camel Testing Framework to simplify your unit and integration testing using Enterprise Integration Patterns and Camel’s large range of Components together with the powerful Bean Integration.
The DataSet component provides a mechanism to easily perform load & soak testing of your system. It works by allowing you to create DataSet instances both as a source of messages and as a way to assert that the data set is received.
Camel will use the throughput logger when sending datasets.
19.1. URI format
dataset:name[?options]
Where name is used to find the DataSet instance in the Registry.
Camel ships with a support implementation of org.apache.camel.component.dataset.DataSet, the org.apache.camel.component.dataset.DataSetSupport class, that can be used as a base for implementing your own DataSet. Camel also ships with some implementations that can be used for testing: org.apache.camel.component.dataset.SimpleDataSet, org.apache.camel.component.dataset.ListDataSet and org.apache.camel.component.dataset.FileDataSet, all of which extend DataSetSupport.
19.2. Configuring Options
Camel components are configured on two separate levels:
- component level
- endpoint level
19.2.1. Configuring Component Options
The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth.
Some components have only a few options, while others may have many. Because components typically have pre-configured defaults that are commonly used, you often only need to configure a few options on a component, or none at all.
Configuring components can be done with the Component DSL, in a configuration file (application.properties|yaml), or directly with Java code.
19.2.2. Configuring Endpoint Options
Endpoints are where you do most of the configuration, as endpoints often have many options that let you configure what you need the endpoint to do. The options are also categorized by whether the endpoint is used as a consumer (from), as a producer (to), or for both.
Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type-safe way of configuring endpoints.
A good practice when configuring options is to use Property Placeholders, which allow you to avoid hardcoding URLs, port numbers, sensitive information, and other settings. In other words, placeholders let you externalize the configuration from your code, giving more flexibility and reuse.
The following two sections list all the options, first for the component and then for the endpoint.
19.3. Component Options
The Dataset component supports 5 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
bridgeErrorHandler (consumer) | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | boolean |
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
log (producer) | To turn on logging when the mock receives an incoming message. This will log only one time at INFO level for the incoming message. For more detailed logging then set the logger to DEBUG level for the org.apache.camel.component.mock.MockEndpoint class. | false | boolean |
autowiredEnabled (advanced) | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | boolean |
exchangeFormatter (advanced) | Autowired Sets a custom ExchangeFormatter to convert the Exchange to a String suitable for logging. If not specified, we default to DefaultExchangeFormatter. | ExchangeFormatter |
19.4. Endpoint Options
The Dataset endpoint is configured using URI syntax:
dataset:name
with the following path and query parameters:
19.4.1. Path Parameters (1 parameters)
Name | Description | Default | Type |
---|---|---|---|
name (common) | Required Name of DataSet to lookup in the registry. | DataSet |
19.4.2. Query Parameters (21 parameters)
Name | Description | Default | Type |
---|---|---|---|
dataSetIndex (common) | Controls the behaviour of the CamelDataSetIndex header. For Consumers: - off = the header will not be set - strict/lenient = the header will be set For Producers: - off = the header value will not be verified, and will not be set if it is not present - strict = the header value must be present and will be verified - lenient = the header value will be verified if it is present, and will be set if it is not present. Enum values:
| lenient | String |
bridgeErrorHandler (consumer) | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | boolean |
initialDelay (consumer) | Time period in millis to wait before starting sending messages. | 1000 | long |
minRate (consumer) | Wait until the DataSet contains at least this number of messages. | 0 | int |
preloadSize (consumer) | Sets how many messages should be preloaded (sent) before the route completes its initialization. | 0 | long |
produceDelay (consumer) | Allows a delay to be specified which causes a delay when a message is sent by the consumer (to simulate slow processing). | 3 | long |
exceptionHandler (consumer (advanced)) | To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. | ExceptionHandler | |
exchangePattern (consumer (advanced)) | Sets the exchange pattern when the consumer creates an exchange. Enum values:
| ExchangePattern | |
assertPeriod (producer) | Sets a grace period after which the mock endpoint will re-assert to ensure the preliminary assertion is still valid. This is used for example to assert that exactly a number of messages arrives. For example if expectedMessageCount(int) was set to 5, then the assertion is satisfied when 5 or more message arrives. To ensure that exactly 5 messages arrives, then you would need to wait a little period to ensure no further message arrives. This is what you can use this method for. By default this period is disabled. | long | |
consumeDelay (producer) | Allows a delay to be specified which causes a delay when a message is consumed by the producer (to simulate slow processing). | 0 | long |
expectedCount (producer) | Specifies the expected number of message exchanges that should be received by this endpoint. Beware: If you want to expect that 0 messages, then take extra care, as 0 matches when the tests starts, so you need to set a assert period time to let the test run for a while to make sure there are still no messages arrived; for that use setAssertPeriod(long). An alternative is to use NotifyBuilder, and use the notifier to know when Camel is done routing some messages, before you call the assertIsSatisfied() method on the mocks. This allows you to not use a fixed assert period, to speedup testing times. If you want to assert that exactly n’th message arrives to this mock endpoint, then see also the setAssertPeriod(long) method for further details. | -1 | int |
failFast (producer) | Sets whether assertIsSatisfied() should fail fast at the first detected failed expectation while it may otherwise wait for all expected messages to arrive before performing expectations verifications. Is by default true. Set to false to use behavior as in Camel 2.x. | false | boolean |
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
log (producer) | To turn on logging when the mock receives an incoming message. This will log only one time at INFO level for the incoming message. For more detailed logging then set the logger to DEBUG level for the org.apache.camel.component.mock.MockEndpoint class. | false | boolean |
reportGroup (producer) | A number that is used to turn on throughput logging based on groups of the size. | int | |
resultMinimumWaitTime (producer) | Sets the minimum expected amount of time (in millis) the assertIsSatisfied() will wait on a latch until it is satisfied. | long | |
resultWaitTime (producer) | Sets the maximum amount of time (in millis) the assertIsSatisfied() will wait on a latch until it is satisfied. | long | |
retainFirst (producer) | Specifies to only retain the first n’th number of received Exchanges. This is used when testing with big data, to reduce memory consumption by not storing copies of every Exchange this mock endpoint receives. Important: When using this limitation, then the getReceivedCounter() will still return the actual number of received Exchanges. For example if we have received 5000 Exchanges, and have configured to only retain the first 10 Exchanges, then the getReceivedCounter() will still return 5000 but there is only the first 10 Exchanges in the getExchanges() and getReceivedExchanges() methods. When using this method, then some of the other expectation methods is not supported, for example the expectedBodiesReceived(Object…) sets a expectation on the first number of bodies received. You can configure both setRetainFirst(int) and setRetainLast(int) methods, to limit both the first and last received. | -1 | int |
retainLast (producer) | Specifies to only retain the last n’th number of received Exchanges. This is used when testing with big data, to reduce memory consumption by not storing copies of every Exchange this mock endpoint receives. Important: When using this limitation, then the getReceivedCounter() will still return the actual number of received Exchanges. For example if we have received 5000 Exchanges, and have configured to only retain the last 20 Exchanges, then the getReceivedCounter() will still return 5000 but there is only the last 20 Exchanges in the getExchanges() and getReceivedExchanges() methods. When using this method, then some of the other expectation methods is not supported, for example the expectedBodiesReceived(Object…) sets a expectation on the first number of bodies received. You can configure both setRetainFirst(int) and setRetainLast(int) methods, to limit both the first and last received. | -1 | int |
sleepForEmptyTest (producer) | Allows a sleep to be specified to wait to check that this endpoint really is empty when expectedMessageCount(int) is called with zero. | long | |
copyOnExchange (producer (advanced)) | Sets whether to make a deep copy of the incoming Exchange when received at this mock endpoint. Is by default true. | true | boolean |
19.5. Configuring DataSet
Camel will lookup in the Registry for a bean implementing the DataSet interface. So you can register your own DataSet as:
<bean id="myDataSet" class="com.mycompany.MyDataSet"> <property name="size" value="100"/> </bean>
19.6. Example
For example, to test that a set of messages are sent to a queue and then consumed from the queue without losing any messages:
// send the dataset to a queue
from("dataset:foo").to("activemq:SomeQueue");

// now let's test that the messages are consumed correctly
from("activemq:SomeQueue").to("dataset:foo");
The above would look in the Registry to find the foo DataSet instance which is used to create the messages.
Then you create a DataSet implementation, such as using the SimpleDataSet as described below, configuring things like how big the data set is and what the messages look like, etc.
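For load or soak testing it is common to tune the consumer and producer options listed above; a minimal sketch, with illustrative delay values:
// the consumer side sends the data set, pausing 10 ms between messages
from("dataset:foo?produceDelay=10")
    .to("activemq:SomeQueue");

// the producer side asserts the received messages, simulating 5 ms of processing per message
from("activemq:SomeQueue")
    .to("dataset:foo?consumeDelay=5");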
19.7. DataSetSupport (abstract class)
The DataSetSupport abstract class is a nice starting point for new DataSets, and provides some useful features to derived classes.
19.7.1. Properties on DataSetSupport
Property | Type | Default | Description |
---|---|---|---|
defaultHeaders | Map<String,Object> | null | Specifies the default message body. For SimpleDataSet it is a constant payload; though if you want to create custom payloads per message, create your own derivation of DataSetSupport. |
outputTransformer | org.apache.camel.Processor | null | |
size | long | 10 | Specifies how many messages to send/consume. |
reportCount | long | -1 | Specifies the number of messages to be received before reporting progress. Useful for showing progress of a large load test. If < 0, then size / 5 is used. |
19.8. SimpleDataSet
The SimpleDataSet extends DataSetSupport, and adds a default body.
19.8.1. Additional Properties on SimpleDataSet
Property | Type | Default | Description |
---|---|---|---|
defaultBody | Object | <hello>world!</hello> | Specifies the default message body. By default, the defaultBody is a constant payload. |
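A minimal sketch of a SimpleDataSet with a custom constant payload (the payload and size values are illustrative):
SimpleDataSet dataSet = new SimpleDataSet();
dataSet.setSize(100);
// every message produced by this data set carries the same constant body
dataSet.setDefaultBody("<order id=\"123\"/>");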
19.9. ListDataSet
The ListDataSet extends DataSetSupport, and adds a list of default bodies.
19.9.1. Additional Properties on ListDataSet
Property | Type | Default | Description |
---|---|---|---|
defaultBodies | List<Object> | empty list | Specifies the default message body. By default, the defaultBodies is an empty list. |
size | long | the size of the defaultBodies list | Specifies how many messages to send/consume. This value can be different from the size of the defaultBodies list. |
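A minimal sketch of a ListDataSet populated with a few explicit payloads (the bodies are illustrative, and the list-based constructor is an assumption; the defaultBodies property can also be set via its setter):
List<Object> bodies = new ArrayList<>();
bodies.add("<order id=\"1\"/>");
bodies.add("<order id=\"2\"/>");
bodies.add("<order id=\"3\"/>");

ListDataSet dataSet = new ListDataSet(bodies);
// size may differ from the number of bodies, as described in the table above
dataSet.setSize(30);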
19.10. FileDataSet
The FileDataSet extends ListDataSet, and adds support for loading the bodies from a file.
19.10.1. Additional Properties on FileDataSet
Property | Type | Default | Description |
---|---|---|---|
sourceFile | File | null | Specifies the source file for payloads. |
delimiter | String | \z | Specifies the delimiter pattern used by a java.util.Scanner to split the file into multiple payloads. |
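A minimal sketch of a FileDataSet that splits a source file into one payload per line (the file name is illustrative, and the two-argument constructor is an assumption based on the properties above):
// read payloads from the file, using a newline delimiter instead of the default \z
FileDataSet dataSet = new FileDataSet(new File("src/test/data/payloads.txt"), "\n");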
19.11. Spring Boot Auto-Configuration
When using dataset with Spring Boot make sure to use the following Maven dependency to have support for auto configuration:
<dependency>
  <groupId>org.apache.camel.springboot</groupId>
  <artifactId>camel-dataset-starter</artifactId>
</dependency>
The component supports 11 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
camel.component.dataset-test.autowired-enabled | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | Boolean |
camel.component.dataset-test.enabled | Whether to enable auto configuration of the dataset-test component. This is enabled by default. | Boolean | |
camel.component.dataset-test.exchange-formatter | Sets a custom ExchangeFormatter to convert the Exchange to a String suitable for logging. If not specified, we default to DefaultExchangeFormatter. The option is a org.apache.camel.spi.ExchangeFormatter type. | ExchangeFormatter | |
camel.component.dataset-test.lazy-start-producer | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | Boolean |
camel.component.dataset-test.log | To turn on logging when the mock receives an incoming message. This will log only one time at INFO level for the incoming message. For more detailed logging then set the logger to DEBUG level for the org.apache.camel.component.mock.MockEndpoint class. | false | Boolean |
camel.component.dataset.autowired-enabled | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | Boolean |
camel.component.dataset.bridge-error-handler | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | Boolean |
camel.component.dataset.enabled | Whether to enable auto configuration of the dataset component. This is enabled by default. | Boolean | |
camel.component.dataset.exchange-formatter | Sets a custom ExchangeFormatter to convert the Exchange to a String suitable for logging. If not specified, we default to DefaultExchangeFormatter. The option is a org.apache.camel.spi.ExchangeFormatter type. | ExchangeFormatter | |
camel.component.dataset.lazy-start-producer | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | Boolean |
camel.component.dataset.log | To turn on logging when the mock receives an incoming message. This will log only one time at INFO level for the incoming message. For more detailed logging then set the logger to DEBUG level for the org.apache.camel.component.mock.MockEndpoint class. | false | Boolean |
Chapter 20. Direct
Both producer and consumer are supported
The Direct component provides direct, synchronous invocation of any consumers when a producer sends a message exchange.
This endpoint can be used to connect existing routes in the same camel context.
Asynchronous: The SEDA component provides asynchronous invocation of any consumers when a producer sends a message exchange.
20.1. URI format
direct:someName[?options]
Where someName can be any string to uniquely identify the endpoint
20.2. Configuring Options
Camel components are configured on two separate levels:
- component level
- endpoint level
20.2.1. Configuring Component Options
The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth.
Some components have only a few options, while others may have many. Because components typically have pre-configured defaults that are commonly used, you often only need to configure a few options on a component, or none at all.
Configuring components can be done with the Component DSL, in a configuration file (application.properties|yaml), or directly with Java code.
20.2.2. Configuring Endpoint Options
Endpoints are where you do most of the configuration, as endpoints often have many options that let you configure what you need the endpoint to do. The options are also categorized by whether the endpoint is used as a consumer (from), as a producer (to), or for both.
Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type-safe way of configuring endpoints.
A good practice when configuring options is to use Property Placeholders, which allow you to avoid hardcoding URLs, port numbers, sensitive information, and other settings. In other words, placeholders let you externalize the configuration from your code, giving more flexibility and reuse.
The following two sections list all the options, first for the component and then for the endpoint.
20.3. Component Options
The Direct component supports 5 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
bridgeErrorHandler (consumer) | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | boolean |
block (producer) | If sending a message to a direct endpoint which has no active consumer, then we can tell the producer to block and wait for the consumer to become active. | true | boolean |
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
timeout (producer) | The timeout value to use if block is enabled. | 30000 | long |
autowiredEnabled (advanced) | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | boolean |
20.4. Endpoint Options
The Direct endpoint is configured using URI syntax:
direct:name
with the following path and query parameters:
20.4.1. Path Parameters (1 parameters)
Name | Description | Default | Type |
---|---|---|---|
name (common) | Required Name of direct endpoint. | String |
20.4.2. Query Parameters (8 parameters)
Name | Description | Default | Type |
---|---|---|---|
bridgeErrorHandler (consumer) | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | boolean |
exceptionHandler (consumer (advanced)) | To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. | ExceptionHandler | |
exchangePattern (consumer (advanced)) | Sets the exchange pattern when the consumer creates an exchange. Enum values:
| ExchangePattern | |
block (producer) | If sending a message to a direct endpoint which has no active consumer, then we can tell the producer to block and wait for the consumer to become active. | true | boolean |
failIfNoConsumers (producer) | Whether the producer should fail by throwing an exception, when sending to a DIRECT endpoint with no active consumers. | true | boolean |
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
timeout (producer) | The timeout value to use if block is enabled. | 30000 | long |
synchronous (advanced) | Whether synchronous processing is forced. If enabled then the producer thread, will be forced to wait until the message has been completed before the same thread will continue processing. If disabled (default) then the producer thread may be freed and can do other work while the message is continued processed by other threads (reactive). | false | boolean |
20.5. Samples
In the route below we use the direct component to link the two routes together:
from("activemq:queue:order.in") .to("bean:orderServer?method=validate") .to("direct:processOrder"); from("direct:processOrder") .to("bean:orderService?method=process") .to("activemq:queue:order.out");
And the sample using spring DSL:
<route> <from uri="activemq:queue:order.in"/> <to uri="bean:orderService?method=validate"/> <to uri="direct:processOrder"/> </route> <route> <from uri="direct:processOrder"/> <to uri="bean:orderService?method=process"/> <to uri="activemq:queue:order.out"/> </route>
See also the samples from the SEDA component for how the two components can be used together.
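If the consuming route might not be started yet, the block and timeout options described above can be used on the producer side; a minimal sketch with an illustrative 5 second timeout:
from("activemq:queue:order.in")
    .to("bean:orderService?method=validate")
    // wait up to 5 seconds for the direct:processOrder consumer to become active
    .to("direct:processOrder?block=true&timeout=5000");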
20.6. Spring Boot Auto-Configuration
When using direct with Spring Boot make sure to use the following Maven dependency to have support for auto configuration:
<dependency>
  <groupId>org.apache.camel.springboot</groupId>
  <artifactId>camel-direct-starter</artifactId>
</dependency>
The component supports 6 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
camel.component.direct.autowired-enabled | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | Boolean |
camel.component.direct.block | If sending a message to a direct endpoint which has no active consumer, then we can tell the producer to block and wait for the consumer to become active. | true | Boolean |
camel.component.direct.bridge-error-handler | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | Boolean |
camel.component.direct.enabled | Whether to enable auto configuration of the direct component. This is enabled by default. | Boolean | |
camel.component.direct.lazy-start-producer | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | Boolean |
camel.component.direct.timeout | The timeout value to use if block is enabled. | 30000 | Long |
Chapter 21. Elasticsearch
Since Camel 3.18.3
Only producer is supported
The ElasticSearch component allows you to interface with an ElasticSearch 8.x API using the Java API Client library.
Maven users will need to add the following dependency to their pom.xml for this component:
<dependency>
  <groupId>org.apache.camel</groupId>
  <artifactId>camel-elasticsearch</artifactId>
  <version>${camel-version}</version>
  <!-- use the same version as your Camel core version -->
</dependency>
21.1. URI format
elasticsearch://clusterName[?options]
21.2. Configuring Options
Camel components are configured on two separate levels:
- component level
- endpoint level
21.2.1. Configuring Component Options
The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth.
Some components have only a few options, while others may have many. Because components typically have pre-configured defaults that are commonly used, you often only need to configure a few options on a component, or none at all.
Configuring components can be done with the Component DSL, in a configuration file (application.properties|yaml), or directly with Java code.
21.2.2. Configuring Endpoint Options
Endpoints are where you do most of the configuration, as endpoints often have many options that let you configure what you need the endpoint to do. The options are also categorized by whether the endpoint is used as a consumer (from), as a producer (to), or for both.
Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type-safe way of configuring endpoints.
A good practice when configuring options is to use Property Placeholders, which allow you to avoid hardcoding URLs, port numbers, sensitive information, and other settings. In other words, placeholders let you externalize the configuration from your code, giving more flexibility and reuse.
The following two sections list all the options, first for the component and then for the endpoint.
21.3. Component Options
The Elasticsearch component supports 14 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
connectionTimeout (producer) | The time in ms to wait before connection will timeout. | 30000 | int |
hostAddresses (producer) | Comma separated list with ip:port formatted remote transport addresses to use. The ip and port options must be left blank for hostAddresses to be considered instead. | String | |
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
maxRetryTimeout (producer) | The time in ms before retry. | 30000 | int |
socketTimeout (producer) | The timeout in ms to wait before the socket will timeout. | 30000 | int |
autowiredEnabled (advanced) | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | boolean |
client (advanced) | Autowired To use an existing configured Elasticsearch client, instead of creating a client per endpoint. This allow to customize the client with specific settings. | RestClient | |
enableSniffer (advanced) | Enable automatic discovery of nodes from a running Elasticsearch cluster. If this option is used in conjunction with Spring Boot then it’s managed by the Spring Boot configuration (see: Disable Sniffer in Spring Boot). | false | boolean |
sniffAfterFailureDelay (advanced) | The delay of a sniff execution scheduled after a failure (in milliseconds). | 60000 | int |
snifferInterval (advanced) | The interval between consecutive ordinary sniff executions in milliseconds. Will be honoured when sniffOnFailure is disabled or when there are no failures between consecutive sniff executions. | 300000 | int |
certificatePath (security) | The path of the self-signed certificate to use to access to Elasticsearch. | String | |
enableSSL (security) | Enable SSL. | false | boolean |
password (security) | Password for authentication. | String | 
user (security) | Basic authentication user. | String |
21.4. Endpoint Options
The Elasticsearch endpoint is configured using URI syntax:
elasticsearch:clusterName
with the following path and query parameters:
21.4.1. Path Parameters (1 parameters)
Name | Description | Default | Type |
---|---|---|---|
clusterName (producer) | Required Name of the cluster. | String |
21.4.2. Query Parameters (19 parameters)
Name | Description | Default | Type |
---|---|---|---|
connectionTimeout (producer) | The time in ms to wait before connection will timeout. | 30000 | int |
disconnect (producer) | Disconnect after it finishes calling the producer. | false | boolean |
from (producer) | Starting index of the response. | Integer | |
hostAddresses (producer) | Comma separated list with ip:port formatted remote transport addresses to use. | String | |
indexName (producer) | The name of the index to act against. | String | |
maxRetryTimeout (producer) | The time in ms before retry. | 30000 | int |
operation (producer) | What operation to perform. Enum values:
| ElasticsearchOperation | |
scrollKeepAliveMs (producer) | Time in ms during which elasticsearch will keep search context alive. | 60000 | int |
size (producer) | Size of the response. | Integer | |
socketTimeout (producer) | The timeout in ms to wait before the socket will timeout. | 30000 | int |
useScroll (producer) | Enable scroll usage. | false | boolean |
waitForActiveShards (producer) | Index creation waits for the write consistency number of shards to be available. | 1 | int |
lazyStartProducer (producer (advanced)) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
documentClass (advanced) | The class to use when deserializing the documents. | ObjectNode | Class |
enableSniffer (advanced) | Enable automatic discovery of nodes from a running Elasticsearch cluster. If this option is used in conjunction with Spring Boot then it’s managed by the Spring Boot configuration (see: Disable Sniffer in Spring Boot). | false | boolean |
sniffAfterFailureDelay (advanced) | The delay of a sniff execution scheduled after a failure (in milliseconds). | 60000 | int |
snifferInterval (advanced) | The interval between consecutive ordinary sniff executions in milliseconds. Will be honoured when sniffOnFailure is disabled or when there are no failures between consecutive sniff executions. | 300000 | int |
certificatePath (security) | The path of the self-signed certificate to use to access to Elasticsearch. | String | |
enableSSL (security) | Enable SSL. | false | boolean |
21.4.3. Message Headers
The Elasticsearch component supports 9 message headers, which are listed below:
Name | Description | Default | Type |
---|---|---|---|
operation (producer) Constant: PARAM_OPERATION | The operation to perform. Enum values:
| ElasticsearchOperation | |
indexId (producer) Constant: PARAM_INDEX_ID | The id of the indexed document. | String | |
indexName (producer) Constant: PARAM_INDEX_NAME | The name of the index to act against. | String | |
documentClass (producer) Constant: PARAM_DOCUMENT_CLASS | The full qualified name of the class of the document to unmarshall. | ObjectNode | Class |
waitForActiveShards (producer) Constant: PARAM_WAIT_FOR_ACTIVE_SHARDS | The index creation waits for the write consistency number of shards to be available. | Integer | |
scrollKeepAliveMs (producer) Constant: PARAM_SCROLL_KEEP_ALIVE_MS | Time in ms during which Elasticsearch will keep the search context alive. | Integer | 
useScroll (producer) Constant: PARAM_SCROLL | Set to true to enable scroll usage. | Boolean | |
size (producer) Constant: PARAM_SIZE | The size of the response. | Integer | |
from (producer) Constant: PARAM_FROM | The starting index of the response. | Integer |
21.5. Message Operations
The following ElasticSearch operations are currently supported. Simply set an endpoint URI option or exchange header with a key of "operation" and a value set to one of the following. Some operations also require other parameters or the message body to be set.
operation | message body | description |
---|---|---|
Index | Map, String, byte[], Reader, InputStream or IndexRequest.Builder content to index | Adds content to an index and returns the content’s indexId in the body. You can set the name of the target index by setting the message header with the key "indexName". You can set the indexId by setting the message header with the key "indexId". |
GetById | String or GetRequest.Builder index id of content to retrieve | Retrieves the document corresponding to the given index id and returns a GetResponse object in the body. You can set the name of the target index by setting the message header with the key "indexName". You can set the type of document by setting the message header with the key "documentClass". |
Delete | String or DeleteRequest.Builder index id of content to delete | Deletes the document with the specified index id and returns a Result object in the body. You can set the name of the target index by setting the message header with the key "indexName". |
DeleteIndex | String or DeleteIndexRequest.Builder index name of the index to delete | Deletes the specified indexName and returns a status code in the body. You can set the name of the target index by setting the message header with the key "indexName". |
Bulk | Iterable or BulkRequest.Builder of any type that is already accepted (DeleteOperation.Builder for delete operation, UpdateOperation.Builder for update operation, CreateOperation.Builder for create operation, byte[], InputStream, String, Reader, Map or any document type for index operation) | Adds/Updates/Deletes content from/to an index and returns a List<BulkResponseItem> object in the body You can set the name of the target index by setting the message header with the key "indexName". |
Search | Map, String or SearchRequest.Builder | Search the content with the map of query string. You can set the name of the target index by setting the message header with the key "indexName". You can set the number of hits to return by setting the message header with the key "size". You can set the starting document offset by setting the message header with the key "from". |
MultiSearch | MsearchRequest.Builder | Multiple searches in one request. |
MultiGet | Iterable<String> or MgetRequest.Builder the id of the document to retrieve | Multiple gets in one request. You can set the name of the target index by setting the message header with the key "indexName". |
Exists | None | Checks whether the index exists or not and returns a Boolean flag in the body. You must set the name of the target index by setting the message header with the key "indexName". |
Update | byte[], InputStream, String, Reader, Map or any document type content to update | Updates content to an index and returns the content’s indexId in the body. You can set the name of the target index by setting the message header with the key "indexName". You can set the indexId by setting the message header with the key "indexId". |
Ping | None | Pings the Elasticsearch cluster and returns true if the ping succeeded, false otherwise |
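For example, a minimal sketch of the Exists operation (the endpoint and index names are illustrative; the indexName header key is taken from the header table above):
from("direct:exists")
    .to("elasticsearch://elasticsearch?operation=Exists");

// the target index must be passed in the "indexName" header; the reply body is a Boolean
Boolean exists = template.requestBodyAndHeader("direct:exists", null, "indexName", "twitter", Boolean.class);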
21.6. Configure the component and enable basic authentication
To use the Elasticsearch component, it must be configured with a minimal configuration.
ElasticsearchComponent elasticsearchComponent = new ElasticsearchComponent();
elasticsearchComponent.setHostAddresses("myelkhost:9200");
camelContext.addComponent("elasticsearch", elasticsearchComponent);
For basic authentication with Elasticsearch, or when using a reverse HTTP proxy in front of the Elasticsearch cluster, simply set up basic authentication and SSL on the component as in the example below:
ElasticsearchComponent elasticsearchComponent = new ElasticsearchComponent();
elasticsearchComponent.setHostAddresses("myelkhost:9200");
elasticsearchComponent.setUser("elkuser");
elasticsearchComponent.setPassword("secure!!");
elasticsearchComponent.setEnableSSL(true);
elasticsearchComponent.setCertificatePath(certPath);
camelContext.addComponent("elasticsearch", elasticsearchComponent);
21.7. Index Example
Below is a simple INDEX example
from("direct:index") .to("elasticsearch://elasticsearch?operation=Index&indexName=twitter");
<route> <from uri="direct:index"/> <to uri="elasticsearch://elasticsearch?operation=Index&indexName=twitter"/> </route>
For this operation you need to specify an indexId header.
A client would simply need to pass a body message containing a Map to the route. The result body contains the indexId created.
Map<String, String> map = new HashMap<String, String>();
map.put("content", "test");
String indexId = template.requestBody("direct:index", map, String.class);
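To control the document id yourself, pass the indexId header described above; a minimal sketch with an illustrative id value:
Map<String, String> map = new HashMap<>();
map.put("content", "test");
// index the document under the explicit id "1"
String indexId = template.requestBodyAndHeader("direct:index", map, "indexId", "1", String.class);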
21.8. Search Example
To search on specific field(s) and value, use the Search operation. Pass in the query as a JSON String or as a Map.
from("direct:search") .to("elasticsearch://elasticsearch?operation=Search&indexName=twitter");
<route> <from uri="direct:search"/> <to uri="elasticsearch://elasticsearch?operation=Search&indexName=twitter"/> </route>
String query = "{\"query\":{\"match\":{\"doc.content\":\"new release of ApacheCamel\"}}}";
HitsMetadata<?> response = template.requestBody("direct:search", query, HitsMetadata.class);
Search on specific field(s) using Map.
Map<String, Object> actualQuery = new HashMap<>();
actualQuery.put("doc.content", "new release of ApacheCamel");

Map<String, Object> match = new HashMap<>();
match.put("match", actualQuery);

Map<String, Object> query = new HashMap<>();
query.put("query", match);

HitsMetadata<?> response = template.requestBody("direct:search", query, HitsMetadata.class);
Search using Elasticsearch scroll api in order to fetch all results.
from("direct:search") .to("elasticsearch://elasticsearch?operation=Search&indexName=twitter&useScroll=true&scrollKeepAliveMs=30000");
<route> <from uri="direct:search"/> <to uri="elasticsearch://elasticsearch?operation=Search&indexName=twitter&useScroll=true&scrollKeepAliveMs=30000"/> </route>
String query = "{\"query\":{\"match\":{\"doc.content\":\"new release of ApacheCamel\"}}}";
try (ElasticsearchScrollRequestIterator response = template.requestBody("direct:search", query, ElasticsearchScrollRequestIterator.class)) {
    // do something smart with results
}
The Split EIP can also be used:
from("direct:search") .to("elasticsearch://elasticsearch?operation=Search&indexName=twitter&useScroll=true&scrollKeepAliveMs=30000") .split() .body() .streaming() .to("mock:output") .end();
21.9. MultiSearch Example
To run multiple searches in one request, use the MultiSearch operation. Pass in an MsearchRequest.Builder instance.
from("direct:multiSearch") .to("elasticsearch://elasticsearch?operation=MultiSearch");
<route> <from uri="direct:multiSearch"/> <to uri="elasticsearch://elasticsearch?operation=MultiSearch"/> </route>
MultiSearch on specific field(s)
MsearchRequest.Builder builder = new MsearchRequest.Builder().index("twitter").searches(
        new RequestItem.Builder().header(new MultisearchHeader.Builder().build())
                .body(new MultisearchBody.Builder().query(b -> b.matchAll(x -> x)).build()).build(),
        new RequestItem.Builder().header(new MultisearchHeader.Builder().build())
                .body(new MultisearchBody.Builder().query(b -> b.matchAll(x -> x)).build()).build());
List<MultiSearchResponseItem<?>> response = template.requestBody("direct:multiSearch", builder, List.class);
21.10. Document type
For all the search operations, it is possible to indicate the type of document to retrieve in order to get the result already unmarshalled with the expected type.
The document type can be set using the header "documentClass" or via the uri parameter of the same name.
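For example, a minimal sketch using the uri parameter (com.example.Tweet is a hypothetical document class on your classpath):
from("direct:getById")
    .to("elasticsearch://elasticsearch?operation=GetById&indexName=twitter&documentClass=com.example.Tweet");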
21.11. Using Camel Elasticsearch with Spring Boot
When you use camel-elasticsearch-starter with Spring Boot v2, then you must declare the following dependency in your own pom.xml.
<dependency>
  <groupId>jakarta.json</groupId>
  <artifactId>jakarta.json-api</artifactId>
  <version>2.0.2</version>
</dependency>
This is needed because Spring Boot v2 provides jakarta.json-api:1.1.6, and Elasticsearch requires json-api v2.
21.11.1. Use RestClient provided by Spring Boot
By default, Spring Boot auto-configures an Elasticsearch RestClient that is used by Camel. It is possible to customize the client with the following basic properties:
spring.elasticsearch.uris=myelkhost:9200
spring.elasticsearch.username=elkuser
spring.elasticsearch.password=secure!!
More information can be found in the Spring Boot application properties reference, under the spring.elasticsearch.* data properties (for example, spring.elasticsearch.connection-timeout).
21.11.2. Disable Sniffer when using Spring Boot
When Spring Boot is on the classpath the Sniffer client for Elasticsearch is enabled by default. This option can be disabled in the Spring Boot Configuration:
spring:
  autoconfigure:
    exclude: org.springframework.boot.autoconfigure.elasticsearch.ElasticsearchRestClientAutoConfiguration
21.12. Spring Boot Auto-Configuration
When using elasticsearch with Spring Boot make sure to use the following Maven dependency to have support for auto configuration:
<dependency>
  <groupId>org.apache.camel.springboot</groupId>
  <artifactId>camel-elasticsearch-starter</artifactId>
</dependency>
The component supports 15 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
camel.component.elasticsearch.autowired-enabled | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | Boolean |
camel.component.elasticsearch.certificate-path | The path of the self-signed certificate to use to access to Elasticsearch. | String | |
camel.component.elasticsearch.client | To use an existing configured Elasticsearch client, instead of creating a client per endpoint. This allow to customize the client with specific settings. The option is a org.elasticsearch.client.RestClient type. | RestClient | |
camel.component.elasticsearch.connection-timeout | The time in ms to wait before connection will timeout. | 30000 | Integer |
camel.component.elasticsearch.enable-s-s-l | Enable SSL. | false | Boolean |
camel.component.elasticsearch.enable-sniffer | Enable automatic discovery of nodes from a running Elasticsearch cluster. If this option is used in conjunction with Spring Boot then it’s managed by the Spring Boot configuration (see: Disable Sniffer in Spring Boot). | false | Boolean |
camel.component.elasticsearch.enabled | Whether to enable auto configuration of the elasticsearch component. This is enabled by default. | Boolean | |
camel.component.elasticsearch.host-addresses | Comma separated list with ip:port formatted remote transport addresses to use. The ip and port options must be left blank for hostAddresses to be considered instead. | String | |
camel.component.elasticsearch.lazy-start-producer | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | Boolean |
camel.component.elasticsearch.max-retry-timeout | The time in ms before retry. | 30000 | Integer |
camel.component.elasticsearch.password | Password for authentication. | String | 
camel.component.elasticsearch.sniff-after-failure-delay | The delay of a sniff execution scheduled after a failure (in milliseconds). | 60000 | Integer |
camel.component.elasticsearch.sniffer-interval | The interval between consecutive ordinary sniff executions in milliseconds. Will be honoured when sniffOnFailure is disabled or when there are no failures between consecutive sniff executions. | 300000 | Integer |
camel.component.elasticsearch.socket-timeout | The timeout in ms to wait before the socket will timeout. | 30000 | Integer |
camel.component.elasticsearch.user | Basic authentication user. | String
Chapter 22. FHIR
Both producer and consumer are supported
The FHIR component integrates with the HAPI-FHIR library which is an open-source implementation of the FHIR (Fast Healthcare Interoperability Resources) specification in Java.
Maven users will need to add the following dependency to their pom.xml for this component:
<dependency>
  <groupId>org.apache.camel</groupId>
  <artifactId>camel-fhir</artifactId>
  <version>${camel-version}</version>
</dependency>
22.1. URI Format
The FHIR Component uses the following URI format:
fhir://endpoint-prefix/endpoint?[options]
Endpoint prefix can be one of:
- capabilities
- create
- delete
- history
- load-page
- meta
- operation
- patch
- read
- search
- transaction
- update
- validate
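As an illustration, here is a sketch of a route using the create prefix. The method name resource, the inBody parameter resourceAsString, and the server URL are assumptions, so check the generated API tables for the exact signatures:
from("direct:createPatient")
    .to("fhir://create/resource?inBody=resourceAsString"
        + "&serverUrl=http://localhost:8080/fhir"
        + "&fhirVersion=R4");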
22.2. Configuring Options
Camel components are configured on two separate levels:
- component level
- endpoint level
22.2.1. Configuring Component Options
The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth.
Some components have only a few options, while others may have many. Because components typically have pre-configured defaults that are commonly used, you often only need to configure a few options on a component, or none at all.
Configuring components can be done with the Component DSL, in a configuration file (application.properties|yaml), or directly with Java code.
22.2.2. Configuring Endpoint Options
Endpoints are where you do most of the configuration, as endpoints often have many options that let you configure what you need the endpoint to do. The options are also categorized by whether the endpoint is used as a consumer (from), as a producer (to), or for both.
Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type-safe way of configuring endpoints.
A good practice when configuring options is to use Property Placeholders, which allow you to avoid hardcoding URLs, port numbers, sensitive information, and other settings. In other words, placeholders let you externalize the configuration from your code, giving more flexibility and reuse.
The following two sections list all the options, first for the component and then for the endpoint.
22.3. Component Options
The FHIR component supports 27 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
encoding (common) | Encoding to use for all requests. Enum values:
| String | |
fhirVersion (common) | The FHIR Version to use. Enum values:
| R4 | String |
log (common) | Will log every request and response. | false | boolean |
prettyPrint (common) | Pretty print all requests. | false | boolean |
serverUrl (common) | The FHIR server base URL. | String | |
bridgeErrorHandler (consumer) | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | boolean |
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
autowiredEnabled (advanced) | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | boolean |
client (advanced) | To use the custom client. | IGenericClient | |
clientFactory (advanced) | To use the custom client factory. | IRestfulClientFactory | |
compress (advanced) | Compresses outgoing (POST/PUT) contents to the GZIP format. | false | boolean |
configuration (advanced) | To use the shared configuration. | FhirConfiguration | |
connectionTimeout (advanced) | How long to try and establish the initial TCP connection (in ms). | 10000 | Integer |
deferModelScanning (advanced) | When this option is set, model classes will not be scanned for children until the child list for the given type is actually accessed. | false | boolean |
fhirContext (advanced) | FhirContext is an expensive object to create. To avoid creating multiple instances, it can be set directly. | FhirContext | |
forceConformanceCheck (advanced) | Force conformance check. | false | boolean |
sessionCookie (advanced) | HTTP session cookie to add to every request. | String | |
socketTimeout (advanced) | How long to block for individual read/write operations (in ms). | 10000 | Integer |
summary (advanced) | Request that the server modify the response using the _summary param. Enum values: COUNT, TEXT, DATA, TRUE, FALSE | String | |
validationMode (advanced) | When should Camel validate the FHIR Server’s conformance statement. Enum values: NEVER, ONCE | ONCE | String |
proxyHost (proxy) | The proxy host. | String | |
proxyPassword (proxy) | The proxy password. | String | |
proxyPort (proxy) | The proxy port. | Integer | |
proxyUser (proxy) | The proxy username. | String | |
accessToken (security) | OAuth access token. | String | |
password (security) | Password to use for basic authentication. | String | |
username (security) | Username to use for basic authentication. | String |
22.4. Endpoint Options
The FHIR endpoint is configured using URI syntax:
fhir:apiName/methodName
with the following path and query parameters:
22.4.1. Path Parameters (2 parameters)
Name | Description | Default | Type |
---|---|---|---|
apiName (common) | Required What kind of operation to perform. Enum values: CAPABILITIES, CREATE, DELETE, HISTORY, LOAD_PAGE, META, OPERATION, PATCH, READ, SEARCH, TRANSACTION, UPDATE, VALIDATE | FhirApiName | |
methodName (common) | Required What sub operation to use for the selected operation. | String |
22.4.2. Query Parameters (44 parameters)
Name | Description | Default | Type |
---|---|---|---|
encoding (common) | Encoding to use for all requests. Enum values: JSON, XML | String | |
fhirVersion (common) | The FHIR Version to use. Enum values: DSTU2, DSTU2_HL7ORG, DSTU2_1, DSTU3, R4, R5 | R4 | String |
inBody (common) | Sets the name of a parameter to be passed in the exchange In Body. | String | |
log (common) | Will log every request and response. | false | boolean |
prettyPrint (common) | Pretty print all requests. | false | boolean |
serverUrl (common) | The FHIR server base URL. | String | |
bridgeErrorHandler (consumer) | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | boolean |
sendEmptyMessageWhenIdle (consumer) | If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead. | false | boolean |
exceptionHandler (consumer (advanced)) | To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. | ExceptionHandler | |
exchangePattern (consumer (advanced)) | Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly, InOut, InOptionalOut | ExchangePattern | |
pollStrategy (consumer (advanced)) | A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel. | PollingConsumerPollStrategy | |
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
client (advanced) | To use the custom client. | IGenericClient | |
clientFactory (advanced) | To use the custom client factory. | IRestfulClientFactory | |
compress (advanced) | Compresses outgoing (POST/PUT) contents to the GZIP format. | false | boolean |
connectionTimeout (advanced) | How long to try and establish the initial TCP connection (in ms). | 10000 | Integer |
deferModelScanning (advanced) | When this option is set, model classes will not be scanned for children until the child list for the given type is actually accessed. | false | boolean |
fhirContext (advanced) | FhirContext is an expensive object to create. To avoid creating multiple instances, it can be set directly. | FhirContext | |
forceConformanceCheck (advanced) | Force conformance check. | false | boolean |
sessionCookie (advanced) | HTTP session cookie to add to every request. | String | |
socketTimeout (advanced) | How long to block for individual read/write operations (in ms). | 10000 | Integer |
summary (advanced) | Request that the server modify the response using the _summary param. Enum values: COUNT, TEXT, DATA, TRUE, FALSE | String | |
validationMode (advanced) | When should Camel validate the FHIR Server’s conformance statement. Enum values: NEVER, ONCE | ONCE | String |
proxyHost (proxy) | The proxy host. | String | |
proxyPassword (proxy) | The proxy password. | String | |
proxyPort (proxy) | The proxy port. | Integer | |
proxyUser (proxy) | The proxy username. | String | |
backoffErrorThreshold (scheduler) | The number of subsequent error polls (failed due some error) that should happen before the backoffMultipler should kick-in. | int | |
backoffIdleThreshold (scheduler) | The number of subsequent idle polls that should happen before the backoffMultipler should kick-in. | int | |
backoffMultiplier (scheduler) | To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the next actual attempt is happening again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured. | int | |
delay (scheduler) | Milliseconds before the next poll. | 500 | long |
greedy (scheduler) | If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the previous run polled 1 or more messages. | false | boolean |
initialDelay (scheduler) | Milliseconds before the first poll starts. | 1000 | long |
repeatCount (scheduler) | Specifies a maximum limit of number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever. | 0 | long |
runLoggingLevel (scheduler) | The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that. Enum values: TRACE, DEBUG, INFO, WARN, ERROR, OFF | TRACE | LoggingLevel |
scheduledExecutorService (scheduler) | Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool. | ScheduledExecutorService | |
scheduler (scheduler) | To use a cron scheduler from either camel-spring or camel-quartz component. Use value spring or quartz for built in scheduler. | none | Object |
schedulerProperties (scheduler) | To configure additional properties when using a custom scheduler or any of the Quartz, Spring based scheduler. | Map | |
startScheduler (scheduler) | Whether the scheduler should be auto started. | true | boolean |
timeUnit (scheduler) | Time unit for initialDelay and delay options. Enum values: NANOSECONDS, MICROSECONDS, MILLISECONDS, SECONDS, MINUTES, HOURS, DAYS | MILLISECONDS | TimeUnit |
useFixedDelay (scheduler) | Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details. | true | boolean |
accessToken (security) | OAuth access token. | String | |
password (security) | Password to use for basic authentication. | String | |
username (security) | Username to use for basic authentication. | String |
22.5. API Parameters (13 APIs)
The FHIR endpoint is an API based component and has additional parameters based on which API name and API method are used. The API name and API method are located in the endpoint URI as the apiName/methodName path parameters:
fhir:apiName/methodName
There are 13 API names as listed in the table below:
API Name | Type | Description |
---|---|---|
capabilities | Both | API to fetch the capability statement for the server |
create | Both | API for the create operation, which creates a new resource instance on the server |
delete | Both | API for the delete operation, which performs a logical delete on a server resource |
history | Both | API for the history method |
load-page | Both | API that loads the previous/next bundle of resources from a paged set, using the link specified in the link type=next tag within the atom bundle |
meta | Both | API for the meta operations, which can be used to get, add and remove tags and other Meta elements from a resource or across the server |
operation | Both | API for extended FHIR operations |
patch | Both | API for the patch operation, which performs a logical patch on a server resource |
read | Both | API method for read operations |
search | Both | API to search for resources matching a given set of criteria |
transaction | Both | API for sending a transaction (collection of resources) to the server to be executed as a single unit |
update | Both | API for the update operation, which updates a resource instance on the server |
validate | Both | API for validating resources
Each API is documented in the following sections.
22.5.1. API: capabilities
Both producer and consumer are supported
The capabilities API is defined in the syntax as follows:
fhir:capabilities/methodName?[parameters]
The method is listed in the table below, followed by detailed syntax for each method. (API methods can have a shorthand alias name which can be used in the syntax instead of the name)
Method | Description |
---|---|
ofType | Retrieve the conformance statement using the given model type |
22.5.1.1. Method ofType
Signatures:
- org.hl7.fhir.instance.model.api.IBaseConformance ofType(Class<org.hl7.fhir.instance.model.api.IBaseConformance> type, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters);
The fhir/ofType API method has the parameters listed in the table below:
Parameter | Description | Type |
---|---|---|
extraParameters | See ExtraParameters for a full list of parameters that can be passed, may be NULL | Map |
type | The model type | Class |
In addition to the parameters above, the fhir API can also use any of the Query Parameters.
Any of the parameters can be provided in either the endpoint URI, or dynamically in a message header. The message header name must be of the format CamelFhir.parameter. The inBody parameter overrides message header, i.e. the endpoint parameter inBody=myParameterNameHere would override a CamelFhir.myParameterNameHere header.
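As a sketch of the header-based style described above, the type parameter can be supplied as a CamelFhir.type header. The route below (inside a RouteBuilder configure() method) assumes an R4 server and the HAPI R4 CapabilityStatement model class; the server URL is a placeholder:
from("direct:capabilities")
    // the model type parameter is passed as a message header named CamelFhir.type
    .setHeader("CamelFhir.type", constant(org.hl7.fhir.r4.model.CapabilityStatement.class))
    .to("fhir:capabilities/ofType?serverUrl=http://localhost:8080/fhir");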
22.5.2. API: create
Both producer and consumer are supported
The create API is defined in the syntax as follows:
fhir:create/methodName?[parameters]
The 1 method is listed in the table below, followed by detailed syntax for each method. (API methods can have a shorthand alias name which can be used in the syntax instead of the name)
Method | Description |
---|---|
resource | Creates an IBaseResource on the server |
22.5.2.1. Method resource
Signatures:
- ca.uhn.fhir.rest.api.MethodOutcome resource(String resourceAsString, String url, ca.uhn.fhir.rest.api.PreferReturnEnum preferReturn, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters);
- ca.uhn.fhir.rest.api.MethodOutcome resource(org.hl7.fhir.instance.model.api.IBaseResource resource, String url, ca.uhn.fhir.rest.api.PreferReturnEnum preferReturn, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters);
The fhir/resource API method has the parameters listed in the table below:
Parameter | Description | Type |
---|---|---|
extraParameters | See ExtraParameters for a full list of parameters that can be passed, may be NULL | Map |
preferReturn | Add a Prefer header to the request, which requests that the server include or suppress the resource body as a part of the result. If a resource is returned by the server it will be parsed and accessible to the client via MethodOutcome#getResource(), may be null | PreferReturnEnum |
resource | The resource to create | IBaseResource |
resourceAsString | The resource to create | String |
url | The search URL to use. The format of this URL should be of the form ResourceType?Parameters, for example: Patient?name=Smith&identifier=13.2.4.11.4%7C847366, may be null | String |
In addition to the parameters above, the fhir API can also use any of the Query Parameters.
Any of the parameters can be provided in either the endpoint URI, or dynamically in a message header. The message header name must be of the format CamelFhir.parameter. The inBody parameter overrides message header, i.e. the endpoint parameter inBody=myParameterNameHere would override a CamelFhir.myParameterNameHere header.
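For example, a minimal producer sketch (inside a RouteBuilder configure() method; the server URL is a placeholder, not a default) that sends the serialized resource in the message body as the resourceAsString parameter:
from("direct:createPatient")
    // inBody maps the message body (a serialized resource) to the resourceAsString parameter
    .to("fhir:create/resource?inBody=resourceAsString&serverUrl=http://localhost:8080/fhir");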
22.5.3. API: delete
Both producer and consumer are supported
The delete API is defined in the syntax as follows:
fhir:delete/methodName?[parameters]
The 3 methods are listed in the table below, followed by detailed syntax for each method. (API methods can have a shorthand alias name which can be used in the syntax instead of the name)
Method | Description |
---|---|
resource | Deletes the given resource |
resourceById | Deletes the resource by resource type, e.g. Patient |
resourceConditionalByUrl | Specifies that the delete should be performed as a conditional delete against a given search URL |
22.5.3.1. Method resource
Signatures:
- org.hl7.fhir.instance.model.api.IBaseOperationOutcome resource(org.hl7.fhir.instance.model.api.IBaseResource resource, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters);
The fhir/resource API method has the parameters listed in the table below:
Parameter | Description | Type |
---|---|---|
extraParameters | See ExtraParameters for a full list of parameters that can be passed, may be NULL | Map |
resource | The IBaseResource to delete | IBaseResource |
22.5.3.2. Method resourceById
Signatures:
- org.hl7.fhir.instance.model.api.IBaseOperationOutcome resourceById(String type, String stringId, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters);
- org.hl7.fhir.instance.model.api.IBaseOperationOutcome resourceById(org.hl7.fhir.instance.model.api.IIdType id, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters);
The fhir/resourceById API method has the parameters listed in the table below:
Parameter | Description | Type |
---|---|---|
extraParameters | See ExtraParameters for a full list of parameters that can be passed, may be NULL | Map |
id | The IIdType referencing the resource | IIdType |
stringId | The resource id | String |
type | The resource type e.g. Patient | String |
22.5.3.3. Method resourceConditionalByUrl
Signatures:
- org.hl7.fhir.instance.model.api.IBaseOperationOutcome resourceConditionalByUrl(String url, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters);
The fhir/resourceConditionalByUrl API method has the parameters listed in the table below:
Parameter | Description | Type |
---|---|---|
extraParameters | See ExtraParameters for a full list of parameters that can be passed, may be NULL | Map |
url | The search URL to use. The format of this URL should be of the form ResourceType?Parameters, for example: Patient?name=Smith&identifier=13.2.4.11.4%7C847366 | String |
In addition to the parameters above, the fhir API can also use any of the Query Parameters.
Any of the parameters can be provided in either the endpoint URI, or dynamically in a message header. The message header name must be of the format CamelFhir.parameter. The inBody parameter overrides message header, i.e. the endpoint parameter inBody=myParameterNameHere would override a CamelFhir.myParameterNameHere header.
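As an illustrative sketch (the server URL is again a placeholder), resourceById can take the resource type in the endpoint URI while the resource id arrives in the message body:
from("direct:deletePatient")
    // type is fixed in the URI, the id is taken from the message body via inBody
    .to("fhir:delete/resourceById?type=Patient&inBody=stringId&serverUrl=http://localhost:8080/fhir");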
22.5.4. API: history
Both producer and consumer are supported
The history API is defined in the syntax as follows:
fhir:history/methodName?[parameters]
The 3 methods are listed in the table below, followed by detailed syntax for each method. (API methods can have a shorthand alias name which can be used in the syntax instead of the name)
Method | Description |
---|---|
onInstance | Perform the operation across all versions of a specific resource (by ID and type) on the server |
onServer | Perform the operation across all versions of all resources of all types on the server |
onType | Perform the operation across all versions of all resources of the given type on the server |
22.5.4.1. Method onInstance
Signatures:
- org.hl7.fhir.instance.model.api.IBaseBundle onInstance(org.hl7.fhir.instance.model.api.IIdType id, Class<org.hl7.fhir.instance.model.api.IBaseBundle> returnType, Integer count, java.util.Date cutoff, org.hl7.fhir.instance.model.api.IPrimitiveType<java.util.Date> iCutoff, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters);
The fhir/onInstance API method has the parameters listed in the table below:
Parameter | Description | Type |
---|---|---|
count | Request that the server return only up to theCount number of resources, may be NULL | Integer |
cutoff | Request that the server return only resource versions that were created at or after the given time (inclusive), may be NULL | Date |
extraParameters | See ExtraParameters for a full list of parameters that can be passed, may be NULL | Map |
iCutoff | Request that the server return only resource versions that were created at or after the given time (inclusive), may be NULL | IPrimitiveType |
id | The IIdType which must be populated with both a resource type and a resource ID at a minimum | IIdType |
returnType | Request that the method return a Bundle resource (such as ca.uhn.fhir.model.dstu2.resource.Bundle). Use this method if you are accessing a DSTU2 server. | Class |
22.5.4.2. Method onServer
Signatures:
- org.hl7.fhir.instance.model.api.IBaseBundle onServer(Class<org.hl7.fhir.instance.model.api.IBaseBundle> returnType, Integer count, java.util.Date cutoff, org.hl7.fhir.instance.model.api.IPrimitiveType<java.util.Date> iCutoff, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters);
The fhir/onServer API method has the parameters listed in the table below:
Parameter | Description | Type |
---|---|---|
count | Request that the server return only up to theCount number of resources, may be NULL | Integer |
cutoff | Request that the server return only resource versions that were created at or after the given time (inclusive), may be NULL | Date |
extraParameters | See ExtraParameters for a full list of parameters that can be passed, may be NULL | Map |
iCutoff | Request that the server return only resource versions that were created at or after the given time (inclusive), may be NULL | IPrimitiveType |
returnType | Request that the method return a Bundle resource (such as ca.uhn.fhir.model.dstu2.resource.Bundle). Use this method if you are accessing a DSTU2 server. | Class |
22.5.4.3. Method onType
Signatures:
- org.hl7.fhir.instance.model.api.IBaseBundle onType(Class<org.hl7.fhir.instance.model.api.IBaseResource> resourceType, Class<org.hl7.fhir.instance.model.api.IBaseBundle> returnType, Integer count, java.util.Date cutoff, org.hl7.fhir.instance.model.api.IPrimitiveType<java.util.Date> iCutoff, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters);
The fhir/onType API method has the parameters listed in the table below:
Parameter | Description | Type |
---|---|---|
count | Request that the server return only up to theCount number of resources, may be NULL | Integer |
cutoff | Request that the server return only resource versions that were created at or after the given time (inclusive), may be NULL | Date |
extraParameters | See ExtraParameters for a full list of parameters that can be passed, may be NULL | Map |
iCutoff | Request that the server return only resource versions that were created at or after the given time (inclusive), may be NULL | IPrimitiveType |
resourceType | The resource type to search for | Class |
returnType | Request that the method return a Bundle resource (such as ca.uhn.fhir.model.dstu2.resource.Bundle). Use this method if you are accessing a DSTU2 server. | Class |
In addition to the parameters above, the fhir API can also use any of the Query Parameters.
Any of the parameters can be provided in either the endpoint URI, or dynamically in a message header. The message header name must be of the format CamelFhir.parameter. The inBody parameter overrides message header, i.e. the endpoint parameter inBody=myParameterNameHere would override a CamelFhir.myParameterNameHere header.
22.5.5. API: load-page
Both producer and consumer are supported
The load-page API is defined in the syntax as follows:
fhir:load-page/methodName?[parameters]
The 3 methods are listed in the table below, followed by detailed syntax for each method. (API methods can have a shorthand alias name which can be used in the syntax instead of the name)
Method | Description |
---|---|
byUrl | Load a page of results using the given URL and bundle type and return a DSTU1 Atom bundle |
next | Load the next page of results using the link with relation next in the bundle |
previous | Load the previous page of results using the link with relation prev in the bundle |
22.5.5.1. Method byUrl
Signatures:
- org.hl7.fhir.instance.model.api.IBaseBundle byUrl(String url, Class<org.hl7.fhir.instance.model.api.IBaseBundle> returnType, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters);
The fhir/byUrl API method has the parameters listed in the table below:
Parameter | Description | Type |
---|---|---|
extraParameters | See ExtraParameters for a full list of parameters that can be passed, may be NULL | Map |
returnType | The return type | Class |
url | The search url | String |
22.5.5.2. Method next
Signatures:
- org.hl7.fhir.instance.model.api.IBaseBundle next(org.hl7.fhir.instance.model.api.IBaseBundle bundle, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters);
The fhir/next API method has the parameters listed in the table below:
Parameter | Description | Type |
---|---|---|
bundle | The IBaseBundle | IBaseBundle |
extraParameters | See ExtraParameters for a full list of parameters that can be passed, may be NULL | Map |
22.5.5.3. Method previous
Signatures:
- org.hl7.fhir.instance.model.api.IBaseBundle previous(org.hl7.fhir.instance.model.api.IBaseBundle bundle, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters);
The fhir/previous API method has the parameters listed in the table below:
Parameter | Description | Type |
---|---|---|
bundle | The IBaseBundle | IBaseBundle |
extraParameters | See ExtraParameters for a full list of parameters that can be passed, may be NULL | Map |
In addition to the parameters above, the fhir API can also use any of the Query Parameters.
Any of the parameters can be provided in either the endpoint URI, or dynamically in a message header. The message header name must be of the format CamelFhir.parameter. The inBody parameter overrides message header, i.e. the endpoint parameter inBody=myParameterNameHere would override a CamelFhir.myParameterNameHere header.
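A brief sketch (server URL used as a placeholder) that pages forward through a result set; the message body is expected to hold the IBaseBundle returned by a previous search:
from("direct:nextPage")
    // the bundle parameter comes from the message body
    .to("fhir:load-page/next?inBody=bundle&serverUrl=http://localhost:8080/fhir");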
22.5.6. API: meta
Both producer and consumer are supported
The meta API is defined in the syntax as follows:
fhir:meta/methodName?[parameters]
The 5 methods are listed in the table below, followed by detailed syntax for each method. (API methods can have a shorthand alias name which can be used in the syntax instead of the name)
Method | Description |
---|---|
add | Add the elements in the given metadata to the already existing set (do not remove any) |
delete | Delete the elements in the given metadata from the given id |
getFromResource | Fetch the current metadata from a specific resource |
getFromServer | Fetch the current metadata from the whole Server |
getFromType | Fetch the current metadata from a specific type |
22.5.6.1. Method add
Signatures:
- org.hl7.fhir.instance.model.api.IBaseMetaType add(org.hl7.fhir.instance.model.api.IBaseMetaType meta, org.hl7.fhir.instance.model.api.IIdType id, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters);
The fhir/add API method has the parameters listed in the table below:
Parameter | Description | Type |
---|---|---|
extraParameters | See ExtraParameters for a full list of parameters that can be passed, may be NULL | Map |
id | The id | IIdType |
meta | The IBaseMetaType class | IBaseMetaType |
22.5.6.2. Method delete
Signatures:
- org.hl7.fhir.instance.model.api.IBaseMetaType delete(org.hl7.fhir.instance.model.api.IBaseMetaType meta, org.hl7.fhir.instance.model.api.IIdType id, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters);
The fhir/delete API method has the parameters listed in the table below:
Parameter | Description | Type |
---|---|---|
extraParameters | See ExtraParameters for a full list of parameters that can be passed, may be NULL | Map |
id | The id | IIdType |
meta | The IBaseMetaType class | IBaseMetaType |
22.5.6.3. Method getFromResource
Signatures:
- org.hl7.fhir.instance.model.api.IBaseMetaType getFromResource(Class<org.hl7.fhir.instance.model.api.IBaseMetaType> metaType, org.hl7.fhir.instance.model.api.IIdType id, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters);
The fhir/getFromResource API method has the parameters listed in the table below:
Parameter | Description | Type |
---|---|---|
extraParameters | See ExtraParameters for a full list of parameters that can be passed, may be NULL | Map |
id | The id | IIdType |
metaType | The IBaseMetaType class | Class |
22.5.6.4. Method getFromServer
Signatures:
- org.hl7.fhir.instance.model.api.IBaseMetaType getFromServer(Class<org.hl7.fhir.instance.model.api.IBaseMetaType> metaType, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters);
The fhir/getFromServer API method has the parameters listed in the table below:
Parameter | Description | Type |
---|---|---|
extraParameters | See ExtraParameters for a full list of parameters that can be passed, may be NULL | Map |
metaType | The type of the meta datatype for the given FHIR model version (should be MetaDt.class or MetaType.class) | Class |
22.5.6.5. Method getFromType
Signatures:
- org.hl7.fhir.instance.model.api.IBaseMetaType getFromType(Class<org.hl7.fhir.instance.model.api.IBaseMetaType> metaType, String resourceType, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters);
The fhir/getFromType API method has the parameters listed in the table below:
Parameter | Description | Type |
---|---|---|
extraParameters | See ExtraParameters for a full list of parameters that can be passed, may be NULL | Map |
metaType | The IBaseMetaType class | Class |
resourceType | The resource type e.g Patient | String |
In addition to the parameters above, the fhir API can also use any of the Query Parameters.
Any of the parameters can be provided in either the endpoint URI, or dynamically in a message header. The message header name must be of the format CamelFhir.parameter. The inBody parameter overrides message header, i.e. the endpoint parameter inBody=myParameterNameHere would override a CamelFhir.myParameterNameHere header.
22.5.7. API: operation
Both producer and consumer are supported
The operation API is defined in the syntax as follows:
fhir:operation/methodName?[parameters]
The 5 methods are listed in the table below, followed by detailed syntax for each method. (API methods can have a shorthand alias name which can be used in the syntax instead of the name)
Method | Description |
---|---|
onInstance | Perform the operation across all versions of a specific resource (by ID and type) on the server |
onInstanceVersion | This operation operates on a specific version of a resource |
onServer | Perform the operation across all versions of all resources of all types on the server |
onType | Perform the operation across all versions of all resources of the given type on the server |
processMessage | This operation is called $process-message as defined by the FHIR specification |
22.5.7.1. Method onInstance
Signatures:
- org.hl7.fhir.instance.model.api.IBaseResource onInstance(org.hl7.fhir.instance.model.api.IIdType id, String name, org.hl7.fhir.instance.model.api.IBaseParameters parameters, Class<org.hl7.fhir.instance.model.api.IBaseParameters> outputParameterType, boolean useHttpGet, Class<org.hl7.fhir.instance.model.api.IBaseResource> returnType, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters);
The fhir/onInstance API method has the parameters listed in the table below:
Parameter | Description | Type |
---|---|---|
extraParameters | See ExtraParameters for a full list of parameters that can be passed, may be NULL | Map |
id | Resource (version will be stripped) | IIdType |
name | Operation name | String |
outputParameterType | The type to use for the output parameters (this should be set to Parameters.class drawn from the version of the FHIR structures you are using), may be NULL | Class |
parameters | The parameters to use as input. May also be null if the operation does not require any input parameters. | IBaseParameters |
returnType | If this operation returns a single resource body as its return type instead of a Parameters resource, use this method to specify that resource type. This is useful for certain operations (e.g. Patient/NNN/$everything) which return a bundle instead of a Parameters resource, may be NULL | Class |
useHttpGet | Use HTTP GET verb | Boolean |
22.5.7.2. Method onInstanceVersion
Signatures:
- org.hl7.fhir.instance.model.api.IBaseResource onInstanceVersion(org.hl7.fhir.instance.model.api.IIdType id, String name, org.hl7.fhir.instance.model.api.IBaseParameters parameters, Class<org.hl7.fhir.instance.model.api.IBaseParameters> outputParameterType, boolean useHttpGet, Class<org.hl7.fhir.instance.model.api.IBaseResource> returnType, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters);
The fhir/onInstanceVersion API method has the parameters listed in the table below:
Parameter | Description | Type |
---|---|---|
extraParameters | See ExtraParameters for a full list of parameters that can be passed, may be NULL | Map |
id | Resource version | IIdType |
name | Operation name | String |
outputParameterType | The type to use for the output parameters (this should be set to Parameters.class drawn from the version of the FHIR structures you are using), may be NULL | Class |
parameters | The parameters to use as input. May also be null if the operation does not require any input parameters. | IBaseParameters |
returnType | If this operation returns a single resource body as its return type instead of a Parameters resource, use this method to specify that resource type. This is useful for certain operations (e.g. Patient/NNN/$everything) which return a bundle instead of a Parameters resource, may be NULL | Class |
useHttpGet | Use HTTP GET verb | Boolean |
22.5.7.3. Method onServer
Signatures:
- org.hl7.fhir.instance.model.api.IBaseResource onServer(String name, org.hl7.fhir.instance.model.api.IBaseParameters parameters, Class<org.hl7.fhir.instance.model.api.IBaseParameters> outputParameterType, boolean useHttpGet, Class<org.hl7.fhir.instance.model.api.IBaseResource> returnType, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters);
The fhir/onServer API method has the parameters listed in the table below:
Parameter | Description | Type |
---|---|---|
extraParameters | See ExtraParameters for a full list of parameters that can be passed, may be NULL | Map |
name | Operation name | String |
outputParameterType | The type to use for the output parameters (this should be set to Parameters.class drawn from the version of the FHIR structures you are using), may be NULL | Class |
parameters | The parameters to use as input. May also be null if the operation does not require any input parameters. | IBaseParameters |
returnType | If this operation returns a single resource body as its return type instead of a Parameters resource, use this method to specify that resource type. This is useful for certain operations (e.g. Patient/NNN/$everything) which return a bundle instead of a Parameters resource, may be NULL | Class |
useHttpGet | Use HTTP GET verb | Boolean |
22.5.7.4. Method onType
Signatures:
- org.hl7.fhir.instance.model.api.IBaseResource onType(Class<org.hl7.fhir.instance.model.api.IBaseResource> resourceType, String name, org.hl7.fhir.instance.model.api.IBaseParameters parameters, Class<org.hl7.fhir.instance.model.api.IBaseParameters> outputParameterType, boolean useHttpGet, Class<org.hl7.fhir.instance.model.api.IBaseResource> returnType, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters);
The fhir/onType API method has the parameters listed in the table below:
Parameter | Description | Type |
---|---|---|
extraParameters | See ExtraParameters for a full list of parameters that can be passed, may be NULL | Map |
name | Operation name | String |
outputParameterType | The type to use for the output parameters (this should be set to Parameters.class drawn from the version of the FHIR structures you are using), may be NULL | Class |
parameters | The parameters to use as input. May also be null if the operation does not require any input parameters. | IBaseParameters |
resourceType | The resource type to operate on | Class |
returnType | If this operation returns a single resource body as its return type instead of a Parameters resource, use this method to specify that resource type. This is useful for certain operations (e.g. Patient/NNN/$everything) which return a bundle instead of a Parameters resource, may be NULL | Class |
useHttpGet | Use HTTP GET verb | Boolean |
22.5.7.5. Method processMessage
Signatures:
- org.hl7.fhir.instance.model.api.IBaseBundle processMessage(String respondToUri, org.hl7.fhir.instance.model.api.IBaseBundle msgBundle, boolean asynchronous, Class<org.hl7.fhir.instance.model.api.IBaseBundle> responseClass, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters);
The fhir/processMessage API method has the parameters listed in the table below:
Parameter | Description | Type |
---|---|---|
asynchronous | Whether to process the message asynchronously or synchronously, defaults to synchronous. | Boolean |
extraParameters | See ExtraParameters for a full list of parameters that can be passed, may be NULL | Map |
msgBundle | Set the Message Bundle to POST to the messaging server | IBaseBundle |
respondToUri | An optional query parameter indicating that responses from the receiving server should be sent to this URI, may be NULL | String |
responseClass | The response class | Class |
In addition to the parameters above, the fhir API can also use any of the Query Parameters.
Any of the parameters can be provided in either the endpoint URI, or dynamically in a message header. The message header name must be of the format CamelFhir.parameter. The inBody parameter overrides message header, i.e. the endpoint parameter inBody=myParameterNameHere would override a CamelFhir.myParameterNameHere header.
22.5.8. API: patch
Both producer and consumer are supported
The patch API is defined in the syntax as follows:
fhir:patch/methodName?[parameters]
The 2 methods are listed in the table below, followed by detailed syntax for each method. (API methods can have a shorthand alias name which can be used in the syntax instead of the name)
Method | Description |
---|---|
patchById | Applies the patch to the given resource ID |
patchByUrl | Specifies that the update should be performed as a conditional create against a given search URL |
22.5.8.1. Method patchById
Signatures:
- ca.uhn.fhir.rest.api.MethodOutcome patchById(String patchBody, String stringId, ca.uhn.fhir.rest.api.PreferReturnEnum preferReturn, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters);
- ca.uhn.fhir.rest.api.MethodOutcome patchById(String patchBody, org.hl7.fhir.instance.model.api.IIdType id, ca.uhn.fhir.rest.api.PreferReturnEnum preferReturn, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters);
The fhir/patchById API method has the parameters listed in the table below:
Parameter | Description | Type |
---|---|---|
extraParameters | See ExtraParameters for a full list of parameters that can be passed, may be NULL | Map |
id | The resource ID to patch | IIdType |
patchBody | The body of the patch document, serialized in either XML or JSON, which conforms to the JSON Patch or XML Patch specification | String |
preferReturn | Add a Prefer header to the request, which requests that the server include or suppress the resource body as a part of the result. If a resource is returned by the server it will be parsed and accessible to the client via MethodOutcome#getResource() | PreferReturnEnum |
stringId | The resource ID to patch | String |
22.5.8.2. Method patchByUrl
Signatures:
- ca.uhn.fhir.rest.api.MethodOutcome patchByUrl(String patchBody, String url, ca.uhn.fhir.rest.api.PreferReturnEnum preferReturn, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters);
The fhir/patchByUrl API method has the parameters listed in the table below:
Parameter | Description | Type |
---|---|---|
extraParameters | See ExtraParameters for a full list of parameters that can be passed, may be NULL | Map |
patchBody | The body of the patch document, serialized in either XML or JSON, which conforms to the JSON Patch or XML Patch specification | String |
preferReturn | Add a Prefer header to the request, which requests that the server include or suppress the resource body as a part of the result. If a resource is returned by the server it will be parsed and accessible to the client via MethodOutcome#getResource() | PreferReturnEnum |
url | The search URL to use. The format of this URL should be of the form ResourceType?Parameters, for example: Patient?name=Smith&identifier=13.2.4.11.4%7C847366 | String |
In addition to the parameters above, the fhir API can also use any of the Query Parameters.
Any of the parameters can be provided in either the endpoint URI, or dynamically in a message header. The message header name must be of the format CamelFhir.parameter. The inBody parameter overrides message header, i.e. the endpoint parameter inBody=myParameterNameHere would override a CamelFhir.myParameterNameHere header.
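As a sketch of a conditional patch (the search URL value and the server URL below are illustrative assumptions), the patch document travels in the message body while the search URL is passed as a header:
from("direct:patchPatient")
    // the patch document (JSON or XML) is the message body; the target is selected by search URL
    .setHeader("CamelFhir.url", constant("Patient?identifier=12345"))
    .to("fhir:patch/patchByUrl?inBody=patchBody&serverUrl=http://localhost:8080/fhir");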
22.5.9. API: read
Both producer and consumer are supported
The read API is defined in the syntax as follows:
fhir:read/methodName?[parameters]
The 2 methods are listed in the table below, followed by detailed syntax for each method. (API methods can have a shorthand alias name which can be used in the syntax instead of the name)
Method | Description |
---|---|
resourceById | Reads an IBaseResource on the server by id |
resourceByUrl | Reads an IBaseResource on the server by url |
22.5.9.1. Method resourceById
Signatures:
- org.hl7.fhir.instance.model.api.IBaseResource resourceById(Class<org.hl7.fhir.instance.model.api.IBaseResource> resource, Long longId, String ifVersionMatches, Boolean returnNull, org.hl7.fhir.instance.model.api.IBaseResource returnResource, Boolean throwError, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters);
- org.hl7.fhir.instance.model.api.IBaseResource resourceById(Class<org.hl7.fhir.instance.model.api.IBaseResource> resource, String stringId, String version, String ifVersionMatches, Boolean returnNull, org.hl7.fhir.instance.model.api.IBaseResource returnResource, Boolean throwError, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters);
- org.hl7.fhir.instance.model.api.IBaseResource resourceById(Class<org.hl7.fhir.instance.model.api.IBaseResource> resource, org.hl7.fhir.instance.model.api.IIdType id, String ifVersionMatches, Boolean returnNull, org.hl7.fhir.instance.model.api.IBaseResource returnResource, Boolean throwError, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters);
- org.hl7.fhir.instance.model.api.IBaseResource resourceById(String resourceClass, Long longId, String ifVersionMatches, Boolean returnNull, org.hl7.fhir.instance.model.api.IBaseResource returnResource, Boolean throwError, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters);
- org.hl7.fhir.instance.model.api.IBaseResource resourceById(String resourceClass, String stringId, String ifVersionMatches, String version, Boolean returnNull, org.hl7.fhir.instance.model.api.IBaseResource returnResource, Boolean throwError, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters);
- org.hl7.fhir.instance.model.api.IBaseResource resourceById(String resourceClass, org.hl7.fhir.instance.model.api.IIdType id, String ifVersionMatches, Boolean returnNull, org.hl7.fhir.instance.model.api.IBaseResource returnResource, Boolean throwError, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters);
The fhir/resourceById API method has the parameters listed in the table below:
Parameter | Description | Type |
---|---|---|
extraParameters | See ExtraParameters for a full list of parameters that can be passed, may be NULL | Map |
id | The IIdType referencing the resource | IIdType |
ifVersionMatches | A version to match against the newest version on the server | String |
longId | The resource ID | Long |
resource | The resource to read (e.g. Patient) | Class |
resourceClass | The resource to read (e.g. Patient) | String |
returnNull | Return null if version matches | Boolean |
returnResource | Return the resource if version matches | IBaseResource |
stringId | The resource ID | String |
throwError | Throw error if the version matches | Boolean |
version | The resource version | String |
22.5.9.2. Method resourceByUrl
Signatures:
- org.hl7.fhir.instance.model.api.IBaseResource resourceByUrl(Class<org.hl7.fhir.instance.model.api.IBaseResource> resource, String url, String ifVersionMatches, Boolean returnNull, org.hl7.fhir.instance.model.api.IBaseResource returnResource, Boolean throwError, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters);
- org.hl7.fhir.instance.model.api.IBaseResource resourceByUrl(Class<org.hl7.fhir.instance.model.api.IBaseResource> resource, org.hl7.fhir.instance.model.api.IIdType iUrl, String ifVersionMatches, Boolean returnNull, org.hl7.fhir.instance.model.api.IBaseResource returnResource, Boolean throwError, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters);
- org.hl7.fhir.instance.model.api.IBaseResource resourceByUrl(String resourceClass, String url, String ifVersionMatches, Boolean returnNull, org.hl7.fhir.instance.model.api.IBaseResource returnResource, Boolean throwError, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters);
- org.hl7.fhir.instance.model.api.IBaseResource resourceByUrl(String resourceClass, org.hl7.fhir.instance.model.api.IIdType iUrl, String ifVersionMatches, Boolean returnNull, org.hl7.fhir.instance.model.api.IBaseResource returnResource, Boolean throwError, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters);
The fhir/resourceByUrl API method has the parameters listed in the table below:
Parameter | Description | Type |
---|---|---|
extraParameters | See ExtraParameters for a full list of parameters that can be passed, may be NULL | Map |
iUrl | The IIdType referencing the resource by absolute url | IIdType |
ifVersionMatches | A version to match against the newest version on the server | String |
resource | The resource to read (e.g. Patient) | Class |
resourceClass | The resource to read (e.g. Patient.class) | String |
returnNull | Return null if version matches | Boolean |
returnResource | Return the resource if version matches | IBaseResource |
throwError | Throw error if the version matches | Boolean |
url | Referencing the resource by absolute url | String |
In addition to the parameters above, the fhir API can also use any of the Query Parameters.
Any of the parameters can be provided in either the endpoint URI, or dynamically in a message header. The message header name must be of the format CamelFhir.parameter. The inBody parameter overrides message header, i.e. the endpoint parameter inBody=myParameterNameHere would override a CamelFhir.myParameterNameHere header.
22.5.10. API: search
Both producer and consumer are supported
The search API is defined in the syntax as follows:
fhir:search/methodName?[parameters]
The 1 method is listed in the table below, followed by detailed syntax for each method. (API methods can have a shorthand alias name which can be used in the syntax instead of the name)
Method | Description |
---|---|
searchByUrl | Perform a search directly by URL |
22.5.10.1. Method searchByUrl
Signatures:
- org.hl7.fhir.instance.model.api.IBaseBundle searchByUrl(String url, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters);
The fhir/searchByUrl API method has the parameters listed in the table below:
Parameter | Description | Type |
---|---|---|
extraParameters | See ExtraParameters for a full list of parameters that can be passed, may be NULL | Map |
url | The URL to search for. Note that this URL may be complete (an absolute URL), in which case the client’s base URL will be ignored, or it can be relative (e.g. Patient?name=foo), in which case the client’s base URL will be used. | String |
In addition to the parameters above, the fhir API can also use any of the Query Parameters.
Any of the parameters can be provided in either the endpoint URI, or dynamically in a message header. The message header name must be of the format CamelFhir.parameter. The inBody parameter overrides message header, i.e. the endpoint parameter inBody=myParameterNameHere would override a CamelFhir.myParameterNameHere header.
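For example (the server URL is a placeholder), the search URL itself can be carried in the message body:
from("direct:searchPatients")
    // the body carries the search URL, e.g. a relative URL such as Patient?name=Smith
    .to("fhir:search/searchByUrl?inBody=url&serverUrl=http://localhost:8080/fhir");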
22.5.11. API: transaction
Both producer and consumer are supported
The transaction API is defined in the syntax as follows:
fhir:transaction/methodName?[parameters]
The 2 methods are listed in the table below, followed by detailed syntax for each method. (API methods can have a shorthand alias name which can be used in the syntax instead of the name)
Method | Description |
---|---|
withBundle | Use the given raw text (should be a Bundle resource) as the transaction input |
withResources | Use a list of resources as the transaction input |
22.5.11.1. Method withBundle
Signatures:
- String withBundle(String stringBundle, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters);
- org.hl7.fhir.instance.model.api.IBaseBundle withBundle(org.hl7.fhir.instance.model.api.IBaseBundle bundle, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters);
The fhir/withBundle API method has the parameters listed in the table below:
Parameter | Description | Type |
---|---|---|
bundle | Bundle to use in the transaction | IBaseBundle |
extraParameters | See ExtraParameters for a full list of parameters that can be passed, may be NULL | Map |
stringBundle | Bundle to use in the transaction | String |
22.5.11.2. Method withResources
Signatures:
- java.util.List<org.hl7.fhir.instance.model.api.IBaseResource> withResources(java.util.List<org.hl7.fhir.instance.model.api.IBaseResource> resources, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters);
The fhir/withResources API method has the parameters listed in the table below:
Parameter | Description | Type |
---|---|---|
extraParameters | See ExtraParameters for a full list of parameters that can be passed, may be NULL | Map |
resources | Resources to use in the transaction | List |
In addition to the parameters above, the fhir API can also use any of the Query Parameters.
Any of the parameters can be provided in either the endpoint URI, or dynamically in a message header. The message header name must be of the format CamelFhir.parameter. The inBody parameter overrides message header, i.e. the endpoint parameter inBody=myParameterNameHere would override a CamelFhir.myParameterNameHere header.
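A minimal sketch (server URL used as a placeholder) that posts a raw Bundle held in the message body as a single transaction:
from("direct:transaction")
    // the message body is the raw Bundle (XML or JSON) to execute as one unit
    .to("fhir:transaction/withBundle?inBody=stringBundle&serverUrl=http://localhost:8080/fhir");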
22.5.12. API: update
Both producer and consumer are supported
The update API is defined in the syntax as follows:
fhir:update/methodName?[parameters]
The 2 methods are listed in the table below, followed by detailed syntax for each method. (API methods can have a shorthand alias name which can be used in the syntax instead of the name)
Method | Description |
---|---|
resource | Updates an IBaseResource on the server by id |
resourceBySearchUrl | Updates an IBaseResource on the server by search url |
22.5.12.1. Method resource
Signatures:
- ca.uhn.fhir.rest.api.MethodOutcome resource(String resourceAsString, String stringId, ca.uhn.fhir.rest.api.PreferReturnEnum preferReturn, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters);
- ca.uhn.fhir.rest.api.MethodOutcome resource(String resourceAsString, org.hl7.fhir.instance.model.api.IIdType id, ca.uhn.fhir.rest.api.PreferReturnEnum preferReturn, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters);
- ca.uhn.fhir.rest.api.MethodOutcome resource(org.hl7.fhir.instance.model.api.IBaseResource resource, String stringId, ca.uhn.fhir.rest.api.PreferReturnEnum preferReturn, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters);
- ca.uhn.fhir.rest.api.MethodOutcome resource(org.hl7.fhir.instance.model.api.IBaseResource resource, org.hl7.fhir.instance.model.api.IIdType id, ca.uhn.fhir.rest.api.PreferReturnEnum preferReturn, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters);
The fhir/resource API method has the parameters listed in the table below:
Parameter | Description | Type |
---|---|---|
extraParameters | See ExtraParameters for a full list of parameters that can be passed, may be NULL | Map |
id | The IIdType referencing the resource | IIdType |
preferReturn | Whether the server should include or suppress the resource body as a part of the result | PreferReturnEnum |
resource | The resource to update (e.g. Patient) | IBaseResource |
resourceAsString | The resource body to update | String |
stringId | The ID referencing the resource | String |
22.5.12.2. Method resourceBySearchUrl
Signatures:
- ca.uhn.fhir.rest.api.MethodOutcome resourceBySearchUrl(String resourceAsString, String url, ca.uhn.fhir.rest.api.PreferReturnEnum preferReturn, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters);
- ca.uhn.fhir.rest.api.MethodOutcome resourceBySearchUrl(org.hl7.fhir.instance.model.api.IBaseResource resource, String url, ca.uhn.fhir.rest.api.PreferReturnEnum preferReturn, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters);
The fhir/resourceBySearchUrl API method has the parameters listed in the table below:
Parameter | Description | Type |
---|---|---|
extraParameters | See ExtraParameters for a full list of parameters that can be passed, may be NULL | Map |
preferReturn | Whether the server should include or suppress the resource body as a part of the result | PreferReturnEnum |
resource | The resource to update (e.g. Patient) | IBaseResource |
resourceAsString | The resource body to update | String |
url | Specifies that the update should be performed as a conditional create against a given search URL | String |
In addition to the parameters above, the fhir API can also use any of the Query Parameters.
Any of the parameters can be provided in either the endpoint URI, or dynamically in a message header. The message header name must be of the format CamelFhir.parameter. The inBody parameter overrides message header, i.e. the endpoint parameter inBody=myParameterNameHere would override a CamelFhir.myParameterNameHere header.
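As a sketch of a conditional update (the search URL value and the server URL are illustrative assumptions), the resource to update travels in the message body while the search URL is supplied as a header:
from("direct:updatePatient")
    // conditional update: the resource is the message body, the target is selected by search URL
    .setHeader("CamelFhir.url", constant("Patient?identifier=12345"))
    .to("fhir:update/resourceBySearchUrl?inBody=resource&serverUrl=http://localhost:8080/fhir");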
22.5.13. API: validate
Both producer and consumer are supported
The validate API is defined in the syntax as follows:
fhir:validate/methodName?[parameters]
The 1 method is listed in the table below, followed by detailed syntax for each method. (API methods can have a shorthand alias name which can be used in the syntax instead of the name)
Method | Description |
---|---|
resource | Validates the resource |
22.5.13.1. Method resource
Signatures:
- ca.uhn.fhir.rest.api.MethodOutcome resource(String resourceAsString, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters);
- ca.uhn.fhir.rest.api.MethodOutcome resource(org.hl7.fhir.instance.model.api.IBaseResource resource, java.util.Map<org.apache.camel.component.fhir.api.ExtraParameters, Object> extraParameters);
The fhir/resource API method has the parameters listed in the table below:
Parameter | Description | Type |
---|---|---|
extraParameters | See ExtraParameters for a full list of parameters that can be passed, may be NULL | Map |
resource | The IBaseResource to validate | IBaseResource |
resourceAsString | Raw resource to validate | String |
In addition to the parameters above, the fhir API can also use any of the Query Parameters.
Any of the parameters can be provided in either the endpoint URI, or dynamically in a message header. The message header name must be of the format CamelFhir.parameter. The inBody parameter overrides message header, i.e. the endpoint parameter inBody=myParameterNameHere would override a CamelFhir.myParameterNameHere header.
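For example (the server URL is a placeholder), a raw resource in the message body can be sent for server-side validation:
from("direct:validatePatient")
    // the serialized resource in the message body becomes the resourceAsString parameter
    .to("fhir:validate/resource?inBody=resourceAsString&serverUrl=http://localhost:8080/fhir");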
22.6. Spring Boot Auto-Configuration
When using fhir with Spring Boot make sure to use the following Maven dependency to have support for auto configuration:
<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-fhir-starter</artifactId> </dependency>
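For example, component-level options from the table below can be set in application.properties; the server URL in this sketch is a placeholder:
camel.component.fhir.server-url=http://localhost:8080/fhir
camel.component.fhir.fhir-version=R4
camel.component.fhir.log=true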
The component supports 56 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
camel.component.fhir.access-token | OAuth access token. | String | |
camel.component.fhir.autowired-enabled | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | Boolean |
camel.component.fhir.bridge-error-handler | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | Boolean |
camel.component.fhir.client | To use the custom client. The option is a ca.uhn.fhir.rest.client.api.IGenericClient type. | IGenericClient | |
camel.component.fhir.client-factory | To use the custom client factory. The option is a ca.uhn.fhir.rest.client.api.IRestfulClientFactory type. | IRestfulClientFactory | |
camel.component.fhir.compress | Compresses outgoing (POST/PUT) contents to the GZIP format. | false | Boolean |
camel.component.fhir.configuration | To use the shared configuration. The option is a org.apache.camel.component.fhir.FhirConfiguration type. | FhirConfiguration | |
camel.component.fhir.connection-timeout | How long to try and establish the initial TCP connection (in ms). | 10000 | Integer |
camel.component.fhir.defer-model-scanning | When this option is set, model classes will not be scanned for children until the child list for the given type is actually accessed. | false | Boolean |
camel.component.fhir.enabled | Whether to enable auto configuration of the fhir component. This is enabled by default. | Boolean | |
camel.component.fhir.encoding | Encoding to use for all requests. | String | |
camel.component.fhir.fhir-context | FhirContext is an expensive object to create. To avoid creating multiple instances, it can be set directly. The option is a ca.uhn.fhir.context.FhirContext type. | FhirContext | |
camel.component.fhir.fhir-version | The FHIR Version to use. | R4 | String |
camel.component.fhir.force-conformance-check | Force conformance check. | false | Boolean |
camel.component.fhir.lazy-start-producer | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | Boolean |
camel.component.fhir.log | Will log every request and response. | false | Boolean |
camel.component.fhir.password | Password to use for basic authentication. | String | |
camel.component.fhir.pretty-print | Pretty print all requests. | false | Boolean |
camel.component.fhir.proxy-host | The proxy host. | String | |
camel.component.fhir.proxy-password | The proxy password. | String | |
camel.component.fhir.proxy-port | The proxy port. | Integer | |
camel.component.fhir.proxy-user | The proxy username. | String | |
camel.component.fhir.server-url | The FHIR server base URL. | String | |
camel.component.fhir.session-cookie | HTTP session cookie to add to every request. | String | |
camel.component.fhir.socket-timeout | How long to block for individual read/write operations (in ms). | 10000 | Integer |
camel.component.fhir.summary | Request that the server modify the response using the _summary param. | String | |
camel.component.fhir.username | Username to use for basic authentication. | String | |
camel.component.fhir.validation-mode | When should Camel validate the FHIR Server’s conformance statement. | ONCE | String |
camel.dataformat.fhirjson.content-type-header | Whether the data format should set the Content-Type header with the type from the data format. For example application/xml for data formats marshalling to XML, or application/json for data formats marshalling to JSON. | true | Boolean |
camel.dataformat.fhirjson.dont-encode-elements | If provided, specifies the elements which should NOT be encoded. Valid values for this field would include: Patient - Don’t encode patient and all its children Patient.name - Don’t encode the patient’s name Patient.name.family - Don’t encode the patient’s family name *.text - Don’t encode the text element on any resource (only the very first position may contain a wildcard) DSTU2 note: Note that values including meta, such as Patient.meta will work for DSTU2 parsers, but values with subelements on meta such as Patient.meta.lastUpdated will only work in DSTU3 mode. | Set | |
camel.dataformat.fhirjson.dont-strip-versions-from-references-at-paths | If supplied value(s), any resource references at the specified paths will have their resource versions encoded instead of being automatically stripped during the encoding process. This setting has no effect on the parsing process. This method provides a finer-grained level of control than setStripVersionsFromReferences(String) and any paths specified by this method will be encoded even if setStripVersionsFromReferences(String) has been set to true (which is the default). | List | |
camel.dataformat.fhirjson.enabled | Whether to enable auto configuration of the fhirJson data format. This is enabled by default. | Boolean | |
camel.dataformat.fhirjson.encode-elements | If provided, specifies the elements which should be encoded, to the exclusion of all others. Valid values for this field would include: Patient - Encode patient and all its children Patient.name - Encode only the patient’s name Patient.name.family - Encode only the patient’s family name *.text - Encode the text element on any resource (only the very first position may contain a wildcard) *.(mandatory) - This is a special case which causes any mandatory fields (min > 0) to be encoded. | Set | |
camel.dataformat.fhirjson.encode-elements-applies-to-child-resources-only | If set to true (default is false), the values supplied to setEncodeElements(Set) will not be applied to the root resource (typically a Bundle), but will be applied to any sub-resources contained within it (i.e. search result resources in that bundle). | false | Boolean |
camel.dataformat.fhirjson.fhir-version | The version of FHIR to use. Possible values are: DSTU2,DSTU2_HL7ORG,DSTU2_1,DSTU3,R4. | DSTU3 | String |
camel.dataformat.fhirjson.omit-resource-id | If set to true (default is false) the ID of any resources being encoded will not be included in the output. Note that this does not apply to contained resources, only to root resources. In other words, if this is set to true, contained resources will still have local IDs but the outer/containing ID will not have an ID. | false | Boolean |
camel.dataformat.fhirjson.override-resource-id-with-bundle-entry-full-url | If set to true (which is the default), the Bundle.entry.fullUrl will override the Bundle.entry.resource’s resource id if the fullUrl is defined. This behavior happens when parsing the source data into a Bundle object. Set this to false if this is not the desired behavior (e.g. the client code wishes to perform additional validation checks between the fullUrl and the resource id). | false | Boolean |
camel.dataformat.fhirjson.pretty-print | Sets the pretty print flag, meaning that the parser will encode resources with human-readable spacing and newlines between elements instead of condensing output as much as possible. | false | Boolean |
camel.dataformat.fhirjson.server-base-url | Sets the server’s base URL used by this parser. If a value is set, resource references will be turned into relative references if they are provided as absolute URLs but have a base matching the given base. | String | |
camel.dataformat.fhirjson.strip-versions-from-references | If set to true (which is the default), resource references containing a version will have the version removed when the resource is encoded. This is generally good behaviour because in most situations, references from one resource to another should be to the resource by ID, not by ID and version. In some cases though, it may be desirable to preserve the version in resource links. In that case, this value should be set to false. This method provides the ability to globally disable reference encoding. If finer-grained control is needed, use setDontStripVersionsFromReferencesAtPaths(List). | false | Boolean |
camel.dataformat.fhirjson.summary-mode | If set to true (default is false) only elements marked by the FHIR specification as being summary elements will be included. | false | Boolean |
camel.dataformat.fhirjson.suppress-narratives | If set to true (default is false), narratives will not be included in the encoded values. | false | Boolean |
camel.dataformat.fhirxml.content-type-header | Whether the data format should set the Content-Type header with the type from the data format. For example application/xml for data formats marshalling to XML, or application/json for data formats marshalling to JSON. | true | Boolean |
camel.dataformat.fhirxml.dont-encode-elements | If provided, specifies the elements which should NOT be encoded. Valid values for this field would include: Patient - Don’t encode patient and all its children Patient.name - Don’t encode the patient’s name Patient.name.family - Don’t encode the patient’s family name *.text - Don’t encode the text element on any resource (only the very first position may contain a wildcard) DSTU2 note: Note that values including meta, such as Patient.meta will work for DSTU2 parsers, but values with subelements on meta such as Patient.meta.lastUpdated will only work in DSTU3 mode. | Set | |
camel.dataformat.fhirxml.dont-strip-versions-from-references-at-paths | If supplied value(s), any resource references at the specified paths will have their resource versions encoded instead of being automatically stripped during the encoding process. This setting has no effect on the parsing process. This method provides a finer-grained level of control than setStripVersionsFromReferences(String) and any paths specified by this method will be encoded even if setStripVersionsFromReferences(String) has been set to true (which is the default). | List | |
camel.dataformat.fhirxml.enabled | Whether to enable auto configuration of the fhirXml data format. This is enabled by default. | Boolean | |
camel.dataformat.fhirxml.encode-elements | If provided, specifies the elements which should be encoded, to the exclusion of all others. Valid values for this field would include: Patient - Encode patient and all its children Patient.name - Encode only the patient’s name Patient.name.family - Encode only the patient’s family name *.text - Encode the text element on any resource (only the very first position may contain a wildcard) *.(mandatory) - This is a special case which causes any mandatory fields (min > 0) to be encoded. | Set | |
camel.dataformat.fhirxml.encode-elements-applies-to-child-resources-only | If set to true (default is false), the values supplied to setEncodeElements(Set) will not be applied to the root resource (typically a Bundle), but will be applied to any sub-resources contained within it (i.e. search result resources in that bundle). | false | Boolean |
camel.dataformat.fhirxml.fhir-version | The version of FHIR to use. Possible values are: DSTU2,DSTU2_HL7ORG,DSTU2_1,DSTU3,R4. | DSTU3 | String |
camel.dataformat.fhirxml.omit-resource-id | If set to true (default is false) the ID of any resources being encoded will not be included in the output. Note that this does not apply to contained resources, only to root resources. In other words, if this is set to true, contained resources will still have local IDs but the outer/containing ID will not have an ID. | false | Boolean |
camel.dataformat.fhirxml.override-resource-id-with-bundle-entry-full-url | If set to true (which is the default), the Bundle.entry.fullUrl will override the Bundle.entry.resource’s resource id if the fullUrl is defined. This behavior happens when parsing the source data into a Bundle object. Set this to false if this is not the desired behavior (e.g. the client code wishes to perform additional validation checks between the fullUrl and the resource id). | false | Boolean |
camel.dataformat.fhirxml.pretty-print | Sets the pretty print flag, meaning that the parser will encode resources with human-readable spacing and newlines between elements instead of condensing output as much as possible. | false | Boolean |
camel.dataformat.fhirxml.server-base-url | Sets the server’s base URL used by this parser. If a value is set, resource references will be turned into relative references if they are provided as absolute URLs but have a base matching the given base. | String | |
camel.dataformat.fhirxml.strip-versions-from-references | If set to true (which is the default), resource references containing a version will have the version removed when the resource is encoded. This is generally good behaviour because in most situations, references from one resource to another should be to the resource by ID, not by ID and version. In some cases though, it may be desirable to preserve the version in resource links. In that case, this value should be set to false. This method provides the ability to globally disable reference encoding. If finer-grained control is needed, use setDontStripVersionsFromReferencesAtPaths(List). | false | Boolean |
camel.dataformat.fhirxml.summary-mode | If set to true (default is false) only elements marked by the FHIR specification as being summary elements will be included. | false | Boolean |
camel.dataformat.fhirxml.suppress-narratives | If set to true (default is false), narratives will not be included in the encoded values. | false | Boolean |
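As a minimal sketch of how a few of the camel.component.fhir.* options above could be set in application.properties (the server URL and credentials below are placeholders, not real values):
camel.component.fhir.server-url=http://localhost:8080/fhir
camel.component.fhir.fhir-version=R4
camel.component.fhir.username=camel
camel.component.fhir.password=changeit
camel.component.fhir.log=true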
Chapter 23. File
Both producer and consumer are supported
The File component provides access to file systems, allowing files to be processed by any other Camel Components or messages from other components to be saved to disk.
23.1. URI format
file:directoryName[?options]
Where directoryName represents the underlying file directory.
Only directories
Camel supports only endpoints configured with a starting directory. So the directoryName must be a directory. If you want to consume a single file only, you can use the fileName option, e.g. by setting fileName=thefilename. Also, the starting directory must not contain dynamic expressions with ${ } placeholders. Again, use the fileName option to specify the dynamic part of the filename.
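For example, a consumer sketch (directory, file and bean names are illustrative) that picks up exactly one file from a fixed starting directory:
from("file:orders/inbox?fileName=order.csv").to("bean:processOrder");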
Avoid reading files currently being written by another application
Beware that the JDK File IO API is somewhat limited in detecting whether another application is currently writing or copying a file, and the implementation can differ between OS platforms as well. This can lead to Camel thinking the file is not locked by another process and starting to consume it. Therefore you have to investigate yourself what suits your environment. To help with this, Camel provides different readLock options and a doneFileName option that you can use. See also the section Consuming files from folders where others drop files directly.
23.2. Configuring Options
Camel components are configured on two separate levels:
- component level
- endpoint level
23.2.1. Configuring Component Options
The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth.
Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all.
Configuring components can be done with the Component DSL, in a configuration file (application.properties|yaml), or directly with Java code.
23.2.2. Configuring Endpoint Options
Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both.
Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints.
A good practice when configuring options is to use Property Placeholders, which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse.
The following two sections lists all the options, firstly for the component followed by the endpoint.
23.3. Component Options
The File component supports 3 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
bridgeErrorHandler (consumer) | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | boolean |
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
autowiredEnabled (advanced) | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | boolean |
23.4. Endpoint Options
The File endpoint is configured using URI syntax:
file:directoryName
with the following path and query parameters:
23.4.1. Path Parameters (1 parameters)
Name | Description | Default | Type |
---|---|---|---|
directoryName (common) | Required The starting directory. | File |
23.4.2. Query Parameters (94 parameters)
Name | Description | Default | Type |
---|---|---|---|
charset (common) | This option is used to specify the encoding of the file. You can use this on the consumer, to specify the encodings of the files, which allow Camel to know the charset it should load the file content in case the file content is being accessed. Likewise when writing a file, you can use this option to specify which charset to write the file as well. Do mind that when writing the file Camel may have to read the message content into memory to be able to convert the data into the configured charset, so do not use this if you have big messages. | String | |
doneFileName (common) | Producer: If provided, then Camel will write a 2nd done file when the original file has been written. The done file will be empty. This option configures what file name to use. Either you can specify a fixed name. Or you can use dynamic placeholders. The done file will always be written in the same folder as the original file. Consumer: If provided, Camel will only consume files if a done file exists. This option configures what file name to use. Either you can specify a fixed name. Or you can use dynamic placeholders. The done file is always expected in the same folder as the original file. Only ${file.name} and ${file.name.next} are supported as dynamic placeholders. | String | |
fileName (common) | Use Expression such as File Language to dynamically set the filename. For consumers, it’s used as a filename filter. For producers, it’s used to evaluate the filename to write. If an expression is set, it takes precedence over the CamelFileName header. (Note: The header itself can also be an Expression). The expression options support both String and Expression types. If the expression is a String type, it is always evaluated using the File Language. If the expression is an Expression type, the specified Expression type is used - this allows you, for instance, to use OGNL expressions. For the consumer, you can use it to filter filenames, so you can for instance consume today’s file using the File Language syntax: mydata-${date:now:yyyyMMdd}.txt. The producers support the CamelOverruleFileName header which takes precedence over any existing CamelFileName header; the CamelOverruleFileName is a header that is used only once, and makes it easier as this avoids having to temporarily store the CamelFileName and restore it afterwards. | String | |
bridgeErrorHandler (consumer) | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | boolean |
delete (consumer) | If true, the file will be deleted after it is processed successfully. | false | boolean |
moveFailed (consumer) | Sets the move failure expression based on Simple language. For example, to move files into a .error subdirectory use: .error. Note: When moving the files to the fail location Camel will handle the error and will not pick up the file again. | String | |
noop (consumer) | If true, the file is not moved or deleted in any way. This option is good for readonly data, or for ETL type requirements. If noop=true, Camel will set idempotent=true as well, to avoid consuming the same files over and over again. | false | boolean |
preMove (consumer) | Expression (such as File Language) used to dynamically set the filename when moving it before processing. For example to move in-progress files into the order directory set this value to order. | String | |
preSort (consumer) | When pre-sort is enabled then the consumer will sort the file and directory names during polling, that was retrieved from the file system. You may want to do this in case you need to operate on the files in a sorted order. The pre-sort is executed before the consumer starts to filter, and accept files to process by Camel. This option is default=false meaning disabled. | false | boolean |
recursive (consumer) | If a directory, will look for files in all the sub-directories as well. | false | boolean |
sendEmptyMessageWhenIdle (consumer) | If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead. | false | boolean |
directoryMustExist (consumer (advanced)) | Similar to the startingDirectoryMustExist option but this applies during polling (after starting the consumer). | false | boolean |
exceptionHandler (consumer (advanced)) | To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. | ExceptionHandler | |
exchangePattern (consumer (advanced)) | Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly, InOut, InOptionalOut. | ExchangePattern | |
extendedAttributes (consumer (advanced)) | To define which file attributes of interest. Like posix:permissions,posix:owner,basic:lastAccessTime, it supports basic wildcard like posix:, basic:lastAccessTime. | String | |
inProgressRepository (consumer (advanced)) | A pluggable in-progress repository org.apache.camel.spi.IdempotentRepository. The in-progress repository is used to account the current in progress files being consumed. By default a memory based repository is used. | IdempotentRepository | |
localWorkDirectory (consumer (advanced)) | When consuming, a local work directory can be used to store the remote file content directly in local files, to avoid loading the content into memory. This is beneficial, if you consume a very big remote file and thus can conserve memory. | String | |
onCompletionExceptionHandler (consumer (advanced)) | To use a custom org.apache.camel.spi.ExceptionHandler to handle any thrown exceptions that happens during the file on completion process where the consumer does either a commit or rollback. The default implementation will log any exception at WARN level and ignore. | ExceptionHandler | |
pollStrategy (consumer (advanced)) | A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel. | PollingConsumerPollStrategy | |
probeContentType (consumer (advanced)) | Whether to enable probing of the content type. If enable then the consumer uses Files#probeContentType(java.nio.file.Path) to determine the content-type of the file, and store that as a header with key Exchange#FILE_CONTENT_TYPE on the Message. | false | boolean |
processStrategy (consumer (advanced)) | A pluggable org.apache.camel.component.file.GenericFileProcessStrategy allowing you to implement your own readLock option or similar. Can also be used when special conditions must be met before a file can be consumed, such as a special ready file exists. If this option is set then the readLock option does not apply. | GenericFileProcessStrategy | |
resumeStrategy (consumer (advanced)) | Set a resume strategy for files. This makes it possible to define a strategy for resuming reading files after the last point before stopping the application. See the FileConsumerResumeStrategy for implementation details. | FileConsumerResumeStrategy | |
startingDirectoryMustExist (consumer (advanced)) | Whether the starting directory must exist. Mind that the autoCreate option is default enabled, which means the starting directory is normally auto created if it doesn’t exist. You can disable autoCreate and enable this to ensure the starting directory must exist. Will throw an exception if the directory doesn’t exist. | false | boolean |
startingDirectoryMustHaveAccess (consumer (advanced)) | Whether the starting directory has access permissions. Mind that the startingDirectoryMustExist parameter must be set to true in order to verify that the directory exists. Will throw an exception if the directory doesn’t have read and write permissions. | false | boolean |
appendChars (producer) | Used to append characters (text) after writing files. This can for example be used to add new lines or other separators when writing and appending new files or existing files. To specify new-line (slash-n or slash-r) or tab (slash-t) characters then escape with an extra slash, eg slash-slash-n. | String | |
fileExist (producer) | What to do if a file already exists with the same name. Override, which is the default, replaces the existing file. - Append - adds content to the existing file. - Fail - throws a GenericFileOperationException, indicating that there is already an existing file. - Ignore - silently ignores the problem and does not override the existing file, but assumes everything is okay. - Move - option requires to use the moveExisting option to be configured as well. The option eagerDeleteTargetFile can be used to control what to do if an moving the file, and there exists already an existing file, otherwise causing the move operation to fail. The Move option will move any existing files, before writing the target file. - TryRename is only applicable if tempFileName option is in use. This allows to try renaming the file from the temporary name to the actual name, without doing any exists check. This check may be faster on some file systems and especially FTP servers. Enum values:
| Override | GenericFileExist |
flatten (producer) | Flatten is used to flatten the file name path to strip any leading paths, so it’s just the file name. This allows you to consume recursively into sub-directories, but when you eg write the files to another directory they will be written in a single directory. Setting this to true on the producer enforces that any file name in CamelFileName header will be stripped for any leading paths. | false | boolean |
jailStartingDirectory (producer) | Used for jailing (restricting) writing files to the starting directory (and sub) only. This is enabled by default to not allow Camel to write files to outside directories (to be more secured out of the box). You can turn this off to allow writing files to directories outside the starting directory, such as parent or root folders. | true | boolean |
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
moveExisting (producer) | Expression (such as File Language) used to compute file name to use when fileExist=Move is configured. To move files into a backup subdirectory just enter backup. This option only supports the following File Language tokens: file:name, file:name.ext, file:name.noext, file:onlyname, file:onlyname.noext, file:ext, and file:parent. Notice the file:parent is not supported by the FTP component, as the FTP component can only move any existing files to a relative directory based on current dir as base. | String | |
tempFileName (producer) | The same as tempPrefix option but offering a more fine grained control on the naming of the temporary filename as it uses the File Language. The location for tempFilename is relative to the final file location in the option 'fileName', not the target directory in the base uri. For example if option fileName includes a directory prefix: dir/finalFilename then tempFileName is relative to that subdirectory dir. | String | |
tempPrefix (producer) | This option is used to write the file using a temporary name and then, after the write is complete, rename it to the real name. Can be used to identify files being written and also avoid consumers (not using exclusive read locks) reading in progress files. Is often used by FTP when uploading big files. | String | |
allowNullBody (producer (advanced)) | Used to specify if a null body is allowed during file writing. If set to true then an empty file will be created, when set to false, and attempting to send a null body to the file component, a GenericFileWriteException of 'Cannot write null body to file.' will be thrown. If the fileExist option is set to 'Override', then the file will be truncated, and if set to append the file will remain unchanged. | false | boolean |
chmod (producer (advanced)) | Specify the file permissions which is sent by the producer, the chmod value must be between 000 and 777; If there is a leading digit like in 0755 we will ignore it. | String | |
chmodDirectory (producer (advanced)) | Specify the directory permissions used when the producer creates missing directories, the chmod value must be between 000 and 777; If there is a leading digit like in 0755 we will ignore it. | String | |
eagerDeleteTargetFile (producer (advanced)) | Whether or not to eagerly delete any existing target file. This option only applies when you use fileExists=Override and the tempFileName option as well. You can use this to disable (set it to false) deleting the target file before the temp file is written. For example you may write big files and want the target file to exists during the temp file is being written. This ensure the target file is only deleted until the very last moment, just before the temp file is being renamed to the target filename. This option is also used to control whether to delete any existing files when fileExist=Move is enabled, and an existing file exists. If this option copyAndDeleteOnRenameFails false, then an exception will be thrown if an existing file existed, if its true, then the existing file is deleted before the move operation. | true | boolean |
forceWrites (producer (advanced)) | Whether to force syncing writes to the file system. You can turn this off if you do not want this level of guarantee, for example if writing to logs / audit logs etc; this would yield better performance. | true | boolean |
keepLastModified (producer (advanced)) | Will keep the last modified timestamp from the source file (if any). Will use the Exchange.FILE_LAST_MODIFIED header to located the timestamp. This header can contain either a java.util.Date or long with the timestamp. If the timestamp exists and the option is enabled it will set this timestamp on the written file. Note: This option only applies to the file producer. You cannot use this option with any of the ftp producers. | false | boolean |
moveExistingFileStrategy (producer (advanced)) | Strategy (Custom Strategy) used to move file with special naming token to use when fileExist=Move is configured. By default, there is an implementation used if no custom strategy is provided. | FileMoveExistingStrategy | |
autoCreate (advanced) | Automatically create missing directories in the file’s pathname. For the file consumer, that means creating the starting directory. For the file producer, it means the directory the files should be written to. | true | boolean |
bufferSize (advanced) | Buffer size in bytes used for writing files (or in case of FTP for downloading and uploading files). | 131072 | int |
copyAndDeleteOnRenameFail (advanced) | Whether to fallback and do a copy and delete file, in case the file could not be renamed directly. This option is not available for the FTP component. | true | boolean |
renameUsingCopy (advanced) | Perform rename operations using a copy and delete strategy. This is primarily used in environments where the regular rename operation is unreliable (e.g. across different file systems or networks). This option takes precedence over the copyAndDeleteOnRenameFail parameter that will automatically fall back to the copy and delete strategy, but only after additional delays. | false | boolean |
synchronous (advanced) | Sets whether synchronous processing should be strictly used. | false | boolean |
antExclude (filter) | Ant style filter exclusion. If both antInclude and antExclude are used, antExclude takes precedence over antInclude. Multiple exclusions may be specified in comma-delimited format. | String | |
antFilterCaseSensitive (filter) | Sets case sensitive flag on ant filter. | true | boolean |
antInclude (filter) | Ant style filter inclusion. Multiple inclusions may be specified in comma-delimited format. | String | |
eagerMaxMessagesPerPoll (filter) | Allows for controlling whether the limit from maxMessagesPerPoll is eager or not. If eager then the limit is during the scanning of files. Where as false would scan all files, and then perform sorting. Setting this option to false allows for sorting all files first, and then limit the poll. Mind that this requires a higher memory usage as all file details are in memory to perform the sorting. | true | boolean |
exclude (filter) | Is used to exclude files, if filename matches the regex pattern (matching is case in-sensitive). Notice if you use symbols such as plus sign and others you would need to configure this using the RAW() syntax if configuring this as an endpoint uri. See more details at configuring endpoint uris. | String | |
excludeExt (filter) | Is used to exclude files matching file extension name (case insensitive). For example to exclude bak files, then use excludeExt=bak. Multiple extensions can be separated by comma, for example to exclude bak and dat files, use excludeExt=bak,dat. Note that the file extension includes all parts, for example having a file named mydata.tar.gz will have extension as tar.gz. For more flexibility then use the include/exclude options. | String | |
filter (filter) | Pluggable filter as a org.apache.camel.component.file.GenericFileFilter class. Will skip files if filter returns false in its accept() method. | GenericFileFilter | |
filterDirectory (filter) | Filters the directory based on Simple language. For example to filter on current date, you can use a simple date pattern such as ${date:now:yyyyMMdd}. | String | |
filterFile (filter) | Filters the file based on Simple language. For example to filter on file size, you can use ${file:size} > 5000. | String | |
idempotent (filter) | Option to use the Idempotent Consumer EIP pattern to let Camel skip already processed files. Will by default use a memory based LRUCache that holds 1000 entries. If noop=true then idempotent will be enabled as well to avoid consuming the same files over and over again. | false | Boolean |
idempotentKey (filter) | To use a custom idempotent key. By default the absolute path of the file is used. You can use the File Language, for example to use the file name and file size, you can do: idempotentKey=${file:name}-${file:size}. | String | |
idempotentRepository (filter) | A pluggable repository org.apache.camel.spi.IdempotentRepository which by default use MemoryIdempotentRepository if none is specified and idempotent is true. | IdempotentRepository | |
include (filter) | Is used to include files, if filename matches the regex pattern (matching is case in-sensitive). Notice if you use symbols such as plus sign and others you would need to configure this using the RAW() syntax if configuring this as an endpoint uri. See more details at configuring endpoint uris. | String | |
includeExt (filter) | Is used to include files matching file extension name (case insensitive). For example to include txt files, then use includeExt=txt. Multiple extensions can be separated by comma, for example to include txt and xml files, use includeExt=txt,xml. Note that the file extension includes all parts, for example having a file named mydata.tar.gz will have extension as tar.gz. For more flexibility then use the include/exclude options. | String | |
maxDepth (filter) | The maximum depth to traverse when recursively processing a directory. | 2147483647 | int |
maxMessagesPerPoll (filter) | To define a maximum messages to gather per poll. By default no maximum is set. Can be used to set a limit of e.g. 1000 to avoid when starting up the server that there are thousands of files. Set a value of 0 or negative to disabled it. Notice: If this option is in use then the File and FTP components will limit before any sorting. For example if you have 100000 files and use maxMessagesPerPoll=500, then only the first 500 files will be picked up, and then sorted. You can use the eagerMaxMessagesPerPoll option and set this to false to allow to scan all files first and then sort afterwards. | int | |
minDepth (filter) | The minimum depth to start processing when recursively processing a directory. Using minDepth=1 means the base directory. Using minDepth=2 means the first sub directory. | int | |
move (filter) | Expression (such as Simple Language) used to dynamically set the filename when moving it after processing. To move files into a .done subdirectory just enter .done. | String | |
exclusiveReadLockStrategy (lock) | Pluggable read-lock as a org.apache.camel.component.file.GenericFileExclusiveReadLockStrategy implementation. | GenericFileExclusiveReadLockStrategy | |
readLock (lock) | Used by consumer, to only poll the files if it has exclusive read-lock on the file (i.e. the file is not in-progress or being written). Camel will wait until the file lock is granted. This option provides the build in strategies: - none - No read lock is in use - markerFile - Camel creates a marker file (fileName.camelLock) and then holds a lock on it. This option is not available for the FTP component - changed - Changed is using file length/modification timestamp to detect whether the file is currently being copied or not. Will at least use 1 sec to determine this, so this option cannot consume files as fast as the others, but can be more reliable as the JDK IO API cannot always determine whether a file is currently being used by another process. The option readLockCheckInterval can be used to set the check frequency. - fileLock - is for using java.nio.channels.FileLock. This option is not avail for Windows OS and the FTP component. This approach should be avoided when accessing a remote file system via a mount/share unless that file system supports distributed file locks. - rename - rename is for using a try to rename the file as a test if we can get exclusive read-lock. - idempotent - (only for file component) idempotent is for using a idempotentRepository as the read-lock. This allows to use read locks that supports clustering if the idempotent repository implementation supports that. - idempotent-changed - (only for file component) idempotent-changed is for using a idempotentRepository and changed as the combined read-lock. This allows to use read locks that supports clustering if the idempotent repository implementation supports that. - idempotent-rename - (only for file component) idempotent-rename is for using a idempotentRepository and rename as the combined read-lock. This allows to use read locks that supports clustering if the idempotent repository implementation supports that.Notice: The various read locks is not all suited to work in clustered mode, where concurrent consumers on different nodes is competing for the same files on a shared file system. The markerFile using a close to atomic operation to create the empty marker file, but its not guaranteed to work in a cluster. The fileLock may work better but then the file system need to support distributed file locks, and so on. Using the idempotent read lock can support clustering if the idempotent repository supports clustering, such as Hazelcast Component or Infinispan. Enum values:
| none | String |
readLockCheckInterval (lock) | Interval in millis for the read-lock, if supported by the read lock. This interval is used for sleeping between attempts to acquire the read lock. For example when using the changed read lock, you can set a higher interval period to cater for slow writes. The default of 1 sec. may be too fast if the producer is very slow writing the file. Notice: For FTP the default readLockCheckInterval is 5000. The readLockTimeout value must be higher than readLockCheckInterval, but a rule of thumb is to have a timeout that is at least 2 or more times higher than the readLockCheckInterval. This is needed to ensure that amble time is allowed for the read lock process to try to grab the lock before the timeout was hit. | 1000 | long |
readLockDeleteOrphanLockFiles (lock) | Whether or not read lock with marker files should upon startup delete any orphan read lock files, which may have been left on the file system, if Camel was not properly shutdown (such as a JVM crash). If turning this option to false then any orphaned lock file will cause Camel to not attempt to pickup that file, this could also be due another node is concurrently reading files from the same shared directory. | true | boolean |
readLockIdempotentReleaseAsync (lock) | Whether the delayed release task should be synchronous or asynchronous. See more details at the readLockIdempotentReleaseDelay option. | false | boolean |
readLockIdempotentReleaseAsyncPoolSize (lock) | The number of threads in the scheduled thread pool when using asynchronous release tasks. Using a default of 1 core threads should be sufficient in almost all use-cases, only set this to a higher value if either updating the idempotent repository is slow, or there are a lot of files to process. This option is not in-use if you use a shared thread pool by configuring the readLockIdempotentReleaseExecutorService option. See more details at the readLockIdempotentReleaseDelay option. | int | |
readLockIdempotentReleaseDelay (lock) | Whether to delay the release task for a period of millis. This can be used to delay the release tasks to expand the window when a file is regarded as read-locked, in an active/active cluster scenario with a shared idempotent repository, to ensure other nodes cannot potentially scan and acquire the same file, due to race-conditions. By expanding the time-window of the release tasks helps prevents these situations. Note delaying is only needed if you have configured readLockRemoveOnCommit to true. | int | |
readLockIdempotentReleaseExecutorService (lock) | To use a custom and shared thread pool for asynchronous release tasks. See more details at the readLockIdempotentReleaseDelay option. | ScheduledExecutorService | |
readLockLoggingLevel (lock) | Logging level used when a read lock could not be acquired. By default a DEBUG is logged. You can change this level, for example to OFF to not have any logging. This option is only applicable for readLock of types: changed, fileLock, idempotent, idempotent-changed, idempotent-rename, rename. Enum values: TRACE, DEBUG, INFO, WARN, ERROR, OFF. | DEBUG | LoggingLevel |
readLockMarkerFile (lock) | Whether to use marker file with the changed, rename, or exclusive read lock types. By default a marker file is used as well to guard against other processes picking up the same files. This behavior can be turned off by setting this option to false. For example if you do not want to write marker files to the file systems by the Camel application. | true | boolean |
readLockMinAge (lock) | This option is applied only for readLock=changed. It allows to specify a minimum age the file must be before attempting to acquire the read lock. For example use readLockMinAge=300s to require the file is at last 5 minutes old. This can speedup the changed read lock as it will only attempt to acquire files which are at least that given age. | 0 | long |
readLockMinLength (lock) | This option is applied only for readLock=changed. It allows you to configure a minimum file length. By default Camel expects the file to contain data, and thus the default value is 1. You can set this option to zero, to allow consuming zero-length files. | 1 | long |
readLockRemoveOnCommit (lock) | This option is applied only for readLock=idempotent. It allows to specify whether to remove the file name entry from the idempotent repository when processing the file is succeeded and a commit happens. By default the file is not removed which ensures that any race-condition do not occur so another active node may attempt to grab the file. Instead the idempotent repository may support eviction strategies that you can configure to evict the file name entry after X minutes - this ensures no problems with race conditions. See more details at the readLockIdempotentReleaseDelay option. | false | boolean |
readLockRemoveOnRollback (lock) | This option is applied only for readLock=idempotent. It allows to specify whether to remove the file name entry from the idempotent repository when processing the file failed and a rollback happens. If this option is false, then the file name entry is confirmed (as if the file did a commit). | true | boolean |
readLockTimeout (lock) | Optional timeout in millis for the read-lock, if supported by the read-lock. If the read-lock could not be granted and the timeout triggered, then Camel will skip the file. At next poll Camel, will try the file again, and this time maybe the read-lock could be granted. Use a value of 0 or lower to indicate forever. Currently fileLock, changed and rename support the timeout. Notice: For FTP the default readLockTimeout value is 20000 instead of 10000. The readLockTimeout value must be higher than readLockCheckInterval, but a rule of thumb is to have a timeout that is at least 2 or more times higher than the readLockCheckInterval. This is needed to ensure that amble time is allowed for the read lock process to try to grab the lock before the timeout was hit. | 10000 | long |
backoffErrorThreshold (scheduler) | The number of subsequent error polls (failed due some error) that should happen before the backoffMultipler should kick-in. | int | |
backoffIdleThreshold (scheduler) | The number of subsequent idle polls that should happen before the backoffMultipler should kick-in. | int | |
backoffMultiplier (scheduler) | To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the next actual attempt is happening again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured. | int | |
delay (scheduler) | Milliseconds before the next poll. | 500 | long |
greedy (scheduler) | If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the previous run polled 1 or more messages. | false | boolean |
initialDelay (scheduler) | Milliseconds before the first poll starts. | 1000 | long |
repeatCount (scheduler) | Specifies a maximum limit of number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever. | 0 | long |
runLoggingLevel (scheduler) | The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that. Enum values: TRACE, DEBUG, INFO, WARN, ERROR, OFF. | TRACE | LoggingLevel |
scheduledExecutorService (scheduler) | Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool. | ScheduledExecutorService | |
scheduler (scheduler) | To use a cron scheduler from either camel-spring or camel-quartz component. Use value spring or quartz for built in scheduler. | none | Object |
schedulerProperties (scheduler) | To configure additional properties when using a custom scheduler or any of the Quartz, Spring based scheduler. | Map | |
startScheduler (scheduler) | Whether the scheduler should be auto started. | true | boolean |
timeUnit (scheduler) | Time unit for initialDelay and delay options. Enum values: NANOSECONDS, MICROSECONDS, MILLISECONDS, SECONDS, MINUTES, HOURS, DAYS. | MILLISECONDS | TimeUnit |
useFixedDelay (scheduler) | Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details. | true | boolean |
shuffle (sort) | To shuffle the list of files (sort in random order). | false | boolean |
sortBy (sort) | Built-in sort by using the File Language. Supports nested sorts, so you can have a sort by file name and as a 2nd group sort by modified date. | String | |
sorter (sort) | Pluggable sorter as a java.util.Comparator class. | Comparator |
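To make the options above concrete, here is a consumer sketch (directory names, the include pattern and the bean name are illustrative) that combines filtering, recursive scanning, a read lock and move/moveFailed handling:
from("file://orders/inbox?recursive=true&include=.*xml&readLock=changed&move=.done&moveFailed=.error").to("bean:handleOrder");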
Default behavior for file producer
By default it will override any existing file, if one exists with the same name.
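If overriding is not desired, the fileExist option listed above can change this behavior. For example, a producer sketch (directory and file names are illustrative) that appends to an existing file instead of replacing it:
from("direct:report").to("file:target/reports?fileName=report.txt&fileExist=Append");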
23.5. Move and Delete operations
Any move or delete operation is executed after (post command) the routing has completed; so during processing of the Exchange the file is still located in the inbox folder.
Let’s illustrate this with an example:
from("file://inbox?move=.done").to("bean:handleOrder");
When a file is dropped in the inbox
folder, the file consumer notices this and creates a new FileExchange
that is routed to the handleOrder
bean. The bean then processes the File
object. At this point in time the file is still located in the inbox
folder. After the bean completes, and thus the route is completed, the file consumer will perform the move operation and move the file to the .done
sub-folder.
The move and the preMove options are considered as a directory name, which can be either relative or absolute. If relative, the directory is created as a sub-folder from within the folder where the file was consumed. However, if you use an expression such as File Language or Simple, then the result of the expression evaluation is the file name to be used. For example, if you set:
move=../backup/copy-of-${file:name}
then it is the File Language expression that returns the file name to be used.
By default, Camel will move consumed files to the .camel
sub-folder relative to the directory where the file was consumed.
If you want to delete the file after processing, the route should be:
from("file://inbox?delete=true").to("bean:handleOrder");
We have introduced a pre move operation to move files before they are processed. This allows you to mark which files have been scanned as they are moved to this sub folder before being processed.
from("file://inbox?preMove=inprogress").to("bean:handleOrder");
You can combine the pre move and the regular move:
from("file://inbox?preMove=inprogress&move=.done").to("bean:handleOrder");
So in this situation, the file is in the inprogress
folder when being processed and after it’s processed, it’s moved to the .done
folder.
23.6. Fine grained control over Move and PreMove option
The move and preMove options are Expression-based, so we have the full power of the File Language to do advanced configuration of the directory and name pattern.
Camel will, in fact, internally convert the directory name you enter into a File Language expression. So when we enter move=.done, Camel will convert this into: ${file:parent}/.done/${file:onlyname}. This is only done if Camel detects that you have not provided a ${ } in the option value yourself. So when you enter a ${ }, Camel will not convert it and thus you have the full power.
So if we want to move the file into a backup folder with today’s date as the pattern, we can do:
move=backup/${date:now:yyyyMMdd}/${file:name}
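Put together in a route (the inbox directory and bean name are illustrative), this could look like:
from("file://inbox?move=backup/${date:now:yyyyMMdd}/${file:name}").to("bean:handleOrder");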
23.7. About moveFailed
The moveFailed option allows you to move files that could not be processed successfully to another location such as an error folder of your choice. For example to move the files into an error folder with a timestamp you can use moveFailed=/error/${file:name.noext}-${date:now:yyyyMMddHHmmssSSS}.${file:ext}.
See more examples at File Language.
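For instance, a consumer sketch (the .done and .error sub-folder names are just examples) where successfully processed files go to .done and failed files are moved aside:
from("file://inbox?move=.done&moveFailed=.error").to("bean:handleOrder");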
23.8. Message Headers
The following headers are supported by this component:
23.8.1. File producer only
Header | Description |
---|---|
CamelFileName | Specifies the name of the file to write (relative to the endpoint directory). This name can be a String, a String with a File Language or Simple expression, or an Expression object. If it is not set, Camel auto-generates a filename based on the message unique ID. |
CamelFileNameProduced | The actual absolute filepath (path + name) for the output file that was written. This header is set by Camel and its purpose is providing end-users with the name of the file that was written. |
CamelOverruleFileName | Is used for overruling the CamelFileName header and using this value instead (but only once); it takes precedence over any existing CamelFileName header. |
23.8.2. File consumer only
Header | Description |
---|---|
CamelFileName | Name of the consumed file as a relative file path with offset from the starting directory configured on the endpoint. |
CamelFileNameOnly | Only the file name (the name with no leading paths). |
CamelFileAbsolute | A boolean option specifying whether the consumed file denotes an absolute path or not. Should normally be false for relative paths. |
CamelFileAbsolutePath | The absolute path to the file. For relative files this path holds the relative path instead. |
CamelFilePath | The file path. For relative files this is the starting directory + the relative filename. For absolute files this is the absolute path. |
CamelFileRelativePath | The relative path. |
CamelFileParent | The parent path. |
CamelFileLength | A long value containing the file size. |
CamelFileLastModified | A long value containing the last modified timestamp of the file. |
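As a sketch of how these headers can be read inside a route (the processor below is purely illustrative; Exchange.FILE_NAME is the constant for the CamelFileName header):
from("file:inbox")
    .process(exchange -> {
        // Headers populated by the file consumer, see the table above
        String name = exchange.getIn().getHeader(Exchange.FILE_NAME, String.class);
        Long size = exchange.getIn().getHeader("CamelFileLength", Long.class);
        exchange.getIn().setHeader("description", name + " (" + size + " bytes)");
    })
    .to("bean:handleOrder");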
23.9. Batch Consumer
This component implements the Batch Consumer.
23.10. Exchange Properties, file consumer only
As the file consumer implements the BatchConsumer
it supports batching the files it polls. By batching we mean that Camel will add the following additional properties to the Exchange, so you know the number of files polled, the current index, and whether the batch is already completed.
Property | Description |
---|---|
CamelBatchSize | The total number of files that were polled in this batch. |
CamelBatchIndex | The current index of the batch. Starts from 0. |
CamelBatchComplete | A boolean value indicating whether this is the last Exchange in the batch. It is only true for the last entry. |
This allows you, for instance, to know how many files exist in this batch and to let the Aggregator EIP aggregate this number of files.
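As a sketch (assuming the standard Exchange.BATCH_SIZE, Exchange.BATCH_INDEX and Exchange.BATCH_COMPLETE constants, which correspond to the properties above):
from("file:inbox")
    .process(exchange -> {
        // Exchange properties populated by the batch-aware file consumer
        int size = exchange.getProperty(Exchange.BATCH_SIZE, int.class);
        int index = exchange.getProperty(Exchange.BATCH_INDEX, int.class);
        boolean last = exchange.getProperty(Exchange.BATCH_COMPLETE, boolean.class);
        exchange.getIn().setHeader("batchInfo", (index + 1) + "/" + size + (last ? " (last)" : ""));
    })
    .to("bean:handleOrder");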
23.11. Using charset
The charset option allows for configuring an encoding of the files on both the consumer and producer endpoints. For example if you read utf-8 files, and want to convert the files to iso-8859-1, you can do:
from("file:inbox?charset=utf-8") .to("file:outbox?charset=iso-8859-1")
You can also use convertBodyTo in the route. In the example below the input files are still in utf-8 format, but we want to convert the file content to a byte array in iso-8859-1 format, then let a bean process the data, and finally write the content to the outbox folder using the current charset.
from("file:inbox?charset=utf-8") .convertBodyTo(byte[].class, "iso-8859-1") .to("bean:myBean") .to("file:outbox");
If you omit the charset on the consumer endpoint, then Camel does not know the charset of the file, and would by default use "UTF-8". However you can configure a JVM system property to override and use a different default encoding with the key org.apache.camel.default.charset.
In the example below this could be a problem if the files are not in UTF-8 encoding, which would be the default encoding used for reading the files.
In this example, when writing the files, the content has already been converted to a byte array, and thus it is written directly as is (without any further encoding).
from("file:inbox") .convertBodyTo(byte[].class, "iso-8859-1") .to("bean:myBean") .to("file:outbox");
You can also override and control the encoding dynamically when writing files, by setting a property on the exchange with the key Exchange.CHARSET_NAME. For example in the route below we set the property with a value from a message header.
from("file:inbox") .convertBodyTo(byte[].class, "iso-8859-1") .to("bean:myBean") .setProperty(Exchange.CHARSET_NAME, header("someCharsetHeader")) .to("file:outbox");
We suggest keeping things simple: if you pick up files with the same encoding and want to write the files in a specific encoding, then favor using the charset
option on the endpoints.
Notice that if you have explicitly configured a charset
option on the endpoint, then that configuration is used, regardless of the Exchange.CHARSET_NAME
property.
If you have some issues then you can enable DEBUG logging on org.apache.camel.component.file
, and Camel logs when it reads/writes a file using a specific charset.
For example the route below will log the following:
from("file:inbox?charset=utf-8") .to("file:outbox?charset=iso-8859-1")
And the logs:
DEBUG GenericFileConverter - Read file /Users/davsclaus/workspace/camel/camel-core/target/charset/input/input.txt with charset utf-8 DEBUG FileOperations - Using Reader to write file: target/charset/output.txt with charset: iso-8859-1
23.12. Common gotchas with folder and filenames
When Camel is producing files (writing files) there are a few gotchas affecting how to set a filename of your choice. By default, Camel will use the message ID as the filename, and since the message ID is normally a unique generated ID, you will end up with filenames such as: ID-MACHINENAME-2443-1211718892437-1-0
. If such a filename is not desired, then you must provide a filename in the CamelFileName
message header. The constant, Exchange.FILE_NAME
, can also be used.
The sample code below produces files using the message ID as the filename:
from("direct:report").to("file:target/reports");
To use report.txt
as the filename you have to do:
from("direct:report").setHeader(Exchange.FILE_NAME, constant("report.txt")).to( "file:target/reports");
The same as above, but with CamelFileName:
from("direct:report").setHeader("CamelFileName", constant("report.txt")).to( "file:target/reports");
Alternatively, you can use a syntax where the filename is set on the endpoint with the fileName URI option:
from("direct:report").to("file:target/reports/?fileName=report.txt");
23.13. Filename Expression
Filename can be set either using the expression option or as a string-based File language expression in the CamelFileName
header. See the File language for syntax and samples.
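As an illustration only (this snippet is not one of the guide's own samples; the expression and directories are assumptions), a filename could be computed from a File language expression set in the header:

from("direct:report").setHeader(Exchange.FILE_NAME, simple("report-${date:now:yyyyMMdd}.txt")).to("file:target/reports");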
23.14. Consuming files from folders where others drop files directly
Beware if you consume files from a folder where other applications write files directly. Take a look at the different readLock options to see what suits your use cases. The best approach, however, is to write to another folder and, after the write completes, move the file into the drop folder. If you write files directly to the drop folder then the readLock option changed may better detect whether a file is currently being written/copied, as it uses a file-changed algorithm to see whether the file size/modification timestamp changes over a period of time. The other readLock options rely on the Java File API, which sadly is not always very good at detecting this. You may also want to look at the doneFileName option, which uses a marker file (done file) to signal when a file is done and ready to be consumed.
23.15. Using done files
See also section writing done files below.
If you want only to consume files when a done file exists, then you can use the doneFileName
option on the endpoint.
from("file:bar?doneFileName=done");
Will only consume files from the bar folder if a done file exists in the same directory as the target files. Camel will automatically delete the done file when it is done consuming the files. Camel does not automatically delete the done file if noop=true
is configured.
However it is more common to have one done file per target file. This means there is a 1:1 correlation. To do this you must use dynamic placeholders in the doneFileName
option. Currently Camel supports the following two dynamic tokens: file:name
and file:name.noext
which must be enclosed in ${ }. The consumer only supports the static part of the done file name as either prefix or suffix (not both).
from("file:bar?doneFileName=${file:name}.done");
In this example files will only be polled if a done file exists whose name is the file name followed by .done. For example:
- hello.txt - is the file to be consumed
- hello.txt.done - is the associated done file
You can also use a prefix for the done file, such as:
from("file:bar?doneFileName=ready-${file:name}");
- hello.txt - is the file to be consumed
- ready-hello.txt - is the associated done file
23.16. Writing done files
After you have written a file you may want to write an additional donefile as a kind of marker, to indicate to others that the file is finished and has been written. To do that you can use the doneFileName
option on the file producer endpoint.
.to("file:bar?doneFileName=done");
Will simply create a file named done
in the same directory as the target file.
However it is more common to have one done file per target file. This means there is a 1:1 correlation. To do this you must use dynamic placeholders in the doneFileName
option. Currently Camel supports the following two dynamic tokens: file:name
and file:name.noext
which must be enclosed in ${ }.
.to("file:bar?doneFileName=done-${file:name}");
Will for example create a file named done-foo.txt
if the target file was foo.txt
in the same directory as the target file.
.to("file:bar?doneFileName=${file:name}.done");
Will for example create a file named foo.txt.done
if the target file was foo.txt
in the same directory as the target file.
.to("file:bar?doneFileName=${file:name.noext}.done");
Will for example create a file named foo.done
if the target file was foo.txt
in the same directory as the target file.
23.17. Samples
23.17.1. Read from a directory and write to another directory
from("file://inputdir/?delete=true").to("file://outputdir")
23.17.2. Read from a directory and write to another directory using an overrule dynamic name
from("file://inputdir/?delete=true").to("file://outputdir?overruleFile=copy-of-${file:name}")
Listen on a directory and create a message for each file dropped there. Copy the contents to the outputdir
and delete the file in the inputdir
.
23.17.3. Reading recursively from a directory and writing to another
from("file://inputdir/?recursive=true&delete=true").to("file://outputdir")
Listen on a directory and create a message for each file dropped there. Copy the contents to the outputdir
and delete the file in the inputdir
. Will scan recursively into sub-directories. Will lay out the files in the same directory structure in the outputdir
as the inputdir
, including any sub-directories.
inputdir/foo.txt inputdir/sub/bar.txt
Will result in the following output layout:
outputdir/foo.txt outputdir/sub/bar.txt
23.18. Using flatten
If you want to store the files in the outputdir directory in a single flat directory, disregarding the source directory layout (i.e. to flatten out the path), you just add the flatten=true
option on the file producer side:
from("file://inputdir/?recursive=true&delete=true").to("file://outputdir?flatten=true")
Will result in the following output layout:
outputdir/foo.txt outputdir/bar.txt
23.19. Reading from a directory and the default move operation
Camel will by default move any processed file into a .camel
subdirectory in the directory the file was consumed from.
from("file://inputdir/?recursive=true&delete=true").to("file://outputdir")
Affects the layout as follows:
before
inputdir/foo.txt inputdir/sub/bar.txt
after
inputdir/.camel/foo.txt inputdir/sub/.camel/bar.txt outputdir/foo.txt outputdir/sub/bar.txt
23.20. Read from a directory and process the message in java
from("file://inputdir/").process(new Processor() { public void process(Exchange exchange) throws Exception { Object body = exchange.getIn().getBody(); // do some business logic with the input body } });
The body will be a File
object that points to the file that was just dropped into the inputdir
directory.
23.21. Writing to files
Camel is of course also able to write files, i.e. produce files. In the sample below we receive some reports on a SEDA queue, which we process before they are written to a directory.
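The original sample is not reproduced here; a minimal sketch, assuming a hypothetical SEDA queue named reports and a hypothetical processing bean processReport, could look like this:

from("seda:reports").to("bean:processReport").to("file:target/reports");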
23.21.1. Write to subdirectory using Exchange.FILE_NAME
Using a single route, it is possible to write a file to any number of subdirectories. If you have a route setup as such:
<route> <from uri="bean:myBean"/> <to uri="file:/rootDirectory"/> </route>
You can have myBean
set the header Exchange.FILE_NAME
to values such as:
Exchange.FILE_NAME = hello.txt => /rootDirectory/hello.txt Exchange.FILE_NAME = foo/bye.txt => /rootDirectory/foo/bye.txt
This allows you to have a single route to write files to multiple destinations.
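As a rough illustration of what such a bean could look like (the class, method, and header names below are hypothetical and not part of the route above):

import org.apache.camel.Exchange;

public class MyBean {
    // compute a (possibly nested) relative file name; the file producer resolves it against /rootDirectory
    public void setFileName(Exchange exchange) {
        String folder = exchange.getIn().getHeader("customer", String.class);
        exchange.getIn().setHeader(Exchange.FILE_NAME, folder + "/bye.txt");
    }
}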
23.21.2. Writing file through the temporary directory relative to the final destination
Sometimes you need to temporarily write the files to a directory relative to the destination directory. Such a situation usually happens when some external process with limited filtering capabilities is reading from the directory you are writing to. In the example below files will be written to the /var/myapp/filesInProgress
directory and, after the data transfer is done, they will be atomically moved to the /var/myapp/finalDirectory directory.
from("direct:start"). to("file:///var/myapp/finalDirectory?tempPrefix=/../filesInProgress/");
23.22. Using expression for filenames
In this sample we want to move consumed files to a backup folder using today’s date as a sub-folder name:
from("file://inbox?move=backup/${date:now:yyyyMMdd}/${file:name}").to("...");
See File language for more samples.
23.23. Avoiding reading the same file more than once (idempotent consumer)
Camel supports Idempotent Consumer directly within the component so it will skip already processed files. This feature can be enabled by setting the idempotent=true
option.
from("file://inbox?idempotent=true").to("...");
Camel uses the absolute file name as the idempotent key to detect duplicate files. You can customize this key by using an expression in the idempotentKey option. For example, to use both the name and the file size as the key:
<route> <from uri="file://inbox?idempotent=true&idempotentKey=${file:name}-${file:size}"/> <to uri="bean:processInbox"/> </route>
By default Camel uses an in-memory store for keeping track of consumed files; it uses a least-recently-used cache holding up to 1000 entries. You can plug in your own implementation of this store by using the idempotentRepository option, using the # sign in the value to indicate that it refers to a bean in the Registry with the specified id.
<!-- define our store as a plain spring bean --> <bean id="myStore" class="com.mycompany.MyIdempotentStore"/> <route> <from uri="file://inbox?idempotent=true&idempotentRepository=#myStore"/> <to uri="bean:processInbox"/> </route>
Camel will log at DEBUG
level if it skips a file because it has been consumed before:
DEBUG FileConsumer is idempotent and the file has been consumed before. Will skip this file: target\idempotent\report.txt
23.24. Using a file based idempotent repository
In this section we will use the file based idempotent repository org.apache.camel.processor.idempotent.FileIdempotentRepository
instead of the in-memory based repository that is used by default.
This repository uses a 1st level cache to avoid reading the file repository. It will only use the file repository to store the content of the 1st level cache. This way the repository can survive server restarts. It will load the content of the file into the 1st level cache upon startup. The file structure is very simple as it stores each key on a separate line in the file. By default, the file store has a size limit of 1 MB. When the file grows larger, Camel will truncate the file store, rebuilding the content by flushing the 1st level cache into a fresh empty file.
We configure our repository using Spring XML, creating our file idempotent repository, and define our file consumer to use it with the idempotentRepository option, using the # sign to indicate a Registry lookup:
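The Spring XML sample is not included here. As a rough Java DSL equivalent (a sketch only: the store file location and the fileStore bean name are assumptions, and the repository package may differ between Camel versions), you could bind the repository in the Registry and reference it with the # syntax:

import java.io.File;
import org.apache.camel.builder.RouteBuilder;
// in Camel 3.x the file idempotent repository referenced above lives under this package
import org.apache.camel.support.processor.idempotent.FileIdempotentRepository;

public class FileIdempotentRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // file-backed idempotent repository; its 1st level cache is flushed to this file so it survives restarts
        getContext().getRegistry().bind("fileStore",
                FileIdempotentRepository.fileIdempotentRepository(new File("target/fileidempotent/.filestore.dat")));

        from("file://inbox?idempotent=true&idempotentRepository=#fileStore")
            .to("bean:processInbox");
    }
}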
23.25. Using a JPA based idempotent repository
In this section we will use the JPA based idempotent repository instead of the in-memory based repository that is used by default.
First we need a persistence-unit in META-INF/persistence.xml
where we need to use the class org.apache.camel.processor.idempotent.jpa.MessageProcessed
as model.
<persistence-unit name="idempotentDb" transaction-type="RESOURCE_LOCAL"> <class>org.apache.camel.processor.idempotent.jpa.MessageProcessed</class> <properties> <property name="openjpa.ConnectionURL" value="jdbc:derby:target/idempotentTest;create=true"/> <property name="openjpa.ConnectionDriverName" value="org.apache.derby.jdbc.EmbeddedDriver"/> <property name="openjpa.jdbc.SynchronizeMappings" value="buildSchema"/> <property name="openjpa.Log" value="DefaultLevel=WARN, Tool=INFO"/> <property name="openjpa.Multithreaded" value="true"/> </properties> </persistence-unit>
Next, we can create our JPA idempotent repository in the spring XML file as well:
<!-- we define our jpa based idempotent repository we want to use in the file consumer --> <bean id="jpaStore" class="org.apache.camel.processor.idempotent.jpa.JpaMessageIdRepository"> <!-- Here we refer to the entityManagerFactory --> <constructor-arg index="0" ref="entityManagerFactory"/> <!-- This 2nd parameter is the name (= a category name). You can have different repositories with different names --> <constructor-arg index="1" value="FileConsumer"/> </bean>
Then we just need to refer to the jpaStore bean in the file consumer endpoint using the idempotentRepository option with the # syntax:
<route> <from uri="file://inbox?idempotent=true&idempotentRepository=#jpaStore"/> <to uri="bean:processInbox"/> </route>
23.26. Filter using org.apache.camel.component.file.GenericFileFilter
Camel supports pluggable filtering strategies. You can then configure the endpoint with such a filter to skip certain files being processed.
In the sample we have built our own filter that skips files starting with skip
in the filename:
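The filter class itself is not shown here; a minimal sketch of such a filter could look like the following (only the class name MyFileFilter is taken from the bean definition below, the rest is an assumption):

package com.mycompany;

import org.apache.camel.component.file.GenericFile;
import org.apache.camel.component.file.GenericFileFilter;

public class MyFileFilter<T> implements GenericFileFilter<T> {
    // skip any file whose name starts with "skip"
    public boolean accept(GenericFile<T> file) {
        return !file.getFileName().startsWith("skip");
    }
}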
And then we can configure our route using the filter attribute to reference our filter (using # notation) that we have defined in the spring XML file:
<!-- define our filter as a plain spring bean --> <bean id="myFilter" class="com.mycompany.MyFileFilter"/> <route> <from uri="file://inbox?filter=#myFilter"/> <to uri="bean:processInbox"/> </route>
23.27. Filtering using ANT path matcher
The ANT path matcher is based on AntPathMatcher.
The file paths are matched with the following rules:
- ? matches one character
- * matches zero or more characters
- ** matches zero or more directories in a path
The antInclude
and antExclude
options make it easy to specify ANT style include/exclude without having to define the filter. See the URI options above for more information.
The sample below demonstrates how to use it:
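The original sample is not reproduced here; a minimal sketch (the directory and bean names are placeholders) could simply use the antInclude option on the endpoint:

from("file://inbox?recursive=true&antInclude=**/*.txt").to("bean:handleTextFiles");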
23.27.1. Sorting using Comparator
Camel supports pluggable sorting strategies. This strategy is to use the built-in java.util.Comparator in Java. You can then configure the endpoint with such a comparator and have Camel sort the files before they are processed.
In the sample we have built our own comparator that just sorts by file name:
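The comparator class is not shown here; a minimal sketch (only the class name MyFileSorter is taken from the bean definition below) could be:

package com.mycompany;

import java.util.Comparator;
import org.apache.camel.component.file.GenericFile;

public class MyFileSorter<T> implements Comparator<GenericFile<T>> {
    // sort by file name, ignoring case
    public int compare(GenericFile<T> o1, GenericFile<T> o2) {
        return o1.getFileName().compareToIgnoreCase(o2.getFileName());
    }
}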
And then we can configure our route using the sorter option to reference our sorter (mySorter) that we have defined in the Spring XML file:
<!-- define our sorter as a plain spring bean --> <bean id="mySorter" class="com.mycompany.MyFileSorter"/> <route> <from uri="file://inbox?sorter=#mySorter"/> <to uri="bean:processInbox"/> </route>
URI options can reference beans using the # syntax
In the Spring DSL route above, notice that we can refer to beans in the Registry by prefixing the id with #. So writing sorter=#mySorter will instruct Camel to look in the Registry for a bean with the ID mySorter.
23.27.2. Sorting using sortBy
Camel supports pluggable sorting strategies. This strategy is to use the File language to configure the sorting. The sortBy
option is configured as follows:
sortBy=group 1;group 2;group 3;...
Where each group is separated with a semicolon. In simple situations you just use one group, so a simple example could be:
sortBy=file:name
This will sort by file name; you can reverse the order by prefixing reverse:
to the group, so the sorting is now Z..A:
sortBy=reverse:file:name
As we have the full power of File language we can use some of the other parameters, so if we want to sort by file size we do:
sortBy=file:length
You can configure it to ignore the case, using ignoreCase: for string comparison, so if you want to sort by file name but ignore the case, you do:
sortBy=ignoreCase:file:name
You can combine ignore case and reverse, however reverse must be specified first:
sortBy=reverse:ignoreCase:file:name
In the sample below we want to sort by last modified file, so we do:
sortBy=file:modified
And then we want to group by name as a 2nd option so files with the same modification time are sorted by name:
sortBy=file:modified;file:name
Now there is an issue here, can you spot it? The modified timestamp of the file is too fine-grained, as it is in milliseconds. What if we want to sort by date only and then subgroup by name?
As we have the full power of the File language, we can use its date command, which supports patterns. So this can be solved as:
sortBy=date:file:yyyyMMdd;file:name
That is pretty powerful. By the way, you can also use reverse per group, so we could reverse the file names:
sortBy=date:file:yyyyMMdd;reverse:file:name
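To tie this together, the sortBy value is simply set as a query option on the consumer endpoint; a minimal sketch (directories and bean name are placeholders):

from("file://inbox?sortBy=ignoreCase:file:name").to("bean:processInbox");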
23.28. Using GenericFileProcessStrategy
The option processStrategy
can be used to use a custom GenericFileProcessStrategy
that allows you to implement your own begin, commit and rollback logic.
For instance, let's assume a system writes a file into a folder that you should consume from, but you should not start consuming the file before an accompanying ready file has been written as well.
So by implementing our own GenericFileProcessStrategy
we can implement this as follows (see the sketch after the list below):
- In the begin() method we can test whether the special ready file exists. The begin method returns a boolean to indicate whether we can consume the file or not.
- In the abort() method special logic can be executed in case the begin operation returned false, for example to clean up resources.
- In the commit() method we can move the actual file and also delete the ready file.
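A rough sketch of such a strategy follows. The exact method signatures of GenericFileProcessStrategy are not spelled out in this guide, so treat the parameter lists below as an approximation to verify against your Camel version; the .ready file naming is also just an assumption.

import java.io.File;
import org.apache.camel.Exchange;
import org.apache.camel.component.file.GenericFile;
import org.apache.camel.component.file.GenericFileEndpoint;
import org.apache.camel.component.file.GenericFileOperations;
import org.apache.camel.component.file.GenericFileProcessStrategy;

public class ReadyFileProcessStrategy<T> implements GenericFileProcessStrategy<T> {

    public void prepareOnStartup(GenericFileOperations<T> operations, GenericFileEndpoint<T> endpoint) throws Exception {
        // nothing to prepare on startup in this sketch
    }

    public boolean begin(GenericFileOperations<T> operations, GenericFileEndpoint<T> endpoint,
                         Exchange exchange, GenericFile<T> file) throws Exception {
        // only consume the file if the accompanying ready file exists (assumed naming: <file>.ready)
        return new File(file.getAbsoluteFilePath() + ".ready").exists();
    }

    public void abort(GenericFileOperations<T> operations, GenericFileEndpoint<T> endpoint,
                      Exchange exchange, GenericFile<T> file) throws Exception {
        // begin() returned false; clean up any resources if needed
    }

    public void commit(GenericFileOperations<T> operations, GenericFileEndpoint<T> endpoint,
                       Exchange exchange, GenericFile<T> file) throws Exception {
        // the file has been processed; remove the ready file (moving the actual file would also go here)
        new File(file.getAbsoluteFilePath() + ".ready").delete();
    }

    public void rollback(GenericFileOperations<T> operations, GenericFileEndpoint<T> endpoint,
                         Exchange exchange, GenericFile<T> file) throws Exception {
        // error handling, e.g. leave both files in place so the consumer retries on the next poll
    }
}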
23.29. Using filter
The filter
option allows you to implement a custom filter in Java code by implementing the org.apache.camel.component.file.GenericFileFilter
interface. This interface has an accept
method that returns a boolean. Return true
to include the file, and false
to skip the file. There is an isDirectory method on GenericFile that indicates whether the file is a directory. This allows you to filter out unwanted directories, to avoid traversing down into them.
For example, skipping any directories whose name starts with "skip" can be implemented as follows:
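A minimal sketch of such a filter (the class name is hypothetical) could be:

import org.apache.camel.component.file.GenericFile;
import org.apache.camel.component.file.GenericFileFilter;

public class SkipDirectoriesFilter<T> implements GenericFileFilter<T> {
    public boolean accept(GenericFile<T> file) {
        if (file.isDirectory()) {
            // returning false for a directory prevents Camel from traversing into it
            return !file.getFileName().startsWith("skip");
        }
        // accept all regular files
        return true;
    }
}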
23.30. Using bridgeErrorHandler
If you want to use the Camel Error Handler to deal with any exception occurring in the file consumer, then you can enable the bridgeErrorHandler
option as shown below:
// to handle any IOException being thrown onException(IOException.class) .handled(true) .log("IOException occurred due: ${exception.message}") .transform().simple("Error ${exception.message}") .to("mock:error"); // this is the file route that pickup files, notice how we bridge the consumer to use the Camel routing error handler // the exclusiveReadLockStrategy is only configured because this is from an unit test, so we use that to simulate exceptions from("file:target/nospace?bridgeErrorHandler=true") .convertBodyTo(String.class) .to("mock:result");
So all you have to do is to enable this option, and the error handler in the route will take it from there.
When using bridgeErrorHandler
When using bridgeErrorHandler, interceptors and onCompletion do not apply. The Exchange is processed directly by the Camel Error Handler, and prior actions such as interceptors or onCompletion are not allowed to take action.
23.31. Debug logging
This component has TRACE level logging, which can be helpful if you have problems.
23.32. Spring Boot Auto-Configuration
When using file with Spring Boot make sure to use the following Maven dependency to have support for auto configuration:
<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-file-starter</artifactId> </dependency>
The component supports 11 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
camel.cluster.file.acquire-lock-delay | The time to wait before starting to try to acquire lock. | String | |
camel.cluster.file.acquire-lock-interval | The time to wait between attempts to try to acquire lock. | String | |
camel.cluster.file.attributes | Custom service attributes. | Map | |
camel.cluster.file.enabled | Sets if the file cluster service should be enabled or not, default is false. | false | Boolean |
camel.cluster.file.id | Cluster Service ID. | String | |
camel.cluster.file.order | Service lookup order/priority. | Integer | |
camel.cluster.file.root | The root path. | String | |
camel.component.file.autowired-enabled | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | Boolean |
camel.component.file.bridge-error-handler | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | Boolean |
camel.component.file.enabled | Whether to enable auto configuration of the file component. This is enabled by default. | Boolean | |
camel.component.file.lazy-start-producer | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | Boolean |
Chapter 24. FTP
Both producer and consumer are supported
This component provides access to remote file systems over the FTP and SFTP protocols.
When consuming from a remote FTP server, make sure you read the section titled Default when consuming files further below for details related to consuming files.
Absolute paths are not supported. Camel translates an absolute path to a relative one by trimming all leading slashes from directoryname. A WARN message will be printed in the logs.
Maven users will need to add the following dependency to their pom.xml
for this component:
<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-ftp</artifactId> <version>{CamelSBVersion}</version> <!-- use the same version as your Camel core version --> </dependency>
24.1. URI format
ftp://[username@]hostname[:port]/directoryname[?options] sftp://[username@]hostname[:port]/directoryname[?options] ftps://[username@]hostname[:port]/directoryname[?options]
Where directoryname represents the underlying directory. The directory name is a relative path. Absolute paths are not supported. The relative path can contain nested folders, such as /inbox/us.
The autoCreate
option is supported. When the consumer starts, before polling is scheduled, an additional FTP operation is performed to create the directory configured for the endpoint. The default value for autoCreate
is true
.
If no username is provided, then anonymous
login is attempted using no password.
If no port number is provided, Camel will provide default values according to the protocol (ftp = 21, sftp = 22, ftps = 2222).
You can append query options to the URI in the following format, ?option=value&option=value&…
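For example, a minimal consumer route could look like the following (hostname, credentials, directories, and the polling delay are placeholders):

from("ftp://someone@someftpserver.example.com/public/reports?password=secret&delay=60000").to("file:data/reports");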
This component uses two different libraries for the actual FTP work. FTP and FTPS use Apache Commons Net, while SFTP uses JCraft JSCH.
FTPS (also known as FTP Secure) is an extension to FTP that adds support for the Transport Layer Security (TLS) and the Secure Sockets Layer (SSL) cryptographic protocols.
24.2. Configuring Options
Camel components are configured on two separate levels:
- component level
- endpoint level
24.2.1. Configuring Component Options
The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth.
Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all.
Configuring components can be done with the Component DSL, in a configuration file (application.properties|yaml), or directly with Java code.
24.2.2. Configuring Endpoint Options
Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both.
Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints.
A good practice when configuring options is to use Property Placeholders, which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse.
The following two sections lists all the options, firstly for the component followed by the endpoint.
24.3. Component Options
The FTP component supports 3 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
bridgeErrorHandler (consumer) | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | boolean |
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
autowiredEnabled (advanced) | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | boolean |
24.4. Endpoint Options
The FTP endpoint is configured using URI syntax:
ftp:host:port/directoryName
with the following path and query parameters:
24.4.1. Path Parameters (3 parameters)
Name | Description | Default | Type |
---|---|---|---|
host (common) | Required Hostname of the FTP server. | String | |
port (common) | Port of the FTP server. | int | |
directoryName (common) | The starting directory. | String |
24.4.2. Query Parameters (111 parameters)
Name | Description | Default | Type |
---|---|---|---|
binary (common) | Specifies the file transfer mode, BINARY or ASCII. Default is ASCII (false). | false | boolean |
charset (common) | This option is used to specify the encoding of the file. You can use this on the consumer, to specify the encodings of the files, which allow Camel to know the charset it should load the file content in case the file content is being accessed. Likewise when writing a file, you can use this option to specify which charset to write the file as well. Do mind that when writing the file Camel may have to read the message content into memory to be able to convert the data into the configured charset, so do not use this if you have big messages. | String | |
disconnect (common) | Whether or not to disconnect from remote FTP server right after use. Disconnect will only disconnect the current connection to the FTP server. If you have a consumer which you want to stop, then you need to stop the consumer/route instead. | false | boolean |
doneFileName (common) | Producer: If provided, then Camel will write a 2nd done file when the original file has been written. The done file will be empty. This option configures what file name to use. Either you can specify a fixed name, or you can use dynamic placeholders. The done file will always be written in the same folder as the original file. Consumer: If provided, Camel will only consume files if a done file exists. This option configures what file name to use. Either you can specify a fixed name, or you can use dynamic placeholders. The done file is always expected in the same folder as the original file. Only ${file.name} and ${file.name.next} are supported as dynamic placeholders. | String | |
fileName (common) | Use Expression such as File Language to dynamically set the filename. For consumers, it’s used as a filename filter. For producers, it’s used to evaluate the filename to write. If an expression is set, it takes precedence over the CamelFileName header. (Note: The header itself can also be an Expression). The expression options support both String and Expression types. If the expression is a String type, it is always evaluated using the File Language. If the expression is an Expression type, the specified Expression type is used - this allows you, for instance, to use OGNL expressions. For the consumer, you can use it to filter filenames, so you can for instance consume today’s file using the File Language syntax: mydata-${date:now:yyyyMMdd}.txt. The producers support the CamelOverruleFileName header which takes precedence over any existing CamelFileName header; the CamelOverruleFileName is a header that is used only once, and makes it easier as this avoids having to temporarily store CamelFileName and restore it afterwards. | String | |
passiveMode (common) | Sets passive mode connections. Default is active mode connections. | false | boolean |
separator (common) | Sets the path separator to be used. UNIX = Uses unix style path separator Windows = Uses windows style path separator Auto = (is default) Use existing path separator in file name. Enum values:
UNIX, Windows, Auto | UNIX | PathSeparator |
transferLoggingIntervalSeconds (common) | Configures the interval in seconds to use when logging the progress of upload and download operations that are in-flight. This is used for logging progress when operations take a long time. | 5 | int |
transferLoggingLevel (common) | Configure the logging level to use when logging the progress of upload and download operations. Enum values:
TRACE, DEBUG, INFO, WARN, ERROR, OFF | DEBUG | LoggingLevel |
transferLoggingVerbose (common) | Configures whether to perform verbose (fine grained) logging of the progress of upload and download operations. | false | boolean |
fastExistsCheck (common (advanced)) | If set this option to be true, camel-ftp will use the list file directly to check if the file exists. Since some FTP server may not support to list the file directly, if the option is false, camel-ftp will use the old way to list the directory and check if the file exists. This option also influences readLock=changed to control whether it performs a fast check to update file information or not. This can be used to speed up the process if the FTP server has a lot of files. | false | boolean |
bridgeErrorHandler (consumer) | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | boolean |
delete (consumer) | If true, the file will be deleted after it is processed successfully. | false | boolean |
moveFailed (consumer) | Sets the move failure expression based on Simple language. For example, to move files into a .error subdirectory use: .error. Note: When moving the files to the fail location Camel will handle the error and will not pick up the file again. | String | |
noop (consumer) | If true, the file is not moved or deleted in any way. This option is good for readonly data, or for ETL type requirements. If noop=true, Camel will set idempotent=true as well, to avoid consuming the same files over and over again. | false | boolean |
preMove (consumer) | Expression (such as File Language) used to dynamically set the filename when moving it before processing. For example to move in-progress files into the order directory set this value to order. | String | |
preSort (consumer) | When pre-sort is enabled, the consumer will sort the file and directory names retrieved from the file system during polling. You may want to do this in case you need to operate on the files in a sorted order. The pre-sort is executed before the consumer starts to filter and accept files to be processed by Camel. This option is disabled by default. | false | boolean |
recursive (consumer) | If a directory, will look for files in all the sub-directories as well. | false | boolean |
resumeDownload (consumer) | Configures whether resume download is enabled. This must be supported by the FTP server (almost all FTP servers support it). In addition the options localWorkDirectory must be configured so downloaded files are stored in a local directory, and the option binary must be enabled, which is required to support resuming of downloads. | false | boolean |
sendEmptyMessageWhenIdle (consumer) | If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead. | false | boolean |
streamDownload (consumer) | Sets the download method to use when not using a local working directory. If set to true, the remote files are streamed to the route as they are read. When set to false, the remote files are loaded into memory before being sent into the route. If enabling this option then you must set stepwise=false as both cannot be enabled at the same time. | false | boolean |
download (consumer (advanced)) | Whether the FTP consumer should download the file. If this option is set to false, then the message body will be null, but the consumer will still trigger a Camel Exchange that has details about the file such as file name, file size, etc. It’s just that the file will not be downloaded. | false | boolean |
exceptionHandler (consumer (advanced)) | To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. | ExceptionHandler | |
exchangePattern (consumer (advanced)) | Sets the exchange pattern when the consumer creates an exchange. Enum values:
InOnly, InOut, InOptionalOut | | ExchangePattern |
handleDirectoryParserAbsoluteResult (consumer (advanced)) | Allows you to set how the consumer will handle subfolders and files in the path if the directory parser results in absolute paths. The reason for this is that some FTP servers may return file names with absolute paths, and if so then the FTP component needs to handle this by converting the returned path into a relative path. | false | boolean |
ignoreFileNotFoundOrPermissionError (consumer (advanced)) | Whether to ignore errors when trying to list files in a directory, or when downloading a file, that does not exist or cannot be accessed due to a permission error. By default, when a directory or file does not exist or there is insufficient permission, an exception is thrown. Setting this option to true allows to ignore that instead. | false | boolean |
inProgressRepository (consumer (advanced)) | A pluggable in-progress repository org.apache.camel.spi.IdempotentRepository. The in-progress repository is used to account the current in progress files being consumed. By default a memory based repository is used. | IdempotentRepository | |
localWorkDirectory (consumer (advanced)) | When consuming, a local work directory can be used to store the remote file content directly in local files, to avoid loading the content into memory. This is beneficial, if you consume a very big remote file and thus can conserve memory. | String | |
onCompletionExceptionHandler (consumer (advanced)) | To use a custom org.apache.camel.spi.ExceptionHandler to handle any thrown exceptions that happens during the file on completion process where the consumer does either a commit or rollback. The default implementation will log any exception at WARN level and ignore. | ExceptionHandler | |
pollStrategy (consumer (advanced)) | A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel. | PollingConsumerPollStrategy | |
processStrategy (consumer (advanced)) | A pluggable org.apache.camel.component.file.GenericFileProcessStrategy allowing you to implement your own readLock option or similar. Can also be used when special conditions must be met before a file can be consumed, such as a special ready file exists. If this option is set then the readLock option does not apply. | GenericFileProcessStrategy | |
useList (consumer (advanced)) | Whether to allow using LIST command when downloading a file. Default is true. In some use cases you may want to download a specific file and are not allowed to use the LIST command, and therefore you can set this option to false. Notice when using this option, then the specific file to download does not include meta-data information such as file size, timestamp, permissions etc, because those information is only possible to retrieve when LIST command is in use. | true | boolean |
fileExist (producer) | What to do if a file already exists with the same name. Override, which is the default, replaces the existing file. - Append - adds content to the existing file. - Fail - throws a GenericFileOperationException, indicating that there is already an existing file. - Ignore - silently ignores the problem and does not override the existing file, but assumes everything is okay. - Move - option requires to use the moveExisting option to be configured as well. The option eagerDeleteTargetFile can be used to control what to do if an moving the file, and there exists already an existing file, otherwise causing the move operation to fail. The Move option will move any existing files, before writing the target file. - TryRename is only applicable if tempFileName option is in use. This allows to try renaming the file from the temporary name to the actual name, without doing any exists check. This check may be faster on some file systems and especially FTP servers. Enum values:
Override, Append, Fail, Ignore, Move, TryRename | Override | GenericFileExist |
flatten (producer) | Flatten is used to flatten the file name path to strip any leading paths, so it’s just the file name. This allows you to consume recursively into sub-directories, but when you eg write the files to another directory they will be written in a single directory. Setting this to true on the producer enforces that any file name in CamelFileName header will be stripped for any leading paths. | false | boolean |
jailStartingDirectory (producer) | Used for jailing (restricting) writing files to the starting directory (and sub) only. This is enabled by default to not allow Camel to write files to outside directories (to be more secured out of the box). You can turn this off to allow writing files to directories outside the starting directory, such as parent or root folders. | true | boolean |
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
moveExisting (producer) | Expression (such as File Language) used to compute file name to use when fileExist=Move is configured. To move files into a backup subdirectory just enter backup. This option only supports the following File Language tokens: file:name, file:name.ext, file:name.noext, file:onlyname, file:onlyname.noext, file:ext, and file:parent. Notice the file:parent is not supported by the FTP component, as the FTP component can only move any existing files to a relative directory based on current dir as base. | String | |
tempFileName (producer) | The same as tempPrefix option but offering a more fine grained control on the naming of the temporary filename as it uses the File Language. The location for tempFilename is relative to the final file location in the option 'fileName', not the target directory in the base uri. For example if option fileName includes a directory prefix: dir/finalFilename then tempFileName is relative to that subdirectory dir. | String | |
tempPrefix (producer) | This option is used to write the file using a temporary name and then, after the write is complete, rename it to the real name. Can be used to identify files being written and also avoid consumers (not using exclusive read locks) reading in progress files. Is often used by FTP when uploading big files. | String | |
allowNullBody (producer (advanced)) | Used to specify if a null body is allowed during file writing. If set to true then an empty file will be created, when set to false, and attempting to send a null body to the file component, a GenericFileWriteException of 'Cannot write null body to file.' will be thrown. If the fileExist option is set to 'Override', then the file will be truncated, and if set to append the file will remain unchanged. | false | boolean |
chmod (producer (advanced)) | Allows you to set chmod on the stored file. For example chmod=640. | String | |
disconnectOnBatchComplete (producer (advanced)) | Whether or not to disconnect from remote FTP server right after a Batch upload is complete. disconnectOnBatchComplete will only disconnect the current connection to the FTP server. | false | boolean |
eagerDeleteTargetFile (producer (advanced)) | Whether or not to eagerly delete any existing target file. This option only applies when you use fileExists=Override and the tempFileName option as well. You can use this to disable (set it to false) deleting the target file before the temp file is written. For example you may write big files and want the target file to exists during the temp file is being written. This ensure the target file is only deleted until the very last moment, just before the temp file is being renamed to the target filename. This option is also used to control whether to delete any existing files when fileExist=Move is enabled, and an existing file exists. If this option copyAndDeleteOnRenameFails false, then an exception will be thrown if an existing file existed, if its true, then the existing file is deleted before the move operation. | true | boolean |
keepLastModified (producer (advanced)) | Will keep the last modified timestamp from the source file (if any). Will use the Exchange.FILE_LAST_MODIFIED header to located the timestamp. This header can contain either a java.util.Date or long with the timestamp. If the timestamp exists and the option is enabled it will set this timestamp on the written file. Note: This option only applies to the file producer. You cannot use this option with any of the ftp producers. | false | boolean |
moveExistingFileStrategy (producer (advanced)) | Strategy (Custom Strategy) used to move file with special naming token to use when fileExist=Move is configured. By default, there is an implementation used if no custom strategy is provided. | FileMoveExistingStrategy | |
sendNoop (producer (advanced)) | Whether to send a noop command as a pre-write check before uploading files to the FTP server. This is enabled by default as a validation of the connection is still valid, which allows to silently re-connect to be able to upload the file. However if this causes problems, you can turn this option off. | true | boolean |
activePortRange (advanced) | Set the client side port range in active mode. The syntax is: minPort-maxPort Both port numbers are inclusive, eg 10000-19999 to include all 1xxxx ports. | String | |
autoCreate (advanced) | Automatically create missing directories in the file’s pathname. For the file consumer, that means creating the starting directory. For the file producer, it means the directory the files should be written to. | true | boolean |
bufferSize (advanced) | Buffer size in bytes used for writing files (or in case of FTP for downloading and uploading files). | 131072 | int |
connectTimeout (advanced) | Sets the connect timeout for waiting for a connection to be established. Used by both FTPClient and JSCH. | 10000 | int |
ftpClient (advanced) | To use a custom instance of FTPClient. | FTPClient | |
ftpClientConfig (advanced) | To use a custom instance of FTPClientConfig to configure the FTP client the endpoint should use. | FTPClientConfig | |
ftpClientConfigParameters (advanced) | Used by FtpComponent to provide additional parameters for the FTPClientConfig. | Map | |
ftpClientParameters (advanced) | Used by FtpComponent to provide additional parameters for the FTPClient. | Map | |
maximumReconnectAttempts (advanced) | Specifies the maximum reconnect attempts Camel performs when it tries to connect to the remote FTP server. Use 0 to disable this behavior. | int | |
reconnectDelay (advanced) | Delay in millis Camel will wait before performing a reconnect attempt. | 1000 | long |
siteCommand (advanced) | Sets optional site command(s) to be executed after successful login. Multiple site commands can be separated using a new line character. | String | |
soTimeout (advanced) | Sets the so timeout. For FTP and FTPS this is the SocketOptions.SO_TIMEOUT value in millis. The recommended option is to set this to 300000 so as not to have a hanged connection. On SFTP this option is set as the timeout on the JSCH Session instance. | 300000 | int |
stepwise (advanced) | Sets whether we should stepwise change directories while traversing file structures when downloading files, or as well when uploading a file to a directory. You can disable this if you for example are in a situation where you cannot change directory on the FTP server due security reasons. Stepwise cannot be used together with streamDownload. | true | boolean |
synchronous (advanced) | Sets whether synchronous processing should be strictly used. | false | boolean |
throwExceptionOnConnectFailed (advanced) | Should an exception be thrown if connection failed (exhausted). By default an exception is not thrown and a WARN is logged. You can use this to enable exception being thrown and handle the thrown exception from the org.apache.camel.spi.PollingConsumerPollStrategy rollback method. | false | boolean |
timeout (advanced) | Sets the data timeout for waiting for a reply. Used only by FTPClient. | 30000 | int |
antExclude (filter) | Ant style filter exclusion. If both antInclude and antExclude are used, antExclude takes precedence over antInclude. Multiple exclusions may be specified in comma-delimited format. | String | |
antFilterCaseSensitive (filter) | Sets case sensitive flag on ant filter. | true | boolean |
antInclude (filter) | Ant style filter inclusion. Multiple inclusions may be specified in comma-delimited format. | String | |
eagerMaxMessagesPerPoll (filter) | Allows for controlling whether the limit from maxMessagesPerPoll is eager or not. If eager then the limit is during the scanning of files. Where as false would scan all files, and then perform sorting. Setting this option to false allows for sorting all files first, and then limit the poll. Mind that this requires a higher memory usage as all file details are in memory to perform the sorting. | true | boolean |
exclude (filter) | Is used to exclude files, if filename matches the regex pattern (matching is case in-sensitive). Notice if you use symbols such as plus sign and others you would need to configure this using the RAW() syntax if configuring this as an endpoint uri. See more details at configuring endpoint uris. | String | |
excludeExt (filter) | Is used to exclude files matching file extension name (case insensitive). For example to exclude bak files, then use excludeExt=bak. Multiple extensions can be separated by comma, for example to exclude bak and dat files, use excludeExt=bak,dat. Note that the file extension includes all parts, for example having a file named mydata.tar.gz will have extension as tar.gz. For more flexibility then use the include/exclude options. | String | |
filter (filter) | Pluggable filter as a org.apache.camel.component.file.GenericFileFilter class. Will skip files if filter returns false in its accept() method. | GenericFileFilter | |
filterDirectory (filter) | Filters the directory based on Simple language. For example to filter on current date, you can use a simple date pattern such as ${date:now:yyyyMMdd}. | String | |
filterFile (filter) | Filters the file based on Simple language. For example to filter on file size, you can use ${file:size} > 5000. | String | |
idempotent (filter) | Option to use the Idempotent Consumer EIP pattern to let Camel skip already processed files. Will by default use a memory based LRUCache that holds 1000 entries. If noop=true then idempotent will be enabled as well to avoid consuming the same files over and over again. | false | Boolean |
idempotentKey (filter) | To use a custom idempotent key. By default the absolute path of the file is used. You can use the File Language, for example to use the file name and file size, you can do: idempotentKey=${file:name}-${file:size}. | String | |
idempotentRepository (filter) | A pluggable repository org.apache.camel.spi.IdempotentRepository which by default use MemoryIdempotentRepository if none is specified and idempotent is true. | IdempotentRepository | |
include (filter) | Is used to include files, if filename matches the regex pattern (matching is case in-sensitive). Notice if you use symbols such as plus sign and others you would need to configure this using the RAW() syntax if configuring this as an endpoint uri. See more details at configuring endpoint uris. | String | |
includeExt (filter) | Is used to include files matching file extension name (case insensitive). For example to include txt files, then use includeExt=txt. Multiple extensions can be separated by comma, for example to include txt and xml files, use includeExt=txt,xml. Note that the file extension includes all parts, for example having a file named mydata.tar.gz will have extension as tar.gz. For more flexibility then use the include/exclude options. | String | |
maxDepth (filter) | The maximum depth to traverse when recursively processing a directory. | 2147483647 | int |
maxMessagesPerPoll (filter) | To define a maximum messages to gather per poll. By default no maximum is set. Can be used to set a limit of e.g. 1000 to avoid when starting up the server that there are thousands of files. Set a value of 0 or negative to disabled it. Notice: If this option is in use then the File and FTP components will limit before any sorting. For example if you have 100000 files and use maxMessagesPerPoll=500, then only the first 500 files will be picked up, and then sorted. You can use the eagerMaxMessagesPerPoll option and set this to false to allow to scan all files first and then sort afterwards. | int | |
minDepth (filter) | The minimum depth to start processing when recursively processing a directory. Using minDepth=1 means the base directory. Using minDepth=2 means the first sub directory. | int | |
move (filter) | Expression (such as Simple Language) used to dynamically set the filename when moving it after processing. To move files into a .done subdirectory just enter .done. | String | |
exclusiveReadLockStrategy (lock) | Pluggable read-lock as a org.apache.camel.component.file.GenericFileExclusiveReadLockStrategy implementation. | GenericFileExclusiveReadLockStrategy | |
readLock (lock) | Used by consumer, to only poll the files if it has exclusive read-lock on the file (i.e. the file is not in-progress or being written). Camel will wait until the file lock is granted. This option provides the build in strategies: - none - No read lock is in use - markerFile - Camel creates a marker file (fileName.camelLock) and then holds a lock on it. This option is not available for the FTP component - changed - Changed is using file length/modification timestamp to detect whether the file is currently being copied or not. Will at least use 1 sec to determine this, so this option cannot consume files as fast as the others, but can be more reliable as the JDK IO API cannot always determine whether a file is currently being used by another process. The option readLockCheckInterval can be used to set the check frequency. - fileLock - is for using java.nio.channels.FileLock. This option is not avail for Windows OS and the FTP component. This approach should be avoided when accessing a remote file system via a mount/share unless that file system supports distributed file locks. - rename - rename is for using a try to rename the file as a test if we can get exclusive read-lock. - idempotent - (only for file component) idempotent is for using a idempotentRepository as the read-lock. This allows to use read locks that supports clustering if the idempotent repository implementation supports that. - idempotent-changed - (only for file component) idempotent-changed is for using a idempotentRepository and changed as the combined read-lock. This allows to use read locks that supports clustering if the idempotent repository implementation supports that. - idempotent-rename - (only for file component) idempotent-rename is for using a idempotentRepository and rename as the combined read-lock. This allows to use read locks that supports clustering if the idempotent repository implementation supports that.Notice: The various read locks is not all suited to work in clustered mode, where concurrent consumers on different nodes is competing for the same files on a shared file system. The markerFile using a close to atomic operation to create the empty marker file, but its not guaranteed to work in a cluster. The fileLock may work better but then the file system need to support distributed file locks, and so on. Using the idempotent read lock can support clustering if the idempotent repository supports clustering, such as Hazelcast Component or Infinispan. Enum values:
none, markerFile, fileLock, rename, changed, idempotent, idempotent-changed, idempotent-rename | none | String |
readLockCheckInterval (lock) | Interval in millis for the read-lock, if supported by the read lock. This interval is used for sleeping between attempts to acquire the read lock. For example when using the changed read lock, you can set a higher interval period to cater for slow writes. The default of 1 sec. may be too fast if the producer is very slow writing the file. Notice: For FTP the default readLockCheckInterval is 5000. The readLockTimeout value must be higher than readLockCheckInterval, but a rule of thumb is to have a timeout that is at least 2 or more times higher than the readLockCheckInterval. This is needed to ensure that ample time is allowed for the read lock process to try to grab the lock before the timeout is hit. | 1000 | long |
readLockDeleteOrphanLockFiles (lock) | Whether or not read lock with marker files should upon startup delete any orphan read lock files, which may have been left on the file system if Camel was not properly shut down (such as a JVM crash). If this option is set to false, then any orphaned lock file will cause Camel to not attempt to pick up that file; this could also be because another node is concurrently reading files from the same shared directory. | true | boolean |
readLockLoggingLevel (lock) | Logging level used when a read lock could not be acquired. By default a DEBUG is logged. You can change this level, for example to OFF to not have any logging. This option is only applicable for readLock of types: changed, fileLock, idempotent, idempotent-changed, idempotent-rename, rename. Enum values:
TRACE, DEBUG, INFO, WARN, ERROR, OFF | DEBUG | LoggingLevel |
readLockMarkerFile (lock) | Whether to use marker file with the changed, rename, or exclusive read lock types. By default a marker file is used as well to guard against other processes picking up the same files. This behavior can be turned off by setting this option to false. For example if you do not want to write marker files to the file systems by the Camel application. | true | boolean |
readLockMinAge (lock) | This option is applied only for readLock=changed. It allows to specify a minimum age the file must be before attempting to acquire the read lock. For example use readLockMinAge=300s to require the file to be at least 5 minutes old. This can speed up the changed read lock as it will only attempt to acquire files which are at least that given age. | 0 | long |
readLockMinLength (lock) | This option is applied only for readLock=changed. It allows you to configure a minimum file length. By default Camel expects the file to contain data, and thus the default value is 1. You can set this option to zero, to allow consuming zero-length files. | 1 | long |
readLockRemoveOnCommit (lock) | This option is applied only for readLock=idempotent. It allows to specify whether to remove the file name entry from the idempotent repository when processing the file succeeds and a commit happens. By default the file is not removed, which ensures that no race condition occurs where another active node could grab the file. Instead the idempotent repository may support eviction strategies that you can configure to evict the file name entry after X minutes - this ensures no problems with race conditions. See more details at the readLockIdempotentReleaseDelay option. | false | boolean |
readLockRemoveOnRollback (lock) | This option is applied only for readLock=idempotent. It allows to specify whether to remove the file name entry from the idempotent repository when processing the file failed and a rollback happens. If this option is false, then the file name entry is confirmed (as if the file did a commit). | true | boolean |
readLockTimeout (lock) | Optional timeout in millis for the read-lock, if supported by the read-lock. If the read-lock could not be granted and the timeout triggered, then Camel will skip the file. At the next poll Camel will try the file again, and this time maybe the read-lock could be granted. Use a value of 0 or lower to indicate forever. Currently fileLock, changed and rename support the timeout. Notice: For FTP the default readLockTimeout value is 20000 instead of 10000. The readLockTimeout value must be higher than readLockCheckInterval; a rule of thumb is to have a timeout that is at least 2 or more times higher than the readLockCheckInterval. This is needed to ensure that ample time is allowed for the read lock process to try to grab the lock before the timeout is hit. | 10000 | long |
backoffErrorThreshold (scheduler) | The number of subsequent error polls (failed due to some error) that should happen before the backoffMultiplier should kick-in. | int |
backoffIdleThreshold (scheduler) | The number of subsequent idle polls that should happen before the backoffMultiplier should kick-in. | int |
backoffMultiplier (scheduler) | To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the next actual attempt happens again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured. | int |
delay (scheduler) | Milliseconds before the next poll. | 500 | long |
greedy (scheduler) | If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the previous run polled 1 or more messages. | false | boolean |
initialDelay (scheduler) | Milliseconds before the first poll starts. | 1000 | long |
repeatCount (scheduler) | Specifies a maximum limit of number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever. | 0 | long |
runLoggingLevel (scheduler) | The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that. Enum values:
TRACE, DEBUG, INFO, WARN, ERROR, OFF | TRACE | LoggingLevel |
scheduledExecutorService (scheduler) | Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool. | ScheduledExecutorService | |
scheduler (scheduler) | To use a cron scheduler from either camel-spring or camel-quartz component. Use value spring or quartz for built in scheduler. | none | Object |
schedulerProperties (scheduler) | To configure additional properties when using a custom scheduler or any of the Quartz, Spring based scheduler. | Map | |
startScheduler (scheduler) | Whether the scheduler should be auto started. | true | boolean |
timeUnit (scheduler) | Time unit for initialDelay and delay options. Enum values:
NANOSECONDS, MICROSECONDS, MILLISECONDS, SECONDS, MINUTES, HOURS, DAYS | MILLISECONDS | TimeUnit |
useFixedDelay (scheduler) | Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details. | true | boolean |
account (security) | Account to use for login. | String | |
password (security) | Password to use for login. | String | |
username (security) | Username to use for login. | String | |
shuffle (sort) | To shuffle the list of files (sort in random order). | false | boolean |
sortBy (sort) | Built-in sort by using the File Language. Supports nested sorts, so you can have a sort by file name and as a 2nd group sort by modified date. | String | |
sorter (sort) | Pluggable sorter as a java.util.Comparator class. | Comparator |
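Many of the consumer options above are combined directly in the endpoint URI. As an illustration only (host, credentials, directory, and bean name are placeholders, not values from this guide), a minimal Java DSL sketch using a read lock, a custom poll delay, and a move-on-completion directory could look like this:

// poll every 5 seconds, only pick up files that are no longer changing, and move them to .done afterwards
from("ftp://admin@myhost/orders?password=secret"
        + "&readLock=changed&readLockCheckInterval=2000&readLockTimeout=10000"
        + "&delay=5000&move=.done")
    .to("bean:orderService");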
24.5. FTPS component default trust store
When using the ftpClient. properties related to SSL with the FTPS component, the trust store accepts all certificates. If you only want to trust selective certificates, you have to configure the trust store with the ftpClient.trustStore.xxx options or by configuring a custom ftpClient.
When using sslContextParameters, the trust store is managed by the configuration of the provided SSLContextParameters instance.
You can configure additional options on the ftpClient and ftpClientConfig from the URI directly by using the ftpClient. or ftpClientConfig. prefix.
For example to set the setDataTimeout on the FTPClient to 30 seconds you can do:
from("ftp://foo@myserver?password=secret&ftpClient.dataTimeout=30000").to("bean:foo");
You can mix and match and use both prefixes, for example to configure date format or timezones.
from("ftp://foo@myserver?password=secret&ftpClient.dataTimeout=30000&ftpClientConfig.serverLanguageCode=fr").to("bean:foo");
You can have as many of these options as you like.
See the documentation of the Apache Commons Net FTPClientConfig for possible options and more details, and likewise for the Apache Commons Net FTPClient.
If you do not like having many long configuration options in the URL, you can refer to the ftpClient or ftpClientConfig to use by letting Camel look it up in the Registry.
For example:
<bean id="myConfig" class="org.apache.commons.net.ftp.FTPClientConfig"> <property name="lenientFutureDates" value="true"/> <property name="serverLanguageCode" value="fr"/> </bean>
And then let Camel look up this bean using the # notation in the URL.
from("ftp://foo@myserver?password=secret&ftpClientConfig=#myConfig").to("bean:foo");
24.6. Examples
ftp://someone@someftpserver.com/public/upload/images/holiday2008?password=secret&binary=true ftp://someoneelse@someotherftpserver.co.uk:12049/reports/2008?password=secret&binary=false ftp://publicftpserver.com/download
24.7. Concurrency
FTP Consumer does not support concurrency
The FTP consumer (with the same endpoint) does not support concurrency (the backing FTP client is not thread safe).
You can use multiple FTP consumers to poll from different endpoints; it is only a single endpoint that does not support concurrent consumers. The FTP producer does not have this issue; it supports concurrency.
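For example, a minimal sketch of two independent FTP consumers, each polling its own endpoint (hosts, directories, and bean names are placeholders):

// each route has its own endpoint, so each gets its own single-threaded FTP consumer
from("ftp://admin@myhost/inbox-a?password=secret").to("bean:processA");
from("ftp://admin@myhost/inbox-b?password=secret").to("bean:processB");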
24.8. More information
This component is an extension of the File component. So there are more samples and details on the File component page.
24.9. Default when consuming files
The FTP consumer will by default leave the consumed files untouched on the remote FTP server. You have to configure it explicitly if you want it to delete the files or move them to another location. For example you can use delete=true to delete the files, or use move=.done to move the files into a hidden .done sub directory.
The regular File consumer is different as it will by default move files to a .camel sub directory. The reason Camel does not do this by default for the FTP consumer is that it may lack the permissions needed to move or delete files on the remote server.
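A minimal sketch of both approaches (server, credentials, directories, and bean names are placeholders):

// delete the file on the remote FTP server after it has been processed
from("ftp://admin@myhost/orders?password=secret&delete=true").to("bean:orderService");
// or keep the file on the server, but move it into a .done sub directory after processing
from("ftp://admin@myhost/invoices?password=secret&move=.done").to("bean:invoiceService");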
24.9.1. Limitations
The option readLock can be used to force Camel not to consume files that are currently in the process of being written. However, this option is turned off by default, as it requires that the user has write access. See the options table at File2 for more details about read locks.
There are other solutions to avoid consuming files that are currently being written over FTP; for instance, you can write to a temporary destination and move the file after it has been written.
When moving files using the move or preMove option, the files are restricted to the FTP_ROOT folder. That prevents you from moving files outside the FTP area. If you want to move files to another area you can use soft links and move files into a soft linked folder.
24.10. Message Headers
The following message headers can be used to affect the behavior of the component:
Header | Description |
---|---|
CamelFileName | Specifies the output file name (relative to the endpoint directory) to be used for the output message when sending to the endpoint. If this is not present and no expression either, then a generated message ID is used as the filename instead. |
CamelFileNameProduced | The actual filepath (path + name) for the output file that was written. This header is set by Camel and its purpose is providing end-users the name of the file that was written. |
CamelFileNameConsumed | The file name of the file consumed |
CamelFileHost | The remote hostname. |
CamelFileLocalWorkPath | Path to the local work file, if local work directory is used. |
In addition the FTP/FTPS consumer and producer will enrich the Camel Message with the following headers:
Header | Description |
---|---|
CamelFtpReplyCode | The FTP client reply code (the type is an integer) |
CamelFtpReplyString | The FTP client reply string |
24.10.1. Exchange Properties
Camel sets the following exchange properties
Property | Description |
---|---|
CamelBatchIndex | Current index out of total number of files being consumed in this batch. |
CamelBatchSize | Total number of files being consumed in this batch. |
CamelBatchComplete | True if there are no more files in this batch. |
24.11. About timeouts
The two sets of libraries (see top) have different APIs for setting timeouts. You can use the connectTimeout option for both of them to set a timeout in millis to establish a network connection. An individual soTimeout can also be set on FTP/FTPS, which corresponds to using ftpClient.soTimeout. Notice SFTP will automatically use connectTimeout as its soTimeout. The timeout option only applies for FTP/FTPS as the data timeout, which corresponds to the ftpClient.dataTimeout value. All timeout values are in millis.
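A hedged sketch combining the three timeout options on an FTP endpoint (host, credentials, and values are placeholders):

// connectTimeout: establishing the connection, soTimeout: socket reads, timeout: FTP data timeout
from("ftp://admin@myhost/inbox?password=secret&connectTimeout=10000&soTimeout=30000&timeout=30000")
    .to("bean:handleFile");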
24.12. Using Local Work Directory
Camel supports consuming from remote FTP servers and downloading the files directly into a local work directory. This avoids reading the entire remote file content into memory as it is streamed directly into the local file using FileOutputStream.
Camel will store to a local file with the same name as the remote file, though with .inprogress as extension while the file is being downloaded. Afterwards, the file is renamed to remove the .inprogress suffix. And finally, when the Exchange is complete the local file is deleted.
So if you want to download files from a remote FTP server and store them as files then you need to route to a file endpoint such as:
from("ftp://someone@someserver.com?password=secret&localWorkDirectory=/tmp").to("file://inbox");
The route above is ultra efficient as it avoids reading the entire file content into memory. It will download the remote file directly to a local file stream. The java.io.File handle is then used as the Exchange body. The file producer leverages this fact and can work directly on the work file java.io.File handle and perform a java.io.File.rename to the target filename. As Camel knows it’s a local work file, it can optimize and use a rename instead of a file copy, as the work file is meant to be deleted anyway.
24.13. Stepwise changing directories
Camel FTP can operate in two modes in terms of traversing directories when consuming files (eg downloading) or producing files (eg uploading):
- stepwise
- not stepwise
You may want to pick either one depending on your situation and security issues. Some Camel end users can only download files if they use stepwise, while others can only download if they do not.
You can use the stepwise option to control the behavior.
Note that stepwise changing of directory will in most cases only work when the user is confined to its home directory and when the home directory is reported as "/".
The difference between the two of them is best illustrated with an example. Suppose we have the following directory structure on the remote FTP server we need to traverse and download files:
/ /one /one/two /one/two/sub-a /one/two/sub-b
And that we have a file in each of sub-a (a.txt) and sub-b (b.txt) folder.
24.14. Using stepwise=true (default mode)
TYPE A 200 Type set to A PWD 257 "/" is current directory. CWD one 250 CWD successful. "/one" is current directory. CWD two 250 CWD successful. "/one/two" is current directory. SYST 215 UNIX emulated by FileZilla PORT 127,0,0,1,17,94 200 Port command successful LIST 150 Opening data channel for directory list. 226 Transfer OK CWD sub-a 250 CWD successful. "/one/two/sub-a" is current directory. PORT 127,0,0,1,17,95 200 Port command successful LIST 150 Opening data channel for directory list. 226 Transfer OK CDUP 200 CDUP successful. "/one/two" is current directory. CWD sub-b 250 CWD successful. "/one/two/sub-b" is current directory. PORT 127,0,0,1,17,96 200 Port command successful LIST 150 Opening data channel for directory list. 226 Transfer OK CDUP 200 CDUP successful. "/one/two" is current directory. CWD / 250 CWD successful. "/" is current directory. PWD 257 "/" is current directory. CWD one 250 CWD successful. "/one" is current directory. CWD two 250 CWD successful. "/one/two" is current directory. PORT 127,0,0,1,17,97 200 Port command successful RETR foo.txt 150 Opening data channel for file transfer. 226 Transfer OK CWD / 250 CWD successful. "/" is current directory. PWD 257 "/" is current directory. CWD one 250 CWD successful. "/one" is current directory. CWD two 250 CWD successful. "/one/two" is current directory. CWD sub-a 250 CWD successful. "/one/two/sub-a" is current directory. PORT 127,0,0,1,17,98 200 Port command successful RETR a.txt 150 Opening data channel for file transfer. 226 Transfer OK CWD / 250 CWD successful. "/" is current directory. PWD 257 "/" is current directory. CWD one 250 CWD successful. "/one" is current directory. CWD two 250 CWD successful. "/one/two" is current directory. CWD sub-b 250 CWD successful. "/one/two/sub-b" is current directory. PORT 127,0,0,1,17,99 200 Port command successful RETR b.txt 150 Opening data channel for file transfer. 226 Transfer OK CWD / 250 CWD successful. "/" is current directory. QUIT 221 Goodbye disconnected.
As you can see, when stepwise is enabled, it traverses the directory structure using CWD commands, one directory at a time.
24.15. Using stepwise=false
230 Logged on TYPE A 200 Type set to A SYST 215 UNIX emulated by FileZilla PORT 127,0,0,1,4,122 200 Port command successful LIST one/two 150 Opening data channel for directory list 226 Transfer OK PORT 127,0,0,1,4,123 200 Port command successful LIST one/two/sub-a 150 Opening data channel for directory list 226 Transfer OK PORT 127,0,0,1,4,124 200 Port command successful LIST one/two/sub-b 150 Opening data channel for directory list 226 Transfer OK PORT 127,0,0,1,4,125 200 Port command successful RETR one/two/foo.txt 150 Opening data channel for file transfer. 226 Transfer OK PORT 127,0,0,1,4,126 200 Port command successful RETR one/two/sub-a/a.txt 150 Opening data channel for file transfer. 226 Transfer OK PORT 127,0,0,1,4,127 200 Port command successful RETR one/two/sub-b/b.txt 150 Opening data channel for file transfer. 226 Transfer OK QUIT 221 Goodbye disconnected.
As you can see, when not using stepwise, there are no CWD operations invoked at all.
24.16. Samples
In the sample below we set up Camel to download all the reports from the FTP server once every minute (60000 millis delay) as BINARY content and store them as files on the local file system.
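A Java DSL sketch of this route (a direct translation of the XML DSL route that follows):

from("ftp://scott@localhost/public/reports?password=tiger&binary=true&delay=60000")
    .to("file://target/test-reports");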
And the route using XML DSL:
<route> <from uri="ftp://scott@localhost/public/reports?password=tiger&binary=true&delay=60000"/> <to uri="file://target/test-reports"/> </route>
24.16.1. Consuming a remote FTPS server (implicit SSL) and client authentication
from("ftps://admin@localhost:2222/public/camel?password=admin&securityProtocol=SSL&implicit=true &ftpClient.keyStore.file=./src/test/resources/server.jks &ftpClient.keyStore.password=password&ftpClient.keyStore.keyPassword=password") .to("bean:foo");
24.16.2. Consuming a remote FTPS server (explicit TLS) and a custom trust store configuration
from("ftps://admin@localhost:2222/public/camel?password=admin&ftpClient.trustStore.file=./src/test/resources/server.jks&ftpClient.trustStore.password=password") .to("bean:foo");
24.17. Custom filtering
Camel supports pluggable filtering strategies. The strategy is to use the built-in org.apache.camel.component.file.GenericFileFilter in Java. You can then configure the endpoint with such a filter to skip certain files before they are processed.
In the sample we have built our own filter that only accepts files starting with report in the filename.
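A minimal sketch of such a filter (the implementation shown here is illustrative, assuming the com.mycompany.MyFileFilter class referenced in the XML below):

public class MyFileFilter<T> implements GenericFileFilter<T> {

    public boolean accept(GenericFile<T> file) {
        // only accept files whose name starts with "report"
        return file.getFileName().startsWith("report");
    }
}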
And then we can configure our route using the filter attribute to reference our filter (using # notation) that we have defined in the spring XML file:
<!-- define our sorter as a plain spring bean --> <bean id="myFilter" class="com.mycompany.MyFileFilter"/> <route> <from uri="ftp://someuser@someftpserver.com?password=secret&filter=#myFilter"/> <to uri="bean:processInbox"/> </route>
24.18. Filtering using ANT path matcher
The ANT path matcher is a filter that is shipped out-of-the-box in the camel-spring jar. So you need to depend on camel-spring if you are using Maven.
The reason is that we leverage Spring’s AntPathMatcher to do the actual matching.
The file paths are matched with the following rules:
- ? matches one character
- * matches zero or more characters
- ** matches zero or more directories in a path
The sample below demonstrates how to use it:
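As a hedged alternative sketch (this uses the built-in antInclude consumer option, which applies ANT-style patterns directly on the endpoint, rather than wiring the camel-spring filter bean explicitly; server and bean names are placeholders):

// only pick up .xml files, in the starting directory and any sub directory
from("ftp://someuser@someftpserver.com?password=secret&antInclude=**/*.xml")
    .to("bean:processInbox");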
24.19. Using a proxy with SFTP
To use an HTTP proxy to connect to your remote host, you can configure your route in the following way:
<!-- define our sorter as a plain spring bean --> <bean id="proxy" class="com.jcraft.jsch.ProxyHTTP"> <constructor-arg value="localhost"/> <constructor-arg value="7777"/> </bean> <route> <from uri="sftp://localhost:9999/root?username=admin&password=admin&proxy=#proxy"/> <to uri="bean:processFile"/> </route>
You can also assign a user name and password to the proxy, if necessary. Please consult the documentation for com.jcraft.jsch.Proxy to discover all options.
24.20. Setting preferred SFTP authentication method
If you want to explicitly specify the list of authentication methods that should be used by the sftp component, use the preferredAuthentications option. If for example you would like Camel to attempt to authenticate with a private/public SSH key and fall back to user/password authentication in the case when no public key is available, use the following route configuration:
from("sftp://localhost:9999/root?username=admin&password=admin&preferredAuthentications=publickey,password"). to("bean:processFile");
24.21. Consuming a single file using a fixed name
When you want to download a single file and know the file name, you can use fileName=myFileName.txt to tell Camel the name of the file to download. By default the consumer will still do an FTP LIST command to do a directory listing and then filter these files based on the fileName option. Though in this use-case it may be desirable to turn off the directory listing by setting useList=false. For example the user account used to login to the FTP server may not have permission to do an FTP LIST command. So you can turn this off with useList=false, and then provide the fixed name of the file to download with fileName=myFileName.txt; the FTP consumer can then still download the file. If the file for some reason does not exist, then Camel will by default throw an exception; you can turn this off and ignore this by setting ignoreFileNotFoundOrPermissionError=true.
For example to have a Camel route that picks up a single file, and deletes it after use, you can do:
from("ftp://admin@localhost:21/nolist/?password=admin&stepwise=false&useList=false&ignoreFileNotFoundOrPermissionError=true&fileName=report.txt&delete=true") .to("activemq:queue:report");
Notice that we have used all the options discussed above.
You can also use this with ConsumerTemplate. For example to download a single file (if it exists) and grab the file content as a String type:
String data = template.retrieveBodyNoWait("ftp://admin@localhost:21/nolist/?password=admin&stepwise=false&useList=false&ignoreFileNotFoundOrPermissionError=true&fileName=report.txt&delete=true", String.class);
24.22. Debug logging
This component has log level TRACE that can be helpful if you have problems.
24.23. Spring Boot Auto-Configuration
When using ftp with Spring Boot make sure to use the following Maven dependency to have support for auto configuration:
<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-ftp-starter</artifactId> </dependency>
The component supports 13 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
camel.component.ftp.autowired-enabled | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | Boolean |
camel.component.ftp.bridge-error-handler | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | Boolean |
camel.component.ftp.enabled | Whether to enable auto configuration of the ftp component. This is enabled by default. | Boolean | |
camel.component.ftp.lazy-start-producer | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | Boolean |
camel.component.ftps.autowired-enabled | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | Boolean |
camel.component.ftps.bridge-error-handler | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | Boolean |
camel.component.ftps.enabled | Whether to enable auto configuration of the ftps component. This is enabled by default. | Boolean | |
camel.component.ftps.lazy-start-producer | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | Boolean |
camel.component.ftps.use-global-ssl-context-parameters | Enable usage of global SSL context parameters. | false | Boolean |
camel.component.sftp.autowired-enabled | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | Boolean |
camel.component.sftp.bridge-error-handler | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | Boolean |
camel.component.sftp.enabled | Whether to enable auto configuration of the sftp component. This is enabled by default. | Boolean | |
camel.component.sftp.lazy-start-producer | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | Boolean |
Chapter 25. HTTP
Only producer is supported
The HTTP component provides HTTP based endpoints for calling external HTTP resources (as a client to call external servers using HTTP).
Maven users will need to add the following dependency to their pom.xml
for this component:
<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-http</artifactId> <version>${camel.version}</version> <!-- use the same version as your Camel core version --> </dependency>
25.1. URI format
http:hostname[:port][/resourceUri][?options]
Will by default use port 80 for HTTP and 443 for HTTPS.
25.2. Configuring Options
Camel components are configured on two separate levels:
- component level
- endpoint level
25.2.1. Configuring Component Options
The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth.
Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all.
Configuring components can be done with the Component DSL, in a configuration file (application.properties|yaml), or directly with Java code.
25.2.2. Configuring Endpoint Options
Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both.
Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints.
A good practice when configuring options is to use Property Placeholders, which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse.
The following two sections lists all the options, firstly for the component followed by the endpoint.
25.3. Component Options
The HTTP component supports 37 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
cookieStore (producer) | To use a custom org.apache.http.client.CookieStore. By default the org.apache.http.impl.client.BasicCookieStore is used which is an in-memory only cookie store. Notice if bridgeEndpoint=true then the cookie store is forced to be a noop cookie store as cookie shouldn’t be stored as we are just bridging (eg acting as a proxy). | CookieStore | |
copyHeaders (producer) | If this option is true then IN exchange headers will be copied to OUT exchange headers according to copy strategy. Setting this to false, allows to only include the headers from the HTTP response (not propagating IN headers). | true | boolean |
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
responsePayloadStreamingThreshold (producer) | This threshold in bytes controls whether the response payload should be stored in memory as a byte array or be streaming based. Set this to -1 to always use streaming mode. | 8192 | int |
skipRequestHeaders (producer (advanced)) | Whether to skip mapping all the Camel headers as HTTP request headers. If there are no data from Camel headers needed to be included in the HTTP request then this can avoid parsing overhead with many object allocations for the JVM garbage collector. | false | boolean |
skipResponseHeaders (producer (advanced)) | Whether to skip mapping all the HTTP response headers to Camel headers. If there are no data needed from HTTP headers then this can avoid parsing overhead with many object allocations for the JVM garbage collector. | false | boolean |
allowJavaSerializedObject (advanced) | Whether to allow java serialization when a request uses context-type=application/x-java-serialized-object. This is by default turned off. If you enable this then be aware that Java will deserialize the incoming data from the request to Java and that can be a potential security risk. | false | boolean |
authCachingDisabled (advanced) | Disables authentication scheme caching. | false | boolean |
automaticRetriesDisabled (advanced) | Disables automatic request recovery and re-execution. | false | boolean |
autowiredEnabled (advanced) | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | boolean |
clientConnectionManager (advanced) | To use a custom and shared HttpClientConnectionManager to manage connections. If this has been configured then this is always used for all endpoints created by this component. | HttpClientConnectionManager | |
connectionsPerRoute (advanced) | The maximum number of connections per route. | 20 | int |
connectionStateDisabled (advanced) | Disables connection state tracking. | false | boolean |
connectionTimeToLive (advanced) | The time for connection to live, the time unit is millisecond, the default value is always keep alive. | long | |
contentCompressionDisabled (advanced) | Disables automatic content decompression. | false | boolean |
cookieManagementDisabled (advanced) | Disables state (cookie) management. | false | boolean |
defaultUserAgentDisabled (advanced) | Disables the default user agent set by this builder if none has been provided by the user. | false | boolean |
httpBinding (advanced) | To use a custom HttpBinding to control the mapping between Camel message and HttpClient. | HttpBinding | |
httpClientConfigurer (advanced) | To use the custom HttpClientConfigurer to perform configuration of the HttpClient that will be used. | HttpClientConfigurer | |
httpConfiguration (advanced) | To use the shared HttpConfiguration as base configuration. | HttpConfiguration | |
httpContext (advanced) | To use a custom org.apache.http.protocol.HttpContext when executing requests. | HttpContext | |
maxTotalConnections (advanced) | The maximum number of connections. | 200 | int |
redirectHandlingDisabled (advanced) | Disables automatic redirect handling. | false | boolean |
headerFilterStrategy (filter) | To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter header to and from Camel message. | HeaderFilterStrategy | |
proxyAuthDomain (proxy) | Proxy authentication domain to use. | String | |
proxyAuthHost (proxy) | Proxy authentication host. | String | |
proxyAuthMethod (proxy) | Proxy authentication method to use. Enum values:
Basic, Digest, NTLM | String |
proxyAuthNtHost (proxy) | Proxy authentication domain (workstation name) to use with NTML. | String | |
proxyAuthPassword (proxy) | Proxy authentication password. | String | |
proxyAuthPort (proxy) | Proxy authentication port. | Integer | |
proxyAuthUsername (proxy) | Proxy authentication username. | String | |
sslContextParameters (security) | To configure security using SSLContextParameters. Important: Only one instance of org.apache.camel.support.jsse.SSLContextParameters is supported per HttpComponent. If you need to use 2 or more different instances, you need to define a new HttpComponent per instance you need. | SSLContextParameters | |
useGlobalSslContextParameters (security) | Enable usage of global SSL context parameters. | false | boolean |
x509HostnameVerifier (security) | To use a custom X509HostnameVerifier such as DefaultHostnameVerifier or NoopHostnameVerifier. | HostnameVerifier | |
connectionRequestTimeout (timeout) | The timeout in milliseconds used when requesting a connection from the connection manager. A timeout value of zero is interpreted as an infinite timeout. A negative value is interpreted as undefined (system default). | -1 | int |
connectTimeout (timeout) | Determines the timeout in milliseconds until a connection is established. A timeout value of zero is interpreted as an infinite timeout. A negative value is interpreted as undefined (system default). | -1 | int |
socketTimeout (timeout) | Defines the socket timeout in milliseconds, which is the timeout for waiting for data or, put differently, a maximum period of inactivity between two consecutive data packets. A timeout value of zero is interpreted as an infinite timeout. A negative value is interpreted as undefined (system default). | -1 | int |
25.4. Endpoint Options
The HTTP endpoint is configured using URI syntax:
http://httpUri
with the following path and query parameters:
25.4.1. Path Parameters (1 parameters)
Name | Description | Default | Type |
---|---|---|---|
httpUri (common) | Required The url of the HTTP endpoint to call. | URI |
25.4.2. Query Parameters (51 parameters)
Name | Description | Default | Type |
---|---|---|---|
chunked (producer) | If this option is false the Servlet will disable the HTTP streaming and set the content-length header on the response. | true | boolean |
disableStreamCache (common) | Determines whether or not the raw input stream from Servlet is cached or not (Camel will read the stream into an in-memory/overflow-to-file, Stream caching cache). By default Camel will cache the Servlet input stream to support reading it multiple times to ensure Camel can retrieve all data from the stream. However you can set this option to true when you for example need to access the raw stream, such as streaming it directly to a file or other persistent store. DefaultHttpBinding will copy the request input stream into a stream cache and put it into message body if this option is false to support reading the stream multiple times. If you use Servlet to bridge/proxy an endpoint then consider enabling this option to improve performance, in case you do not need to read the message payload multiple times. The http producer will by default cache the response body stream. If setting this option to true, then the producers will not cache the response body stream but use the response stream as-is as the message body. | false | boolean |
headerFilterStrategy (common) | To use a custom HeaderFilterStrategy to filter header to and from Camel message. | HeaderFilterStrategy | |
httpBinding (common (advanced)) | To use a custom HttpBinding to control the mapping between Camel message and HttpClient. | HttpBinding | |
bridgeEndpoint (producer) | If the option is true, HttpProducer will ignore the Exchange.HTTP_URI header, and use the endpoint’s URI for request. You may also set the option throwExceptionOnFailure to be false to let the HttpProducer send all the fault response back. | false | boolean |
clearExpiredCookies (producer) | Whether to clear expired cookies before sending the HTTP request. This ensures the cookie store does not keep growing by adding new cookies which are never removed when they expire. If the component has disabled cookie management then this option is disabled too. | true | boolean |
connectionClose (producer) | Specifies whether a Connection Close header must be added to HTTP Request. By default connectionClose is false. | false | boolean |
copyHeaders (producer) | If this option is true then IN exchange headers will be copied to OUT exchange headers according to copy strategy. Setting this to false, allows to only include the headers from the HTTP response (not propagating IN headers). | true | boolean |
customHostHeader (producer) | To use a custom Host header for the producer. When not set in the query, it will be ignored. When set, it will override the Host header derived from the URL. | String | |
httpMethod (producer) | Configure the HTTP method to use. The HttpMethod header cannot override this option if set. Enum values:
GET, POST, PUT, DELETE, HEAD, OPTIONS, TRACE, PATCH | HttpMethods |
ignoreResponseBody (producer) | If this option is true, The http producer won’t read response body and cache the input stream. | false | boolean |
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
preserveHostHeader (producer) | If the option is true, HttpProducer will set the Host header to the value contained in the current exchange Host header, useful in reverse proxy applications where you want the Host header received by the downstream server to reflect the URL called by the upstream client, this allows applications which use the Host header to generate accurate URL’s for a proxied service. | false | boolean |
throwExceptionOnFailure (producer) | Option to disable throwing the HttpOperationFailedException in case of failed responses from the remote server. This allows you to get all responses regardless of the HTTP status code. | true | boolean |
transferException (producer) | If enabled and an Exchange failed processing on the consumer side, and if the caused Exception was sent back serialized in the response as an application/x-java-serialized-object content type. On the producer side the exception will be deserialized and thrown as is, instead of the HttpOperationFailedException. The caused exception is required to be serialized. This is by default turned off. If you enable this then be aware that Java will deserialize the incoming data from the request to Java and that can be a potential security risk. | false | boolean |
cookieHandler (producer (advanced)) | Configure a cookie handler to maintain a HTTP session. | CookieHandler | |
cookieStore (producer (advanced)) | To use a custom CookieStore. By default the BasicCookieStore is used which is an in-memory only cookie store. Notice if bridgeEndpoint=true then the cookie store is forced to be a noop cookie store as cookie shouldn’t be stored as we are just bridging (eg acting as a proxy). If a cookieHandler is set then the cookie store is also forced to be a noop cookie store as cookie handling is then performed by the cookieHandler. | CookieStore | |
deleteWithBody (producer (advanced)) | Whether the HTTP DELETE should include the message body or not. By default HTTP DELETE do not include any HTTP body. However in some rare cases users may need to be able to include the message body. | false | boolean |
getWithBody (producer (advanced)) | Whether the HTTP GET should include the message body or not. By default HTTP GET do not include any HTTP body. However in some rare cases users may need to be able to include the message body. | false | boolean |
okStatusCodeRange (producer (advanced)) | The status codes which are considered a success response. The values are inclusive. Multiple ranges can be defined, separated by comma, e.g. 200-204,209,301-304. Each range must be a single number or from-to with the dash included. | 200-299 | String |
skipRequestHeaders (producer (advanced)) | Whether to skip mapping all the Camel headers as HTTP request headers. If there are no data from Camel headers needed to be included in the HTTP request then this can avoid parsing overhead with many object allocations for the JVM garbage collector. | false | boolean |
skipResponseHeaders (producer (advanced)) | Whether to skip mapping all the HTTP response headers to Camel headers. If there are no data needed from HTTP headers then this can avoid parsing overhead with many object allocations for the JVM garbage collector. | false | boolean |
userAgent (producer (advanced)) | To set a custom HTTP User-Agent request header. | String | |
clientBuilder (advanced) | Provide access to the http client request parameters used on new RequestConfig instances used by producers or consumers of this endpoint. | HttpClientBuilder | |
clientConnectionManager (advanced) | To use a custom HttpClientConnectionManager to manage connections. | HttpClientConnectionManager | |
connectionsPerRoute (advanced) | The maximum number of connections per route. | 20 | int |
httpClient (advanced) | Sets a custom HttpClient to be used by the producer. | HttpClient | |
httpClientConfigurer (advanced) | Register a custom configuration strategy for new HttpClient instances created by producers or consumers such as to configure authentication mechanisms etc. | HttpClientConfigurer | |
httpClientOptions (advanced) | To configure the HttpClient using the key/values from the Map. | Map | |
httpContext (advanced) | To use a custom HttpContext instance. | HttpContext | |
maxTotalConnections (advanced) | The maximum number of connections. | 200 | int |
useSystemProperties (advanced) | To use System Properties as fallback for configuration. | false | boolean |
proxyAuthDomain (proxy) | Proxy authentication domain to use with NTML. | String | |
proxyAuthHost (proxy) | Proxy authentication host. | String | |
proxyAuthMethod (proxy) | Proxy authentication method to use. Enum values:
Basic, Digest, NTLM | String |
proxyAuthNtHost (proxy) | Proxy authentication domain (workstation name) to use with NTML. | String | |
proxyAuthPassword (proxy) | Proxy authentication password. | String | |
proxyAuthPort (proxy) | Proxy authentication port. | int | |
proxyAuthScheme (proxy) | Proxy authentication scheme to use. Enum values:
http, https | String |
proxyAuthUsername (proxy) | Proxy authentication username. | String | |
proxyHost (proxy) | Proxy hostname to use. | String | |
proxyPort (proxy) | Proxy port to use. | int | |
authDomain (security) | Authentication domain to use with NTML. | String | |
authenticationPreemptive (security) | If this option is true, camel-http sends preemptive basic authentication to the server. | false | boolean |
authHost (security) | Authentication host to use with NTML. | String | |
authMethod (security) | Authentication methods allowed to use as a comma separated list of values Basic, Digest or NTLM. | String | |
authMethodPriority (security) | Which authentication method to prioritize to use, either as Basic, Digest or NTLM. Enum values:
Basic, Digest, NTLM | String |
authPassword (security) | Authentication password. | String | |
authUsername (security) | Authentication username. | String | |
sslContextParameters (security) | To configure security using SSLContextParameters. Important: Only one instance of org.apache.camel.util.jsse.SSLContextParameters is supported per HttpComponent. If you need to use 2 or more different instances, you need to define a new HttpComponent per instance you need. | SSLContextParameters | |
x509HostnameVerifier (security) | To use a custom X509HostnameVerifier such as DefaultHostnameVerifier or NoopHostnameVerifier. | HostnameVerifier |
25.5. Message Headers
Name | Type | Description |
---|---|---|
Exchange.HTTP_URI | String | URI to call. Will override existing URI set directly on the endpoint. This URI is the URI of the HTTP server to call. It is not the same as the Camel endpoint URI, where you can configure endpoint options such as security etc. This header does not support that; it is only the URI of the HTTP server. |
Exchange.HTTP_PATH | String | Request URI’s path, the header will be used to build the request URI with the HTTP_URI. |
Exchange.HTTP_QUERY | String | URI parameters. Will override existing URI parameters set directly on the endpoint. |
Exchange.HTTP_RESPONSE_CODE | int | The HTTP response code from the external server. Is 200 for OK. |
Exchange.HTTP_RESPONSE_TEXT | String | The HTTP response text from the external server. |
Exchange.HTTP_CHARACTER_ENCODING | String | Character encoding. |
Exchange.CONTENT_TYPE | String | The HTTP content type. Is set on both the IN and OUT message to provide a content type, such as text/html. |
Exchange.CONTENT_ENCODING | String | The HTTP content encoding. Is set on both the IN and OUT message to provide a content encoding, such as gzip. |
25.6. Message Body
Camel will store the HTTP response from the external server on the OUT body. All headers from the IN message will be copied to the OUT message, so headers are preserved during routing. Additionally Camel will add the HTTP response headers as well to the OUT message headers.
25.7. Using System Properties
When setting useSystemProperties to true, the HTTP Client will look for the following System Properties and use them:
- ssl.TrustManagerFactory.algorithm
- javax.net.ssl.trustStoreType
- javax.net.ssl.trustStore
- javax.net.ssl.trustStoreProvider
- javax.net.ssl.trustStorePassword
- java.home
- ssl.KeyManagerFactory.algorithm
- javax.net.ssl.keyStoreType
- javax.net.ssl.keyStore
- javax.net.ssl.keyStoreProvider
- javax.net.ssl.keyStorePassword
- http.proxyHost
- http.proxyPort
- http.nonProxyHosts
- http.keepAlive
- http.maxConnections
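A minimal sketch (the endpoint URL is a placeholder): enable useSystemProperties on the endpoint and supply the properties on the JVM command line.

// e.g. the JVM started with: -Dhttp.proxyHost=myproxy.example.com -Dhttp.proxyPort=8080
from("direct:start")
    .to("https://remote.example.com/api?useSystemProperties=true");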
25.8. Response code
Camel will handle according to the HTTP response code:
- Response code is in the range 100..299, Camel regards it as a success response.
- Response code is in the range 300..399, Camel regards it as a redirection response and will throw a HttpOperationFailedException with the information.
- Response code is 400+, Camel regards it as an external server failure and will throw a HttpOperationFailedException with the information.
The option throwExceptionOnFailure can be set to false to prevent the HttpOperationFailedException from being thrown for failed response codes. This allows you to get any response from the remote server.
There is a sample below demonstrating this.
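A minimal sketch of such a route (oldhost is the placeholder host used elsewhere in this chapter):

from("direct:start")
    .to("http://oldhost?throwExceptionOnFailure=false")
    // no exception is thrown for failed status codes, so inspect the code yourself
    .log("Remote server returned ${header.CamelHttpResponseCode}");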
25.9. Exceptions
The HttpOperationFailedException exception contains the following information:
- The HTTP status code
- The HTTP status line (text of the status code)
- Redirect location, if server returned a redirect
- Response body as a java.lang.String, if server provided a body as response
25.10. Which HTTP method will be used
The following algorithm is used to determine what HTTP method should be used:
1. Use method provided as endpoint configuration (httpMethod).
2. Use method provided in header (Exchange.HTTP_METHOD).
3. GET if query string is provided in header.
4. GET if endpoint is configured with a query string.
5. POST if there is data to send (body is not null).
6. GET otherwise.
25.11. How to get access to HttpServletRequest and HttpServletResponse
You can get access to these two using the Camel type converter system:
HttpServletRequest request = exchange.getIn().getBody(HttpServletRequest.class); HttpServletResponse response = exchange.getIn().getBody(HttpServletResponse.class);
You can get the request and response not just from the processor after the camel-jetty or camel-cxf endpoint.
25.12. Configuring URI to call
You can set the HTTP producer’s URI directly from the endpoint URI. In the route below, Camel will call out to the external server, oldhost, using HTTP.
from("direct:start") .to("http://oldhost");
And the equivalent Spring sample:
<camelContext xmlns="http://activemq.apache.org/camel/schema/spring"> <route> <from uri="direct:start"/> <to uri="http://oldhost"/> </route> </camelContext>
You can override the HTTP endpoint URI by adding a header with the key Exchange.HTTP_URI on the message.
from("direct:start") .setHeader(Exchange.HTTP_URI, constant("http://newhost")) .to("http://oldhost");
In the sample above Camel will call http://newhost/ even though the endpoint is configured with http://oldhost/.
If the http endpoint is working in bridge mode, it will ignore the message header of Exchange.HTTP_URI.
25.13. Configuring URI Parameters
The http producer supports URI parameters to be sent to the HTTP server. The URI parameters can either be set directly on the endpoint URI or as a header with the key Exchange.HTTP_QUERY on the message.
from("direct:start") .to("http://oldhost?order=123&detail=short");
Or options provided in a header:
from("direct:start") .setHeader(Exchange.HTTP_QUERY, constant("order=123&detail=short")) .to("http://oldhost");
25.14. How to set the http method (GET/PATCH/POST/PUT/DELETE/HEAD/OPTIONS/TRACE) to the HTTP producer
The HTTP component provides a way to set the HTTP request method by setting the message header. Here is an example:
from("direct:start") .setHeader(Exchange.HTTP_METHOD, constant(org.apache.camel.component.http.HttpMethods.POST)) .to("http://www.google.com") .to("mock:results");
The method can be written a bit shorter using the string constants:
.setHeader("CamelHttpMethod", constant("POST"))
And the equivalent Spring sample:
<camelContext xmlns="http://activemq.apache.org/camel/schema/spring"> <route> <from uri="direct:start"/> <setHeader name="CamelHttpMethod"> <constant>POST</constant> </setHeader> <to uri="http://www.google.com"/> <to uri="mock:results"/> </route> </camelContext>
25.15. Using client timeout - SO_TIMEOUT
See the HttpSOTimeoutTest unit test.
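A hedged sketch: the component-level socketTimeout option (listed in the component options above) controls SO_TIMEOUT in milliseconds; the endpoint and value are placeholders.

HttpComponent http = getContext().getComponent("http", HttpComponent.class);
// fail reads that stall for more than 5 seconds
http.setSocketTimeout(5000);

from("direct:start").to("http://oldhost");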
25.16. Configuring a Proxy
The HTTP component provides a way to configure a proxy.
from("direct:start") .to("http://oldhost?proxyAuthHost=www.myproxy.com&proxyAuthPort=80");
There is also support for proxy authentication via the proxyAuthUsername and proxyAuthPassword options.
25.16.1. Using proxy settings outside of URI
To avoid System properties conflicts, you can set proxy configuration only from the CamelContext or URI.
Java DSL :
context.getGlobalOptions().put("http.proxyHost", "172.168.18.9"); context.getGlobalOptions().put("http.proxyPort", "8080");
Spring XML
<camelContext> <properties> <property key="http.proxyHost" value="172.168.18.9"/> <property key="http.proxyPort" value="8080"/> </properties> </camelContext>
Camel will first set the settings from Java System or CamelContext Properties and then the endpoint proxy options if provided.
So you can override the system properties with the endpoint options.
There is also a http.proxyScheme property you can set to explicitly configure the scheme to use.
25.17. Configuring charset
If you are using POST to send data you can configure the charset using the Exchange property:
exchange.setProperty(Exchange.CHARSET_NAME, "ISO-8859-1");
25.17.1. Sample with scheduled poll
This sample polls the Google homepage every 10 seconds and writes the page to the file message.html:
from("timer://foo?fixedRate=true&delay=0&period=10000") .to("http://www.google.com") .setHeader(FileComponent.HEADER_FILE_NAME, "message.html") .to("file:target/google");
25.17.2. URI Parameters from the endpoint URI
In this sample we have the complete URI endpoint that is just what you would have typed in a web browser. Multiple URI parameters can of course be set using the & character as separator, just as you would in the web browser. Camel does no tricks here.
// we query for Camel at the Google page template.sendBody("http://www.google.com/search?q=Camel", null);
25.17.3. URI Parameters from the Message
Map headers = new HashMap(); headers.put(Exchange.HTTP_QUERY, "q=Camel&lr=lang_en"); // we query for Camel and English language at Google template.sendBody("http://www.google.com/search", null, headers);
In the header value above notice that it should not be prefixed with ? and you can separate parameters as usual with the & char.
25.17.4. Getting the Response Code
You can get the HTTP response code from the HTTP component by getting the value from the Out message header with Exchange.HTTP_RESPONSE_CODE.
Exchange exchange = template.send("http://www.google.com/search", new Processor() { public void process(Exchange exchange) throws Exception { exchange.getIn().setHeader(Exchange.HTTP_QUERY, constant("hl=en&q=activemq")); } }); Message out = exchange.getOut(); int responseCode = out.getHeader(Exchange.HTTP_RESPONSE_CODE, Integer.class);
25.18. Disabling Cookies
To disable cookies you can set the HTTP Client to ignore cookies by adding this URI option: httpClient.cookieSpec=ignoreCookies
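For example (using the oldhost placeholder from the earlier samples):

from("direct:start")
    .to("http://oldhost?httpClient.cookieSpec=ignoreCookies");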
25.19. Basic auth with the streaming message body
In order to avoid the NonRepeatableRequestException, you need to do Preemptive Basic Authentication by adding the option: authenticationPreemptive=true
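A minimal sketch (host and credentials are placeholders):

from("direct:start")
    .to("http://oldhost?authMethod=Basic&authUsername=scott&authPassword=tiger&authenticationPreemptive=true");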
25.20. Advanced Usage
If you need more control over the HTTP producer you should use the HttpComponent, where you can set various classes to give you custom behavior.
25.20.1. Setting up SSL for HTTP Client
Using the JSSE Configuration Utility
The HTTP component supports SSL/TLS configuration through the Camel JSSE Configuration Utility. This utility greatly decreases the amount of component specific code you need to write and is configurable at the endpoint and component levels. The following examples demonstrate how to use the utility with the HTTP component.
Programmatic configuration of the component
KeyStoreParameters ksp = new KeyStoreParameters();
ksp.setResource("/users/home/server/keystore.jks");
ksp.setPassword("keystorePassword");

KeyManagersParameters kmp = new KeyManagersParameters();
kmp.setKeyStore(ksp);
kmp.setKeyPassword("keyPassword");

SSLContextParameters scp = new SSLContextParameters();
scp.setKeyManagers(kmp);

HttpComponent httpComponent = getContext().getComponent("https", HttpComponent.class);
httpComponent.setSslContextParameters(scp);
Spring DSL based configuration of endpoint
<camel:sslContextParameters id="sslContextParameters">
  <camel:keyManagers keyPassword="keyPassword">
    <camel:keyStore resource="/users/home/server/keystore.jks" password="keystorePassword"/>
  </camel:keyManagers>
</camel:sslContextParameters>

<to uri="https://127.0.0.1/mail/?sslContextParameters=#sslContextParameters"/>
Configuring Apache HTTP Client Directly
The camel-http component is built on top of Apache HttpClient. Please refer to SSL/TLS customization for details, or have a look at the org.apache.camel.component.http.HttpsServerTestSupport unit test base class.
You can also implement a custom org.apache.camel.component.http.HttpClientConfigurer to do some configuration on the HTTP client if you need full control of it.
However, if you just want to specify the keystore and truststore, you can do this with the Apache HttpClient API, for example:
KeyStore keystore = ...;
KeyStore truststore = ...;

SchemeRegistry registry = new SchemeRegistry();
registry.register(new Scheme("https", 443, new SSLSocketFactory(keystore, "mypassword", truststore)));
You then need to create a class that implements HttpClientConfigurer and registers the https protocol, providing a keystore or truststore as in the example above. Then, from your Camel route builder class, you can hook it up like so:
HttpComponent httpComponent = getContext().getComponent("http", HttpComponent.class); httpComponent.setHttpClientConfigurer(new MyHttpClientConfigurer());
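For reference, a minimal sketch of such a configurer is shown below; the class name is hypothetical and the body only indicates where to customize the underlying Apache HttpClientBuilder:

import org.apache.camel.component.http.HttpClientConfigurer;
import org.apache.http.impl.client.HttpClientBuilder;

public class MyHttpClientConfigurer implements HttpClientConfigurer {

    @Override
    public void configureHttpClient(HttpClientBuilder clientBuilder) {
        // customize the HttpClient being built here, for example register
        // a custom SSL socket factory created from your keystore/truststore:
        // clientBuilder.setSSLSocketFactory(mySslSocketFactory);
    }
}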
If you are doing this using the Spring DSL, you can specify your HttpClientConfigurer using the URI. For example:
<bean id="myHttpClientConfigurer" class="my.https.HttpClientConfigurer"> </bean> <to uri="https://myhostname.com:443/myURL?httpClientConfigurer=myHttpClientConfigurer"/>
As long as you implement the HttpClientConfigurer and configure your keystore and truststore as described above, it will work fine.
Using HTTPS to authenticate gotchas
An end user reported a problem authenticating with HTTPS. The problem was eventually resolved by providing a custom configured org.apache.http.protocol.HttpContext:
- 1. Create a (Spring) factory for HttpContexts:
public class HttpContextFactory {

    private String httpHost = "localhost";
    private int httpPort = 9001;

    private BasicHttpContext httpContext = new BasicHttpContext();
    private BasicAuthCache authCache = new BasicAuthCache();
    private BasicScheme basicAuth = new BasicScheme();

    public HttpContext getObject() {
        authCache.put(new HttpHost(httpHost, httpPort), basicAuth);
        httpContext.setAttribute(ClientContext.AUTH_CACHE, authCache);
        return httpContext;
    }

    // getter and setter
}
- 2. Declare an HttpContext in the Spring application context file:
<bean id="myHttpContext" factory-bean="httpContextFactory" factory-method="getObject"/>
- 3. Reference the context in the http URL:
<to uri="https://myhostname.com:443/myURL?httpContext=myHttpContext"/>
Using different SSLContextParameters
The HTTP component only supports one instance of org.apache.camel.support.jsse.SSLContextParameters per component. If you need to use two or more different instances, then you need to set up multiple HTTP components as shown below, where we have two components, each using its own instance of the sslContextParameters property.
<bean id="http-foo" class="org.apache.camel.component.http.HttpComponent"> <property name="sslContextParameters" ref="sslContextParams1"/> <property name="x509HostnameVerifier" ref="hostnameVerifier"/> </bean> <bean id="http-bar" class="org.apache.camel.component.http.HttpComponent"> <property name="sslContextParameters" ref="sslContextParams2"/> <property name="x509HostnameVerifier" ref="hostnameVerifier"/> </bean>
25.21. Spring Boot Auto-Configuration
When using http with Spring Boot make sure to use the following Maven dependency to have support for auto configuration:
<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-http-starter</artifactId> </dependency>
The component supports 38 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
camel.component.http.allow-java-serialized-object | Whether to allow java serialization when a request uses context-type=application/x-java-serialized-object. This is by default turned off. If you enable this then be aware that Java will deserialize the incoming data from the request to Java and that can be a potential security risk. | false | Boolean |
camel.component.http.auth-caching-disabled | Disables authentication scheme caching. | false | Boolean |
camel.component.http.automatic-retries-disabled | Disables automatic request recovery and re-execution. | false | Boolean |
camel.component.http.autowired-enabled | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | Boolean |
camel.component.http.client-connection-manager | To use a custom and shared HttpClientConnectionManager to manage connections. If this has been configured then this is always used for all endpoints created by this component. The option is a org.apache.http.conn.HttpClientConnectionManager type. | HttpClientConnectionManager | |
camel.component.http.connect-timeout | Determines the timeout in milliseconds until a connection is established. A timeout value of zero is interpreted as an infinite timeout. A negative value is interpreted as undefined (system default). | -1 | Integer |
camel.component.http.connection-request-timeout | The timeout in milliseconds used when requesting a connection from the connection manager. A timeout value of zero is interpreted as an infinite timeout. A negative value is interpreted as undefined (system default). | -1 | Integer |
camel.component.http.connection-state-disabled | Disables connection state tracking. | false | Boolean |
camel.component.http.connection-time-to-live | The time for connection to live, the time unit is millisecond, the default value is always keep alive. | Long | |
camel.component.http.connections-per-route | The maximum number of connections per route. | 20 | Integer |
camel.component.http.content-compression-disabled | Disables automatic content decompression. | false | Boolean |
camel.component.http.cookie-management-disabled | Disables state (cookie) management. | false | Boolean |
camel.component.http.cookie-store | To use a custom org.apache.http.client.CookieStore. By default the org.apache.http.impl.client.BasicCookieStore is used which is an in-memory only cookie store. Notice if bridgeEndpoint=true then the cookie store is forced to be a noop cookie store as cookie shouldn’t be stored as we are just bridging (eg acting as a proxy). The option is a org.apache.http.client.CookieStore type. | CookieStore | |
camel.component.http.copy-headers | If this option is true then IN exchange headers will be copied to OUT exchange headers according to copy strategy. Setting this to false, allows to only include the headers from the HTTP response (not propagating IN headers). | true | Boolean |
camel.component.http.default-user-agent-disabled | Disables the default user agent set by this builder if none has been provided by the user. | false | Boolean |
camel.component.http.enabled | Whether to enable auto configuration of the http component. This is enabled by default. | Boolean | |
camel.component.http.header-filter-strategy | To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter header to and from Camel message. The option is a org.apache.camel.spi.HeaderFilterStrategy type. | HeaderFilterStrategy | |
camel.component.http.http-binding | To use a custom HttpBinding to control the mapping between Camel message and HttpClient. The option is a org.apache.camel.http.common.HttpBinding type. | HttpBinding | |
camel.component.http.http-client-configurer | To use the custom HttpClientConfigurer to perform configuration of the HttpClient that will be used. The option is a org.apache.camel.component.http.HttpClientConfigurer type. | HttpClientConfigurer | |
camel.component.http.http-configuration | To use the shared HttpConfiguration as base configuration. The option is a org.apache.camel.http.common.HttpConfiguration type. | HttpConfiguration | |
camel.component.http.http-context | To use a custom org.apache.http.protocol.HttpContext when executing requests. The option is a org.apache.http.protocol.HttpContext type. | HttpContext | |
camel.component.http.lazy-start-producer | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | Boolean |
camel.component.http.max-total-connections | The maximum number of connections. | 200 | Integer |
camel.component.http.proxy-auth-domain | Proxy authentication domain to use. | String | |
camel.component.http.proxy-auth-host | Proxy authentication host. | String | |
camel.component.http.proxy-auth-method | Proxy authentication method to use. | String | |
camel.component.http.proxy-auth-nt-host | Proxy authentication domain (workstation name) to use with NTML. | String | |
camel.component.http.proxy-auth-password | Proxy authentication password. | String | |
camel.component.http.proxy-auth-port | Proxy authentication port. | Integer | |
camel.component.http.proxy-auth-username | Proxy authentication username. | String | |
camel.component.http.redirect-handling-disabled | Disables automatic redirect handling. | false | Boolean |
camel.component.http.response-payload-streaming-threshold | This threshold in bytes controls whether the response payload should be stored in memory as a byte array or be streaming based. Set this to -1 to always use streaming mode. | 8192 | Integer |
camel.component.http.skip-request-headers | Whether to skip mapping all the Camel headers as HTTP request headers. If there are no data from Camel headers needed to be included in the HTTP request then this can avoid parsing overhead with many object allocations for the JVM garbage collector. | false | Boolean |
camel.component.http.skip-response-headers | Whether to skip mapping all the HTTP response headers to Camel headers. If there are no data needed from HTTP headers then this can avoid parsing overhead with many object allocations for the JVM garbage collector. | false | Boolean |
camel.component.http.socket-timeout | Defines the socket timeout in milliseconds, which is the timeout for waiting for data or, put differently, a maximum period of inactivity between two consecutive data packets. A timeout value of zero is interpreted as an infinite timeout. A negative value is interpreted as undefined (system default). | -1 | Integer |
camel.component.http.ssl-context-parameters | To configure security using SSLContextParameters. Important: Only one instance of org.apache.camel.support.jsse.SSLContextParameters is supported per HttpComponent. If you need to use 2 or more different instances, you need to define a new HttpComponent per instance you need. The option is a org.apache.camel.support.jsse.SSLContextParameters type. | SSLContextParameters | |
camel.component.http.use-global-ssl-context-parameters | Enable usage of global SSL context parameters. | false | Boolean |
camel.component.http.x509-hostname-verifier | To use a custom X509HostnameVerifier such as DefaultHostnameVerifier or NoopHostnameVerifier. The option is a javax.net.ssl.HostnameVerifier type. | HostnameVerifier |
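As an illustration, a few of the options above could be set in application.properties (the values are placeholders):

camel.component.http.connect-timeout = 5000
camel.component.http.socket-timeout = 10000
camel.component.http.max-total-connections = 200
camel.component.http.connections-per-route = 20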
Chapter 26. Infinispan
Both producer and consumer are supported
This component allows you to interact with the Infinispan distributed data grid / cache using the Hot Rod protocol. Infinispan is an extremely scalable, highly available key/value data store and data grid platform written in Java.
If you use Maven, you must add the following dependency to your pom.xml
:
<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-infinispan</artifactId> <version>{CamelSBVersion}</version> <!-- use the same version as your Camel core version --> </dependency>
26.1. URI format
infinispan://cacheName?[options]
The producer allows sending messages to a remote cache using the HotRod protocol. The consumer allows listening for events from a remote cache using the HotRod protocol.
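A minimal sketch of both sides, assuming a remote cache named myCacheName reachable on localhost:11222:

// producer: put an entry into the remote cache
from("direct:put")
    .setHeader(InfinispanConstants.OPERATION).constant(InfinispanOperation.PUT)
    .setHeader(InfinispanConstants.KEY).constant("123")
    .setHeader(InfinispanConstants.VALUE).constant("some value")
    .to("infinispan:myCacheName?hosts=localhost:11222");

// consumer: listen for events from the remote cache
from("infinispan:myCacheName?hosts=localhost:11222")
    .log("Received ${header.CamelInfinispanEventType} for key ${header.CamelInfinispanKey}");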
26.2. Configuring Options
Camel components are configured on two separate levels:
- component level
- endpoint level
26.2.1. Configuring Component Options
The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth.
Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all.
Configuring components can be done with the Component DSL, in a configuration file (application.properties|yaml), or directly with Java code.
26.2.2. Configuring Endpoint Options
Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both.
Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints.
A good practice when configuring options is to use Property Placeholders, which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse.
The following two sections lists all the options, firstly for the component followed by the endpoint.
26.3. Component Options
The Infinispan component supports 26 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
configuration (common) | Component configuration. | InfinispanRemoteConfiguration | |
hosts (common) | Specifies the host of the cache on Infinispan instance. | String | |
queryBuilder (common) | Specifies the query builder. | InfinispanQueryBuilder | |
secure (common) | Define if we are connecting to a secured Infinispan instance. | false | boolean |
bridgeErrorHandler (consumer) | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | boolean |
customListener (consumer) | Returns the custom listener in use, if provided. | InfinispanRemoteCustomListener | |
eventTypes (consumer) | Specifies the set of event types to register by the consumer. Multiple events can be separated by comma. The possible event types are: CLIENT_CACHE_ENTRY_CREATED, CLIENT_CACHE_ENTRY_MODIFIED, CLIENT_CACHE_ENTRY_REMOVED, CLIENT_CACHE_ENTRY_EXPIRED, CLIENT_CACHE_FAILOVER. | String | |
defaultValue (producer) | Set a specific default value for some producer operations. | Object | |
key (producer) | Set a specific key for producer operations. | Object | |
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
oldValue (producer) | Set a specific old value for some producer operations. | Object | |
operation (producer) | The operation to perform. Enum values:
| PUT | InfinispanOperation |
value (producer) | Set a specific value for producer operations. | Object | |
password ( security) | Define the password to access the infinispan instance. | String | |
saslMechanism ( security) | Define the SASL Mechanism to access the infinispan instance. | String | |
securityRealm ( security) | Define the security realm to access the infinispan instance. | String | |
securityServerName ( security) | Define the security server name to access the infinispan instance. | String | |
username ( security) | Define the username to access the infinispan instance. | String | |
autowiredEnabled (advanced) | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | boolean |
cacheContainer (advanced) | Autowired Specifies the cache Container to connect. | RemoteCacheManager | |
cacheContainerConfiguration (advanced) | Autowired The CacheContainer configuration. Used if the cacheContainer is not defined. | Configuration | |
configurationProperties (advanced) | Implementation specific properties for the CacheManager. | Map | |
configurationUri (advanced) | An implementation specific URI for the CacheManager. | String | |
flags (advanced) | A comma separated list of org.infinispan.client.hotrod.Flag to be applied by default on each cache invocation. | String | |
remappingFunction (advanced) | Set a specific remappingFunction to use in a compute operation. | BiFunction | |
resultHeader (advanced) | Store the operation result in a header instead of the message body. By default, resultHeader == null and the query result is stored in the message body, any existing content in the message body is discarded. If resultHeader is set, the value is used as the name of the header to store the query result and the original message body is preserved. This value can be overridden by an in message header named: CamelInfinispanOperationResultHeader. | String |
26.4. Endpoint Options
The Infinispan endpoint is configured using URI syntax:
infinispan:cacheName
with the following path and query parameters:
26.4.1. Path Parameters (1 parameters)
Name | Description | Default | Type |
---|---|---|---|
cacheName (common) | Required The name of the cache to use. Use current to use the existing cache name from the currently configured cache manager. Or use default for the default cache manager name. | String |
26.4.2. Query Parameters (26 parameters)
Name | Description | Default | Type |
---|---|---|---|
hosts (common) | Specifies the host of the cache on Infinispan instance. | String | |
queryBuilder (common) | Specifies the query builder. | InfinispanQueryBuilder | |
secure (common) | Define if we are connecting to a secured Infinispan instance. | false | boolean |
bridgeErrorHandler (consumer) | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | boolean |
customListener (consumer) | Returns the custom listener in use, if provided. | InfinispanRemoteCustomListener | |
eventTypes (consumer) | Specifies the set of event types to register by the consumer. Multiple events can be separated by comma. The possible event types are: CLIENT_CACHE_ENTRY_CREATED, CLIENT_CACHE_ENTRY_MODIFIED, CLIENT_CACHE_ENTRY_REMOVED, CLIENT_CACHE_ENTRY_EXPIRED, CLIENT_CACHE_FAILOVER. | String | |
exceptionHandler (consumer (advanced)) | To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. | ExceptionHandler | |
exchangePattern (consumer (advanced)) | Sets the exchange pattern when the consumer creates an exchange. Enum values:
| ExchangePattern | |
defaultValue (producer) | Set a specific default value for some producer operations. | Object | |
key (producer) | Set a specific key for producer operations. | Object | |
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
oldValue (producer) | Set a specific old value for some producer operations. | Object | |
operation (producer) | The operation to perform. Enum values:
| PUT | InfinispanOperation |
value (producer) | Set a specific value for producer operations. | Object | |
password ( security) | Define the password to access the infinispan instance. | String | |
saslMechanism ( security) | Define the SASL Mechanism to access the infinispan instance. | String | |
securityRealm ( security) | Define the security realm to access the infinispan instance. | String | |
securityServerName ( security) | Define the security server name to access the infinispan instance. | String | |
username ( security) | Define the username to access the infinispan instance. | String | |
cacheContainer (advanced) | Autowired Specifies the cache Container to connect. | RemoteCacheManager | |
cacheContainerConfiguration (advanced) | Autowired The CacheContainer configuration. Used if the cacheContainer is not defined. | Configuration | |
configurationProperties (advanced) | Implementation specific properties for the CacheManager. | Map | |
configurationUri (advanced) | An implementation specific URI for the CacheManager. | String | |
flags (advanced) | A comma separated list of org.infinispan.client.hotrod.Flag to be applied by default on each cache invocation. | String | |
remappingFunction (advanced) | Set a specific remappingFunction to use in a compute operation. | BiFunction | |
resultHeader (advanced) | Store the operation result in a header instead of the message body. By default, resultHeader == null and the query result is stored in the message body, any existing content in the message body is discarded. If resultHeader is set, the value is used as the name of the header to store the query result and the original message body is preserved. This value can be overridden by an in message header named: CamelInfinispanOperationResultHeader. | String |
26.5. Camel Operations
This section lists all available operations, along with their header information.
Operation Name | Description |
---|---|
InfinispanOperation.PUT | Puts a key/value pair in the cache, optionally with expiration |
InfinispanOperation.PUTASYNC | Asynchronously puts a key/value pair in the cache, optionally with expiration |
InfinispanOperation.PUTIFABSENT | Puts a key/value pair in the cache if it did not exist, optionally with expiration |
InfinispanOperation.PUTIFABSENTASYNC | Asynchronously puts a key/value pair in the cache if it did not exist, optionally with expiration |
Required Headers:
- CamelInfinispanKey
- CamelInfinispanValue
Optional Headers:
- CamelInfinispanLifespanTime
- CamelInfinispanLifespanTimeUnit
- CamelInfinispanMaxIdleTime
- CamelInfinispanMaxIdleTimeUnit
Result Header:
- CamelInfinispanOperationResult
Operation Name | Description |
---|---|
InfinispanOperation.PUTALL | Adds multiple entries to a cache, optionally with expiration |
InfinispanOperation.PUTALLASYNC | Asynchronously adds multiple entries to a cache, optionally with expiration |
Required Headers:
- CamelInfinispanMap
Optional Headers:
- CamelInfinispanLifespanTime
- CamelInfinispanLifespanTimeUnit
- CamelInfinispanMaxIdleTime
- CamelInfinispanMaxIdleTimeUnit
Operation Name | Description |
---|---|
InfinispanOperation.GET | Retrieves the value associated with a specific key from the cache |
InfinispanOperation.GETORDEFAULT | Retrieves the value, or default value, associated with a specific key from the cache |
Required Headers:
- CamelInfinispanKey
Operation Name | Description |
---|---|
InfinispanOperation.CONTAINSKEY | Determines whether a cache contains a specific key |
Required Headers
- CamelInfinispanKey
Result Header
- CamelInfinispanOperationResult
Operation Name | Description |
---|---|
InfinispanOperation.CONTAINSVALUE | Determines whether a cache contains a specific value |
Required Headers:
- CamelInfinispanValue
Operation Name | Description |
---|---|
InfinispanOperation.REMOVE | Removes an entry from a cache, optionally only if the value matches a given one |
InfinispanOperation.REMOVEASYNC | Asynchronously removes an entry from a cache, optionally only if the value matches a given one |
Required Headers:
- CamelInfinispanKey
Optional Headers:
- CamelInfinispanValue
Result Header:
- CamelInfinispanOperationResult
Operation Name | Description |
---|---|
InfinispanOperation.REPLACE | Conditionally replaces an entry in the cache, optionally with expiration |
InfinispanOperation.REPLACEASYNC | Asynchronously conditionally replaces an entry in the cache, optionally with expiration |
Required Headers:
- CamelInfinispanKey
- CamelInfinispanValue
- CamelInfinispanOldValue
Optional Headers:
- CamelInfinispanLifespanTime
- CamelInfinispanLifespanTimeUnit
- CamelInfinispanMaxIdleTime
- CamelInfinispanMaxIdleTimeUnit
Result Header:
- CamelInfinispanOperationResult
Operation Name | Description |
---|---|
InfinispanOperation.CLEAR | Clears the cache |
InfinispanOperation.CLEARASYNC | Asynchronously clears the cache |
Operation Name | Description |
---|---|
InfinispanOperation.SIZE | Returns the number of entries in the cache |
Result Header
- CamelInfinispanOperationResult
Operation Name | Description |
---|---|
InfinispanOperation.STATS | Returns statistics about the cache |
Result Header:
- CamelInfinispanOperationResult
Operation Name | Description |
---|---|
InfinispanOperation.QUERY | Executes a query on the cache |
Required Headers:
- CamelInfinispanQueryBuilder
Result Header:
- CamelInfinispanOperationResult
Write methods like put(key, value) and remove(key) do not return the previous value by default.
26.6. Message Headers
Name | Default Value | Type | Context | Description |
---|---|---|---|---|
CamelInfinispanCacheName | | String | Shared | The cache participating in the operation or event. |
CamelInfinispanOperation | | InfinispanOperation | Producer | The operation to perform. |
CamelInfinispanMap | | Map | Producer | A Map to use in case of CamelInfinispanOperationPutAll operation |
CamelInfinispanKey | | Object | Shared | The key to perform the operation to or the key generating the event. |
CamelInfinispanValue | | Object | Producer | The value to use for the operation. |
CamelInfinispanEventType | | String | Consumer | The type of the received event. |
CamelInfinispanLifespanTime | | long | Producer | The Lifespan time of a value inside the cache. Negative values are interpreted as infinity. |
CamelInfinispanTimeUnit | | String | Producer | The Time Unit of an entry Lifespan Time. |
CamelInfinispanMaxIdleTime | | long | Producer | The maximum amount of time an entry is allowed to be idle for before it is considered as expired. |
CamelInfinispanMaxIdleTimeUnit | | String | Producer | The Time Unit of an entry Max Idle Time. |
CamelInfinispanQueryBuilder | null | InfinispanQueryBuilder | Producer | The QueryBuilder to use for the QUERY command, if not present the command defaults to InfinispanConfiguration’s one |
CamelInfinispanOperationResultHeader | null | String | Producer | Store the operation result in a header instead of the message body |
26.7. Examples
Put a key/value into a named cache:
from("direct:start") .setHeader(InfinispanConstants.OPERATION).constant(InfinispanOperation.PUT) (1) .setHeader(InfinispanConstants.KEY).constant("123") (2) .to("infinispan:myCacheName&cacheContainer=#cacheContainer"); (3)
Where,
- 1 - Set the operation to perform
- 2 - Set the key used to identify the element in the cache
- 3 - Use the configured cache manager cacheContainer from the registry to put an element to the cache named myCacheName
It is possible to configure the lifetime and/or the idle time before the entry expires and gets evicted from the cache, for example:
from("direct:start") .setHeader(InfinispanConstants.OPERATION).constant(InfinispanOperation.GET) .setHeader(InfinispanConstants.KEY).constant("123") .setHeader(InfinispanConstants.LIFESPAN_TIME).constant(100L) (1) .setHeader(InfinispanConstants.LIFESPAN_TIME_UNIT.constant(TimeUnit.MILLISECONDS.toString()) (2) .to("infinispan:myCacheName");
where,
- 1 - Set the lifespan of the entry
- 2 - Set the time unit for the lifespan
Queries
from("direct:start") .setHeader(InfinispanConstants.OPERATION, InfinispanConstants.QUERY) .setHeader(InfinispanConstants.QUERY_BUILDER, new InfinispanQueryBuilder() { @Override public Query build(QueryFactory<Query> qf) { return qf.from(User.class).having("name").like("%abc%").build(); } }) .to("infinispan:myCacheName?cacheContainer=#cacheManager") ;
The .proto descriptors for domain objects must be registered with the remote Data Grid server, see Remote Query Example in the official Infinispan documentation.
Custom Listeners
from("infinispan://?cacheContainer=#cacheManager&customListener=#myCustomListener") .to("mock:result");
The instance of myCustomListener must exist and Camel should be able to look it up from the Registry. Users are encouraged to extend the org.apache.camel.component.infinispan.remote.InfinispanRemoteCustomListener class and annotate the resulting class with @ClientListener, which can be found in the package org.infinispan.client.hotrod.annotation.
26.8. Using the Infinispan based idempotent repository
In this section we will use the Infinispan based idempotent repository.
Java Example
InfinispanRemoteConfiguration conf = new InfinispanRemoteConfiguration(); (1)
conf.setHosts("localhost:11222");

InfinispanRemoteIdempotentRepository repo = new InfinispanRemoteIdempotentRepository("idempotent"); (2)
repo.setConfiguration(conf);

context.addRoutes(new RouteBuilder() {
    @Override
    public void configure() {
        from("direct:start")
            .idempotentConsumer(header("MessageID"), repo) (3)
            .to("mock:result");
    }
});
where,
- 1 - Configure the cache
- 2 - Configure the repository bean
- 3 - Set the repository to the route
XML Example
<bean id="infinispanRepo" class="org.apache.camel.component.infinispan.remote.InfinispanRemoteIdempotentRepository" destroy-method="stop"> <constructor-arg value="idempotent"/> (1) <property name="configuration"> (2) <bean class="org.apache.camel.component.infinispan.remote.InfinispanRemoteConfiguration"> <property name="hosts" value="localhost:11222"/> </bean> </property> </bean> <camelContext xmlns="http://camel.apache.org/schema/spring"> <route> <from uri="direct:start" /> <idempotentConsumer messageIdRepositoryRef="infinispanRepo"> (3) <header>MessageID</header> <to uri="mock:result" /> </idempotentConsumer> </route> </camelContext>
where,
- 1 - Set the name of the cache that will be used by the repository
- 2 - Configure the repository bean
- 3 - Set the repository to the route
26.9. Using the Infinispan based aggregation repository
In this section we will use the Infinispan based aggregation repository.
Java Example
InfinispanRemoteConfiguration conf = new InfinispanRemoteConfiguration(); (1)
conf.setHosts("localhost:11222");

InfinispanRemoteAggregationRepository repo = new InfinispanRemoteAggregationRepository(); (2)
repo.setCacheName("aggregation");
repo.setConfiguration(conf);

context.addRoutes(new RouteBuilder() {
    @Override
    public void configure() {
        from("direct:start")
            .aggregate(header("MessageID"))
                .completionSize(3)
                .aggregationRepository(repo) (3)
                .aggregationStrategyRef("myStrategy")
                .to("mock:result");
    }
});
where,
- 1 - Configure the cache
- 2 - Create the repository bean
- 3 - Set the repository to the route
XML Example
<bean id="infinispanRepo" class="org.apache.camel.component.infinispan.remote.InfinispanRemoteAggregationRepository" destroy-method="stop"> <constructor-arg value="aggregation"/> (1) <property name="configuration"> (2) <bean class="org.apache.camel.component.infinispan.remote.InfinispanRemoteConfiguration"> <property name="hosts" value="localhost:11222"/> </bean> </property> </bean> <camelContext xmlns="http://camel.apache.org/schema/spring"> <route> <from uri="direct:start" /> <aggregate strategyRef="myStrategy" completionSize="3" aggregationRepositoryRef="infinispanRepo"> (3) <correlationExpression> <header>MessageID</header> </correlationExpression> <to uri="mock:result"/> </aggregate> </route> </camelContext>
where,
- 1 - Set the name of the cache that will be used by the repository
- 2 - Configure the repository bean
- 3 - Set the repository to the route
With the release of Infinispan 11, it is required to set the encoding configuration on any cache created. This is critical for consuming events too. For more information have a look at Data Encoding and MediaTypes in the official Infinispan documentation.
26.10. Spring Boot Auto-Configuration
When using infinispan with Spring Boot make sure to use the following Maven dependency to have support for auto configuration:
<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-infinispan-starter</artifactId> </dependency>
The component supports 23 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
camel.component.infinispan.autowired-enabled | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | Boolean |
camel.component.infinispan.bridge-error-handler | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | Boolean |
camel.component.infinispan.cache-container | Specifies the cache Container to connect. The option is a org.infinispan.client.hotrod.RemoteCacheManager type. | RemoteCacheManager | |
camel.component.infinispan.cache-container-configuration | The CacheContainer configuration. Used if the cacheContainer is not defined. The option is a org.infinispan.client.hotrod.configuration.Configuration type. | Configuration | |
camel.component.infinispan.configuration | Component configuration. The option is a org.apache.camel.component.infinispan.remote.InfinispanRemoteConfiguration type. | InfinispanRemoteConfiguration | |
camel.component.infinispan.configuration-properties | Implementation specific properties for the CacheManager. | Map | |
camel.component.infinispan.configuration-uri | An implementation specific URI for the CacheManager. | String | |
camel.component.infinispan.custom-listener | Returns the custom listener in use, if provided. The option is a org.apache.camel.component.infinispan.remote.InfinispanRemoteCustomListener type. | InfinispanRemoteCustomListener | |
camel.component.infinispan.enabled | Whether to enable auto configuration of the infinispan component. This is enabled by default. | Boolean | |
camel.component.infinispan.event-types | Specifies the set of event types to register by the consumer. Multiple events can be separated by comma. The possible event types are: CLIENT_CACHE_ENTRY_CREATED, CLIENT_CACHE_ENTRY_MODIFIED, CLIENT_CACHE_ENTRY_REMOVED, CLIENT_CACHE_ENTRY_EXPIRED, CLIENT_CACHE_FAILOVER. | String | |
camel.component.infinispan.flags | A comma separated list of org.infinispan.client.hotrod.Flag to be applied by default on each cache invocation. | String | |
camel.component.infinispan.hosts | Specifies the host of the cache on Infinispan instance. | String | |
camel.component.infinispan.lazy-start-producer | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | Boolean |
camel.component.infinispan.operation | The operation to perform. | InfinispanOperation | |
camel.component.infinispan.password | Define the password to access the infinispan instance. | String | |
camel.component.infinispan.query-builder | Specifies the query builder. The option is a org.apache.camel.component.infinispan.InfinispanQueryBuilder type. | InfinispanQueryBuilder | |
camel.component.infinispan.remapping-function | Set a specific remappingFunction to use in a compute operation. The option is a java.util.function.BiFunction type. | BiFunction | |
camel.component.infinispan.result-header | Store the operation result in a header instead of the message body. By default, resultHeader == null and the query result is stored in the message body, any existing content in the message body is discarded. If resultHeader is set, the value is used as the name of the header to store the query result and the original message body is preserved. This value can be overridden by an in message header named: CamelInfinispanOperationResultHeader. | String | |
camel.component.infinispan.sasl-mechanism | Define the SASL Mechanism to access the infinispan instance. | String | |
camel.component.infinispan.secure | Define if we are connecting to a secured Infinispan instance. | false | Boolean |
camel.component.infinispan.security-realm | Define the security realm to access the infinispan instance. | String | |
camel.component.infinispan.security-server-name | Define the security server name to access the infinispan instance. | String | |
camel.component.infinispan.username | Define the username to access the infinispan instance. | String |
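As an illustration, a basic remote connection could be configured in application.properties (the values are placeholders):

camel.component.infinispan.hosts = localhost:11222
camel.component.infinispan.username = admin
camel.component.infinispan.password = changeme
camel.component.infinispan.secure = true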
Chapter 27. Jira
Both producer and consumer are supported
The JIRA component interacts with the JIRA API by encapsulating Atlassian’s REST Java Client for JIRA. It currently provides polling for new issues and new comments. It is also able to create new issues, add comments, change issues, add/remove watchers, add attachments and transition the state of an issue.
Rather than webhooks, this endpoint relies on simple polling. Reasons include:
- Concern for reliability/stability
- The types of payloads we’re polling aren’t typically large (plus, paging is available in the API)
- The need to support apps running somewhere not publicly accessible where a webhook would fail
Note that the JIRA API is fairly expansive. Therefore, this component could be easily expanded to provide additional interactions.
Maven users will need to add the following dependency to their pom.xml for this component:
<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-jira</artifactId> <version>${camel-version}</version> </dependency>
27.1. URI format
jira://type[?options]
The Jira type accepts the following operations:
For consumers:
- newIssues: retrieve only new issues after the route is started
- newComments: retrieve only new comments after the route is started
- watchUpdates: retrieve only updated fields/issues based on provided jql
For producers:
- addIssue: add an issue
- addComment: add a comment on a given issue
- attach: add an attachment on a given issue
- deleteIssue: delete a given issue
- updateIssue: update fields of a given issue
- transitionIssue: transition a status of a given issue
- watchers: add/remove watchers of a given issue
As Jira is fully customizable, you must ensure that the field IDs exist for the project and workflow, as they can change between different Jira servers.
27.2. Configuring Options
Camel components are configured on two separate levels:
- component level
- endpoint level
27.2.1. Configuring Component Options
The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth.
Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all.
Configuring components can be done with the Component DSL, in a configuration file (application.properties|yaml), or directly with Java code.
27.2.2. Configuring Endpoint Options
Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both.
Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints.
A good practice when configuring options is to use Property Placeholders, which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse.
The following two sections lists all the options, firstly for the component followed by the endpoint.
27.3. Component Options
The Jira component supports 12 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
delay (common) | Time in milliseconds to elapse for the next poll. | 6000 | Integer |
jiraUrl (common) | Required The Jira server url, example: http://my_jira.com:8081/. | String | |
bridgeErrorHandler (consumer) | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | boolean |
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
autowiredEnabled (advanced) | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | boolean |
configuration (advanced) | To use a shared base jira configuration. | JiraConfiguration | |
accessToken (security) | (OAuth only) The access token generated by the Jira server. | String | |
consumerKey (security) | (OAuth only) The consumer key from Jira settings. | String | |
password (security) | (Basic authentication only) The password to authenticate to the Jira server. Use only if username basic authentication is used. | String | |
privateKey (security) | (OAuth only) The private key generated by the client to encrypt the conversation to the server. | String | |
username (security) | (Basic authentication only) The username to authenticate to the Jira server. Use only if OAuth is not enabled on the Jira server. Do not set the username and OAuth token parameter, if they are both set, the username basic authentication takes precedence. | String | |
verificationCode (security) | (OAuth only) The verification code from Jira generated in the first step of the authorization process. | String |
27.4. Endpoint Options
The Jira endpoint is configured using URI syntax:
jira:type
with the following path and query parameters:
27.4.1. Path Parameters (1 parameters)
Name | Description | Default | Type |
---|---|---|---|
type (common) | Required Operation to perform. Consumers: NewIssues, NewComments. Producers: AddIssue, AttachFile, DeleteIssue, TransitionIssue, UpdateIssue, Watchers. See this class javadoc description for more information. Enum values:
| JiraType |
27.4.2. Query Parameters (16 parameters)
Name | Description | Default | Type |
---|---|---|---|
delay (common) | Time in milliseconds to elapse for the next poll. | 6000 | Integer |
jiraUrl (common) | Required The Jira server url, example: http://my_jira.com:8081/. | String | |
bridgeErrorHandler (consumer) | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | boolean |
jql (consumer) | JQL is the query language from JIRA which allows you to retrieve the data you want. For example jql=project=MyProject Where MyProject is the product key in Jira. It is important to use the RAW() and set the JQL inside it to prevent camel parsing it, example: RAW(project in (MYP, COM) AND resolution = Unresolved). | String | |
maxResults (consumer) | Max number of issues to search for. | 50 | Integer |
sendOnlyUpdatedField (consumer) | Indicator for sending only changed fields in exchange body or issue object. By default consumer sends only changed fields. | true | boolean |
watchedFields (consumer) | Comma separated list of fields to watch for changes. Status,Priority are the defaults. | Status,Priority | String |
exceptionHandler (consumer (advanced)) | To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. | ExceptionHandler | |
exchangePattern (consumer (advanced)) | Sets the exchange pattern when the consumer creates an exchange. Enum values:
| ExchangePattern | |
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
accessToken (security) | (OAuth only) The access token generated by the Jira server. | String | |
consumerKey (security) | (OAuth only) The consumer key from Jira settings. | String | |
password (security) | (Basic authentication only) The password to authenticate to the Jira server. Use only if username basic authentication is used. | String | |
privateKey (security) | (OAuth only) The private key generated by the client to encrypt the conversation to the server. | String | |
username (security) | (Basic authentication only) The username to authenticate to the Jira server. Use only if OAuth is not enabled on the Jira server. Do not set the username and OAuth token parameter, if they are both set, the username basic authentication takes precedence. | String | |
verificationCode (security) | (OAuth only) The verification code from Jira generated in the first step of the authorization process. | String |
27.5. Client Factory
You can bind the JiraRestClientFactory with name JiraRestClientFactory in the registry to have it automatically set in the Jira endpoint.
27.6. Authentication
camel-jira supports basic authentication and 3-legged OAuth authentication.
We recommend using OAuth whenever possible, as it provides the best security for your users and system.
27.6.1. Basic authentication requirements:
- A username and password
27.6.2. OAuth authentication requirements:
Follow the tutorial in Jira OAuth documentation to generate the client private key, consumer key, verification code and access token.
- a private key, generated locally on your system.
- A verification code, generated by Jira server.
- The consumer key, set in the Jira server settings.
- An access token, generated by Jira server.
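Once generated, these values can be supplied to the component, for example through Spring Boot configuration (all values below are placeholders):

camel.component.jira.jira-url = https://jira.example.com
camel.component.jira.consumer-key = my-consumer-key
camel.component.jira.private-key = my-private-key
camel.component.jira.verification-code = my-verification-code
camel.component.jira.access-token = my-access-token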
27.7. JQL
The JQL URI option is used by both consumer endpoints. Theoretically, items like "project key", etc. could be URI options themselves. However, by requiring the use of JQL, the consumers become much more flexible and powerful.
At the bare minimum, the consumers will require the following:
jira://[type]?[required options]&jql=project=[project key]
One important thing to note is that the newIssues consumer will automatically adjust the JQL as follows:
- append ORDER BY key desc to your JQL
- prepend id > latestIssueId to retrieve issues added after the Camel route was started
This is in order to optimize startup processing, rather than having to index every single issue in the project.
Another note is that, similarly, the newComments consumer will have to index every single issue and comment in the project. Therefore, for large projects, it’s vital to optimize the JQL expression as much as possible. For example, the JIRA Toolkit Plugin includes a "Number of comments" custom field — use '"Number of comments" > 0' in your query. Also try to minimize based on state (status=Open), increase the polling delay, etc. Example:
jira://[type]?[required options]&jql=RAW(project=[project key] AND status in (Open, \"Coding In Progress\") AND \"Number of comments\">0)
27.8. Operations
See the list of required headers to set when using the Jira operations. The author field for the producers is automatically set to the authenticated user on the Jira side.
If any required field is not set, an IllegalArgumentException is thrown.
There are operations that require an id for fields such as issue type, priority, and transition. Check the valid id values on your Jira project, as they may differ between Jira installations and project workflows.
27.9. AddIssue
Required:
- ProjectKey: The project key, example: CAMEL, HHH, MYP.
- IssueTypeId or IssueTypeName: The id of the issue type or the name of the issue type, you can see the valid list in http://jira_server/rest/api/2/issue/createmeta?projectKeys=SAMPLE_KEY.
- IssueSummary: The summary of the issue.
Optional:
- IssueAssignee: The assignee user.
- IssuePriorityId or IssuePriorityName: The priority of the issue, you can see the valid list in http://jira_server/rest/api/2/priority.
- IssueComponents: A list of strings with the valid component names.
- IssueWatchersAdd: A list of strings with the usernames to add to the watcher list.
- IssueDescription: The description of the issue.
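A minimal route sketch that creates an issue; the Jira URL, credentials and field values below are placeholders:

from("direct:new-issue")
    .setHeader("ProjectKey", constant("MYP"))
    .setHeader("IssueTypeName", constant("Task"))
    .setHeader("IssueSummary", constant("Issue created from a Camel route"))
    .to("jira://addIssue?jiraUrl=https://jira.example.com&username=myuser&password=mypassword");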
27.10. AddComment
Required:
- IssueKey: The issue key identifier.
- The body of the exchange is the description.
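A minimal route sketch that adds a comment; again the URL, credentials and issue key are placeholders:

from("direct:comment")
    .setHeader("IssueKey", constant("MYP-123"))
    .setBody(constant("Comment added from a Camel route"))
    .to("jira://addComment?jiraUrl=https://jira.example.com&username=myuser&password=mypassword");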
27.11. Attach
Only one file should be attached per invocation.
Required:
- IssueKey: The issue key identifier.
- The body of the exchange should be of type File.
27.12. DeleteIssue
Required:
- IssueKey: The issue key identifier.
27.13. TransitionIssue
Required:
- IssueKey: The issue key identifier.
- IssueTransitionId: The issue transition id.
- The body of the exchange is the description.
27.14. UpdateIssue
- IssueKey: The issue key identifier.
- IssueTypeId or IssueTypeName: The id of the issue type or the name of the issue type, you can see the valid list in http://jira_server/rest/api/2/issue/createmeta?projectKeys=SAMPLE_KEY.
- IssueSummary: The summary of the issue.
- IssueAssignee: The assignee user.
- IssuePriorityId or IssuePriorityName: The priority of the issue, you can see the valid list in http://jira_server/rest/api/2/priority.
- IssueComponents: A list of strings with the valid component names.
- IssueDescription: The description of the issue.
27.15. Watcher
- IssueKey: The issue key identifier.
- IssueWatchersAdd: A list of strings with the usernames to add to the watcher list.
- IssueWatchersRemove: A list of strings with the usernames to remove from the watcher list.
27.16. WatchUpdates (consumer)
- watchedFields: Comma separated list of fields to watch for changes, i.e. Status,Priority,Assignee,Components etc.
- sendOnlyUpdatedField: By default only the changed field is sent as the body.
All messages also contain following headers that add additional info about the change:
- issueKey: Key of the updated issue
- changed: Name of the updated field (i.e. Status)
- watchedIssues: List of all issue keys that are watched at the time of the update
27.17. Spring Boot Auto-Configuration
When using jira with Spring Boot make sure to use the following Maven dependency to have support for auto configuration:
<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-jira-starter</artifactId> </dependency>
The component supports 13 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
camel.component.jira.access-token | (OAuth only) The access token generated by the Jira server. | String | |
camel.component.jira.autowired-enabled | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | Boolean |
camel.component.jira.bridge-error-handler | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | Boolean |
camel.component.jira.configuration | To use a shared base jira configuration. The option is a org.apache.camel.component.jira.JiraConfiguration type. | JiraConfiguration | |
camel.component.jira.consumer-key | (OAuth only) The consumer key from Jira settings. | String | |
camel.component.jira.delay | Time in milliseconds to elapse for the next poll. | 6000 | Integer |
camel.component.jira.enabled | Whether to enable auto configuration of the jira component. This is enabled by default. | Boolean | |
camel.component.jira.jira-url | The Jira server url, example: http://my_jira.com:8081/. | String | |
camel.component.jira.lazy-start-producer | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | Boolean |
camel.component.jira.password | (Basic authentication only) The password to authenticate to the Jira server. Use only if username basic authentication is used. | String | |
camel.component.jira.private-key | (OAuth only) The private key generated by the client to encrypt the conversation to the server. | String | |
camel.component.jira.username | (Basic authentication only) The username to authenticate to the Jira server. Use only if OAuth is not enabled on the Jira server. Do not set the username and OAuth token parameter, if they are both set, the username basic authentication takes precedence. | String | |
camel.component.jira.verification-code | (OAuth only) The verification code from Jira generated in the first step of the authorization process. | String |
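For example, a basic-authentication setup could be configured in application.properties using the keys from the table above (the server URL and credentials are placeholders):
camel.component.jira.jira-url = http://my_jira.com:8081/
camel.component.jira.username = camel-bot
camel.component.jira.password = changeit
camel.component.jira.delay = 6000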
Chapter 28. JMS
Both producer and consumer are supported
This component allows messages to be sent to (or consumed from) a JMS Queue or Topic. It uses Spring’s JMS support for declarative transactions, including Spring’s JmsTemplate
for sending and a MessageListenerContainer
for consuming.
Maven users will need to add the following dependency to their pom.xml
for this component:
<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-jms</artifactId> <version>${camel.version}</version> <!-- use the same version as your Camel core version --> </dependency>
Using ActiveMQ
If you are using Apache ActiveMQ, you should prefer the ActiveMQ component as it has been optimized for ActiveMQ. All of the options and samples on this page are also valid for the ActiveMQ component.
Transacted and caching
See section Transactions and Cache Levels below if you are using transactions with JMS as it can impact performance.
Request/Reply over JMS
Make sure to read the section Request-reply over JMS further below on this page for important notes about request/reply, as Camel offers a number of options to configure for performance, and clustered environments.
28.1. URI format
jms:[queue:|topic:]destinationName[?options]
Where destinationName
is a JMS queue or topic name. By default, the destinationName
is interpreted as a queue name. For example, to connect to the queue, FOO.BAR
use:
jms:FOO.BAR
You can include the optional queue:
prefix, if you prefer:
jms:queue:FOO.BAR
To connect to a topic, you must include the topic:
prefix. For example, to connect to the topic, Stocks.Prices
, use:
jms:topic:Stocks.Prices
You append query options to the URI by using the following format,
?option=value&option=value&…
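For example, inside a RouteBuilder configure() method, a consumer on the FOO.BAR queue that sets two of the options documented in the tables below might look like this sketch (the option values and the log endpoint are arbitrary):
from("jms:queue:FOO.BAR?concurrentConsumers=5&disableReplyTo=true")
    .to("log:jms.received");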
28.1.1. Using ActiveMQ
The JMS component reuses Spring's JmsTemplate for sending messages. This is not ideal for use in a non-J2EE container and typically requires some caching in the JMS provider to avoid poor performance.
If you intend to use Apache ActiveMQ as your message broker, the recommendation is that you do one of the following:
- Use the ActiveMQ component, which is already optimized to use ActiveMQ efficiently
- Use a pooled connection factory, such as ActiveMQ's PooledConnectionFactory.
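A minimal Spring Boot sketch of the second approach, assuming ActiveMQ 5.x, where the pooled factory class is org.apache.activemq.pool.PooledConnectionFactory (from the activemq-pool module). The broker URL and pool size are placeholders; because only one ConnectionFactory bean is registered, the JMS component can discover it automatically (see allowAutoWiredConnectionFactory below).
import javax.jms.ConnectionFactory;

import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.pool.PooledConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class JmsConnectionFactoryConfiguration {

    @Bean
    public ConnectionFactory jmsConnectionFactory() {
        // Placeholder broker URL; adjust to your environment.
        PooledConnectionFactory pooled = new PooledConnectionFactory();
        pooled.setConnectionFactory(new ActiveMQConnectionFactory("tcp://localhost:61616"));
        pooled.setMaxConnections(8);
        return pooled;
    }
}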
28.1.2. Transactions and Cache Levels
If you are consuming messages and using transactions (transacted=true
) then the default settings for cache level can impact performance.
If you are using XA transactions then you cannot cache as it can cause the XA transaction to not work properly.
If you are not using XA, then you should consider caching as it speeds up performance, such as setting cacheLevelName=CACHE_CONSUMER
.
The default setting for cacheLevelName is CACHE_AUTO. This default auto-detects the mode and sets the cache level accordingly:
- CACHE_CONSUMER if transacted=false
- CACHE_NONE if transacted=true
So you can say the default setting is conservative. Consider using cacheLevelName=CACHE_CONSUMER
if you are using non-XA transactions.
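For example, inside a RouteBuilder configure() method, a transacted (non-XA) consumer that caches the consumer could be declared as follows; the queue name and target bean are placeholders:
from("jms:queue:inbox?transacted=true&cacheLevelName=CACHE_CONSUMER")
    .to("bean:orderService");   // placeholder processing bean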
28.1.3. Durable Subscriptions
If you wish to use durable topic subscriptions, you need to specify both clientId
and durableSubscriptionName
. The value of the clientId
must be unique and can only be used by a single JMS connection instance in your entire network. You may prefer to use Virtual Topics instead to avoid this limitation. More background on durable messaging here.
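A durable topic subscription sketch, reusing the Stocks.Prices topic from the URI examples above; the clientId and subscription name are placeholders and, as described, the clientId must be unique:
from("jms:topic:Stocks.Prices?clientId=priceWatcher1&durableSubscriptionName=stockPrices")
    .to("log:prices");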
28.1.4. Message Header Mapping
When using message headers, the JMS specification states that header names must be valid Java identifiers. So try to name your headers to be valid Java identifiers. One benefit of doing this is that you can then use your headers inside a JMS Selector (whose SQL92 syntax mandates Java identifier syntax for headers).
A simple strategy for mapping header names is used by default. The strategy is to replace any dots and hyphens in the header name as shown below and to reverse the replacement when the header name is restored from a JMS message sent over the wire. What does this mean? No more losing method names to invoke on a bean component, no more losing the filename header for the File Component, and so on.
The current header name strategy for accepting header names in Camel is as follows:
- Dots are replaced by `_DOT_` and the replacement is reversed when Camel consumes the message
- Hyphens are replaced by `_HYPHEN_` and the replacement is reversed when Camel consumes the message (see the sketch below this list)
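To illustrate, the sketch below sets a header whose name contains dots before sending the message to JMS; the header name and queue are placeholders. The default strategy encodes the name on the wire and restores it when the message is consumed, so the header is not lost:
from("direct:send")
    .setHeader("my.custom.header", constant("42"))   // dots are encoded for JMS and restored on the consumer side
    .to("jms:queue:FOO.BAR");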
You can configure many different properties on the JMS endpoint, which map to properties on the JMSConfiguration
object.
Mapping to Spring JMS
Many of these properties map to properties on Spring JMS, which Camel uses for sending and receiving messages. So you can get more information about these properties by consulting the relevant Spring documentation.
28.2. Configuring Options
Camel components are configured on two separate levels:
- component level
- endpoint level
28.2.1. Configuring Component Options
The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth.
Some components have only a few options, while others may have many. Because components typically have pre-configured defaults that are commonly used, you often only need to configure a few options on a component, or none at all.
Configuring components can be done with the Component DSL, in a configuration file (application.properties|yaml), or directly with Java code.
28.2.2. Configuring Endpoint Options
Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both.
Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints.
A good practice when configuring options is to use Property Placeholders, which allow you to avoid hardcoding URLs, port numbers, sensitive information, and other settings. In other words, placeholders let you externalize the configuration from your code, giving you more flexibility and reuse.
The following two sections list all the options, first for the component and then for the endpoint.
28.3. Component Options
The JMS component supports 98 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
clientId (common) | Sets the JMS client ID to use. Note that this value, if specified, must be unique and can only be used by a single JMS connection instance. It is typically only required for durable topic subscriptions. If using Apache ActiveMQ you may prefer to use Virtual Topics instead. | String | |
connectionFactory (common) | The connection factory to be use. A connection factory must be configured either on the component or endpoint. | ConnectionFactory | |
disableReplyTo (common) | Specifies whether Camel ignores the JMSReplyTo header in messages. If true, Camel does not send a reply back to the destination specified in the JMSReplyTo header. You can use this option if you want Camel to consume from a route and you do not want Camel to automatically send back a reply message because another component in your code handles the reply message. You can also use this option if you want to use Camel as a proxy between different message brokers and you want to route message from one system to another. | false | boolean |
durableSubscriptionName (common) | The durable subscriber name for specifying durable topic subscriptions. The clientId option must be configured as well. | String | |
jmsMessageType (common) | Allows you to force the use of a specific javax.jms.Message implementation for sending JMS messages. Possible values are: Bytes, Map, Object, Stream, Text. By default, Camel would determine which JMS message type to use from the In body type. This option allows you to specify it. Enum values:
| JmsMessageType | |
replyTo (common) | Provides an explicit ReplyTo destination (overrides any incoming value of Message.getJMSReplyTo() in consumer). | String | |
testConnectionOnStartup (common) | Specifies whether to test the connection on startup. This ensures that when Camel starts that all the JMS consumers have a valid connection to the JMS broker. If a connection cannot be granted then Camel throws an exception on startup. This ensures that Camel is not started with failed connections. The JMS producers is tested as well. | false | boolean |
acknowledgementModeName (consumer) | The JMS acknowledgement name, which is one of: SESSION_TRANSACTED, CLIENT_ACKNOWLEDGE, AUTO_ACKNOWLEDGE, DUPS_OK_ACKNOWLEDGE. Enum values:
| AUTO_ACKNOWLEDGE | String |
artemisConsumerPriority (consumer) | Consumer priorities allow you to ensure that high priority consumers receive messages while they are active. Normally, active consumers connected to a queue receive messages from it in a round-robin fashion. When consumer priorities are in use, messages are delivered round-robin if multiple active consumers exist with the same high priority. Messages will only going to lower priority consumers when the high priority consumers do not have credit available to consume the message, or those high priority consumers have declined to accept the message (for instance because it does not meet the criteria of any selectors associated with the consumer). | int | |
asyncConsumer (consumer) | Whether the JmsConsumer processes the Exchange asynchronously. If enabled then the JmsConsumer may pickup the next message from the JMS queue, while the previous message is being processed asynchronously (by the Asynchronous Routing Engine). This means that messages may be processed not 100% strictly in order. If disabled (as default) then the Exchange is fully processed before the JmsConsumer will pickup the next message from the JMS queue. Note if transacted has been enabled, then asyncConsumer=true does not run asynchronously, as transaction must be executed synchronously (Camel 3.0 may support async transactions). | false | boolean |
autoStartup (consumer) | Specifies whether the consumer container should auto-startup. | true | boolean |
cacheLevel (consumer) | Sets the cache level by ID for the underlying JMS resources. See cacheLevelName option for more details. | int | |
cacheLevelName (consumer) | Sets the cache level by name for the underlying JMS resources. Possible values are: CACHE_AUTO, CACHE_CONNECTION, CACHE_CONSUMER, CACHE_NONE, and CACHE_SESSION. The default setting is CACHE_AUTO. See the Spring documentation and Transactions Cache Levels for more information. Enum values:
| CACHE_AUTO | String |
concurrentConsumers (consumer) | Specifies the default number of concurrent consumers when consuming from JMS (not for request/reply over JMS). See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. When doing request/reply over JMS then the option replyToConcurrentConsumers is used to control number of concurrent consumers on the reply message listener. | 1 | int |
maxConcurrentConsumers (consumer) | Specifies the maximum number of concurrent consumers when consuming from JMS (not for request/reply over JMS). See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. When doing request/reply over JMS then the option replyToMaxConcurrentConsumers is used to control number of concurrent consumers on the reply message listener. | int | |
replyToDeliveryPersistent (consumer) | Specifies whether to use persistent delivery by default for replies. | true | boolean |
selector (consumer) | Sets the JMS selector to use. | String | |
subscriptionDurable (consumer) | Set whether to make the subscription durable. The durable subscription name to be used can be specified through the subscriptionName property. Default is false. Set this to true to register a durable subscription, typically in combination with a subscriptionName value (unless your message listener class name is good enough as subscription name). Only makes sense when listening to a topic (pub-sub domain), therefore this method switches the pubSubDomain flag as well. | false | boolean |
subscriptionName (consumer) | Set the name of a subscription to create. To be applied in case of a topic (pub-sub domain) with a shared or durable subscription. The subscription name needs to be unique within this client’s JMS client id. Default is the class name of the specified message listener. Note: Only 1 concurrent consumer (which is the default of this message listener container) is allowed for each subscription, except for a shared subscription (which requires JMS 2.0). | String | |
subscriptionShared (consumer) | Set whether to make the subscription shared. The shared subscription name to be used can be specified through the subscriptionName property. Default is false. Set this to true to register a shared subscription, typically in combination with a subscriptionName value (unless your message listener class name is good enough as subscription name). Note that shared subscriptions may also be durable, so this flag can (and often will) be combined with subscriptionDurable as well. Only makes sense when listening to a topic (pub-sub domain), therefore this method switches the pubSubDomain flag as well. Requires a JMS 2.0 compatible message broker. | false | boolean |
acceptMessagesWhileStopping (consumer (advanced)) | Specifies whether the consumer accept messages while it is stopping. You may consider enabling this option, if you start and stop JMS routes at runtime, while there are still messages enqueued on the queue. If this option is false, and you stop the JMS route, then messages may be rejected, and the JMS broker would have to attempt redeliveries, which yet again may be rejected, and eventually the message may be moved at a dead letter queue on the JMS broker. To avoid this its recommended to enable this option. | false | boolean |
allowReplyManagerQuickStop (consumer (advanced)) | Whether the DefaultMessageListenerContainer used in the reply managers for request-reply messaging allow the DefaultMessageListenerContainer.runningAllowed flag to quick stop in case JmsConfiguration#isAcceptMessagesWhileStopping is enabled, and org.apache.camel.CamelContext is currently being stopped. This quick stop ability is enabled by default in the regular JMS consumers but to enable for reply managers you must enable this flag. | false | boolean |
consumerType (consumer (advanced)) | The consumer type to use, which can be one of: Simple, Default, or Custom. The consumer type determines which Spring JMS listener to use. Default will use org.springframework.jms.listener.DefaultMessageListenerContainer, Simple will use org.springframework.jms.listener.SimpleMessageListenerContainer. When Custom is specified, the MessageListenerContainerFactory defined by the messageListenerContainerFactory option will determine what org.springframework.jms.listener.AbstractMessageListenerContainer to use. Enum values:
| Default | ConsumerType |
defaultTaskExecutorType (consumer (advanced)) | Specifies what default TaskExecutor type to use in the DefaultMessageListenerContainer, for both consumer endpoints and the ReplyTo consumer of producer endpoints. Possible values: SimpleAsync (uses Spring’s SimpleAsyncTaskExecutor) or ThreadPool (uses Spring’s ThreadPoolTaskExecutor with optimal values - cached threadpool-like). If not set, it defaults to the previous behaviour, which uses a cached thread pool for consumer endpoints and SimpleAsync for reply consumers. The use of ThreadPool is recommended to reduce thread trash in elastic configurations with dynamically increasing and decreasing concurrent consumers. Enum values:
| DefaultTaskExecutorType | |
eagerLoadingOfProperties (consumer (advanced)) | Enables eager loading of JMS properties and payload as soon as a message is loaded which generally is inefficient as the JMS properties may not be required but sometimes can catch early any issues with the underlying JMS provider and the use of JMS properties. See also the option eagerPoisonBody. | false | boolean |
eagerPoisonBody (consumer (advanced)) | If eagerLoadingOfProperties is enabled and the JMS message payload (JMS body or JMS properties) is poison (cannot be read/mapped), then set this text as the message body instead so the message can be processed (the cause of the poison is already stored as an exception on the Exchange). This can be turned off by setting eagerPoisonBody=false. See also the option eagerLoadingOfProperties. | Poison JMS message due to ${exception.message} | String |
exposeListenerSession (consumer (advanced)) | Specifies whether the listener session should be exposed when consuming messages. | false | boolean |
replyToSameDestinationAllowed (consumer (advanced)) | Whether a JMS consumer is allowed to send a reply message to the same destination that the consumer is using to consume from. This prevents an endless loop by consuming and sending back the same message to itself. | false | boolean |
taskExecutor (consumer (advanced)) | Allows you to specify a custom task executor for consuming messages. | TaskExecutor | |
deliveryDelay (producer) | Sets delivery delay to use for send calls for JMS. This option requires JMS 2.0 compliant broker. | -1 | long |
deliveryMode (producer) | Specifies the delivery mode to be used. Possible values are those defined by javax.jms.DeliveryMode. NON_PERSISTENT = 1 and PERSISTENT = 2. Enum values:
| Integer | |
deliveryPersistent (producer) | Specifies whether persistent delivery is used by default. | true | boolean |
explicitQosEnabled (producer) | Set if the deliveryMode, priority or timeToLive qualities of service should be used when sending messages. This option is based on Spring’s JmsTemplate. The deliveryMode, priority and timeToLive options are applied to the current endpoint. This contrasts with the preserveMessageQos option, which operates at message granularity, reading QoS properties exclusively from the Camel In message headers. | false | Boolean |
formatDateHeadersToIso8601 (producer) | Sets whether JMS date properties should be formatted according to the ISO 8601 standard. | false | boolean |
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
preserveMessageQos (producer) | Set to true, if you want to send message using the QoS settings specified on the message, instead of the QoS settings on the JMS endpoint. The following three headers are considered JMSPriority, JMSDeliveryMode, and JMSExpiration. You can provide all or only some of them. If not provided, Camel will fall back to use the values from the endpoint instead. So, when using this option, the headers override the values from the endpoint. The explicitQosEnabled option, by contrast, will only use options set on the endpoint, and not values from the message header. | false | boolean |
priority (producer) | Values greater than 1 specify the message priority when sending (where 1 is the lowest priority and 9 is the highest). The explicitQosEnabled option must also be enabled in order for this option to have any effect. Enum values:
| 4 | int |
replyToConcurrentConsumers (producer) | Specifies the default number of concurrent consumers when doing request/reply over JMS. See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. | 1 | int |
replyToMaxConcurrentConsumers (producer) | Specifies the maximum number of concurrent consumers when using request/reply over JMS. See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. | int | |
replyToOnTimeoutMaxConcurrentConsumers (producer) | Specifies the maximum number of concurrent consumers for continue routing when timeout occurred when using request/reply over JMS. | 1 | int |
replyToOverride (producer) | Provides an explicit ReplyTo destination in the JMS message, which overrides the setting of replyTo. It is useful if you want to forward the message to a remote Queue and receive the reply message from the ReplyTo destination. | String | |
replyToType (producer) | Allows for explicitly specifying which kind of strategy to use for replyTo queues when doing request/reply over JMS. Possible values are: Temporary, Shared, or Exclusive. By default Camel will use temporary queues. However if replyTo has been configured, then Shared is used by default. This option allows you to use exclusive queues instead of shared ones. See Camel JMS documentation for more details, and especially the notes about the implications if running in a clustered environment, and the fact that Shared reply queues has lower performance than its alternatives Temporary and Exclusive. Enum values:
| ReplyToType | |
requestTimeout (producer) | The timeout for waiting for a reply when using the InOut Exchange Pattern (in milliseconds). The default is 20 seconds. You can include the header CamelJmsRequestTimeout to override this endpoint configured timeout value, and thus have per message individual timeout values. See also the requestTimeoutCheckerInterval option. | 20000 | long |
timeToLive (producer) | When sending messages, specifies the time-to-live of the message (in milliseconds). | -1 | long |
allowAdditionalHeaders (producer (advanced)) | This option is used to allow additional headers which may have values that are invalid according to JMS specification. For example some message systems such as WMQ do this with header names using prefix JMS_IBM_MQMD_ containing values with byte array or other invalid types. You can specify multiple header names separated by comma, and use as suffix for wildcard matching. | String | |
allowNullBody (producer (advanced)) | Whether to allow sending messages with no body. If this option is false and the message body is null, then an JMSException is thrown. | true | boolean |
alwaysCopyMessage (producer (advanced)) | If true, Camel will always make a JMS message copy of the message when it is passed to the producer for sending. Copying the message is needed in some situations, such as when a replyToDestinationSelectorName is set (incidentally, Camel will set the alwaysCopyMessage option to true, if a replyToDestinationSelectorName is set). | false | boolean |
correlationProperty (producer (advanced)) | When using InOut exchange pattern use this JMS property instead of JMSCorrelationID JMS property to correlate messages. If set messages will be correlated solely on the value of this property JMSCorrelationID property will be ignored and not set by Camel. | String | |
disableTimeToLive (producer (advanced)) | Use this option to force disabling time to live. For example when you do request/reply over JMS, then Camel will by default use the requestTimeout value as time to live on the message being sent. The problem is that the sender and receiver systems have to have their clocks synchronized, so they are in sync. This is not always so easy to archive. So you can use disableTimeToLive=true to not set a time to live value on the sent message. Then the message will not expire on the receiver system. See below in section About time to live for more details. | false | boolean |
forceSendOriginalMessage (producer (advanced)) | When using mapJmsMessage=false Camel will create a new JMS message to send to a new JMS destination if you touch the headers (get or set) during the route. Set this option to true to force Camel to send the original JMS message that was received. | false | boolean |
includeSentJMSMessageID (producer (advanced)) | Only applicable when sending to JMS destination using InOnly (eg fire and forget). Enabling this option will enrich the Camel Exchange with the actual JMSMessageID that was used by the JMS client when the message was sent to the JMS destination. | false | boolean |
replyToCacheLevelName (producer (advanced)) | Sets the cache level by name for the reply consumer when doing request/reply over JMS. This option only applies when using fixed reply queues (not temporary). Camel will by default use: CACHE_CONSUMER for exclusive or shared w/ replyToSelectorName. And CACHE_SESSION for shared without replyToSelectorName. Some JMS brokers such as IBM WebSphere may require to set the replyToCacheLevelName=CACHE_NONE to work. Note: If using temporary queues then CACHE_NONE is not allowed, and you must use a higher value such as CACHE_CONSUMER or CACHE_SESSION. Enum values:
| String | |
replyToDestinationSelectorName (producer (advanced)) | Sets the JMS Selector using the fixed name to be used so you can filter out your own replies from the others when using a shared queue (that is, if you are not using a temporary reply queue). | String | |
streamMessageTypeEnabled (producer (advanced)) | Sets whether StreamMessage type is enabled or not. Message payloads of streaming kind such as files, InputStream, etc will either by sent as BytesMessage or StreamMessage. This option controls which kind will be used. By default BytesMessage is used which enforces the entire message payload to be read into memory. By enabling this option the message payload is read into memory in chunks and each chunk is then written to the StreamMessage until no more data. | false | boolean |
allowAutoWiredConnectionFactory (advanced) | Whether to auto-discover ConnectionFactory from the registry, if no connection factory has been configured. If only one instance of ConnectionFactory is found then it will be used. This is enabled by default. | true | boolean |
allowAutoWiredDestinationResolver (advanced) | Whether to auto-discover DestinationResolver from the registry, if no destination resolver has been configured. If only one instance of DestinationResolver is found then it will be used. This is enabled by default. | true | boolean |
allowSerializedHeaders (advanced) | Controls whether or not to include serialized headers. Applies only when transferExchange is true. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level. | false | boolean |
artemisStreamingEnabled (advanced) | Whether optimizing for Apache Artemis streaming mode. This can reduce memory overhead when using Artemis with JMS StreamMessage types. This option must only be enabled if Apache Artemis is being used. | false | boolean |
asyncStartListener (advanced) | Whether to startup the JmsConsumer message listener asynchronously, when starting a route. For example if a JmsConsumer cannot get a connection to a remote JMS broker, then it may block while retrying and/or failover. This will cause Camel to block while starting routes. By setting this option to true, you will let routes startup, while the JmsConsumer connects to the JMS broker using a dedicated thread in asynchronous mode. If this option is used, then beware that if the connection could not be established, then an exception is logged at WARN level, and the consumer will not be able to receive messages; You can then restart the route to retry. | false | boolean |
asyncStopListener (advanced) | Whether to stop the JmsConsumer message listener asynchronously, when stopping a route. | false | boolean |
autowiredEnabled (advanced) | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | boolean |
configuration (advanced) | To use a shared JMS configuration. | JmsConfiguration | |
destinationResolver (advanced) | A pluggable org.springframework.jms.support.destination.DestinationResolver that allows you to use your own resolver (for example, to lookup the real destination in a JNDI registry). | DestinationResolver | |
errorHandler (advanced) | Specifies a org.springframework.util.ErrorHandler to be invoked in case of any uncaught exceptions thrown while processing a Message. By default these exceptions will be logged at the WARN level, if no errorHandler has been configured. You can configure logging level and whether stack traces should be logged using errorHandlerLoggingLevel and errorHandlerLogStackTrace options. This makes it much easier to configure, than having to code a custom errorHandler. | ErrorHandler | |
exceptionListener (advanced) | Specifies the JMS Exception Listener that is to be notified of any underlying JMS exceptions. | ExceptionListener | |
idleConsumerLimit (advanced) | Specify the limit for the number of consumers that are allowed to be idle at any given time. | 1 | int |
idleTaskExecutionLimit (advanced) | Specifies the limit for idle executions of a receive task, not having received any message within its execution. If this limit is reached, the task will shut down and leave receiving to other executing tasks (in the case of dynamic scheduling; see the maxConcurrentConsumers setting). There is additional doc available from Spring. | 1 | int |
includeAllJMSXProperties (advanced) | Whether to include all JMSXxxx properties when mapping from JMS to Camel Message. Setting this to true will include properties such as JMSXAppID, and JMSXUserID etc. Note: If you are using a custom headerFilterStrategy then this option does not apply. | false | boolean |
jmsKeyFormatStrategy (advanced) | Pluggable strategy for encoding and decoding JMS keys so they can be compliant with the JMS specification. Camel provides two implementations out of the box: default and passthrough. The default strategy will safely marshal dots and hyphens (. and -). The passthrough strategy leaves the key as is. Can be used for JMS brokers which do not care whether JMS header keys contain illegal characters. You can provide your own implementation of the org.apache.camel.component.jms.JmsKeyFormatStrategy and refer to it using the # notation. Enum values:
| JmsKeyFormatStrategy | |
mapJmsMessage (advanced) | Specifies whether Camel should auto map the received JMS message to a suited payload type, such as javax.jms.TextMessage to a String etc. | true | boolean |
maxMessagesPerTask (advanced) | The number of messages per task. -1 is unlimited. If you use a range for concurrent consumers (eg min max), then this option can be used to set a value to eg 100 to control how fast the consumers will shrink when less work is required. | -1 | int |
messageConverter (advanced) | To use a custom Spring org.springframework.jms.support.converter.MessageConverter so you can be in control how to map to/from a javax.jms.Message. | MessageConverter | |
messageCreatedStrategy (advanced) | To use the given MessageCreatedStrategy which are invoked when Camel creates new instances of javax.jms.Message objects when Camel is sending a JMS message. | MessageCreatedStrategy | |
messageIdEnabled (advanced) | When sending, specifies whether message IDs should be added. This is just an hint to the JMS broker. If the JMS provider accepts this hint, these messages must have the message ID set to null; if the provider ignores the hint, the message ID must be set to its normal unique value. | true | boolean |
messageListenerContainerFactory (advanced) | Registry ID of the MessageListenerContainerFactory used to determine what org.springframework.jms.listener.AbstractMessageListenerContainer to use to consume messages. Setting this will automatically set consumerType to Custom. | MessageListenerContainerFactory | |
messageTimestampEnabled (advanced) | Specifies whether timestamps should be enabled by default on sending messages. This is just an hint to the JMS broker. If the JMS provider accepts this hint, these messages must have the timestamp set to zero; if the provider ignores the hint the timestamp must be set to its normal value. | true | boolean |
pubSubNoLocal (advanced) | Specifies whether to inhibit the delivery of messages published by its own connection. | false | boolean |
queueBrowseStrategy (advanced) | To use a custom QueueBrowseStrategy when browsing queues. | QueueBrowseStrategy | |
receiveTimeout (advanced) | The timeout for receiving messages (in milliseconds). | 1000 | long |
recoveryInterval (advanced) | Specifies the interval between recovery attempts, i.e. when a connection is being refreshed, in milliseconds. The default is 5000 ms, that is, 5 seconds. | 5000 | long |
requestTimeoutCheckerInterval (advanced) | Configures how often Camel should check for timed out Exchanges when doing request/reply over JMS. By default Camel checks once per second. But if you must react faster when a timeout occurs, then you can lower this interval, to check more frequently. The timeout is determined by the option requestTimeout. | 1000 | long |
synchronous (advanced) | Sets whether synchronous processing should be strictly used. | false | boolean |
transferException (advanced) | If enabled and you are using Request Reply messaging (InOut) and an Exchange failed on the consumer side, then the caused Exception will be send back in response as a javax.jms.ObjectMessage. If the client is Camel, the returned Exception is rethrown. This allows you to use Camel JMS as a bridge in your routing - for example, using persistent queues to enable robust routing. Notice that if you also have transferExchange enabled, this option takes precedence. The caught exception is required to be serializable. The original Exception on the consumer side can be wrapped in an outer exception such as org.apache.camel.RuntimeCamelException when returned to the producer. Use this with caution as the data is using Java Object serialization and requires the received to be able to deserialize the data at Class level, which forces a strong coupling between the producers and consumer!. | false | boolean |
transferExchange (advanced) | You can transfer the exchange over the wire instead of just the body and headers. The following fields are transferred: In body, Out body, Fault body, In headers, Out headers, Fault headers, exchange properties, exchange exception. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level. You must enable this option on both the producer and consumer side, so Camel knows the payloads is an Exchange and not a regular payload. Use this with caution as the data is using Java Object serialization and requires the receiver to be able to deserialize the data at Class level, which forces a strong coupling between the producers and consumers having to use compatible Camel versions!. | false | boolean |
useMessageIDAsCorrelationID (advanced) | Specifies whether JMSMessageID should always be used as JMSCorrelationID for InOut messages. | false | boolean |
waitForProvisionCorrelationToBeUpdatedCounter (advanced) | Number of times to wait for provisional correlation id to be updated to the actual correlation id when doing request/reply over JMS and when the option useMessageIDAsCorrelationID is enabled. | 50 | int |
waitForProvisionCorrelationToBeUpdatedThreadSleepingTime (advanced) | Interval in millis to sleep each time while waiting for provisional correlation id to be updated. | 100 | long |
headerFilterStrategy (filter) | To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter header to and from Camel message. | HeaderFilterStrategy | |
errorHandlerLoggingLevel (logging) | Allows to configure the default errorHandler logging level for logging uncaught exceptions. Enum values:
| WARN | LoggingLevel |
errorHandlerLogStackTrace (logging) | Allows to control whether stacktraces should be logged or not, by the default errorHandler. | true | boolean |
password (security) | Password to use with the ConnectionFactory. You can also configure username/password directly on the ConnectionFactory. | String | |
username (security) | Username to use with the ConnectionFactory. You can also configure username/password directly on the ConnectionFactory. | String | |
transacted (transaction) | Specifies whether to use transacted mode. | false | boolean |
transactedInOut (transaction) | Specifies whether InOut operations (request reply) default to using transacted mode If this flag is set to true, then Spring JmsTemplate will have sessionTransacted set to true, and the acknowledgeMode as transacted on the JmsTemplate used for InOut operations. Note from Spring JMS: that within a JTA transaction, the parameters passed to createQueue, createTopic methods are not taken into account. Depending on the Java EE transaction context, the container makes its own decisions on these values. Analogously, these parameters are not taken into account within a locally managed transaction either, since Spring JMS operates on an existing JMS Session in this case. Setting this flag to true will use a short local JMS transaction when running outside of a managed transaction, and a synchronized local JMS transaction in case of a managed transaction (other than an XA transaction) being present. This has the effect of a local JMS transaction being managed alongside the main transaction (which might be a native JDBC transaction), with the JMS transaction committing right after the main transaction. | false | boolean |
lazyCreateTransactionManager (transaction (advanced)) | If true, Camel will create a JmsTransactionManager, if there is no transactionManager injected when option transacted=true. | true | boolean |
transactionManager (transaction (advanced)) | The Spring transaction manager to use. | PlatformTransactionManager | |
transactionName (transaction (advanced)) | The name of the transaction to use. | String | |
transactionTimeout (transaction (advanced)) | The timeout value of the transaction (in seconds), if using transacted mode. | -1 | int |
28.4. Endpoint Options
The JMS endpoint is configured using URI syntax:
jms:destinationType:destinationName
with the following path and query parameters:
28.4.1. Path Parameters (2 parameters)
Name | Description | Default | Type |
---|---|---|---|
destinationType (common) | The kind of destination to use. Enum values:
| queue | String |
destinationName (common) | Required Name of the queue or topic to use as destination. | String |
28.4.2. Query Parameters (95 parameters)
Name | Description | Default | Type |
---|---|---|---|
clientId (common) | Sets the JMS client ID to use. Note that this value, if specified, must be unique and can only be used by a single JMS connection instance. It is typically only required for durable topic subscriptions. If using Apache ActiveMQ you may prefer to use Virtual Topics instead. | String | |
connectionFactory (common) | The connection factory to be use. A connection factory must be configured either on the component or endpoint. | ConnectionFactory | |
disableReplyTo (common) | Specifies whether Camel ignores the JMSReplyTo header in messages. If true, Camel does not send a reply back to the destination specified in the JMSReplyTo header. You can use this option if you want Camel to consume from a route and you do not want Camel to automatically send back a reply message because another component in your code handles the reply message. You can also use this option if you want to use Camel as a proxy between different message brokers and you want to route message from one system to another. | false | boolean |
durableSubscriptionName (common) | The durable subscriber name for specifying durable topic subscriptions. The clientId option must be configured as well. | String | |
jmsMessageType (common) | Allows you to force the use of a specific javax.jms.Message implementation for sending JMS messages. Possible values are: Bytes, Map, Object, Stream, Text. By default, Camel would determine which JMS message type to use from the In body type. This option allows you to specify it. Enum values:
| JmsMessageType | |
replyTo (common) | Provides an explicit ReplyTo destination (overrides any incoming value of Message.getJMSReplyTo() in consumer). | String | |
testConnectionOnStartup (common) | Specifies whether to test the connection on startup. This ensures that when Camel starts that all the JMS consumers have a valid connection to the JMS broker. If a connection cannot be granted then Camel throws an exception on startup. This ensures that Camel is not started with failed connections. The JMS producers is tested as well. | false | boolean |
acknowledgementModeName (consumer) | The JMS acknowledgement name, which is one of: SESSION_TRANSACTED, CLIENT_ACKNOWLEDGE, AUTO_ACKNOWLEDGE, DUPS_OK_ACKNOWLEDGE. Enum values:
| AUTO_ACKNOWLEDGE | String |
artemisConsumerPriority (consumer) | Consumer priorities allow you to ensure that high priority consumers receive messages while they are active. Normally, active consumers connected to a queue receive messages from it in a round-robin fashion. When consumer priorities are in use, messages are delivered round-robin if multiple active consumers exist with the same high priority. Messages will only going to lower priority consumers when the high priority consumers do not have credit available to consume the message, or those high priority consumers have declined to accept the message (for instance because it does not meet the criteria of any selectors associated with the consumer). | int | |
asyncConsumer (consumer) | Whether the JmsConsumer processes the Exchange asynchronously. If enabled then the JmsConsumer may pickup the next message from the JMS queue, while the previous message is being processed asynchronously (by the Asynchronous Routing Engine). This means that messages may be processed not 100% strictly in order. If disabled (as default) then the Exchange is fully processed before the JmsConsumer will pickup the next message from the JMS queue. Note if transacted has been enabled, then asyncConsumer=true does not run asynchronously, as transaction must be executed synchronously (Camel 3.0 may support async transactions). | false | boolean |
autoStartup (consumer) | Specifies whether the consumer container should auto-startup. | true | boolean |
cacheLevel (consumer) | Sets the cache level by ID for the underlying JMS resources. See cacheLevelName option for more details. | int | |
cacheLevelName (consumer) | Sets the cache level by name for the underlying JMS resources. Possible values are: CACHE_AUTO, CACHE_CONNECTION, CACHE_CONSUMER, CACHE_NONE, and CACHE_SESSION. The default setting is CACHE_AUTO. See the Spring documentation and Transactions Cache Levels for more information. Enum values:
| CACHE_AUTO | String |
concurrentConsumers (consumer) | Specifies the default number of concurrent consumers when consuming from JMS (not for request/reply over JMS). See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. When doing request/reply over JMS then the option replyToConcurrentConsumers is used to control number of concurrent consumers on the reply message listener. | 1 | int |
maxConcurrentConsumers (consumer) | Specifies the maximum number of concurrent consumers when consuming from JMS (not for request/reply over JMS). See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. When doing request/reply over JMS then the option replyToMaxConcurrentConsumers is used to control number of concurrent consumers on the reply message listener. | int | |
replyToDeliveryPersistent (consumer) | Specifies whether to use persistent delivery by default for replies. | true | boolean |
selector (consumer) | Sets the JMS selector to use. | String | |
subscriptionDurable (consumer) | Set whether to make the subscription durable. The durable subscription name to be used can be specified through the subscriptionName property. Default is false. Set this to true to register a durable subscription, typically in combination with a subscriptionName value (unless your message listener class name is good enough as subscription name). Only makes sense when listening to a topic (pub-sub domain), therefore this method switches the pubSubDomain flag as well. | false | boolean |
subscriptionName (consumer) | Set the name of a subscription to create. To be applied in case of a topic (pub-sub domain) with a shared or durable subscription. The subscription name needs to be unique within this client’s JMS client id. Default is the class name of the specified message listener. Note: Only 1 concurrent consumer (which is the default of this message listener container) is allowed for each subscription, except for a shared subscription (which requires JMS 2.0). | String | |
subscriptionShared (consumer) | Set whether to make the subscription shared. The shared subscription name to be used can be specified through the subscriptionName property. Default is false. Set this to true to register a shared subscription, typically in combination with a subscriptionName value (unless your message listener class name is good enough as subscription name). Note that shared subscriptions may also be durable, so this flag can (and often will) be combined with subscriptionDurable as well. Only makes sense when listening to a topic (pub-sub domain), therefore this method switches the pubSubDomain flag as well. Requires a JMS 2.0 compatible message broker. | false | boolean |
acceptMessagesWhileStopping (consumer (advanced)) | Specifies whether the consumer accept messages while it is stopping. You may consider enabling this option, if you start and stop JMS routes at runtime, while there are still messages enqueued on the queue. If this option is false, and you stop the JMS route, then messages may be rejected, and the JMS broker would have to attempt redeliveries, which yet again may be rejected, and eventually the message may be moved at a dead letter queue on the JMS broker. To avoid this its recommended to enable this option. | false | boolean |
allowReplyManagerQuickStop (consumer (advanced)) | Whether the DefaultMessageListenerContainer used in the reply managers for request-reply messaging allow the DefaultMessageListenerContainer.runningAllowed flag to quick stop in case JmsConfiguration#isAcceptMessagesWhileStopping is enabled, and org.apache.camel.CamelContext is currently being stopped. This quick stop ability is enabled by default in the regular JMS consumers but to enable for reply managers you must enable this flag. | false | boolean |
consumerType (consumer (advanced)) | The consumer type to use, which can be one of: Simple, Default, or Custom. The consumer type determines which Spring JMS listener to use. Default will use org.springframework.jms.listener.DefaultMessageListenerContainer, Simple will use org.springframework.jms.listener.SimpleMessageListenerContainer. When Custom is specified, the MessageListenerContainerFactory defined by the messageListenerContainerFactory option will determine what org.springframework.jms.listener.AbstractMessageListenerContainer to use. Enum values:
| Default | ConsumerType |
defaultTaskExecutorType (consumer (advanced)) | Specifies what default TaskExecutor type to use in the DefaultMessageListenerContainer, for both consumer endpoints and the ReplyTo consumer of producer endpoints. Possible values: SimpleAsync (uses Spring’s SimpleAsyncTaskExecutor) or ThreadPool (uses Spring’s ThreadPoolTaskExecutor with optimal values - cached threadpool-like). If not set, it defaults to the previous behaviour, which uses a cached thread pool for consumer endpoints and SimpleAsync for reply consumers. The use of ThreadPool is recommended to reduce thread trash in elastic configurations with dynamically increasing and decreasing concurrent consumers. Enum values:
| DefaultTaskExecutorType | |
eagerLoadingOfProperties (consumer (advanced)) | Enables eager loading of JMS properties and payload as soon as a message is loaded which generally is inefficient as the JMS properties may not be required but sometimes can catch early any issues with the underlying JMS provider and the use of JMS properties. See also the option eagerPoisonBody. | false | boolean |
eagerPoisonBody (consumer (advanced)) | If eagerLoadingOfProperties is enabled and the JMS message payload (JMS body or JMS properties) is poison (cannot be read/mapped), then set this text as the message body instead so the message can be processed (the cause of the poison is already stored as an exception on the Exchange). This can be turned off by setting eagerPoisonBody=false. See also the option eagerLoadingOfProperties. | Poison JMS message due to ${exception.message} | String |
exceptionHandler (consumer (advanced)) | To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. | ExceptionHandler | |
exchangePattern (consumer (advanced)) | Sets the exchange pattern when the consumer creates an exchange. Enum values:
| ExchangePattern | |
exposeListenerSession (consumer (advanced)) | Specifies whether the listener session should be exposed when consuming messages. | false | boolean |
replyToSameDestinationAllowed (consumer (advanced)) | Whether a JMS consumer is allowed to send a reply message to the same destination that the consumer is using to consume from. This prevents an endless loop by consuming and sending back the same message to itself. | false | boolean |
taskExecutor (consumer (advanced)) | Allows you to specify a custom task executor for consuming messages. | TaskExecutor | |
deliveryDelay (producer) | Sets delivery delay to use for send calls for JMS. This option requires JMS 2.0 compliant broker. | -1 | long |
deliveryMode (producer) | Specifies the delivery mode to be used. Possible values are those defined by javax.jms.DeliveryMode. NON_PERSISTENT = 1 and PERSISTENT = 2. Enum values:
| Integer | |
deliveryPersistent (producer) | Specifies whether persistent delivery is used by default. | true | boolean |
explicitQosEnabled (producer) | Set if the deliveryMode, priority or timeToLive qualities of service should be used when sending messages. This option is based on Spring’s JmsTemplate. The deliveryMode, priority and timeToLive options are applied to the current endpoint. This contrasts with the preserveMessageQos option, which operates at message granularity, reading QoS properties exclusively from the Camel In message headers. | false | Boolean |
formatDateHeadersToIso8601 (producer) | Sets whether JMS date properties should be formatted according to the ISO 8601 standard. | false | boolean |
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
preserveMessageQos (producer) | Set to true, if you want to send message using the QoS settings specified on the message, instead of the QoS settings on the JMS endpoint. The following three headers are considered JMSPriority, JMSDeliveryMode, and JMSExpiration. You can provide all or only some of them. If not provided, Camel will fall back to use the values from the endpoint instead. So, when using this option, the headers override the values from the endpoint. The explicitQosEnabled option, by contrast, will only use options set on the endpoint, and not values from the message header. | false | boolean |
priority (producer) | Values greater than 1 specify the message priority when sending (where 1 is the lowest priority and 9 is the highest). The explicitQosEnabled option must also be enabled in order for this option to have any effect. Enum values:
| 4 | int |
replyToConcurrentConsumers (producer) | Specifies the default number of concurrent consumers when doing request/reply over JMS. See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. | 1 | int |
replyToMaxConcurrentConsumers (producer) | Specifies the maximum number of concurrent consumers when using request/reply over JMS. See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. | int | |
replyToOnTimeoutMaxConcurrentConsumers (producer) | Specifies the maximum number of concurrent consumers for continue routing when timeout occurred when using request/reply over JMS. | 1 | int |
replyToOverride (producer) | Provides an explicit ReplyTo destination in the JMS message, which overrides the setting of replyTo. It is useful if you want to forward the message to a remote Queue and receive the reply message from the ReplyTo destination. | String | |
replyToType (producer) | Allows for explicitly specifying which kind of strategy to use for replyTo queues when doing request/reply over JMS. Possible values are: Temporary, Shared, or Exclusive. By default Camel will use temporary queues. However if replyTo has been configured, then Shared is used by default. This option allows you to use exclusive queues instead of shared ones. See Camel JMS documentation for more details, and especially the notes about the implications if running in a clustered environment, and the fact that Shared reply queues has lower performance than its alternatives Temporary and Exclusive. Enum values:
| ReplyToType | |
requestTimeout (producer) | The timeout for waiting for a reply when using the InOut Exchange Pattern (in milliseconds). The default is 20 seconds. You can include the header CamelJmsRequestTimeout to override this endpoint configured timeout value, and thus have per message individual timeout values. See also the requestTimeoutCheckerInterval option. | 20000 | long |
timeToLive (producer) | When sending messages, specifies the time-to-live of the message (in milliseconds). | -1 | long |
allowAdditionalHeaders (producer (advanced)) | This option is used to allow additional headers which may have values that are invalid according to the JMS specification. For example some message systems such as WMQ do this with header names using the prefix JMS_IBM_MQMD_ containing values with byte array or other invalid types. You can specify multiple header names separated by comma, and use * as suffix for wildcard matching. | String |
allowNullBody (producer (advanced)) | Whether to allow sending messages with no body. If this option is false and the message body is null, then an JMSException is thrown. | true | boolean |
alwaysCopyMessage (producer (advanced)) | If true, Camel will always make a JMS message copy of the message when it is passed to the producer for sending. Copying the message is needed in some situations, such as when a replyToDestinationSelectorName is set (incidentally, Camel will set the alwaysCopyMessage option to true, if a replyToDestinationSelectorName is set). | false | boolean |
correlationProperty (producer (advanced)) | When using the InOut exchange pattern, use this JMS property instead of the JMSCorrelationID JMS property to correlate messages. If set, messages will be correlated solely on the value of this property; the JMSCorrelationID property will be ignored and not set by Camel. | String |
disableTimeToLive (producer (advanced)) | Use this option to force disabling time to live. For example when you do request/reply over JMS, then Camel will by default use the requestTimeout value as time to live on the message being sent. The problem is that the sender and receiver systems have to have their clocks synchronized, so they are in sync. This is not always so easy to achieve. So you can use disableTimeToLive=true to not set a time to live value on the sent message. Then the message will not expire on the receiver system. See below in section About time to live for more details. | false | boolean |
forceSendOriginalMessage (producer (advanced)) | When using mapJmsMessage=false Camel will create a new JMS message to send to a new JMS destination if you touch the headers (get or set) during the route. Set this option to true to force Camel to send the original JMS message that was received. | false | boolean |
includeSentJMSMessageID (producer (advanced)) | Only applicable when sending to JMS destination using InOnly (eg fire and forget). Enabling this option will enrich the Camel Exchange with the actual JMSMessageID that was used by the JMS client when the message was sent to the JMS destination. | false | boolean |
replyToCacheLevelName (producer (advanced)) | Sets the cache level by name for the reply consumer when doing request/reply over JMS. This option only applies when using fixed reply queues (not temporary). Camel will by default use: CACHE_CONSUMER for exclusive or shared w/ replyToSelectorName. And CACHE_SESSION for shared without replyToSelectorName. Some JMS brokers such as IBM WebSphere may require to set the replyToCacheLevelName=CACHE_NONE to work. Note: If using temporary queues then CACHE_NONE is not allowed, and you must use a higher value such as CACHE_CONSUMER or CACHE_SESSION. Enum values: CACHE_AUTO, CACHE_CONNECTION, CACHE_CONSUMER, CACHE_NONE, CACHE_SESSION | String |
replyToDestinationSelectorName (producer (advanced)) | Sets the JMS Selector using the fixed name to be used so you can filter out your own replies from the others when using a shared queue (that is, if you are not using a temporary reply queue). | String | |
streamMessageTypeEnabled (producer (advanced)) | Sets whether StreamMessage type is enabled or not. Message payloads of streaming kind such as files, InputStream, etc will either be sent as BytesMessage or StreamMessage. This option controls which kind will be used. By default BytesMessage is used which enforces the entire message payload to be read into memory. By enabling this option the message payload is read into memory in chunks and each chunk is then written to the StreamMessage until no more data. | false | boolean |
allowSerializedHeaders (advanced) | Controls whether or not to include serialized headers. Applies only when transferExchange is true. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level. | false | boolean |
artemisStreamingEnabled (advanced) | Whether optimizing for Apache Artemis streaming mode. This can reduce memory overhead when using Artemis with JMS StreamMessage types. This option must only be enabled if Apache Artemis is being used. | false | boolean |
asyncStartListener (advanced) | Whether to startup the JmsConsumer message listener asynchronously, when starting a route. For example if a JmsConsumer cannot get a connection to a remote JMS broker, then it may block while retrying and/or failover. This will cause Camel to block while starting routes. By setting this option to true, you will let routes startup, while the JmsConsumer connects to the JMS broker using a dedicated thread in asynchronous mode. If this option is used, then beware that if the connection could not be established, then an exception is logged at WARN level, and the consumer will not be able to receive messages; You can then restart the route to retry. | false | boolean |
asyncStopListener (advanced) | Whether to stop the JmsConsumer message listener asynchronously, when stopping a route. | false | boolean |
destinationResolver (advanced) | A pluggable org.springframework.jms.support.destination.DestinationResolver that allows you to use your own resolver (for example, to lookup the real destination in a JNDI registry). | DestinationResolver | |
errorHandler (advanced) | Specifies a org.springframework.util.ErrorHandler to be invoked in case of any uncaught exceptions thrown while processing a Message. By default these exceptions will be logged at the WARN level, if no errorHandler has been configured. You can configure logging level and whether stack traces should be logged using errorHandlerLoggingLevel and errorHandlerLogStackTrace options. This makes it much easier to configure, than having to code a custom errorHandler. | ErrorHandler | |
exceptionListener (advanced) | Specifies the JMS Exception Listener that is to be notified of any underlying JMS exceptions. | ExceptionListener | |
headerFilterStrategy (advanced) | To use a custom HeaderFilterStrategy to filter header to and from Camel message. | HeaderFilterStrategy | |
idleConsumerLimit (advanced) | Specify the limit for the number of consumers that are allowed to be idle at any given time. | 1 | int |
idleTaskExecutionLimit (advanced) | Specifies the limit for idle executions of a receive task, not having received any message within its execution. If this limit is reached, the task will shut down and leave receiving to other executing tasks (in the case of dynamic scheduling; see the maxConcurrentConsumers setting). There is additional doc available from Spring. | 1 | int |
includeAllJMSXProperties (advanced) | Whether to include all JMSXxxx properties when mapping from JMS to Camel Message. Setting this to true will include properties such as JMSXAppID, and JMSXUserID etc. Note: If you are using a custom headerFilterStrategy then this option does not apply. | false | boolean |
jmsKeyFormatStrategy (advanced) | Pluggable strategy for encoding and decoding JMS keys so they can be compliant with the JMS specification. Camel provides two implementations out of the box: default and passthrough. The default strategy will safely marshal dots and hyphens (. and -). The passthrough strategy leaves the key as is. Can be used for JMS brokers which do not care whether JMS header keys contain illegal characters. You can provide your own implementation of the org.apache.camel.component.jms.JmsKeyFormatStrategy and refer to it using the # notation. Enum values: default, passthrough | JmsKeyFormatStrategy |
mapJmsMessage (advanced) | Specifies whether Camel should auto map the received JMS message to a suited payload type, such as javax.jms.TextMessage to a String etc. | true | boolean |
maxMessagesPerTask (advanced) | The number of messages per task. -1 is unlimited. If you use a range for concurrent consumers (eg min max), then this option can be used to set a value to eg 100 to control how fast the consumers will shrink when less work is required. | -1 | int |
messageConverter (advanced) | To use a custom Spring org.springframework.jms.support.converter.MessageConverter so you can be in control how to map to/from a javax.jms.Message. | MessageConverter | |
messageCreatedStrategy (advanced) | To use the given MessageCreatedStrategy which are invoked when Camel creates new instances of javax.jms.Message objects when Camel is sending a JMS message. | MessageCreatedStrategy | |
messageIdEnabled (advanced) | When sending, specifies whether message IDs should be added. This is just a hint to the JMS broker. If the JMS provider accepts this hint, these messages must have the message ID set to null; if the provider ignores the hint, the message ID must be set to its normal unique value. | true | boolean |
messageListenerContainerFactory (advanced) | Registry ID of the MessageListenerContainerFactory used to determine what org.springframework.jms.listener.AbstractMessageListenerContainer to use to consume messages. Setting this will automatically set consumerType to Custom. | MessageListenerContainerFactory | |
messageTimestampEnabled (advanced) | Specifies whether timestamps should be enabled by default on sending messages. This is just a hint to the JMS broker. If the JMS provider accepts this hint, these messages must have the timestamp set to zero; if the provider ignores the hint the timestamp must be set to its normal value. | true | boolean |
pubSubNoLocal (advanced) | Specifies whether to inhibit the delivery of messages published by its own connection. | false | boolean |
receiveTimeout (advanced) | The timeout for receiving messages (in milliseconds). | 1000 | long |
recoveryInterval (advanced) | Specifies the interval between recovery attempts, i.e. when a connection is being refreshed, in milliseconds. The default is 5000 ms, that is, 5 seconds. | 5000 | long |
requestTimeoutCheckerInterval (advanced) | Configures how often Camel should check for timed out Exchanges when doing request/reply over JMS. By default Camel checks once per second. But if you must react faster when a timeout occurs, then you can lower this interval, to check more frequently. The timeout is determined by the option requestTimeout. | 1000 | long |
synchronous (advanced) | Sets whether synchronous processing should be strictly used. | false | boolean |
transferException (advanced) | If enabled and you are using Request Reply messaging (InOut) and an Exchange failed on the consumer side, then the caused Exception will be sent back in the response as a javax.jms.ObjectMessage. If the client is Camel, the returned Exception is rethrown. This allows you to use Camel JMS as a bridge in your routing - for example, using persistent queues to enable robust routing. Notice that if you also have transferExchange enabled, this option takes precedence. The caught exception is required to be serializable. The original Exception on the consumer side can be wrapped in an outer exception such as org.apache.camel.RuntimeCamelException when returned to the producer. Use this with caution as the data is using Java Object serialization and requires the receiver to be able to deserialize the data at Class level, which forces a strong coupling between the producers and consumers! | false | boolean |
transferExchange (advanced) | You can transfer the exchange over the wire instead of just the body and headers. The following fields are transferred: In body, Out body, Fault body, In headers, Out headers, Fault headers, exchange properties, exchange exception. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level. You must enable this option on both the producer and consumer side, so Camel knows the payload is an Exchange and not a regular payload. Use this with caution as the data is using Java Object serialization and requires the receiver to be able to deserialize the data at Class level, which forces a strong coupling between producers and consumers, having to use compatible Camel versions! | false | boolean |
useMessageIDAsCorrelationID (advanced) | Specifies whether JMSMessageID should always be used as JMSCorrelationID for InOut messages. | false | boolean |
waitForProvisionCorrelationToBeUpdatedCounter (advanced) | Number of times to wait for provisional correlation id to be updated to the actual correlation id when doing request/reply over JMS and when the option useMessageIDAsCorrelationID is enabled. | 50 | int |
waitForProvisionCorrelationToBeUpdatedThreadSleepingTime (advanced) | Interval in millis to sleep each time while waiting for provisional correlation id to be updated. | 100 | long |
errorHandlerLoggingLevel (logging) | Allows to configure the default errorHandler logging level for logging uncaught exceptions. Enum values: TRACE, DEBUG, INFO, WARN, ERROR, OFF | WARN | LoggingLevel |
errorHandlerLogStackTrace (logging) | Allows to control whether stacktraces should be logged or not, by the default errorHandler. | true | boolean |
password (security) | Password to use with the ConnectionFactory. You can also configure username/password directly on the ConnectionFactory. | String | |
username (security) | Username to use with the ConnectionFactory. You can also configure username/password directly on the ConnectionFactory. | String | |
transacted (transaction) | Specifies whether to use transacted mode. | false | boolean |
transactedInOut (transaction) | Specifies whether InOut operations (request reply) default to using transacted mode. If this flag is set to true, then Spring JmsTemplate will have sessionTransacted set to true, and the acknowledgeMode as transacted on the JmsTemplate used for InOut operations. Note from Spring JMS: that within a JTA transaction, the parameters passed to createQueue, createTopic methods are not taken into account. Depending on the Java EE transaction context, the container makes its own decisions on these values. Analogously, these parameters are not taken into account within a locally managed transaction either, since Spring JMS operates on an existing JMS Session in this case. Setting this flag to true will use a short local JMS transaction when running outside of a managed transaction, and a synchronized local JMS transaction in case of a managed transaction (other than an XA transaction) being present. This has the effect of a local JMS transaction being managed alongside the main transaction (which might be a native JDBC transaction), with the JMS transaction committing right after the main transaction. | false | boolean |
lazyCreateTransactionManager (transaction (advanced)) | If true, Camel will create a JmsTransactionManager, if there is no transactionManager injected when option transacted=true. | true | boolean |
transactionManager (transaction (advanced)) | The Spring transaction manager to use. | PlatformTransactionManager | |
transactionName (transaction (advanced)) | The name of the transaction to use. | String | |
transactionTimeout (transaction (advanced)) | The timeout value of the transaction (in seconds), if using transacted mode. | -1 | int |
28.5. Samples
JMS is used in many examples for other components as well. But we provide a few samples below to get started.
28.5.1. Receiving from JMS
In the following sample we configure a route that receives JMS messages and routes the message to a POJO:
from("jms:queue:foo"). to("bean:myBusinessLogic");
You can of course use any of the EIP patterns so the route can be context based. For example, here’s how to filter an order topic for the big spenders:
from("jms:topic:OrdersTopic"). filter().method("myBean", "isGoldCustomer"). to("jms:queue:BigSpendersQueue");
28.5.2. Sending to JMS
In the sample below we poll a file folder and send the file content to a JMS topic. As we want the content of the file as a TextMessage instead of a BytesMessage, we need to convert the body to a String:
from("file://orders"). convertBodyTo(String.class). to("jms:topic:OrdersTopic");
28.5.3. Using Annotations
Camel also has annotations so you can use POJO Consuming and POJO Producing.
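For instance, a POJO can consume from a queue with the @Consume annotation and send messages through a field injected with @Produce. The following is only a minimal sketch; the class and queue names are illustrative:

import org.apache.camel.Consume;
import org.apache.camel.Produce;
import org.apache.camel.ProducerTemplate;

public class OrderService {

    // injects a producer that sends to the confirmations queue
    @Produce("jms:queue:confirmations")
    private ProducerTemplate confirmations;

    // consumes messages from the orders queue; the message body is bound to the method parameter
    @Consume("jms:queue:orders")
    public void onOrder(String body) {
        confirmations.sendBody("Received: " + body);
    }
}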
28.5.4. Spring DSL sample
The preceding examples use the Java DSL. Camel also supports Spring XML DSL. Here is the big spender sample using Spring DSL:
<route> <from uri="jms:topic:OrdersTopic"/> <filter> <method ref="myBean" method="isGoldCustomer"/> <to uri="jms:queue:BigSpendersQueue"/> </filter> </route>
28.5.5. Other samples
JMS appears in many of the examples for other components and EIP patterns, as well in this Camel documentation. So feel free to browse the documentation.
28.5.6. Using JMS as a Dead Letter Queue storing Exchange
Normally, when using JMS as the transport, it only transfers the body and headers as the payload. If you want to use JMS with a Dead Letter Channel, using a JMS queue as the Dead Letter Queue, then normally the caused Exception is not stored in the JMS message. You can, however, use the transferExchange option on the JMS dead letter queue to instruct Camel to store the entire Exchange in the queue as a javax.jms.ObjectMessage that holds an org.apache.camel.support.DefaultExchangeHolder. This allows you to consume from the Dead Letter Queue and retrieve the caused exception from the Exchange property with the key Exchange.EXCEPTION_CAUGHT. The demo below illustrates this:
// setup error handler to use JMS as queue and store the entire Exchange
errorHandler(deadLetterChannel("jms:queue:dead?transferExchange=true"));
Then you can consume from the JMS queue and analyze the problem:
from("jms:queue:dead").to("bean:myErrorAnalyzer"); // and in our bean String body = exchange.getIn().getBody(); Exception cause = exchange.getProperty(Exchange.EXCEPTION_CAUGHT, Exception.class); // the cause message is String problem = cause.getMessage();
28.5.7. Using JMS as a Dead Letter Channel storing error only
You can use JMS to store the cause error message or to store a custom body, which you can initialize yourself. The following example uses the Message Translator EIP to do a transformation on the failed exchange before it is moved to the JMS dead letter queue:
// we sent it to a seda dead queue first
errorHandler(deadLetterChannel("seda:dead"));

// and on the seda dead queue we can do the custom transformation before it is sent to the JMS queue
from("seda:dead").transform(exceptionMessage()).to("jms:queue:dead");
Here we only store the original cause error message in the transform. You can, however, use any Expression to send whatever you like. For example, you can invoke a method on a Bean or use a custom processor.
28.6. Message Mapping between JMS and Camel
Camel automatically maps messages between javax.jms.Message and org.apache.camel.Message.
When sending a JMS message, Camel converts the message body to the following JMS message types:
Body Type | JMS Message | Comment
---|---|---|
String | javax.jms.TextMessage |
org.w3c.dom.Node | javax.jms.TextMessage | The DOM will be converted to String.
Map | javax.jms.MapMessage |
java.io.Serializable | javax.jms.ObjectMessage |
byte[] | javax.jms.BytesMessage |
java.io.File | javax.jms.BytesMessage |
java.io.Reader | javax.jms.BytesMessage |
java.io.InputStream | javax.jms.BytesMessage |
java.nio.ByteBuffer | javax.jms.BytesMessage |
When receiving a JMS message, Camel converts the JMS message to the following body type:
JMS Message | Body Type
---|---|
javax.jms.TextMessage | String
javax.jms.BytesMessage | byte[]
javax.jms.MapMessage | Map<String, Object>
javax.jms.ObjectMessage | Object
28.6.1. Disabling auto-mapping of JMS messages
You can use the mapJmsMessage option to disable the auto-mapping above. If disabled, Camel will not try to map the received JMS message, but instead uses it directly as the payload. This allows you to avoid the overhead of mapping and let Camel just pass through the JMS message. For instance, it even allows you to route javax.jms.ObjectMessage JMS messages with classes you do not have on the classpath.
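For example, a route could bridge one JMS destination to another and pass the JMS message through unmapped (a minimal sketch; the queue names are illustrative):

from("jms:queue:foo?mapJmsMessage=false")
    .to("jms:queue:bar");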
28.6.2. Using a custom MessageConverter
You can use the messageConverter option to do the mapping yourself in a Spring org.springframework.jms.support.converter.MessageConverter class.
For example, in the route below we use a custom message converter when sending a message to the JMS order queue:
from("file://inbox/order").to("jms:queue:order?messageConverter=#myMessageConverter");
You can also use a custom message converter when consuming from a JMS destination.
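A converter registered in the registry as myMessageConverter could look like the following minimal sketch; the class name and mapping logic are illustrative assumptions:

import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.Session;
import org.springframework.jms.support.converter.MessageConverter;

public class MyMessageConverter implements MessageConverter {

    // invoked when Camel sends: map the exchange body to a JMS message
    @Override
    public Message toMessage(Object body, Session session) throws JMSException {
        return session.createTextMessage("ORDER:" + body);
    }

    // invoked when Camel consumes: map the JMS message to the exchange body
    @Override
    public Object fromMessage(Message message) throws JMSException {
        return message;
    }
}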
28.6.3. Controlling the mapping strategy selected
You can use the jmsMessageType option on the endpoint URL to force a specific message type for all messages.
In the route below, we poll files from a folder and send them as javax.jms.TextMessage as we have forced the JMS producer endpoint to use text messages:
from("file://inbox/order").to("jms:queue:order?jmsMessageType=Text");
You can also specify the message type to use for each message by setting the header with the key CamelJmsMessageType. For example:
from("file://inbox/order").setHeader("CamelJmsMessageType", JmsMessageType.Text).to("jms:queue:order");
The possible values are defined in the enum class, org.apache.camel.component.jms.JmsMessageType.
28.7. Message format when sending
The exchange that is sent over the JMS wire must conform to the JMS Message spec.
For the exchange.in.header, the following rules apply for the header keys:

- Keys starting with JMS or JMSX are reserved.
- exchange.in.headers keys must be literals and all be valid Java identifiers (do not use dots in the key name).
- Camel replaces dots & hyphens and the reverse when consuming JMS messages: . is replaced by `DOT` and the reverse replacement when Camel consumes the message. - is replaced by `HYPHEN` and the reverse replacement when Camel consumes the message.
- See also the option jmsKeyFormatStrategy, which allows use of your own custom strategy for formatting keys, as sketched below.
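A custom key format strategy is a class implementing the org.apache.camel.component.jms.JmsKeyFormatStrategy interface, referenced from the endpoint using the # notation (for example jms:queue:foo?jmsKeyFormatStrategy=#myKeyFormatStrategy). The sketch below simply passes keys through unchanged and is illustrative only:

import org.apache.camel.component.jms.JmsKeyFormatStrategy;

public class MyKeyFormatStrategy implements JmsKeyFormatStrategy {

    // invoked when Camel sends a message: encode the Camel header key into a JMS-safe key
    @Override
    public String encodeKey(String key) {
        return key;
    }

    // invoked when Camel consumes a message: decode the JMS key back into the Camel header key
    @Override
    public String decodeKey(String key) {
        return key;
    }
}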
For the exchange.in.header, the following rules apply for the header values:

- The values must be primitives or their counter objects (such as Integer, Long, Character). The types String, CharSequence, Date, BigDecimal and BigInteger are all converted to their toString() representation. All other types are dropped.
Camel will log with category org.apache.camel.component.jms.JmsBinding at DEBUG level if it drops a given header value. For example:
2008-07-09 06:43:04,046 [main ] DEBUG JmsBinding - Ignoring non primitive header: order of class: org.apache.camel.component.jms.issues.DummyOrder with value: DummyOrder{orderId=333, itemId=4444, quantity=2}
28.8. Message format when receiving
Camel adds the following properties to the Exchange when it receives a message:

Property | Type | Description
---|---|---|
ReplyTo | javax.jms.Destination | The reply destination.
Camel adds the following JMS properties to the In message headers when it receives a JMS message:
Header | Type | Description
---|---|---|
JMSCorrelationID | String | The JMS correlation ID.
JMSDeliveryMode | int | The JMS delivery mode.
JMSDestination | javax.jms.Destination | The JMS destination.
JMSExpiration | long | The JMS expiration.
JMSMessageID | String | The JMS unique message ID.
JMSPriority | int | The JMS priority (with 0 as the lowest priority and 9 as the highest).
JMSRedelivered | boolean | Is the JMS message redelivered.
JMSReplyTo | javax.jms.Destination | The JMS reply-to destination.
JMSTimestamp | long | The JMS timestamp.
JMSType | String | The JMS type.
JMSXGroupID | String | The JMS group ID.
As all the above information is standard JMS you can check the JMS documentation for further details.
28.9. About using Camel to send and receive messages and JMSReplyTo
The JMS component is complex and you have to pay close attention to how it works in some cases. So this is a short summary of some of the areas/pitfalls to look for.
When Camel sends a message using its JmsProducer, it checks the following conditions:

- The message exchange pattern,
- Whether a JMSReplyTo was set in the endpoint or in the message headers,
- Whether any of the following options have been set on the JMS endpoint: disableReplyTo, preserveMessageQos, explicitQosEnabled.
All this can be a tad complex to understand and configure to support your use case.
28.9.1. JmsProducer
The JmsProducer behaves as follows, depending on configuration:

Exchange Pattern | Other options | Description
---|---|---|
InOut | - | Camel will expect a reply, set a temporary JMSReplyTo queue, and after sending the message, start to listen for the reply message on that temporary queue.
InOut | JMSReplyTo is set | Camel will expect a reply and, after sending the message, it will start to listen for the reply message on the specified JMSReplyTo queue.
InOnly | - | Camel will send the message and not expect a reply.
InOnly | JMSReplyTo is set | By default, Camel discards the JMSReplyTo destination and clears the JMSReplyTo header before sending the message. Camel then sends the message and does not expect a reply. You can use preserveMessageQos=true to keep the JMSReplyTo header on the sent message (see the section below about sending an InOnly message and keeping the JMSReplyTo header).
28.9.2. JmsConsumer
The JmsConsumer behaves as follows, depending on configuration:

Exchange Pattern | Other options | Description
---|---|---|
InOut | - | Camel will send the reply back to the JMSReplyTo queue.
InOnly | - | Camel will not send a reply back, as the pattern is InOnly.
- | disableReplyTo=true | This option suppresses replies.
So pay attention to the message exchange pattern set on your exchanges.
If you send a message to a JMS destination in the middle of your route you can specify the exchange pattern to use, see more at Request Reply.
This is useful if you want to send an InOnly
message to a JMS topic:
from("activemq:queue:in") .to("bean:validateOrder") .to(ExchangePattern.InOnly, "activemq:topic:order") .to("bean:handleOrder");
28.10. Reuse endpoint and send to different destinations computed at runtime
If you need to send messages to a lot of different JMS destinations, it makes sense to reuse a JMS endpoint and specify the real destination in a message header. This allows Camel to reuse the same endpoint, but send to different destinations. This greatly reduces the number of endpoints created and economizes on memory and thread resources.
You can specify the destination in the following headers:
Header | Type | Description |
---|---|---|
|
| A destination object. |
|
| The destination name. |
For example, the following route shows how you can compute a destination at run time and use it to override the destination appearing in the JMS URL:
from("file://inbox") .to("bean:computeDestination") .to("activemq:queue:dummy");
The queue name, dummy, is just a placeholder. It must be provided as part of the JMS endpoint URL, but it will be ignored in this example.
In the computeDestination bean, specify the real destination by setting the CamelJmsDestinationName header as follows:
public void setJmsHeader(Exchange exchange) {
    String id = ....
    exchange.getIn().setHeader("CamelJmsDestinationName", "order:" + id);
}
Then Camel will read this header and use it as the destination instead of the one configured on the endpoint. So, in this example Camel sends the message to activemq:queue:order:2, assuming the id value was 2.
If both the CamelJmsDestination and the CamelJmsDestinationName headers are set, CamelJmsDestination takes priority. Keep in mind that the JMS producer removes both the CamelJmsDestination and CamelJmsDestinationName headers from the exchange and does not propagate them to the created JMS message, in order to avoid accidental loops in the routes (in scenarios where the message will be forwarded to another JMS endpoint).
28.11. Configuring different JMS providers
You can configure your JMS provider in Spring XML as follows:
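A minimal sketch of such a configuration, assuming an ActiveMQ broker reachable at tcp://localhost:61616 (the broker URL and bean id are illustrative):

<bean id="activemq" class="org.apache.camel.component.jms.JmsComponent">
  <property name="connectionFactory">
    <bean class="org.apache.activemq.ActiveMQConnectionFactory">
      <property name="brokerURL" value="tcp://localhost:61616"/>
    </bean>
  </property>
</bean>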
Basically, you can configure as many JMS component instances as you wish and give them a unique name using the id attribute. The preceding example configures an activemq component. You could do the same to configure MQSeries, TibCo, BEA, Sonic and so on.
Once you have a named JMS component, you can then refer to endpoints within that component using URIs. For example for the component name, activemq, you can then refer to destinations using the URI format, activemq:[queue:|topic:]destinationName. You can use the same approach for all other JMS providers.
This works because the SpringCamelContext lazily fetches components from the Spring context for the scheme name you use in endpoint URIs, and has the component resolve the endpoint URIs.
28.11.1. Using JNDI to find the ConnectionFactory
If you are using a J2EE container, you might need to look up JNDI to find the JMS ConnectionFactory rather than use the usual <bean> mechanism in Spring. You can do this using Spring’s factory bean or the new Spring XML namespace. For example:
<bean id="weblogic" class="org.apache.camel.component.jms.JmsComponent"> <property name="connectionFactory" ref="myConnectionFactory"/> </bean> <jee:jndi-lookup id="myConnectionFactory" jndi-name="jms/connectionFactory"/>
See The jee schema in the Spring reference documentation for more details about JNDI lookup.
28.12. Concurrent Consuming
A common requirement with JMS is to consume messages concurrently in multiple threads in order to make an application more responsive. You can set the concurrentConsumers option to specify the number of threads servicing the JMS endpoint, as follows:
from("jms:SomeQueue?concurrentConsumers=20"). bean(MyClass.class);
You can configure this option in one of the following ways:

- On the JmsComponent (as shown in the sketch after this list),
- On the endpoint URI or,
- By invoking setConcurrentConsumers() directly on the JmsEndpoint.
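For example, to configure the option once on the component (a minimal sketch; it assumes the component is registered under the name jms and that a CamelContext is available):

// obtain the JMS component from the CamelContext
JmsComponent jms = camelContext.getComponent("jms", JmsComponent.class);
// endpoints created from this component now default to 20 concurrent consumers
jms.getConfiguration().setConcurrentConsumers(20);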
28.12.1. Concurrent Consuming with async consumer
Notice that each concurrent consumer will only pick up the next available message from the JMS broker, when the current message has been fully processed. You can set the option asyncConsumer=true to let the consumer pick up the next message from the JMS queue, while the previous message is being processed asynchronously (by the Asynchronous Routing Engine). See more details in the table on top of the page about the asyncConsumer option.
from("jms:SomeQueue?concurrentConsumers=20&asyncConsumer=true"). bean(MyClass.class);
28.13. Request-reply over JMS
Camel supports Request Reply over JMS. In essence the MEP of the Exchange should be InOut when you send a message to a JMS queue.
Camel offers a number of options to configure request/reply over JMS that influence performance and clustered environments. The table below summarizes the options.
Option | Performance | Cluster | Description
---|---|---|---|
Temporary | Fast | Yes | A temporary queue is used as reply queue, and automatically created by Camel. To use this do not specify a replyTo queue name. And you can optionally configure replyToType=Temporary to make it stand out that temporary queues are in use.
Shared | Slow | Yes | A shared persistent queue is used as reply queue. The queue must be created beforehand, although some brokers can create them on the fly such as Apache ActiveMQ. To use this you must specify the replyTo queue name. And you can optionally configure replyToType=Shared to make it stand out that shared queues are in use. Note that shared reply queues rely on JMS message selectors to correlate the expected replies, which makes them slower than the Temporary and Exclusive alternatives.
Exclusive | Fast | No (*Yes) | An exclusive persistent queue is used as reply queue. The queue must be created beforehand, although some brokers can create them on the fly such as Apache ActiveMQ. To use this you must specify the replyTo queue name. And you must configure replyToType=Exclusive to instruct Camel to use exclusive queues, as Shared is used by default if a replyTo queue name is configured. (*) In a clustered environment each node must use its own unique exclusive reply queue; see the section below about exclusive fixed reply queues.
concurrentConsumers | Fast | Yes | Allows to process reply messages concurrently using concurrent message listeners in use. You can specify a range using the concurrentConsumers and maxConcurrentConsumers options.
maxConcurrentConsumers | Fast | Yes | Allows to process reply messages concurrently using concurrent message listeners in use. You can specify a range using the concurrentConsumers and maxConcurrentConsumers options.
The JmsProducer detects the InOut MEP and provides a JMSReplyTo header with the reply destination to be used. By default Camel uses a temporary queue, but you can use the replyTo option on the endpoint to specify a fixed reply queue (see more below about fixed reply queues).
Camel will automatically set up a consumer which listens on the reply queue, so you do not need to do anything.
This consumer is a Spring DefaultMessageListenerContainer which listens for replies. However, it is fixed to one concurrent consumer by default.
That means replies will be processed in sequence, as there is only one thread to process the replies. You can configure the listener to use concurrent threads using the concurrentConsumers and maxConcurrentConsumers options. This makes it easier to configure in Camel, as shown below:
from(xxx) .inOut().to("activemq:queue:foo?concurrentConsumers=5") .to(yyy) .to(zzz);
In this route we instruct Camel to route replies asynchronously using a thread pool with 5 threads.
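To use a fixed reply queue instead of a temporary queue, configure the replyTo option on the endpoint. A minimal sketch, assuming a queue named bar exists on the broker:

from(xxx)
    .inOut().to("activemq:queue:foo?replyTo=bar")
    .to(yyy);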
28.13.2. Request-reply over JMS and using an exclusive fixed reply queue
In the previous example, Camel would assume that the fixed reply queue named "bar" was shared, and thus it uses a JMSSelector to only consume the reply messages which it expects. However, there is a drawback to doing this, as JMS selectors are slower. Also the consumer on the reply queue is slower to update with new JMS selector ids. In fact it only updates when the receiveTimeout option times out, which by default is 1 second. So in theory the reply messages could take up to about 1 second to be detected. On the other hand, if the fixed reply queue is exclusive to the Camel reply consumer, then we can avoid using JMS selectors, and thus be more performant. In fact as fast as using temporary queues. You can configure the replyToType option to Exclusive to tell Camel that the reply queue is exclusive, as shown in the example below:
from(xxx) .inOut().to("activemq:queue:foo?replyTo=bar&replyToType=Exclusive") .to(yyy)
Note that the queue must be exclusive to each and every endpoint. So if you have two routes, then they each need a unique reply queue, as shown in the next example:
from(xxx) .inOut().to("activemq:queue:foo?replyTo=bar&replyToType=Exclusive") .to(yyy) from(aaa) .inOut().to("activemq:queue:order?replyTo=order.reply&replyToType=Exclusive") .to(bbb)
The same applies if you run in a clustered environment. Then each node in the cluster must use a unique reply queue name, as otherwise each node in the cluster may pick up messages that were intended as a reply on another node. For clustered environments it is recommended to use shared reply queues instead.
28.14. Synchronizing clocks between senders and receivers
When doing messaging between systems, it is desirable that the systems have synchronized clocks. For example when sending a JMS message, you can set a time to live value on the message. The receiver can then inspect this value, and determine if the message has already expired, and thus drop the message instead of consuming and processing it. However, this requires that both sender and receiver have synchronized clocks. If you are using ActiveMQ then you can use the timestamp plugin to synchronize clocks.
28.15. About time to live
First read the section above about synchronizing clocks between senders and receivers.
When you do request/reply (InOut) over JMS with Camel, then Camel uses a timeout on the sender side, which defaults to 20 seconds from the requestTimeout option. You can control this by setting a higher or lower value. However, the time to live value is still set on the message being sent, which requires the clocks to be synchronized between the systems. If they are not, then you may want to disable setting the time to live value on the message. This is possible using the disableTimeToLive option (available from Camel 2.8 onwards). So if you set this option to disableTimeToLive=true, then Camel does not set any time to live value when sending JMS messages. But the request timeout is still active. So for example if you do request/reply over JMS and have disabled time to live, then Camel will still use a timeout of 20 seconds (the requestTimeout option). That option can of course also be configured. So the two options requestTimeout and disableTimeToLive give you fine-grained control when doing request/reply.
You can provide a header in the message to override and use as the request timeout value instead of the endpoint configured value. For example:
from("direct:someWhere") .to("jms:queue:foo?replyTo=bar&requestTimeout=30s") .to("bean:processReply");
In the route above we have an endpoint configured requestTimeout of 30 seconds. So Camel will wait up to 30 seconds for the reply message to come back on the bar queue. If no reply message is received, then an org.apache.camel.ExchangeTimedOutException is set on the Exchange and Camel continues routing the message, which would then fail due to the exception, and Camel’s error handler reacts.
If you want to use a per message timeout value, you can set the header with key org.apache.camel.component.jms.JmsConstants#JMS_REQUEST_TIMEOUT, which has the constant value "CamelJmsRequestTimeout", with a timeout value of type long.
For example we can use a bean to compute the timeout value per individual message, such as calling the "whatIsTheTimeout" method on the service bean as shown below:
from("direct:someWhere") .setHeader("CamelJmsRequestTimeout", method(ServiceBean.class, "whatIsTheTimeout")) .to("jms:queue:foo?replyTo=bar&requestTimeout=30s") .to("bean:processReply");
When you do fire and forget (InOnly) over JMS with Camel, then Camel by default does not set any time to live value on the message. You can configure a value by using the timeToLive option. For example, to indicate a 5 second time to live, set timeToLive=5000. The option disableTimeToLive can be used to force disabling the time to live, also for InOnly messaging. The requestTimeout option is not used for InOnly messaging.
28.16. Enabling Transacted Consumption
A common requirement is to consume from a queue in a transaction and then process the message using the Camel route. To do this, just ensure that you set the following properties on the component/endpoint:
- transacted = true
- transactionManager = a Transaction Manager - typically the JmsTransactionManager
See the Transactional Client EIP pattern for further details.
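For example, a route consuming transacted from a queue could look like the following sketch (the queue and bean names are illustrative); a JmsTransactionManager is created lazily unless you inject your own:

from("jms:queue:orders?transacted=true")
    // if processing fails, the local JMS transaction is rolled back and the broker redelivers the message
    .to("bean:handleOrder");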
Transactions and [Request Reply] over JMS
When using Request Reply over JMS you cannot use a single transaction; JMS will not send any messages until a commit is performed, so the server side won’t receive anything at all until the transaction commits. Therefore to use Request Reply you must commit a transaction after sending the request and then use a separate transaction for receiving the response.
To address this issue the JMS component uses different properties to specify transaction use for oneway messaging and request reply messaging:
The transacted property applies only to the InOnly message Exchange Pattern (MEP).
You can leverage the DMLC transacted session API using the following properties on component/endpoint:
- transacted = true
- lazyCreateTransactionManager = false
The benefit of doing so is that the cacheLevel setting will be honored when using local transactions without a configured TransactionManager. When a TransactionManager is configured, no caching happens at DMLC level and it is necessary to rely on a pooled connection factory. For more details about this kind of setup, see here and here.
28.17. Using JMSReplyTo for late replies
When using Camel as a JMS listener, it sets an Exchange property with the value of the ReplyTo javax.jms.Destination object, having the key ReplyTo. You can obtain this Destination as follows:
Destination replyDestination = exchange.getIn().getHeader(JmsConstants.JMS_REPLY_DESTINATION, Destination.class);
And then later use it to send a reply using regular JMS or Camel.
// we need to pass in the JMS component, and in this sample we use ActiveMQ
JmsEndpoint endpoint = JmsEndpoint.newInstance(replyDestination, activeMQComponent);
// now we have the endpoint we can use regular Camel API to send a message to it
template.sendBody(endpoint, "Here is the late reply.");
A different solution to sending a reply is to provide the replyDestination object in the same Exchange property when sending. Camel will then pick up this property and use it for the real destination. The endpoint URI must include a dummy destination, however. For example:
// we pretend to send it to some non existing dummy queue
template.send("activemq:queue:dummy", new Processor() {
    public void process(Exchange exchange) throws Exception {
        // and here we override the destination with the ReplyTo destination object so the message is sent there instead of dummy
        exchange.getIn().setHeader(JmsConstants.JMS_DESTINATION, replyDestination);
        exchange.getIn().setBody("Here is the late reply.");
    }
});
28.18. Using a request timeout
In the sample below we send a Request Reply style message Exchange (we use the requestBody method = InOut) to the slow queue for further processing in Camel and we wait for a return reply:
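A minimal sketch of such a request/reply call, assuming a ProducerTemplate named template (the queue name and timeout value are illustrative):

// requestBody uses the InOut MEP and blocks until a reply arrives or the requestTimeout is hit
Object reply = template.requestBody("activemq:queue:slow?requestTimeout=30000", "Hello World");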
28.19. Sending an InOnly message and keeping the JMSReplyTo header
When sending to a JMS destination using camel-jms, the producer will use the MEP to detect if it is InOnly or InOut messaging. However, there can be times where you want to send an InOnly message but keep the JMSReplyTo header. To do so you have to instruct Camel to keep it, otherwise the JMSReplyTo header will be dropped.

For example, to send an InOnly message to the foo queue, but with a JMSReplyTo with the bar queue, you can do as follows:
template.send("activemq:queue:foo?preserveMessageQos=true", new Processor() { public void process(Exchange exchange) throws Exception { exchange.getIn().setBody("World"); exchange.getIn().setHeader("JMSReplyTo", "bar"); } });
Notice we use preserveMessageQos=true to instruct Camel to keep the JMSReplyTo header.
28.20. Setting JMS provider options on the destination
Some JMS providers, like IBM’s WebSphere MQ, need options to be set on the JMS destination. For example, you may need to specify the targetClient option. Since targetClient is a WebSphere MQ option and not a Camel URI option, you need to set that on the JMS destination name like so:
// ...
.setHeader("CamelJmsDestinationName", constant("queue:///MY_QUEUE?targetClient=1"))
.to("wmq:queue:MY_QUEUE?useMessageIDAsCorrelationID=true");
Some versions of WMQ won’t accept this option on the destination name and you will get an exception like:
com.ibm.msg.client.jms.DetailedJMSException: JMSCC0005: The specified value 'MY_QUEUE?targetClient=1' is not allowed for 'XMSC_DESTINATION_NAME'
A workaround is to use a custom DestinationResolver:
JmsComponent wmq = new JmsComponent(connectionFactory);
wmq.setDestinationResolver(new DestinationResolver() {
    public Destination resolveDestinationName(Session session, String destinationName, boolean pubSubDomain) throws JMSException {
        MQQueueSession wmqSession = (MQQueueSession) session;
        return wmqSession.createQueue("queue:///" + destinationName + "?targetClient=1");
    }
});
28.21. Spring Boot Auto-Configuration
When using jms with Spring Boot make sure to use the following Maven dependency to have support for auto configuration:
<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-jms-starter</artifactId> </dependency>
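The options can then be set in application.properties (or YAML). A small illustrative sketch using a few of the options from the table below (the values are assumptions, not recommendations):

# configure the JMS component via Spring Boot auto-configuration properties
camel.component.jms.concurrent-consumers=5
camel.component.jms.cache-level-name=CACHE_CONSUMER
camel.component.jms.client-id=myClientId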
The component supports 99 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
camel.component.jms.accept-messages-while-stopping | Specifies whether the consumer accepts messages while it is stopping. You may consider enabling this option if you start and stop JMS routes at runtime, while there are still messages enqueued on the queue. If this option is false, and you stop the JMS route, then messages may be rejected, and the JMS broker would have to attempt redeliveries, which yet again may be rejected, and eventually the message may be moved to a dead letter queue on the JMS broker. To avoid this, it is recommended to enable this option. | false | Boolean |
camel.component.jms.acknowledgement-mode-name | The JMS acknowledgement name, which is one of: SESSION_TRANSACTED, CLIENT_ACKNOWLEDGE, AUTO_ACKNOWLEDGE, DUPS_OK_ACKNOWLEDGE. | AUTO_ACKNOWLEDGE | String |
camel.component.jms.allow-additional-headers | This option is used to allow additional headers which may have values that are invalid according to the JMS specification. For example some message systems such as WMQ do this with header names using the prefix JMS_IBM_MQMD_ containing values with byte array or other invalid types. You can specify multiple header names separated by comma, and use * as suffix for wildcard matching. | String |
camel.component.jms.allow-auto-wired-connection-factory | Whether to auto-discover ConnectionFactory from the registry, if no connection factory has been configured. If only one instance of ConnectionFactory is found then it will be used. This is enabled by default. | true | Boolean |
camel.component.jms.allow-auto-wired-destination-resolver | Whether to auto-discover DestinationResolver from the registry, if no destination resolver has been configured. If only one instance of DestinationResolver is found then it will be used. This is enabled by default. | true | Boolean |
camel.component.jms.allow-null-body | Whether to allow sending messages with no body. If this option is false and the message body is null, then an JMSException is thrown. | true | Boolean |
camel.component.jms.allow-reply-manager-quick-stop | Whether the DefaultMessageListenerContainer used in the reply managers for request-reply messaging allow the DefaultMessageListenerContainer.runningAllowed flag to quick stop in case JmsConfiguration#isAcceptMessagesWhileStopping is enabled, and org.apache.camel.CamelContext is currently being stopped. This quick stop ability is enabled by default in the regular JMS consumers but to enable for reply managers you must enable this flag. | false | Boolean |
camel.component.jms.allow-serialized-headers | Controls whether or not to include serialized headers. Applies only when transferExchange is true. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level. | false | Boolean |
camel.component.jms.always-copy-message | If true, Camel will always make a JMS message copy of the message when it is passed to the producer for sending. Copying the message is needed in some situations, such as when a replyToDestinationSelectorName is set (incidentally, Camel will set the alwaysCopyMessage option to true, if a replyToDestinationSelectorName is set). | false | Boolean |
camel.component.jms.artemis-consumer-priority | Consumer priorities allow you to ensure that high priority consumers receive messages while they are active. Normally, active consumers connected to a queue receive messages from it in a round-robin fashion. When consumer priorities are in use, messages are delivered round-robin if multiple active consumers exist with the same high priority. Messages will only go to lower priority consumers when the high priority consumers do not have credit available to consume the message, or those high priority consumers have declined to accept the message (for instance because it does not meet the criteria of any selectors associated with the consumer). | Integer |
camel.component.jms.artemis-streaming-enabled | Whether optimizing for Apache Artemis streaming mode. This can reduce memory overhead when using Artemis with JMS StreamMessage types. This option must only be enabled if Apache Artemis is being used. | false | Boolean |
camel.component.jms.async-consumer | Whether the JmsConsumer processes the Exchange asynchronously. If enabled then the JmsConsumer may pickup the next message from the JMS queue, while the previous message is being processed asynchronously (by the Asynchronous Routing Engine). This means that messages may be processed not 100% strictly in order. If disabled (as default) then the Exchange is fully processed before the JmsConsumer will pickup the next message from the JMS queue. Note if transacted has been enabled, then asyncConsumer=true does not run asynchronously, as transaction must be executed synchronously (Camel 3.0 may support async transactions). | false | Boolean |
camel.component.jms.async-start-listener | Whether to startup the JmsConsumer message listener asynchronously, when starting a route. For example if a JmsConsumer cannot get a connection to a remote JMS broker, then it may block while retrying and/or failover. This will cause Camel to block while starting routes. By setting this option to true, you will let routes startup, while the JmsConsumer connects to the JMS broker using a dedicated thread in asynchronous mode. If this option is used, then beware that if the connection could not be established, then an exception is logged at WARN level, and the consumer will not be able to receive messages; You can then restart the route to retry. | false | Boolean |
camel.component.jms.async-stop-listener | Whether to stop the JmsConsumer message listener asynchronously, when stopping a route. | false | Boolean |
camel.component.jms.auto-startup | Specifies whether the consumer container should auto-startup. | true | Boolean |
camel.component.jms.autowired-enabled | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | Boolean |
camel.component.jms.cache-level | Sets the cache level by ID for the underlying JMS resources. See cacheLevelName option for more details. | Integer | |
camel.component.jms.cache-level-name | Sets the cache level by name for the underlying JMS resources. Possible values are: CACHE_AUTO, CACHE_CONNECTION, CACHE_CONSUMER, CACHE_NONE, and CACHE_SESSION. The default setting is CACHE_AUTO. See the Spring documentation and Transactions Cache Levels for more information. | CACHE_AUTO | String |
camel.component.jms.client-id | Sets the JMS client ID to use. Note that this value, if specified, must be unique and can only be used by a single JMS connection instance. It is typically only required for durable topic subscriptions. If using Apache ActiveMQ you may prefer to use Virtual Topics instead. | String | |
camel.component.jms.concurrent-consumers | Specifies the default number of concurrent consumers when consuming from JMS (not for request/reply over JMS). See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. When doing request/reply over JMS then the option replyToConcurrentConsumers is used to control number of concurrent consumers on the reply message listener. | 1 | Integer |
camel.component.jms.configuration | To use a shared JMS configuration. The option is a org.apache.camel.component.jms.JmsConfiguration type. | JmsConfiguration | |
camel.component.jms.connection-factory | The connection factory to be use. A connection factory must be configured either on the component or endpoint. The option is a javax.jms.ConnectionFactory type. | ConnectionFactory | |
camel.component.jms.consumer-type | The consumer type to use, which can be one of: Simple, Default, or Custom. The consumer type determines which Spring JMS listener to use. Default will use org.springframework.jms.listener.DefaultMessageListenerContainer, Simple will use org.springframework.jms.listener.SimpleMessageListenerContainer. When Custom is specified, the MessageListenerContainerFactory defined by the messageListenerContainerFactory option will determine what org.springframework.jms.listener.AbstractMessageListenerContainer to use. | ConsumerType | |
camel.component.jms.correlation-property | When using the InOut exchange pattern, use this JMS property instead of the JMSCorrelationID JMS property to correlate messages. If set, messages will be correlated solely on the value of this property; the JMSCorrelationID property will be ignored and not set by Camel. | String |
camel.component.jms.default-task-executor-type | Specifies what default TaskExecutor type to use in the DefaultMessageListenerContainer, for both consumer endpoints and the ReplyTo consumer of producer endpoints. Possible values: SimpleAsync (uses Spring’s SimpleAsyncTaskExecutor) or ThreadPool (uses Spring’s ThreadPoolTaskExecutor with optimal values - cached threadpool-like). If not set, it defaults to the previous behaviour, which uses a cached thread pool for consumer endpoints and SimpleAsync for reply consumers. The use of ThreadPool is recommended to reduce thread trash in elastic configurations with dynamically increasing and decreasing concurrent consumers. | DefaultTaskExecutorType | |
camel.component.jms.delivery-delay | Sets delivery delay to use for send calls for JMS. This option requires JMS 2.0 compliant broker. | -1 | Long |
camel.component.jms.delivery-mode | Specifies the delivery mode to be used. Possible values are those defined by javax.jms.DeliveryMode. NON_PERSISTENT = 1 and PERSISTENT = 2. | Integer | |
camel.component.jms.delivery-persistent | Specifies whether persistent delivery is used by default. | true | Boolean |
camel.component.jms.destination-resolver | A pluggable org.springframework.jms.support.destination.DestinationResolver that allows you to use your own resolver (for example, to lookup the real destination in a JNDI registry). The option is a org.springframework.jms.support.destination.DestinationResolver type. | DestinationResolver | |
camel.component.jms.disable-reply-to | Specifies whether Camel ignores the JMSReplyTo header in messages. If true, Camel does not send a reply back to the destination specified in the JMSReplyTo header. You can use this option if you want Camel to consume from a route and you do not want Camel to automatically send back a reply message because another component in your code handles the reply message. You can also use this option if you want to use Camel as a proxy between different message brokers and you want to route message from one system to another. | false | Boolean |
camel.component.jms.disable-time-to-live | Use this option to force disabling time to live. For example when you do request/reply over JMS, then Camel will by default use the requestTimeout value as time to live on the message being sent. The problem is that the sender and receiver systems have to have their clocks synchronized, so they are in sync. This is not always so easy to achieve. So you can use disableTimeToLive=true to not set a time to live value on the sent message. Then the message will not expire on the receiver system. See below in section About time to live for more details. | false | Boolean |
camel.component.jms.durable-subscription-name | The durable subscriber name for specifying durable topic subscriptions. The clientId option must be configured as well. | String | |
camel.component.jms.eager-loading-of-properties | Enables eager loading of JMS properties and payload as soon as a message is loaded which generally is inefficient as the JMS properties may not be required but sometimes can catch early any issues with the underlying JMS provider and the use of JMS properties. See also the option eagerPoisonBody. | false | Boolean |
camel.component.jms.eager-poison-body | If eagerLoadingOfProperties is enabled and the JMS message payload (JMS body or JMS properties) is poison (cannot be read/mapped), then set this text as the message body instead so the message can be processed (the cause of the poison is already stored as an exception on the Exchange). This can be turned off by setting eagerPoisonBody=false. See also the option eagerLoadingOfProperties. | Poison JMS message due to ${exception.message} | String |
camel.component.jms.enabled | Whether to enable auto configuration of the jms component. This is enabled by default. | Boolean | |
camel.component.jms.error-handler | Specifies a org.springframework.util.ErrorHandler to be invoked in case of any uncaught exceptions thrown while processing a Message. By default these exceptions will be logged at the WARN level, if no errorHandler has been configured. You can configure logging level and whether stack traces should be logged using errorHandlerLoggingLevel and errorHandlerLogStackTrace options. This makes it much easier to configure, than having to code a custom errorHandler. The option is a org.springframework.util.ErrorHandler type. | ErrorHandler | |
camel.component.jms.error-handler-log-stack-trace | Allows to control whether stacktraces should be logged or not, by the default errorHandler. | true | Boolean |
camel.component.jms.error-handler-logging-level | Allows to configure the default errorHandler logging level for logging uncaught exceptions. | LoggingLevel | |
camel.component.jms.exception-listener | Specifies the JMS Exception Listener that is to be notified of any underlying JMS exceptions. The option is a javax.jms.ExceptionListener type. | ExceptionListener | |
camel.component.jms.explicit-qos-enabled | Set if the deliveryMode, priority or timeToLive qualities of service should be used when sending messages. This option is based on Spring’s JmsTemplate. The deliveryMode, priority and timeToLive options are applied to the current endpoint. This contrasts with the preserveMessageQos option, which operates at message granularity, reading QoS properties exclusively from the Camel In message headers. | false | Boolean |
camel.component.jms.expose-listener-session | Specifies whether the listener session should be exposed when consuming messages. | false | Boolean |
camel.component.jms.force-send-original-message | When using mapJmsMessage=false Camel will create a new JMS message to send to a new JMS destination if you touch the headers (get or set) during the route. Set this option to true to force Camel to send the original JMS message that was received. | false | Boolean |
camel.component.jms.format-date-headers-to-iso8601 | Sets whether JMS date properties should be formatted according to the ISO 8601 standard. | false | Boolean |
camel.component.jms.header-filter-strategy | To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter header to and from Camel message. The option is a org.apache.camel.spi.HeaderFilterStrategy type. | HeaderFilterStrategy | |
camel.component.jms.idle-consumer-limit | Specify the limit for the number of consumers that are allowed to be idle at any given time. | 1 | Integer |
camel.component.jms.idle-task-execution-limit | Specifies the limit for idle executions of a receive task, not having received any message within its execution. If this limit is reached, the task will shut down and leave receiving to other executing tasks (in the case of dynamic scheduling; see the maxConcurrentConsumers setting). There is additional doc available from Spring. | 1 | Integer |
camel.component.jms.include-all-j-m-s-x-properties | Whether to include all JMSXxxx properties when mapping from JMS to Camel Message. Setting this to true will include properties such as JMSXAppID, and JMSXUserID etc. Note: If you are using a custom headerFilterStrategy then this option does not apply. | false | Boolean |
camel.component.jms.include-sent-j-m-s-message-i-d | Only applicable when sending to JMS destination using InOnly (eg fire and forget). Enabling this option will enrich the Camel Exchange with the actual JMSMessageID that was used by the JMS client when the message was sent to the JMS destination. | false | Boolean |
camel.component.jms.jms-key-format-strategy | Pluggable strategy for encoding and decoding JMS keys so they can be compliant with the JMS specification. Camel provides two implementations out of the box: default and passthrough. The default strategy will safely marshal dots and hyphens (. and -). The passthrough strategy leaves the key as is. Can be used for JMS brokers which do not care whether JMS header keys contain illegal characters. You can provide your own implementation of the org.apache.camel.component.jms.JmsKeyFormatStrategy and refer to it using the # notation. | JmsKeyFormatStrategy | |
camel.component.jms.jms-message-type | Allows you to force the use of a specific javax.jms.Message implementation for sending JMS messages. Possible values are: Bytes, Map, Object, Stream, Text. By default, Camel would determine which JMS message type to use from the In body type. This option allows you to specify it. | JmsMessageType | |
camel.component.jms.lazy-create-transaction-manager | If true, Camel will create a JmsTransactionManager, if there is no transactionManager injected when option transacted=true. | true | Boolean |
camel.component.jms.lazy-start-producer | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time. | false | Boolean |
camel.component.jms.map-jms-message | Specifies whether Camel should auto map the received JMS message to a suited payload type, such as javax.jms.TextMessage to a String etc. | true | Boolean |
camel.component.jms.max-concurrent-consumers | Specifies the maximum number of concurrent consumers when consuming from JMS (not for request/reply over JMS). See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. When doing request/reply over JMS then the option replyToMaxConcurrentConsumers is used to control number of concurrent consumers on the reply message listener. | Integer | |
camel.component.jms.max-messages-per-task | The number of messages per task. -1 is unlimited. If you use a range for concurrent consumers (eg min max), then this option can be used to set a value to eg 100 to control how fast the consumers will shrink when less work is required. | -1 | Integer |
camel.component.jms.message-converter | To use a custom Spring org.springframework.jms.support.converter.MessageConverter so you can be in control how to map to/from a javax.jms.Message. The option is a org.springframework.jms.support.converter.MessageConverter type. | MessageConverter | |
camel.component.jms.message-created-strategy | To use the given MessageCreatedStrategy which are invoked when Camel creates new instances of javax.jms.Message objects when Camel is sending a JMS message. The option is a org.apache.camel.component.jms.MessageCreatedStrategy type. | MessageCreatedStrategy | |
camel.component.jms.message-id-enabled | When sending, specifies whether message IDs should be added. This is just a hint to the JMS broker. If the JMS provider accepts this hint, these messages must have the message ID set to null; if the provider ignores the hint, the message ID must be set to its normal unique value. | true | Boolean |
camel.component.jms.message-listener-container-factory | Registry ID of the MessageListenerContainerFactory used to determine what org.springframework.jms.listener.AbstractMessageListenerContainer to use to consume messages. Setting this will automatically set consumerType to Custom. The option is a org.apache.camel.component.jms.MessageListenerContainerFactory type. | MessageListenerContainerFactory | |
camel.component.jms.message-timestamp-enabled | Specifies whether timestamps should be enabled by default on sending messages. This is just a hint to the JMS broker. If the JMS provider accepts this hint, these messages must have the timestamp set to zero; if the provider ignores the hint, the timestamp must be set to its normal value. | true | Boolean |
camel.component.jms.password | Password to use with the ConnectionFactory. You can also configure username/password directly on the ConnectionFactory. | String | |
camel.component.jms.preserve-message-qos | Set to true, if you want to send message using the QoS settings specified on the message, instead of the QoS settings on the JMS endpoint. The following three headers are considered JMSPriority, JMSDeliveryMode, and JMSExpiration. You can provide all or only some of them. If not provided, Camel will fall back to use the values from the endpoint instead. So, when using this option, the headers override the values from the endpoint. The explicitQosEnabled option, by contrast, will only use options set on the endpoint, and not values from the message header. | false | Boolean |
camel.component.jms.priority | Values greater than 1 specify the message priority when sending (where 1 is the lowest priority and 9 is the highest). The explicitQosEnabled option must also be enabled in order for this option to have any effect. | 4 | Integer |
camel.component.jms.pub-sub-no-local | Specifies whether to inhibit the delivery of messages published by its own connection. | false | Boolean |
camel.component.jms.queue-browse-strategy | To use a custom QueueBrowseStrategy when browsing queues. The option is a org.apache.camel.component.jms.QueueBrowseStrategy type. | QueueBrowseStrategy | |
camel.component.jms.receive-timeout | The timeout for receiving messages (in milliseconds). The option is a long type. | 1000 | Long |
camel.component.jms.recovery-interval | Specifies the interval between recovery attempts, i.e. when a connection is being refreshed, in milliseconds. The default is 5000 ms, that is, 5 seconds. The option is a long type. | 5000 | Long |
camel.component.jms.reply-to | Provides an explicit ReplyTo destination (overrides any incoming value of Message.getJMSReplyTo() in consumer). | String | |
camel.component.jms.reply-to-cache-level-name | Sets the cache level by name for the reply consumer when doing request/reply over JMS. This option only applies when using fixed reply queues (not temporary). Camel will by default use: CACHE_CONSUMER for exclusive or shared w/ replyToSelectorName. And CACHE_SESSION for shared without replyToSelectorName. Some JMS brokers such as IBM WebSphere may require to set the replyToCacheLevelName=CACHE_NONE to work. Note: If using temporary queues then CACHE_NONE is not allowed, and you must use a higher value such as CACHE_CONSUMER or CACHE_SESSION. | String | |
camel.component.jms.reply-to-concurrent-consumers | Specifies the default number of concurrent consumers when doing request/reply over JMS. See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. | 1 | Integer |
camel.component.jms.reply-to-delivery-persistent | Specifies whether to use persistent delivery by default for replies. | true | Boolean |
camel.component.jms.reply-to-destination-selector-name | Sets the JMS Selector using the fixed name to be used so you can filter out your own replies from the others when using a shared queue (that is, if you are not using a temporary reply queue). | String | |
camel.component.jms.reply-to-max-concurrent-consumers | Specifies the maximum number of concurrent consumers when using request/reply over JMS. See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. | Integer | |
camel.component.jms.reply-to-on-timeout-max-concurrent-consumers | Specifies the maximum number of concurrent consumers for continue routing when timeout occurred when using request/reply over JMS. | 1 | Integer |
camel.component.jms.reply-to-override | Provides an explicit ReplyTo destination in the JMS message, which overrides the setting of replyTo. It is useful if you want to forward the message to a remote Queue and receive the reply message from the ReplyTo destination. | String | |
camel.component.jms.reply-to-same-destination-allowed | Whether a JMS consumer is allowed to send a reply message to the same destination that the consumer is using to consume from. This prevents an endless loop by consuming and sending back the same message to itself. | false | Boolean |
camel.component.jms.reply-to-type | Allows for explicitly specifying which kind of strategy to use for replyTo queues when doing request/reply over JMS. Possible values are: Temporary, Shared, or Exclusive. By default Camel will use temporary queues. However if replyTo has been configured, then Shared is used by default. This option allows you to use exclusive queues instead of shared ones. See Camel JMS documentation for more details, and especially the notes about the implications if running in a clustered environment, and the fact that Shared reply queues have lower performance than the alternatives Temporary and Exclusive. | ReplyToType |
camel.component.jms.request-timeout | The timeout for waiting for a reply when using the InOut Exchange Pattern (in milliseconds). The default is 20 seconds. You can include the header CamelJmsRequestTimeout to override this endpoint configured timeout value, and thus have per message individual timeout values. See also the requestTimeoutCheckerInterval option. The option is a long type. | 20000 | Long |
camel.component.jms.request-timeout-checker-interval | Configures how often Camel should check for timed out Exchanges when doing request/reply over JMS. By default Camel checks once per second. But if you must react faster when a timeout occurs, then you can lower this interval, to check more frequently. The timeout is determined by the option requestTimeout. The option is a long type. | 1000 | Long |
camel.component.jms.selector | Sets the JMS selector to use. | String | |
camel.component.jms.stream-message-type-enabled | Sets whether StreamMessage type is enabled or not. Message payloads of streaming kind such as files, InputStream, etc. will either be sent as BytesMessage or StreamMessage. This option controls which kind will be used. By default BytesMessage is used, which enforces the entire message payload to be read into memory. By enabling this option the message payload is read into memory in chunks, and each chunk is then written to the StreamMessage until there is no more data. | false | Boolean |
camel.component.jms.subscription-durable | Set whether to make the subscription durable. The durable subscription name to be used can be specified through the subscriptionName property. Default is false. Set this to true to register a durable subscription, typically in combination with a subscriptionName value (unless your message listener class name is good enough as subscription name). Only makes sense when listening to a topic (pub-sub domain), therefore this method switches the pubSubDomain flag as well. | false | Boolean |
camel.component.jms.subscription-name | Set the name of a subscription to create. To be applied in case of a topic (pub-sub domain) with a shared or durable subscription. The subscription name needs to be unique within this client’s JMS client id. Default is the class name of the specified message listener. Note: Only 1 concurrent consumer (which is the default of this message listener container) is allowed for each subscription, except for a shared subscription (which requires JMS 2.0). | String | |
camel.component.jms.subscription-shared | Set whether to make the subscription shared. The shared subscription name to be used can be specified through the subscriptionName property. Default is false. Set this to true to register a shared subscription, typically in combination with a subscriptionName value (unless your message listener class name is good enough as subscription name). Note that shared subscriptions may also be durable, so this flag can (and often will) be combined with subscriptionDurable as well. Only makes sense when listening to a topic (pub-sub domain), therefore this method switches the pubSubDomain flag as well. Requires a JMS 2.0 compatible message broker. | false | Boolean |
camel.component.jms.synchronous | Sets whether synchronous processing should be strictly used. | false | Boolean |
camel.component.jms.task-executor | Allows you to specify a custom task executor for consuming messages. The option is a org.springframework.core.task.TaskExecutor type. | TaskExecutor | |
camel.component.jms.test-connection-on-startup | Specifies whether to test the connection on startup. This ensures that when Camel starts, all the JMS consumers have a valid connection to the JMS broker. If a connection cannot be granted then Camel throws an exception on startup. This ensures that Camel is not started with failed connections. The JMS producers are tested as well. | false | Boolean |
camel.component.jms.time-to-live | When sending messages, specifies the time-to-live of the message (in milliseconds). | -1 | Long |
camel.component.jms.transacted | Specifies whether to use transacted mode. | false | Boolean |
camel.component.jms.transacted-in-out | Specifies whether InOut operations (request reply) default to using transacted mode. If this flag is set to true, then Spring JmsTemplate will have sessionTransacted set to true, and the acknowledgeMode as transacted on the JmsTemplate used for InOut operations. Note from Spring JMS: within a JTA transaction, the parameters passed to createQueue, createTopic methods are not taken into account. Depending on the Java EE transaction context, the container makes its own decisions on these values. Analogously, these parameters are not taken into account within a locally managed transaction either, since Spring JMS operates on an existing JMS Session in this case. Setting this flag to true will use a short local JMS transaction when running outside of a managed transaction, and a synchronized local JMS transaction in case of a managed transaction (other than an XA transaction) being present. This has the effect of a local JMS transaction being managed alongside the main transaction (which might be a native JDBC transaction), with the JMS transaction committing right after the main transaction. | false | Boolean |
camel.component.jms.transaction-manager | The Spring transaction manager to use. The option is a org.springframework.transaction.PlatformTransactionManager type. | PlatformTransactionManager | |
camel.component.jms.transaction-name | The name of the transaction to use. | String | |
camel.component.jms.transaction-timeout | The timeout value of the transaction (in seconds), if using transacted mode. | -1 | Integer |
camel.component.jms.transfer-exception | If enabled and you are using Request Reply messaging (InOut) and an Exchange failed on the consumer side, then the caused Exception will be sent back in the response as a javax.jms.ObjectMessage. If the client is Camel, the returned Exception is rethrown. This allows you to use Camel JMS as a bridge in your routing - for example, using persistent queues to enable robust routing. Notice that if you also have transferExchange enabled, this option takes precedence. The caught exception is required to be serializable. The original Exception on the consumer side can be wrapped in an outer exception such as org.apache.camel.RuntimeCamelException when returned to the producer. Use this with caution as the data is using Java Object serialization and requires the receiver to be able to deserialize the data at Class level, which forces a strong coupling between the producers and consumers. | false | Boolean |
camel.component.jms.transfer-exchange | You can transfer the exchange over the wire instead of just the body and headers. The following fields are transferred: In body, Out body, Fault body, In headers, Out headers, Fault headers, exchange properties, exchange exception. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level. You must enable this option on both the producer and consumer side, so Camel knows the payload is an Exchange and not a regular payload. Use this with caution as the data is using Java Object serialization and requires the receiver to be able to deserialize the data at Class level, which forces a strong coupling between the producers and consumers having to use compatible Camel versions. | false | Boolean |
camel.component.jms.use-message-i-d-as-correlation-i-d | Specifies whether JMSMessageID should always be used as JMSCorrelationID for InOut messages. | false | Boolean |
camel.component.jms.username | Username to use with the ConnectionFactory. You can also configure username/password directly on the ConnectionFactory. | String | |
camel.component.jms.wait-for-provision-correlation-to-be-updated-counter | Number of times to wait for provisional correlation id to be updated to the actual correlation id when doing request/reply over JMS and when the option useMessageIDAsCorrelationID is enabled. | 50 | Integer |
camel.component.jms.wait-for-provision-correlation-to-be-updated-thread-sleeping-time | Interval in millis to sleep each time while waiting for provisional correlation id to be updated. The option is a long type. | 100 | Long |
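Several of these options are typically combined in a route. As a minimal sketch, the Java DSL route below performs request/reply over JMS with the requestTimeout and disableTimeToLive options set on the endpoint URI, assuming a connection factory has already been configured for the component (for example through Spring Boot auto-configuration); the queue name and timeout value are illustrative.
import org.apache.camel.builder.RouteBuilder;

// Sketch: request/reply over JMS with a 20 second reply timeout and time-to-live
// disabled on the request message, so it does not expire on a receiver whose clock
// is not synchronized with the sender. The queue name "orders" is illustrative.
public class JmsRequestReplyRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("direct:placeOrder")
            .to("jms:queue:orders?requestTimeout=20000&disableTimeToLive=true")
            .log("Received reply: ${body}");
    }
}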
Chapter 29. Kafka
Both producer and consumer are supported
The Kafka component is used for communicating with the Apache Kafka message broker.
Maven users will need to add the following dependency to their pom.xml
for this component.
<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-kafka</artifactId> <version>{CamelSBVersion}</version> <!-- use the same version as your Camel core version --> </dependency>
29.1. URI format
kafka:topic[?options]
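For illustration, a minimal consumer route and producer route using this URI format might look as follows; the broker address, topic name, and group id are assumed values for a local setup.
import org.apache.camel.builder.RouteBuilder;

// Sketch: consume from and produce to a Kafka topic.
// The broker address, topic name and group id are illustrative values.
public class KafkaRoutes extends RouteBuilder {
    @Override
    public void configure() {
        // Consumer: read messages from the "test" topic
        from("kafka:test?brokers=localhost:9092&groupId=myGroup")
            .log("Message received from Kafka: ${body}");

        // Producer: send the incoming message body to the "test" topic
        from("direct:toKafka")
            .to("kafka:test?brokers=localhost:9092");
    }
}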
29.2. Configuring Options
Camel components are configured on two separate levels:
- component level
- endpoint level
29.2.1. Configuring Component Options
The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth.
Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all.
Configuring components can be done with the Component DSL, in a configuration file (application.properties|yaml), or directly with Java code.
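For example, the Kafka component described in this chapter could be pre-configured directly with Java code along the following lines. This is a sketch only: the broker address and group id are assumptions, and the same options could equally be set in application.properties or with the Component DSL.
import org.apache.camel.CamelContext;
import org.apache.camel.component.kafka.KafkaComponent;

// Sketch: set common options on the Kafka component so all kafka: endpoints inherit them.
// The broker address and group id are illustrative values.
public class KafkaComponentConfigurer {
    public void configure(CamelContext camelContext) {
        KafkaComponent kafka = camelContext.getComponent("kafka", KafkaComponent.class);
        kafka.getConfiguration().setBrokers("localhost:9092");
        kafka.getConfiguration().setGroupId("myGroup");
    }
}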
29.2.2. Configuring Endpoint Options
Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both.
Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints.
A good practice when configuring options is to use Property Placeholders, which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse.
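As an illustration, an endpoint URI can reference placeholders instead of hardcoded values. In the sketch below the property names kafka.topic and kafka.brokers are assumptions and would be defined in, for example, application.properties.
import org.apache.camel.builder.RouteBuilder;

// Sketch: resolve the topic name and broker list from configuration properties
// instead of hardcoding them in the endpoint URI.
// The property names kafka.topic and kafka.brokers are illustrative.
public class PlaceholderRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("direct:send")
            .to("kafka:{{kafka.topic}}?brokers={{kafka.brokers}}");
    }
}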
The following two sections lists all the options, firstly for the component followed by the endpoint.
29.3. Component Options
The Kafka component supports 104 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
additionalProperties (common) | Sets additional properties for either kafka consumer or kafka producer in case they can’t be set directly on the camel configurations (e.g: new Kafka properties that are not reflected yet in Camel configurations), the properties have to be prefixed with additionalProperties.. E.g: additionalProperties.transactional.id=12345&additionalProperties.schema.registry.url=http://localhost:8811/avro. | Map | |
brokers (common) | URL of the Kafka brokers to use. The format is host1:port1,host2:port2, and the list can be a subset of brokers or a VIP pointing to a subset of brokers. This option is known as bootstrap.servers in the Kafka documentation. | String | |
clientId (common) | The client id is a user-specified string sent in each request to help trace calls. It should logically identify the application making the request. | String | |
configuration (common) | Allows to pre-configure the Kafka component with common options that the endpoints will reuse. | KafkaConfiguration | |
headerFilterStrategy (common) | To use a custom HeaderFilterStrategy to filter header to and from Camel message. | HeaderFilterStrategy | |
reconnectBackoffMaxMs (common) | The maximum amount of time in milliseconds to wait when reconnecting to a broker that has repeatedly failed to connect. If provided, the backoff per host will increase exponentially for each consecutive connection failure, up to this maximum. After calculating the backoff increase, 20% random jitter is added to avoid connection storms. | 1000 | Integer |
shutdownTimeout (common) | Timeout in milliseconds to wait gracefully for the consumer or producer to shutdown and terminate its worker threads. | 30000 | int |
allowManualCommit (consumer) | Whether to allow doing manual commits via KafkaManualCommit. If this option is enabled then an instance of KafkaManualCommit is stored on the Exchange message header, which allows end users to access this API and perform manual offset commits via the Kafka consumer. See the example following this table. | false | boolean |
autoCommitEnable (consumer) | If true, periodically commit to ZooKeeper the offset of messages already fetched by the consumer. This committed offset will be used when the process fails as the position from which the new consumer will begin. | true | Boolean |
autoCommitIntervalMs (consumer) | The frequency in ms that the consumer offsets are committed to zookeeper. | 5000 | Integer |
autoCommitOnStop (consumer) | Whether to perform an explicit auto commit when the consumer stops to ensure the broker has a commit from the last consumed message. This requires the option autoCommitEnable is turned on. The possible values are: sync, async, or none. And sync is the default value. | sync | String |
autoOffsetReset (consumer) | What to do when there is no initial offset in ZooKeeper or if an offset is out of range: earliest: automatically reset the offset to the earliest offset; latest: automatically reset the offset to the latest offset; fail: throw an exception to the consumer. | latest | String |
breakOnFirstError (consumer) | This option controls what happens when a consumer is processing an exchange and it fails. If the option is false then the consumer continues to the next message and processes it. If the option is true then the consumer breaks out, seeks back to the offset of the message that caused the failure, and then re-attempts to process this message. However this can lead to endless processing of the same message if it is bound to fail every time, e.g. a poison message. Therefore it is recommended to deal with that, for example, by using Camel’s error handler. | false | boolean |
bridgeErrorHandler (consumer) | Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions that occur while the consumer is trying to pick up incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN or ERROR level and ignored. | false | boolean |
checkCrcs (consumer) | Automatically check the CRC32 of the records consumed. This ensures no on-the-wire or on-disk corruption to the messages occurred. This check adds some overhead, so it may be disabled in cases seeking extreme performance. | true | Boolean |
commitTimeoutMs (consumer) | The maximum time, in milliseconds, that the code will wait for a synchronous commit to complete. | 5000 | Long |
consumerRequestTimeoutMs (consumer) | The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted. | 40000 | Integer |
consumersCount (consumer) | The number of consumers that connect to the Kafka server. Each consumer is run on a separate thread that retrieves and processes the incoming data. | 1 | int |
fetchMaxBytes (consumer) | The maximum amount of data the server should return for a fetch request. This is not an absolute maximum; if the first message in the first non-empty partition of the fetch is larger than this value, the message will still be returned to ensure that the consumer can make progress. The maximum message size accepted by the broker is defined via message.max.bytes (broker config) or max.message.bytes (topic config). Note that the consumer performs multiple fetches in parallel. | 52428800 | Integer |
fetchMinBytes (consumer) | The minimum amount of data the server should return for a fetch request. If insufficient data is available the request will wait for that much data to accumulate before answering the request. | 1 | Integer |
fetchWaitMaxMs (consumer) | The maximum amount of time the server will block before answering the fetch request if there isn’t sufficient data to immediately satisfy fetch.min.bytes. | 500 | Integer |
groupId (consumer) | A string that uniquely identifies the group of consumer processes to which this consumer belongs. By setting the same group id multiple processes indicate that they are all part of the same consumer group. This option is required for consumers. | String | |
groupInstanceId (consumer) | A unique identifier of the consumer instance provided by the end user. Only non-empty strings are permitted. If set, the consumer is treated as a static member, which means that only one instance with this ID is allowed in the consumer group at any time. This can be used in combination with a larger session timeout to avoid group rebalances caused by transient unavailability (e.g. process restarts). If not set, the consumer will join the group as a dynamic member, which is the traditional behavior. | String | |
headerDeserializer (consumer) | To use a custom KafkaHeaderDeserializer to deserialize kafka headers values. | KafkaHeaderDeserializer | |
heartbeatIntervalMs (consumer) | The expected time between heartbeats to the consumer coordinator when using Kafka’s group management facilities. Heartbeats are used to ensure that the consumer’s session stays active and to facilitate rebalancing when new consumers join or leave the group. The value must be set lower than session.timeout.ms, but typically should be set no higher than 1/3 of that value. It can be adjusted even lower to control the expected time for normal rebalances. | 3000 | Integer |
keyDeserializer (consumer) | Deserializer class for key that implements the Deserializer interface. | org.apache.kafka.common.serialization.StringDeserializer | String |
maxPartitionFetchBytes (consumer) | The maximum amount of data per-partition the server will return. The maximum total memory used for a request will be #partitions * max.partition.fetch.bytes. This size must be at least as large as the maximum message size the server allows or else it is possible for the producer to send messages larger than the consumer can fetch. If that happens, the consumer can get stuck trying to fetch a large message on a certain partition. | 1048576 | Integer |
maxPollIntervalMs (consumer) | The maximum delay between invocations of poll() when using consumer group management. This places an upper bound on the amount of time that the consumer can be idle before fetching more records. If poll() is not called before expiration of this timeout, then the consumer is considered failed and the group will rebalance in order to reassign the partitions to another member. | Long | |
maxPollRecords (consumer) | The maximum number of records returned in a single call to poll(). | 500 | Integer |
offsetRepository (consumer) | The offset repository to use in order to locally store the offset of each partition of the topic. Defining one will disable the autocommit. | StateRepository | |
partitionAssignor (consumer) | The class name of the partition assignment strategy that the client will use to distribute partition ownership amongst consumer instances when group management is used. | org.apache.kafka.clients.consumer.RangeAssignor | String |
pollOnError (consumer) | What to do if kafka threw an exception while polling for new messages. Will by default use the value from the component configuration unless an explicit value has been configured on the endpoint level. DISCARD will discard the message and continue to poll the next message. ERROR_HANDLER will use Camel’s error handler to process the exception, and afterwards continue to poll the next message. RECONNECT will re-connect the consumer and try to poll the message again. RETRY will let the consumer retry polling the same message again. STOP will stop the consumer (it has to be manually started/restarted if the consumer should be able to consume messages again). | ERROR_HANDLER | PollOnError |
pollTimeoutMs (consumer) | The timeout used when polling the KafkaConsumer. | 5000 | Long |
resumeStrategy (consumer) | This option allows the user to set a custom resume strategy. The resume strategy is executed when partitions are assigned (i.e.: when connecting or reconnecting). It allows implementations to customize how to resume operations and serve as more flexible alternative to the seekTo and the offsetRepository mechanisms. See the KafkaConsumerResumeStrategy for implementation details. This option does not affect the auto commit setting. It is likely that implementations using this setting will also want to evaluate using the manual commit option along with this. | KafkaConsumerResumeStrategy | |
seekTo (consumer) | Set if KafkaConsumer will read from the beginning or the end on startup: beginning: read from the beginning; end: read from the end. This is replacing the earlier property seekToBeginning. | String |
sessionTimeoutMs (consumer) | The timeout used to detect failures when using Kafka’s group management facilities. | 10000 | Integer |
specificAvroReader (consumer) | This enables the use of a specific Avro reader for use with the Confluent Platform schema registry and the io.confluent.kafka.serializers.KafkaAvroDeserializer. This option is only available in the Confluent Platform (not standard Apache Kafka). | false | boolean |
topicIsPattern (consumer) | Whether the topic is a pattern (regular expression). This can be used to subscribe to dynamic number of topics matching the pattern. | false | boolean |
valueDeserializer (consumer) | Deserializer class for value that implements the Deserializer interface. | org.apache.kafka.common.serialization.StringDeserializer | String |
kafkaManualCommitFactory (consumer (advanced)) | Autowired Factory to use for creating KafkaManualCommit instances. This allows to plugin a custom factory to create custom KafkaManualCommit instances in case special logic is needed when doing manual commits that deviates from the default implementation that comes out of the box. | KafkaManualCommitFactory | |
pollExceptionStrategy (consumer (advanced)) | Autowired To use a custom strategy with the consumer to control how to handle exceptions thrown from the Kafka broker while polling messages. | PollExceptionStrategy |
bufferMemorySize (producer) | The total bytes of memory the producer can use to buffer records waiting to be sent to the server. If records are sent faster than they can be delivered to the server the producer will either block or throw an exception based on the preference specified by block.on.buffer.full. This setting should correspond roughly to the total memory the producer will use, but is not a hard bound since not all memory the producer uses is used for buffering. Some additional memory will be used for compression (if compression is enabled) as well as for maintaining in-flight requests. | 33554432 | Integer |
compressionCodec (producer) | This parameter allows you to specify the compression codec for all data generated by this producer. Valid values are none, gzip and snappy. | none | String |
connectionMaxIdleMs (producer) | Close idle connections after the number of milliseconds specified by this config. | 540000 | Integer |
deliveryTimeoutMs (producer) | An upper bound on the time to report success or failure after a call to send() returns. This limits the total time that a record will be delayed prior to sending, the time to await acknowledgement from the broker (if expected), and the time allowed for retriable send failures. | 120000 | Integer |
enableIdempotence (producer) | If set to 'true' the producer will ensure that exactly one copy of each message is written in the stream. If 'false', producer retries may write duplicates of the retried message in the stream. If set to true this option will require max.in.flight.requests.per.connection to be set to 1 and retries cannot be zero and additionally acks must be set to 'all'. | false | boolean |
headerSerializer (producer) | To use a custom KafkaHeaderSerializer to serialize kafka headers values. | KafkaHeaderSerializer | |
key (producer) | The record key (or null if no key is specified). If this option has been configured then it takes precedence over the header KafkaConstants#KEY. | String |
keySerializer (producer) | The serializer class for keys (defaults to the same as for messages if nothing is given). | org.apache.kafka.common.serialization.StringSerializer | String |
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time. | false | boolean |
lingerMs (producer) | The producer groups together any records that arrive in between request transmissions into a single batched request. Normally this occurs only under load when records arrive faster than they can be sent out. However in some circumstances the client may want to reduce the number of requests even under moderate load. This setting accomplishes this by adding a small amount of artificial delay; that is, rather than immediately sending out a record the producer will wait for up to the given delay to allow other records to be sent so that the sends can be batched together. This can be thought of as analogous to Nagle’s algorithm in TCP. This setting gives the upper bound on the delay for batching: once we get batch.size worth of records for a partition it will be sent immediately regardless of this setting, however if we have fewer than this many bytes accumulated for this partition we will 'linger' for the specified time waiting for more records to show up. This setting defaults to 0 (i.e. no delay). Setting linger.ms=5, for example, would have the effect of reducing the number of requests sent but would add up to 5ms of latency to records sent in the absence of load. | 0 | Integer |
maxBlockMs (producer) | The configuration controls how long sending to kafka will block. These methods can be blocked for multiple reasons, for example: buffer full, metadata unavailable. This configuration imposes a maximum limit on the total time spent in fetching metadata, serialization of key and value, partitioning and allocation of buffer memory when doing a send(). In case of partitionsFor(), this configuration imposes a maximum time threshold on waiting for metadata. | 60000 | Integer |
maxInFlightRequest (producer) | The maximum number of unacknowledged requests the client will send on a single connection before blocking. Note that if this setting is set to be greater than 1 and there are failed sends, there is a risk of message re-ordering due to retries (i.e., if retries are enabled). | 5 | Integer |
maxRequestSize (producer) | The maximum size of a request. This is also effectively a cap on the maximum record size. Note that the server has its own cap on record size which may be different from this. This setting will limit the number of record batches the producer will send in a single request to avoid sending huge requests. | 1048576 | Integer |
metadataMaxAgeMs (producer) | The period of time in milliseconds after which we force a refresh of metadata even if we haven’t seen any partition leadership changes to proactively discover any new brokers or partitions. | 300000 | Integer |
metricReporters (producer) | A list of classes to use as metrics reporters. Implementing the MetricReporter interface allows plugging in classes that will be notified of new metric creation. The JmxReporter is always included to register JMX statistics. | String | |
metricsSampleWindowMs (producer) | The number of samples maintained to compute metrics. | 30000 | Integer |
noOfMetricsSample (producer) | The number of samples maintained to compute metrics. | 2 | Integer |
partitioner (producer) | The partitioner class for partitioning messages amongst sub-topics. The default partitioner is based on the hash of the key. | org.apache.kafka.clients.producer.internals.DefaultPartitioner | String |
partitionKey (producer) | The partition to which the record will be sent (or null if no partition was specified). If this option has been configured then it takes precedence over the header KafkaConstants#PARTITION_KEY. | Integer |
producerBatchSize (producer) | The producer will attempt to batch records together into fewer requests whenever multiple records are being sent to the same partition. This helps performance on both the client and the server. This configuration controls the default batch size in bytes. No attempt will be made to batch records larger than this size. Requests sent to brokers will contain multiple batches, one for each partition with data available to be sent. A small batch size will make batching less common and may reduce throughput (a batch size of zero will disable batching entirely). A very large batch size may use memory a bit more wastefully as we will always allocate a buffer of the specified batch size in anticipation of additional records. | 16384 | Integer |
queueBufferingMaxMessages (producer) | The maximum number of unsent messages that can be queued up by the producer when using async mode before either the producer must be blocked or data must be dropped. | 10000 | Integer |
receiveBufferBytes (producer) | The size of the TCP receive buffer (SO_RCVBUF) to use when reading data. | 65536 | Integer |
reconnectBackoffMs (producer) | The amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all requests sent by the consumer to the broker. | 50 | Integer |
recordMetadata (producer) | Whether the producer should store the RecordMetadata results from sending to Kafka. The results are stored in a List of RecordMetadata. The list is stored on a header with the key KafkaConstants#KAFKA_RECORDMETA. | true | boolean |
requestRequiredAcks (producer) | The number of acknowledgments the producer requires the leader to have received before considering a request complete. This controls the durability of records that are sent. The following settings are common: acks=0 If set to zero then the producer will not wait for any acknowledgment from the server at all. The record will be immediately added to the socket buffer and considered sent. No guarantee can be made that the server has received the record in this case, and the retries configuration will not take effect (as the client won’t generally know of any failures). The offset given back for each record will always be set to -1. acks=1 This will mean the leader will write the record to its local log but will respond without awaiting full acknowledgement from all followers. In this case should the leader fail immediately after acknowledging the record but before the followers have replicated it then the record will be lost. acks=all This means the leader will wait for the full set of in-sync replicas to acknowledge the record. This guarantees that the record will not be lost as long as at least one in-sync replica remains alive. This is the strongest available guarantee. | 1 | String |
requestTimeoutMs (producer) | The amount of time the broker will wait trying to meet the request.required.acks requirement before sending back an error to the client. | 30000 | Integer |
retries (producer) | Setting a value greater than zero will cause the client to resend any record whose send fails with a potentially transient error. Note that this retry is no different than if the client resent the record upon receiving the error. Allowing retries will potentially change the ordering of records because if two records are sent to a single partition, and the first fails and is retried but the second succeeds, then the second record may appear first. | 0 | Integer |
retryBackoffMs (producer) | Before each retry, the producer refreshes the metadata of relevant topics to see if a new leader has been elected. Since leader election takes a bit of time, this property specifies the amount of time that the producer waits before refreshing the metadata. | 100 | Integer |
sendBufferBytes (producer) | Socket write buffer size. | 131072 | Integer |
valueSerializer (producer) | The serializer class for messages. | org.apache.kafka.common.serialization.StringSerializer | String |
workerPool (producer) | To use a custom worker pool for continued routing of the Exchange after the Kafka server has acknowledged the message that was sent to it from the KafkaProducer, using asynchronous non-blocking processing. If using this option then you must handle the lifecycle of the thread pool to shut the pool down when no longer needed. | ExecutorService |
workerPoolCoreSize (producer) | Number of core threads for the worker pool for continued routing of the Exchange after the Kafka server has acknowledged the message that was sent to it from the KafkaProducer, using asynchronous non-blocking processing. | 10 | Integer |
workerPoolMaxSize (producer) | Maximum number of threads for the worker pool for continued routing of the Exchange after the Kafka server has acknowledged the message that was sent to it from the KafkaProducer, using asynchronous non-blocking processing. | 20 | Integer |
autowiredEnabled (advanced) | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | boolean |
kafkaClientFactory (advanced) | Autowired Factory to use for creating org.apache.kafka.clients.consumer.KafkaConsumer and org.apache.kafka.clients.producer.KafkaProducer instances. This allows to configure a custom factory to create instances with logic that extends the vanilla Kafka clients. | KafkaClientFactory | |
synchronous (advanced) | Sets whether synchronous processing should be strictly used. | false | boolean |
schemaRegistryURL (confluent) | URL of the Confluent Platform schema registry servers to use. The format is host1:port1,host2:port2. This is known as schema.registry.url in the Confluent Platform documentation. This option is only available in the Confluent Platform (not standard Apache Kafka). | String | |
interceptorClasses (monitoring) | Sets interceptors for producer or consumers. Producer interceptors have to be classes implementing org.apache.kafka.clients.producer.ProducerInterceptor. Consumer interceptors have to be classes implementing org.apache.kafka.clients.consumer.ConsumerInterceptor. Note that if you use a producer interceptor on a consumer it will throw a class cast exception at runtime. | String |
kerberosBeforeReloginMinTime (security) | Login thread sleep time between refresh attempts. | 60000 | Integer |
kerberosInitCmd (security) | Kerberos kinit command path. Default is /usr/bin/kinit. | /usr/bin/kinit | String |
kerberosPrincipalToLocalRules (security) | A list of rules for mapping from principal names to short names (typically operating system usernames). The rules are evaluated in order and the first rule that matches a principal name is used to map it to a short name. Any later rules in the list are ignored. By default, principal names of the form {username}/{hostname}@{REALM} are mapped to {username}. For more details on the format please see the security authorization and acls documentation. Multiple values can be separated by comma. | DEFAULT | String |
kerberosRenewJitter (security) | Percentage of random jitter added to the renewal time. | 0.05 | Double |
kerberosRenewWindowFactor (security) | Login thread will sleep until the specified window factor of time from last refresh to ticket’s expiry has been reached, at which time it will try to renew the ticket. | 0.8 | Double |
saslJaasConfig (security) | Expose the kafka sasl.jaas.config parameter. Example: org.apache.kafka.common.security.plain.PlainLoginModule required username=USERNAME password=PASSWORD;. | String |
saslKerberosServiceName (security) | The Kerberos principal name that Kafka runs as. This can be defined either in Kafka’s JAAS config or in Kafka’s config. | String | |
saslMechanism (security) | The Simple Authentication and Security Layer (SASL) Mechanism used. For the valid values see . | GSSAPI | String |
securityProtocol (security) | Protocol used to communicate with brokers. SASL_PLAINTEXT, PLAINTEXT and SSL are supported. | PLAINTEXT | String |
sslCipherSuites (security) | A list of cipher suites. This is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol. By default all the available cipher suites are supported. | String |
sslContextParameters (security) | SSL configuration using a Camel SSLContextParameters object. If configured it’s applied before the other SSL endpoint parameters. NOTE: Kafka only supports loading keystore from file locations, so prefix the location with file: in the KeyStoreParameters.resource option. | SSLContextParameters | |
sslEnabledProtocols (security) | The list of protocols enabled for SSL connections. TLSv1.2, TLSv1.1 and TLSv1 are enabled by default. | String | |
sslEndpointAlgorithm (security) | The endpoint identification algorithm to validate server hostname using server certificate. | https | String |
sslKeymanagerAlgorithm (security) | The algorithm used by key manager factory for SSL connections. Default value is the key manager factory algorithm configured for the Java Virtual Machine. | SunX509 | String |
sslKeyPassword (security) | The password of the private key in the key store file. This is optional for client. | String | |
sslKeystoreLocation (security) | The location of the key store file. This is optional for client and can be used for two-way authentication for client. | String | |
sslKeystorePassword (security) | The store password for the key store file. This is optional for client and only needed if ssl.keystore.location is configured. | String |
sslKeystoreType (security) | The file format of the key store file. This is optional for client. Default value is JKS. | JKS | String |
sslProtocol (security) | The SSL protocol used to generate the SSLContext. Default setting is TLS, which is fine for most cases. Allowed values in recent JVMs are TLS, TLSv1.1 and TLSv1.2. SSL, SSLv2 and SSLv3 may be supported in older JVMs, but their usage is discouraged due to known security vulnerabilities. | String | |
sslProvider (security) | The name of the security provider used for SSL connections. Default value is the default security provider of the JVM. | String | |
sslTrustmanagerAlgorithm (security) | The algorithm used by trust manager factory for SSL connections. Default value is the trust manager factory algorithm configured for the Java Virtual Machine. | PKIX | String |
sslTruststoreLocation (security) | The location of the trust store file. | String | |
sslTruststorePassword (security) | The password for the trust store file. | String | |
sslTruststoreType (security) | The file format of the trust store file. Default value is JKS. | JKS | String |
useGlobalSslContextParameters (security) | Enable usage of global SSL context parameters. | false | boolean |
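As an example of how some of these options work together, the sketch below disables auto commit and uses the allowManualCommit option so offsets are committed manually from the route. The broker address, topic, and group id are illustrative, and the package of KafkaManualCommit as well as the exact commit method name (older releases exposed commitSync()) can vary between Camel versions.
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.kafka.KafkaConstants;
// In some Camel releases this interface lives in org.apache.camel.component.kafka instead.
import org.apache.camel.component.kafka.consumer.KafkaManualCommit;

// Sketch: consume with auto commit disabled and commit offsets manually after processing.
// Broker address, topic and group id are illustrative values.
public class ManualCommitRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("kafka:payments?brokers=localhost:9092&groupId=paymentGroup"
                + "&autoCommitEnable=false&allowManualCommit=true")
            .log("Processing: ${body}")
            .process(exchange -> {
                KafkaManualCommit manual = exchange.getIn()
                        .getHeader(KafkaConstants.MANUAL_COMMIT, KafkaManualCommit.class);
                if (manual != null) {
                    manual.commit(); // older Camel releases expose this as commitSync()
                }
            });
    }
}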
29.4. Endpoint Options
The Kafka endpoint is configured using URI syntax:
kafka:topic
with the following path and query parameters:
29.4.1. Path Parameters (1 parameters)
Name | Description | Default | Type |
---|---|---|---|
topic (common) | Required Name of the topic to use. On the consumer you can use a comma to separate multiple topics, as shown in the example below. A producer can only send a message to a single topic. | String |
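For instance, a consumer endpoint might subscribe to two topics at once by separating the names with a comma, as in the sketch below; the topic names, broker address, and group id are illustrative.
import org.apache.camel.builder.RouteBuilder;

// Sketch: one consumer subscribed to two topics. A producer endpoint, by contrast,
// can only send to a single topic. All endpoint values are illustrative.
public class MultiTopicConsumerRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("kafka:orders,invoices?brokers=localhost:9092&groupId=billing")
            .log("Received: ${body}");
    }
}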
29.4.2. Query Parameters (102 parameters)
Name | Description | Default | Type |
---|---|---|---|
additionalProperties (common) | Sets additional properties for either kafka consumer or kafka producer in case they can’t be set directly on the camel configurations (e.g: new Kafka properties that are not reflected yet in Camel configurations), the properties have to be prefixed with additionalProperties.. E.g: additionalProperties.transactional.id=12345&additionalProperties.schema.registry.url=http://localhost:8811/avro. | Map | |
brokers (common) | URL of the Kafka brokers to use. The format is host1:port1,host2:port2, and the list can be a subset of brokers or a VIP pointing to a subset of brokers. This option is known as bootstrap.servers in the Kafka documentation. | String | |
clientId (common) | The client id is a user-specified string sent in each request to help trace calls. It should logically identify the application making the request. | String | |
headerFilterStrategy (common) | To use a custom HeaderFilterStrategy to filter header to and from Camel message. | HeaderFilterStrategy | |
reconnectBackoffMaxMs (common) | The maximum amount of time in milliseconds to wait when reconnecting to a broker that has repeatedly failed to connect. If provided, the backoff per host will increase exponentially for each consecutive connection failure, up to this maximum. After calculating the backoff increase, 20% random jitter is added to avoid connection storms. | 1000 | Integer |
shutdownTimeout (common) | Timeout in milliseconds to wait gracefully for the consumer or producer to shutdown and terminate its worker threads. | 30000 | int |
allowManualCommit (consumer) | Whether to allow doing manual commits via KafkaManualCommit. If this option is enabled then an instance of KafkaManualCommit is stored on the Exchange message header, which allows end users to access this API and perform manual offset commits via the Kafka consumer. | false | boolean |
autoCommitEnable (consumer) | If true, periodically commit to ZooKeeper the offset of messages already fetched by the consumer. This committed offset will be used when the process fails as the position from which the new consumer will begin. | true | Boolean |
autoCommitIntervalMs (consumer) | The frequency in ms that the consumer offsets are committed to zookeeper. | 5000 | Integer |
autoCommitOnStop (consumer) | Whether to perform an explicit auto commit when the consumer stops to ensure the broker has a commit from the last consumed message. This requires the option autoCommitEnable is turned on. The possible values are: sync, async, or none. And sync is the default value. | sync | String |
autoOffsetReset (consumer) | What to do when there is no initial offset in ZooKeeper or if an offset is out of range: earliest: automatically reset the offset to the earliest offset; latest: automatically reset the offset to the latest offset; fail: throw an exception to the consumer. | latest | String |
breakOnFirstError (consumer) | This option controls what happens when a consumer is processing an exchange and it fails. If the option is false then the consumer continues to the next message and processes it. If the option is true then the consumer breaks out, seeks back to the offset of the message that caused the failure, and then re-attempts to process this message. However this can lead to endless processing of the same message if it is bound to fail every time, e.g. a poison message. Therefore it is recommended to deal with that, for example, by using Camel’s error handler. | false | boolean |
bridgeErrorHandler (consumer) | Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions that occur while the consumer is trying to pick up incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN or ERROR level and ignored. | false | boolean |
checkCrcs (consumer) | Automatically check the CRC32 of the records consumed. This ensures no on-the-wire or on-disk corruption to the messages occurred. This check adds some overhead, so it may be disabled in cases seeking extreme performance. | true | Boolean |
commitTimeoutMs (consumer) | The maximum time, in milliseconds, that the code will wait for a synchronous commit to complete. | 5000 | Long |
consumerRequestTimeoutMs (consumer) | The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted. | 40000 | Integer |
consumersCount (consumer) | The number of consumers that connect to the Kafka server. Each consumer is run on a separate thread that retrieves and processes the incoming data. | 1 | int |
fetchMaxBytes (consumer) | The maximum amount of data the server should return for a fetch request. This is not an absolute maximum; if the first message in the first non-empty partition of the fetch is larger than this value, the message will still be returned to ensure that the consumer can make progress. The maximum message size accepted by the broker is defined via message.max.bytes (broker config) or max.message.bytes (topic config). Note that the consumer performs multiple fetches in parallel. | 52428800 | Integer |
fetchMinBytes (consumer) | The minimum amount of data the server should return for a fetch request. If insufficient data is available the request will wait for that much data to accumulate before answering the request. | 1 | Integer |
fetchWaitMaxMs (consumer) | The maximum amount of time the server will block before answering the fetch request if there isn’t sufficient data to immediately satisfy fetch.min.bytes. | 500 | Integer |
groupId (consumer) | A string that uniquely identifies the group of consumer processes to which this consumer belongs. By setting the same group id multiple processes indicate that they are all part of the same consumer group. This option is required for consumers. | String | |
groupInstanceId (consumer) | A unique identifier of the consumer instance provided by the end user. Only non-empty strings are permitted. If set, the consumer is treated as a static member, which means that only one instance with this ID is allowed in the consumer group at any time. This can be used in combination with a larger session timeout to avoid group rebalances caused by transient unavailability (e.g. process restarts). If not set, the consumer will join the group as a dynamic member, which is the traditional behavior. | String | |
headerDeserializer (consumer) | To use a custom KafkaHeaderDeserializer to deserialize kafka headers values. | KafkaHeaderDeserializer | |
heartbeatIntervalMs (consumer) | The expected time between heartbeats to the consumer coordinator when using Kafka’s group management facilities. Heartbeats are used to ensure that the consumer’s session stays active and to facilitate rebalancing when new consumers join or leave the group. The value must be set lower than session.timeout.ms, but typically should be set no higher than 1/3 of that value. It can be adjusted even lower to control the expected time for normal rebalances. | 3000 | Integer |
keyDeserializer (consumer) | Deserializer class for key that implements the Deserializer interface. | org.apache.kafka.common.serialization.StringDeserializer | String |
maxPartitionFetchBytes (consumer) | The maximum amount of data per-partition the server will return. The maximum total memory used for a request will be #partitions * max.partition.fetch.bytes. This size must be at least as large as the maximum message size the server allows or else it is possible for the producer to send messages larger than the consumer can fetch. If that happens, the consumer can get stuck trying to fetch a large message on a certain partition. | 1048576 | Integer |
maxPollIntervalMs (consumer) | The maximum delay between invocations of poll() when using consumer group management. This places an upper bound on the amount of time that the consumer can be idle before fetching more records. If poll() is not called before expiration of this timeout, then the consumer is considered failed and the group will rebalance in order to reassign the partitions to another member. | Long | |
maxPollRecords (consumer) | The maximum number of records returned in a single call to poll(). | 500 | Integer |
offsetRepository (consumer) | The offset repository to use in order to locally store the offset of each partition of the topic. Defining one will disable the autocommit. | StateRepository | |
partitionAssignor (consumer) | The class name of the partition assignment strategy that the client will use to distribute partition ownership amongst consumer instances when group management is used. | org.apache.kafka.clients.consumer.RangeAssignor | String |
pollOnError (consumer) | What to do if kafka threw an exception while polling for new messages. Will by default use the value from the component configuration unless an explicit value has been configured on the endpoint level. DISCARD will discard the message and continue to poll the next message. ERROR_HANDLER will use Camel’s error handler to process the exception, and afterwards continue to poll the next message. RECONNECT will re-connect the consumer and try to poll the message again. RETRY will let the consumer retry polling the same message again. STOP will stop the consumer (it has to be manually started/restarted if the consumer should be able to consume messages again). | ERROR_HANDLER | PollOnError |
pollTimeoutMs (consumer) | The timeout used when polling the KafkaConsumer. | 5000 | Long |
resumeStrategy (consumer) | This option allows the user to set a custom resume strategy. The resume strategy is executed when partitions are assigned (i.e.: when connecting or reconnecting). It allows implementations to customize how to resume operations and serve as more flexible alternative to the seekTo and the offsetRepository mechanisms. See the KafkaConsumerResumeStrategy for implementation details. This option does not affect the auto commit setting. It is likely that implementations using this setting will also want to evaluate using the manual commit option along with this. | KafkaConsumerResumeStrategy | |
seekTo (consumer) | Set if KafkaConsumer will read from the beginning or the end on startup: beginning : read from the beginning; end : read from the end. This is replacing the earlier property seekToBeginning. | String |
sessionTimeoutMs (consumer) | The timeout used to detect failures when using Kafka’s group management facilities. | 10000 | Integer |
specificAvroReader (consumer) | This enables the use of a specific Avro reader for use with the Confluent Platform schema registry and the io.confluent.kafka.serializers.KafkaAvroDeserializer. This option is only available in the Confluent Platform (not standard Apache Kafka). | false | boolean |
topicIsPattern (consumer) | Whether the topic is a pattern (regular expression). This can be used to subscribe to dynamic number of topics matching the pattern. | false | boolean |
valueDeserializer (consumer) | Deserializer class for value that implements the Deserializer interface. | org.apache.kafka.common.serialization.StringDeserializer | String |
exceptionHandler (consumer (advanced)) | To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. | ExceptionHandler | |
exchangePattern (consumer (advanced)) | Sets the exchange pattern when the consumer creates an exchange. | ExchangePattern |
kafkaManualCommitFactory (consumer (advanced)) | Factory to use for creating KafkaManualCommit instances. This allows to plugin a custom factory to create custom KafkaManualCommit instances in case special logic is needed when doing manual commits that deviates from the default implementation that comes out of the box. | KafkaManualCommitFactory | |
bufferMemorySize (producer) | The total bytes of memory the producer can use to buffer records waiting to be sent to the server. If records are sent faster than they can be delivered to the server the producer will either block or throw an exception based on the preference specified by block.on.buffer.full.This setting should correspond roughly to the total memory the producer will use, but is not a hard bound since not all memory the producer uses is used for buffering. Some additional memory will be used for compression (if compression is enabled) as well as for maintaining in-flight requests. | 33554432 | Integer |
compressionCodec (producer) | This parameter allows you to specify the compression codec for all data generated by this producer. Valid values are none, gzip and snappy. | none | String |
connectionMaxIdleMs (producer) | Close idle connections after the number of milliseconds specified by this config. | 540000 | Integer |
deliveryTimeoutMs (producer) | An upper bound on the time to report success or failure after a call to send() returns. This limits the total time that a record will be delayed prior to sending, the time to await acknowledgement from the broker (if expected), and the time allowed for retriable send failures. | 120000 | Integer |
enableIdempotence (producer) | If set to 'true' the producer will ensure that exactly one copy of each message is written in the stream. If 'false', producer retries may write duplicates of the retried message in the stream. If set to true this option will require max.in.flight.requests.per.connection to be set to 1 and retries cannot be zero and additionally acks must be set to 'all'. | false | boolean |
headerSerializer (producer) | To use a custom KafkaHeaderSerializer to serialize kafka headers values. | KafkaHeaderSerializer | |
key (producer) | The record key (or null if no key is specified). If this option has been configured then it takes precedence over the header KafkaConstants#KEY. | String |
keySerializer (producer) | The serializer class for keys (defaults to the same as for messages if nothing is given). | org.apache.kafka.common.serialization.StringSerializer | String |
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
lingerMs (producer) | The producer groups together any records that arrive in between request transmissions into a single batched request. Normally this occurs only under load when records arrive faster than they can be sent out. However in some circumstances the client may want to reduce the number of requests even under moderate load. This setting accomplishes this by adding a small amount of artificial delay; that is, rather than immediately sending out a record the producer will wait for up to the given delay to allow other records to be sent so that the sends can be batched together. This can be thought of as analogous to Nagle’s algorithm in TCP. This setting gives the upper bound on the delay for batching: once we get batch.size worth of records for a partition it will be sent immediately regardless of this setting, however if we have fewer than this many bytes accumulated for this partition we will 'linger' for the specified time waiting for more records to show up. This setting defaults to 0 (i.e. no delay). Setting linger.ms=5, for example, would have the effect of reducing the number of requests sent but would add up to 5ms of latency to records sent in the absence of load. | 0 | Integer |
maxBlockMs (producer) | The configuration controls how long sending to kafka will block. These methods can be blocked for multiple reasons. For e.g: buffer full, metadata unavailable.This configuration imposes maximum limit on the total time spent in fetching metadata, serialization of key and value, partitioning and allocation of buffer memory when doing a send(). In case of partitionsFor(), this configuration imposes a maximum time threshold on waiting for metadata. | 60000 | Integer |
maxInFlightRequest (producer) | The maximum number of unacknowledged requests the client will send on a single connection before blocking. Note that if this setting is set to be greater than 1 and there are failed sends, there is a risk of message re-ordering due to retries (i.e., if retries are enabled). | 5 | Integer |
maxRequestSize (producer) | The maximum size of a request. This is also effectively a cap on the maximum record size. Note that the server has its own cap on record size which may be different from this. This setting will limit the number of record batches the producer will send in a single request to avoid sending huge requests. | 1048576 | Integer |
metadataMaxAgeMs (producer) | The period of time in milliseconds after which we force a refresh of metadata even if we haven’t seen any partition leadership changes to proactively discover any new brokers or partitions. | 300000 | Integer |
metricReporters (producer) | A list of classes to use as metrics reporters. Implementing the MetricReporter interface allows plugging in classes that will be notified of new metric creation. The JmxReporter is always included to register JMX statistics. | String | |
metricsSampleWindowMs (producer) | The number of samples maintained to compute metrics. | 30000 | Integer |
noOfMetricsSample (producer) | The number of samples maintained to compute metrics. | 2 | Integer |
partitioner (producer) | The partitioner class for partitioning messages amongst sub-topics. The default partitioner is based on the hash of the key. | org.apache.kafka.clients.producer.internals.DefaultPartitioner | String |
partitionKey (producer) | The partition to which the record will be sent (or null if no partition was specified). If this option has been configured then it takes precedence over the header KafkaConstants#PARTITION_KEY. | Integer |
producerBatchSize (producer) | The producer will attempt to batch records together into fewer requests whenever multiple records are being sent to the same partition. This helps performance on both the client and the server. This configuration controls the default batch size in bytes. No attempt will be made to batch records larger than this size.Requests sent to brokers will contain multiple batches, one for each partition with data available to be sent.A small batch size will make batching less common and may reduce throughput (a batch size of zero will disable batching entirely). A very large batch size may use memory a bit more wastefully as we will always allocate a buffer of the specified batch size in anticipation of additional records. | 16384 | Integer |
queueBufferingMaxMessages (producer) | The maximum number of unsent messages that can be queued up the producer when using async mode before either the producer must be blocked or data must be dropped. | 10000 | Integer |
receiveBufferBytes (producer) | The size of the TCP receive buffer (SO_RCVBUF) to use when reading data. | 65536 | Integer |
reconnectBackoffMs (producer) | The amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all requests sent by the consumer to the broker. | 50 | Integer |
recordMetadata (producer) | Whether the producer should store the RecordMetadata results from sending to Kafka. The results are stored in a List containing the RecordMetadata metadata’s. The list is stored on a header with the key KafkaConstants#KAFKA_RECORDMETA. | true | boolean |
requestRequiredAcks (producer) | The number of acknowledgments the producer requires the leader to have received before considering a request complete. This controls the durability of records that are sent. The following settings are common: acks=0 If set to zero then the producer will not wait for any acknowledgment from the server at all. The record will be immediately added to the socket buffer and considered sent. No guarantee can be made that the server has received the record in this case, and the retries configuration will not take effect (as the client won’t generally know of any failures). The offset given back for each record will always be set to -1. acks=1 This will mean the leader will write the record to its local log but will respond without awaiting full acknowledgement from all followers. In this case should the leader fail immediately after acknowledging the record but before the followers have replicated it then the record will be lost. acks=all This means the leader will wait for the full set of in-sync replicas to acknowledge the record. This guarantees that the record will not be lost as long as at least one in-sync replica remains alive. This is the strongest available guarantee. | 1 | String |
requestTimeoutMs (producer) | The amount of time the broker will wait trying to meet the request.required.acks requirement before sending back an error to the client. | 30000 | Integer |
retries (producer) | Setting a value greater than zero will cause the client to resend any record whose send fails with a potentially transient error. Note that this retry is no different than if the client resent the record upon receiving the error. Allowing retries will potentially change the ordering of records because if two records are sent to a single partition, and the first fails and is retried but the second succeeds, then the second record may appear first. | 0 | Integer |
retryBackoffMs (producer) | Before each retry, the producer refreshes the metadata of relevant topics to see if a new leader has been elected. Since leader election takes a bit of time, this property specifies the amount of time that the producer waits before refreshing the metadata. | 100 | Integer |
sendBufferBytes (producer) | Socket write buffer size. | 131072 | Integer |
valueSerializer (producer) | The serializer class for messages. | org.apache.kafka.common.serialization.StringSerializer | String |
workerPool (producer) | To use a custom worker pool for continuing to route the Exchange after the Kafka server has acknowledged the message that was sent to it from the KafkaProducer, using asynchronous non-blocking processing. If using this option then you must handle the lifecycle of the thread pool to shut the pool down when no longer needed. | ExecutorService |
workerPoolCoreSize (producer) | Number of core threads for the worker pool for continuing to route the Exchange after the Kafka server has acknowledged the message that was sent to it from the KafkaProducer, using asynchronous non-blocking processing. | 10 | Integer |
workerPoolMaxSize (producer) | Maximum number of threads for the worker pool for continuing to route the Exchange after the Kafka server has acknowledged the message that was sent to it from the KafkaProducer, using asynchronous non-blocking processing. | 20 | Integer |
kafkaClientFactory (advanced) | Factory to use for creating org.apache.kafka.clients.consumer.KafkaConsumer and org.apache.kafka.clients.producer.KafkaProducer instances. This allows to configure a custom factory to create instances with logic that extends the vanilla Kafka clients. | KafkaClientFactory | |
synchronous (advanced) | Sets whether synchronous processing should be strictly used. | false | boolean |
schemaRegistryURL (confluent) | URL of the Confluent Platform schema registry servers to use. The format is host1:port1,host2:port2. This is known as schema.registry.url in the Confluent Platform documentation. This option is only available in the Confluent Platform (not standard Apache Kafka). | String | |
interceptorClasses (monitoring) | Sets interceptors for producer or consumers. Producer interceptors have to be classes implementing org.apache.kafka.clients.producer.ProducerInterceptor Consumer interceptors have to be classes implementing org.apache.kafka.clients.consumer.ConsumerInterceptor Note that if you use Producer interceptor on a consumer it will throw a class cast exception in runtime. | String | |
kerberosBeforeReloginMinTime (security) | Login thread sleep time between refresh attempts. | 60000 | Integer |
kerberosInitCmd (security) | Kerberos kinit command path. Default is /usr/bin/kinit. | /usr/bin/kinit | String |
kerberosPrincipalToLocalRules (security) | A list of rules for mapping from principal names to short names (typically operating system usernames). The rules are evaluated in order and the first rule that matches a principal name is used to map it to a short name. Any later rules in the list are ignored. By default, principal names of the form {username}/{hostname}@{REALM} are mapped to {username}. For more details on the format please see the security authorization and acls documentation. Multiple values can be separated by comma. | DEFAULT | String |
kerberosRenewJitter (security) | Percentage of random jitter added to the renewal time. | 0.05 | Double |
kerberosRenewWindowFactor (security) | Login thread will sleep until the specified window factor of time from last refresh to ticket’s expiry has been reached, at which time it will try to renew the ticket. | 0.8 | Double |
saslJaasConfig (security) | Expose the kafka sasl.jaas.config parameter Example: org.apache.kafka.common.security.plain.PlainLoginModule required username=USERNAME password=PASSWORD;. | String | |
saslKerberosServiceName (security) | The Kerberos principal name that Kafka runs as. This can be defined either in Kafka’s JAAS config or in Kafka’s config. | String | |
saslMechanism (security) | The Simple Authentication and Security Layer (SASL) Mechanism used. For the valid values see the IANA SASL mechanisms registry (http://www.iana.org/assignments/sasl-mechanisms/sasl-mechanisms.xhtml). | GSSAPI | String |
securityProtocol (security) | Protocol used to communicate with brokers. SASL_PLAINTEXT, PLAINTEXT and SSL are supported. | PLAINTEXT | String |
sslCipherSuites (security) | A list of cipher suites. This is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol.By default all the available cipher suites are supported. | String | |
sslContextParameters (security) | SSL configuration using a Camel SSLContextParameters object. If configured it’s applied before the other SSL endpoint parameters. NOTE: Kafka only supports loading keystore from file locations, so prefix the location with file: in the KeyStoreParameters.resource option. | SSLContextParameters | |
sslEnabledProtocols (security) | The list of protocols enabled for SSL connections. TLSv1.2, TLSv1.1 and TLSv1 are enabled by default. | String | |
sslEndpointAlgorithm (security) | The endpoint identification algorithm to validate server hostname using server certificate. | https | String |
sslKeymanagerAlgorithm (security) | The algorithm used by key manager factory for SSL connections. Default value is the key manager factory algorithm configured for the Java Virtual Machine. | SunX509 | String |
sslKeyPassword (security) | The password of the private key in the key store file. This is optional for client. | String | |
sslKeystoreLocation (security) | The location of the key store file. This is optional for client and can be used for two-way authentication for client. | String | |
sslKeystorePassword (security) | The store password for the key store file.This is optional for client and only needed if ssl.keystore.location is configured. | String | |
sslKeystoreType (security) | The file format of the key store file. This is optional for client. Default value is JKS. | JKS | String |
sslProtocol (security) | The SSL protocol used to generate the SSLContext. Default setting is TLS, which is fine for most cases. Allowed values in recent JVMs are TLS, TLSv1.1 and TLSv1.2. SSL, SSLv2 and SSLv3 may be supported in older JVMs, but their usage is discouraged due to known security vulnerabilities. | String | |
sslProvider (security) | The name of the security provider used for SSL connections. Default value is the default security provider of the JVM. | String | |
sslTrustmanagerAlgorithm (security) | The algorithm used by trust manager factory for SSL connections. Default value is the trust manager factory algorithm configured for the Java Virtual Machine. | PKIX | String |
sslTruststoreLocation (security) | The location of the trust store file. | String | |
sslTruststorePassword (security) | The password for the trust store file. | String | |
sslTruststoreType (security) | The file format of the trust store file. Default value is JKS. | JKS | String |
For more information about producer and consumer configuration, see the Apache Kafka documentation on consumer and producer configuration.
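As an illustration, a few of these options can be pre-configured on the component in Java before the CamelContext is started. This is a minimal sketch, assuming an existing camelContext variable; the broker address and group id are placeholder values:
KafkaComponent kafka = new KafkaComponent();
// Component-level defaults inherited by all kafka endpoints
kafka.getConfiguration().setBrokers("localhost:9092");      // placeholder broker list
kafka.getConfiguration().setGroupId("my-consumer-group");   // placeholder consumer group id
kafka.getConfiguration().setAutoOffsetReset("earliest");
camelContext.addComponent("kafka", kafka);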
29.5. Message headers
29.5.1. Consumer headers
The following headers are available when consuming messages from Kafka.
Header constant | Header value | Type | Description |
---|---|---|---|
KafkaConstants.TOPIC | "kafka.TOPIC" | String | The topic from where the message originated |
KafkaConstants.PARTITION | "kafka.PARTITION" | Integer | The partition where the message was stored |
KafkaConstants.OFFSET | "kafka.OFFSET" | Long | The offset of the message |
KafkaConstants.KEY | "kafka.KEY" | Object | The key of the message if configured |
KafkaConstants.HEADERS | "kafka.HEADERS" | org.apache.kafka.common.header.Headers | The record headers |
KafkaConstants.LAST_RECORD_BEFORE_COMMIT | "kafka.LAST_RECORD_BEFORE_COMMIT" | Boolean | Whether or not it’s the last record before commit (only available if autoCommitEnable endpoint parameter is false) |
KafkaConstants.LAST_POLL_RECORD | "kafka.LAST_POLL_RECORD" | Boolean | Indicates the last record within the current poll request (only available if autoCommitEnable endpoint parameter is false, or if allowManualCommit is true) |
KafkaConstants.MANUAL_COMMIT | "CamelKafkaManualCommit" | KafkaManualCommit | Can be used for forcing manual offset commit when using Kafka consumer. |
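As a quick illustration, these headers are ordinary exchange headers and can be read in a processor. This is a minimal sketch; the topic name and broker address are placeholder values:
from("kafka:test?brokers=localhost:9092")
    .process(exchange -> {
        // The consumer headers from the table above are plain exchange headers
        String topic = exchange.getMessage().getHeader(KafkaConstants.TOPIC, String.class);
        Integer partition = exchange.getMessage().getHeader(KafkaConstants.PARTITION, Integer.class);
        Long offset = exchange.getMessage().getHeader(KafkaConstants.OFFSET, Long.class);
        exchange.getMessage().setHeader("auditInfo", topic + "/" + partition + "@" + offset);
    })
    .to("log:received");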
29.5.2. Producer headers
Before sending a message to Kafka you can configure the following headers.
Header constant | Header value | Type | Description |
---|---|---|---|
KafkaConstants.KEY | "kafka.KEY" | Object | Required The key of the message in order to ensure that all related messages go in the same partition |
KafkaConstants.OVERRIDE_TOPIC | "kafka.OVERRIDE_TOPIC" | String | The topic to which the message is sent (overrides the endpoint topic and takes precedence), and the header is not preserved. |
KafkaConstants.OVERRIDE_TIMESTAMP | "kafka.OVERRIDE_TIMESTAMP" | Long | The ProducerRecord also has an associated timestamp. If the user did provide a timestamp, the producer will stamp the record with the provided timestamp and the header is not preserved. |
KafkaConstants.PARTITION_KEY | "kafka.PARTITION_KEY" | Integer | Explicitly specify the partition |
If you want to send a message to a dynamic topic, use KafkaConstants.OVERRIDE_TOPIC. It is used as a one-time header that is not sent along with the message, as it is removed in the producer.
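For example, a route can compute the target topic per message and set it as a header. This is a minimal sketch; the targetTopic header and the endpoint addresses are assumptions:
from("direct:dynamic")
    // KafkaConstants.OVERRIDE_TOPIC is consumed by the producer and not sent to Kafka
    .setHeader(KafkaConstants.OVERRIDE_TOPIC, simple("${header.targetTopic}"))
    .to("kafka:fallbackTopic?brokers=localhost:9092");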
After the message is sent to Kafka, the following headers are available
Header constant | Header value | Type | Description |
---|---|---|---|
KafkaConstants.KAFKA_RECORDMETA | "org.apache.kafka.clients.producer.RecordMetadata" | List<RecordMetadata> | The metadata (only configured if the recordMetadata endpoint parameter is true) |
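A minimal sketch of reading that metadata after the send, assuming recordMetadata is left at its default of true; the endpoint addresses are placeholder values:
from("direct:start")
    .to("kafka:test?brokers=localhost:9092")
    .process(exchange -> {
        // List of org.apache.kafka.clients.producer.RecordMetadata, one entry per record sent
        List<?> metadata = exchange.getMessage().getHeader(KafkaConstants.KAFKA_RECORDMETA, List.class);
        exchange.getMessage().setHeader("sentRecords", metadata == null ? 0 : metadata.size());
    });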
29.6. Consumer error handling
While the Kafka consumer is polling messages from the Kafka broker, errors can happen. This section describes what happens and what you can configure.
The consumer may throw an exception when invoking the Kafka poll API, for example if a message cannot be de-serialized due to invalid data, and for many other kinds of errors. Those errors are in the form of KafkaException, which are either retryable or not. The exceptions which can be retried (RetriableException) will be retried again (with a poll timeout in between). All other kinds of exceptions are handled according to the pollOnError configuration. This configuration has the following values:
- DISCARD will discard the message and continue to poll next message.
- ERROR_HANDLER will use Camel’s error handler to process the exception, and afterwards continue to poll next message.
- RECONNECT will re-connect the consumer and try to poll the message again.
- RETRY will let the consumer retry polling the same message again.
- STOP will stop the consumer (it has to be manually started/restarted if the consumer should be able to consume messages again).
The default is ERROR_HANDLER, which will let Camel’s error handler (if any is configured) process the caused exception, and afterwards continue to poll the next message. This behavior is similar to the bridgeErrorHandler option that Camel components have.
For advanced control, a custom implementation of org.apache.camel.component.kafka.PollExceptionStrategy can be configured on the component level, which allows you to control which exceptions cause which of the strategies above.
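For example, to have the consumer re-connect instead of invoking the error handler, the strategy can be set directly on the endpoint. This is a minimal sketch; the topic name and broker address are placeholder values:
from("kafka:test?brokers=localhost:9092&pollOnError=RECONNECT")
    .to("log:received");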
29.7. Samples
29.7.1. Consuming messages from Kafka
Here is the minimal route you need in order to read messages from Kafka.
from("kafka:test?brokers=localhost:9092") .log("Message received from Kafka : ${body}") .log(" on the topic ${headers[kafka.TOPIC]}") .log(" on the partition ${headers[kafka.PARTITION]}") .log(" with the offset ${headers[kafka.OFFSET]}") .log(" with the key ${headers[kafka.KEY]}")
If you need to consume messages from multiple topics you can use a comma separated list of topic names.
from("kafka:test,test1,test2?brokers=localhost:9092") .log("Message received from Kafka : ${body}") .log(" on the topic ${headers[kafka.TOPIC]}") .log(" on the partition ${headers[kafka.PARTITION]}") .log(" with the offset ${headers[kafka.OFFSET]}") .log(" with the key ${headers[kafka.KEY]}")
It’s also possible to subscribe to multiple topics giving a pattern as the topic name and using the topicIsPattern
option.
from("kafka:test*?brokers=localhost:9092&topicIsPattern=true") .log("Message received from Kafka : ${body}") .log(" on the topic ${headers[kafka.TOPIC]}") .log(" on the partition ${headers[kafka.PARTITION]}") .log(" with the offset ${headers[kafka.OFFSET]}") .log(" with the key ${headers[kafka.KEY]}")
When consuming messages from Kafka you can use your own offset management and not delegate this management to Kafka. In order to keep the offsets, the component needs a StateRepository implementation such as FileStateRepository. This bean should be available in the registry. Here is how to use it:
// Create the repository in which the Kafka offsets will be persisted
FileStateRepository repository = FileStateRepository.fileStateRepository(new File("/path/to/repo.dat"));

// Bind this repository into the Camel registry
Registry registry = createCamelRegistry();
registry.bind("offsetRepo", repository);

// Configure the camel context
DefaultCamelContext camelContext = new DefaultCamelContext(registry);
camelContext.addRoutes(new RouteBuilder() {
    @Override
    public void configure() throws Exception {
        from("kafka:" + TOPIC + "?brokers=localhost:{{kafkaPort}}" + // Setup the topic and broker address
                "&groupId=A" +                                       // The consumer processor group ID
                "&autoOffsetReset=earliest" +                        // Ask to start from the beginning if we have unknown offset
                "&offsetRepository=#offsetRepo")                     // Keep the offsets in the previously configured repository
            .to("mock:result");
    }
});
29.7.2. Producing messages to Kafka
Here is the minimal route you need in order to write messages to Kafka.
from("direct:start") .setBody(constant("Message from Camel")) // Message to send .setHeader(KafkaConstants.KEY, constant("Camel")) // Key of the message .to("kafka:test?brokers=localhost:9092");
29.8. SSL configuration
You have two different ways to configure SSL communication on the Kafka component.
The first way is through the many SSL endpoint parameters:
from("kafka:" + TOPIC + "?brokers=localhost:{{kafkaPort}}" + "&groupId=A" + "&sslKeystoreLocation=/path/to/keystore.jks" + "&sslKeystorePassword=changeit" + "&sslKeyPassword=changeit" + "&securityProtocol=SSL") .to("mock:result");
The second way is to use the sslContextParameters
endpoint parameter.
// Configure the SSLContextParameters object
KeyStoreParameters ksp = new KeyStoreParameters();
ksp.setResource("/path/to/keystore.jks");
ksp.setPassword("changeit");

KeyManagersParameters kmp = new KeyManagersParameters();
kmp.setKeyStore(ksp);
kmp.setKeyPassword("changeit");

SSLContextParameters scp = new SSLContextParameters();
scp.setKeyManagers(kmp);

// Bind this SSLContextParameters into the Camel registry
Registry registry = createCamelRegistry();
registry.bind("ssl", scp);

// Configure the camel context
DefaultCamelContext camelContext = new DefaultCamelContext(registry);
camelContext.addRoutes(new RouteBuilder() {
    @Override
    public void configure() throws Exception {
        from("kafka:" + TOPIC + "?brokers=localhost:{{kafkaPort}}" + // Setup the topic and broker address
                "&groupId=A" +                                       // The consumer processor group ID
                "&sslContextParameters=#ssl" +                       // Reference the SSL configuration
                "&securityProtocol=SSL")                             // The security protocol
            .to("mock:result");
    }
});
29.9. Using the Kafka idempotent repository
The camel-kafka
library provides a Kafka topic-based idempotent repository.
This repository broadcasts all changes to idempotent state (add/remove) in a Kafka topic, and populates a local in-memory cache for each repository’s process instance through event sourcing. The topic used must be unique per idempotent repository instance.
The mechanism does not have any requirements about the number of topic partitions, as the repository consumes from all partitions at the same time. It also does not have any requirements about the replication factor of the topic.
Each repository instance that uses the topic (e.g. typically on different machines running in parallel) controls its own consumer group, so in a cluster of 10 Camel processes using the same topic each will control its own offset.
On startup, the instance subscribes to the topic and rewinds the offset to the beginning, rebuilding the cache to the latest state. The cache will not be considered warmed up until one poll of pollDurationMs
in length returns 0 records. Startup will not be completed until either the cache has warmed up, or 30 seconds go by; if the latter happens the idempotent repository may be in an inconsistent state until its consumer catches up to the end of the topic.
Be mindful of the format of the header used for the uniqueness check. By default, it uses Strings as the data types. When using primitive numeric formats, the header must be deserialized accordingly. Check the samples below for examples.
A KafkaIdempotentRepository
has the following properties:
Property | Description |
---|---|
topic | The name of the Kafka topic to use to broadcast changes. (required) |
bootstrapServers | The bootstrap.servers property of the internal Kafka producer and consumer. Use this as shorthand if not setting consumerConfig and producerConfig. |
producerConfig | Sets the properties that will be used by the Kafka producer that broadcasts changes. Overrides bootstrapServers, so must define the bootstrap.servers property itself. |
consumerConfig | Sets the properties that will be used by the Kafka consumer that populates the cache from the topic. Overrides bootstrapServers, so must define the bootstrap.servers property itself. |
maxCacheSize | How many of the most recently used keys should be stored in memory (default 1000). |
pollDurationMs | The poll duration of the Kafka consumer. The local caches are updated immediately. This value will affect how far behind other peers that update their caches from the topic are relative to the idempotent consumer instance that sent the cache action message. The default value of this is 100 ms. |
The repository can be instantiated by defining the topic and bootstrapServers, or the producerConfig and consumerConfig property sets can be explicitly defined to enable features such as SSL/SASL. To use, this repository must be placed in the Camel registry, either manually or by registration as a bean in Spring/Blueprint, as it is CamelContext aware.
Sample usage is as follows:
KafkaIdempotentRepository kafkaIdempotentRepository =
        new KafkaIdempotentRepository("idempotent-db-inserts", "localhost:9091");

SimpleRegistry registry = new SimpleRegistry();
registry.bind("insertDbIdemRepo", kafkaIdempotentRepository); // must be registered in the registry, to enable access to the CamelContext

CamelContext context = new DefaultCamelContext(registry);

// later in RouteBuilder...
from("direct:performInsert")
    .idempotentConsumer(header("id")).messageIdRepositoryRef("insertDbIdemRepo")
        // once-only insert into database
    .end();
In XML:
<!-- simple --> <bean id="insertDbIdemRepo" class="org.apache.camel.processor.idempotent.kafka.KafkaIdempotentRepository"> <property name="topic" value="idempotent-db-inserts"/> <property name="bootstrapServers" value="localhost:9091"/> </bean> <!-- complex --> <bean id="insertDbIdemRepo" class="org.apache.camel.processor.idempotent.kafka.KafkaIdempotentRepository"> <property name="topic" value="idempotent-db-inserts"/> <property name="maxCacheSize" value="10000"/> <property name="consumerConfig"> <props> <prop key="bootstrap.servers">localhost:9091</prop> </props> </property> <property name="producerConfig"> <props> <prop key="bootstrap.servers">localhost:9091</prop> </props> </property> </bean>
There are 3 alternatives to choose from when using idempotency with numeric identifiers. The first one is to use the static numericHeader method from org.apache.camel.component.kafka.serde.KafkaSerdeHelper to perform the conversion for you:
from("direct:performInsert") .idempotentConsumer(numericHeader("id")).messageIdRepositoryRef("insertDbIdemRepo") // once-only insert into database .end()
Alternatively, it is possible to use a custom deserializer, configured via the route URI, to perform the conversion:
public class CustomHeaderDeserializer extends DefaultKafkaHeaderDeserializer {

    private static final Logger LOG = LoggerFactory.getLogger(CustomHeaderDeserializer.class);

    @Override
    public Object deserialize(String key, byte[] value) {
        if (key.equals("id")) {
            BigInteger bi = new BigInteger(value);
            return String.valueOf(bi.longValue());
        } else {
            return super.deserialize(key, value);
        }
    }
}
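The custom deserializer can then be referenced from the consumer endpoint by bean name. This is a minimal sketch; the #customHeaderDeserializer registration and the endpoint address are assumptions:
from("kafka:my_topic?brokers=localhost:9092&headerDeserializer=#customHeaderDeserializer")
    .idempotentConsumer(header("id")).messageIdRepositoryRef("insertDbIdemRepo")
        // once-only insert into database
    .end();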
Lastly, it is also possible to do so in a processor:
from(from).routeId("foo") .process(exchange -> { byte[] id = exchange.getIn().getHeader("id", byte[].class); BigInteger bi = new BigInteger(id); exchange.getIn().setHeader("id", String.valueOf(bi.longValue())); }) .idempotentConsumer(header("id")) .messageIdRepositoryRef("kafkaIdempotentRepository") .to(to);
29.10. Using manual commit with Kafka consumer
By default the Kafka consumer will use auto commit, where the offset will be committed automatically in the background using a given interval.
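For example, the commit interval can be tuned directly on the endpoint. This is a minimal sketch; the topic, broker address and interval are placeholder values:
from("kafka:test?brokers=localhost:9092&autoCommitEnable=true&autoCommitIntervalMs=10000")
    .to("log:received");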
In case you want to force manual commits, you can use the KafkaManualCommit API from the Camel Exchange, stored on the message header. This requires turning on manual commits by setting the option allowManualCommit to true, either on the KafkaComponent or on the endpoint, for example:
KafkaComponent kafka = new KafkaComponent();
kafka.setAllowManualCommit(true);
...
camelContext.addComponent("kafka", kafka);
You can then use the KafkaManualCommit from Java code such as a Camel Processor:
public void process(Exchange exchange) {
    KafkaManualCommit manual =
            exchange.getIn().getHeader(KafkaConstants.MANUAL_COMMIT, KafkaManualCommit.class);
    manual.commit();
}
This will force a synchronous commit which will block until the commit is acknowledged on Kafka, or throw an exception if it fails. You can use an asynchronous commit as well, by configuring the KafkaManualCommitFactory with the DefaultKafkaManualAsyncCommitFactory implementation.
The commit will then be done in the next consumer loop using the Kafka asynchronous commit API. Be aware that records from a partition must be processed and committed by a unique thread. If not, this could lead to inconsistent behavior. This is mostly useful with aggregation’s completion timeout strategies.
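A minimal sketch of switching to asynchronous manual commits on the component; assuming an existing camelContext variable, and assuming the DefaultKafkaManualAsyncCommitFactory class mentioned above has a no-argument constructor (its exact package may differ between Camel versions):
KafkaComponent kafka = new KafkaComponent();
kafka.setAllowManualCommit(true);
// Use asynchronous manual commits instead of the default synchronous ones
kafka.setKafkaManualCommitFactory(new DefaultKafkaManualAsyncCommitFactory());
camelContext.addComponent("kafka", kafka);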
If you want to use a custom implementation of KafkaManualCommit
then you can configure a custom KafkaManualCommitFactory
on the KafkaComponent
that creates instances of your custom implementation.
29.11. Kafka Headers propagation
When consuming messages from Kafka, headers are propagated to Camel exchange headers automatically. The producing flow is backed by the same behaviour: Camel headers of a particular exchange are propagated to Kafka message headers.
Since Kafka headers allow only byte[] values, for a Camel exchange header to be propagated its value must first be serialized to byte[]; otherwise the header will be skipped. The following header value types are supported: String, Integer, Long, Double, Boolean, byte[]. Note: all headers propagated from Kafka to the Camel exchange will contain a byte[] value by default. In order to override the default functionality, these URI parameters can be set: headerDeserializer for the from route and headerSerializer for the to route. Example:
from("kafka:my_topic?headerDeserializer=#myDeserializer") ... .to("kafka:my_topic?headerSerializer=#mySerializer")
By default, all headers are filtered by KafkaHeaderFilterStrategy. This strategy filters out headers whose names start with the Camel or org.apache.camel prefixes. The default strategy can be overridden by using the headerFilterStrategy URI parameter in both to and from routes:
from("kafka:my_topic?headerFilterStrategy=#myStrategy") ... .to("kafka:my_topic?headerFilterStrategy=#myStrategy")
The myStrategy object should be a subclass of HeaderFilterStrategy and must be placed in the Camel registry, either manually or by registration as a bean in Spring/Blueprint, as it is CamelContext aware.
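A minimal sketch of such a strategy, here implementing the HeaderFilterStrategy interface directly; the class name and the extra x-internal- prefix are illustrative assumptions:
public class MyKafkaHeaderFilterStrategy implements HeaderFilterStrategy {

    @Override
    public boolean applyFilterToCamelHeaders(String headerName, Object headerValue, Exchange exchange) {
        // return true to drop the header when producing to Kafka
        return headerName.startsWith("Camel")
                || headerName.startsWith("org.apache.camel")
                || headerName.startsWith("x-internal-");
    }

    @Override
    public boolean applyFilterToExternalHeaders(String headerName, Object headerValue, Exchange exchange) {
        // keep all headers coming from Kafka
        return false;
    }
}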
29.12. Spring Boot Auto-Configuration
When using kafka with Spring Boot make sure to use the following Maven dependency to have support for auto configuration:
<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-kafka-starter</artifactId> </dependency>
The component supports 105 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
camel.component.kafka.additional-properties | Sets additional properties for either kafka consumer or kafka producer in case they can’t be set directly on the camel configurations (e.g: new Kafka properties that are not reflected yet in Camel configurations), the properties have to be prefixed with additionalProperties.. E.g: additionalProperties.transactional.id=12345&additionalProperties.schema.registry.url=http://localhost:8811/avro. | Map | |
camel.component.kafka.allow-manual-commit | Whether to allow doing manual commits via KafkaManualCommit. If this option is enabled then an instance of KafkaManualCommit is stored on the Exchange message header, which allows end users to access this API and perform manual offset commits via the Kafka consumer. | false | Boolean |
camel.component.kafka.auto-commit-enable | If true, periodically commit to ZooKeeper the offset of messages already fetched by the consumer. This committed offset will be used when the process fails as the position from which the new consumer will begin. | true | Boolean |
camel.component.kafka.auto-commit-interval-ms | The frequency in ms that the consumer offsets are committed to zookeeper. | 5000 | Integer |
camel.component.kafka.auto-commit-on-stop | Whether to perform an explicit auto commit when the consumer stops to ensure the broker has a commit from the last consumed message. This requires the option autoCommitEnable is turned on. The possible values are: sync, async, or none. And sync is the default value. | sync | String |
camel.component.kafka.auto-offset-reset | What to do when there is no initial offset in ZooKeeper or if an offset is out of range: earliest : automatically reset the offset to the earliest offset latest : automatically reset the offset to the latest offset fail: throw exception to the consumer. | latest | String |
camel.component.kafka.autowired-enabled | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | Boolean |
camel.component.kafka.break-on-first-error | This options controls what happens when a consumer is processing an exchange and it fails. If the option is false then the consumer continues to the next message and processes it. If the option is true then the consumer breaks out, and will seek back to offset of the message that caused a failure, and then re-attempt to process this message. However this can lead to endless processing of the same message if its bound to fail every time, eg a poison message. Therefore its recommended to deal with that for example by using Camel’s error handler. | false | Boolean |
camel.component.kafka.bridge-error-handler | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | Boolean |
camel.component.kafka.brokers | URL of the Kafka brokers to use. The format is host1:port1,host2:port2, and the list can be a subset of brokers or a VIP pointing to a subset of brokers. This option is known as bootstrap.servers in the Kafka documentation. | String | |
camel.component.kafka.buffer-memory-size | The total bytes of memory the producer can use to buffer records waiting to be sent to the server. If records are sent faster than they can be delivered to the server the producer will either block or throw an exception based on the preference specified by block.on.buffer.full.This setting should correspond roughly to the total memory the producer will use, but is not a hard bound since not all memory the producer uses is used for buffering. Some additional memory will be used for compression (if compression is enabled) as well as for maintaining in-flight requests. | 33554432 | Integer |
camel.component.kafka.check-crcs | Automatically check the CRC32 of the records consumed. This ensures no on-the-wire or on-disk corruption to the messages occurred. This check adds some overhead, so it may be disabled in cases seeking extreme performance. | true | Boolean |
camel.component.kafka.client-id | The client id is a user-specified string sent in each request to help trace calls. It should logically identify the application making the request. | String | |
camel.component.kafka.commit-timeout-ms | The maximum time, in milliseconds, that the code will wait for a synchronous commit to complete. The option is a java.lang.Long type. | 5000 | Long |
camel.component.kafka.compression-codec | This parameter allows you to specify the compression codec for all data generated by this producer. Valid values are none, gzip and snappy. | none | String |
camel.component.kafka.configuration | Allows to pre-configure the Kafka component with common options that the endpoints will reuse. The option is a org.apache.camel.component.kafka.KafkaConfiguration type. | KafkaConfiguration | |
camel.component.kafka.connection-max-idle-ms | Close idle connections after the number of milliseconds specified by this config. | 540000 | Integer |
camel.component.kafka.consumer-request-timeout-ms | The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted. | 40000 | Integer |
camel.component.kafka.consumers-count | The number of consumers that connect to kafka server. Each consumer is run on a separate thread, that retrieves and process the incoming data. | 1 | Integer |
camel.component.kafka.delivery-timeout-ms | An upper bound on the time to report success or failure after a call to send() returns. This limits the total time that a record will be delayed prior to sending, the time to await acknowledgement from the broker (if expected), and the time allowed for retriable send failures. | 120000 | Integer |
camel.component.kafka.enable-idempotence | If set to 'true' the producer will ensure that exactly one copy of each message is written in the stream. If 'false', producer retries may write duplicates of the retried message in the stream. If set to true this option will require max.in.flight.requests.per.connection to be set to 1 and retries cannot be zero and additionally acks must be set to 'all'. | false | Boolean |
camel.component.kafka.enabled | Whether to enable auto configuration of the kafka component. This is enabled by default. | Boolean | |
camel.component.kafka.fetch-max-bytes | The maximum amount of data the server should return for a fetch request This is not an absolute maximum, if the first message in the first non-empty partition of the fetch is larger than this value, the message will still be returned to ensure that the consumer can make progress. The maximum message size accepted by the broker is defined via message.max.bytes (broker config) or max.message.bytes (topic config). Note that the consumer performs multiple fetches in parallel. | 52428800 | Integer |
camel.component.kafka.fetch-min-bytes | The minimum amount of data the server should return for a fetch request. If insufficient data is available the request will wait for that much data to accumulate before answering the request. | 1 | Integer |
camel.component.kafka.fetch-wait-max-ms | The maximum amount of time the server will block before answering the fetch request if there isn’t sufficient data to immediately satisfy fetch.min.bytes. | 500 | Integer |
camel.component.kafka.group-id | A string that uniquely identifies the group of consumer processes to which this consumer belongs. By setting the same group id multiple processes indicate that they are all part of the same consumer group. This option is required for consumers. | String | |
camel.component.kafka.group-instance-id | A unique identifier of the consumer instance provided by the end user. Only non-empty strings are permitted. If set, the consumer is treated as a static member, which means that only one instance with this ID is allowed in the consumer group at any time. This can be used in combination with a larger session timeout to avoid group rebalances caused by transient unavailability (e.g. process restarts). If not set, the consumer will join the group as a dynamic member, which is the traditional behavior. | String | |
camel.component.kafka.header-deserializer | To use a custom KafkaHeaderDeserializer to deserialize kafka headers values. The option is a org.apache.camel.component.kafka.serde.KafkaHeaderDeserializer type. | KafkaHeaderDeserializer | |
camel.component.kafka.header-filter-strategy | To use a custom HeaderFilterStrategy to filter header to and from Camel message. The option is a org.apache.camel.spi.HeaderFilterStrategy type. | HeaderFilterStrategy | |
camel.component.kafka.header-serializer | To use a custom KafkaHeaderSerializer to serialize kafka headers values. The option is a org.apache.camel.component.kafka.serde.KafkaHeaderSerializer type. | KafkaHeaderSerializer | |
camel.component.kafka.heartbeat-interval-ms | The expected time between heartbeats to the consumer coordinator when using Kafka’s group management facilities. Heartbeats are used to ensure that the consumer’s session stays active and to facilitate rebalancing when new consumers join or leave the group. The value must be set lower than session.timeout.ms, but typically should be set no higher than 1/3 of that value. It can be adjusted even lower to control the expected time for normal rebalances. | 3000 | Integer |
camel.component.kafka.interceptor-classes | Sets interceptors for producer or consumers. Producer interceptors have to be classes implementing org.apache.kafka.clients.producer.ProducerInterceptor Consumer interceptors have to be classes implementing org.apache.kafka.clients.consumer.ConsumerInterceptor Note that if you use Producer interceptor on a consumer it will throw a class cast exception in runtime. | String | |
camel.component.kafka.kafka-client-factory | Factory to use for creating org.apache.kafka.clients.consumer.KafkaConsumer and org.apache.kafka.clients.producer.KafkaProducer instances. This allows to configure a custom factory to create instances with logic that extends the vanilla Kafka clients. The option is a org.apache.camel.component.kafka.KafkaClientFactory type. | KafkaClientFactory | |
camel.component.kafka.kafka-manual-commit-factory | Factory to use for creating KafkaManualCommit instances. This allows to plugin a custom factory to create custom KafkaManualCommit instances in case special logic is needed when doing manual commits that deviates from the default implementation that comes out of the box. The option is a org.apache.camel.component.kafka.KafkaManualCommitFactory type. | KafkaManualCommitFactory | |
camel.component.kafka.kerberos-before-relogin-min-time | Login thread sleep time between refresh attempts. | 60000 | Integer |
camel.component.kafka.kerberos-init-cmd | Kerberos kinit command path. Default is /usr/bin/kinit. | /usr/bin/kinit | String |
camel.component.kafka.kerberos-principal-to-local-rules | A list of rules for mapping from principal names to short names (typically operating system usernames). The rules are evaluated in order and the first rule that matches a principal name is used to map it to a short name. Any later rules in the list are ignored. By default, principal names of the form {username}/{hostname}@{REALM} are mapped to {username}. For more details on the format please see the security authorization and acls documentation. Multiple values can be separated by comma. | DEFAULT | String |
camel.component.kafka.kerberos-renew-jitter | Percentage of random jitter added to the renewal time. | Double | |
camel.component.kafka.kerberos-renew-window-factor | Login thread will sleep until the specified window factor of time from last refresh to ticket’s expiry has been reached, at which time it will try to renew the ticket. | Double | |
camel.component.kafka.key | The record key (or null if no key is specified). If this option has been configured then it takes precedence over the header KafkaConstants#KEY. | String |
camel.component.kafka.key-deserializer | Deserializer class for key that implements the Deserializer interface. | org.apache.kafka.common.serialization.StringDeserializer | String |
camel.component.kafka.key-serializer | The serializer class for keys (defaults to the same as for messages if nothing is given). | org.apache.kafka.common.serialization.StringSerializer | String |
camel.component.kafka.lazy-start-producer | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | Boolean |
camel.component.kafka.linger-ms | The producer groups together any records that arrive in between request transmissions into a single batched request. Normally this occurs only under load when records arrive faster than they can be sent out. However in some circumstances the client may want to reduce the number of requests even under moderate load. This setting accomplishes this by adding a small amount of artificial delay; that is, rather than immediately sending out a record the producer will wait for up to the given delay to allow other records to be sent so that the sends can be batched together. This can be thought of as analogous to Nagle’s algorithm in TCP. This setting gives the upper bound on the delay for batching: once we get batch.size worth of records for a partition it will be sent immediately regardless of this setting, however if we have fewer than this many bytes accumulated for this partition we will 'linger' for the specified time waiting for more records to show up. This setting defaults to 0 (i.e. no delay). Setting linger.ms=5, for example, would have the effect of reducing the number of requests sent but would add up to 5ms of latency to records sent in the absence of load. | 0 | Integer |
camel.component.kafka.max-block-ms | The configuration controls how long sending to kafka will block. These methods can be blocked for multiple reasons. For e.g: buffer full, metadata unavailable.This configuration imposes maximum limit on the total time spent in fetching metadata, serialization of key and value, partitioning and allocation of buffer memory when doing a send(). In case of partitionsFor(), this configuration imposes a maximum time threshold on waiting for metadata. | 60000 | Integer |
camel.component.kafka.max-in-flight-request | The maximum number of unacknowledged requests the client will send on a single connection before blocking. Note that if this setting is set to be greater than 1 and there are failed sends, there is a risk of message re-ordering due to retries (i.e., if retries are enabled). | 5 | Integer |
camel.component.kafka.max-partition-fetch-bytes | The maximum amount of data per-partition the server will return. The maximum total memory used for a request will be #partitions * max.partition.fetch.bytes. This size must be at least as large as the maximum message size the server allows or else it is possible for the producer to send messages larger than the consumer can fetch. If that happens, the consumer can get stuck trying to fetch a large message on a certain partition. | 1048576 | Integer |
camel.component.kafka.max-poll-interval-ms | The maximum delay between invocations of poll() when using consumer group management. This places an upper bound on the amount of time that the consumer can be idle before fetching more records. If poll() is not called before expiration of this timeout, then the consumer is considered failed and the group will rebalance in order to reassign the partitions to another member. The option is a java.lang.Long type. | Long | |
camel.component.kafka.max-poll-records | The maximum number of records returned in a single call to poll(). | 500 | Integer |
camel.component.kafka.max-request-size | The maximum size of a request. This is also effectively a cap on the maximum record size. Note that the server has its own cap on record size which may be different from this. This setting will limit the number of record batches the producer will send in a single request to avoid sending huge requests. | 1048576 | Integer |
camel.component.kafka.metadata-max-age-ms | The period of time in milliseconds after which we force a refresh of metadata even if we haven’t seen any partition leadership changes to proactively discover any new brokers or partitions. | 300000 | Integer |
camel.component.kafka.metric-reporters | A list of classes to use as metrics reporters. Implementing the MetricReporter interface allows plugging in classes that will be notified of new metric creation. The JmxReporter is always included to register JMX statistics. | String | |
camel.component.kafka.metrics-sample-window-ms | The number of samples maintained to compute metrics. | 30000 | Integer |
camel.component.kafka.no-of-metrics-sample | The number of samples maintained to compute metrics. | 2 | Integer |
camel.component.kafka.offset-repository | The offset repository to use in order to locally store the offset of each partition of the topic. Defining one will disable the autocommit. The option is a org.apache.camel.spi.StateRepository<java.lang.String, java.lang.String> type. | StateRepository | |
camel.component.kafka.partition-assignor | The class name of the partition assignment strategy that the client will use to distribute partition ownership amongst consumer instances when group management is used. | org.apache.kafka.clients.consumer.RangeAssignor | String |
camel.component.kafka.partition-key | The partition to which the record will be sent (or null if no partition was specified). If this option has been configured then it takes precedence over the header KafkaConstants#PARTITION_KEY. | Integer |
camel.component.kafka.partitioner | The partitioner class for partitioning messages amongst sub-topics. The default partitioner is based on the hash of the key. | org.apache.kafka.clients.producer.internals.DefaultPartitioner | String |
camel.component.kafka.poll-exception-strategy | To use a custom strategy with the consumer to control how to handle exceptions thrown from the Kafka broker while polling messages. The option is a org.apache.camel.component.kafka.PollExceptionStrategy type. | PollExceptionStrategy |
camel.component.kafka.poll-on-error | What to do if kafka threw an exception while polling for new messages. Will by default use the value from the component configuration unless an explicit value has been configured on the endpoint level. DISCARD will discard the message and continue to poll the next message. ERROR_HANDLER will use Camel’s error handler to process the exception, and afterwards continue to poll the next message. RECONNECT will re-connect the consumer and try to poll the message again. RETRY will let the consumer retry polling the same message again. STOP will stop the consumer (it has to be manually started/restarted if the consumer should be able to consume messages again). | PollOnError |
camel.component.kafka.poll-timeout-ms | The timeout used when polling the KafkaConsumer. The option is a java.lang.Long type. | 5000 | Long |
camel.component.kafka.producer-batch-size | The producer will attempt to batch records together into fewer requests whenever multiple records are being sent to the same partition. This helps performance on both the client and the server. This configuration controls the default batch size in bytes. No attempt will be made to batch records larger than this size. Requests sent to brokers will contain multiple batches, one for each partition with data available to be sent. A small batch size will make batching less common and may reduce throughput (a batch size of zero will disable batching entirely). A very large batch size may use memory a bit more wastefully as we will always allocate a buffer of the specified batch size in anticipation of additional records. | 16384 | Integer |
camel.component.kafka.queue-buffering-max-messages | The maximum number of unsent messages that can be queued up by the producer when using async mode before either the producer must be blocked or data must be dropped. | 10000 | Integer |
camel.component.kafka.receive-buffer-bytes | The size of the TCP receive buffer (SO_RCVBUF) to use when reading data. | 65536 | Integer |
camel.component.kafka.reconnect-backoff-max-ms | The maximum amount of time in milliseconds to wait when reconnecting to a broker that has repeatedly failed to connect. If provided, the backoff per host will increase exponentially for each consecutive connection failure, up to this maximum. After calculating the backoff increase, 20% random jitter is added to avoid connection storms. | 1000 | Integer |
camel.component.kafka.reconnect-backoff-ms | The amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all requests sent by the consumer to the broker. | 50 | Integer |
camel.component.kafka.record-metadata | Whether the producer should store the RecordMetadata results from sending to Kafka. The results are stored in a List containing the RecordMetadata metadata’s. The list is stored on a header with the key KafkaConstants#KAFKA_RECORDMETA. | true | Boolean |
camel.component.kafka.request-required-acks | The number of acknowledgments the producer requires the leader to have received before considering a request complete. This controls the durability of records that are sent. The following settings are common: acks=0 If set to zero then the producer will not wait for any acknowledgment from the server at all. The record will be immediately added to the socket buffer and considered sent. No guarantee can be made that the server has received the record in this case, and the retries configuration will not take effect (as the client won’t generally know of any failures). The offset given back for each record will always be set to -1. acks=1 This will mean the leader will write the record to its local log but will respond without awaiting full acknowledgement from all followers. In this case should the leader fail immediately after acknowledging the record but before the followers have replicated it then the record will be lost. acks=all This means the leader will wait for the full set of in-sync replicas to acknowledge the record. This guarantees that the record will not be lost as long as at least one in-sync replica remains alive. This is the strongest available guarantee. | 1 | String |
camel.component.kafka.request-timeout-ms | The amount of time the broker will wait trying to meet the request.required.acks requirement before sending back an error to the client. | 30000 | Integer |
camel.component.kafka.resume-strategy | This option allows the user to set a custom resume strategy. The resume strategy is executed when partitions are assigned (i.e.: when connecting or reconnecting). It allows implementations to customize how to resume operations and serve as a more flexible alternative to the seekTo and the offsetRepository mechanisms. See the KafkaConsumerResumeStrategy for implementation details. This option does not affect the auto commit setting. It is likely that implementations using this setting will also want to evaluate using the manual commit option along with this. The option is a org.apache.camel.component.kafka.consumer.support.KafkaConsumerResumeStrategy type. | KafkaConsumerResumeStrategy |
camel.component.kafka.retries | Setting a value greater than zero will cause the client to resend any record whose send fails with a potentially transient error. Note that this retry is no different than if the client resent the record upon receiving the error. Allowing retries will potentially change the ordering of records because if two records are sent to a single partition, and the first fails and is retried but the second succeeds, then the second record may appear first. | 0 | Integer |
camel.component.kafka.retry-backoff-ms | Before each retry, the producer refreshes the metadata of relevant topics to see if a new leader has been elected. Since leader election takes a bit of time, this property specifies the amount of time that the producer waits before refreshing the metadata. | 100 | Integer |
camel.component.kafka.sasl-jaas-config | Expose the kafka sasl.jaas.config parameter. Example: org.apache.kafka.common.security.plain.PlainLoginModule required username=USERNAME password=PASSWORD;. | String |
camel.component.kafka.sasl-kerberos-service-name | The Kerberos principal name that Kafka runs as. This can be defined either in Kafka’s JAAS config or in Kafka’s config. | String | |
camel.component.kafka.sasl-mechanism | The Simple Authentication and Security Layer (SASL) Mechanism used. | GSSAPI | String |
camel.component.kafka.schema-registry-u-r-l | URL of the Confluent Platform schema registry servers to use. The format is host1:port1,host2:port2. This is known as schema.registry.url in the Confluent Platform documentation. This option is only available in the Confluent Platform (not standard Apache Kafka). | String | |
camel.component.kafka.security-protocol | Protocol used to communicate with brokers. SASL_PLAINTEXT, PLAINTEXT and SSL are supported. | PLAINTEXT | String |
camel.component.kafka.seek-to | Set if KafkaConsumer will read from the beginning or the end on startup: beginning reads from the beginning, end reads from the end. This replaces the earlier property seekToBeginning. | String |
camel.component.kafka.send-buffer-bytes | Socket write buffer size. | 131072 | Integer |
camel.component.kafka.session-timeout-ms | The timeout used to detect failures when using Kafka’s group management facilities. | 10000 | Integer |
camel.component.kafka.shutdown-timeout | Timeout in milliseconds to wait gracefully for the consumer or producer to shutdown and terminate its worker threads. | 30000 | Integer |
camel.component.kafka.specific-avro-reader | This enables the use of a specific Avro reader for use with the Confluent Platform schema registry and the io.confluent.kafka.serializers.KafkaAvroDeserializer. This option is only available in the Confluent Platform (not standard Apache Kafka). | false | Boolean |
camel.component.kafka.ssl-cipher-suites | A list of cipher suites. This is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol. By default all the available cipher suites are supported. | String |
camel.component.kafka.ssl-context-parameters | SSL configuration using a Camel SSLContextParameters object. If configured it’s applied before the other SSL endpoint parameters. NOTE: Kafka only supports loading keystore from file locations, so prefix the location with file: in the KeyStoreParameters.resource option. The option is a org.apache.camel.support.jsse.SSLContextParameters type. | SSLContextParameters | |
camel.component.kafka.ssl-enabled-protocols | The list of protocols enabled for SSL connections. TLSv1.2, TLSv1.1 and TLSv1 are enabled by default. | String | |
camel.component.kafka.ssl-endpoint-algorithm | The endpoint identification algorithm to validate server hostname using server certificate. | https | String |
camel.component.kafka.ssl-key-password | The password of the private key in the key store file. This is optional for client. | String | |
camel.component.kafka.ssl-keymanager-algorithm | The algorithm used by key manager factory for SSL connections. Default value is the key manager factory algorithm configured for the Java Virtual Machine. | SunX509 | String |
camel.component.kafka.ssl-keystore-location | The location of the key store file. This is optional for client and can be used for two-way authentication for client. | String | |
camel.component.kafka.ssl-keystore-password | The store password for the key store file.This is optional for client and only needed if ssl.keystore.location is configured. | String | |
camel.component.kafka.ssl-keystore-type | The file format of the key store file. This is optional for client. Default value is JKS. | JKS | String |
camel.component.kafka.ssl-protocol | The SSL protocol used to generate the SSLContext. Default setting is TLS, which is fine for most cases. Allowed values in recent JVMs are TLS, TLSv1.1 and TLSv1.2. SSL, SSLv2 and SSLv3 may be supported in older JVMs, but their usage is discouraged due to known security vulnerabilities. | String | |
camel.component.kafka.ssl-provider | The name of the security provider used for SSL connections. Default value is the default security provider of the JVM. | String | |
camel.component.kafka.ssl-trustmanager-algorithm | The algorithm used by trust manager factory for SSL connections. Default value is the trust manager factory algorithm configured for the Java Virtual Machine. | PKIX | String |
camel.component.kafka.ssl-truststore-location | The location of the trust store file. | String | |
camel.component.kafka.ssl-truststore-password | The password for the trust store file. | String | |
camel.component.kafka.ssl-truststore-type | The file format of the trust store file. Default value is JKS. | JKS | String |
camel.component.kafka.synchronous | Sets whether synchronous processing should be strictly used. | false | Boolean |
camel.component.kafka.topic-is-pattern | Whether the topic is a pattern (regular expression). This can be used to subscribe to dynamic number of topics matching the pattern. | false | Boolean |
camel.component.kafka.use-global-ssl-context-parameters | Enable usage of global SSL context parameters. | false | Boolean |
camel.component.kafka.value-deserializer | Deserializer class for value that implements the Deserializer interface. | org.apache.kafka.common.serialization.StringDeserializer | String |
camel.component.kafka.value-serializer | The serializer class for messages. | org.apache.kafka.common.serialization.StringSerializer | String |
camel.component.kafka.worker-pool | To use a custom worker pool for continuing to route the Exchange after the kafka server has acknowledged the message that was sent to it from the KafkaProducer using asynchronous non-blocking processing. If using this option then you must handle the lifecycle of the thread pool to shut the pool down when no longer needed. The option is a java.util.concurrent.ExecutorService type. | ExecutorService |
camel.component.kafka.worker-pool-core-size | Number of core threads for the worker pool used for continuing to route the Exchange after the kafka server has acknowledged the message that was sent to it from the KafkaProducer using asynchronous non-blocking processing. | 10 | Integer |
camel.component.kafka.worker-pool-max-size | Maximum number of threads for the worker pool used for continuing to route the Exchange after the kafka server has acknowledged the message that was sent to it from the KafkaProducer using asynchronous non-blocking processing. | 20 | Integer |
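As a brief illustration of how these auto-configuration properties are used together, the following application.properties sketch tunes producer batching with a few of the options listed above. The broker address and the chosen values are only examples, not recommendations:
# illustrative values only; adjust to your broker and workload
camel.component.kafka.brokers=localhost:9092
camel.component.kafka.linger-ms=5
camel.component.kafka.producer-batch-size=32768
camel.component.kafka.request-required-acks=all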
Chapter 30. Kamelet
Both producer and consumer are supported
The Kamelet Component provides support for interacting with the Camel Route Template engine using Endpoint semantic.
30.1. URI format
kamelet:templateId/routeId[?options]
30.2. Configuring Options
Camel components are configured on two separate levels:
- component level
- endpoint level
30.2.1. Configuring Component Options
The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth.
Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all.
Configuring components can be done with the Component DSL, in a configuration file (application.properties|yaml), or directly with Java code.
30.2.2. Configuring Endpoint Options
Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both.
Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints.
A good practice when configuring options is to use Property Placeholders, which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse.
The following two sections lists all the options, firstly for the component followed by the endpoint.
30.3. Component Options
The Kamelet component supports 9 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
location (common) | The location(s) of the Kamelets on the file system. Multiple locations can be set separated by comma. | classpath:/kamelets | String |
routeProperties (common) | Set route local parameters. | Map | |
templateProperties (common) | Set template local parameters. | Map | |
bridgeErrorHandler (consumer) | Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions that occur while the consumer is trying to pick up incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN or ERROR level and ignored. | false | boolean |
block (producer) | If sending a message to a kamelet endpoint which has no active consumer, then we can tell the producer to block and wait for the consumer to become active. | true | boolean |
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
timeout (producer) | The timeout value to use if block is enabled. | 30000 | long |
autowiredEnabled (advanced) | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | boolean |
routeTemplateLoaderListener (advanced) | Autowired To plug in a custom listener for when the Kamelet component is loading Kamelets from external resources. | RouteTemplateLoaderListener |
30.4. Endpoint Options
The Kamelet endpoint is configured using URI syntax:
kamelet:templateId/routeId
with the following path and query parameters:
30.4.1. Path Parameters (2 parameters)
Name | Description | Default | Type |
---|---|---|---|
templateId (common) | Required The Route Template ID. | String | |
routeId (common) | The Route ID. Default value notice: The ID will be auto-generated if not provided. | String |
30.4.2. Query Parameters (8 parameters)
Name | Description | Default | Type |
---|---|---|---|
location (common) | Location of the Kamelet to use which can be specified as a resource from file system, classpath etc. The location cannot use wildcards, and must refer to a file including extension, for example file:/etc/foo-kamelet.xml. | String | |
bridgeErrorHandler (consumer) | Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions that occur while the consumer is trying to pick up incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN or ERROR level and ignored. | false | boolean |
exceptionHandler (consumer (advanced)) | To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. | ExceptionHandler | |
exchangePattern (consumer (advanced)) | Sets the exchange pattern when the consumer creates an exchange. Enum values: | ExchangePattern | |
block (producer) | If sending a message to a direct endpoint which has no active consumer, then we can tell the producer to block and wait for the consumer to become active. | true | boolean |
failIfNoConsumers (producer) | Whether the producer should fail by throwing an exception, when sending to a kamelet endpoint with no active consumers. | true | boolean |
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
timeout (producer) | The timeout value to use if block is enabled. | 30000 | long |
The kamelet endpoint is lenient, which means that the endpoint accepts additional parameters that are passed to the engine and consumed upon route materialization.
30.5. Discovery
If a Route Template is not found, the kamelet endpoint tries to load the related kamelet definition from the file system (by default classpath:/kamelets). The default resolution mechanism expects kamelet files to have the extension .kamelet.yaml.
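For example, assuming you also keep Kamelet definitions in a hypothetical /etc/camel/kamelets directory, the location option can list both places in application.properties (multiple locations are comma separated, as noted in the options above):
camel.component.kamelet.location=classpath:/kamelets,file:/etc/camel/kamelets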
30.6. Samples
Kamelets can be used as if they were standard Camel components. For example, suppose that we have created a Route Template as follows:
routeTemplate("setMyBody") .templateParameter("bodyValue") .from("kamelet:source") .setBody().constant("{{bodyValue}}");
To let the Kamelet component wire the materialized route to the caller processor, we need to be able to identify the input and output endpoints of the route. This is done by using kamelet:source to mark the input endpoint and kamelet:sink for the output endpoint.
Then the template can be instantiated and invoked as shown below:
from("direct:setMyBody") .to("kamelet:setMyBody?bodyValue=myKamelet");
Behind the scenes, the Kamelet component does the following things:
- It instantiates a route out of the Route Template identified by the given templateId path parameter (in this case setMyBody).
- It will act like the direct component and connect the current route to the materialized one.
If you had to do this programmatically, it would look something like this:
routeTemplate("setMyBody") .templateParameter("bodyValue") .from("direct:{{foo}}") .setBody().constant("{{bodyValue}}"); TemplatedRouteBuilder.builder(context, "setMyBody") .parameter("foo", "bar") .parameter("bodyValue", "myKamelet") .add(); from("direct:template") .to("direct:bar");
30.7. Spring Boot Auto-Configuration
When using kamelet with Spring Boot make sure to use the following Maven dependency to have support for auto configuration:
<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-kamelet-starter</artifactId> </dependency>
The component supports 10 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
camel.component.kamelet.autowired-enabled | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | Boolean |
camel.component.kamelet.block | If sending a message to a kamelet endpoint which has no active consumer, then we can tell the producer to block and wait for the consumer to become active. | true | Boolean |
camel.component.kamelet.bridge-error-handler | Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions that occur while the consumer is trying to pick up incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN or ERROR level and ignored. | false | Boolean |
camel.component.kamelet.enabled | Whether to enable auto configuration of the kamelet component. This is enabled by default. | Boolean | |
camel.component.kamelet.lazy-start-producer | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | Boolean |
camel.component.kamelet.location | The location(s) of the Kamelets on the file system. Multiple locations can be set separated by comma. | classpath:/kamelets | String |
camel.component.kamelet.route-properties | Set route local parameters. | Map | |
camel.component.kamelet.route-template-loader-listener | To plug in a custom listener for when the Kamelet component is loading Kamelets from external resources. The option is a org.apache.camel.spi.RouteTemplateLoaderListener type. | RouteTemplateLoaderListener |
camel.component.kamelet.template-properties | Set template local parameters. | Map | |
camel.component.kamelet.timeout | The timeout value to use if block is enabled. | 30000 | Long |
Chapter 31. Language
Only producer is supported
The Language component allows you to send an Exchange to an endpoint which executes a script in any of the supported Languages in Camel. By having a component to execute language scripts, it allows more dynamic routing capabilities. For example, by using the Routing Slip or Dynamic Router EIPs you can send messages to language endpoints where the script is dynamically defined as well.
This component is provided out of the box in camel-core and hence no additional JARs are needed. You only have to include additional Camel components if the language of choice mandates it, such as the Groovy or JavaScript languages.
31.1. URI format
language://languageName[:script][?options]
You can refer to an external resource for the script using the same notation as supported by the other Languages in Camel.
language://languageName:resource:scheme:location[?options]
31.2. Configuring Options
Camel components are configured on two separate levels:
- component level
- endpoint level
31.2.1. Configuring Component Options
The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth.
Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all.
Configuring components can be done with the Component DSL, in a configuration file (application.properties|yaml), or directly with Java code.
31.2.2. Configuring Endpoint Options
Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both.
Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints.
A good practice when configuring options is to use Property Placeholders, which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse.
The following two sections lists all the options, firstly for the component followed by the endpoint.
31.3. Component Options
The Language component supports 2 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
autowiredEnabled (advanced) | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | boolean |
31.4. Endpoint Options
The Language endpoint is configured using URI syntax:
language:languageName:resourceUri
with the following path and query parameters:
31.4.1. Path Parameters (2 parameters)
Name | Description | Default | Type |
---|---|---|---|
languageName (producer) | Required Sets the name of the language to use. Enum values: | String | |
resourceUri (producer) | Path to the resource, or a reference to lookup a bean in the Registry to use as the resource. | String |
31.4.2. Query Parameters (7 parameters)
Name | Description | Default | Type |
---|---|---|---|
allowContextMapAll (producer) | Sets whether the context map should allow access to all details. By default only the message body and headers can be accessed. This option can be enabled for full access to the current Exchange and CamelContext. Doing so imposes a potential security risk as this opens access to the full power of the CamelContext API. | false | boolean |
binary (producer) | Whether the script is binary content or text content. By default the script is read as text content (eg java.lang.String). | false | boolean |
cacheScript (producer) | Whether to cache the compiled script and reuse it. Notice that reusing the script can cause side effects from processing one Camel org.apache.camel.Exchange to the next org.apache.camel.Exchange. | false | boolean |
contentCache (producer) | Sets whether to use resource content cache or not. | true | boolean |
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
script (producer) | Sets the script to execute. | String | |
transform (producer) | Whether or not the result of the script should be used as the message body. This option defaults to true. | true | boolean |
31.5. Message Headers
The following message headers can be used to affect the behavior of the component
Header | Description |
---|---|
CamelLanguageScript (Exchange.LANGUAGE_SCRIPT) | The script to execute provided in the header. Takes precedence over script configured on the endpoint. |
31.6. Examples
For example, you can use the Simple language as a Message Translator to transform a message.
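A minimal sketch of such a route, assuming a hypothetical direct:hello input and an inline Simple script as the endpoint script:
from("direct:hello").to("language:simple:Hello ${body}");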
You can also provide the script as a header as shown below. Here we use XPath language to extract the text from the <foo> tag.
Object out = producer.requestBodyAndHeader("language:xpath", "<foo>Hello World</foo>", Exchange.LANGUAGE_SCRIPT, "/foo/text()"); assertEquals("Hello World", out);
31.7. Loading scripts from resources
You can specify a resource uri for a script to load in either the endpoint uri, or in the Exchange.LANGUAGE_SCRIPT header. The uri must start with one of the following schemes: file:, classpath:, or http:
By default the script is loaded once and cached. However you can disable the contentCache option and have the script loaded on each evaluation. For example, if the file myscript.txt is changed on disk, the updated script is used, as shown in the sketch below.
You can refer to the resource similar to the other Languages in Camel by prefixing with "resource:" as shown below.
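A sketch of what this could look like in the Java DSL, assuming the script lives at the hypothetical path target/script/myscript.txt; the resource: prefix loads the script as an external resource, and contentCache=false reloads it on each evaluation:
from("direct:start").to("language:simple:resource:file:target/script/myscript.txt?contentCache=false");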
31.8. Spring Boot Auto-Configuration
When using language with Spring Boot make sure to use the following Maven dependency to have support for auto configuration:
<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-language-starter</artifactId> </dependency>
The component supports 3 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
camel.component.language.autowired-enabled | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | Boolean |
camel.component.language.enabled | Whether to enable auto configuration of the language component. This is enabled by default. | Boolean | |
camel.component.language.lazy-start-producer | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | Boolean |
Chapter 32. Log
Only producer is supported
The Log component logs message exchanges to the underlying logging mechanism.
Camel uses SLF4J which allows you to configure logging via, among others:
- Log4j
- Logback
- Java Util Logging
32.1. URI format
log:loggingCategory[?options]
Where loggingCategory is the name of the logging category to use. You can append query options to the URI in the following format,
?option=value&option=value&…
Using Logger instance from the Registry
If there’s a single instance of org.slf4j.Logger found in the Registry, the loggingCategory is no longer used to create the logger instance. The registered instance is used instead. It is also possible to reference a particular Logger instance using the ?logger=#myLogger URI parameter. Finally, if there is neither a registered Logger nor a logger URI parameter, the logger instance is created using the loggingCategory.
For example, a log endpoint typically specifies the logging level using the level option, as follows:
log:org.apache.camel.example?level=DEBUG
The default logger logs every exchange (regular logging). But Camel also ships with the Throughput logger, which is used whenever the groupSize option is specified.
Also a log in the DSL
There is also a log directly in the DSL, but it has a different purpose. It's meant for lightweight and human-readable logs. See more details at LogEIP.
32.2. Configuring Options
Camel components are configured on two separate levels:
- component level
- endpoint level
32.2.1. Configuring Component Options
The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth.
Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all.
Configuring components can be done with the Component DSL, in a configuration file (application.properties|yaml), or directly with Java code.
32.2.2. Configuring Endpoint Options
Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both.
Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints.
A good practice when configuring options is to use Property Placeholders, which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse.
The following two sections lists all the options, firstly for the component followed by the endpoint.
32.3. Component Options
The Log component supports 3 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
autowiredEnabled (advanced) | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | boolean |
exchangeFormatter (advanced) | Autowired Sets a custom ExchangeFormatter to convert the Exchange to a String suitable for logging. If not specified, we default to DefaultExchangeFormatter. | ExchangeFormatter |
32.4. Endpoint Options
The Log endpoint is configured using URI syntax:
log:loggerName
with the following path and query parameters:
32.4.1. Path Parameters (1 parameter)
Name | Description | Default | Type |
---|---|---|---|
loggerName (producer) | Required Name of the logging category to use. | String |
32.4.2. Query Parameters (27 parameters)
Name | Description | Default | Type |
---|---|---|---|
groupActiveOnly (producer) | If true, will hide stats when no new messages have been received for a time interval; if false, show stats regardless of message traffic. | true | Boolean |
groupDelay (producer) | Set the initial delay for stats (in millis). | Long | |
groupInterval (producer) | If specified will group message stats by this time interval (in millis). | Long | |
groupSize (producer) | An integer that specifies a group size for throughput logging. | Integer | |
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
level (producer) | Logging level to use. The default value is INFO. Enum values: TRACE, DEBUG, INFO, WARN, ERROR, OFF | INFO | String |
logMask (producer) | If true, mask sensitive information like password or passphrase in the log. | Boolean | |
marker (producer) | An optional Marker name to use. | String | |
exchangeFormatter (advanced) | To use a custom exchange formatter. | ExchangeFormatter | |
maxChars (formatting) | Limits the number of characters logged per line. | 10000 | int |
multiline (formatting) | If enabled then each piece of information is output on a new line. | false | boolean |
showAll (formatting) | Quick option for turning all options on (multiline and maxChars have to be set manually if they are to be used). | false | boolean |
showAllProperties (formatting) | Show all of the exchange properties (both internal and custom). | false | boolean |
showBody (formatting) | Show the message body. | true | boolean |
showBodyType (formatting) | Show the body Java type. | true | boolean |
showCaughtException (formatting) | If the exchange has a caught exception, show the exception message (no stack trace). A caught exception is stored as a property on the exchange (using the key org.apache.camel.Exchange#EXCEPTION_CAUGHT) and for instance a doCatch can catch exceptions. | false | boolean |
showException (formatting) | If the exchange has an exception, show the exception message (no stacktrace). | false | boolean |
showExchangeId (formatting) | Show the unique exchange ID. | false | boolean |
showExchangePattern (formatting) | Shows the Message Exchange Pattern (or MEP for short). | true | boolean |
showFiles (formatting) | If enabled Camel will output files. | false | boolean |
showFuture (formatting) | If enabled Camel will wait for Future objects to complete in order to obtain the payload to be logged. | false | boolean |
showHeaders (formatting) | Show the message headers. | false | boolean |
showProperties (formatting) | Show the exchange properties (only custom). Use showAllProperties to show both internal and custom properties. | false | boolean |
showStackTrace (formatting) | Show the stack trace, if an exchange has an exception. Only effective if one of showAll, showException or showCaughtException are enabled. | false | boolean |
showStreams (formatting) | Whether Camel should show stream bodies or not (eg such as java.io.InputStream). Beware if you enable this option then you may not be able to access the message body later, as the stream has already been read by this logger. To remedy this you will have to use Stream Caching. | false | boolean |
skipBodyLineSeparator (formatting) | Whether to skip line separators when logging the message body. This allows to log the message body in one line, setting this option to false will preserve any line separators from the body, which then will log the body as is. | true | boolean |
style (formatting) | Sets the output style to use. Enum values: | Default | OutputStyle |
32.5. Regular logger sample
In the route below we log the incoming orders at DEBUG
level before the order is processed:
from("activemq:orders").to("log:com.mycompany.order?level=DEBUG").to("bean:processOrder");
Or using Spring XML to define the route:
<route> <from uri="activemq:orders"/> <to uri="log:com.mycompany.order?level=DEBUG"/> <to uri="bean:processOrder"/> </route>
32.6. Regular logger with formatter sample
In the route below we log the incoming orders at INFO
level before the order is processed.
from("activemq:orders"). to("log:com.mycompany.order?showAll=true&multiline=true").to("bean:processOrder");
32.7. Throughput logger with groupSize sample
In the route below we log the throughput of the incoming orders at DEBUG
level grouped by 10 messages.
from("activemq:orders"). to("log:com.mycompany.order?level=DEBUG&groupSize=10").to("bean:processOrder");
32.8. Throughput logger with groupInterval sample
This route will result in message stats logged every 10s, with an initial 60s delay and stats should be displayed even if there isn’t any message traffic.
from("activemq:orders"). to("log:com.mycompany.order?level=DEBUG&groupInterval=10000&groupDelay=60000&groupActiveOnly=false").to("bean:processOrder");
The following will be logged:
"Received: 1000 new messages, with total 2000 so far. Last group took: 10000 millis which is: 100 messages per second. average: 100"
32.9. Masking sensitive information like password
You can enable security masking for logging by setting the logMask flag to true. Note that this option also affects the Log EIP.
To enable mask in Java DSL at CamelContext level:
camelContext.setLogMask(true);
And in XML:
<camelContext logMask="true">
You can also turn it on|off at endpoint level. To enable mask in Java DSL at endpoint level, add logMask=true option in the URI for the log endpoint:
from("direct:start").to("log:foo?logMask=true");
And in XML:
<route> <from uri="direct:foo"/> <to uri="log:foo?logMask=true"/> </route>
org.apache.camel.support.processor.DefaultMaskingFormatter is used for the masking by default. If you want to use a custom masking formatter, put it into the registry with the name CamelCustomLogMask. Note that the masking formatter must implement org.apache.camel.spi.MaskingFormatter.
32.10. Full customization of the logging output
With the options outlined in this section, you can control much of the output of the logger. However, log lines will always follow this structure:
Exchange[Id:ID-machine-local-50656-1234567901234-1-2, ExchangePattern:InOut, Properties:{CamelToEndpoint=log://org.apache.camel.component.log.TEST?showAll=true, CamelCreatedTimestamp=Thu Mar 28 00:00:00 WET 2013}, Headers:{breadcrumbId=ID-machine-local-50656-1234567901234-1-1}, BodyType:String, Body:Hello World, Out: null]
This format is unsuitable in some cases, perhaps because you need to…
- Filter the headers and properties that are printed, to strike a balance between insight and verbosity.
- Adjust the log message to whatever you deem most readable.
- Tailor log messages for digestion by log mining systems, e.g. Splunk.
- Print specific body types differently.
Whenever you require absolute customization, you can create a class that implements the org.apache.camel.spi.ExchangeFormatter interface. Within the format(Exchange) method you have access to the full Exchange, so you can select and extract the precise information you need, format it in a custom manner and return it. The return value will become the final log message.
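A minimal sketch of such a formatter, reusing the hypothetical com.xyz.MyCustomExchangeFormatter class name used later in this section and printing only the exchange ID and body:
package com.xyz;

import org.apache.camel.Exchange;
import org.apache.camel.spi.ExchangeFormatter;

public class MyCustomExchangeFormatter implements ExchangeFormatter {

    @Override
    public String format(Exchange exchange) {
        // Build a compact, custom log line from just the parts of the Exchange we care about
        return "id=" + exchange.getExchangeId() + " body=" + exchange.getMessage().getBody(String.class);
    }
}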
You can have the Log component pick up your custom ExchangeFormatter in either of two ways:
Explicitly instantiating the LogComponent in your Registry:
<bean name="log" class="org.apache.camel.component.log.LogComponent"> <property name="exchangeFormatter" ref="myCustomFormatter" /> </bean>
32.10.1. Convention over configuration
Simply register a bean with the name logFormatter; the Log component is intelligent enough to pick it up automatically.
<bean name="logFormatter" class="com.xyz.MyCustomExchangeFormatter" />
The ExchangeFormatter gets applied to all Log endpoints within that Camel Context. If you need different ExchangeFormatters for different endpoints, just instantiate the LogComponent as many times as needed, and use the relevant bean name as the endpoint prefix.
When using a custom log formatter, you can specify parameters in the log uri, which get configured on the custom log formatter. When you do that, you should define the "logFormatter" bean as prototype scoped so it's not shared if you have different parameters. For example:
<bean name="logFormatter" class="com.xyz.MyCustomExchangeFormatter" scope="prototype"/>
And then we can have Camel routes using the log uri with different options:
<to uri="log:foo?param1=foo&param2=100"/> <to uri="log:bar?param1=bar&param2=200"/>
32.11. Spring Boot Auto-Configuration
When using log with Spring Boot make sure to use the following Maven dependency to have support for auto configuration:
<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-log-starter</artifactId> </dependency>
The component supports 4 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
camel.component.log.autowired-enabled | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | Boolean |
camel.component.log.enabled | Whether to enable auto configuration of the log component. This is enabled by default. | Boolean | |
camel.component.log.exchange-formatter | Sets a custom ExchangeFormatter to convert the Exchange to a String suitable for logging. If not specified, we default to DefaultExchangeFormatter. The option is a org.apache.camel.spi.ExchangeFormatter type. | ExchangeFormatter | |
camel.component.log.lazy-start-producer | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | Boolean |
Chapter 33. Mail
Both producer and consumer are supported
The Mail component provides access to Email via Spring’s Mail support and the underlying JavaMail system.
Maven users will need to add the following dependency to their pom.xml
for this component:
<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-mail</artifactId> <version>${camel.version}</version> <!-- use the same version as your Camel core version --> </dependency>
POP3 or IMAP
POP3 has some limitations and end users are encouraged to use IMAP if possible.
Using mock-mail for testing
You can use a mock framework for unit testing, which allows you to test without the need for a real mail server. However you should remember to not include the mock-mail when you go into production or other environments where you need to send mails to a real mail server. Just the presence of the mock-javamail.jar on the classpath means that it will kick in and avoid sending the mails.
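If you do use such a mock library, a common approach (assuming the widely used org.jvnet.mock-javamail artifact; verify the exact coordinates for your setup) is to declare it with test scope so it never reaches the production classpath:
<dependency> <groupId>org.jvnet.mock-javamail</groupId> <artifactId>mock-javamail</artifactId> <scope>test</scope> <!-- test scope keeps the mock off the runtime classpath --> </dependency>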
33.1. URI format
Mail endpoints can have one of the following URI formats (for the protocols, SMTP, POP3, or IMAP, respectively):
smtp://[username@]host[:port][?options] pop3://[username@]host[:port][?options] imap://[username@]host[:port][?options]
The mail component also supports secure variants of these protocols (layered over SSL). You can enable the secure protocols by adding s
to the scheme:
smtps://[username@]host[:port][?options] pop3s://[username@]host[:port][?options] imaps://[username@]host[:port][?options]
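As an illustration of the producer side, a route could send mail over the secure SMTP variant like this; the host, credentials, and addresses are placeholders only:
from("direct:mail").to("smtps://myuser@smtp.example.com:465?password=secret&to=info@example.com&subject=Hello");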
33.2. Configuring Options
Camel components are configured on two separate levels:
- component level
- endpoint level
33.2.1. Configuring Component Options
The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth.
Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all.
Configuring components can be done with the Component DSL, in a configuration file (application.properties|yaml), or directly with Java code.
33.2.2. Configuring Endpoint Options
Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both.
Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints.
A good practice when configuring options is to use Property Placeholders, which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse.
The following two sections lists all the options, firstly for the component followed by the endpoint.
33.3. Component Options
The Mail component supports 43 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
bridgeErrorHandler (consumer) | Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions that occur while the consumer is trying to pick up incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN or ERROR level and ignored. | false | boolean |
closeFolder (consumer) | Whether the consumer should close the folder after polling. If this option is set to false and disconnect=false as well, the consumer keeps the folder open between polls. | true | boolean |
copyTo (consumer) | After processing a mail message, it can be copied to a mail folder with the given name. You can override this configuration value, with a header with the key copyTo, allowing you to copy messages to folder names configured at runtime. | String | |
decodeFilename (consumer) | If set to true, the MimeUtility.decodeText method will be used to decode the filename. This is similar to setting JVM system property mail.mime.encodefilename. | false | boolean |
delete (consumer) | Deletes the messages after they have been processed. This is done by setting the DELETED flag on the mail message. If false, the SEEN flag is set instead. As of Camel 2.10 you can override this configuration option by setting a header with the key delete to determine if the mail should be deleted or not. | false | boolean |
disconnect (consumer) | Whether the consumer should disconnect after polling. If enabled this forces Camel to connect on each poll. | false | boolean |
handleFailedMessage (consumer) | If the mail consumer cannot retrieve a given mail message, then this option allows the caused exception to be handled by the consumer’s error handler. By enabling the bridge error handler on the consumer, the Camel routing error handler can handle the exception instead. The default behavior would be the consumer throws an exception and no mails from the batch would be able to be routed by Camel. | false | boolean |
mimeDecodeHeaders (consumer) | This option enables transparent MIME decoding and unfolding for mail headers. | false | boolean |
moveTo (consumer) | After processing a mail message, it can be moved to a mail folder with the given name. You can override this configuration value, with a header with the key moveTo, allowing you to move messages to folder names configured at runtime. | String | |
peek (consumer) | Will mark the javax.mail.Message as peeked before processing the mail message. This applies to IMAPMessage messages types only. By using peek the mail will not be eager marked as SEEN on the mail server, which allows us to rollback the mail message if there is an error processing in Camel. | true | boolean |
skipFailedMessage (consumer) | If the mail consumer cannot retrieve a given mail message, then this option allows to skip the message and move on to retrieve the next mail message. The default behavior would be the consumer throws an exception and no mails from the batch would be able to be routed by Camel. | false | boolean |
unseen (consumer) | Whether to limit by unseen mails only. | true | boolean |
fetchSize (consumer (advanced)) | Sets the maximum number of messages to consume during a poll. This can be used to avoid overloading a mail server, if a mailbox folder contains a lot of messages. Default value of -1 means no fetch size and all messages will be consumed. Setting the value to 0 is a special corner case, where Camel will not consume any messages at all. | -1 | int |
folderName (consumer (advanced)) | The folder to poll. | INBOX | String |
mapMailMessage (consumer (advanced)) | Specifies whether Camel should map the received mail message to Camel body/headers/attachments. If set to true, the body of the mail message is mapped to the body of the Camel IN message, the mail headers are mapped to IN headers, and the attachments to Camel IN attachment message. If this option is set to false then the IN message contains a raw javax.mail.Message. You can retrieve this raw message by calling exchange.getIn().getBody(javax.mail.Message.class). | true | boolean |
bcc (producer) | Sets the BCC email address. Separate multiple email addresses with comma. | String | |
cc (producer) | Sets the CC email address. Separate multiple email addresses with comma. | String | |
from (producer) | The from email address. | camel@localhost | String |
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
replyTo (producer) | The Reply-To recipients (the receivers of the response mail). Separate multiple email addresses with a comma. | String | |
subject (producer) | The Subject of the message being sent. Note: Setting the subject in the header takes precedence over this option. | String | |
to (producer) | Sets the To email address. Separate multiple email addresses with comma. | String | |
javaMailSender (producer (advanced)) | To use a custom org.apache.camel.component.mail.JavaMailSender for sending emails. | JavaMailSender | |
additionalJavaMailProperties (advanced) | Sets additional java mail properties, that will append/override any default properties that is set based on all the other options. This is useful if you need to add some special options but want to keep the others as is. | Properties | |
alternativeBodyHeader (advanced) | Specifies the key to an IN message header that contains an alternative email body. For example, if you send emails in text/html format and want to provide an alternative mail body for non-HTML email clients, set the alternative mail body with this key as a header. | CamelMailAlternativeBody | String |
attachmentsContentTransferEncodingResolver (advanced) | To use a custom AttachmentsContentTransferEncodingResolver to resolve what content-type-encoding to use for attachments. | AttachmentsContentTransferEncodingResolver | |
authenticator (advanced) | The authenticator for login. If set then the password and username are ignored. Can be used for tokens which can expire and therefore must be read dynamically. | MailAuthenticator | |
autowiredEnabled (advanced) | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | boolean |
configuration (advanced) | Sets the Mail configuration. | MailConfiguration | |
connectionTimeout (advanced) | The connection timeout in milliseconds. | 30000 | int |
contentType (advanced) | The mail message content type. Use text/html for HTML mails. | text/plain | String |
contentTypeResolver (advanced) | Resolver to determine Content-Type for file attachments. | ContentTypeResolver | |
debugMode (advanced) | Enable debug mode on the underlying mail framework. The SUN Mail framework logs the debug messages to System.out by default. | false | boolean |
ignoreUnsupportedCharset (advanced) | Option to let Camel ignore unsupported charset in the local JVM when sending mails. If the charset is unsupported then charset=XXX (where XXX represents the unsupported charset) is removed from the content-type and it relies on the platform default instead. | false | boolean |
ignoreUriScheme (advanced) | Option to let Camel ignore unsupported charset in the local JVM when sending mails. If the charset is unsupported then charset=XXX (where XXX represents the unsupported charset) is removed from the content-type and it relies on the platform default instead. | false | boolean |
javaMailProperties (advanced) | Sets the java mail options. Will clear any default properties and only use the properties provided for this method. | Properties | |
session (advanced) | Specifies the mail session that camel should use for all mail interactions. Useful in scenarios where mail sessions are created and managed by some other resource, such as a JavaEE container. When using a custom mail session, then the hostname and port from the mail session will be used (if configured on the session). | Session | |
useInlineAttachments (advanced) | Whether to use disposition inline or attachment. | false | boolean |
headerFilterStrategy (filter) | To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter header to and from Camel message. | HeaderFilterStrategy | |
password (security) | The password for login. See also setAuthenticator(MailAuthenticator). | String | |
sslContextParameters (security) | To configure security using SSLContextParameters. | SSLContextParameters | |
useGlobalSslContextParameters (security) | Enable usage of global SSL context parameters. | false | boolean |
username (security) | The username for login. See also setAuthenticator(MailAuthenticator). | String |
33.4. Endpoint Options
The Mail endpoint is configured using URI syntax:
imap:host:port
with the following path and query parameters:
33.4.1. Path Parameters (2 parameters)
Name | Description | Default | Type |
---|---|---|---|
host (common) | Required The mail server host name. | String | |
port (common) | The port number of the mail server. | int |
33.4.2. Query Parameters (66 parameters)
Name | Description | Default | Type |
---|---|---|---|
bridgeErrorHandler (consumer) | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | boolean |
closeFolder (consumer) | Whether the consumer should close the folder after polling. Setting this option to false and having disconnect=false as well, then the consumer keep the folder open between polls. | true | boolean |
copyTo (consumer) | After processing a mail message, it can be copied to a mail folder with the given name. You can override this configuration value, with a header with the key copyTo, allowing you to copy messages to folder names configured at runtime. | String | |
decodeFilename (consumer) | If set to true, the MimeUtility.decodeText method will be used to decode the filename. This is similar to setting JVM system property mail.mime.encodefilename. | false | boolean |
delete (consumer) | Deletes the messages after they have been processed. This is done by setting the DELETED flag on the mail message. If false, the SEEN flag is set instead. As of Camel 2.10 you can override this configuration option by setting a header with the key delete to determine if the mail should be deleted or not. | false | boolean |
disconnect (consumer) | Whether the consumer should disconnect after polling. If enabled this forces Camel to connect on each poll. | false | boolean |
handleFailedMessage (consumer) | If the mail consumer cannot retrieve a given mail message, then this option allows to handle the caused exception by the consumer’s error handler. By enable the bridge error handler on the consumer, then the Camel routing error handler can handle the exception instead. The default behavior would be the consumer throws an exception and no mails from the batch would be able to be routed by Camel. | false | boolean |
maxMessagesPerPoll (consumer) | Specifies the maximum number of messages to gather per poll. By default, no maximum is set. Can be used to set a limit of e.g. 1000 to avoid downloading thousands of files when the server starts up. Set a value of 0 or negative to disable this option. | int | |
mimeDecodeHeaders (consumer) | This option enables transparent MIME decoding and unfolding for mail headers. | false | boolean |
moveTo (consumer) | After processing a mail message, it can be moved to a mail folder with the given name. You can override this configuration value, with a header with the key moveTo, allowing you to move messages to folder names configured at runtime. | String | |
peek (consumer) | Will mark the javax.mail.Message as peeked before processing the mail message. This applies to IMAPMessage messages types only. By using peek the mail will not be eager marked as SEEN on the mail server, which allows us to rollback the mail message if there is an error processing in Camel. | true | boolean |
sendEmptyMessageWhenIdle (consumer) | If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead. | false | boolean |
skipFailedMessage (consumer) | If the mail consumer cannot retrieve a given mail message, then this option allows to skip the message and move on to retrieve the next mail message. The default behavior would be the consumer throws an exception and no mails from the batch would be able to be routed by Camel. | false | boolean |
unseen (consumer) | Whether to limit by unseen mails only. | true | boolean |
exceptionHandler (consumer (advanced)) | To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. | ExceptionHandler | |
exchangePattern (consumer (advanced)) | Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly, InOut, InOptionalOut | ExchangePattern | |
fetchSize (consumer (advanced)) | Sets the maximum number of messages to consume during a poll. This can be used to avoid overloading a mail server, if a mailbox folder contains a lot of messages. Default value of -1 means no fetch size and all messages will be consumed. Setting the value to 0 is a special corner case, where Camel will not consume any messages at all. | -1 | int |
folderName (consumer (advanced)) | The folder to poll. | INBOX | String |
mailUidGenerator (consumer (advanced)) | A pluggable MailUidGenerator that allows to use custom logic to generate UUID of the mail message. | MailUidGenerator | |
mapMailMessage (consumer (advanced)) | Specifies whether Camel should map the received mail message to Camel body/headers/attachments. If set to true, the body of the mail message is mapped to the body of the Camel IN message, the mail headers are mapped to IN headers, and the attachments to Camel IN attachment message. If this option is set to false then the IN message contains a raw javax.mail.Message. You can retrieve this raw message by calling exchange.getIn().getBody(javax.mail.Message.class). | true | boolean |
pollStrategy (consumer (advanced)) | A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel. | PollingConsumerPollStrategy | |
postProcessAction (consumer (advanced)) | Refers to an MailBoxPostProcessAction for doing post processing tasks on the mailbox once the normal processing ended. | MailBoxPostProcessAction | |
bcc (producer) | Sets the BCC email address. Separate multiple email addresses with comma. | String | |
cc (producer) | Sets the CC email address. Separate multiple email addresses with comma. | String | |
from (producer) | The from email address. | camel@localhost | String |
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
replyTo (producer) | The Reply-To recipients (the receivers of the response mail). Separate multiple email addresses with a comma. | String | |
subject (producer) | The Subject of the message being sent. Note: Setting the subject in the header takes precedence over this option. | String | |
to (producer) | Sets the To email address. Separate multiple email addresses with comma. | String | |
javaMailSender (producer (advanced)) | To use a custom org.apache.camel.component.mail.JavaMailSender for sending emails. | JavaMailSender | |
additionalJavaMailProperties (advanced) | Sets additional java mail properties, that will append/override any default properties that is set based on all the other options. This is useful if you need to add some special options but want to keep the others as is. | Properties | |
alternativeBodyHeader (advanced) | Specifies the key to an IN message header that contains an alternative email body. For example, if you send emails in text/html format and want to provide an alternative mail body for non-HTML email clients, set the alternative mail body with this key as a header. | CamelMailAlternativeBody | String |
attachmentsContentTransferEncodingResolver (advanced) | To use a custom AttachmentsContentTransferEncodingResolver to resolve what content-type-encoding to use for attachments. | AttachmentsContentTransferEncodingResolver | |
authenticator (advanced) | The authenticator for login. If set then the password and username are ignored. Can be used for tokens which can expire and therefore must be read dynamically. | MailAuthenticator | |
binding (advanced) | Sets the binding used to convert from a Camel message to and from a Mail message. | MailBinding | |
connectionTimeout (advanced) | The connection timeout in milliseconds. | 30000 | int |
contentType (advanced) | The mail message content type. Use text/html for HTML mails. | text/plain | String |
contentTypeResolver (advanced) | Resolver to determine Content-Type for file attachments. | ContentTypeResolver | |
debugMode (advanced) | Enable debug mode on the underlying mail framework. The SUN Mail framework logs the debug messages to System.out by default. | false | boolean |
headerFilterStrategy (advanced) | To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter headers. | HeaderFilterStrategy | |
ignoreUnsupportedCharset (advanced) | Option to let Camel ignore unsupported charset in the local JVM when sending mails. If the charset is unsupported then charset=XXX (where XXX represents the unsupported charset) is removed from the content-type and it relies on the platform default instead. | false | boolean |
ignoreUriScheme (advanced) | Option to let Camel ignore unsupported charset in the local JVM when sending mails. If the charset is unsupported then charset=XXX (where XXX represents the unsupported charset) is removed from the content-type and it relies on the platform default instead. | false | boolean |
javaMailProperties (advanced) | Sets the java mail options. Will clear any default properties and only use the properties provided for this method. | Properties | |
session (advanced) | Specifies the mail session that camel should use for all mail interactions. Useful in scenarios where mail sessions are created and managed by some other resource, such as a JavaEE container. When using a custom mail session, then the hostname and port from the mail session will be used (if configured on the session). | Session | |
useInlineAttachments (advanced) | Whether to use disposition inline or attachment. | false | boolean |
idempotentRepository (filter) | A pluggable repository org.apache.camel.spi.IdempotentRepository which allows to cluster consuming from the same mailbox, and let the repository coordinate whether a mail message is valid for the consumer to process. By default no repository is in use. | IdempotentRepository | |
idempotentRepositoryRemoveOnCommit (filter) | When using idempotent repository, then when the mail message has been successfully processed and is committed, should the message id be removed from the idempotent repository (default) or be kept in the repository. By default its assumed the message id is unique and has no value to be kept in the repository, because the mail message will be marked as seen/moved or deleted to prevent it from being consumed again. And therefore having the message id stored in the idempotent repository has little value. However this option allows to store the message id, for whatever reason you may have. | true | boolean |
searchTerm (filter) | Refers to a javax.mail.search.SearchTerm which allows to filter mails based on search criteria such as subject, body, from, sent after a certain date etc. | SearchTerm | |
backoffErrorThreshold (scheduler) | The number of subsequent error polls (failed due some error) that should happen before the backoffMultipler should kick-in. | int | |
backoffIdleThreshold (scheduler) | The number of subsequent idle polls that should happen before the backoffMultipler should kick-in. | int | |
backoffMultiplier (scheduler) | To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the next actual attempt is happening again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured. | int | |
delay (scheduler) | Milliseconds before the next poll. | 60000 | long |
greedy (scheduler) | If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the previous run polled 1 or more messages. | false | boolean |
initialDelay (scheduler) | Milliseconds before the first poll starts. | 1000 | long |
repeatCount (scheduler) | Specifies a maximum limit of number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever. | 0 | long |
runLoggingLevel (scheduler) | The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that. Enum values: TRACE, DEBUG, INFO, WARN, ERROR, OFF | TRACE | LoggingLevel |
scheduledExecutorService (scheduler) | Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool. | ScheduledExecutorService | |
scheduler (scheduler) | To use a cron scheduler from either camel-spring or camel-quartz component. Use value spring or quartz for built in scheduler. | none | Object |
schedulerProperties (scheduler) | To configure additional properties when using a custom scheduler or any of the Quartz, Spring based scheduler. | Map | |
startScheduler (scheduler) | Whether the scheduler should be auto started. | true | boolean |
timeUnit (scheduler) | Time unit for initialDelay and delay options. Enum values: NANOSECONDS, MICROSECONDS, MILLISECONDS, SECONDS, MINUTES, HOURS, DAYS | MILLISECONDS | TimeUnit |
useFixedDelay (scheduler) | Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details. | true | boolean |
password (security) | The password for login. See also setAuthenticator(MailAuthenticator). | String | |
sslContextParameters (security) | To configure security using SSLContextParameters. | SSLContextParameters | |
username (security) | The username for login. See also setAuthenticator(MailAuthenticator). | String | |
sortTerm (sort) | Sorting order for messages. Only natively supported for IMAP. Emulated to some degree when using POP3 or when IMAP server does not have the SORT capability. | SortTerm[] |
33.4.3. Sample endpoints
Typically, you specify a URI with login credentials as follows (taking SMTP as an example):
smtp://[username@]host[:port][?password=somepwd]
Alternatively, it is possible to specify both the user name and the password as query options:
smtp://host[:port]?password=somepwd&username=someuser
For example:
smtp://mycompany.mailserver:30?password=tiger&username=scott
33.4.4. Component alias names
- IMAP
- IMAPs
- POP3
- POP3s
- SMTP
- SMTPs
33.4.5. Default ports
Default port numbers are supported. If the port number is omitted, Camel determines the port number to use based on the protocol.
Protocol | Default Port Number |
---|---|
SMTP | 25 |
SMTPS | 465 |
POP3 | 110 |
POP3S | 995 |
IMAP | 143 |
IMAPS | 993 |
33.5. SSL support
The underlying mail framework is responsible for providing SSL support. You may either configure SSL/TLS support by completely specifying the necessary Java Mail API configuration options, or you may provide a configured SSLContextParameters through the component or endpoint configuration.
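For the first approach, the following is a minimal sketch, assuming an SMTPS server at smtp.example.com and an existing CamelContext named camelContext (both are placeholders, not values from this guide):
// Sketch: configure SSL trust purely through Java Mail properties.
// Requires java.util.Properties and org.apache.camel.component.mail.MailComponent.
Properties props = new Properties();
// standard Jakarta Mail key telling the SMTPS transport which host to trust
props.put("mail.smtps.ssl.trust", "smtp.example.com");

MailComponent smtps = camelContext.getComponent("smtps", MailComponent.class);
smtps.getConfiguration().setAdditionalJavaMailProperties(props);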
33.5.1. Using the JSSE Configuration Utility
The mail component supports SSL/TLS configuration through the Camel JSSE Configuration Utility. This utility greatly decreases the amount of component specific code you need to write and is configurable at the endpoint and component levels. The following examples demonstrate how to use the utility with the mail component.
Programmatic configuration of the endpoint
KeyStoreParameters ksp = new KeyStoreParameters(); ksp.setResource("/users/home/server/truststore.jks"); ksp.setPassword("keystorePassword"); TrustManagersParameters tmp = new TrustManagersParameters(); tmp.setKeyStore(ksp); SSLContextParameters scp = new SSLContextParameters(); scp.setTrustManagers(tmp); Registry registry = ... registry.bind("sslContextParameters", scp); ... from(...) .to("smtps://smtp.google.com?username=user@gmail.com&password=password&sslContextParameters=#sslContextParameters");
Spring DSL based configuration of endpoint
... <camel:sslContextParameters id="sslContextParameters"> <camel:trustManagers> <camel:keyStore resource="/users/home/server/truststore.jks" password="keystorePassword"/> </camel:trustManagers> </camel:sslContextParameters>... ... <to uri="smtps://smtp.google.com?username=user@gmail.com&password=password&sslContextParameters=#sslContextParameters"/>...
33.5.2. Configuring JavaMail Directly
Camel uses Jakarta JavaMail, which only trusts certificates issued by well known Certificate Authorities (the default JVM trust configuration). If you issue your own certificates, you have to import the CA certificates into the JVM’s Java trust/key store files, or override the default JVM trust/key store files (see SSLNOTES.txt
in JavaMail for details).
33.6. Mail Message Content
Camel uses the message exchange’s IN body as the MimeMessage text content. The body is converted to String.class.
Camel copies all of the exchange’s IN headers to the MimeMessage headers.
The subject of the MimeMessage can be configured using a header property on the IN message. The code below demonstrates this:
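A minimal sketch, assuming a direct: endpoint and a placeholder SMTP server:
from("direct:subject")
    // the "subject" header on the IN message becomes the MimeMessage subject
    .setHeader("subject", constant("Hello from Camel"))
    .to("smtp://admin@mymailserver.com?password=secret&to=info@mycompany.com");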
The same applies for other MimeMessage headers such as recipients, so you can use a header property as To:
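For example, a sketch that sets the recipient through the to header (all addresses and endpoints are placeholders):
from("direct:recipient")
    // the "to" header takes precedence over any recipient configured on the endpoint URI
    .setHeader("to", constant("davsclaus@apache.org"))
    .to("smtp://admin@mymailserver.com?password=secret");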
When using the MailProducer to send the mail to the server, you can obtain the message id of the MimeMessage from the Camel message header with the key CamelMailMessageId.
33.7. Headers take precedence over pre-configured recipients
The recipients specified in the message headers always take precedence over recipients pre-configured in the endpoint URI. The idea is that if you provide any recipients in the message headers, that is what you get. The recipients pre-configured in the endpoint URI are treated as a fallback.
In the sample code below, the email message is sent to davsclaus@apache.org
, because it takes precedence over the pre-configured recipient, info@mycompany.com
. Any CC
and BCC
settings in the endpoint URI are also ignored and those recipients will not receive any mail. The choice between headers and pre-configured settings is all or nothing: the mail component either takes the recipients exclusively from the headers or exclusively from the pre-configured settings. It is not possible to mix and match headers and pre-configured settings.
Map<String, Object> headers = new HashMap<String, Object>(); headers.put("to", "davsclaus@apache.org"); template.sendBodyAndHeaders("smtp://admin@localhost?to=info@mycompany.com", "Hello World", headers);
33.8. Multiple recipients for easier configuration
It is possible to set multiple recipients using a comma-separated or a semicolon-separated list. This applies both to header settings and to settings in an endpoint URI. For example:
Map<String, Object> headers = new HashMap<String, Object>(); headers.put("to", "davsclaus@apache.org ; jstrachan@apache.org ; ningjiang@apache.org");
The preceding example uses a semicolon, ;
, as the separator character.
33.9. Setting sender name and email
You can specify recipients in the format, name <email>
, to include both the name and the email address of the recipient.
For example, you can define the following headers on the Message:
Map<String, Object> headers = new HashMap<>(); headers.put("To", "Claus Ibsen <davsclaus@apache.org>"); headers.put("From", "James Strachan <jstrachan@apache.org>"); headers.put("Subject", "Camel is cool");
33.10. JavaMail API (formerly SUN JavaMail)
The JavaMail API is used under the hood for consuming and producing mails. We encourage end-users to consult these references when using either the POP3 or the IMAP protocol. Note particularly that POP3 has a much more limited set of features than IMAP.
- JavaMail POP3 API
- JavaMail IMAP API
- And generally about the MAIL Flags
33.11. Samples
We start with a simple route that sends the messages received from a JMS queue as emails. The email account is the admin
account on mymailserver.com
.
from("jms://queue:subscription").to("smtp://admin@mymailserver.com?password=secret");
In the next sample, we poll a mailbox for new emails once every minute.
from("imap://admin@mymailserver.com?password=secret&unseen=true&delay=60000") .to("seda://mails");
33.12. Sending mail with attachment sample
Attachments are not supported by all Camel components
The Attachments API is based on the Java Activation Framework and is generally only used by the Mail API. Since many of the other Camel components do not support attachments, the attachments could potentially be lost as they propagate along the route. The rule of thumb, therefore, is to add attachments just before sending a message to the mail endpoint.
The mail component supports attachments. In the sample below, we send a mail message containing a plain text message with a logo file attachment.
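A minimal sketch of such a route, assuming the logo is available at src/data/logo.jpeg and using placeholder endpoints (it relies on org.apache.camel.attachment.AttachmentMessage and javax.activation.DataHandler/FileDataSource):
from("direct:sendWithAttachment")
    .process(exchange -> {
        AttachmentMessage msg = exchange.getMessage(AttachmentMessage.class);
        // plain text body plus the logo as an attachment
        msg.setBody("Hello World");
        msg.addAttachment("logo.jpeg", new DataHandler(new FileDataSource("src/data/logo.jpeg")));
    })
    .to("smtp://admin@mymailserver.com?password=secret&to=info@mycompany.com");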
33.13. SSL sample
In this sample, we want to poll our Google mail inbox for mails. To download mail onto a local mail client, Google mail requires you to enable and configure SSL. This is done by logging into your Google mail account and changing your settings to allow IMAP access. Google has extensive documentation on how to do this.
from("imaps://imap.gmail.com?username=YOUR_USERNAME@gmail.com&password=YOUR_PASSWORD" + "&delete=false&unseen=true&delay=60000").to("log:newmail");
The preceding route polls the Google mail inbox for new mails once every minute and logs the received messages to the newmail
logger category.
Running the sample with DEBUG
logging enabled, we can monitor the progress in the logs:
2008-05-08 06:32:09,640 DEBUG MailConsumer - Connecting to MailStore imaps//imap.gmail.com:993 (SSL enabled), folder=INBOX 2008-05-08 06:32:11,203 DEBUG MailConsumer - Polling mailfolder: imaps//imap.gmail.com:993 (SSL enabled), folder=INBOX 2008-05-08 06:32:11,640 DEBUG MailConsumer - Fetching 1 messages. Total 1 messages. 2008-05-08 06:32:12,171 DEBUG MailConsumer - Processing message: messageNumber=[332], from=[James Bond <007@mi5.co.uk>], to=YOUR_USERNAME@gmail.com], subject=[... 2008-05-08 06:32:12,187 INFO newmail - Exchange[MailMessage: messageNumber=[332], from=[James Bond <007@mi5.co.uk>], to=YOUR_USERNAME@gmail.com], subject=[...
33.14. Consuming mails with attachment sample
In this sample we poll a mailbox and store all attachments from the mails as files. First, we define a route to poll the mailbox. As this sample is based on google mail, it uses the same route as shown in the SSL sample:
from("imaps://imap.gmail.com?username=YOUR_USERNAME@gmail.com&password=YOUR_PASSWORD" + "&delete=false&unseen=true&delay=60000").process(new MyMailProcessor());
Instead of logging the mail we use a processor where we can process the mail from java code:
public void process(Exchange exchange) throws Exception { // the API is a bit clunky so we need to loop AttachmentMessage attachmentMessage = exchange.getMessage(AttachmentMessage.class); Map<String, DataHandler> attachments = attachmentMessage.getAttachments(); if (attachments.size() > 0) { for (String name : attachments.keySet()) { DataHandler dh = attachments.get(name); // get the file name String filename = dh.getName(); // get the content and convert it to byte[] byte[] data = exchange.getContext().getTypeConverter() .convertTo(byte[].class, dh.getInputStream()); // write the data to a file FileOutputStream out = new FileOutputStream(filename); out.write(data); out.flush(); out.close(); } } }
As you can see, the API to handle attachments is a bit clunky, but it gives you access to the javax.activation.DataHandler so you can handle the attachments using the standard API.
33.15. How to split a mail message with attachments
In this example we consume mail messages which may have a number of attachments. What we want to do is to use the Splitter EIP per individual attachment, to process the attachments separately. For example if the mail message has 5 attachments, we want the Splitter to process five messages, each having a single attachment. To do this we need to provide a custom Expression to the Splitter where we provide a List<Message> that contains the five messages with the single attachment.
The code is provided out of the box in Camel 2.10 onwards in the camel-mail
component. The code is in the class: org.apache.camel.component.mail.SplitAttachmentsExpression
, which you can find in the source code here.
In the Camel route you then need to use this Expression as shown below:
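A sketch of the Java DSL usage (the IMAP endpoint is a placeholder):
// split the incoming mail so that each attachment becomes its own message
from("imap://admin@mymailserver.com?password=secret&delay=60000")
    .split(new SplitAttachmentsExpression())
        .to("mock:split")
    .end();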
If you use the XML DSL, then you need to declare a method call expression in the Splitter as shown below:
<split> <method beanType="org.apache.camel.component.mail.SplitAttachmentsExpression"/> <to uri="mock:split"/> </split>
You can also split the attachments as byte[] to be stored as the message body. This is done by creating the expression with the boolean true:
SplitAttachmentsExpression split = new SplitAttachmentsExpression(true);
And then use the expression with the splitter EIP.
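Putting the two together, a minimal sketch (the file endpoint and the use of the CamelSplitAttachmentId header as the file name are assumptions for illustration):
// each split message carries one attachment as byte[] in the message body
SplitAttachmentsExpression split = new SplitAttachmentsExpression(true);

from("imap://admin@mymailserver.com?password=secret&delay=60000")
    .split(split)
        .to("file:target/attachments?fileName=${header.CamelSplitAttachmentId}")
    .end();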
33.16. Using custom SearchTerm
You can configure a searchTerm
on the MailEndpoint
which allows you to filter out unwanted mails.
For example, to filter mails that contain Camel in either the Subject or the body, you can do as follows:
<route> <from uri="imaps://mymailseerver?username=foo&password=secret&searchTerm.subjectOrBody=Camel"/> <to uri="bean:myBean"/> </route>
Notice we use the "searchTerm.subjectOrBody" as the parameter key to indicate that we want to search the mail subject or body for the word "Camel".
The class org.apache.camel.component.mail.SimpleSearchTerm
has a number of options you can configure, as used in the examples below.
Or, to get the new unseen emails received within the last 24 hours, you can do as follows. Notice the "now-24h" syntax.
<route> <from uri="imaps://mymailseerver?username=foo&password=secret&searchTerm.fromSentDate=now-24h"/> <to uri="bean:myBean"/> </route>
You can have multiple searchTerm options in the endpoint URI configuration. They are combined using the AND operator, so all conditions must match. For example, to get the unseen emails from the last 24 hours that have Camel in the mail subject, you can do:
<route> <from uri="imaps://mymailseerver?username=foo&password=secret&searchTerm.subject=Camel&searchTerm.fromSentDate=now-24h"/> <to uri="bean:myBean"/> </route>
The SimpleSearchTerm
is designed to be easily configurable from a POJO, so you can also configure it using a <bean> style in XML
<bean id="mySearchTerm" class="org.apache.camel.component.mail.SimpleSearchTerm"> <property name="subject" value="Order"/> <property name="to" value="acme-order@acme.com"/> <property name="fromSentDate" value="now"/> </bean>
You can then refer to this bean, using #beanId in your Camel route as shown:
<route> <from uri="imaps://mymailseerver?username=foo&password=secret&searchTerm=#mySearchTerm"/> <to uri="bean:myBean"/> </route>
In Java there is a builder class to build compound SearchTerms
using the org.apache.camel.component.mail.SearchTermBuilder
class. This allows you to build complex terms such as:
// we just want the unseen mails which is not spam SearchTermBuilder builder = new SearchTermBuilder(); builder.unseen().body(Op.not, "Spam").subject(Op.not, "Spam") // which was sent from either foo or bar .from("foo@somewhere.com").from(Op.or, "bar@somewhere.com"); // .. and we could continue building the terms SearchTerm term = builder.build();
33.17. Polling Optimization
The parameters maxMessagesPerPoll and fetchSize allow you to restrict the number of messages that are processed for each poll. These parameters help prevent poor performance when working with folders that contain a lot of messages. In previous versions these parameters were evaluated too late, so big mailboxes could still cause performance problems. Since Camel 3.1 these parameters are evaluated earlier during the poll to avoid these problems.
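For instance, a hypothetical consumer that caps each poll (host and credentials are placeholders):
// fetch at most 50 messages from the server per poll and route at most 25 of them
from("imap://admin@mymailserver.com?password=secret"
        + "&fetchSize=50&maxMessagesPerPoll=25&delay=60000")
    .to("seda://mails");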
33.18. Using headers with additional Java Mail Sender properties
When sending mails, then you can provide dynamic java mail properties for the JavaMailSender
from the Exchange as message headers with keys starting with java.smtp.
.
You can set any of the java.smtp
properties which you can find in the Java Mail documentation.
For example to provide a dynamic uuid in java.smtp.from
(SMTP MAIL command):
.setHeader("from", constant("reply2me@foo.com")); .setHeader("java.smtp.from", method(UUID.class, "randomUUID")); .to("smtp://mymailserver:1234");
This is only supported when not using a custom JavaMailSender
.
33.19. Spring Boot Auto-Configuration
When using imap with Spring Boot make sure to use the following Maven dependency to have support for auto configuration:
<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-mail-starter</artifactId> </dependency>
The component supports 50 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
camel.component.mail.additional-java-mail-properties | Sets additional java mail properties, that will append/override any default properties that is set based on all the other options. This is useful if you need to add some special options but want to keep the others as is. The option is a java.util.Properties type. | Properties | |
camel.component.mail.alternative-body-header | Specifies the key to an IN message header that contains an alternative email body. For example, if you send emails in text/html format and want to provide an alternative mail body for non-HTML email clients, set the alternative mail body with this key as a header. | CamelMailAlternativeBody | String |
camel.component.mail.attachments-content-transfer-encoding-resolver | To use a custom AttachmentsContentTransferEncodingResolver to resolve what content-type-encoding to use for attachments. The option is a org.apache.camel.component.mail.AttachmentsContentTransferEncodingResolver type. | AttachmentsContentTransferEncodingResolver | |
camel.component.mail.authenticator | The authenticator for login. If set then the password and username are ignored. Can be used for tokens which can expire and therefore must be read dynamically. The option is a org.apache.camel.component.mail.MailAuthenticator type. | MailAuthenticator | |
camel.component.mail.autowired-enabled | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | Boolean |
camel.component.mail.bcc | Sets the BCC email address. Separate multiple email addresses with comma. | String | |
camel.component.mail.bridge-error-handler | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | Boolean |
camel.component.mail.cc | Sets the CC email address. Separate multiple email addresses with comma. | String | |
camel.component.mail.close-folder | Whether the consumer should close the folder after polling. Setting this option to false and having disconnect=false as well, then the consumer keep the folder open between polls. | true | Boolean |
camel.component.mail.configuration | Sets the Mail configuration. The option is a org.apache.camel.component.mail.MailConfiguration type. | MailConfiguration | |
camel.component.mail.connection-timeout | The connection timeout in milliseconds. | 30000 | Integer |
camel.component.mail.content-type | The mail message content type. Use text/html for HTML mails. | text/plain | String |
camel.component.mail.content-type-resolver | Resolver to determine Content-Type for file attachments. The option is a org.apache.camel.component.mail.ContentTypeResolver type. | ContentTypeResolver | |
camel.component.mail.copy-to | After processing a mail message, it can be copied to a mail folder with the given name. You can override this configuration value, with a header with the key copyTo, allowing you to copy messages to folder names configured at runtime. | String | |
camel.component.mail.debug-mode | Enable debug mode on the underlying mail framework. The SUN Mail framework logs the debug messages to System.out by default. | false | Boolean |
camel.component.mail.decode-filename | If set to true, the MimeUtility.decodeText method will be used to decode the filename. This is similar to setting JVM system property mail.mime.encodefilename. | false | Boolean |
camel.component.mail.delete | Deletes the messages after they have been processed. This is done by setting the DELETED flag on the mail message. If false, the SEEN flag is set instead. As of Camel 2.10 you can override this configuration option by setting a header with the key delete to determine if the mail should be deleted or not. | false | Boolean |
camel.component.mail.disconnect | Whether the consumer should disconnect after polling. If enabled this forces Camel to connect on each poll. | false | Boolean |
camel.component.mail.enabled | Whether to enable auto configuration of the mail component. This is enabled by default. | Boolean | |
camel.component.mail.fetch-size | Sets the maximum number of messages to consume during a poll. This can be used to avoid overloading a mail server, if a mailbox folder contains a lot of messages. Default value of -1 means no fetch size and all messages will be consumed. Setting the value to 0 is a special corner case, where Camel will not consume any messages at all. | -1 | Integer |
camel.component.mail.folder-name | The folder to poll. | INBOX | String |
camel.component.mail.from | The from email address. | camel@localhost | String |
camel.component.mail.handle-failed-message | If the mail consumer cannot retrieve a given mail message, then this option allows to handle the caused exception by the consumer’s error handler. By enable the bridge error handler on the consumer, then the Camel routing error handler can handle the exception instead. The default behavior would be the consumer throws an exception and no mails from the batch would be able to be routed by Camel. | false | Boolean |
camel.component.mail.header-filter-strategy | To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter header to and from Camel message. The option is a org.apache.camel.spi.HeaderFilterStrategy type. | HeaderFilterStrategy | |
camel.component.mail.ignore-unsupported-charset | Option to let Camel ignore unsupported charset in the local JVM when sending mails. If the charset is unsupported then charset=XXX (where XXX represents the unsupported charset) is removed from the content-type and it relies on the platform default instead. | false | Boolean |
camel.component.mail.ignore-uri-scheme | Option to let Camel ignore unsupported charset in the local JVM when sending mails. If the charset is unsupported then charset=XXX (where XXX represents the unsupported charset) is removed from the content-type and it relies on the platform default instead. | false | Boolean |
camel.component.mail.java-mail-properties | Sets the java mail options. Will clear any default properties and only use the properties provided for this method. The option is a java.util.Properties type. | Properties | |
camel.component.mail.java-mail-sender | To use a custom org.apache.camel.component.mail.JavaMailSender for sending emails. The option is a org.apache.camel.component.mail.JavaMailSender type. | JavaMailSender | |
camel.component.mail.lazy-start-producer | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | Boolean |
camel.component.mail.map-mail-message | Specifies whether Camel should map the received mail message to Camel body/headers/attachments. If set to true, the body of the mail message is mapped to the body of the Camel IN message, the mail headers are mapped to IN headers, and the attachments to Camel IN attachment message. If this option is set to false then the IN message contains a raw javax.mail.Message. You can retrieve this raw message by calling exchange.getIn().getBody(javax.mail.Message.class). | true | Boolean |
camel.component.mail.mime-decode-headers | This option enables transparent MIME decoding and unfolding for mail headers. | false | Boolean |
camel.component.mail.move-to | After processing a mail message, it can be moved to a mail folder with the given name. You can override this configuration value, with a header with the key moveTo, allowing you to move messages to folder names configured at runtime. | String | |
camel.component.mail.password | The password for login. See also setAuthenticator(MailAuthenticator). | String | |
camel.component.mail.peek | Will mark the javax.mail.Message as peeked before processing the mail message. This applies to IMAPMessage messages types only. By using peek the mail will not be eager marked as SEEN on the mail server, which allows us to rollback the mail message if there is an error processing in Camel. | true | Boolean |
camel.component.mail.reply-to | The Reply-To recipients (the receivers of the response mail). Separate multiple email addresses with a comma. | String | |
camel.component.mail.session | Specifies the mail session that camel should use for all mail interactions. Useful in scenarios where mail sessions are created and managed by some other resource, such as a JavaEE container. When using a custom mail session, then the hostname and port from the mail session will be used (if configured on the session). The option is a javax.mail.Session type. | Session | |
camel.component.mail.skip-failed-message | If the mail consumer cannot retrieve a given mail message, then this option allows to skip the message and move on to retrieve the next mail message. The default behavior would be the consumer throws an exception and no mails from the batch would be able to be routed by Camel. | false | Boolean |
camel.component.mail.ssl-context-parameters | To configure security using SSLContextParameters. The option is a org.apache.camel.support.jsse.SSLContextParameters type. | SSLContextParameters | |
camel.component.mail.subject | The Subject of the message being sent. Note: Setting the subject in the header takes precedence over this option. | String | |
camel.component.mail.to | Sets the To email address. Separate multiple email addresses with comma. | String | |
camel.component.mail.unseen | Whether to limit by unseen mails only. | true | Boolean |
camel.component.mail.use-global-ssl-context-parameters | Enable usage of global SSL context parameters. | false | Boolean |
camel.component.mail.use-inline-attachments | Whether to use disposition inline or attachment. | false | Boolean |
camel.component.mail.username | The username for login. See also setAuthenticator(MailAuthenticator). | String | |
camel.dataformat.mime-multipart.binary-content | Defines whether the content of binary parts in the MIME multipart is binary (true) or Base-64 encoded (false) Default is false. | false | Boolean |
camel.dataformat.mime-multipart.enabled | Whether to enable auto configuration of the mime-multipart data format. This is enabled by default. | Boolean | |
camel.dataformat.mime-multipart.headers-inline | Defines whether the MIME-Multipart headers are part of the message body (true) or are set as Camel headers (false). Default is false. | false | Boolean |
camel.dataformat.mime-multipart.include-headers | A regex that defines which Camel headers are also included as MIME headers into the MIME multipart. This will only work if headersInline is set to true. Default is to include no headers. | String | |
camel.dataformat.mime-multipart.multipart-sub-type | Specify the subtype of the MIME Multipart. Default is mixed. | mixed | String |
camel.dataformat.mime-multipart.multipart-without-attachment | Defines whether a message without attachment is also marshaled into a MIME Multipart (with only one body part). Default is false. | false | Boolean |
Chapter 34. Master
Only consumer is supported
The Camel-Master endpoint provides a way to ensure that only a single consumer in a cluster consumes from a given endpoint, with automatic failover if that JVM dies.
This can be very useful if you need to consume from some legacy back end which either does not support concurrent consumption, or where, for commercial or stability reasons, you can only have a single connection at any point in time.
34.1. Using the master endpoint
Just prefix any Camel endpoint with master:someName:, where someName is a logical name used to acquire the master lock. For example:
from("master:cheese:jms:foo").to("activemq:wine");
In this example, the master component ensures that the route is active on only one node in the cluster at any given time. So if there are 8 nodes in the cluster, the master component elects one route to be the leader, and only this route is active and consumes messages from jms:foo. If this route is stopped or terminates unexpectedly, the master component detects this and re-elects another node to be active, which then starts consuming messages from jms:foo.
Apache ActiveMQ 5.x has such a feature out of the box, called Exclusive Consumer.
34.2. URI format
master:namespace:endpoint[?options]
Where endpoint is any Camel endpoint you want to run in master/slave mode.
34.3. Configuring Options
Camel components are configured on two separate levels:
- component level
- endpoint level
34.3.1. Configuring Component Options
The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth.
Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all.
Configuring components can be done with the Component DSL, in a configuration file (application.properties|yaml), or directly with Java code.
34.3.2. Configuring Endpoint Options
Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both.
Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints.
A good practice when configuring options is to use Property Placeholders, which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse.
The following two sections lists all the options, firstly for the component followed by the endpoint.
34.4. Component Options
The Master component supports 4 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
bridgeErrorHandler (consumer) | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | boolean |
autowiredEnabled (advanced) | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | boolean |
service (advanced) | Inject the service to use. | CamelClusterService | |
serviceSelector (advanced) | Inject the service selector used to lookup the CamelClusterService to use. | Selector |
34.5. Endpoint Options
The Master endpoint is configured using URI syntax:
master:namespace:delegateUri
with the following path and query parameters:
34.5.1. Path Parameters (2 parameters)
Name | Description | Default | Type |
---|---|---|---|
namespace (consumer) | Required The name of the cluster namespace to use. | String | |
delegateUri (consumer) | Required The endpoint uri to use in master/slave mode. | String |
34.5.2. Query Parameters (3 parameters)
Name | Description | Default | Type |
---|---|---|---|
bridgeErrorHandler (consumer) | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | boolean |
exceptionHandler (consumer (advanced)) | To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. | ExceptionHandler | |
exchangePattern (consumer (advanced)) | Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly, InOut, InOptionalOut | ExchangePattern |
34.6. Example
You can protect a clustered Camel application so that it only consumes files from one active node.
// the file endpoint we want to consume from String url = "file:target/inbox?delete=true"; // use the camel master component in the clustered group named myGroup // to run a master/slave mode in the following Camel url from("master:myGroup:" + url) .log(name + " - Received file: ${file:name}") .delay(delay) .log(name + " - Done file: ${file:name}") .to("file:target/outbox");
The master component leverages a CamelClusterService, which you can configure using:
Java
ZooKeeperClusterService service = new ZooKeeperClusterService(); service.setId("camel-node-1"); service.setNodes("myzk:2181"); service.setBasePath("/camel/cluster"); context.addService(service);
Xml (Spring/Blueprint)
<beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation=" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd"> <bean id="cluster" class="org.apache.camel.component.zookeeper.cluster.ZooKeeperClusterService"> <property name="id" value="camel-node-1"/> <property name="basePath" value="/camel/cluster"/> <property name="nodes" value="myzk:2181"/> </bean> <camelContext xmlns="http://camel.apache.org/schema/spring" autoStartup="false"> ... </camelContext> </beans>
Spring boot
camel.component.zookeeper.cluster.service.enabled = true camel.component.zookeeper.cluster.service.id = camel-node-1 camel.component.zookeeper.cluster.service.base-path = /camel/cluster camel.component.zookeeper.cluster.service.nodes = myzk:2181
34.7. Implementations
Camel provides the following ClusterService implementations:
- camel-consul
- camel-file
- camel-infinispan
- camel-jgroups-raft
- camel-jgroups
- camel-kubernetes
- camel-zookeeper
34.8. Spring Boot Auto-Configuration
When using master with Spring Boot make sure to use the following Maven dependency to have support for auto configuration:
<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-master-starter</artifactId> </dependency>
The component supports 5 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
camel.component.master.autowired-enabled | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | Boolean |
camel.component.master.bridge-error-handler | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | Boolean |
camel.component.master.enabled | Whether to enable auto configuration of the master component. This is enabled by default. | Boolean | |
camel.component.master.service | Inject the service to use. The option is a org.apache.camel.cluster.CamelClusterService type. | CamelClusterService | |
camel.component.master.service-selector | Inject the service selector used to lookup the CamelClusterService to use. The option is a org.apache.camel.cluster.CamelClusterService.Selector type. | CamelClusterService$Selector |
Chapter 35. Minio
Since Camel 3.5
Both producer and consumer are supported
The Minio component supports storing objects to and retrieving objects from the Minio service.
35.1. Prerequisites
You must have valid credentials for authorized access to the buckets/folders. More information is available at Minio.
35.2. URI Format
minio://bucketName[?options]
The bucket will be created if it doesn’t already exist. You can append query options to the URI in the following format:
?option1=value&option2=value&…
For example, to read the file hello.txt from the bucket helloBucket, use the following snippet:
from("minio://helloBucket?accessKey=yourAccessKey&secretKey=yourSecretKey&prefix=hello.txt") .to("file:/var/downloaded");
35.3. Configuring Options
Camel components are configured on two separate levels:
- component level
- endpoint level
35.3.1. Configuring Component Options
The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth.
Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all.
Configuring components can be done with the Component DSL, in a configuration file (application.properties|yaml), or directly with Java code.
35.3.2. Configuring Endpoint Options
Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both.
Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints.
A good practice when configuring options is to use Property Placeholders, which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse.
The following two sections lists all the options, firstly for the component followed by the endpoint.
35.4. Component Options
The Minio component supports 47 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
autoCreateBucket (common) | Setting the autocreation of the bucket if the bucket name doesn’t exist. | true | boolean |
configuration (common) | The component configuration. | MinioConfiguration | |
customHttpClient (common) | Set custom HTTP client for authenticated access. | OkHttpClient | |
endpoint (common) | Endpoint can be an URL, domain name, IPv4 address or IPv6 address. | String | |
minioClient (common) | Autowired Reference to a Minio Client object in the registry. | MinioClient | |
objectLock (common) | Set when creating new bucket. | false | boolean |
policy (common) | The policy for this queue to set in the method. | String | |
proxyPort (common) | TCP/IP port number. 80 and 443 are used as defaults for HTTP and HTTPS. | Integer | |
region (common) | The region in which Minio client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example ap-east-1). You’ll need to use the name Region.EU_WEST_1.id(). | String | |
secure (common) | Flag to indicate to use secure connection to minio service or not. | false | boolean |
serverSideEncryption (common) | Server-side encryption. | ServerSideEncryption | |
serverSideEncryptionCustomerKey (common) | Server-side encryption for source object while copy/move objects. | ServerSideEncryptionCustomerKey | |
autoCloseBody (consumer) | If this option is true and includeBody is true, then the MinioObject.close() method will be called on exchange completion. This option is strongly related to includeBody option. In case of setting includeBody to true and autocloseBody to false, it will be up to the caller to close the MinioObject stream. Setting autocloseBody to true, will close the MinioObject stream automatically. | true | boolean |
bridgeErrorHandler (consumer) | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | boolean |
bypassGovernanceMode (consumer) | Set this flag if you want to bypassGovernanceMode when deleting a particular object. | false | boolean |
deleteAfterRead (consumer) | Delete objects from Minio after they have been retrieved. The delete is only performed if the Exchange is committed. If a rollback occurs, the object is not deleted. If this option is false, then the same objects will be retrieve over and over again on the polls. Therefore you need to use the Idempotent Consumer EIP in the route to filter out duplicates. You can filter using the MinioConstants#BUCKET_NAME and MinioConstants#OBJECT_NAME headers, or only the MinioConstants#OBJECT_NAME header. | true | boolean |
delimiter (consumer) | The delimiter which is used in the ListObjectsRequest to only consume objects we are interested in. | String | |
destinationBucketName (consumer) | Destination bucket name. | String | |
destinationObjectName (consumer) | Destination object name. | String | |
includeBody (consumer) | If it is true, the exchange body will be set to a stream to the contents of the file. If false, the headers will be set with the Minio object metadata, but the body will be null. This option is strongly related to autocloseBody option. In case of setting includeBody to true and autocloseBody to false, it will be up to the caller to close the MinioObject stream. Setting autocloseBody to true, will close the MinioObject stream automatically. | true | boolean |
includeFolders (consumer) | The flag which is used in the ListObjectsRequest to set include folders. | false | boolean |
includeUserMetadata (consumer) | The flag which is used in the ListObjectsRequest to get objects with user meta data. | false | boolean |
includeVersions (consumer) | The flag which is used in the ListObjectsRequest to get objects with versioning. | false | boolean |
length (consumer) | Number of bytes of object data from offset. | long | |
matchETag (consumer) | Set match ETag parameter for get object(s). | String | |
maxConnections (consumer) | Set the maxConnections parameter in the minio client configuration. | 60 | int |
maxMessagesPerPoll (consumer) | Gets the maximum number of messages as a limit to poll at each polling. The default value is 10. Use 0 or a negative number to set it as unlimited. | 10 | int |
modifiedSince (consumer) | Set modified since parameter for get object(s). | ZonedDateTime | |
moveAfterRead (consumer) | Move objects from bucket to a different bucket after they have been retrieved. To accomplish the operation the destinationBucket option must be set. The copy bucket operation is only performed if the Exchange is committed. If a rollback occurs, the object is not moved. | false | boolean |
notMatchETag (consumer) | Set not match ETag parameter for get object(s). | String | |
objectName (consumer) | To get the object from the bucket with the given object name. | String | |
offset (consumer) | Start byte position of object data. | long | |
prefix (consumer) | Object name starts with prefix. | String | |
recursive (consumer) | List objects recursively rather than emulating a directory structure. | false | boolean |
startAfter (consumer) | list objects in bucket after this object name. | String | |
unModifiedSince (consumer) | Set un modified since parameter for get object(s). | ZonedDateTime | |
useVersion1 (consumer) | when true, version 1 of REST API is used. | false | boolean |
versionId (consumer) | Set specific version_ID of a object when deleting the object. | String | |
deleteAfterWrite (producer) | Delete file object after the Minio file has been uploaded. | false | boolean |
keyName (producer) | Setting the key name for an element in the bucket through endpoint parameter. | String | |
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
operation (producer) | The operation to do in case the user doesn’t want to do only an upload. Enum values: copyObject, deleteObject, deleteObjects, listBuckets, deleteBucket, listObjects, getObject, getObjectRange | MinioOperations | |
pojoRequest (producer) | If we want to use a POJO request as body or not. | false | boolean |
storageClass (producer) | The storage class to set in the request. | String | |
autowiredEnabled (advanced) | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | boolean |
accessKey (security) | Amazon AWS Access Key Id or Minio Access Key. If not set camel will connect to service for anonymous access. | String | |
secretKey (security) | Amazon AWS Secret Access Key or Minio Secret Key. If not set camel will connect to service for anonymous access. | String |
35.5. Endpoint Options
The Minio endpoint is configured using URI syntax:
minio:bucketName
with the following path and query parameters:
35.5.1. Path Parameters (1 parameters)
Name | Description | Default | Type |
---|---|---|---|
bucketName (common) | Required Bucket name. | String |
35.5.2. Query Parameters (63 parameters)
Name | Description | Default | Type |
---|---|---|---|
autoCreateBucket (common) | Setting the autocreation of the bucket if the bucket name doesn’t exist. | true | boolean |
customHttpClient (common) | Set custom HTTP client for authenticated access. | OkHttpClient | |
endpoint (common) | Endpoint can be an URL, domain name, IPv4 address or IPv6 address. | String | |
minioClient (common) | Autowired Reference to a Minio Client object in the registry. | MinioClient | |
objectLock (common) | Set when creating new bucket. | false | boolean |
policy (common) | The policy for this queue to set in the method. | String | |
proxyPort (common) | TCP/IP port number. 80 and 443 are used as defaults for HTTP and HTTPS. | Integer | |
region (common) | The region in which Minio client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example ap-east-1). You’ll need to use the name Region.EU_WEST_1.id(). | String | |
secure (common) | Flag to indicate to use secure connection to minio service or not. | false | boolean |
serverSideEncryption (common) | Server-side encryption. | ServerSideEncryption | |
serverSideEncryptionCustomerKey (common) | Server-side encryption for source object while copy/move objects. | ServerSideEncryptionCustomerKey | |
autoCloseBody (consumer) | If this option is true and includeBody is true, then the MinioObject.close() method will be called on exchange completion. This option is strongly related to includeBody option. In case of setting includeBody to true and autocloseBody to false, it will be up to the caller to close the MinioObject stream. Setting autocloseBody to true, will close the MinioObject stream automatically. | true | boolean |
bypassGovernanceMode (consumer) | Set this flag if you want to bypassGovernanceMode when deleting a particular object. | false | boolean |
deleteAfterRead (consumer) | Delete objects from Minio after they have been retrieved. The delete is only performed if the Exchange is committed. If a rollback occurs, the object is not deleted. If this option is false, then the same objects will be retrieve over and over again on the polls. Therefore you need to use the Idempotent Consumer EIP in the route to filter out duplicates. You can filter using the MinioConstants#BUCKET_NAME and MinioConstants#OBJECT_NAME headers, or only the MinioConstants#OBJECT_NAME header. | true | boolean |
delimiter (consumer) | The delimiter which is used in the ListObjectsRequest to only consume objects we are interested in. | String | |
destinationBucketName (consumer) | Destination bucket name. | String | |
destinationObjectName (consumer) | Destination object name. | String | |
includeBody (consumer) | If it is true, the exchange body will be set to a stream to the contents of the file. If false, the headers will be set with the Minio object metadata, but the body will be null. This option is strongly related to autocloseBody option. In case of setting includeBody to true and autocloseBody to false, it will be up to the caller to close the MinioObject stream. Setting autocloseBody to true, will close the MinioObject stream automatically. | true | boolean |
includeFolders (consumer) | The flag which is used in the ListObjectsRequest to set include folders. | false | boolean |
includeUserMetadata (consumer) | The flag which is used in the ListObjectsRequest to get objects with user meta data. | false | boolean |
includeVersions (consumer) | The flag which is used in the ListObjectsRequest to get objects with versioning. | false | boolean |
length (consumer) | Number of bytes of object data from offset. | long | |
matchETag (consumer) | Set match ETag parameter for get object(s). | String | |
maxConnections (consumer) | Set the maxConnections parameter in the minio client configuration. | 60 | int |
maxMessagesPerPoll (consumer) | Gets the maximum number of messages as a limit to poll at each polling. The default value is 10. Use 0 or a negative number to set it as unlimited. | 10 | int |
modifiedSince (consumer) | Set modified since parameter for get object(s). | ZonedDateTime | |
moveAfterRead (consumer) | Move objects from bucket to a different bucket after they have been retrieved. To accomplish the operation the destinationBucket option must be set. The copy bucket operation is only performed if the Exchange is committed. If a rollback occurs, the object is not moved. | false | boolean |
notMatchETag (consumer) | Set not match ETag parameter for get object(s). | String | |
objectName (consumer) | To get the object from the bucket with the given object name. | String | |
offset (consumer) | Start byte position of object data. | long | |
prefix (consumer) | Object name starts with prefix. | String | |
recursive (consumer) | List objects recursively rather than emulating a directory structure. | false | boolean |
sendEmptyMessageWhenIdle (consumer) | If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead. | false | boolean |
startAfter (consumer) | list objects in bucket after this object name. | String | |
unModifiedSince (consumer) | Set un modified since parameter for get object(s). | ZonedDateTime | |
useVersion1 (consumer) | when true, version 1 of REST API is used. | false | boolean |
versionId (consumer) | Set specific version_ID of a object when deleting the object. | String | |
bridgeErrorHandler (consumer (advanced)) | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | boolean |
exceptionHandler (consumer (advanced)) | To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. | ExceptionHandler | |
exchangePattern (consumer (advanced)) | Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly, InOut, InOptionalOut | ExchangePattern | |
pollStrategy (consumer (advanced)) | A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel. | PollingConsumerPollStrategy | |
deleteAfterWrite (producer) | Delete file object after the Minio file has been uploaded. | false | boolean |
keyName (producer) | Setting the key name for an element in the bucket through endpoint parameter. | String | |
operation (producer) | The operation to do in case the user doesn’t want to do only an upload. Enum values: copyObject, deleteObject, deleteObjects, listBuckets, deleteBucket, listObjects, getObject, getObjectRange | MinioOperations | |
pojoRequest (producer) | If we want to use a POJO request as body or not. | false | boolean |
storageClass (producer) | The storage class to set in the request. | String | |
lazyStartProducer (producer (advanced)) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
backoffErrorThreshold (scheduler) | The number of subsequent error polls (failed due some error) that should happen before the backoffMultiplier should kick-in. | int | |
backoffIdleThreshold (scheduler) | The number of subsequent idle polls that should happen before the backoffMultiplier should kick-in. | int | |
backoffMultiplier (scheduler) | To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the next actual attempt is happening again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured. | int | |
delay (scheduler) | Milliseconds before the next poll. | 500 | long |
greedy (scheduler) | If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the previous run polled 1 or more messages. | false | boolean |
initialDelay (scheduler) | Milliseconds before the first poll starts. | 1000 | long |
repeatCount (scheduler) | Specifies a maximum limit of number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever. | 0 | long |
runLoggingLevel (scheduler) | The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that. Enum values: TRACE, DEBUG, INFO, WARN, ERROR, OFF | TRACE | LoggingLevel |
scheduledExecutorService (scheduler) | Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool. | ScheduledExecutorService | |
scheduler (scheduler) | To use a cron scheduler from either camel-spring or camel-quartz component. Use value spring or quartz for built in scheduler. | none | Object |
schedulerProperties (scheduler) | To configure additional properties when using a custom scheduler or any of the Quartz, Spring based scheduler. | Map | |
startScheduler (scheduler) | Whether the scheduler should be auto started. | true | boolean |
timeUnit (scheduler) | Time unit for initialDelay and delay options. Enum values: NANOSECONDS, MICROSECONDS, MILLISECONDS, SECONDS, MINUTES, HOURS, DAYS | MILLISECONDS | TimeUnit |
useFixedDelay (scheduler) | Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details. | true | boolean |
accessKey (security) | Amazon AWS Access Key Id or Minio Access Key. If not set camel will connect to service for anonymous access. | String | |
secretKey (security) | Amazon AWS Secret Access Key or Minio Secret Key. If not set camel will connect to service for anonymous access. | String |
You have to provide the minioClient in the Registry, or your accessKey and secretKey, to access the Minio service.
35.6. Batch Consumer
This component implements the Batch Consumer.
This allows you, for instance, to know how many messages exist in this batch and lets the Aggregator aggregate this number of messages.
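For instance, the CamelBatchSize exchange property set by the batch consumer can drive an Aggregator so that each poll becomes a single grouped exchange. The following is a minimal sketch (not taken from the component documentation), assuming the helloBucket endpoint and placeholder credentials used earlier:
from("minio://helloBucket?accessKey=yourAccessKey&secretKey=yourSecretKey")
    // group all objects returned by one poll into a single List body
    .aggregate(constant(true), new GroupedBodyAggregationStrategy())
        .completionSize(exchangeProperty(Exchange.BATCH_SIZE))
    .to("mock:batch");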
35.7. Message Headers
The Minio component supports 21 message headers, which are listed below:
Name | Description | Default | Type |
---|---|---|---|
CamelMinioBucketName (common) Constant: BUCKET_NAME | Producer: The bucket Name which this object will be stored or which will be used for the current operation. Consumer: The name of the bucket in which this object is contained. | String | |
CamelMinioDestinationBucketName (producer) Constant: DESTINATION_BUCKET_NAME | The bucket Destination Name which will be used for the current operation. | String | |
CamelMinioContentControl (common) Constant: CACHE_CONTROL | Producer: The content control of this object. Consumer: The optional Cache-Control HTTP header which allows the user to specify caching behavior along the HTTP request/reply chain. | String | |
CamelMinioContentDisposition (common) Constant: CONTENT_DISPOSITION | Producer: The content disposition of this object. Consumer: The optional Content-Disposition HTTP header, which specifies presentational information such as the recommended filename for the object to be saved as. | String | |
CamelMinioContentEncoding (common) Constant: CONTENT_ENCODING | Producer: The content encoding of this object. Consumer: The optional Content-Encoding HTTP header specifying what content encodings have been applied to the object and what decoding mechanisms must be applied in order to obtain the media-type referenced by the Content-Type field. | String | |
CamelMinioContentLength (common) Constant: CONTENT_LENGTH | Producer: The content length of this object. Consumer: The Content-Length HTTP header indicating the size of the associated object in bytes. | Long | |
CamelMinioContentMD5 (common) Constant: CONTENT_MD5 | Producer: The md5 checksum of this object. Consumer: The base64 encoded 128-bit MD5 digest of the associated object (content - not including headers) according to RFC 1864. This data is used as a message integrity check to verify that the data received by Minio is the same data that the caller sent. | String | |
CamelMinioContentType (common) Constant: CONTENT_TYPE | Producer: The content type of this object. Consumer: The Content-Type HTTP header, which indicates the type of content stored in the associated object. The value of this header is a standard MIME type. | String | |
CamelMinioETag (common) Constant: E_TAG | Producer: The ETag value for the newly uploaded object. Consumer: The hex encoded 128-bit MD5 digest of the associated object according to RFC 1864. This data is used as an integrity check to verify that the data received by the caller is the same data that was sent by Minio. | String | |
CamelMinioObjectName (common) Constant: OBJECT_NAME | Producer: The key under which this object will be stored or which will be used for the current operation. Consumer: The key under which this object is stored. | String | |
CamelMinioDestinationObjectName (producer) Constant: DESTINATION_OBJECT_NAME | The Destination key which will be used for the current operation. | String | |
CamelMinioLastModified (common) Constant: LAST_MODIFIED | Producer: The last modified timestamp of this object. Consumer: The value of the Last-Modified header, indicating the date and time at which Minio last recorded a modification to the associated object. | Date | |
CamelMinioStorageClass (producer) Constant: STORAGE_CLASS | The storage class of this object. | String | |
CamelMinioVersionId (common) Constant: VERSION_ID | Producer: The version Id of the object to be stored or returned from the current operation. Consumer: The version ID of the associated Minio object if available. Version IDs are only assigned to objects when an object is uploaded to an Minio bucket that has object versioning enabled. | String | |
CamelMinioCannedAcl (producer) Constant: CANNED_ACL | The canned acl that will be applied to the object. see com.amazonaws.services.s3.model.CannedAccessControlList for allowed values. | String | |
CamelMinioOperation (producer) Constant: MINIO_OPERATION | The operation to perform. Enum values: copyObject, deleteObject, deleteObjects, listBuckets, deleteBucket, listObjects, getObject, getObjectRange | MinioOperations | |
CamelMinioServerSideEncryption (common) Constant: SERVER_SIDE_ENCRYPTION | Producer: Sets the server-side encryption algorithm when encrypting the object using Minio-managed keys. For example use AES256. Consumer: The server-side encryption algorithm when encrypting the object using Minio-managed keys. | String | |
CamelMinioExpirationTime (common) Constant: EXPIRATION_TIME | The expiration time. | String | |
CamelMinioReplicationStatus (common) Constant: REPLICATION_STATUS | The replication status. | String | |
CamelMinioOffset (producer) Constant: OFFSET | The offset. | String | |
CamelMinioLength (producer) Constant: LENGTH | The length. | String |
35.7.1. Minio Producer operations
The Camel Minio component provides the following operations on the producer side:
- copyObject
- deleteObject
- deleteObjects
- listBuckets
- deleteBucket
- listObjects
- getObject (this will return a MinioObject instance)
- getObjectRange (this will return a MinioObject instance)
35.7.2. Advanced Minio configuration
If your Camel Application is running behind a firewall, or if you need to have more control over the MinioClient instance configuration, you can create your own instance and refer to it in your Camel Minio component configuration:
from("minio://MyBucket?minioClient=#client&delay=5000&maxMessagesPerPoll=5") .to("mock:result");
35.7.3. Minio Producer Operation examples
- CopyObject: this operation copies an object from one bucket to a different one
from("direct:start").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { exchange.getIn().setHeader(MinioConstants.DESTINATION_BUCKET_NAME, "camelDestinationBucket"); exchange.getIn().setHeader(MinioConstants.OBJECT_NAME, "camelKey"); exchange.getIn().setHeader(MinioConstants.DESTINATION_OBJECT_NAME, "camelDestinationKey"); } }) .to("minio://mycamelbucket?minioClient=#minioClient&operation=copyObject") .to("mock:result");
This operation will copy the object with the name expressed in the header camelDestinationKey to the camelDestinationBucket bucket, from the bucket mycamelbucket.
- DeleteObject: this operation deletes an object from a bucket
from("direct:start").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { exchange.getIn().setHeader(MinioConstants.OBJECT_NAME, "camelKey"); } }) .to("minio://mycamelbucket?minioClient=#minioClient&operation=deleteObject") .to("mock:result");
This operation will delete the object camelKey from the bucket mycamelbucket.
- ListBuckets: this operation lists the buckets for this account in this region
from("direct:start") .to("minio://mycamelbucket?minioClient=#minioClient&operation=listBuckets") .to("mock:result");
This operation will list the buckets for this account.
- DeleteBucket: this operation deletes the bucket specified as URI parameter or header
from("direct:start") .to("minio://mycamelbucket?minioClient=#minioClient&operation=deleteBucket") .to("mock:result");
This operation will delete the bucket mycamelbucket.
- ListObjects: this operation lists the objects in a specific bucket
from("direct:start") .to("minio://mycamelbucket?minioClient=#minioClient&operation=listObjects") .to("mock:result");
This operation will list the objects in the mycamelbucket bucket.
- GetObject: this operation gets a single object from a specific bucket
from("direct:start").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { exchange.getIn().setHeader(MinioConstants.OBJECT_NAME, "camelKey"); } }) .to("minio://mycamelbucket?minioClient=#minioClient&operation=getObject") .to("mock:result");
This operation will return a MinioObject instance related to the camelKey object in the mycamelbucket bucket.
- GetObjectRange: this operation gets a single object range from a specific bucket
from("direct:start").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { exchange.getIn().setHeader(MinioConstants.OBJECT_NAME, "camelKey"); exchange.getIn().setHeader(MinioConstants.OFFSET, "0"); exchange.getIn().setHeader(MinioConstants.LENGTH, "9"); } }) .to("minio://mycamelbucket?minioClient=#minioClient&operation=getObjectRange") .to("mock:result");
This operation will return a MinioObject instance related to the camelKey object in the mycamelbucket bucket, containing bytes from 0 to 9.
35.8. Bucket Autocreation
The autoCreateBucket option controls whether a Minio bucket is created automatically when it doesn’t exist. The default for this option is true. If set to false, any operation on a non-existent bucket in Minio won’t be successful and an error will be returned.
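For example, to fail fast instead of creating the bucket on demand, a route could disable the option on the endpoint (a sketch with placeholder credentials):
from("minio://mycamelbucket?accessKey=yourAccessKey&secretKey=yourSecretKey&autoCreateBucket=false")
    .to("mock:result");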
35.9. Automatic detection of Minio client in registry
The component is capable of detecting the presence of a MinioClient bean in the registry. If it is the only instance of that type, it will be used as the client and you won’t have to define it as a URI parameter, as in the example above. This may be really useful for smarter configuration of the endpoint.
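In a Spring Boot application this can be as simple as exposing a single MinioClient bean, which the component then picks up without any minioClient URI parameter. A sketch, with placeholder endpoint and credentials:
@Configuration
public class MinioClientConfig {

    @Bean
    public MinioClient minioClient() {
        // the only MinioClient in the registry, so the Minio component will use it
        return MinioClient.builder()
                .endpoint("http://localhost:9000")
                .credentials("yourAccessKey", "yourSecretKey")
                .build();
    }
}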
35.10. Moving objects between buckets
Some users like to consume objects from a bucket and move them to a different bucket without using the copyObject feature of this component. If this is the case for you, don’t forget to remove the CamelMinioBucketName header from the incoming exchange of the consumer, otherwise the file will always be overwritten in the original bucket.
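A minimal sketch of such a route (bucket names and credentials are placeholders); the incoming bucket header is removed so the producer writes to the bucket configured on the target endpoint instead of the source bucket:
from("minio://sourceBucket?accessKey=yourAccessKey&secretKey=yourSecretKey")
    // drop the source bucket header, otherwise the producer would write back to sourceBucket
    .removeHeader(MinioConstants.BUCKET_NAME)
    .to("minio://targetBucket?accessKey=yourAccessKey&secretKey=yourSecretKey");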
35.11. MoveAfterRead consumer option
In addition to deleteAfterRead, another option has been added: moveAfterRead. With this option enabled, the consumed object will be moved to a target bucket instead of being only deleted. This requires specifying the destinationBucketName option. As an example:
from("minio://mycamelbucket?minioClient=#minioClient&moveAfterRead=true&destinationBucketName=myothercamelbucket") .to("mock:result");
In this case the consumed objects will be moved to the myothercamelbucket bucket and deleted from the original one (because deleteAfterRead is set to true by default).
35.12. Using a POJO as body
Sometimes building a Minio request can be complex because of multiple options. We introduce the possibility to use a POJO as the body. In Minio there are multiple operations you can submit; as an example, for a list objects request you can do something like:
from("direct:minio") .setBody(ListObjectsArgs.builder() .bucket(bucketName) .recursive(getConfiguration().isRecursive()))) .to("minio://test?minioClient=#minioClient&operation=listObjects&pojoRequest=true")
In this way you’ll pass the request directly without the need of passing headers and options specifically related to this operation.
35.13. Dependencies
Maven users will need to add the following dependency to their pom.xml.
pom.xml
<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-minio</artifactId> <version>${camel-version}</version> </dependency>
where ${camel-version} must be replaced by the actual version of Camel.
35.14. Spring Boot Auto-Configuration
When using minio with Spring Boot make sure to use the following Maven dependency to have support for auto configuration:
<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-minio-starter</artifactId> </dependency>
The component supports 48 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
camel.component.minio.access-key | Amazon AWS Access Key Id or Minio Access Key. If not set camel will connect to service for anonymous access. | String | |
camel.component.minio.auto-close-body | If this option is true and includeBody is true, then the MinioObject.close() method will be called on exchange completion. This option is strongly related to includeBody option. In case of setting includeBody to true and autocloseBody to false, it will be up to the caller to close the MinioObject stream. Setting autocloseBody to true, will close the MinioObject stream automatically. | true | Boolean |
camel.component.minio.auto-create-bucket | Setting the autocreation of the bucket if the bucket name doesn’t exist. | true | Boolean |
camel.component.minio.autowired-enabled | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | Boolean |
camel.component.minio.bridge-error-handler | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | Boolean |
camel.component.minio.bypass-governance-mode | Set this flag if you want to bypassGovernanceMode when deleting a particular object. | false | Boolean |
camel.component.minio.configuration | The component configuration. The option is a org.apache.camel.component.minio.MinioConfiguration type. | MinioConfiguration | |
camel.component.minio.custom-http-client | Set custom HTTP client for authenticated access. The option is a okhttp3.OkHttpClient type. | OkHttpClient | |
camel.component.minio.delete-after-read | Delete objects from Minio after they have been retrieved. The delete is only performed if the Exchange is committed. If a rollback occurs, the object is not deleted. If this option is false, then the same objects will be retrieve over and over again on the polls. Therefore you need to use the Idempotent Consumer EIP in the route to filter out duplicates. You can filter using the MinioConstants#BUCKET_NAME and MinioConstants#OBJECT_NAME headers, or only the MinioConstants#OBJECT_NAME header. | true | Boolean |
camel.component.minio.delete-after-write | Delete file object after the Minio file has been uploaded. | false | Boolean |
camel.component.minio.delimiter | The delimiter which is used in the ListObjectsRequest to only consume objects we are interested in. | String | |
camel.component.minio.destination-bucket-name | Destination bucket name. | String | |
camel.component.minio.destination-object-name | Destination object name. | String | |
camel.component.minio.enabled | Whether to enable auto configuration of the minio component. This is enabled by default. | Boolean | |
camel.component.minio.endpoint | Endpoint can be an URL, domain name, IPv4 address or IPv6 address. | String | |
camel.component.minio.include-body | If it is true, the exchange body will be set to a stream to the contents of the file. If false, the headers will be set with the Minio object metadata, but the body will be null. This option is strongly related to autocloseBody option. In case of setting includeBody to true and autocloseBody to false, it will be up to the caller to close the MinioObject stream. Setting autocloseBody to true, will close the MinioObject stream automatically. | true | Boolean |
camel.component.minio.include-folders | The flag which is used in the ListObjectsRequest to set include folders. | false | Boolean |
camel.component.minio.include-user-metadata | The flag which is used in the ListObjectsRequest to get objects with user meta data. | false | Boolean |
camel.component.minio.include-versions | The flag which is used in the ListObjectsRequest to get objects with versioning. | false | Boolean |
camel.component.minio.key-name | Setting the key name for an element in the bucket through endpoint parameter. | String | |
camel.component.minio.lazy-start-producer | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | Boolean |
camel.component.minio.length | Number of bytes of object data from offset. | Long | |
camel.component.minio.match-e-tag | Set match ETag parameter for get object(s). | String | |
camel.component.minio.max-connections | Set the maxConnections parameter in the minio client configuration. | 60 | Integer |
camel.component.minio.max-messages-per-poll | Gets the maximum number of messages as a limit to poll at each polling. The default value is 10. Use 0 or a negative number to set it as unlimited. | 10 | Integer |
camel.component.minio.minio-client | Reference to a Minio Client object in the registry. The option is a io.minio.MinioClient type. | MinioClient | |
camel.component.minio.modified-since | Set modified since parameter for get object(s). The option is a java.time.ZonedDateTime type. | ZonedDateTime | |
camel.component.minio.move-after-read | Move objects from bucket to a different bucket after they have been retrieved. To accomplish the operation the destinationBucket option must be set. The copy bucket operation is only performed if the Exchange is committed. If a rollback occurs, the object is not moved. | false | Boolean |
camel.component.minio.not-match-e-tag | Set not match ETag parameter for get object(s). | String | |
camel.component.minio.object-lock | Set when creating new bucket. | false | Boolean |
camel.component.minio.object-name | To get the object from the bucket with the given object name. | String | |
camel.component.minio.offset | Start byte position of object data. | Long | |
camel.component.minio.operation | The operation to do in case the user doesn’t want to do only an upload. | MinioOperations | |
camel.component.minio.pojo-request | If we want to use a POJO request as body or not. | false | Boolean |
camel.component.minio.policy | The policy for this queue to set in the method. | String | |
camel.component.minio.prefix | Object name starts with prefix. | String | |
camel.component.minio.proxy-port | TCP/IP port number. 80 and 443 are used as defaults for HTTP and HTTPS. | Integer | |
camel.component.minio.recursive | List objects recursively rather than emulating a directory structure. | false | Boolean |
camel.component.minio.region | The region in which Minio client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example ap-east-1). You’ll need to use the name Region.EU_WEST_1.id(). | String | |
camel.component.minio.secret-key | Amazon AWS Secret Access Key or Minio Secret Key. If not set camel will connect to service for anonymous access. | String | |
camel.component.minio.secure | Flag to indicate to use secure connection to minio service or not. | false | Boolean |
camel.component.minio.server-side-encryption | Server-side encryption. The option is a io.minio.ServerSideEncryption type. | ServerSideEncryption | |
camel.component.minio.server-side-encryption-customer-key | Server-side encryption for source object while copy/move objects. The option is a io.minio.ServerSideEncryptionCustomerKey type. | ServerSideEncryptionCustomerKey | |
camel.component.minio.start-after | list objects in bucket after this object name. | String | |
camel.component.minio.storage-class | The storage class to set in the request. | String | |
camel.component.minio.un-modified-since | Set un modified since parameter for get object(s). The option is a java.time.ZonedDateTime type. | ZonedDateTime | |
camel.component.minio.use-version1 | when true, version 1 of REST API is used. | false | Boolean |
camel.component.minio.version-id | Set specific version_ID of a object when deleting the object. | String |
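As an example, a minimal application.properties sketch using a few of the options above (the endpoint and credentials are placeholders):
camel.component.minio.endpoint = http://localhost:9000
camel.component.minio.access-key = yourAccessKey
camel.component.minio.secret-key = yourSecretKey
camel.component.minio.auto-create-bucket = true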
Chapter 36. MLLP
Both producer and consumer are supported
The MLLP component is specifically designed to handle the nuances of the MLLP protocol and provide the functionality required by Healthcare providers to communicate with other systems using the MLLP protocol.
The MLLP component provides a simple configuration URI, automated HL7 acknowledgment generation and automatic acknowledgment interrogation.
The MLLP protocol does not typically use a large number of concurrent TCP connections - a single active TCP connection is the normal case. Therefore, the MLLP component uses a simple thread-per-connection model based on standard Java Sockets. This keeps the implementation simple and limits the component’s dependencies to Camel itself.
The component supports the following:
- A Camel consumer using a TCP Server
- A Camel producer using a TCP Client
The MLLP component uses byte[] payloads, and relies on Camel type conversion to convert byte[] to other types.
Maven users will need to add the following dependency to their pom.xml for this component:
<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-mllp</artifactId> <version>${camel.version}</version> <!-- use the same version as your Camel core version --> </dependency>
36.1. Configuring Options
Camel components are configured on two separate levels:
- component level
- endpoint level
36.1.1. Configuring Component Options
The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth.
Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all.
Configuring components can be done with the Component DSL, in a configuration file (application.properties|yaml), or directly with Java code.
36.1.2. Configuring Endpoint Options
Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both.
Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints.
A good practice when configuring options is to use Property Placeholders, which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse.
The following two sections lists all the options, firstly for the component followed by the endpoint.
36.2. Component Options
The MLLP component supports 30 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
autoAck (common) | Enable/Disable the automatic generation of an MLLP Acknowledgement (MLLP Consumers only). | true | boolean |
charsetName (common) | Sets the default charset to use. | String | |
configuration (common) | Sets the default configuration to use when creating MLLP endpoints. | MllpConfiguration | |
hl7Headers (common) | Enable/Disable the automatic generation of message headers from the HL7 Message (MLLP Consumers only). | true | boolean |
requireEndOfData (common) | Enable/Disable strict compliance with the MLLP standard. The MLLP standard specifies START_OF_BLOCK, hl7 payload, END_OF_BLOCK, END_OF_DATA, however, some systems do not send the final END_OF_DATA byte. This setting controls whether or not the final END_OF_DATA byte is required or optional. | true | boolean |
stringPayload (common) | Enable/Disable converting the payload to a String. If enabled, HL7 Payloads received from external systems will be converted to a String. If the charsetName property is set, that character set will be used for the conversion. If the charsetName property is not set, the value of MSH-18 will be used to determine the appropriate character set. If MSH-18 is not set, then the default ISO-8859-1 character set will be used. | true | boolean |
validatePayload (common) | Enable/Disable the validation of HL7 Payloads. If enabled, HL7 Payloads received from external systems will be validated (see Hl7Util.generateInvalidPayloadExceptionMessage for details on the validation). If an invalid payload is detected, a MllpInvalidMessageException (for consumers) or a MllpInvalidAcknowledgementException (for producers) will be thrown. | false | boolean |
acceptTimeout (consumer) | Timeout (in milliseconds) while waiting for a TCP connection TCP Server Only. | 60000 | int |
backlog (consumer) | The maximum queue length for incoming connection indications (a request to connect) is set to the backlog parameter. If a connection indication arrives when the queue is full, the connection is refused. | 5 | Integer |
bindRetryInterval (consumer) | TCP Server Only - The number of milliseconds to wait between bind attempts. | 5000 | int |
bindTimeout (consumer) | TCP Server Only - The number of milliseconds to retry binding to a server port. | 30000 | int |
bridgeErrorHandler (consumer) | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to receive incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. If disabled, the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions by logging them at WARN or ERROR level and ignored. | true | boolean |
lenientBind (consumer) | TCP Server Only - Allow the endpoint to start before the TCP ServerSocket is bound. In some environments, it may be desirable to allow the endpoint to start before the TCP ServerSocket is bound. | false | boolean |
maxConcurrentConsumers (consumer) | The maximum number of concurrent MLLP Consumer connections that will be allowed. If a new connection is received and the maximum number of connections is already established, the new connection will be reset immediately. | 5 | int |
reuseAddress (consumer) | Enable/disable the SO_REUSEADDR socket option. | false | Boolean |
exchangePattern (consumer (advanced)) | Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly, InOut, InOptionalOut | InOut | ExchangePattern |
connectTimeout (producer) | Timeout (in milliseconds) for establishing a TCP connection (TCP Client only). | 30000 | int |
idleTimeoutStrategy (producer) | Decide what action to take when an idle timeout occurs. Possible values are: RESET: set SO_LINGER to 0 and reset the socket; CLOSE: close the socket gracefully. Default is RESET. Enum values: RESET, CLOSE | RESET | MllpIdleTimeoutStrategy |
keepAlive (producer) | Enable/disable the SO_KEEPALIVE socket option. | true | Boolean |
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
tcpNoDelay (producer) | Enable/disable the TCP_NODELAY socket option. | true | Boolean |
autowiredEnabled (advanced) | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | boolean |
defaultCharset (advanced) | Set the default character set to use for byte to/from String conversions. | ISO-8859-1 | String |
logPhi (advanced) | Whether to log PHI. | true | Boolean |
logPhiMaxBytes (advanced) | Set the maximum number of bytes of PHI that will be logged in a log entry. | 5120 | Integer |
readTimeout (advanced) | The SO_TIMEOUT value (in milliseconds) used after the start of an MLLP frame has been received. | 5000 | int |
receiveBufferSize (advanced) | Sets the SO_RCVBUF option to the specified value (in bytes). | 8192 | Integer |
receiveTimeout (advanced) | The SO_TIMEOUT value (in milliseconds) used when waiting for the start of an MLLP frame. | 15000 | int |
sendBufferSize (advanced) | Sets the SO_SNDBUF option to the specified value (in bytes). | 8192 | Integer |
idleTimeout (tcp) | The approximate idle time allowed before the Client TCP Connection will be reset. A null value or a value less than or equal to zero will disable the idle timeout. | Integer |
36.3. Endpoint Options
The MLLP endpoint is configured using URI syntax:
mllp:hostname:port
with the following path and query parameters:
36.3.1. Path Parameters (2 parameters)
Name | Description | Default | Type |
---|---|---|---|
hostname (common) | Required Hostname or IP address for the TCP connection. The default value is null, which means any local IP address. | String | |
port (common) | Required Port number for the TCP connection. | int |
36.3.2. Query Parameters (26 parameters)
Name | Description | Default | Type |
---|---|---|---|
autoAck (common) | Enable/Disable the automatic generation of an MLLP Acknowledgement (MLLP Consumers only). | true | boolean |
charsetName (common) | Sets the default charset to use. | String | |
hl7Headers (common) | Enable/Disable the automatic generation of message headers from the HL7 Message (MLLP Consumers only). | true | boolean |
requireEndOfData (common) | Enable/Disable strict compliance with the MLLP standard. The MLLP standard specifies START_OF_BLOCK, hl7 payload, END_OF_BLOCK, END_OF_DATA, however, some systems do not send the final END_OF_DATA byte. This setting controls whether or not the final END_OF_DATA byte is required or optional. | true | boolean |
stringPayload (common) | Enable/Disable converting the payload to a String. If enabled, HL7 Payloads received from external systems will be converted to a String. If the charsetName property is set, that character set will be used for the conversion. If the charsetName property is not set, the value of MSH-18 will be used to determine the appropriate character set. If MSH-18 is not set, then the default ISO-8859-1 character set will be used. | true | boolean |
validatePayload (common) | Enable/Disable the validation of HL7 Payloads. If enabled, HL7 Payloads received from external systems will be validated (see Hl7Util.generateInvalidPayloadExceptionMessage for details on the validation). If an invalid payload is detected, a MllpInvalidMessageException (for consumers) or a MllpInvalidAcknowledgementException (for producers) will be thrown. | false | boolean |
acceptTimeout (consumer) | Timeout (in milliseconds) while waiting for a TCP connection TCP Server Only. | 60000 | int |
backlog (consumer) | The maximum queue length for incoming connection indications (a request to connect) is set to the backlog parameter. If a connection indication arrives when the queue is full, the connection is refused. | 5 | Integer |
bindRetryInterval (consumer) | TCP Server Only - The number of milliseconds to wait between bind attempts. | 5000 | int |
bindTimeout (consumer) | TCP Server Only - The number of milliseconds to retry binding to a server port. | 30000 | int |
bridgeErrorHandler (consumer) | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to receive incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. If disabled, the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions by logging them at WARN or ERROR level and ignored. | true | boolean |
lenientBind (consumer) | TCP Server Only - Allow the endpoint to start before the TCP ServerSocket is bound. In some environments, it may be desirable to allow the endpoint to start before the TCP ServerSocket is bound. | false | boolean |
maxConcurrentConsumers (consumer) | The maximum number of concurrent MLLP Consumer connections that will be allowed. If a new connection is received and the maximum number of connections is already established, the new connection will be reset immediately. | 5 | int |
reuseAddress (consumer) | Enable/disable the SO_REUSEADDR socket option. | false | Boolean |
exceptionHandler (consumer (advanced)) | To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. | ExceptionHandler | |
exchangePattern (consumer (advanced)) | Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly, InOut, InOptionalOut | InOut | ExchangePattern |
connectTimeout (producer) | Timeout (in milliseconds) for establishing a TCP connection (TCP Client only). | 30000 | int |
idleTimeoutStrategy (producer) | Decide what action to take when an idle timeout occurs. Possible values are: RESET: set SO_LINGER to 0 and reset the socket; CLOSE: close the socket gracefully. Default is RESET. Enum values: RESET, CLOSE | RESET | MllpIdleTimeoutStrategy |
keepAlive (producer) | Enable/disable the SO_KEEPALIVE socket option. | true | Boolean |
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
tcpNoDelay (producer) | Enable/disable the TCP_NODELAY socket option. | true | Boolean |
readTimeout (advanced) | The SO_TIMEOUT value (in milliseconds) used after the start of an MLLP frame has been received. | 5000 | int |
receiveBufferSize (advanced) | Sets the SO_RCVBUF option to the specified value (in bytes). | 8192 | Integer |
receiveTimeout (advanced) | The SO_TIMEOUT value (in milliseconds) used when waiting for the start of an MLLP frame. | 15000 | int |
sendBufferSize (advanced) | Sets the SO_SNDBUF option to the specified value (in bytes). | 8192 | Integer |
idleTimeout (tcp) | The approximate idle time allowed before the Client TCP Connection will be reset. A null value or a value less than or equal to zero will disable the idle timeout. | Integer |
36.4. MLLP Consumer
The MLLP Consumer supports receiving MLLP-framed messages and sending HL7 Acknowledgements. The MLLP Consumer can automatically generate the HL7 Acknowledgement (HL7 Application Acknowledgements only - AA, AE and AR), or the acknowledgement can be specified using the CamelMllpAcknowledgement exchange property. Additionally, the type of acknowledgement that will be generated can be controlled by setting the CamelMllpAcknowledgementType exchange property. The MLLP Consumer can read messages without sending any HL7 Acknowledgement if the automatic acknowledgement is disabled and exchange pattern is InOnly.
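A minimal consumer sketch (the listen address, port and downstream endpoint are placeholders); with autoAck left at its default of true, the acknowledgement is generated automatically when the exchange completes:
from("mllp:0.0.0.0:8777")
    // the HL7 payload arrives as byte[]; convert it where a String is more convenient
    .convertBodyTo(String.class)
    .to("direct:handleHl7");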
36.4.1. Message Headers
The MLLP Consumer adds these headers on the Camel message:
Key | Description |
---|---|
CamelMllpLocalAddress | The local TCP Address of the Socket |
CamelMllpRemoteAddress | The remote TCP Address of the Socket |
CamelMllpSendingApplication | MSH-3 value |
CamelMllpSendingFacility | MSH-4 value |
CamelMllpReceivingApplication | MSH-5 value |
CamelMllpReceivingFacility | MSH-6 value |
CamelMllpTimestamp | MSH-7 value |
CamelMllpSecurity | MSH-8 value |
CamelMllpMessageType | MSH-9 value |
CamelMllpEventType | MSH-9-1 value |
CamelMllpTriggerEvent | MSH-9-2 value |
CamelMllpMessageControlId | MSH-10 value |
CamelMllpProcessingId | MSH-11 value |
CamelMllpVersionId | MSH-12 value |
CamelMllpCharset | MSH-18 value |
All headers are String types. If a header value is missing, its value is null.
36.4.2. Exchange Properties
The type of acknowledgment the MLLP Consumer generates and state of the TCP Socket can be controlled by these properties on the Camel exchange:
Key | Type | Description |
---|---|---|
CamelMllpAcknowledgement | byte[] | If present, this property will be sent to the client as the MLLP Acknowledgement |
CamelMllpAcknowledgementString | String | If present and CamelMllpAcknowledgement is not present, this property will be sent to the client as the MLLP Acknowledgement |
CamelMllpAcknowledgementMsaText | String | If neither CamelMllpAcknowledgement nor CamelMllpAcknowledgementString is present and autoAck is true, this property can be used to specify the contents of MSA-3 in the generated HL7 acknowledgement |
CamelMllpAcknowledgementType | String | If neither CamelMllpAcknowledgement nor CamelMllpAcknowledgementString is present and autoAck is true, this property can be used to specify the HL7 acknowledgement type (i.e. AA, AE, AR) |
CamelMllpAutoAcknowledge | Boolean | Overrides the autoAck query parameter |
CamelMllpCloseConnectionBeforeSend | Boolean | If true, the Socket will be closed before sending data |
CamelMllpResetConnectionBeforeSend | Boolean | If true, the Socket will be reset before sending data |
CamelMllpCloseConnectionAfterSend | Boolean | If true, the Socket will be closed immediately after sending data |
CamelMllpResetConnectionAfterSend | Boolean | If true, the Socket will be reset immediately after sending any data |
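As a hedged sketch of how these properties can be used, the route below overrides the type of the generated acknowledgement. The listener address and the routing rule are illustrative assumptions; the property name is taken from the table above.
import org.apache.camel.builder.RouteBuilder;

public class MllpAckOverrideRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("mllp://0.0.0.0:8888")
            .choice()
                .when(body().contains("ADT"))      // illustrative routing rule
                    .to("direct:admitDischargeTransfer")
                .otherwise()
                    // ask the consumer to generate an Application Reject (AR)
                    // instead of the default AA acknowledgement
                    .setProperty("CamelMllpAcknowledgementType", constant("AR"))
            .end();
    }
}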
36.5. MLLP Producer
The MLLP Producer supports sending MLLP-framed messages and receiving HL7 Acknowledgements. The received acknowledgement is interrogated, and an exception is raised in the event of a negative acknowledgement. The MLLP Producer can ignore acknowledgements when configured with the InOnly exchange pattern.
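A minimal producer sketch is shown below. The trigger endpoint and the remote host and port are illustrative assumptions.
import org.apache.camel.builder.RouteBuilder;

public class MllpProducerRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Forward HL7 payloads to a remote MLLP listener (host and port are illustrative).
        // A negative HL7 acknowledgement from the listener surfaces as an exception and
        // can be handled with Camel's usual error handling.
        from("direct:sendHl7")
            .to("mllp://hl7.example.com:8888");
    }
}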
36.5.1. Message Headers
The MLLP Producer adds these headers on the Camel message:
Key | Description |
---|---|
CamelMllpLocalAddress | The local TCP Address of the Socket |
CamelMllpRemoteAddress | The remote TCP Address of the Socket |
CamelMllpAcknowledgement | The HL7 Acknowledgment byte[] received |
CamelMllpAcknowledgementString | The HL7 Acknowledgment received, converted to a String |
36.5.2. Exchange Properties
The state of the TCP Socket can be controlled by these properties on the Camel exchange:
Key | Type | Description |
---|---|---|
CamelMllpCloseConnectionBeforeSend | Boolean | If true, the Socket will be closed before sending data |
CamelMllpResetConnectionBeforeSend | Boolean | If true, the Socket will be reset before sending data |
CamelMllpCloseConnectionAfterSend | Boolean | If true, the Socket will be closed immediately after sending data |
CamelMllpResetConnectionAfterSend | Boolean | If true, the Socket will be reset immediately after sending any data |
36.6. Spring Boot Auto-Configuration
When using mllp with Spring Boot make sure to use the following Maven dependency to have support for auto configuration:
<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-mllp-starter</artifactId> </dependency>
The component supports 31 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
camel.component.mllp.accept-timeout | Timeout (in milliseconds) while waiting for a TCP connection. TCP Server Only. | 60000 | Integer |
camel.component.mllp.auto-ack | Enable/Disable the automatic generation of an MLLP Acknowledgement. MLLP Consumers only. | true | Boolean |
camel.component.mllp.autowired-enabled | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | Boolean |
camel.component.mllp.backlog | The maximum queue length for incoming connection indications (a request to connect) is set to the backlog parameter. If a connection indication arrives when the queue is full, the connection is refused. | 5 | Integer |
camel.component.mllp.bind-retry-interval | TCP Server Only - The number of milliseconds to wait between bind attempts. | 5000 | Integer |
camel.component.mllp.bind-timeout | TCP Server Only - The number of milliseconds to retry binding to a server port. | 30000 | Integer |
camel.component.mllp.bridge-error-handler | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to receive incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. If disabled, the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions by logging them at WARN or ERROR level and ignored. | true | Boolean |
camel.component.mllp.charset-name | Sets the default charset to use. | String | |
camel.component.mllp.configuration | Sets the default configuration to use when creating MLLP endpoints. The option is a org.apache.camel.component.mllp.MllpConfiguration type. | MllpConfiguration | |
camel.component.mllp.connect-timeout | Timeout (in milliseconds) for establishing a TCP connection. TCP Client only. | 30000 | Integer |
camel.component.mllp.default-charset | Set the default character set to use for byte to/from String conversions. | ISO-8859-1 | String |
camel.component.mllp.enabled | Whether to enable auto configuration of the mllp component. This is enabled by default. | Boolean | |
camel.component.mllp.exchange-pattern | Sets the exchange pattern when the consumer creates an exchange. | ExchangePattern | |
camel.component.mllp.hl7-headers | Enable/Disable the automatic generation of message headers from the HL7 Message. MLLP Consumers only. | true | Boolean |
camel.component.mllp.idle-timeout | The approximate idle time allowed before the Client TCP Connection will be reset. A null value or a value less than or equal to zero will disable the idle timeout. | Integer | |
camel.component.mllp.idle-timeout-strategy | Decides what action to take when an idle timeout occurs. Possible values are RESET (set SO_LINGER to 0 and reset the socket) and CLOSE (close the socket gracefully). The default is RESET. | MllpIdleTimeoutStrategy |
camel.component.mllp.keep-alive | Enable/disable the SO_KEEPALIVE socket option. | true | Boolean |
camel.component.mllp.lazy-start-producer | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | Boolean |
camel.component.mllp.lenient-bind | TCP Server Only - Allow the endpoint to start before the TCP ServerSocket is bound. In some environments, it may be desirable to allow the endpoint to start before the TCP ServerSocket is bound. | false | Boolean |
camel.component.mllp.log-phi | Whether to log PHI. | true | Boolean |
camel.component.mllp.log-phi-max-bytes | Set the maximum number of bytes of PHI that will be logged in a log entry. | 5120 | Integer |
camel.component.mllp.max-concurrent-consumers | The maximum number of concurrent MLLP Consumer connections that will be allowed. If a new connection is received and the maximum number of connections is already established, the new connection will be reset immediately. | 5 | Integer |
camel.component.mllp.read-timeout | The SO_TIMEOUT value (in milliseconds) used after the start of an MLLP frame has been received. | 5000 | Integer |
camel.component.mllp.receive-buffer-size | Sets the SO_RCVBUF option to the specified value (in bytes). | 8192 | Integer |
camel.component.mllp.receive-timeout | The SO_TIMEOUT value (in milliseconds) used when waiting for the start of an MLLP frame. | 15000 | Integer |
camel.component.mllp.require-end-of-data | Enable/Disable strict compliance with the MLLP standard. The MLLP standard specifies the framing START_OF_BLOCK, HL7 payload, END_OF_BLOCK, END_OF_DATA; however, some systems do not send the final END_OF_DATA byte. This setting controls whether or not the final END_OF_DATA byte is required or optional. | true | Boolean |
camel.component.mllp.reuse-address | Enable/disable the SO_REUSEADDR socket option. | false | Boolean |
camel.component.mllp.send-buffer-size | Sets the SO_SNDBUF option to the specified value (in bytes). | 8192 | Integer |
camel.component.mllp.string-payload | Enable/Disable converting the payload to a String. If enabled, HL7 Payloads received from external systems will be validated and converted to a String. If the charsetName property is set, that character set will be used for the conversion. If the charsetName property is not set, the value of MSH-18 will be used to determine the appropriate character set. If MSH-18 is not set, then the default ISO-8859-1 character set will be used. | true | Boolean |
camel.component.mllp.tcp-no-delay | Enable/disable the TCP_NODELAY socket option. | true | Boolean |
camel.component.mllp.validate-payload | Enable/Disable the validation of HL7 Payloads. If enabled, HL7 Payloads received from external systems will be validated (see Hl7Util.generateInvalidPayloadExceptionMessage for details on the validation). If an invalid payload is detected, a MllpInvalidMessageException (for consumers) or a MllpInvalidAcknowledgementException will be thrown. | false | Boolean |
Chapter 37. Mock
Only producer is supported
Testing of distributed and asynchronous processing is notoriously difficult. The Mock, Test and Dataset endpoints work great with the Camel Testing Framework to simplify your unit and integration testing using Enterprise Integration Patterns and Camel’s large range of Components together with the powerful Bean Integration.
The Mock component provides a powerful declarative testing mechanism, which is similar to jMock in that it allows declarative expectations to be created on any Mock endpoint before a test begins. Then the test is run, which typically fires messages to one or more endpoints, and finally the expectations can be asserted in a test case to ensure the system worked as expected.
This allows you to test various things like:
- The correct number of messages are received on each endpoint,
- The correct payloads are received, in the right order,
- Messages arrive on an endpoint in order, using some Expression to create an order testing function,
- Messages arriving match some kind of Predicate, such as that specific headers have certain values, or that parts of the messages match an expression such as an XPath or XQuery Expression.
There is also the Test endpoint which is a Mock endpoint, but which uses a second endpoint to provide the list of expected message bodies and automatically sets up the Mock endpoint assertions. In other words, it’s a Mock endpoint that automatically sets up its assertions from some sample messages in a File or database, for example.
Mock endpoints keep received Exchanges in memory indefinitely.
Remember that Mock is designed for testing. When you add Mock endpoints to a route, each Exchange sent to the endpoint will be stored (to allow for later validation) in memory until explicitly reset or the JVM is restarted. If you are sending high volume and/or large messages, this may cause excessive memory use. If your goal is to test deployable routes inline, consider using NotifyBuilder or AdviceWith in your tests instead of adding Mock endpoints to routes directly. There are two options, retainFirst and retainLast, that can be used to limit the number of messages the Mock endpoints keep in memory.
37.1. URI format
mock:someName[?options]
Where someName
can be any string that uniquely identifies the endpoint.
37.2. Configuring Options
Camel components are configured on two separate levels:
- component level
- endpoint level
37.2.1. Configuring Component Options
The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth.
Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all.
Configuring components can be done with the Component DSL, in a configuration file (application.properties|yaml), or directly with Java code.
37.2.2. Configuring Endpoint Options
Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both.
Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints.
A good practice when configuring options is to use Property Placeholders, which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse.
The following two sections lists all the options, firstly for the component followed by the endpoint.
37.3. Component Options
The Mock component supports 4 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
log (producer) | To turn on logging when the mock receives an incoming message. This will log only one time at INFO level for the incoming message. For more detailed logging then set the logger to DEBUG level for the org.apache.camel.component.mock.MockEndpoint class. | false | boolean |
autowiredEnabled (advanced) | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | boolean |
exchangeFormatter (advanced) | Autowired Sets a custom ExchangeFormatter to convert the Exchange to a String suitable for logging. If not specified, we default to DefaultExchangeFormatter. | ExchangeFormatter |
37.4. Endpoint Options
The Mock endpoint is configured using URI syntax:
mock:name
with the following path and query parameters:
37.4.1. Path Parameters (1 parameters)
Name | Description | Default | Type |
---|---|---|---|
name (producer) | Required Name of mock endpoint. | String |
37.4.2. Query Parameters (12 parameters)
Name | Description | Default | Type |
---|---|---|---|
assertPeriod (producer) | Sets a grace period after which the mock endpoint will re-assert to ensure the preliminary assertion is still valid. This is used, for example, to assert that an exact number of messages arrives. For example, if expectedMessageCount(int) was set to 5, then the assertion is satisfied when 5 or more messages arrive. To ensure that exactly 5 messages arrive, you would need to wait a little period to ensure no further messages arrive. This is what you can use this method for. By default this period is disabled. | long | |
expectedCount (producer) | Specifies the expected number of message exchanges that should be received by this endpoint. Beware: if you want to expect that 0 messages arrive, then take extra care, as 0 matches when the test starts, so you need to set an assert period to let the test run for a while to make sure there are still no messages arrived; for that use setAssertPeriod(long). An alternative is to use NotifyBuilder, and use the notifier to know when Camel is done routing some messages, before you call the assertIsSatisfied() method on the mocks. This allows you to not use a fixed assert period, to speed up testing times. If you want to assert that exactly the n’th message arrives to this mock endpoint, then see also the setAssertPeriod(long) method for further details. | -1 | int |
failFast (producer) | Sets whether assertIsSatisfied() should fail fast at the first detected failed expectation while it may otherwise wait for all expected messages to arrive before performing expectations verifications. Is by default true. Set to false to use behavior as in Camel 2.x. | false | boolean |
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
log (producer) | To turn on logging when the mock receives an incoming message. This will log only one time at INFO level for the incoming message. For more detailed logging then set the logger to DEBUG level for the org.apache.camel.component.mock.MockEndpoint class. | false | boolean |
reportGroup (producer) | A number that is used to turn on throughput logging based on groups of that size. | int | |
resultMinimumWaitTime (producer) | Sets the minimum expected amount of time (in millis) the assertIsSatisfied() will wait on a latch until it is satisfied. | long | |
resultWaitTime (producer) | Sets the maximum amount of time (in millis) the assertIsSatisfied() will wait on a latch until it is satisfied. | long | |
retainFirst (producer) | Specifies to only retain the first n received Exchanges. This is used when testing with big data, to reduce memory consumption by not storing copies of every Exchange this mock endpoint receives. Important: When using this limitation, the getReceivedCounter() will still return the actual number of received Exchanges. For example if we have received 5000 Exchanges, and have configured to only retain the first 10 Exchanges, then the getReceivedCounter() will still return 5000 but there are only the first 10 Exchanges in the getExchanges() and getReceivedExchanges() methods. When using this method, some of the other expectation methods are not supported; for example the expectedBodiesReceived(Object…) sets an expectation on the first number of bodies received. You can configure both setRetainFirst(int) and setRetainLast(int) methods, to limit both the first and last received. | -1 | int |
retainLast (producer) | Specifies to only retain the last n received Exchanges. This is used when testing with big data, to reduce memory consumption by not storing copies of every Exchange this mock endpoint receives. Important: When using this limitation, the getReceivedCounter() will still return the actual number of received Exchanges. For example if we have received 5000 Exchanges, and have configured to only retain the last 20 Exchanges, then the getReceivedCounter() will still return 5000 but there are only the last 20 Exchanges in the getExchanges() and getReceivedExchanges() methods. When using this method, some of the other expectation methods are not supported; for example the expectedBodiesReceived(Object…) sets an expectation on the first number of bodies received. You can configure both setRetainFirst(int) and setRetainLast(int) methods, to limit both the first and last received. | -1 | int |
sleepForEmptyTest (producer) | Allows a sleep to be specified to wait to check that this endpoint really is empty when expectedMessageCount(int) is called with zero. | long | |
copyOnExchange (producer (advanced)) | Sets whether to make a deep copy of the incoming Exchange when received at this mock endpoint. Is by default true. | true | boolean |
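Because these are query parameters, they can also be set directly on the mock URI in a route. A small sketch follows; the route and the values are illustrative.
from("direct:start")
    // keep only the first and last 5 exchanges in memory and log throughput per 100 messages
    .to("mock:result?retainFirst=5&retainLast=5&reportGroup=100");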
37.5. Simple Example
Here’s a simple example of Mock endpoint in use. First, the endpoint is resolved on the context. Then we set an expectation, and then, after the test has run, we assert that our expectations have been met:
MockEndpoint resultEndpoint = context.getEndpoint("mock:foo", MockEndpoint.class);

// set expectations
resultEndpoint.expectedMessageCount(2);

// send some messages

// now lets assert that the mock:foo endpoint received 2 messages
resultEndpoint.assertIsSatisfied();
You typically always call the assertIsSatisfied() method after running a test, to verify that the expectations were met.
Camel will by default wait 10 seconds when the assertIsSatisfied()
is invoked. This can be configured by setting the setResultWaitTime(millis)
method.
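For example, to lower the wait time (the 2 second value is illustrative):
MockEndpoint resultEndpoint = context.getEndpoint("mock:foo", MockEndpoint.class);

// wait at most 2 seconds in assertIsSatisfied() instead of the default 10 seconds
resultEndpoint.setResultWaitTime(2000);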
37.6. Using assertPeriod
When the assertion is satisfied then Camel will stop waiting and continue from the assertIsSatisfied
method. That means if a new message arrives on the mock endpoint just a bit later, that arrival will not affect the outcome of the assertion. Suppose you do want to test that no new messages arrive for a period thereafter; you can do that by setting the setAssertPeriod
method, for example:
MockEndpoint resultEndpoint = context.getEndpoint("mock:foo", MockEndpoint.class);

resultEndpoint.setAssertPeriod(5000);
resultEndpoint.expectedMessageCount(2);

// send some messages

// now lets assert that the mock:foo endpoint received 2 messages
resultEndpoint.assertIsSatisfied();
37.7. Setting expectations
You can see from the Javadoc of MockEndpoint the various helper methods you can use to set expectations. The main methods are as follows:
Method | Description |
---|---|
expectedMessageCount(int) | To define the expected message count on the endpoint. |
expectedMinimumMessageCount(int) | To define the minimum number of expected messages on the endpoint. |
expectedBodiesReceived(Object…) | To define the expected bodies that should be received (in order). |
expectedHeaderReceived(String, Object) | To define the expected header that should be received. |
expectsAscending(Expression) | To add an expectation that messages are received in order, using the given Expression to compare messages. |
expectsDescending(Expression) | To add an expectation that messages are received in order, using the given Expression to compare messages. |
expectsNoDuplicates(Expression) | To add an expectation that no duplicate messages are received; using an Expression to calculate a unique identifier for each message. This could be something like the JMSMessageID when using JMS. |
Here’s another example:
resultEndpoint.expectedBodiesReceived("firstMessageBody", "secondMessageBody", "thirdMessageBody");
37.8. Adding expectations to specific messages
In addition, you can use the message(int messageIndex)
method to add assertions about a specific message that is received.
For example, to add expectations of the headers or body of the first message (using zero-based indexing like java.util.List
), you can use the following code:
resultEndpoint.message(0).header("foo").isEqualTo("bar");
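A couple more assertions in the same style; the header name and the expected values are illustrative:
resultEndpoint.message(0).body().isInstanceOf(String.class);
resultEndpoint.message(1).header("foo").isEqualTo("baz");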
There are some examples of the Mock endpoint in use in the camel-core
processor tests.
37.9. Mocking existing endpoints
Camel now allows you to automatically mock existing endpoints in your Camel routes.
How it works
The endpoints are still in action. What happens differently is that a Mock endpoint is injected and receives the message first and then delegates the message to the target endpoint. You can view this as a kind of intercept and delegate or endpoint listener.
Suppose you have the given route below:
Route
@Override
protected RouteBuilder createRouteBuilder() throws Exception {
    return new RouteBuilder() {
        @Override
        public void configure() throws Exception {
            from("direct:start").routeId("start")
                .to("direct:foo").to("log:foo").to("mock:result");

            from("direct:foo").routeId("foo")
                .transform(constant("Bye World"));
        }
    };
}
You can then use the adviceWith
feature in Camel to mock all the endpoints in a given route from your unit test, as shown below:
adviceWith
mocking all endpoints
@Test
public void testAdvisedMockEndpoints() throws Exception {
    // advice the start route using the inlined AdviceWith lambda style route builder
    // which has more capabilities than the regular route builder
    AdviceWith.adviceWith(context, "start", a ->
        // mock all endpoints
        a.mockEndpoints());

    getMockEndpoint("mock:direct:start").expectedBodiesReceived("Hello World");
    getMockEndpoint("mock:direct:foo").expectedBodiesReceived("Hello World");
    getMockEndpoint("mock:log:foo").expectedBodiesReceived("Bye World");
    getMockEndpoint("mock:result").expectedBodiesReceived("Bye World");

    template.sendBody("direct:start", "Hello World");

    assertMockEndpointsSatisfied();

    // additional test to ensure correct endpoints in registry
    assertNotNull(context.hasEndpoint("direct:start"));
    assertNotNull(context.hasEndpoint("direct:foo"));
    assertNotNull(context.hasEndpoint("log:foo"));
    assertNotNull(context.hasEndpoint("mock:result"));
    // all the endpoints were mocked
    assertNotNull(context.hasEndpoint("mock:direct:start"));
    assertNotNull(context.hasEndpoint("mock:direct:foo"));
    assertNotNull(context.hasEndpoint("mock:log:foo"));
}
Notice that the mock endpoints is given the URI mock:<endpoint>
, for example mock:direct:foo
. Camel logs at INFO
level the endpoints being mocked:
INFO Adviced endpoint [direct://foo] with mock endpoint [mock:direct:foo]
Mocked endpoints are without parameters
Endpoints which are mocked will have their parameters stripped off. For example the endpoint log:foo?showAll=true
will be mocked to the following endpoint mock:log:foo
. Notice the parameters have been removed.
It's also possible to only mock certain endpoints using a pattern. For example, to mock all log
endpoints you do as shown:
adviceWith
mocking only log endpoints using a pattern
@Test
public void testAdvisedMockEndpointsWithPattern() throws Exception {
    // advice the start route using the inlined AdviceWith lambda style route builder
    // which has more capabilities than the regular route builder
    AdviceWith.adviceWith(context, "start", a ->
        // mock only log endpoints
        a.mockEndpoints("log*"));

    // now we can refer to log:foo as a mock and set our expectations
    getMockEndpoint("mock:log:foo").expectedBodiesReceived("Bye World");
    getMockEndpoint("mock:result").expectedBodiesReceived("Bye World");

    template.sendBody("direct:start", "Hello World");

    assertMockEndpointsSatisfied();

    // additional test to ensure correct endpoints in registry
    assertNotNull(context.hasEndpoint("direct:start"));
    assertNotNull(context.hasEndpoint("direct:foo"));
    assertNotNull(context.hasEndpoint("log:foo"));
    assertNotNull(context.hasEndpoint("mock:result"));
    // only the log:foo endpoint was mocked
    assertNotNull(context.hasEndpoint("mock:log:foo"));
    assertNull(context.hasEndpoint("mock:direct:start"));
    assertNull(context.hasEndpoint("mock:direct:foo"));
}
The pattern supported can be a wildcard or a regular expression. See more details about this at Intercept, as it's the same matching function used by Camel.
Mind that mocking endpoints causes the messages to be copied when they arrive on the mock.
That means Camel will use more memory. This may not be suitable when you send in a lot of messages.
37.10. Mocking existing endpoints using the camel-test
component
Instead of using the adviceWith
to instruct Camel to mock endpoints, you can easily enable this behavior when using the camel-test
Test Kit.
The same route can be tested as follows. Notice that we return "*"
from the isMockEndpoints
method, which tells Camel to mock all endpoints.
If you only want to mock all log
endpoints you can return "log*"
instead.
isMockEndpoints
using camel-test kit
public class IsMockEndpointsJUnit4Test extends CamelTestSupport {

    @Override
    public String isMockEndpoints() {
        // override this method and return the pattern for which endpoints to mock.
        // use * to indicate all
        return "*";
    }

    @Test
    public void testMockAllEndpoints() throws Exception {
        // notice we have automatically mocked all endpoints and the name of the endpoints is "mock:uri"
        getMockEndpoint("mock:direct:start").expectedBodiesReceived("Hello World");
        getMockEndpoint("mock:direct:foo").expectedBodiesReceived("Hello World");
        getMockEndpoint("mock:log:foo").expectedBodiesReceived("Bye World");
        getMockEndpoint("mock:result").expectedBodiesReceived("Bye World");

        template.sendBody("direct:start", "Hello World");

        assertMockEndpointsSatisfied();

        // additional test to ensure correct endpoints in registry
        assertNotNull(context.hasEndpoint("direct:start"));
        assertNotNull(context.hasEndpoint("direct:foo"));
        assertNotNull(context.hasEndpoint("log:foo"));
        assertNotNull(context.hasEndpoint("mock:result"));
        // all the endpoints were mocked
        assertNotNull(context.hasEndpoint("mock:direct:start"));
        assertNotNull(context.hasEndpoint("mock:direct:foo"));
        assertNotNull(context.hasEndpoint("mock:log:foo"));
    }

    @Override
    protected RouteBuilder createRouteBuilder() throws Exception {
        return new RouteBuilder() {
            @Override
            public void configure() throws Exception {
                from("direct:start").to("direct:foo").to("log:foo").to("mock:result");

                from("direct:foo").transform(constant("Bye World"));
            }
        };
    }
}
37.11. Mocking existing endpoints with XML DSL
If you do not use the camel-test
component for unit testing (as shown above) you can use a different approach when using XML files for routes.
The solution is to create a new XML file used by the unit test and then include the intended XML file which has the route you want to test.
Suppose we have the route in the camel-route.xml
file:
camel-route.xml
<!-- this camel route is in the camel-route.xml file -->
<camelContext xmlns="http://camel.apache.org/schema/spring">
    <route>
        <from uri="direct:start"/>
        <to uri="direct:foo"/>
        <to uri="log:foo"/>
        <to uri="mock:result"/>
    </route>
    <route>
        <from uri="direct:foo"/>
        <transform>
            <constant>Bye World</constant>
        </transform>
    </route>
</camelContext>
Then we create a new XML file as follows, where we include the camel-route.xml
file and define a Spring bean with the class org.apache.camel.component.mock.InterceptSendToMockEndpointStrategy
which tells Camel to mock all endpoints:
test-camel-route.xml
<!-- the Camel route is defined in another XML file -->
<import resource="camel-route.xml"/>

<!-- bean which enables mocking all endpoints -->
<bean id="mockAllEndpoints" class="org.apache.camel.component.mock.InterceptSendToMockEndpointStrategy"/>
Then in your unit test you load the new XML file (test-camel-route.xml
) instead of camel-route.xml
.
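If you use the camel-test-spring-junit5 module, the unit test can simply load the wrapper file. A minimal sketch is shown below; the class name is illustrative and the base class is an assumption about your test setup.
import org.apache.camel.test.spring.junit5.CamelSpringTestSupport;
import org.springframework.context.support.AbstractApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public class MockAllEndpointsSpringTest extends CamelSpringTestSupport {

    @Override
    protected AbstractApplicationContext createApplicationContext() {
        // load the wrapper XML, which imports camel-route.xml and registers the mock strategy bean
        return new ClassPathXmlApplicationContext("test-camel-route.xml");
    }
}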
To only mock all Log endpoints you can define the pattern in the constructor for the bean:
<bean id="mockAllEndpoints" class="org.apache.camel.component.mock.InterceptSendToMockEndpointStrategy">
    <constructor-arg index="0" value="log*"/>
</bean>
37.12. Mocking endpoints and skip sending to original endpoint
Sometimes you want to easily mock and skip sending to certain endpoints, so the message is detoured and sent to the mock endpoint only. You can use the mockEndpointsAndSkip
method using AdviceWith. The example below will skip sending to the two endpoints "direct:foo"
, and "direct:bar"
.
adviceWith mock and skip sending to endpoints
@Test
public void testAdvisedMockEndpointsWithSkip() throws Exception {
    // advice the first route using the inlined AdviceWith route builder
    // which has more capabilities than the regular route builder
    AdviceWith.adviceWith(context.getRouteDefinitions().get(0), context, new AdviceWithRouteBuilder() {
        @Override
        public void configure() throws Exception {
            // mock sending to direct:foo and direct:bar and skip sending to them
            mockEndpointsAndSkip("direct:foo", "direct:bar");
        }
    });

    getMockEndpoint("mock:result").expectedBodiesReceived("Hello World");
    getMockEndpoint("mock:direct:foo").expectedMessageCount(1);
    getMockEndpoint("mock:direct:bar").expectedMessageCount(1);

    template.sendBody("direct:start", "Hello World");

    assertMockEndpointsSatisfied();

    // the message was not sent to the direct:foo route and thus not sent to
    // the seda endpoint
    SedaEndpoint seda = context.getEndpoint("seda:foo", SedaEndpoint.class);
    assertEquals(0, seda.getCurrentQueueSize());
}
The same example using the Test Kit
isMockEndpointsAndSkip using camel-test kit
public class IsMockEndpointsAndSkipJUnit4Test extends CamelTestSupport {

    @Override
    public String isMockEndpointsAndSkip() {
        // override this method and return the pattern for which endpoints to mock,
        // and skip sending to the original endpoint.
        return "direct:foo";
    }

    @Test
    public void testMockEndpointAndSkip() throws Exception {
        // notice we have automatically mocked the direct:foo endpoint and the name of the endpoint is "mock:uri"
        getMockEndpoint("mock:result").expectedBodiesReceived("Hello World");
        getMockEndpoint("mock:direct:foo").expectedMessageCount(1);

        template.sendBody("direct:start", "Hello World");

        assertMockEndpointsSatisfied();

        // the message was not sent to the direct:foo route and thus not sent to the seda endpoint
        SedaEndpoint seda = context.getEndpoint("seda:foo", SedaEndpoint.class);
        assertEquals(0, seda.getCurrentQueueSize());
    }

    @Override
    protected RouteBuilder createRouteBuilder() throws Exception {
        return new RouteBuilder() {
            @Override
            public void configure() throws Exception {
                from("direct:start").to("direct:foo").to("mock:result");

                from("direct:foo").transform(constant("Bye World")).to("seda:foo");
            }
        };
    }
}
37.13. Limiting the number of messages to keep
The Mock endpoints will by default keep a copy of every Exchange that it received. So if you test with a lot of messages, then it will consume memory.
We have introduced two options retainFirst
and retainLast
that can be used to keep only the first and/or last N Exchanges.
For example in the code below, we only want to retain a copy of the first 5 and last 5 Exchanges the mock receives.
MockEndpoint mock = getMockEndpoint("mock:data"); mock.setRetainFirst(5); mock.setRetainLast(5); mock.expectedMessageCount(2000); mock.assertIsSatisfied();
Using this has some limitations. The getExchanges()
and getReceivedExchanges()
methods on the MockEndpoint
will return only the retained copies of the Exchanges. So in the example above, the list will contain 10 Exchanges; the first five, and the last five.
The retainFirst
and retainLast
options also have limitations on which expectation methods you can use. For example the expectedXXX
methods that work on message bodies, headers, etc. will only operate on the retained messages. In the example above they can test only the expectations on the 10 retained messages.
37.14. Testing with arrival times
The Mock endpoint stores the arrival time of the message as a property on the Exchange
Date time = exchange.getProperty(Exchange.RECEIVED_TIMESTAMP, Date.class);
You can use this information to know when the message arrived on the mock. It also provides a foundation for knowing the time interval between the arrival of the previous and the next message on the mock. You can use this to set expectations using the arrives
DSL on the Mock endpoint.
For example to say that the first message should arrive between 0-2 seconds before the next you can do:
mock.message(0).arrives().noLaterThan(2).seconds().beforeNext();
You can also define this as the 2nd message (0-based index) arriving no later than 0-2 seconds after the previous:
mock.message(1).arrives().noLaterThan(2).seconds().afterPrevious();
You can also use between to set a lower bound. For example suppose that it should be between 1-4 seconds:
mock.message(1).arrives().between(1, 4).seconds().afterPrevious();
You can also set the expectation on all messages, for example to say that the gap between them should be at most 1 second:
mock.allMessages().arrives().noLaterThan(1).seconds().beforeNext();
Time units
In the example above we use seconds
as the time unit, but Camel offers milliseconds
, and minutes
as well.
37.15. Spring Boot Auto-Configuration
When using mock with Spring Boot make sure to use the following Maven dependency to have support for auto configuration:
<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-mock-starter</artifactId> </dependency>
The component supports 5 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
camel.component.mock.autowired-enabled | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | Boolean |
camel.component.mock.enabled | Whether to enable auto configuration of the mock component. This is enabled by default. | Boolean | |
camel.component.mock.exchange-formatter | Sets a custom ExchangeFormatter to convert the Exchange to a String suitable for logging. If not specified, we default to DefaultExchangeFormatter. The option is a org.apache.camel.spi.ExchangeFormatter type. | ExchangeFormatter | |
camel.component.mock.lazy-start-producer | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | Boolean |
camel.component.mock.log | To turn on logging when the mock receives an incoming message. This will log only one time at INFO level for the incoming message. For more detailed logging then set the logger to DEBUG level for the org.apache.camel.component.mock.MockEndpoint class. | false | Boolean |
Chapter 38. MongoDB
Both producer and consumer are supported
According to Wikipedia: "NoSQL is a movement promoting a loosely defined class of non-relational data stores that break with a long history of relational databases and ACID guarantees." NoSQL solutions have grown in popularity in the last few years, and major, heavily used sites and services such as Facebook, LinkedIn, Twitter, etc. are known to use them extensively to achieve scalability and agility.
Basically, NoSQL solutions differ from traditional RDBMS (Relational Database Management Systems) in that they don’t use SQL as their query language and generally don’t offer ACID-like transactional behaviour nor relational data. Instead, they are designed around the concept of flexible data structures and schemas (meaning that the traditional concept of a database table with a fixed schema is dropped), extreme scalability on commodity hardware and blazing-fast processing.
MongoDB is a very popular NoSQL solution and the camel-mongodb component integrates Camel with MongoDB allowing you to interact with MongoDB collections both as a producer (performing operations on the collection) and as a consumer (consuming documents from a MongoDB collection).
MongoDB revolves around the concepts of documents (not as in office documents, but rather hierarchical data defined in JSON/BSON) and collections. This component page assumes you are familiar with them. Otherwise, visit http://www.mongodb.org/.
The MongoDB Camel component uses Mongo Java Driver 4.x.
Maven users will need to add the following dependency to their pom.xml
for this component:
<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-mongodb</artifactId> <version>{CamelSBVersion}</version> <!-- use the same version as your Camel core version --> </dependency>
38.1. URI format
mongodb:connectionBean?database=databaseName&collection=collectionName&operation=operationName[&moreOptions...]
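The connectionBean path element refers to a com.mongodb.client.MongoClient looked up in the registry. With Spring Boot it is usually enough to register the client as a Spring bean; a minimal sketch follows, where the bean name and connection string are illustrative assumptions. Because the component's mongoConnection option is autowired, a single MongoClient bean can also be picked up automatically without naming it in the URI.
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class MongoClientConfig {

    // Registered as a Spring bean, so an endpoint URI such as mongodb:mongoBean?...
    // can resolve it by name from the Camel registry.
    @Bean(name = "mongoBean")
    public MongoClient mongoBean() {
        return MongoClients.create("mongodb://localhost:27017");
    }
}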
38.2. Configuring Options
Camel components are configured on two separate levels:
- component level
- endpoint level
38.2.1. Configuring Component Options
The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth.
Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all.
Configuring components can be done with the Component DSL, in a configuration file (application.properties|yaml), or directly with Java code.
38.2.2. Configuring Endpoint Options
Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both.
Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints.
A good practice when configuring options is to use Property Placeholders, which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse.
The following two sections lists all the options, firstly for the component followed by the endpoint.
38.3. Component Options
The MongoDB component supports 4 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
mongoConnection (common) | Autowired Shared client used for connection. All endpoints generated from the component will share this connection client. | MongoClient | |
bridgeErrorHandler (consumer) | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | boolean |
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
autowiredEnabled (advanced) | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | boolean |
38.4. Endpoint Options
The MongoDB endpoint is configured using URI syntax:
mongodb:connectionBean
with the following path and query parameters:
38.4.1. Path Parameters (1 parameters)
Name | Description | Default | Type |
---|---|---|---|
connectionBean (common) | Required Sets the connection bean reference used to lookup a client for connecting to a database. | String |
38.4.2. Query Parameters (27 parameters)
Name | Description | Default | Type |
---|---|---|---|
collection (common) | Sets the name of the MongoDB collection to bind to this endpoint. | String | |
collectionIndex (common) | Sets the collection index (JSON format: { field1 : order1, field2 : order2 }). | String | |
createCollection (common) | Create collection during initialisation if it doesn’t exist. Default is true. | true | boolean |
database (common) | Sets the name of the MongoDB database to target. | String | |
hosts (common) | Host address of mongodb server in host:port format. It’s possible also use more than one address, as comma separated list of hosts: host1:port1,host2:port2. If hosts parameter is specified, provided connectionBean is ignored. | String | |
mongoConnection (common) | Sets the connection bean used as a client for connecting to a database. | MongoClient | |
operation (common) | Sets the operation this endpoint will execute against MongoDB. | MongoDbOperation | |
outputType (common) | Convert the output of the producer to the selected type: DocumentList, Document or MongoIterable. DocumentList or MongoIterable applies to findAll and aggregate. Document applies to all other operations. | MongoDbOutputType | |
bridgeErrorHandler (consumer) | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | boolean |
consumerType (consumer) | Consumer type. | String | |
exceptionHandler (consumer (advanced)) | To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. | ExceptionHandler | |
exchangePattern (consumer (advanced)) | Sets the exchange pattern when the consumer creates an exchange. | ExchangePattern | |
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
cursorRegenerationDelay (advanced) | MongoDB tailable cursors will block until new data arrives. If no new data is inserted, after some time the cursor will be automatically freed and closed by the MongoDB server. The client is expected to regenerate the cursor if needed. This value specifies the time to wait before attempting to fetch a new cursor, and if the attempt fails, how long before the next attempt is made. Default value is 1000ms. | 1000 | long |
dynamicity (advanced) | Sets whether this endpoint will attempt to dynamically resolve the target database and collection from the incoming Exchange properties. Can be used to override at runtime the database and collection specified on the otherwise static endpoint URI. It is disabled by default to boost performance. Enabling it will take a minimal performance hit. | false | boolean |
readPreference (advanced) | Configure how MongoDB clients route read operations to the members of a replica set. Possible values are PRIMARY, PRIMARY_PREFERRED, SECONDARY, SECONDARY_PREFERRED or NEAREST. | PRIMARY | String |
writeConcern (advanced) | Configure the connection bean with the level of acknowledgment requested from MongoDB for write operations to a standalone mongod, replicaset or cluster. Possible values are ACKNOWLEDGED, W1, W2, W3, UNACKNOWLEDGED, JOURNALED or MAJORITY. | ACKNOWLEDGED | String |
writeResultAsHeader (advanced) | In write operations, it determines whether instead of returning WriteResult as the body of the OUT message, we transfer the IN message to the OUT and attach the WriteResult as a header. | false | boolean |
streamFilter (changeStream) | Filter condition for change streams consumer. | String | |
password (security) | User password for mongodb connection. | String | |
username (security) | Username for mongodb connection. | String | |
persistentId (tail) | One tail tracking collection can host many trackers for several tailable consumers. To keep them separate, each tracker should have its own unique persistentId. | String | |
persistentTailTracking (tail) | Enable persistent tail tracking, which is a mechanism to keep track of the last consumed message across system restarts. The next time the system is up, the endpoint will recover the cursor from the point where it last stopped slurping records. | false | boolean |
tailTrackCollection (tail) | Collection where tail tracking information will be persisted. If not specified, MongoDbTailTrackingConfig#DEFAULT_COLLECTION will be used by default. | String | |
tailTrackDb (tail) | Indicates what database the tail tracking mechanism will persist to. If not specified, the current database will be picked by default. Dynamicity will not be taken into account even if enabled, i.e. the tail tracking database will not vary past endpoint initialisation. | String | |
tailTrackField (tail) | Field where the last tracked value will be placed. If not specified, MongoDbTailTrackingConfig#DEFAULT_FIELD will be used by default. | String | |
tailTrackIncreasingField (tail) | Correlation field in the incoming record which is of increasing nature and will be used to position the tailing cursor every time it is generated. The cursor will be (re)created with a query of type: tailTrackIncreasingField greater than lastValue (possibly recovered from persistent tail tracking). Can be of type Integer, Date, String, etc. NOTE: No support for dot notation at the current time, so the field should be at the top level of the document. | String |
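As an illustration of the tail options above, a minimal tailable consumer sketch follows. The database, collection and field names are illustrative assumptions; the collection being tailed must be a capped collection.
import org.apache.camel.builder.RouteBuilder;

public class MongoTailableConsumerRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Tail a capped collection and survive restarts by persisting the last
        // consumed value of the increasing field "departureTime".
        from("mongodb:mongoBean?database=flights&collection=cancellations"
                + "&tailTrackIncreasingField=departureTime"
                + "&persistentTailTracking=true&persistentId=cancellationsTracker")
            .to("log:cancellations");
    }
}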
38.5. Configuration of database in Spring XML
The following Spring XML creates a bean defining the connection to a MongoDB instance.
Since Mongo Java Driver 3, the WriteConcern and readPreference options are not dynamically modifiable. They are defined in the mongoClient object.
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:context="http://www.springframework.org/schema/context"
       xmlns:mongo="http://www.springframework.org/schema/data/mongo"
       xsi:schemaLocation="http://www.springframework.org/schema/context
           http://www.springframework.org/schema/context/spring-context.xsd
           http://www.springframework.org/schema/data/mongo
           http://www.springframework.org/schema/data/mongo/spring-mongo.xsd
           http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans.xsd">

    <mongo:mongo-client id="mongoBean" host="${mongo.url}" port="${mongo.port}" credentials="${mongo.user}:${mongo.pass}@${mongo.dbname}">
        <mongo:client-options write-concern="NORMAL" />
    </mongo:mongo-client>
</beans>
38.6. Sample route
The following route defined in Spring XML executes the operation getDbStats
on a collection.
Get DB stats for specified collection
<route> <from uri="direct:start" /> <!-- using bean 'mongoBean' defined above --> <to uri="mongodb:mongoBean?database=${mongodb.database}&collection=${mongodb.collection}&operation=getDbStats" /> <to uri="direct:result" /> </route>
38.7. MongoDB operations - producer endpoints
38.7.1. Query operations
38.7.1.1. findById
This operation retrieves only one element from the collection whose _id field matches the content of the IN message body. The incoming object can be anything that has an equivalent to a Bson
type. See http://bsonspec.org/spec.html and http://www.mongodb.org/display/DOCS/Java+Types.
from("direct:findById") .to("mongodb:myDb?database=flights&collection=tickets&operation=findById") .to("mock:resultFindById");
Please note that the default _id is treated by Mongo as an ObjectId
type, so you may need to convert it properly.
from("direct:findById") .convertBodyTo(ObjectId.class) .to("mongodb:myDb?database=flights&collection=tickets&operation=findById") .to("mock:resultFindById");
Supports optional parameters
This operation supports projection operators. See Specifying a fields filter (projection).
38.7.1.2. findOneByQuery
Retrieve the first element from a collection matching a MongoDB query selector. If the CamelMongoDbCriteria
header is set, then its value is used as the query selector. If the CamelMongoDbCriteria
header is null, then the IN message body is used as the query selector. In both cases, the query selector should be of type Bson
or convertible to Bson
(for instance, a JSON string or HashMap
). See Type conversions for more info.
Create query selectors using the Filters
provided by the MongoDB Driver.
38.7.1.3. Example without a query selector (returns the first document in a collection)
from("direct:findOneByQuery") .to("mongodb:myDb?database=flights&collection=tickets&operation=findOneByQuery") .to("mock:resultFindOneByQuery");
38.7.1.4. Example with a query selector (returns the first matching document in a collection):
from("direct:findOneByQuery") .setHeader(MongoDbConstants.CRITERIA, constant(Filters.eq("name", "Raul Kripalani"))) .to("mongodb:myDb?database=flights&collection=tickets&operation=findOneByQuery") .to("mock:resultFindOneByQuery");
Supports optional parameters
This operation supports projection operators and sort clauses. See Specifying a fields filter (projection), Specifying a sort clause.
38.7.1.5. findAll
The findAll
operation returns all documents matching a query, or none at all, in which case all documents contained in the collection are returned. The query object is extracted from the CamelMongoDbCriteria
header. If the CamelMongoDbCriteria header is null, the query object is extracted from the message body, i.e. it should be of type Bson
or convertible to Bson
. It can be a JSON String or a Hashmap. See Type conversions for more info.
38.7.1.5.1. Example without a query selector (returns all documents in a collection)
from("direct:findAll") .to("mongodb:myDb?database=flights&collection=tickets&operation=findAll") .to("mock:resultFindAll");
38.7.1.5.2. Example with a query selector (returns all matching documents in a collection)
from("direct:findAll") .setHeader(MongoDbConstants.CRITERIA, Filters.eq("name", "Raul Kripalani")) .to("mongodb:myDb?database=flights&collection=tickets&operation=findAll") .to("mock:resultFindAll");
Paging and efficient retrieval are supported via the following headers (see the sketch after the table):
Header key | Quick constant | Description (extracted from MongoDB API doc) | Expected type |
---|---|---|---|
| | Discards a given number of elements at the beginning of the cursor. | int/Integer |
| | Limits the number of elements returned. | int/Integer |
| | Limits the number of elements returned in one batch. A cursor typically fetches a batch of result objects and stores them locally. If batchSize is positive, it represents the size of each batch of objects retrieved. It can be adjusted to optimize performance and limit data transfer. If batchSize is negative, it limits the number of objects returned to those that fit within the maximum batch size limit (usually 4MB), and the cursor is closed. For example, if batchSize is -10, then the server returns a maximum of 10 documents, and as many as can fit in 4MB, then closes the cursor. Note that this feature is different from limit() in that documents must fit within a maximum size, and it removes the need to send a request to close the cursor server-side. The batch size can be changed even after a cursor is iterated, in which case the setting applies on the next batch retrieval. | int/Integer |
| | Sets the allowDiskUse MongoDB flag. This is supported since MongoDB Server 4.3.1. Using this header with older MongoDB Server versions can cause the query to fail. | boolean/Boolean |
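For example, a paged findAll can be sketched as follows. This is a minimal sketch: the route name is hypothetical, and it assumes the MongoDbConstants.NUM_TO_SKIP and MongoDbConstants.LIMIT constants correspond to the skip and limit headers listed above.
// route: from("direct:findAllPaged").to("mongodb:myDb?database=flights&collection=tickets&operation=findAll");
Map<String, Object> headers = new HashMap<>();
headers.put(MongoDbConstants.NUM_TO_SKIP, 20);  // assumed constant for the "skip" header
headers.put(MongoDbConstants.LIMIT, 10);        // assumed constant for the "limit" header
headers.put(MongoDbConstants.CRITERIA, Filters.eq("name", "Raul Kripalani"));
Object result = template.requestBodyAndHeaders("direct:findAllPaged", ObjectUtils.NULL, headers);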
38.7.1.5.3. Example with option outputType=MongoIterable and batch size
from("direct:findAll") .setHeader(MongoDbConstants.BATCH_SIZE).constant(10) .setHeader(MongoDbConstants.CRITERIA, constant(Filters.eq("name", "Raul Kripalani"))) .to("mongodb:myDb?database=flights&collection=tickets&operation=findAll&outputType=MongoIterable") .to("mock:resultFindAll");
The findAll
operation will also return the following OUT headers to enable you to iterate through result pages if you are using paging:
Header key | Quick constant | Description (extracted from MongoDB API doc) | Data type |
---|---|---|---|
| | Number of objects matching the query. This does not take limit/skip into consideration. | int/Integer |
| | Number of objects matching the query. This does not take limit/skip into consideration. | int/Integer |
Supports optional parameters
This operation supports projection operators and sort clauses. See Specifying a fields filter (projection), Specifying a sort clause.
38.7.1.6. count
Returns the total number of objects in a collection, returning a Long as the OUT message body.
The following example will count the number of records in the "dynamicCollectionName" collection. Notice how dynamicity is enabled; as a result, the operation will not run against the "flights" collection configured on the endpoint, but against the "dynamicCollectionName" collection specified via the MongoDbConstants.COLLECTION header.
// from("direct:count").to("mongodb:myDb?database=tickets&collection=flights&operation=count&dynamicity=true"); Long result = template.requestBodyAndHeader("direct:count", "irrelevantBody", MongoDbConstants.COLLECTION, "dynamicCollectionName"); assertTrue("Result is not of type Long", result instanceof Long);
You can provide a query. The query object is extracted from the CamelMongoDbCriteria header. If the CamelMongoDbCriteria header is null, the query object is extracted from the message body, i.e. it should be of type Bson or convertible to Bson. The operation then returns the number of documents matching this criteria.
Document query = ... Long count = template.requestBodyAndHeader("direct:count", query, MongoDbConstants.COLLECTION, "dynamicCollectionName");
38.7.1.7. Specifying a fields filter (projection)
Query operations will, by default, return the matching objects in their entirety (with all their fields). If your documents are large and you only require retrieving a subset of their fields, you can specify a field filter in all query operations, simply by setting the relevant Bson
(or type convertible to Bson
, such as a JSON String, Map, etc.) on the CamelMongoDbFieldsProjection
header, constant shortcut: MongoDbConstants.FIELDS_PROJECTION
.
Here is an example that uses MongoDB’s Projections
to simplify the creation of Bson. It retrieves all fields except _id
and boringField
:
// route: from("direct:findAll").to("mongodb:myDb?database=flights&collection=tickets&operation=findAll") Bson fieldProjection = Projection.exclude("_id", "boringField"); Object result = template.requestBodyAndHeader("direct:findAll", ObjectUtils.NULL, MongoDbConstants.FIELDS_PROJECTION, fieldProjection);
38.7.1.8. Specifying a sort clause
There is often a requirement to fetch the min/max record from a collection based on sorting by a particular field. The following example uses MongoDB’s Sorts to simplify the creation of the Bson sort document, fetching documents in descending order of _id:
// route: from("direct:findAll").to("mongodb:myDb?database=flights&collection=tickets&operation=findAll") Bson sorts = Sorts.descending("_id"); Object result = template.requestBodyAndHeader("direct:findAll", ObjectUtils.NULL, MongoDbConstants.SORT_BY, sorts);
In a Camel route the SORT_BY header can be used with the findOneByQuery operation to achieve the same result. If the FIELDS_PROJECTION header is also specified the operation will return a single field/value pair that can be passed directly to another component (for example, a parameterized MyBatis SELECT query). This example demonstrates fetching the temporally newest document from a collection and reducing the result to a single field, based on the documentTimestamp
field:
.from("direct:someTriggeringEvent") .setHeader(MongoDbConstants.SORT_BY).constant(Sorts.descending("documentTimestamp")) .setHeader(MongoDbConstants.FIELDS_PROJECTION).constant(Projection.include("documentTimestamp")) .setBody().constant("{}") .to("mongodb:myDb?database=local&collection=myDemoCollection&operation=findOneByQuery") .to("direct:aMyBatisParameterizedSelect");
38.7.2. Create/update operations
38.7.2.1. insert
Inserts a new object into the MongoDB collection, taken from the IN message body. Type conversion is attempted to turn it into Document or a List.
Two modes are supported: single insert and multiple insert. For multiple insert, the endpoint will expect a List, Array or Collection of objects of any type, as long as they are - or can be converted to - Document. Example:
from("direct:insert") .to("mongodb:myDb?database=flights&collection=tickets&operation=insert");
The operation will return a WriteResult, and depending on the WriteConcern
or the value of the invokeGetLastError
option, getLastError()
would have been called already or not. If you want to access the ultimate result of the write operation, you need to retrieve the CommandResult
by calling getLastError()
or getCachedLastError()
on the WriteResult
. Then you can verify the result by calling CommandResult.ok()
, CommandResult.getErrorMessage()
and/or CommandResult.getException()
.
Note that the new object’s _id
must be unique in the collection. If you don’t specify the value, MongoDB will automatically generate one for you. But if you do specify it and it is not unique, the insert operation will fail (and for Camel to notice, you will need to enable invokeGetLastError or set a WriteConcern that waits for the write result).
This is not a limitation of the component, but it is how things work in MongoDB for higher throughput. If you are using a custom _id
, you are expected to ensure at the application level that it is unique (and this is a good practice too).
The OID(s) of the inserted record(s) are stored in the message header under the CamelMongoOid
key (MongoDbConstants.OID
constant). The value stored is org.bson.types.ObjectId
for single insert or java.util.List<org.bson.types.ObjectId>
if multiple records have been inserted.
In MongoDB Java Driver 3.x, the insertOne and insertMany operations return void. The Camel insert operation returns the Document or List of Documents inserted. Note that each Document is updated with a new OID if needed.
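As a minimal sketch, the generated OID can be read back from that header after a single insert, reusing a "direct:insert" route like the one above:
// route: from("direct:insert").to("mongodb:myDb?database=flights&collection=tickets&operation=insert");
Document doc = new Document("scientist", "Newton");
Exchange reply = template.request("direct:insert", exchange -> exchange.getMessage().setBody(doc));
// for a single insert the header holds the generated org.bson.types.ObjectId
ObjectId oid = reply.getMessage().getHeader(MongoDbConstants.OID, ObjectId.class);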
38.7.2.2. save
The save operation is equivalent to an upsert (UPdate, inSERT) operation, where the record will be updated, and if it doesn’t exist, it will be inserted, all in one atomic operation. MongoDB will perform the matching based on the _id
field.
Beware that in case of an update, the object is replaced entirely and the usage of MongoDB’s $modifiers is not permitted. Therefore, if you want to manipulate the object if it already exists, you have two options:
- perform a query to retrieve the entire object first along with all its fields (may not be efficient), alter it inside Camel and then save it.
- use the update operation with $modifiers, which will execute the update at the server-side instead. You can enable the upsert flag, in which case if an insert is required, MongoDB will apply the $modifiers to the filter query object and insert the result.
If the document to be saved does not contain the _id
attribute, the operation will be an insert, and the new _id
created will be placed in the CamelMongoOid
header.
For example:
from("direct:insert") .to("mongodb:myDb?database=flights&collection=tickets&operation=save");
// route: from("direct:insert").to("mongodb:myDb?database=flights&collection=tickets&operation=save"); org.bson.Document docForSave = new org.bson.Document(); docForSave.put("key", "value"); Object result = template.requestBody("direct:insert", docForSave);
38.7.2.3. update
Update one or multiple records in the collection. Requires a filter query and update rules.
You can define the filter using the MongoDBConstants.CRITERIA header as Bson and define the update rules as Bson in the body.
Update after enrich
If you define the filter using the MongoDBConstants.CRITERIA header as Bson to query MongoDB before you do the update, and you use the enrich pattern with an aggregation strategy, you need to remove this header from the resulting Camel exchange during aggregation before applying the MongoDB update. If you do not remove this header during aggregation, or do not redefine the MongoDBConstants.CRITERIA header before sending the Camel exchange to the MongoDB producer endpoint, you may end up with an invalid Camel exchange payload while updating MongoDB.
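A minimal sketch of such an aggregation strategy is shown below; the class name is hypothetical, and the strategy simply copies the enrichment result into the body while dropping the criteria header before the exchange reaches the MongoDB update endpoint.
import org.apache.camel.AggregationStrategy;
import org.apache.camel.Exchange;
import org.apache.camel.component.mongodb.MongoDbConstants;

public class DropCriteriaStrategy implements AggregationStrategy {
    @Override
    public Exchange aggregate(Exchange oldExchange, Exchange newExchange) {
        // keep (or transform) the enrichment result as the new body, e.g. build the List<Bson> update body here
        oldExchange.getMessage().setBody(newExchange.getMessage().getBody());
        // drop the criteria header so it does not leak into the subsequent mongodb update
        oldExchange.getMessage().removeHeader(MongoDbConstants.CRITERIA);
        return oldExchange;
    }
}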
The second way requires a List<Bson> as the IN message body containing exactly 2 elements:
- Element 1 (index 0) ⇒ filter query ⇒ determines what objects will be affected, same as a typical query object
- Element 2 (index 1) ⇒ update rules ⇒ how matched objects will be updated. All modifier operations from MongoDB are supported.
Multiupdates
By default, MongoDB will only update 1 object even if multiple objects match the filter query. To instruct MongoDB to update all matching records, set the CamelMongoDbMultiUpdate
IN message header to true
.
A header with key CamelMongoDbRecordsAffected
will be returned (MongoDbConstants.RECORDS_AFFECTED
constant) with the number of records updated (copied from WriteResult.getN()
).
Supports the following IN message headers:
Header key | Quick constant | Description (extracted from MongoDB API doc) | Expected type |
---|---|---|---|
| | If the update should be applied to all objects matching. See http://www.mongodb.org/display/DOCS/Atomic+Operations | boolean/Boolean |
| | If the database should create the element if it does not exist | boolean/Boolean |
For example, the following will update all records whose filterField field equals true by setting the value of the "scientist" field to "Darwin":
// route: from("direct:update").to("mongodb:myDb?database=science&collection=notableScientists&operation=update"); List<Bson> body = new ArrayList<>(); Bson filterField = Filters.eq("filterField", true); body.add(filterField); BsonDocument updateObj = new BsonDocument().append("$set", new BsonDocument("scientist", new BsonString("Darwin"))); body.add(updateObj); Object result = template.requestBodyAndHeader("direct:update", body, MongoDbConstants.MULTIUPDATE, true);
// route: from("direct:update").to("mongodb:myDb?database=science&collection=notableScientists&operation=update"); Maps<String, Object> headers = new HashMap<>(2); headers.add(MongoDbConstants.MULTIUPDATE, true); headers.add(MongoDbConstants.FIELDS_FILTER, Filters.eq("filterField", true)); String updateObj = Updates.set("scientist", "Darwin");; Object result = template.requestBodyAndHeaders("direct:update", updateObj, headers);
// route: from("direct:update").to("mongodb:myDb?database=science&collection=notableScientists&operation=update"); String updateObj = "[{\"filterField\": true}, {\"$set\", {\"scientist\", \"Darwin\"}}]"; Object result = template.requestBodyAndHeader("direct:update", updateObj, MongoDbConstants.MULTIUPDATE, true);
38.7.3. Delete operations
38.7.3.1. remove
Remove matching records from the collection. The IN message body will act as the removal filter query, and is expected to be of type DBObject
or a type convertible to it.
The following example will remove all objects whose field 'conditionField' equals true, in the science database, notableScientists collection:
// route: from("direct:remove").to("mongodb:myDb?database=science&collection=notableScientists&operation=remove"); Bson conditionField = Filters.eq("conditionField", true); Object result = template.requestBody("direct:remove", conditionField);
A header with key CamelMongoDbRecordsAffected
is returned (MongoDbConstants.RECORDS_AFFECTED
constant) with type int
, containing the number of records deleted (copied from WriteResult.getN()
).
38.7.4. Bulk Write Operations
38.7.4.1. bulkWrite
Performs write operations in bulk with controls for order of execution. Requires a List<WriteModel<Document>>
as the IN message body containing commands for insert, update, and delete operations.
The following example will insert a new scientist "Pierre Curie", update the record with id "5" by setting the value of the "scientist" field to "Marie Curie", and delete the record with id "3":
// route: from("direct:bulkWrite").to("mongodb:myDb?database=science&collection=notableScientists&operation=bulkWrite"); List<WriteModel<Document>> bulkOperations = Arrays.asList( new InsertOneModel<>(new Document("scientist", "Pierre Curie")), new UpdateOneModel<>(new Document("_id", "5"), new Document("$set", new Document("scientist", "Marie Curie"))), new DeleteOneModel<>(new Document("_id", "3"))); BulkWriteResult result = template.requestBody("direct:bulkWrite", bulkOperations, BulkWriteResult.class);
By default, operations are executed in order and interrupted on the first write error without processing any remaining write operations in the list. To instruct MongoDB to continue to process remaining write operations in the list, set the CamelMongoDbBulkOrdered
IN message header to false
. Unordered operations may be executed in parallel and their order of execution is not guaranteed (see the sketch after the table below).
Header key | Quick constant | Description (extracted from MongoDB API doc) | Expected type |
---|---|---|---|
| | Perform an ordered or unordered operation execution. Defaults to true. | boolean/Boolean |
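For instance, the bulk write above can be executed unordered by setting that header; a minimal sketch, reusing the bulkOperations list from the previous example:
// route: from("direct:bulkWrite").to("mongodb:myDb?database=science&collection=notableScientists&operation=bulkWrite");
BulkWriteResult result = template.requestBodyAndHeader(
        "direct:bulkWrite", bulkOperations,
        "CamelMongoDbBulkOrdered", false,   // keep processing remaining operations after a write error
        BulkWriteResult.class);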
38.7.5. Other operations
38.7.5.1. aggregate
Perform an aggregation with the given pipeline contained in the body. Aggregations can be long and heavy operations. Use with care.
// route: from("direct:aggregate").to("mongodb:myDb?database=science&collection=notableScientists&operation=aggregate"); List<Bson> aggregate = Arrays.asList(match(or(eq("scientist", "Darwin"), eq("scientist", group("$scientist", sum("count", 1))); from("direct:aggregate") .setBody().constant(aggregate) .to("mongodb:myDb?database=science&collection=notableScientists&operation=aggregate") .to("mock:resultAggregate");
Supports the following IN message headers:
Header key | Quick constant | Description (extracted from MongoDB API doc) | Expected type |
---|---|---|---|
| | Sets the number of documents to return per batch. | int/Integer |
| | Enable aggregation pipeline stages to write data to temporary files. | boolean/Boolean |
By default a List of all results is returned. This can be heavy on memory depending on the size of the results. A safer alternative is to set your outputType=MongoIterable. The next Processor will see an iterable in the message body allowing it to step through the results one by one. Thus setting a batch size and returning an iterable allows for efficient retrieval and processing of the result.
An example would look like:
List<Bson> aggregate = Arrays.asList(match(or(eq("scientist", "Darwin"), eq("scientist", "Einstein"))), group("$scientist", sum("count", 1))); from("direct:aggregate") .setHeader(MongoDbConstants.BATCH_SIZE).constant(10) .setBody().constant(aggregate) .to("mongodb:myDb?database=science&collection=notableScientists&operation=aggregate&outputType=MongoIterable") .split(body()) .streaming() .to("mock:resultAggregate");
Note that calling .split(body())
is enough to send the entries down the route one-by-one, however it would still load all the entries into memory first. Calling .streaming()
is thus required to load data into memory by batches.
38.7.5.2. getDbStats
Equivalent of running the db.stats()
command in the MongoDB shell, which displays useful statistic figures about the database.
For example:
> db.stats(); { "db" : "test", "collections" : 7, "objects" : 719, "avgObjSize" : 59.73296244784423, "dataSize" : 42948, "storageSize" : 1000058880, "numExtents" : 9, "indexes" : 4, "indexSize" : 32704, "fileSize" : 1275068416, "nsSizeMB" : 16, "ok" : 1 }
Usage example:
// from("direct:getDbStats").to("mongodb:myDb?database=flights&collection=tickets&operation=getDbStats"); Object result = template.requestBody("direct:getDbStats", "irrelevantBody"); assertTrue("Result is not of type Document", result instanceof Document);
The operation will return a data structure similar to the one displayed in the shell, in the form of a Document
in the OUT message body.
38.7.5.3. getColStats
Equivalent of running the db.collection.stats()
command in the MongoDB shell, which displays useful statistic figures about the collection.
For example:
> db.camelTest.stats(); { "ns" : "test.camelTest", "count" : 100, "size" : 5792, "avgObjSize" : 57.92, "storageSize" : 20480, "numExtents" : 2, "nindexes" : 1, "lastExtentSize" : 16384, "paddingFactor" : 1, "flags" : 1, "totalIndexSize" : 8176, "indexSizes" : { "_id_" : 8176 }, "ok" : 1 }
Usage example:
// from("direct:getColStats").to("mongodb:myDb?database=flights&collection=tickets&operation=getColStats"); Object result = template.requestBody("direct:getColStats", "irrelevantBody"); assertTrue("Result is not of type Document", result instanceof Document);
The operation will return a data structure similar to the one displayed in the shell, in the form of a Document
in the OUT message body.
38.7.5.4. command
Run the body as a command on the database. Useful for admin operations such as getting host information, replication or sharding status.
The collection parameter is not used for this operation.
// route: from("command").to("mongodb:myDb?database=science&operation=command"); DBObject commandBody = new BasicDBObject("hostInfo", "1"); Object result = template.requestBody("direct:command", commandBody);
38.7.6. Dynamic operations
An Exchange can override the endpoint’s fixed operation by setting the CamelMongoDbOperation
header, defined by the MongoDbConstants.OPERATION_HEADER
constant.
The values supported are determined by the MongoDbOperation enumeration and match the accepted values for the operation
parameter on the endpoint URI.
For example:
// from("direct:insert").to("mongodb:myDb?database=flights&collection=tickets&operation=insert"); Object result = template.requestBodyAndHeader("direct:insert", "irrelevantBody", MongoDbConstants.OPERATION_HEADER, "count"); assertTrue("Result is not of type Long", result instanceof Long);
38.8. Consumers
There are several types of consumers:
- Tailable Cursor Consumer
- Change Streams Consumer
38.8.1. Tailable Cursor Consumer
MongoDB offers a mechanism to instantaneously consume ongoing data from a collection, by keeping the cursor open just like the tail -f
command of *nix systems. This mechanism is significantly more efficient than a scheduled poll, due to the fact that the server pushes new data to the client as it becomes available, rather than making the client ping back at scheduled intervals to fetch new data. It also reduces otherwise redundant network traffic.
There is only one requisite to use tailable cursors: the collection must be a "capped collection", meaning that it will only hold N objects, and when the limit is reached, MongoDB flushes old objects in the same order they were originally inserted. For more information, please refer to http://www.mongodb.org/display/DOCS/Tailable+Cursors.
The Camel MongoDB component implements a tailable cursor consumer, making this feature available for you to use in your Camel routes. As new objects are inserted, MongoDB will push them as Document
in natural order to your tailable cursor consumer, who will transform them to an Exchange and will trigger your route logic.
38.9. How the tailable cursor consumer works
To turn a cursor into a tailable cursor, a few special flags are to be signalled to MongoDB when first generating the cursor. Once created, the cursor will then stay open and will block upon calling the MongoCursor.next()
method until new data arrives. However, the MongoDB server reserves itself the right to kill your cursor if new data doesn’t appear after an indeterminate period. If you are interested to continue consuming new data, you have to regenerate the cursor. And to do so, you will have to remember the position where you left off or else you will start consuming from the top again.
The Camel MongoDB tailable cursor consumer takes care of all these tasks for you. You will just need to provide the key to some field in your data of increasing nature, which will act as a marker to position your cursor every time it is regenerated, e.g. a timestamp, a sequential ID, etc. It can be of any datatype supported by MongoDB. Date, Strings and Integers are found to work well. We call this mechanism "tail tracking" in the context of this component.
The consumer will remember the last value of this field and whenever the cursor is to be regenerated, it will run the query with a filter like: increasingField > lastValue
, so that only unread data is consumed.
Setting the increasing field: Set the key of the increasing field on the endpoint URI tailTrackIncreasingField
option. In Camel 2.10, it must be a top-level field in your data, as nested navigation for this field is not yet supported. That is, the "timestamp" field is okay, but "nested.timestamp" will not work. Please open a ticket in the Camel JIRA if you do require support for nested increasing fields.
Cursor regeneration delay: One thing to note is that if new data is not already available upon initialisation, MongoDB will kill the cursor instantly. Since we don’t want to overwhelm the server in this case, a cursorRegenerationDelay
option has been introduced (with a default value of 1000ms.), which you can modify to suit your needs.
An example:
from("mongodb:myDb?database=flights&collection=cancellations&tailTrackIncreasingField=departureTime") .id("tailableCursorConsumer1") .autoStartup(false) .to("mock:test");
The above route will consume from the "flights.cancellations" capped collection, using "departureTime" as the increasing field, with a default regeneration cursor delay of 1000ms.
38.10. Persistent tail tracking
Standard tail tracking is volatile and the last value is only kept in memory. However, in practice you will need to restart your Camel container every now and then, but your last value would then be lost and your tailable cursor consumer would start consuming from the top again, very likely sending duplicate records into your route.
To overcome this situation, you can enable the persistent tail tracking feature to keep track of the last consumed increasing value in a special collection inside your MongoDB database too. When the consumer initialises again, it will restore the last tracked value and continue as if nothing happened.
The last read value is persisted on two occasions: every time the cursor is regenerated and when the consumer shuts down. We may consider persisting at regular intervals too in the future (flush every 5 seconds) for added robustness if the demand is there. To request this feature, please open a ticket in the Camel JIRA.
38.11. Enabling persistent tail tracking
To enable this function, set at least the following options on the endpoint URI:
-
persistentTailTracking
option to true
-
persistentId
option to a unique identifier for this consumer, so that the same collection can be reused across many consumers
Additionally, you can set the tailTrackDb
, tailTrackCollection
and tailTrackField
options to customise where the runtime information will be stored. Refer to the endpoint options table at the top of this page for descriptions of each option.
For example, the following route will consume from the "flights.cancellations" capped collection, using "departureTime" as the increasing field, with a default regeneration cursor delay of 1000ms, with persistent tail tracking turned on, and persisting under the "cancellationsTracker" id on the "flights.camelTailTracking" collection, storing the last processed value under the "lastTrackingValue" field (camelTailTracking
and lastTrackingValue
are defaults).
from("mongodb:myDb?database=flights&collection=cancellations&tailTrackIncreasingField=departureTime&persistentTailTracking=true" + "&persistentId=cancellationsTracker") .id("tailableCursorConsumer2") .autoStartup(false) .to("mock:test");
Below is another example identical to the one above, but where the persistent tail tracking runtime information will be stored under the "trackers.camelTrackers" collection, in the "lastProcessedDepartureTime" field:
from("mongodb:myDb?database=flights&collection=cancellations&tailTrackIncreasingField=departureTime&persistentTailTracking=true" + "&persistentId=cancellationsTracker&tailTrackDb=trackers&tailTrackCollection=camelTrackers" + "&tailTrackField=lastProcessedDepartureTime") .id("tailableCursorConsumer3") .autoStartup(false) .to("mock:test");
38.11.1. Change Streams Consumer
Change Streams allow applications to access real-time data changes without the complexity and risk of tailing the MongoDB oplog. Applications can use change streams to subscribe to all data changes on a collection and immediately react to them. Because change streams use the aggregation framework, applications can also filter for specific changes or transform the notifications at will. The exchange body will contain the full document of any change.
To configure the Change Streams Consumer you need to specify consumerType, database, collection and, optionally, the JSON property streamFilter to filter events. That JSON property is a standard MongoDB $match aggregation stage. It can easily be specified using XML DSL configuration:
<route id="filterConsumer"> <from uri="mongodb:myDb?consumerType=changeStreams&database=flights&collection=tickets&streamFilter={ '$match':{'$or':[{'fullDocument.stringValue': 'specificValue'}]} }"/> <to uri="mock:test"/> </route>
Java configuration:
from("mongodb:myDb?consumerType=changeStreams&database=flights&collection=tickets&streamFilter={ '$match':{'$or':[{'fullDocument.stringValue': 'specificValue'}]} }") .to("mock:test");
You can externalize the streamFilter value into a property placeholder which allows the endpoint URI parameters to be cleaner and easier to read.
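A minimal sketch, assuming a property named mongodb.streamFilter is defined in application.properties (the property name is hypothetical):
# application.properties
mongodb.streamFilter={ '$match':{'$or':[{'fullDocument.stringValue': 'specificValue'}]} }
The route then references the placeholder instead of the inline JSON:
from("mongodb:myDb?consumerType=changeStreams&database=flights&collection=tickets&streamFilter={{mongodb.streamFilter}}") .to("mock:test");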
The changeStreams
consumer type will also return the following OUT headers:
Header key | Quick constant | Description (extracted from MongoDB API doc) | Data type |
---|---|---|---|
| | The type of operation that occurred. Can be any of the following values: insert, delete, replace, update, drop, rename, dropDatabase, invalidate. | String |
| | A document that contains the _id of the document created or modified by the insert, replace, delete, update operations (i.e. CRUD operations). For sharded collections, also displays the full shard key for the document. The _id field is not repeated if it is already a part of the shard key. | ObjectId |
38.12. Type conversions
The MongoDbBasicConverters
type converter included with the camel-mongodb component provides the following conversions:
Name | From type | To type | How? |
---|---|---|---|
fromMapToDocument | Map | Document | constructs a new |
fromDocumentToMap | Document | Map | |
fromStringToDocument | String | Document | uses |
fromStringToObjectId | String | ObjectId | constructs a new |
fromFileToDocument | File | Document | uses |
fromInputStreamToDocument | InputStream | Document | converts the inputstream bytes to a |
fromStringToList | String | List | uses |
This type converter is auto-discovered, so you don’t need to configure anything manually.
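For example, because the String-to-Document converter is auto-discovered, a JSON string body can be converted directly in a route; a minimal sketch (the route name is hypothetical):
from("direct:jsonToDocument")
    .convertBodyTo(org.bson.Document.class)   // applies the auto-discovered fromStringToDocument converter
    .to("mongodb:myDb?database=flights&collection=tickets&operation=insert");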
38.13. Spring Boot Auto-Configuration
When using mongodb with Spring Boot make sure to use the following Maven dependency to have support for auto configuration:
<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-mongodb-starter</artifactId> </dependency>
The component supports 5 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
camel.component.mongodb.autowired-enabled | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | Boolean |
camel.component.mongodb.bridge-error-handler | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | Boolean |
camel.component.mongodb.enabled | Whether to enable auto configuration of the mongodb component. This is enabled by default. | Boolean | |
camel.component.mongodb.lazy-start-producer | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | Boolean |
camel.component.mongodb.mongo-connection | Shared client used for connection. All endpoints generated from the component will share this connection client. The option is a com.mongodb.client.MongoClient type. | MongoClient |
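For example, these options can be set in application.properties; a minimal sketch with illustrative values:
# illustrative values only
camel.component.mongodb.enabled=true
camel.component.mongodb.lazy-start-producer=false
camel.component.mongodb.bridge-error-handler=false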
Chapter 39. Netty
Both producer and consumer are supported
The Netty component in Camel is a socket communication component, based on the Netty project version 4.
Netty is a NIO client server framework which enables quick and easy development of network applications such as protocol servers and clients.
Netty greatly simplifies and streamlines network programming such as TCP and UDP socket servers.
This camel component supports both producer and consumer endpoints.
The Netty component has several options and allows fine-grained control of a number of TCP/UDP communication parameters (buffer sizes, keepAlives, tcpNoDelay, etc) and facilitates both In-Only and In-Out communication on a Camel route.
Maven users will need to add the following dependency to their pom.xml
for this component:
<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-netty</artifactId> <version>{CamelSBVersion}</version> <!-- use the same version as your Camel core version --> </dependency>
39.1. URI format
The URI scheme for a netty component is as follows
netty:tcp://0.0.0.0:99999[?options] netty:udp://remotehost:99999/[?options]
This component supports producer and consumer endpoints for both TCP and UDP.
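For example, a simple request-response TCP pairing could look like this; a minimal sketch, where the host, port and the textline codec choice are assumptions:
// TCP consumer: accepts text lines and sends a reply (sync=true means request-response)
from("netty:tcp://0.0.0.0:5150?textline=true&sync=true")
    .transform(simple("Echo: ${body}"));

// TCP producer: sends the message body as a text line and waits for the reply
from("direct:send")
    .to("netty:tcp://localhost:5150?textline=true&sync=true");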
39.2. Configuring Options
Camel components are configured on two separate levels:
- component level
- endpoint level
39.2.1. Configuring Component Options
The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example, a component may have security settings, credentials for authentication, urls for network connection and so forth.
Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all.
Configuring components can be done with the Component DSL, in a configuration file (application.properties|yaml
), or directly with Java code.
39.2.2. Configuring Endpoint Options
Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both.
Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints.
A good practice when configuring options is to use Property Placeholders, which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse.
The following two sections lists all the options, firstly for the component followed by the endpoint.
39.3. Component Options
The Netty component supports 73 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
configuration (common) | To use the NettyConfiguration as configuration when creating endpoints. | NettyConfiguration | |
disconnect (common) | Whether or not to disconnect(close) from Netty Channel right after use. Can be used for both consumer and producer. | false | boolean |
keepAlive (common) | Setting to ensure socket is not closed due to inactivity. | true | boolean |
reuseAddress (common) | Setting to facilitate socket multiplexing. | true | boolean |
reuseChannel (common) | This option allows producers and consumers (in client mode) to reuse the same Netty Channel for the lifecycle of processing the Exchange. This is useful if you need to call a server multiple times in a Camel route and want to use the same network connection. When using this, the channel is not returned to the connection pool until the Exchange is done; or disconnected if the disconnect option is set to true. The reused Channel is stored on the Exchange as an exchange property with the key NettyConstants#NETTY_CHANNEL which allows you to obtain the channel during routing and use it as well. | false | boolean |
sync (common) | Setting to set endpoint as one-way or request-response. | true | boolean |
tcpNoDelay (common) | Setting to improve TCP protocol performance. | true | boolean |
bridgeErrorHandler (consumer) | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | boolean |
broadcast (consumer) | Setting to choose Multicast over UDP. | false | boolean |
clientMode (consumer) | If the clientMode is true, netty consumer will connect the address as a TCP client. | false | boolean |
reconnect (consumer) | Used only in clientMode in consumer, the consumer will attempt to reconnect on disconnection if this is enabled. | true | boolean |
reconnectInterval (consumer) | Used if reconnect and clientMode is enabled. The interval in milli seconds to attempt reconnection. | 10000 | int |
backlog (consumer (advanced)) | Allows to configure a backlog for netty consumer (server). Note the backlog is just a best effort depending on the OS. Setting this option to a value such as 200, 500 or 1000, tells the TCP stack how long the accept queue can be If this option is not configured, then the backlog depends on OS setting. | int | |
bossCount (consumer (advanced)) | When netty works on nio mode, it uses default bossCount parameter from Netty, which is 1. User can use this option to override the default bossCount from Netty. | 1 | int |
bossGroup (consumer (advanced)) | Set the BossGroup which could be used for handling the new connection of the server side across the NettyEndpoint. | EventLoopGroup | |
disconnectOnNoReply (consumer (advanced)) | If sync is enabled then this option dictates NettyConsumer if it should disconnect where there is no reply to send back. | true | boolean |
executorService (consumer (advanced)) | To use the given EventExecutorGroup. | EventExecutorGroup | |
maximumPoolSize (consumer (advanced)) | Sets a maximum thread pool size for the netty consumer ordered thread pool. The default size is 2 x cpu_core plus 1. Setting this value to eg 10 will then use 10 threads unless 2 x cpu_core plus 1 is a higher value, which then will override and be used. For example if there are 8 cores, then the consumer thread pool will be 17. This thread pool is used to route messages received from Netty by Camel. We use a separate thread pool to ensure ordering of messages and also in case some messages will block, then nettys worker threads (event loop) wont be affected. | int | |
nettyServerBootstrapFactory (consumer (advanced)) | To use a custom NettyServerBootstrapFactory. | NettyServerBootstrapFactory | |
networkInterface (consumer (advanced)) | When using UDP then this option can be used to specify a network interface by its name, such as eth0 to join a multicast group. | String | |
noReplyLogLevel (consumer (advanced)) | If sync is enabled this option dictates NettyConsumer which logging level to use when logging that there is no reply to send back. Enum values: | WARN | LoggingLevel |
serverClosedChannelExceptionCaughtLogLevel (consumer (advanced)) | If the server (NettyConsumer) catches a java.nio.channels.ClosedChannelException then it is logged using this logging level. This is used to avoid logging the closed channel exceptions, as clients can disconnect abruptly and then cause a flood of closed exceptions in the Netty server. Enum values: | DEBUG | LoggingLevel |
serverExceptionCaughtLogLevel (consumer (advanced)) | If the server (NettyConsumer) catches an exception then it is logged using this logging level. Enum values: | WARN | LoggingLevel |
serverInitializerFactory (consumer (advanced)) | To use a custom ServerInitializerFactory. | ServerInitializerFactory | |
usingExecutorService (consumer (advanced)) | Whether to use ordered thread pool, to ensure events are processed orderly on the same channel. | true | boolean |
connectTimeout (producer) | Time to wait for a socket connection to be available. Value is in milliseconds. | 10000 | int |
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
requestTimeout (producer) | Allows to use a timeout for the Netty producer when calling a remote server. By default no timeout is in use. The value is in milli seconds, so eg 30000 is 30 seconds. The requestTimeout is using Netty’s ReadTimeoutHandler to trigger the timeout. | long | |
clientInitializerFactory (producer (advanced)) | To use a custom ClientInitializerFactory. | ClientInitializerFactory | |
correlationManager (producer (advanced)) | To use a custom correlation manager to manage how request and reply messages are mapped when using request/reply with the netty producer. This should only be used if you have a way to map requests together with replies such as if there is correlation ids in both the request and reply messages. This can be used if you want to multiplex concurrent messages on the same channel (aka connection) in netty. When doing this you must have a way to correlate the request and reply messages so you can store the right reply on the inflight Camel Exchange before its continued routed. We recommend extending the TimeoutCorrelationManagerSupport when you build custom correlation managers. This provides support for timeout and other complexities you otherwise would need to implement as well. See also the producerPoolEnabled option for more details. | NettyCamelStateCorrelationManager | |
lazyChannelCreation (producer (advanced)) | Channels can be lazily created to avoid exceptions, if the remote server is not up and running when the Camel producer is started. | true | boolean |
producerPoolEnabled (producer (advanced)) | Whether producer pool is enabled or not. Important: If you turn this off then a single shared connection is used for the producer, also if you are doing request/reply. That means there is a potential issue with interleaved responses if replies comes back out-of-order. Therefore you need to have a correlation id in both the request and reply messages so you can properly correlate the replies to the Camel callback that is responsible for continue processing the message in Camel. To do this you need to implement NettyCamelStateCorrelationManager as correlation manager and configure it via the correlationManager option. See also the correlationManager option for more details. | true | boolean |
producerPoolMaxIdle (producer (advanced)) | Sets the cap on the number of idle instances in the pool. | 100 | int |
producerPoolMaxTotal (producer (advanced)) | Sets the cap on the number of objects that can be allocated by the pool (checked out to clients, or idle awaiting checkout) at a given time. Use a negative value for no limit. | -1 | int |
producerPoolMinEvictableIdle (producer (advanced)) | Sets the minimum amount of time (value in millis) an object may sit idle in the pool before it is eligible for eviction by the idle object evictor. | 300000 | long |
producerPoolMinIdle (producer (advanced)) | Sets the minimum number of instances allowed in the producer pool before the evictor thread (if active) spawns new objects. | int | |
udpConnectionlessSending (producer (advanced)) | This option supports connectionless UDP sending, which is a real fire-and-forget. A connected UDP send receives a PortUnreachableException if no one is listening on the receiving port. | false | boolean |
useByteBuf (producer (advanced)) | If the useByteBuf is true, netty producer will turn the message body into ByteBuf before sending it out. | false | boolean |
hostnameVerification (security) | To enable/disable hostname verification on SSLEngine. | false | boolean |
allowSerializedHeaders (advanced) | Only used for TCP when transferExchange is true. When set to true, serializable objects in headers and properties will be added to the exchange. Otherwise Camel will exclude any non-serializable objects and log it at WARN level. | false | boolean |
autowiredEnabled (advanced) | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | boolean |
channelGroup (advanced) | To use an explicit ChannelGroup. | ChannelGroup | |
nativeTransport (advanced) | Whether to use native transport instead of NIO. Native transport takes advantage of the host operating system and is only supported on some platforms. You need to add the netty JAR for the host operating system you are using. See the Netty documentation for more details. | false | boolean |
options (advanced) | Allows to configure additional netty options using option. as prefix. For example option.child.keepAlive=false to set the netty option child.keepAlive=false. See the Netty documentation for possible options that can be used. | Map | |
receiveBufferSize (advanced) | The TCP/UDP buffer sizes to be used during inbound communication. Size is bytes. | 65536 | int |
receiveBufferSizePredictor (advanced) | Configures the buffer size predictor. See details at the Netty documentation and this mail thread. | int | |
sendBufferSize (advanced) | The TCP/UDP buffer sizes to be used during outbound communication. Size is bytes. | 65536 | int |
transferExchange (advanced) | Only used for TCP. You can transfer the exchange over the wire instead of just the body. The following fields are transferred: In body, Out body, fault body, In headers, Out headers, fault headers, exchange properties, exchange exception. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level. | false | boolean |
udpByteArrayCodec (advanced) | For UDP only. If enabled, the byte array codec is used instead of the Java serialization protocol. | false | boolean |
workerCount (advanced) | When netty works on nio mode, it uses default workerCount parameter from Netty (which is cpu_core_threads x 2). User can use this option to override the default workerCount from Netty. | int | |
workerGroup (advanced) | To use an explicit EventLoopGroup as the boss thread pool. For example to share a thread pool with multiple consumers or producers. By default each consumer or producer has their own worker pool with 2 x cpu count core threads. | EventLoopGroup |
allowDefaultCodec (codec) | The netty component installs a default codec if both, encoder/decoder is null and textline is false. Setting allowDefaultCodec to false prevents the netty component from installing a default codec as the first element in the filter chain. | true | boolean |
autoAppendDelimiter (codec) | Whether or not to auto append missing end delimiter when sending using the textline codec. | true | boolean |
decoderMaxLineLength (codec) | The max line length to use for the textline codec. | 1024 | int |
decoders (codec) | A list of decoders to be used. You can use a String which have values separated by comma, and have the values be looked up in the Registry. Just remember to prefix the value with # so Camel knows it should lookup. | List | |
delimiter (codec) | The delimiter to use for the textline codec. Possible values are LINE and NULL. Enum values: | LINE | TextLineDelimiter |
encoders (codec) | A list of encoders to be used. You can use a String which have values separated by comma, and have the values be looked up in the Registry. Just remember to prefix the value with # so Camel knows it should lookup. | List | |
encoding (codec) | The encoding (a charset name) to use for the textline codec. If not provided, Camel will use the JVM default Charset. | String | |
textline (codec) | Only used for TCP. If no codec is specified, you can use this flag to indicate a text line based codec; if not specified or the value is false, then Object Serialization is assumed over TCP - however only Strings are allowed to be serialized by default. | false | boolean |
enabledProtocols (security) | Which protocols to enable when using SSL. | TLSv1,TLSv1.1,TLSv1.2 | String |
keyStoreFile (security) | Client side certificate keystore to be used for encryption. | File | |
keyStoreFormat (security) | Keystore format to be used for payload encryption. Defaults to JKS if not set. | String | |
keyStoreResource (security) | Client side certificate keystore to be used for encryption. Is loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems. | String | |
needClientAuth (security) | Configures whether the server needs client authentication when using SSL. | false | boolean |
passphrase (security) | Password setting to use in order to encrypt/decrypt payloads sent using SSH. | String | |
securityProvider (security) | Security provider to be used for payload encryption. Defaults to SunX509 if not set. | String | |
ssl (security) | Setting to specify whether SSL encryption is applied to this endpoint. | false | boolean |
sslClientCertHeaders (security) | When enabled and in SSL mode, then the Netty consumer will enrich the Camel Message with headers having information about the client certificate such as subject name, issuer name, serial number, and the valid date range. | false | boolean |
sslContextParameters (security) | To configure security using SSLContextParameters. | SSLContextParameters | |
sslHandler (security) | Reference to a class that could be used to return an SSL Handler. | SslHandler | |
trustStoreFile (security) | Server side certificate keystore to be used for encryption. | File | |
trustStoreResource (security) | Server side certificate keystore to be used for encryption. Is loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems. | String | |
useGlobalSslContextParameters (security) | Enable usage of global SSL context parameters. | false | boolean |
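As an illustration, several of these options can be combined on a producer endpoint; a minimal sketch, where the host, port and timeout values are assumptions:
// request-response TCP producer with explicit connect and request timeouts (values in milliseconds)
from("direct:call")
    .to("netty:tcp://remotehost:5150?sync=true&textline=true&connectTimeout=10000&requestTimeout=30000");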
39.4. Endpoint Options
The Netty endpoint is configured using URI syntax:
netty:protocol://host:port
with the following path and query parameters:
39.4.1. Path Parameters (3 parameters)
Name | Description | Default | Type |
---|---|---|---|
protocol (common) | Required The protocol to use which can be tcp or udp. Enum values: | String | |
host (common) | Required The hostname. For the consumer the hostname is localhost or 0.0.0.0. For the producer the hostname is the remote host to connect to. | String | |
port (common) | Required The host port number. | int |
39.4.2. Query Parameters (71 parameters)
Name | Description | Default | Type |
---|---|---|---|
disconnect (common) | Whether or not to disconnect(close) from Netty Channel right after use. Can be used for both consumer and producer. | false | boolean |
keepAlive (common) | Setting to ensure socket is not closed due to inactivity. | true | boolean |
reuseAddress (common) | Setting to facilitate socket multiplexing. | true | boolean |
reuseChannel (common) | This option allows producers and consumers (in client mode) to reuse the same Netty Channel for the lifecycle of processing the Exchange. This is useful if you need to call a server multiple times in a Camel route and want to use the same network connection. When using this, the channel is not returned to the connection pool until the Exchange is done; or disconnected if the disconnect option is set to true. The reused Channel is stored on the Exchange as an exchange property with the key NettyConstants#NETTY_CHANNEL which allows you to obtain the channel during routing and use it as well. | false | boolean |
sync (common) | Setting to set endpoint as one-way or request-response. | true | boolean |
tcpNoDelay (common) | Setting to improve TCP protocol performance. | true | boolean |
bridgeErrorHandler (consumer) | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | boolean |
broadcast (consumer) | Setting to choose Multicast over UDP. | false | boolean |
clientMode (consumer) | If the clientMode is true, netty consumer will connect the address as a TCP client. | false | boolean |
reconnect (consumer) | Used only in clientMode in consumer, the consumer will attempt to reconnect on disconnection if this is enabled. | true | boolean |
reconnectInterval (consumer) | Used if reconnect and clientMode is enabled. The interval in milli seconds to attempt reconnection. | 10000 | int |
backlog (consumer (advanced)) | Allows to configure a backlog for netty consumer (server). Note the backlog is just a best effort depending on the OS. Setting this option to a value such as 200, 500 or 1000, tells the TCP stack how long the accept queue can be If this option is not configured, then the backlog depends on OS setting. | int | |
bossCount (consumer (advanced)) | When netty works on nio mode, it uses default bossCount parameter from Netty, which is 1. User can use this option to override the default bossCount from Netty. | 1 | int |
bossGroup (consumer (advanced)) | Set the BossGroup which could be used for handling the new connection of the server side across the NettyEndpoint. | EventLoopGroup | |
disconnectOnNoReply (consumer (advanced)) | If sync is enabled then this option dictates NettyConsumer if it should disconnect where there is no reply to send back. | true | boolean |
exceptionHandler (consumer (advanced)) | To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. | ExceptionHandler | |
exchangePattern (consumer (advanced)) | Sets the exchange pattern when the consumer creates an exchange. Enum values: | ExchangePattern | |
nettyServerBootstrapFactory (consumer (advanced)) | To use a custom NettyServerBootstrapFactory. | NettyServerBootstrapFactory | |
networkInterface (consumer (advanced)) | When using UDP then this option can be used to specify a network interface by its name, such as eth0 to join a multicast group. | String | |
noReplyLogLevel (consumer (advanced)) | If sync is enabled this option dictates NettyConsumer which logging level to use when logging that there is no reply to send back. Enum values: | WARN | LoggingLevel |
serverClosedChannelExceptionCaughtLogLevel (consumer (advanced)) | If the server (NettyConsumer) catches a java.nio.channels.ClosedChannelException then it is logged using this logging level. This is used to avoid logging the closed channel exceptions, as clients can disconnect abruptly and then cause a flood of closed exceptions in the Netty server. Enum values: | DEBUG | LoggingLevel |
serverExceptionCaughtLogLevel (consumer (advanced)) | If the server (NettyConsumer) catches an exception then it is logged using this logging level. Enum values: | WARN | LoggingLevel |
serverInitializerFactory (consumer (advanced)) | To use a custom ServerInitializerFactory. | ServerInitializerFactory | |
usingExecutorService (consumer (advanced)) | Whether to use ordered thread pool, to ensure events are processed orderly on the same channel. | true | boolean |
connectTimeout (producer) | Time to wait for a socket connection to be available. Value is in milliseconds. | 10000 | int |
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
requestTimeout (producer) | Allows to use a timeout for the Netty producer when calling a remote server. By default no timeout is in use. The value is in milli seconds, so eg 30000 is 30 seconds. The requestTimeout is using Netty’s ReadTimeoutHandler to trigger the timeout. | long | |
clientInitializerFactory (producer (advanced)) | To use a custom ClientInitializerFactory. | ClientInitializerFactory | |
correlationManager (producer (advanced)) | To use a custom correlation manager to manage how request and reply messages are mapped when using request/reply with the netty producer. This should only be used if you have a way to map requests together with replies such as if there is correlation ids in both the request and reply messages. This can be used if you want to multiplex concurrent messages on the same channel (aka connection) in netty. When doing this you must have a way to correlate the request and reply messages so you can store the right reply on the inflight Camel Exchange before its continued routed. We recommend extending the TimeoutCorrelationManagerSupport when you build custom correlation managers. This provides support for timeout and other complexities you otherwise would need to implement as well. See also the producerPoolEnabled option for more details. | NettyCamelStateCorrelationManager | |
lazyChannelCreation (producer (advanced)) | Channels can be lazily created to avoid exceptions, if the remote server is not up and running when the Camel producer is started. | true | boolean |
producerPoolEnabled (producer (advanced)) | Whether producer pool is enabled or not. Important: If you turn this off then a single shared connection is used for the producer, also if you are doing request/reply. That means there is a potential issue with interleaved responses if replies comes back out-of-order. Therefore you need to have a correlation id in both the request and reply messages so you can properly correlate the replies to the Camel callback that is responsible for continue processing the message in Camel. To do this you need to implement NettyCamelStateCorrelationManager as correlation manager and configure it via the correlationManager option. See also the correlationManager option for more details. | true | boolean |
producerPoolMaxIdle (producer (advanced)) | Sets the cap on the number of idle instances in the pool. | 100 | int |
producerPoolMaxTotal (producer (advanced)) | Sets the cap on the number of objects that can be allocated by the pool (checked out to clients, or idle awaiting checkout) at a given time. Use a negative value for no limit. | -1 | int |
producerPoolMinEvictableIdle (producer (advanced)) | Sets the minimum amount of time (value in millis) an object may sit idle in the pool before it is eligible for eviction by the idle object evictor. | 300000 | long |
producerPoolMinIdle (producer (advanced)) | Sets the minimum number of instances allowed in the producer pool before the evictor thread (if active) spawns new objects. | int | |
udpConnectionlessSending (producer (advanced)) | This option supports connectionless UDP sending, which is truly fire and forget. A connected UDP send receives a PortUnreachableException if no one is listening on the receiving port. | false | boolean |
useByteBuf (producer (advanced)) | If the useByteBuf is true, netty producer will turn the message body into ByteBuf before sending it out. | false | boolean |
hostnameVerification ( security) | To enable/disable hostname verification on SSLEngine. | false | boolean |
allowSerializedHeaders (advanced) | Only used for TCP when transferExchange is true. When set to true, serializable objects in headers and properties will be added to the exchange. Otherwise Camel will exclude any non-serializable objects and log it at WARN level. | false | boolean |
channelGroup (advanced) | To use an explicit ChannelGroup. | ChannelGroup | |
nativeTransport (advanced) | Whether to use native transport instead of NIO. Native transport takes advantage of the host operating system and is only supported on some platforms. You need to add the netty JAR for the host operating system you are using. | false | boolean |
options (advanced) | Allows to configure additional netty options using option. as prefix. For example option.child.keepAlive=false to set the netty option child.keepAlive=false. See the Netty documentation for possible options that can be used. | Map | |
receiveBufferSize (advanced) | The TCP/UDP buffer sizes to be used during inbound communication. Size is bytes. | 65536 | int |
receiveBufferSizePredictor (advanced) | Configures the buffer size predictor. See details in the Netty documentation and this mail thread. | int | |
sendBufferSize (advanced) | The TCP/UDP buffer sizes to be used during outbound communication. Size is bytes. | 65536 | int |
synchronous (advanced) | Sets whether synchronous processing should be strictly used. | false | boolean |
transferExchange (advanced) | Only used for TCP. You can transfer the exchange over the wire instead of just the body. The following fields are transferred: In body, Out body, fault body, In headers, Out headers, fault headers, exchange properties, exchange exception. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level. | false | boolean |
udpByteArrayCodec (advanced) | For UDP only. If enabled, the byte array codec is used instead of the Java serialization protocol. | false | boolean |
workerCount (advanced) | When netty works in nio mode, it uses the default workerCount parameter from Netty (which is cpu_core_threads x 2). This option can be used to override the default workerCount from Netty. | int | |
workerGroup (advanced) | To use an explicit EventLoopGroup as the boss thread pool. For example to share a thread pool with multiple consumers or producers. By default each consumer or producer has their own worker pool with 2 x cpu count core threads. | EventLoopGroup | |
allowDefaultCodec (codec) | The netty component installs a default codec if both, encoder/decoder is null and textline is false. Setting allowDefaultCodec to false prevents the netty component from installing a default codec as the first element in the filter chain. | true | boolean |
autoAppendDelimiter (codec) | Whether or not to auto append missing end delimiter when sending using the textline codec. | true | boolean |
decoderMaxLineLength (codec) | The max line length to use for the textline codec. | 1024 | int |
decoders (codec) | A list of decoders to be used. You can use a String which have values separated by comma, and have the values be looked up in the Registry. Just remember to prefix the value with # so Camel knows it should lookup. | List | |
delimiter (codec) | The delimiter to use for the textline codec. Possible values are LINE and NULL. Enum values:
| LINE | TextLineDelimiter |
encoders (codec) | A list of encoders to be used. You can use a String which have values separated by comma, and have the values be looked up in the Registry. Just remember to prefix the value with # so Camel knows it should lookup. | List | |
encoding (codec) | The encoding (a charset name) to use for the textline codec. If not provided, Camel will use the JVM default Charset. | String | |
textline (codec) | Only used for TCP. If no codec is specified, you can use this flag to indicate a text line based codec; if not specified or the value is false, then Object Serialization is assumed over TCP - however only Strings are allowed to be serialized by default. | false | boolean |
enabledProtocols (security) | Which protocols to enable when using SSL. | TLSv1,TLSv1.1,TLSv1.2 | String |
keyStoreFile (security) | Client side certificate keystore to be used for encryption. | File | |
keyStoreFormat (security) | Keystore format to be used for payload encryption. Defaults to JKS if not set. | String | |
keyStoreResource (security) | Client side certificate keystore to be used for encryption. Is loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems. | String | |
needClientAuth (security) | Configures whether the server needs client authentication when using SSL. | false | boolean |
passphrase (security) | Password setting to use in order to encrypt/decrypt payloads sent using SSL. | String | |
securityProvider (security) | Security provider to be used for payload encryption. Defaults to SunX509 if not set. | String | |
ssl (security) | Setting to specify whether SSL encryption is applied to this endpoint. | false | boolean |
sslClientCertHeaders (security) | When enabled and in SSL mode, then the Netty consumer will enrich the Camel Message with headers having information about the client certificate such as subject name, issuer name, serial number, and the valid date range. | false | boolean |
sslContextParameters (security) | To configure security using SSLContextParameters. | SSLContextParameters | |
sslHandler (security) | Reference to a class that could be used to return an SSL Handler. | SslHandler | |
trustStoreFile (security) | Server side certificate keystore to be used for encryption. | File | |
trustStoreResource (security) | Server side certificate keystore to be used for encryption. Is loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems. | String |
39.5. Registry based Options
Codec handlers and SSL keystores can be enlisted in the Registry, for example in a Spring XML file. The values that can be passed in are the following:
Name | Description |
---|---|
passphrase | Password setting to use in order to encrypt/decrypt payloads sent using SSL. |
keyStoreFormat | Keystore format to be used for payload encryption. Defaults to JKS if not set. |
securityProvider | Security provider to be used for payload encryption. Defaults to SunX509 if not set. |
keyStoreFile | Deprecated: Client side certificate keystore to be used for encryption. |
trustStoreFile | Deprecated: Server side certificate keystore to be used for encryption. |
keyStoreResource | Client side certificate keystore to be used for encryption. Is loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems. |
trustStoreResource | Server side certificate keystore to be used for encryption. Is loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems. |
sslHandler | Reference to a class that could be used to return an SSL Handler. |
encoder | A custom ChannelHandler class that can be used to perform special marshalling of outbound payloads. |
encoders | A list of encoders to be used. You can use a String which has values separated by commas, and have the values looked up in the Registry. Just remember to prefix the value with # so Camel knows it should look it up. |
decoder | A custom ChannelHandler class that can be used to perform special marshalling of inbound payloads. |
decoders | A list of decoders to be used. You can use a String which has values separated by commas, and have the values looked up in the Registry. Just remember to prefix the value with # so Camel knows it should look it up. |
Read below about using non shareable encoders/decoders.
39.6. Sending Messages to/from a Netty endpoint
39.6.1. Netty Producer
In Producer mode, the component provides the ability to send payloads to a socket endpoint using either TCP or UDP protocols (with optional SSL support).
The producer mode supports both one-way and request-response based operations.
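For example, a producer route can send message bodies to a remote TCP server. The following is a minimal sketch where the host, port, codec options, and the direct endpoint names are illustrative values:

import org.apache.camel.builder.RouteBuilder;

public class NettyProducerRoutes extends RouteBuilder {
    @Override
    public void configure() {
        // One-way (fire and forget) TCP send using the textline codec.
        from("direct:send")
            .to("netty:tcp://localhost:5150?sync=false&textline=true");

        // Request-response: with sync=true the reply from the server
        // becomes the message body after the to() call.
        from("direct:call")
            .to("netty:tcp://localhost:5150?sync=true&textline=true")
            .to("log:reply");
    }
}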
39.6.2. Netty Consumer
In Consumer mode, the component provides the ability to:
- listen on a specified socket using either TCP or UDP protocols (with optional SSL support),
- receive requests on the socket using text/xml, binary and serialized object based payloads and
- send them along on a route as message exchanges.
The consumer mode supports both one-way and request-response based operations.
39.7. Examples
39.7.1. A UDP Netty endpoint using Request-Reply and serialized object payload
Note that Object serialization is not allowed by default, and so a decoder must be configured.
@BindToRegistry("decoder")
public ChannelHandler getDecoder() throws Exception {
    return new DefaultChannelHandlerFactory() {
        @Override
        public ChannelHandler newChannelHandler() {
            return new DatagramPacketObjectDecoder(ClassResolvers.weakCachingResolver(null));
        }
    };
}

RouteBuilder builder = new RouteBuilder() {
    public void configure() {
        from("netty:udp://0.0.0.0:5155?sync=true&decoders=#decoder")
            .process(new Processor() {
                public void process(Exchange exchange) throws Exception {
                    Poetry poetry = (Poetry) exchange.getIn().getBody();
                    // Process poetry in some way
                    exchange.getOut().setBody("Message received");
                }
            });
    }
};
39.7.2. A TCP based Netty consumer endpoint using One-way communication
RouteBuilder builder = new RouteBuilder() {
    public void configure() {
        from("netty:tcp://0.0.0.0:5150")
            .to("mock:result");
    }
};
39.7.3. An SSL/TCP based Netty consumer endpoint using Request-Reply communication
Using the JSSE Configuration Utility
The Netty component supports SSL/TLS configuration through the Camel JSSE Configuration Utility. This utility greatly decreases the amount of component specific code you need to write and is configurable at the endpoint and component levels. The following examples demonstrate how to use the utility with the Netty component.
Programmatic configuration of the component
KeyStoreParameters ksp = new KeyStoreParameters();
ksp.setResource("/users/home/server/keystore.jks");
ksp.setPassword("keystorePassword");

KeyManagersParameters kmp = new KeyManagersParameters();
kmp.setKeyStore(ksp);
kmp.setKeyPassword("keyPassword");

SSLContextParameters scp = new SSLContextParameters();
scp.setKeyManagers(kmp);

NettyComponent nettyComponent = getContext().getComponent("netty", NettyComponent.class);
nettyComponent.setSslContextParameters(scp);
Spring DSL based configuration of endpoint
...
<camel:sslContextParameters id="sslContextParameters">
  <camel:keyManagers keyPassword="keyPassword">
    <camel:keyStore resource="/users/home/server/keystore.jks" password="keystorePassword"/>
  </camel:keyManagers>
</camel:sslContextParameters>
...

...
<to uri="netty:tcp://0.0.0.0:5150?sync=true&amp;ssl=true&amp;sslContextParameters=#sslContextParameters"/>
...
Using Basic SSL/TLS configuration on the Netty Component
Registry registry = context.getRegistry();
registry.bind("password", "changeit");
registry.bind("ksf", new File("src/test/resources/keystore.jks"));
registry.bind("tsf", new File("src/test/resources/keystore.jks"));

context.addRoutes(new RouteBuilder() {
    public void configure() {
        String netty_ssl_endpoint = "netty:tcp://0.0.0.0:5150?sync=true&ssl=true&passphrase=#password"
            + "&keyStoreFile=#ksf&trustStoreFile=#tsf";
        String return_string = "When You Go Home, Tell Them Of Us And Say,"
            + "For Your Tomorrow, We Gave Our Today.";

        from(netty_ssl_endpoint)
            .process(new Processor() {
                public void process(Exchange exchange) throws Exception {
                    exchange.getOut().setBody(return_string);
                }
            });
    }
});
Getting access to SSLSession and the client certificate
You can get access to the javax.net.ssl.SSLSession if you, for example, need to get details about the client certificate. When ssl=true the Netty component stores the SSLSession as a header on the Camel Message as shown below:
SSLSession session = exchange.getIn().getHeader(NettyConstants.NETTY_SSL_SESSION, SSLSession.class);
// get the first certificate which is the client certificate
javax.security.cert.X509Certificate cert = session.getPeerCertificateChain()[0];
Principal principal = cert.getSubjectDN();
Remember to set needClientAuth=true to authenticate the client, otherwise SSLSession cannot access information about the client certificate, and you may get the exception javax.net.ssl.SSLPeerUnverifiedException: peer not authenticated. You may also get this exception if the client certificate is expired or otherwise not valid.

The option sslClientCertHeaders can be set to true, which then enriches the Camel Message with headers having details about the client certificate. For example, the subject name is readily available in the header CamelNettySSLClientCertSubjectName.
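As an illustration, a processor on the consumer route could pick up that header. This is a minimal sketch, assuming ssl=true, needClientAuth=true and sslClientCertHeaders=true are set on the endpoint; the header-copy logic is purely an example:

import org.apache.camel.Exchange;
import org.apache.camel.Processor;

public class ClientCertSubjectProcessor implements Processor {
    @Override
    public void process(Exchange exchange) throws Exception {
        // Header populated by the Netty consumer when sslClientCertHeaders=true
        String subject = exchange.getIn().getHeader("CamelNettySSLClientCertSubjectName", String.class);
        // Illustrative use: keep the subject around for later routing decisions
        exchange.getIn().setHeader("clientSubject", subject);
    }
}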
39.7.4. Using Multiple Codecs
In certain cases it may be necessary to add chains of encoders and decoders to the Netty pipeline. To add multiple codecs to a Camel Netty endpoint, use the 'encoders' and 'decoders' URI parameters. Like the 'encoder' and 'decoder' parameters, they are used to supply references (lists of ChannelUpstreamHandlers and ChannelDownstreamHandlers) that should be added to the pipeline. Note that if encoders is specified then the encoder param will be ignored; similarly for decoders and the decoder param.
Read further above about using non shareable encoders/decoders.
The lists of codecs need to be added to the Camel’s registry so they can be resolved when the endpoint is created.
ChannelHandlerFactory lengthDecoder = ChannelHandlerFactories.newLengthFieldBasedFrameDecoder(1048576, 0, 4, 0, 4);
StringDecoder stringDecoder = new StringDecoder();
registry.bind("length-decoder", lengthDecoder);
registry.bind("string-decoder", stringDecoder);

LengthFieldPrepender lengthEncoder = new LengthFieldPrepender(4);
StringEncoder stringEncoder = new StringEncoder();
registry.bind("length-encoder", lengthEncoder);
registry.bind("string-encoder", stringEncoder);

List<ChannelHandler> decoders = new ArrayList<ChannelHandler>();
decoders.add(lengthDecoder);
decoders.add(stringDecoder);

List<ChannelHandler> encoders = new ArrayList<ChannelHandler>();
encoders.add(lengthEncoder);
encoders.add(stringEncoder);

registry.bind("encoders", encoders);
registry.bind("decoders", decoders);
Spring’s native collections support can be used to specify the codec lists in an application context
<util:list id="decoders" list-class="java.util.LinkedList">
  <bean class="org.apache.camel.component.netty.ChannelHandlerFactories"
        factory-method="newLengthFieldBasedFrameDecoder">
    <constructor-arg value="1048576"/>
    <constructor-arg value="0"/>
    <constructor-arg value="4"/>
    <constructor-arg value="0"/>
    <constructor-arg value="4"/>
  </bean>
  <bean class="io.netty.handler.codec.string.StringDecoder"/>
</util:list>

<util:list id="encoders" list-class="java.util.LinkedList">
  <bean class="io.netty.handler.codec.LengthFieldPrepender">
    <constructor-arg value="4"/>
  </bean>
  <bean class="io.netty.handler.codec.string.StringEncoder"/>
</util:list>

<bean id="length-encoder" class="io.netty.handler.codec.LengthFieldPrepender">
  <constructor-arg value="4"/>
</bean>
<bean id="string-encoder" class="io.netty.handler.codec.string.StringEncoder"/>

<bean id="length-decoder" class="org.apache.camel.component.netty.ChannelHandlerFactories"
      factory-method="newLengthFieldBasedFrameDecoder">
  <constructor-arg value="1048576"/>
  <constructor-arg value="0"/>
  <constructor-arg value="4"/>
  <constructor-arg value="0"/>
  <constructor-arg value="4"/>
</bean>
<bean id="string-decoder" class="io.netty.handler.codec.string.StringDecoder"/>
The bean names can then be used in netty endpoint definitions either as a comma separated list or contained in a List e.g.
from("direct:multiple-codec")
    .to("netty:tcp://0.0.0.0:{{port}}?encoders=#encoders&sync=false");

from("netty:tcp://0.0.0.0:{{port}}?decoders=#length-decoder,#string-decoder&sync=false")
    .to("mock:multiple-codec");
or via XML.
<camelContext id="multiple-netty-codecs-context" xmlns="http://camel.apache.org/schema/spring">
  <route>
    <from uri="direct:multiple-codec"/>
    <to uri="netty:tcp://0.0.0.0:5150?encoders=#encoders&amp;sync=false"/>
  </route>
  <route>
    <from uri="netty:tcp://0.0.0.0:5150?decoders=#length-decoder,#string-decoder&amp;sync=false"/>
    <to uri="mock:multiple-codec"/>
  </route>
</camelContext>
39.8. Closing Channel When Complete
When acting as a server you sometimes want to close the channel when, for example, a client conversation is finished.
You can do this by simply setting the endpoint option disconnect=true
.
However you can also instruct Camel on a per message basis as follows.
To instruct Camel to close the channel, you should add a header with the key CamelNettyCloseChannelWhenComplete
set to a boolean true
value.
For instance, the example below will close the channel after it has written the bye message back to the client:
from("netty:tcp://0.0.0.0:8080").process(new Processor() {
    public void process(Exchange exchange) throws Exception {
        String body = exchange.getIn().getBody(String.class);
        exchange.getOut().setBody("Bye " + body);
        // some condition which determines if we should close
        if (close) {
            exchange.getOut().setHeader(NettyConstants.NETTY_CLOSE_CHANNEL_WHEN_COMPLETE, true);
        }
    }
});
Adding custom channel pipeline factories to gain complete control over a created pipeline.
39.9. Custom pipeline
Custom channel pipelines give you complete control over the handler/interceptor chain by letting you insert custom handler(s), encoder(s), and decoder(s) without having to specify them in the Netty endpoint URL.
In order to add a custom pipeline, a custom channel pipeline factory must be created and registered with the context via the context registry (Registry, or the camel-spring ApplicationContextRegistry etc).
A custom pipeline factory must be constructed as follows:
- A producer linked channel pipeline factory must extend the abstract class ClientInitializerFactory.
- A consumer linked channel pipeline factory must extend the abstract class ServerInitializerFactory.
- The classes should override the initChannel() method in order to insert custom handler(s), encoder(s) and decoder(s). Not overriding the initChannel() method creates a pipeline with no handlers, encoders or decoders wired to the pipeline.
The example below shows how a ServerInitializerFactory may be created:
39.9.1. Using custom pipeline factory
public class SampleServerInitializerFactory extends ServerInitializerFactory {
    private int maxLineSize = 1024;

    protected void initChannel(Channel ch) throws Exception {
        ChannelPipeline channelPipeline = ch.pipeline();

        channelPipeline.addLast("encoder-SD", new StringEncoder(CharsetUtil.UTF_8));
        channelPipeline.addLast("decoder-DELIM", new DelimiterBasedFrameDecoder(maxLineSize, true, Delimiters.lineDelimiter()));
        channelPipeline.addLast("decoder-SD", new StringDecoder(CharsetUtil.UTF_8));
        // here we add the default Camel ServerChannelHandler for the consumer, to allow Camel to route the message etc.
        channelPipeline.addLast("handler", new ServerChannelHandler(consumer));
    }
}
The custom channel pipeline factory can then be added to the registry and instantiated/utilized on a camel route in the following way
Registry registry = camelContext.getRegistry();
ServerInitializerFactory factory = new TestServerInitializerFactory();
registry.bind("spf", factory);

context.addRoutes(new RouteBuilder() {
    public void configure() {
        String netty_ssl_endpoint = "netty:tcp://0.0.0.0:5150?serverInitializerFactory=#spf";
        String return_string = "When You Go Home, Tell Them Of Us And Say,"
            + "For Your Tomorrow, We Gave Our Today.";

        from(netty_ssl_endpoint)
            .process(new Processor() {
                public void process(Exchange exchange) throws Exception {
                    exchange.getOut().setBody(return_string);
                }
            });
    }
});
39.10. Reusing Netty boss and worker thread pools
Netty has two kinds of thread pools: boss and worker. By default each Netty consumer and producer has its own private thread pools. If you want to reuse these thread pools among multiple consumers or producers then the thread pools must be created and enlisted in the Registry.

For example, using Spring XML we can create a shared worker thread pool using the NettyWorkerPoolBuilder with 2 worker threads as shown below:
<!-- use the worker pool builder to help create the shared thread pool -->
<bean id="poolBuilder" class="org.apache.camel.component.netty.NettyWorkerPoolBuilder">
  <property name="workerCount" value="2"/>
</bean>

<!-- the shared worker thread pool -->
<bean id="sharedPool" class="org.jboss.netty.channel.socket.nio.WorkerPool" factory-bean="poolBuilder"
      factory-method="build" destroy-method="shutdown">
</bean>
For the boss thread pool there is a org.apache.camel.component.netty.NettyServerBossPoolBuilder builder for Netty consumers, and a org.apache.camel.component.netty.NettyClientBossPoolBuilder for Netty producers.

Then in the Camel routes we can refer to this worker pool by configuring the workerPool option in the URI as shown below:
<route>
  <from uri="netty:tcp://0.0.0.0:5021?textline=true&amp;sync=true&amp;workerPool=#sharedPool&amp;usingExecutorService=false"/>
  <to uri="log:result"/>
  ...
</route>
And if we have another route we can refer to the shared worker pool:
<route>
  <from uri="netty:tcp://0.0.0.0:5022?textline=true&amp;sync=true&amp;workerPool=#sharedPool&amp;usingExecutorService=false"/>
  <to uri="log:result"/>
  ...
</route>
and so forth.
39.11. Multiplexing concurrent messages over a single connection with request/reply
When using Netty for request/reply messaging via the Netty producer, each message is by default sent on a non-shared, pooled connection. This ensures that replies are automatically mapped to the correct request for further routing in Camel. In other words, correlation between request/reply messages works out of the box because a reply comes back on the same connection that was used for sending the request, and this connection is not shared with others. When the response comes back, the connection is returned to the connection pool, where it can be reused by others.
However, if you want to multiplex concurrent request/responses on a single shared connection, then you need to turn off connection pooling by setting producerPoolEnabled=false. This means there is a potential issue with interleaved responses if replies come back out of order. Therefore you need a correlation id in both the request and reply messages so you can properly correlate the replies to the Camel callback that is responsible for continuing to process the message in Camel. To do this you need to implement NettyCamelStateCorrelationManager as the correlation manager and configure it via the correlationManager=#myManager option.

We recommend extending the TimeoutCorrelationManagerSupport when you build custom correlation managers. This provides support for timeouts and other complexities you would otherwise need to implement yourself.

You can find an example in the Apache Camel source code in the examples directory, under the camel-example-netty-custom-correlation directory.
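For illustration only, the sketch below extracts a correlation id from textline messages. The id-before-colon message format and the class name are assumptions made for this example, not something the component mandates; see the camel-example-netty-custom-correlation example for a complete implementation:

import org.apache.camel.component.netty.TimeoutCorrelationManagerSupport;

// Assumes both request and reply are text lines that start with a
// correlation id followed by a colon, for example "A123:hello".
public class PrefixCorrelationManager extends TimeoutCorrelationManagerSupport {

    @Override
    public String getRequestCorrelationId(Object request) {
        return beforeColon(request);
    }

    @Override
    public String getReplyCorrelationId(Object reply) {
        return beforeColon(reply);
    }

    @Override
    public String getTimeoutCorrelationId(Object request) {
        return beforeColon(request);
    }

    private static String beforeColon(Object body) {
        String s = String.valueOf(body);
        int pos = s.indexOf(':');
        return pos > 0 ? s.substring(0, pos) : s;
    }
}

The manager would then be bound in the registry (for example under the name myManager) and referenced on the producer endpoint with producerPoolEnabled=false&correlationManager=#myManager.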
39.12. Spring Boot Auto-Configuration
When using netty with Spring Boot make sure to use the following Maven dependency to have support for auto configuration:
<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-netty-starter</artifactId> </dependency>
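With the starter on the classpath, the component can also be configured from application.properties using the keys in the table below; the values shown here are illustrative only:

# application.properties - illustrative values only
camel.component.netty.textline = true
camel.component.netty.sync = true
camel.component.netty.worker-count = 4
camel.component.netty.request-timeout = 30000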
The component supports 74 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
camel.component.netty.allow-default-codec | The netty component installs a default codec if both, encoder/decoder is null and textline is false. Setting allowDefaultCodec to false prevents the netty component from installing a default codec as the first element in the filter chain. | true | Boolean |
camel.component.netty.allow-serialized-headers | Only used for TCP when transferExchange is true. When set to true, serializable objects in headers and properties will be added to the exchange. Otherwise Camel will exclude any non-serializable objects and log it at WARN level. | false | Boolean |
camel.component.netty.auto-append-delimiter | Whether or not to auto append missing end delimiter when sending using the textline codec. | true | Boolean |
camel.component.netty.autowired-enabled | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | Boolean |
camel.component.netty.backlog | Allows to configure a backlog for netty consumer (server). Note the backlog is just a best effort depending on the OS. Setting this option to a value such as 200, 500 or 1000 tells the TCP stack how long the accept queue can be. If this option is not configured, then the backlog depends on the OS setting. | Integer | |
camel.component.netty.boss-count | When netty works on nio mode, it uses default bossCount parameter from Netty, which is 1. User can use this option to override the default bossCount from Netty. | 1 | Integer |
camel.component.netty.boss-group | Set the BossGroup which could be used for handling the new connection of the server side across the NettyEndpoint. The option is a io.netty.channel.EventLoopGroup type. | EventLoopGroup | |
camel.component.netty.bridge-error-handler | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | Boolean |
camel.component.netty.broadcast | Setting to choose Multicast over UDP. | false | Boolean |
camel.component.netty.channel-group | To use an explicit ChannelGroup. The option is a io.netty.channel.group.ChannelGroup type. | ChannelGroup | |
camel.component.netty.client-initializer-factory | To use a custom ClientInitializerFactory. The option is a org.apache.camel.component.netty.ClientInitializerFactory type. | ClientInitializerFactory | |
camel.component.netty.client-mode | If the clientMode is true, netty consumer will connect the address as a TCP client. | false | Boolean |
camel.component.netty.configuration | To use the NettyConfiguration as configuration when creating endpoints. The option is a org.apache.camel.component.netty.NettyConfiguration type. | NettyConfiguration | |
camel.component.netty.connect-timeout | Time to wait for a socket connection to be available. Value is in milliseconds. | 10000 | Integer |
camel.component.netty.correlation-manager | To use a custom correlation manager to manage how request and reply messages are mapped when using request/reply with the netty producer. This should only be used if you have a way to map requests together with replies such as if there is correlation ids in both the request and reply messages. This can be used if you want to multiplex concurrent messages on the same channel (aka connection) in netty. When doing this you must have a way to correlate the request and reply messages so you can store the right reply on the inflight Camel Exchange before its continued routed. We recommend extending the TimeoutCorrelationManagerSupport when you build custom correlation managers. This provides support for timeout and other complexities you otherwise would need to implement as well. See also the producerPoolEnabled option for more details. The option is a org.apache.camel.component.netty.NettyCamelStateCorrelationManager type. | NettyCamelStateCorrelationManager | |
camel.component.netty.decoder-max-line-length | The max line length to use for the textline codec. | 1024 | Integer |
camel.component.netty.decoders | A list of decoders to be used. You can use a String which have values separated by comma, and have the values be looked up in the Registry. Just remember to prefix the value with # so Camel knows it should lookup. | String | |
camel.component.netty.delimiter | The delimiter to use for the textline codec. Possible values are LINE and NULL. | TextLineDelimiter | |
camel.component.netty.disconnect | Whether or not to disconnect(close) from Netty Channel right after use. Can be used for both consumer and producer. | false | Boolean |
camel.component.netty.disconnect-on-no-reply | If sync is enabled then this option dictates NettyConsumer if it should disconnect where there is no reply to send back. | true | Boolean |
camel.component.netty.enabled | Whether to enable auto configuration of the netty component. This is enabled by default. | Boolean | |
camel.component.netty.enabled-protocols | Which protocols to enable when using SSL. | TLSv1,TLSv1.1,TLSv1.2 | String |
camel.component.netty.encoders | A list of encoders to be used. You can use a String which have values separated by comma, and have the values be looked up in the Registry. Just remember to prefix the value with # so Camel knows it should lookup. | String | |
camel.component.netty.encoding | The encoding (a charset name) to use for the textline codec. If not provided, Camel will use the JVM default Charset. | String | |
camel.component.netty.executor-service | To use the given EventExecutorGroup. The option is a io.netty.util.concurrent.EventExecutorGroup type. | EventExecutorGroup | |
camel.component.netty.hostname-verification | To enable/disable hostname verification on SSLEngine. | false | Boolean |
camel.component.netty.keep-alive | Setting to ensure socket is not closed due to inactivity. | true | Boolean |
camel.component.netty.key-store-file | Client side certificate keystore to be used for encryption. | File | |
camel.component.netty.key-store-format | Keystore format to be used for payload encryption. Defaults to JKS if not set. | String | |
camel.component.netty.key-store-resource | Client side certificate keystore to be used for encryption. Is loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems. | String | |
camel.component.netty.lazy-channel-creation | Channels can be lazily created to avoid exceptions, if the remote server is not up and running when the Camel producer is started. | true | Boolean |
camel.component.netty.lazy-start-producer | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | Boolean |
camel.component.netty.maximum-pool-size | Sets a maximum thread pool size for the netty consumer ordered thread pool. The default size is 2 x cpu_core plus 1. Setting this value to eg 10 will then use 10 threads unless 2 x cpu_core plus 1 is a higher value, which then will override and be used. For example if there are 8 cores, then the consumer thread pool will be 17. This thread pool is used to route messages received from Netty by Camel. We use a separate thread pool to ensure ordering of messages and also in case some messages will block, so that Netty's worker threads (event loop) won't be affected. | Integer | |
camel.component.netty.native-transport | Whether to use native transport instead of NIO. Native transport takes advantage of the host operating system and is only supported on some platforms. You need to add the netty JAR for the host operating system you are using. | false | Boolean |
camel.component.netty.need-client-auth | Configures whether the server needs client authentication when using SSL. | false | Boolean |
camel.component.netty.netty-server-bootstrap-factory | To use a custom NettyServerBootstrapFactory. The option is a org.apache.camel.component.netty.NettyServerBootstrapFactory type. | NettyServerBootstrapFactory | |
camel.component.netty.network-interface | When using UDP then this option can be used to specify a network interface by its name, such as eth0 to join a multicast group. | String | |
camel.component.netty.no-reply-log-level | If sync is enabled, this option dictates which logging level the NettyConsumer uses when logging that there is no reply to send back. | LoggingLevel | |
camel.component.netty.options | Allows to configure additional netty options using option. as prefix. For example option.child.keepAlive=false to set the netty option child.keepAlive=false. See the Netty documentation for possible options that can be used. | Map | |
camel.component.netty.passphrase | Password setting to use in order to encrypt/decrypt payloads sent using SSL. | String | |
camel.component.netty.producer-pool-enabled | Whether producer pool is enabled or not. Important: If you turn this off then a single shared connection is used for the producer, also if you are doing request/reply. That means there is a potential issue with interleaved responses if replies comes back out-of-order. Therefore you need to have a correlation id in both the request and reply messages so you can properly correlate the replies to the Camel callback that is responsible for continue processing the message in Camel. To do this you need to implement NettyCamelStateCorrelationManager as correlation manager and configure it via the correlationManager option. See also the correlationManager option for more details. | true | Boolean |
camel.component.netty.producer-pool-max-idle | Sets the cap on the number of idle instances in the pool. | 100 | Integer |
camel.component.netty.producer-pool-max-total | Sets the cap on the number of objects that can be allocated by the pool (checked out to clients, or idle awaiting checkout) at a given time. Use a negative value for no limit. | -1 | Integer |
camel.component.netty.producer-pool-min-evictable-idle | Sets the minimum amount of time (value in millis) an object may sit idle in the pool before it is eligible for eviction by the idle object evictor. | 300000 | Long |
camel.component.netty.producer-pool-min-idle | Sets the minimum number of instances allowed in the producer pool before the evictor thread (if active) spawns new objects. | Integer | |
camel.component.netty.receive-buffer-size | The TCP/UDP buffer sizes to be used during inbound communication. Size is bytes. | 65536 | Integer |
camel.component.netty.receive-buffer-size-predictor | Configures the buffer size predictor. See details in the Netty documentation and this mail thread. | Integer | |
camel.component.netty.reconnect | Used only in clientMode in consumer, the consumer will attempt to reconnect on disconnection if this is enabled. | true | Boolean |
camel.component.netty.reconnect-interval | Used if reconnect and clientMode is enabled. The interval in milli seconds to attempt reconnection. | 10000 | Integer |
camel.component.netty.request-timeout | Allows to use a timeout for the Netty producer when calling a remote server. By default no timeout is in use. The value is in milli seconds, so eg 30000 is 30 seconds. The requestTimeout is using Netty’s ReadTimeoutHandler to trigger the timeout. | Long | |
camel.component.netty.reuse-address | Setting to facilitate socket multiplexing. | true | Boolean |
camel.component.netty.reuse-channel | This option allows producers and consumers (in client mode) to reuse the same Netty Channel for the lifecycle of processing the Exchange. This is useful if you need to call a server multiple times in a Camel route and want to use the same network connection. When using this, the channel is not returned to the connection pool until the Exchange is done; or disconnected if the disconnect option is set to true. The reused Channel is stored on the Exchange as an exchange property with the key NettyConstants#NETTY_CHANNEL which allows you to obtain the channel during routing and use it as well. | false | Boolean |
camel.component.netty.security-provider | Security provider to be used for payload encryption. Defaults to SunX509 if not set. | String | |
camel.component.netty.send-buffer-size | The TCP/UDP buffer sizes to be used during outbound communication. Size is bytes. | 65536 | Integer |
camel.component.netty.server-closed-channel-exception-caught-log-level | If the server (NettyConsumer) catches a java.nio.channels.ClosedChannelException, it is logged using this logging level. This is used to avoid logging the closed channel exceptions, as clients can disconnect abruptly and then cause a flood of closed exceptions in the Netty server. | LoggingLevel | |
camel.component.netty.server-exception-caught-log-level | If the server (NettyConsumer) catches an exception, it is logged using this logging level. | LoggingLevel | |
camel.component.netty.server-initializer-factory | To use a custom ServerInitializerFactory. The option is a org.apache.camel.component.netty.ServerInitializerFactory type. | ServerInitializerFactory | |
camel.component.netty.ssl | Setting to specify whether SSL encryption is applied to this endpoint. | false | Boolean |
camel.component.netty.ssl-client-cert-headers | When enabled and in SSL mode, then the Netty consumer will enrich the Camel Message with headers having information about the client certificate such as subject name, issuer name, serial number, and the valid date range. | false | Boolean |
camel.component.netty.ssl-context-parameters | To configure security using SSLContextParameters. The option is a org.apache.camel.support.jsse.SSLContextParameters type. | SSLContextParameters | |
camel.component.netty.ssl-handler | Reference to a class that could be used to return an SSL Handler. The option is a io.netty.handler.ssl.SslHandler type. | SslHandler | |
camel.component.netty.sync | Setting to set endpoint as one-way or request-response. | true | Boolean |
camel.component.netty.tcp-no-delay | Setting to improve TCP protocol performance. | true | Boolean |
camel.component.netty.textline | Only used for TCP. If no codec is specified, you can use this flag to indicate a text line based codec; if not specified or the value is false, then Object Serialization is assumed over TCP - however only Strings are allowed to be serialized by default. | false | Boolean |
camel.component.netty.transfer-exchange | Only used for TCP. You can transfer the exchange over the wire instead of just the body. The following fields are transferred: In body, Out body, fault body, In headers, Out headers, fault headers, exchange properties, exchange exception. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level. | false | Boolean |
camel.component.netty.trust-store-file | Server side certificate keystore to be used for encryption. | File | |
camel.component.netty.trust-store-resource | Server side certificate keystore to be used for encryption. Is loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems. | String | |
camel.component.netty.udp-byte-array-codec | For UDP only. If enabled, the byte array codec is used instead of the Java serialization protocol. | false | Boolean |
camel.component.netty.udp-connectionless-sending | This option supports connectionless UDP sending, which is truly fire and forget. A connected UDP send receives a PortUnreachableException if no one is listening on the receiving port. | false | Boolean |
camel.component.netty.use-byte-buf | If the useByteBuf is true, netty producer will turn the message body into ByteBuf before sending it out. | false | Boolean |
camel.component.netty.use-global-ssl-context-parameters | Enable usage of global SSL context parameters. | false | Boolean |
camel.component.netty.using-executor-service | Whether to use ordered thread pool, to ensure events are processed orderly on the same channel. | true | Boolean |
camel.component.netty.worker-count | When netty works in nio mode, it uses the default workerCount parameter from Netty (which is cpu_core_threads x 2). This option can be used to override the default workerCount from Netty. | Integer | |
camel.component.netty.worker-group | To use an explicit EventLoopGroup as the boss thread pool. For example to share a thread pool with multiple consumers or producers. By default each consumer or producer has their own worker pool with 2 x cpu count core threads. The option is a io.netty.channel.EventLoopGroup type. | EventLoopGroup |
Chapter 40. Paho
Both producer and consumer are supported
The Paho component provides a connector for the MQTT messaging protocol using the Eclipse Paho library. Paho is one of the most popular MQTT libraries, so if you would like to integrate MQTT with your Java project, the Camel Paho connector is the way to go.
Maven users will need to add the following dependency to their pom.xml
for this component:
<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-paho</artifactId> <version>{CamelSBVersion}</version> <!-- use the same version as your Camel core version --> </dependency>
40.1. URI format
paho:topic[?options]
Where topic is the name of the topic.
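For example, the routes below consume from and publish to an MQTT topic. This is a minimal sketch where the broker URL, topic name, timer period, and payload are illustrative values:

import org.apache.camel.builder.RouteBuilder;

public class PahoRoutes extends RouteBuilder {
    @Override
    public void configure() {
        // Consume MQTT messages from a topic and log them.
        from("paho:sensors/temperature?brokerUrl=tcp://localhost:1883")
            .to("log:temperature");

        // Publish a sample payload to the same topic every 10 seconds.
        from("timer:publish?period=10000")
            .setBody(constant("22.5"))
            .to("paho:sensors/temperature?brokerUrl=tcp://localhost:1883&qos=1");
    }
}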
40.2. Configuring Options
Camel components are configured on two separate levels:
- component level
- endpoint level
40.2.1. Configuring Component Options
The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth.
Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all.
Configuring components can be done with the Component DSL, in a configuration file (application.properties|yaml), or directly with Java code.
40.2.2. Configuring Endpoint Options
Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both.
Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints.
A good practice when configuring options is to use Property Placeholders, which allows to not hardcode urls, port numbers, sensitive information, and other settings.
In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse.
The following two sections lists all the options, firstly for the component followed by the endpoint.
40.3. Component Options
The Paho component supports 31 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
automaticReconnect (common) | Sets whether the client will automatically attempt to reconnect to the server if the connection is lost. If set to false, the client will not attempt to automatically reconnect to the server in the event that the connection is lost. If set to true, in the event that the connection is lost, the client will attempt to reconnect to the server. It will initially wait 1 second before it attempts to reconnect, for every failed reconnect attempt, the delay will double until it is at 2 minutes at which point the delay will stay at 2 minutes. | true | boolean |
brokerUrl (common) | The URL of the MQTT broker. | tcp://localhost:1883 | String |
cleanSession (common) | Sets whether the client and server should remember state across restarts and reconnects. If set to false both the client and server will maintain state across restarts of the client, the server and the connection. As state is maintained: Message delivery will be reliable meeting the specified QOS even if the client, server or connection are restarted. The server will treat a subscription as durable. If set to true the client and server will not maintain state across restarts of the client, the server or the connection. This means Message delivery to the specified QOS cannot be maintained if the client, server or connection are restarted The server will treat a subscription as non-durable. | true | boolean |
clientId (common) | MQTT client identifier. The identifier must be unique. | String | |
configuration (common) | To use the shared Paho configuration. | PahoConfiguration | |
connectionTimeout (common) | Sets the connection timeout value. This value, measured in seconds, defines the maximum time interval the client will wait for the network connection to the MQTT server to be established. The default timeout is 30 seconds. A value of 0 disables timeout processing meaning the client will wait until the network connection is made successfully or fails. | 30 | int |
filePersistenceDirectory (common) | Base directory used by file persistence. Will by default use user directory. | String | |
keepAliveInterval (common) | Sets the keep alive interval. This value, measured in seconds, defines the maximum time interval between messages sent or received. It enables the client to detect if the server is no longer available, without having to wait for the TCP/IP timeout. The client will ensure that at least one message travels across the network within each keep alive period. In the absence of a data-related message during the time period, the client sends a very small ping message, which the server will acknowledge. A value of 0 disables keepalive processing in the client. The default value is 60 seconds. | 60 | int |
maxInflight (common) | Sets the max inflight. Increase this value in a high traffic environment. The default value is 10. | 10 | int |
maxReconnectDelay (common) | Get the maximum time (in millis) to wait between reconnects. | 128000 | int |
mqttVersion (common) | Sets the MQTT version. The default action is to connect with version 3.1.1, and to fall back to 3.1 if that fails. Version 3.1.1 or 3.1 can be selected specifically, with no fall back, by using the MQTT_VERSION_3_1_1 or MQTT_VERSION_3_1 options respectively. | int | |
persistence (common) | Client persistence to be used - memory or file. Enum values:
| MEMORY | PahoPersistence |
qos (common) | Client quality of service level (0-2). | 2 | int |
retained (common) | Retain option. | false | boolean |
serverURIs (common) | Set a list of one or more serverURIs the client may connect to. Multiple servers can be separated by comma. Each serverURI specifies the address of a server that the client may connect to. Two types of connection are supported tcp:// for a TCP connection and ssl:// for a TCP connection secured by SSL/TLS. For example: tcp://localhost:1883 ssl://localhost:8883 If the port is not specified, it will default to 1883 for tcp:// URIs, and 8883 for ssl:// URIs. If serverURIs is set then it overrides the serverURI parameter passed in on the constructor of the MQTT client. When an attempt to connect is initiated the client will start with the first serverURI in the list and work through the list until a connection is established with a server. If a connection cannot be made to any of the servers then the connect attempt fails. Specifying a list of servers that a client may connect to has several uses: High Availability and reliable message delivery Some MQTT servers support a high availability feature where two or more equal MQTT servers share state. An MQTT client can connect to any of the equal servers and be assured that messages are reliably delivered and durable subscriptions are maintained no matter which server the client connects to. The cleansession flag must be set to false if durable subscriptions and/or reliable message delivery is required. Hunt List A set of servers may be specified that are not equal (as in the high availability option). As no state is shared across the servers reliable message delivery and durable subscriptions are not valid. The cleansession flag must be set to true if the hunt list mode is used. | String | |
willPayload (common) | Sets the Last Will and Testament (LWT) for the connection. In the event that this client unexpectedly loses its connection to the server, the server will publish a message to itself using the supplied details. The topic to publish to The byte payload for the message. The quality of service to publish the message at (0, 1 or 2). Whether or not the message should be retained. | String | |
willQos (common) | Sets the Last Will and Testament (LWT) for the connection. In the event that this client unexpectedly loses its connection to the server, the server will publish a message to itself using the supplied details. The topic to publish to The byte payload for the message. The quality of service to publish the message at (0, 1 or 2). Whether or not the message should be retained. | int | |
willRetained (common) | Sets the Last Will and Testament (LWT) for the connection. In the event that this client unexpectedly loses its connection to the server, the server will publish a message to itself using the supplied details. The topic to publish to The byte payload for the message. The quality of service to publish the message at (0, 1 or 2). Whether or not the message should be retained. | false | boolean |
willTopic (common) | Sets the Last Will and Testament (LWT) for the connection. In the event that this client unexpectedly loses its connection to the server, the server will publish a message to itself using the supplied details. The topic to publish to The byte payload for the message. The quality of service to publish the message at (0, 1 or 2). Whether or not the message should be retained. | String | |
bridgeErrorHandler (consumer) | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | boolean |
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
autowiredEnabled (advanced) | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | boolean |
client (advanced) | To use a shared Paho client. | MqttClient | |
customWebSocketHeaders (advanced) | Sets the Custom WebSocket Headers for the WebSocket Connection. | Properties | |
executorServiceTimeout (advanced) | Set the time in seconds that the executor service should wait when terminating before forcefully terminating. It is not recommended to change this value unless you are absolutely sure that you need to. | 1 | int |
httpsHostnameVerificationEnabled (security) | Whether SSL HostnameVerifier is enabled or not. The default value is true. | true | boolean |
password (security) | Password to be used for authentication against the MQTT broker. | String | |
socketFactory (security) | Sets the SocketFactory to use. This allows an application to apply its own policies around the creation of network sockets. If using an SSL connection, an SSLSocketFactory can be used to supply application-specific security settings. | SocketFactory | |
sslClientProps (security) | Sets the SSL properties for the connection. Note that these properties are only valid if an implementation of the Java Secure Socket Extensions (JSSE) is available. These properties are not used if a custom SocketFactory has been set. The following properties can be used: com.ibm.ssl.protocol One of: SSL, SSLv3, TLS, TLSv1, SSL_TLS. com.ibm.ssl.contextProvider Underlying JSSE provider. For example IBMJSSE2 or SunJSSE com.ibm.ssl.keyStore The name of the file that contains the KeyStore object that you want the KeyManager to use. For example /mydir/etc/key.p12 com.ibm.ssl.keyStorePassword The password for the KeyStore object that you want the KeyManager to use. The password can either be in plain-text, or may be obfuscated using the static method: com.ibm.micro.security.Password.obfuscate(char password). This obfuscates the password using a simple and insecure XOR and Base64 encoding mechanism. Note that this is only a simple scrambler to obfuscate clear-text passwords. com.ibm.ssl.keyStoreType Type of key store, for example PKCS12, JKS, or JCEKS. com.ibm.ssl.keyStoreProvider Key store provider, for example IBMJCE or IBMJCEFIPS. com.ibm.ssl.trustStore The name of the file that contains the KeyStore object that you want the TrustManager to use. com.ibm.ssl.trustStorePassword The password for the TrustStore object that you want the TrustManager to use. The password can either be in plain-text, or may be obfuscated using the static method: com.ibm.micro.security.Password.obfuscate(char password). This obfuscates the password using a simple and insecure XOR and Base64 encoding mechanism. Note that this is only a simple scrambler to obfuscate clear-text passwords. com.ibm.ssl.trustStoreType The type of KeyStore object that you want the default TrustManager to use. Same possible values as keyStoreType. com.ibm.ssl.trustStoreProvider Trust store provider, for example IBMJCE or IBMJCEFIPS. com.ibm.ssl.enabledCipherSuites A list of which ciphers are enabled. Values are dependent on the provider, for example: SSL_RSA_WITH_AES_128_CBC_SHA;SSL_RSA_WITH_3DES_EDE_CBC_SHA. com.ibm.ssl.keyManager Sets the algorithm that will be used to instantiate a KeyManagerFactory object instead of using the default algorithm available in the platform. Example values: IbmX509 or IBMJ9X509. com.ibm.ssl.trustManager Sets the algorithm that will be used to instantiate a TrustManagerFactory object instead of using the default algorithm available in the platform. Example values: PKIX or IBMJ9X509. | Properties | |
sslHostnameVerifier (security) | Sets the HostnameVerifier for the SSL connection. Note that it will be used after handshake on a connection and you should do actions by yourself when hostname is verified error. There is no default HostnameVerifier. | HostnameVerifier | |
userName (security) | Username to be used for authentication against the MQTT broker. | String |
40.4. Endpoint Options
The Paho endpoint is configured using URI syntax:
paho:topic
with the following path and query parameters:
40.4.1. Path Parameters (1 parameters)
Name | Description | Default | Type |
---|---|---|---|
topic (common) | Required Name of the topic. | String |
40.4.2. Query Parameters (31 parameters)
Name | Description | Default | Type |
---|---|---|---|
automaticReconnect (common) | Sets whether the client will automatically attempt to reconnect to the server if the connection is lost. If set to false, the client will not attempt to automatically reconnect to the server in the event that the connection is lost. If set to true, in the event that the connection is lost, the client will attempt to reconnect to the server. It will initially wait 1 second before it attempts to reconnect, for every failed reconnect attempt, the delay will double until it is at 2 minutes at which point the delay will stay at 2 minutes. | true | boolean |
brokerUrl (common) | The URL of the MQTT broker. | tcp://localhost:1883 | String |
cleanSession (common) | Sets whether the client and server should remember state across restarts and reconnects. If set to false both the client and server will maintain state across restarts of the client, the server and the connection. As state is maintained: Message delivery will be reliable meeting the specified QOS even if the client, server or connection are restarted. The server will treat a subscription as durable. If set to true the client and server will not maintain state across restarts of the client, the server or the connection. This means Message delivery to the specified QOS cannot be maintained if the client, server or connection are restarted The server will treat a subscription as non-durable. | true | boolean |
clientId (common) | MQTT client identifier. The identifier must be unique. | String | |
connectionTimeout (common) | Sets the connection timeout value. This value, measured in seconds, defines the maximum time interval the client will wait for the network connection to the MQTT server to be established. The default timeout is 30 seconds. A value of 0 disables timeout processing meaning the client will wait until the network connection is made successfully or fails. | 30 | int |
filePersistenceDirectory (common) | Base directory used by file persistence. Will by default use user directory. | String | |
keepAliveInterval (common) | Sets the keep alive interval. This value, measured in seconds, defines the maximum time interval between messages sent or received. It enables the client to detect if the server is no longer available, without having to wait for the TCP/IP timeout. The client will ensure that at least one message travels across the network within each keep alive period. In the absence of a data-related message during the time period, the client sends a very small ping message, which the server will acknowledge. A value of 0 disables keepalive processing in the client. The default value is 60 seconds. | 60 | int |
maxInflight (common) | Sets the maximum number of in-flight messages. Increase this value in a high-traffic environment. The default value is 10. | 10 | int |
maxReconnectDelay (common) | Sets the maximum time (in milliseconds) to wait between reconnects. | 128000 | int |
mqttVersion (common) | Sets the MQTT version. The default action is to connect with version 3.1.1, and to fall back to 3.1 if that fails. Version 3.1.1 or 3.1 can be selected specifically, with no fall back, by using the MQTT_VERSION_3_1_1 or MQTT_VERSION_3_1 options respectively. | int | |
persistence (common) | Client persistence to be used - memory or file. Enum values: MEMORY, FILE | MEMORY | PahoPersistence |
qos (common) | Client quality of service level (0-2). | 2 | int |
retained (common) | Retain option. | false | boolean |
serverURIs (common) | Set a list of one or more serverURIs the client may connect to. Multiple servers can be separated by comma. Each serverURI specifies the address of a server that the client may connect to. Two types of connection are supported tcp:// for a TCP connection and ssl:// for a TCP connection secured by SSL/TLS. For example: tcp://localhost:1883 ssl://localhost:8883 If the port is not specified, it will default to 1883 for tcp:// URIs, and 8883 for ssl:// URIs. If serverURIs is set then it overrides the serverURI parameter passed in on the constructor of the MQTT client. When an attempt to connect is initiated the client will start with the first serverURI in the list and work through the list until a connection is established with a server. If a connection cannot be made to any of the servers then the connect attempt fails. Specifying a list of servers that a client may connect to has several uses: High Availability and reliable message delivery Some MQTT servers support a high availability feature where two or more equal MQTT servers share state. An MQTT client can connect to any of the equal servers and be assured that messages are reliably delivered and durable subscriptions are maintained no matter which server the client connects to. The cleansession flag must be set to false if durable subscriptions and/or reliable message delivery is required. Hunt List A set of servers may be specified that are not equal (as in the high availability option). As no state is shared across the servers reliable message delivery and durable subscriptions are not valid. The cleansession flag must be set to true if the hunt list mode is used. | String | |
willPayload (common) | Sets the Last Will and Testament (LWT) for the connection. In the event that this client unexpectedly loses its connection to the server, the server will publish a message to itself using the supplied details: the topic to publish to, the byte payload for the message, the quality of service to publish the message at (0, 1 or 2), and whether or not the message should be retained. | String | |
willQos (common) | Sets the Last Will and Testament (LWT) for the connection. In the event that this client unexpectedly loses its connection to the server, the server will publish a message to itself using the supplied details: the topic to publish to, the byte payload for the message, the quality of service to publish the message at (0, 1 or 2), and whether or not the message should be retained. | int | |
willRetained (common) | Sets the Last Will and Testament (LWT) for the connection. In the event that this client unexpectedly loses its connection to the server, the server will publish a message to itself using the supplied details: the topic to publish to, the byte payload for the message, the quality of service to publish the message at (0, 1 or 2), and whether or not the message should be retained. | false | boolean |
willTopic (common) | Sets the Last Will and Testament (LWT) for the connection. In the event that this client unexpectedly loses its connection to the server, the server will publish a message to itself using the supplied details: the topic to publish to, the byte payload for the message, the quality of service to publish the message at (0, 1 or 2), and whether or not the message should be retained. | String | |
bridgeErrorHandler (consumer) | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | boolean |
exceptionHandler (consumer (advanced)) | To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. | ExceptionHandler | |
exchangePattern (consumer (advanced)) | Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly, InOut, InOptionalOut | ExchangePattern | |
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
client (advanced) | To use an existing mqtt client. | MqttClient | |
customWebSocketHeaders (advanced) | Sets the Custom WebSocket Headers for the WebSocket Connection. | Properties | |
executorServiceTimeout (advanced) | Set the time in seconds that the executor service should wait when terminating before forcefully terminating. It is not recommended to change this value unless you are absolutely sure that you need to. | 1 | int |
httpsHostnameVerificationEnabled (security) | Whether SSL HostnameVerifier is enabled or not. The default value is true. | true | boolean |
password (security) | Password to be used for authentication against the MQTT broker. | String | |
socketFactory (security) | Sets the SocketFactory to use. This allows an application to apply its own policies around the creation of network sockets. If using an SSL connection, an SSLSocketFactory can be used to supply application-specific security settings. | SocketFactory | |
sslClientProps (security) | Sets the SSL properties for the connection. Note that these properties are only valid if an implementation of the Java Secure Socket Extensions (JSSE) is available. These properties are not used if a custom SocketFactory has been set. The following properties can be used: com.ibm.ssl.protocol One of: SSL, SSLv3, TLS, TLSv1, SSL_TLS. com.ibm.ssl.contextProvider Underlying JSSE provider. For example IBMJSSE2 or SunJSSE com.ibm.ssl.keyStore The name of the file that contains the KeyStore object that you want the KeyManager to use. For example /mydir/etc/key.p12 com.ibm.ssl.keyStorePassword The password for the KeyStore object that you want the KeyManager to use. The password can either be in plain-text, or may be obfuscated using the static method: com.ibm.micro.security.Password.obfuscate(char password). This obfuscates the password using a simple and insecure XOR and Base64 encoding mechanism. Note that this is only a simple scrambler to obfuscate clear-text passwords. com.ibm.ssl.keyStoreType Type of key store, for example PKCS12, JKS, or JCEKS. com.ibm.ssl.keyStoreProvider Key store provider, for example IBMJCE or IBMJCEFIPS. com.ibm.ssl.trustStore The name of the file that contains the KeyStore object that you want the TrustManager to use. com.ibm.ssl.trustStorePassword The password for the TrustStore object that you want the TrustManager to use. The password can either be in plain-text, or may be obfuscated using the static method: com.ibm.micro.security.Password.obfuscate(char password). This obfuscates the password using a simple and insecure XOR and Base64 encoding mechanism. Note that this is only a simple scrambler to obfuscate clear-text passwords. com.ibm.ssl.trustStoreType The type of KeyStore object that you want the default TrustManager to use. Same possible values as keyStoreType. com.ibm.ssl.trustStoreProvider Trust store provider, for example IBMJCE or IBMJCEFIPS. com.ibm.ssl.enabledCipherSuites A list of which ciphers are enabled. Values are dependent on the provider, for example: SSL_RSA_WITH_AES_128_CBC_SHA;SSL_RSA_WITH_3DES_EDE_CBC_SHA. com.ibm.ssl.keyManager Sets the algorithm that will be used to instantiate a KeyManagerFactory object instead of using the default algorithm available in the platform. Example values: IbmX509 or IBMJ9X509. com.ibm.ssl.trustManager Sets the algorithm that will be used to instantiate a TrustManagerFactory object instead of using the default algorithm available in the platform. Example values: PKIX or IBMJ9X509. | Properties | |
sslHostnameVerifier (security) | Sets the HostnameVerifier for the SSL connection. Note that it will be used after the handshake on a connection, so you should take action yourself if hostname verification fails. There is no default HostnameVerifier. | HostnameVerifier | |
userName (security) | Username to be used for authentication against the MQTT broker. | String |
40.5. Headers
The following headers are recognized by the Paho component:
Header | Java constant | Endpoint type | Value type | Description |
---|---|---|---|---|
CamelMqttTopic | PahoConstants.MQTT_TOPIC | Consumer | String | The name of the topic |
CamelMqttQoS | PahoConstants.MQTT_QOS | Consumer | Integer | QualityOfService of the incoming message |
CamelPahoOverrideTopic | PahoConstants.CAMEL_PAHO_OVERRIDE_TOPIC | Producer | String | Name of topic to override and send to instead of topic specified on endpoint |
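A consumer route can read these headers, for example to record which topic a message arrived on and at which QoS it was delivered. The following is a minimal sketch; the topic name and log endpoint are illustrative assumptions:
from("paho:sensors/temperature")
    .process(exchange -> {
        // headers populated by the Paho consumer
        String topic = exchange.getIn().getHeader(PahoConstants.MQTT_TOPIC, String.class);
        Integer qos = exchange.getIn().getHeader(PahoConstants.MQTT_QOS, Integer.class);
        exchange.getIn().setBody("received on " + topic + " with QoS " + qos);
    })
    .to("log:mqtt-in");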
40.6. Default payload type
By default the Camel Paho component operates on binary payloads extracted out of (or put into) the MQTT message:
// Receive payload
byte[] payload = (byte[]) consumerTemplate.receiveBody("paho:topic");

// Send payload
byte[] payload = "message".getBytes();
producerTemplate.sendBody("paho:topic", payload);
Of course, Camel's built-in type conversion API can perform automatic data type transformations for you. In the example below Camel automatically converts the binary payload into a String (and conversely):
// Receive payload
String payload = consumerTemplate.receiveBody("paho:topic", String.class);

// Send payload
String payload = "message";
producerTemplate.sendBody("paho:topic", payload);
40.7. Samples
For example, the following snippet reads messages from the MQTT broker installed on the same host as the Camel router:
from("paho:some/queue") .to("mock:test");
While the snippet below sends a message to the MQTT broker:
from("direct:test") .to("paho:some/target/queue");
For example, this is how to read messages from a remote MQTT broker:
from("paho:some/queue?brokerUrl=tcp://iot.eclipse.org:1883") .to("mock:test");
And here we override the default topic and set it to a dynamic topic:
from("direct:test") .setHeader(PahoConstants.CAMEL_PAHO_OVERRIDE_TOPIC, simple("${header.customerId}")) .to("paho:some/target/queue");
40.8. Spring Boot Auto-Configuration
When using paho with Spring Boot make sure to use the following Maven dependency to have support for auto configuration:
<dependency>
  <groupId>org.apache.camel.springboot</groupId>
  <artifactId>camel-paho-starter</artifactId>
</dependency>
The component supports 32 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
camel.component.paho.automatic-reconnect | Sets whether the client will automatically attempt to reconnect to the server if the connection is lost. If set to false, the client will not attempt to automatically reconnect to the server in the event that the connection is lost. If set to true, in the event that the connection is lost, the client will attempt to reconnect to the server. It will initially wait 1 second before it attempts to reconnect, for every failed reconnect attempt, the delay will double until it is at 2 minutes at which point the delay will stay at 2 minutes. | true | Boolean |
camel.component.paho.autowired-enabled | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | Boolean |
camel.component.paho.bridge-error-handler | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | Boolean |
camel.component.paho.broker-url | The URL of the MQTT broker. | tcp://localhost:1883 | String |
camel.component.paho.clean-session | Sets whether the client and server should remember state across restarts and reconnects. If set to false both the client and server will maintain state across restarts of the client, the server and the connection. As state is maintained: Message delivery will be reliable meeting the specified QOS even if the client, server or connection are restarted. The server will treat a subscription as durable. If set to true the client and server will not maintain state across restarts of the client, the server or the connection. This means Message delivery to the specified QOS cannot be maintained if the client, server or connection are restarted The server will treat a subscription as non-durable. | true | Boolean |
camel.component.paho.client | To use a shared Paho client. The option is a org.eclipse.paho.client.mqttv3.MqttClient type. | MqttClient | |
camel.component.paho.client-id | MQTT client identifier. The identifier must be unique. | String | |
camel.component.paho.configuration | To use the shared Paho configuration. The option is a org.apache.camel.component.paho.PahoConfiguration type. | PahoConfiguration | |
camel.component.paho.connection-timeout | Sets the connection timeout value. This value, measured in seconds, defines the maximum time interval the client will wait for the network connection to the MQTT server to be established. The default timeout is 30 seconds. A value of 0 disables timeout processing meaning the client will wait until the network connection is made successfully or fails. | 30 | Integer |
camel.component.paho.custom-web-socket-headers | Sets the Custom WebSocket Headers for the WebSocket Connection. The option is a java.util.Properties type. | Properties | |
camel.component.paho.enabled | Whether to enable auto configuration of the paho component. This is enabled by default. | Boolean | |
camel.component.paho.executor-service-timeout | Set the time in seconds that the executor service should wait when terminating before forcefully terminating. It is not recommended to change this value unless you are absolutely sure that you need to. | 1 | Integer |
camel.component.paho.file-persistence-directory | Base directory used by file persistence. Will by default use user directory. | String | |
camel.component.paho.https-hostname-verification-enabled | Whether SSL HostnameVerifier is enabled or not. The default value is true. | true | Boolean |
camel.component.paho.keep-alive-interval | Sets the keep alive interval. This value, measured in seconds, defines the maximum time interval between messages sent or received. It enables the client to detect if the server is no longer available, without having to wait for the TCP/IP timeout. The client will ensure that at least one message travels across the network within each keep alive period. In the absence of a data-related message during the time period, the client sends a very small ping message, which the server will acknowledge. A value of 0 disables keepalive processing in the client. The default value is 60 seconds. | 60 | Integer |
camel.component.paho.lazy-start-producer | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | Boolean |
camel.component.paho.max-inflight | Sets the maximum number of in-flight messages. Increase this value in a high-traffic environment. The default value is 10. | 10 | Integer |
camel.component.paho.max-reconnect-delay | Sets the maximum time (in milliseconds) to wait between reconnects. | 128000 | Integer |
camel.component.paho.mqtt-version | Sets the MQTT version. The default action is to connect with version 3.1.1, and to fall back to 3.1 if that fails. Version 3.1.1 or 3.1 can be selected specifically, with no fall back, by using the MQTT_VERSION_3_1_1 or MQTT_VERSION_3_1 options respectively. | Integer | |
camel.component.paho.password | Password to be used for authentication against the MQTT broker. | String | |
camel.component.paho.persistence | Client persistence to be used - memory or file. | PahoPersistence | |
camel.component.paho.qos | Client quality of service level (0-2). | 2 | Integer |
camel.component.paho.retained | Retain option. | false | Boolean |
camel.component.paho.server-u-r-is | Set a list of one or more serverURIs the client may connect to. Multiple servers can be separated by comma. Each serverURI specifies the address of a server that the client may connect to. Two types of connection are supported tcp:// for a TCP connection and ssl:// for a TCP connection secured by SSL/TLS. For example: tcp://localhost:1883 ssl://localhost:8883 If the port is not specified, it will default to 1883 for tcp:// URIs, and 8883 for ssl:// URIs. If serverURIs is set then it overrides the serverURI parameter passed in on the constructor of the MQTT client. When an attempt to connect is initiated the client will start with the first serverURI in the list and work through the list until a connection is established with a server. If a connection cannot be made to any of the servers then the connect attempt fails. Specifying a list of servers that a client may connect to has several uses: High Availability and reliable message delivery Some MQTT servers support a high availability feature where two or more equal MQTT servers share state. An MQTT client can connect to any of the equal servers and be assured that messages are reliably delivered and durable subscriptions are maintained no matter which server the client connects to. The cleansession flag must be set to false if durable subscriptions and/or reliable message delivery is required. Hunt List A set of servers may be specified that are not equal (as in the high availability option). As no state is shared across the servers reliable message delivery and durable subscriptions are not valid. The cleansession flag must be set to true if the hunt list mode is used. | String | |
camel.component.paho.socket-factory | Sets the SocketFactory to use. This allows an application to apply its own policies around the creation of network sockets. If using an SSL connection, an SSLSocketFactory can be used to supply application-specific security settings. The option is a javax.net.SocketFactory type. | SocketFactory | |
camel.component.paho.ssl-client-props | Sets the SSL properties for the connection. Note that these properties are only valid if an implementation of the Java Secure Socket Extensions (JSSE) is available. These properties are not used if a custom SocketFactory has been set. The following properties can be used: com.ibm.ssl.protocol One of: SSL, SSLv3, TLS, TLSv1, SSL_TLS. com.ibm.ssl.contextProvider Underlying JSSE provider. For example IBMJSSE2 or SunJSSE com.ibm.ssl.keyStore The name of the file that contains the KeyStore object that you want the KeyManager to use. For example /mydir/etc/key.p12 com.ibm.ssl.keyStorePassword The password for the KeyStore object that you want the KeyManager to use. The password can either be in plain-text, or may be obfuscated using the static method: com.ibm.micro.security.Password.obfuscate(char password). This obfuscates the password using a simple and insecure XOR and Base64 encoding mechanism. Note that this is only a simple scrambler to obfuscate clear-text passwords. com.ibm.ssl.keyStoreType Type of key store, for example PKCS12, JKS, or JCEKS. com.ibm.ssl.keyStoreProvider Key store provider, for example IBMJCE or IBMJCEFIPS. com.ibm.ssl.trustStore The name of the file that contains the KeyStore object that you want the TrustManager to use. com.ibm.ssl.trustStorePassword The password for the TrustStore object that you want the TrustManager to use. The password can either be in plain-text, or may be obfuscated using the static method: com.ibm.micro.security.Password.obfuscate(char password). This obfuscates the password using a simple and insecure XOR and Base64 encoding mechanism. Note that this is only a simple scrambler to obfuscate clear-text passwords. com.ibm.ssl.trustStoreType The type of KeyStore object that you want the default TrustManager to use. Same possible values as keyStoreType. com.ibm.ssl.trustStoreProvider Trust store provider, for example IBMJCE or IBMJCEFIPS. com.ibm.ssl.enabledCipherSuites A list of which ciphers are enabled. Values are dependent on the provider, for example: SSL_RSA_WITH_AES_128_CBC_SHA;SSL_RSA_WITH_3DES_EDE_CBC_SHA. com.ibm.ssl.keyManager Sets the algorithm that will be used to instantiate a KeyManagerFactory object instead of using the default algorithm available in the platform. Example values: IbmX509 or IBMJ9X509. com.ibm.ssl.trustManager Sets the algorithm that will be used to instantiate a TrustManagerFactory object instead of using the default algorithm available in the platform. Example values: PKIX or IBMJ9X509. The option is a java.util.Properties type. | Properties | |
camel.component.paho.ssl-hostname-verifier | Sets the HostnameVerifier for the SSL connection. Note that it will be used after the handshake on a connection, so you should take action yourself if hostname verification fails. There is no default HostnameVerifier. The option is a javax.net.ssl.HostnameVerifier type. | HostnameVerifier | |
camel.component.paho.user-name | Username to be used for authentication against the MQTT broker. | String | |
camel.component.paho.will-payload | Sets the Last Will and Testament (LWT) for the connection. In the event that this client unexpectedly loses its connection to the server, the server will publish a message to itself using the supplied details: the topic to publish to, the byte payload for the message, the quality of service to publish the message at (0, 1 or 2), and whether or not the message should be retained. | String |
camel.component.paho.will-qos | Sets the Last Will and Testament (LWT) for the connection. In the event that this client unexpectedly loses its connection to the server, the server will publish a message to itself using the supplied details: the topic to publish to, the byte payload for the message, the quality of service to publish the message at (0, 1 or 2), and whether or not the message should be retained. | Integer |
camel.component.paho.will-retained | Sets the Last Will and Testament (LWT) for the connection. In the event that this client unexpectedly loses its connection to the server, the server will publish a message to itself using the supplied details: the topic to publish to, the byte payload for the message, the quality of service to publish the message at (0, 1 or 2), and whether or not the message should be retained. | false | Boolean |
camel.component.paho.will-topic | Sets the Last Will and Testament (LWT) for the connection. In the event that this client unexpectedly loses its connection to the server, the server will publish a message to itself using the supplied details: the topic to publish to, the byte payload for the message, the quality of service to publish the message at (0, 1 or 2), and whether or not the message should be retained. | String
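These properties can be set in application.properties (or application.yaml) like any other Spring Boot configuration. Below is a minimal sketch assuming a local broker; the broker address, client id and QoS values are illustrative only:
# illustrative values for a local broker
camel.component.paho.broker-url=tcp://localhost:1883
camel.component.paho.client-id=my-camel-app
# deliver with QoS 1 and keep session state across restarts
camel.component.paho.qos=1
camel.component.paho.clean-session=false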
Chapter 41. Paho MQTT 5
Both producer and consumer are supported
The Paho MQTT 5 component provides a connector for the MQTT messaging protocol using the Eclipse Paho library with MQTT v5. Paho is one of the most popular MQTT libraries, so if you would like to integrate MQTT with your Java project, the Camel Paho connector is the way to go.
Maven users will need to add the following dependency to their pom.xml
for this component:
<dependency>
  <groupId>org.apache.camel</groupId>
  <artifactId>camel-paho-mqtt5</artifactId>
  <version>${camel.version}</version>
  <!-- use the same version as your Camel core version -->
</dependency>
41.1. URI format
paho-mqtt5:topic[?options]
Where topic is the name of the topic.
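Endpoint options are appended to the topic as query parameters. For example, the following sketch subscribes with QoS 1 to an illustrative topic on a local broker (the topic name and broker address are assumptions):
from("paho-mqtt5:sensors/temperature?brokerUrl=tcp://localhost:1883&qos=1")
    .to("log:mqtt5-in");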
41.2. Configuring Options
Camel components are configured on two separate levels:
- component level
- endpoint level
41.2.1. Configuring Component Options
The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth.
Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all.
Configuring components can be done with the Component DSL, in a configuration file (application.properties|yaml), or directly with Java code.
41.2.2. Configuring Endpoint Options
Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both.
Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints.
A good practice when configuring options is to use Property Placeholders, which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse.
The following two sections lists all the options, firstly for the component followed by the endpoint.
41.3. Component Options
The Paho MQTT 5 component supports 32 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
automaticReconnect (common) | Sets whether the client will automatically attempt to reconnect to the server if the connection is lost. If set to false, the client will not attempt to automatically reconnect to the server in the event that the connection is lost. If set to true, in the event that the connection is lost, the client will attempt to reconnect to the server. It will initially wait 1 second before it attempts to reconnect, for every failed reconnect attempt, the delay will double until it is at 2 minutes at which point the delay will stay at 2 minutes. | true | boolean |
brokerUrl (common) | The URL of the MQTT broker. | tcp://localhost:1883 | String |
cleanStart (common) | Sets whether the client and server should remember state across restarts and reconnects. If set to false both the client and server will maintain state across restarts of the client, the server and the connection. As state is maintained: Message delivery will be reliable meeting the specified QOS even if the client, server or connection are restarted. The server will treat a subscription as durable. If set to true the client and server will not maintain state across restarts of the client, the server or the connection. This means Message delivery to the specified QOS cannot be maintained if the client, server or connection are restarted The server will treat a subscription as non-durable. | true | boolean |
clientId (common) | MQTT client identifier. The identifier must be unique. | String | |
configuration (common) | To use the shared Paho configuration. | PahoMqtt5Configuration | |
connectionTimeout (common) | Sets the connection timeout value. This value, measured in seconds, defines the maximum time interval the client will wait for the network connection to the MQTT server to be established. The default timeout is 30 seconds. A value of 0 disables timeout processing meaning the client will wait until the network connection is made successfully or fails. | 30 | int |
filePersistenceDirectory (common) | Base directory used by file persistence. Will by default use user directory. | String | |
keepAliveInterval (common) | Sets the keep alive interval. This value, measured in seconds, defines the maximum time interval between messages sent or received. It enables the client to detect if the server is no longer available, without having to wait for the TCP/IP timeout. The client will ensure that at least one message travels across the network within each keep alive period. In the absence of a data-related message during the time period, the client sends a very small ping message, which the server will acknowledge. A value of 0 disables keepalive processing in the client. The default value is 60 seconds. | 60 | int |
maxReconnectDelay (common) | Sets the maximum time (in milliseconds) to wait between reconnects. | 128000 | int |
persistence (common) | Client persistence to be used - memory or file. Enum values: MEMORY, FILE | MEMORY | PahoMqtt5Persistence |
qos (common) | Client quality of service level (0-2). | 2 | int |
receiveMaximum (common) | Sets the Receive Maximum. This value represents the limit of QoS 1 and QoS 2 publications that the client is willing to process concurrently. There is no mechanism to limit the number of QoS 0 publications that the Server might try to send. The default value is 65535. | 65535 | int |
retained (common) | Retain option. | false | boolean |
serverURIs (common) | Set a list of one or more serverURIs the client may connect to. Multiple servers can be separated by comma. Each serverURI specifies the address of a server that the client may connect to. Two types of connection are supported tcp:// for a TCP connection and ssl:// for a TCP connection secured by SSL/TLS. For example: tcp://localhost:1883 ssl://localhost:8883 If the port is not specified, it will default to 1883 for tcp:// URIs, and 8883 for ssl:// URIs. If serverURIs is set then it overrides the serverURI parameter passed in on the constructor of the MQTT client. When an attempt to connect is initiated the client will start with the first serverURI in the list and work through the list until a connection is established with a server. If a connection cannot be made to any of the servers then the connect attempt fails. Specifying a list of servers that a client may connect to has several uses: High Availability and reliable message delivery Some MQTT servers support a high availability feature where two or more equal MQTT servers share state. An MQTT client can connect to any of the equal servers and be assured that messages are reliably delivered and durable subscriptions are maintained no matter which server the client connects to. The cleansession flag must be set to false if durable subscriptions and/or reliable message delivery is required. Hunt List A set of servers may be specified that are not equal (as in the high availability option). As no state is shared across the servers reliable message delivery and durable subscriptions are not valid. The cleansession flag must be set to true if the hunt list mode is used. | String | |
sessionExpiryInterval (common) | Sets the Session Expiry Interval. This value, measured in seconds, defines the maximum time that the broker will maintain the session for once the client disconnects. Clients should only connect with a long Session Expiry interval if they intend to connect to the server at some later point in time. By default this value is -1 and so will not be sent, in this case, the session will not expire. If a 0 is sent, the session will end immediately once the Network Connection is closed. When the client has determined that it has no longer any use for the session, it should disconnect with a Session Expiry Interval set to 0. | -1 | long |
willMqttProperties (common) | Sets the Last Will and Testament (LWT) for the connection. In the event that this client unexpectedly loses its connection to the server, the server will publish a message to itself using the supplied details. The MQTT properties set for the message. | MqttProperties | |
willPayload (common) | Sets the Last Will and Testament (LWT) for the connection. In the event that this client unexpectedly loses its connection to the server, the server will publish a message to itself using the supplied details. The byte payload for the message. | String | |
willQos (common) | Sets the Last Will and Testament (LWT) for the connection. In the event that this client unexpectedly loses its connection to the server, the server will publish a message to itself using the supplied details. The quality of service to publish the message at (0, 1 or 2). | 1 | int |
willRetained (common) | Sets the Last Will and Testament (LWT) for the connection. In the event that this client unexpectedly loses its connection to the server, the server will publish a message to itself using the supplied details. Whether or not the message should be retained. | false | boolean |
willTopic (common) | Sets the Last Will and Testament (LWT) for the connection. In the event that this client unexpectedly loses its connection to the server, the server will publish a message to itself using the supplied details. The topic to publish to. | String | |
bridgeErrorHandler (consumer) | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | boolean |
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
autowiredEnabled (advanced) | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | boolean |
client (advanced) | To use a shared Paho client. | MqttClient | |
customWebSocketHeaders (advanced) | Sets the Custom WebSocket Headers for the WebSocket Connection. | Map | |
executorServiceTimeout (advanced) | Set the time in seconds that the executor service should wait when terminating before forcefully terminating. It is not recommended to change this value unless you are absolutely sure that you need to. | 1 | int |
httpsHostnameVerificationEnabled (security) | Whether SSL HostnameVerifier is enabled or not. The default value is true. | true | boolean |
password (security) | Password to be used for authentication against the MQTT broker. | String | |
socketFactory (security) | Sets the SocketFactory to use. This allows an application to apply its own policies around the creation of network sockets. If using an SSL connection, an SSLSocketFactory can be used to supply application-specific security settings. | SocketFactory | |
sslClientProps (security) | Sets the SSL properties for the connection. Note that these properties are only valid if an implementation of the Java Secure Socket Extensions (JSSE) is available. These properties are not used if a custom SocketFactory has been set. The following properties can be used: com.ibm.ssl.protocol One of: SSL, SSLv3, TLS, TLSv1, SSL_TLS. com.ibm.ssl.contextProvider Underlying JSSE provider. For example IBMJSSE2 or SunJSSE com.ibm.ssl.keyStore The name of the file that contains the KeyStore object that you want the KeyManager to use. For example /mydir/etc/key.p12 com.ibm.ssl.keyStorePassword The password for the KeyStore object that you want the KeyManager to use. The password can either be in plain-text, or may be obfuscated using the static method: com.ibm.micro.security.Password.obfuscate(char password). This obfuscates the password using a simple and insecure XOR and Base64 encoding mechanism. Note that this is only a simple scrambler to obfuscate clear-text passwords. com.ibm.ssl.keyStoreType Type of key store, for example PKCS12, JKS, or JCEKS. com.ibm.ssl.keyStoreProvider Key store provider, for example IBMJCE or IBMJCEFIPS. com.ibm.ssl.trustStore The name of the file that contains the KeyStore object that you want the TrustManager to use. com.ibm.ssl.trustStorePassword The password for the TrustStore object that you want the TrustManager to use. The password can either be in plain-text, or may be obfuscated using the static method: com.ibm.micro.security.Password.obfuscate(char password). This obfuscates the password using a simple and insecure XOR and Base64 encoding mechanism. Note that this is only a simple scrambler to obfuscate clear-text passwords. com.ibm.ssl.trustStoreType The type of KeyStore object that you want the default TrustManager to use. Same possible values as keyStoreType. com.ibm.ssl.trustStoreProvider Trust store provider, for example IBMJCE or IBMJCEFIPS. com.ibm.ssl.enabledCipherSuites A list of which ciphers are enabled. Values are dependent on the provider, for example: SSL_RSA_WITH_AES_128_CBC_SHA;SSL_RSA_WITH_3DES_EDE_CBC_SHA. com.ibm.ssl.keyManager Sets the algorithm that will be used to instantiate a KeyManagerFactory object instead of using the default algorithm available in the platform. Example values: IbmX509 or IBMJ9X509. com.ibm.ssl.trustManager Sets the algorithm that will be used to instantiate a TrustManagerFactory object instead of using the default algorithm available in the platform. Example values: PKIX or IBMJ9X509. | Properties | |
sslHostnameVerifier (security) | Sets the HostnameVerifier for the SSL connection. Note that it will be used after the handshake on a connection, so you should take action yourself if hostname verification fails. There is no default HostnameVerifier. | HostnameVerifier | |
userName (security) | Username to be used for authentication against the MQTT broker. | String |
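Component options can also be set in Java code before the routes are started. The sketch below is an assumption-based illustration: it looks up the component by name and uses the getConfiguration() accessor (the configuration option above), assuming the usual setter naming on PahoMqtt5Configuration for the brokerUrl, userName and password options; the broker address and credentials are placeholders:
PahoMqtt5Component mqtt5 = camelContext.getComponent("paho-mqtt5", PahoMqtt5Component.class);
// illustrative broker address and credentials
mqtt5.getConfiguration().setBrokerUrl("ssl://broker.example.com:8883");
mqtt5.getConfiguration().setUserName("camel");
mqtt5.getConfiguration().setPassword("secret");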
41.4. Endpoint Options
The Paho MQTT 5 endpoint is configured using URI syntax:
paho-mqtt5:topic
with the following path and query parameters:
41.4.1. Path Parameters (1 parameters)
Name | Description | Default | Type |
---|---|---|---|
topic (common) | Required Name of the topic. | String |
41.4.2. Query Parameters (32 parameters)
Name | Description | Default | Type |
---|---|---|---|
automaticReconnect (common) | Sets whether the client will automatically attempt to reconnect to the server if the connection is lost. If set to false, the client will not attempt to automatically reconnect to the server in the event that the connection is lost. If set to true, in the event that the connection is lost, the client will attempt to reconnect to the server. It will initially wait 1 second before it attempts to reconnect, for every failed reconnect attempt, the delay will double until it is at 2 minutes at which point the delay will stay at 2 minutes. | true | boolean |
brokerUrl (common) | The URL of the MQTT broker. | tcp://localhost:1883 | String |
cleanStart (common) | Sets whether the client and server should remember state across restarts and reconnects. If set to false both the client and server will maintain state across restarts of the client, the server and the connection. As state is maintained: Message delivery will be reliable meeting the specified QOS even if the client, server or connection are restarted. The server will treat a subscription as durable. If set to true the client and server will not maintain state across restarts of the client, the server or the connection. This means Message delivery to the specified QOS cannot be maintained if the client, server or connection are restarted The server will treat a subscription as non-durable. | true | boolean |
clientId (common) | MQTT client identifier. The identifier must be unique. | String | |
connectionTimeout (common) | Sets the connection timeout value. This value, measured in seconds, defines the maximum time interval the client will wait for the network connection to the MQTT server to be established. The default timeout is 30 seconds. A value of 0 disables timeout processing meaning the client will wait until the network connection is made successfully or fails. | 30 | int |
filePersistenceDirectory (common) | Base directory used by file persistence. Will by default use user directory. | String | |
keepAliveInterval (common) | Sets the keep alive interval. This value, measured in seconds, defines the maximum time interval between messages sent or received. It enables the client to detect if the server is no longer available, without having to wait for the TCP/IP timeout. The client will ensure that at least one message travels across the network within each keep alive period. In the absence of a data-related message during the time period, the client sends a very small ping message, which the server will acknowledge. A value of 0 disables keepalive processing in the client. The default value is 60 seconds. | 60 | int |
maxReconnectDelay (common) | Sets the maximum time (in milliseconds) to wait between reconnects. | 128000 | int |
persistence (common) | Client persistence to be used - memory or file. Enum values: MEMORY, FILE | MEMORY | PahoMqtt5Persistence |
qos (common) | Client quality of service level (0-2). | 2 | int |
receiveMaximum (common) | Sets the Receive Maximum. This value represents the limit of QoS 1 and QoS 2 publications that the client is willing to process concurrently. There is no mechanism to limit the number of QoS 0 publications that the Server might try to send. The default value is 65535. | 65535 | int |
retained (common) | Retain option. | false | boolean |
serverURIs (common) | Set a list of one or more serverURIs the client may connect to. Multiple servers can be separated by comma. Each serverURI specifies the address of a server that the client may connect to. Two types of connection are supported tcp:// for a TCP connection and ssl:// for a TCP connection secured by SSL/TLS. For example: tcp://localhost:1883 ssl://localhost:8883 If the port is not specified, it will default to 1883 for tcp:// URIs, and 8883 for ssl:// URIs. If serverURIs is set then it overrides the serverURI parameter passed in on the constructor of the MQTT client. When an attempt to connect is initiated the client will start with the first serverURI in the list and work through the list until a connection is established with a server. If a connection cannot be made to any of the servers then the connect attempt fails. Specifying a list of servers that a client may connect to has several uses: High Availability and reliable message delivery Some MQTT servers support a high availability feature where two or more equal MQTT servers share state. An MQTT client can connect to any of the equal servers and be assured that messages are reliably delivered and durable subscriptions are maintained no matter which server the client connects to. The cleansession flag must be set to false if durable subscriptions and/or reliable message delivery is required. Hunt List A set of servers may be specified that are not equal (as in the high availability option). As no state is shared across the servers reliable message delivery and durable subscriptions are not valid. The cleansession flag must be set to true if the hunt list mode is used. | String | |
sessionExpiryInterval (common) | Sets the Session Expiry Interval. This value, measured in seconds, defines the maximum time that the broker will maintain the session for once the client disconnects. Clients should only connect with a long Session Expiry interval if they intend to connect to the server at some later point in time. By default this value is -1 and so will not be sent, in this case, the session will not expire. If a 0 is sent, the session will end immediately once the Network Connection is closed. When the client has determined that it has no longer any use for the session, it should disconnect with a Session Expiry Interval set to 0. | -1 | long |
willMqttProperties (common) | Sets the Last Will and Testament (LWT) for the connection. In the event that this client unexpectedly loses its connection to the server, the server will publish a message to itself using the supplied details. The MQTT properties set for the message. | MqttProperties | |
willPayload (common) | Sets the Last Will and Testament (LWT) for the connection. In the event that this client unexpectedly loses its connection to the server, the server will publish a message to itself using the supplied details. The byte payload for the message. | String | |
willQos (common) | Sets the Last Will and Testament (LWT) for the connection. In the event that this client unexpectedly loses its connection to the server, the server will publish a message to itself using the supplied details. The quality of service to publish the message at (0, 1 or 2). | 1 | int |
willRetained (common) | Sets the Last Will and Testament (LWT) for the connection. In the event that this client unexpectedly loses its connection to the server, the server will publish a message to itself using the supplied details. Whether or not the message should be retained. | false | boolean |
willTopic (common) | Sets the Last Will and Testament (LWT) for the connection. In the event that this client unexpectedly loses its connection to the server, the server will publish a message to itself using the supplied details. The topic to publish to. | String | |
bridgeErrorHandler (consumer) | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | boolean |
exceptionHandler (consumer (advanced)) | To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. | ExceptionHandler | |
exchangePattern (consumer (advanced)) | Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly, InOut, InOptionalOut | ExchangePattern | |
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
client (advanced) | To use an existing mqtt client. | MqttClient | |
customWebSocketHeaders (advanced) | Sets the Custom WebSocket Headers for the WebSocket Connection. | Map | |
executorServiceTimeout (advanced) | Set the time in seconds that the executor service should wait when terminating before forcefully terminating. It is not recommended to change this value unless you are absolutely sure that you need to. | 1 | int |
httpsHostnameVerificationEnabled (security) | Whether SSL HostnameVerifier is enabled or not. The default value is true. | true | boolean |
password (security) | Password to be used for authentication against the MQTT broker. | String | |
socketFactory (security) | Sets the SocketFactory to use. This allows an application to apply its own policies around the creation of network sockets. If using an SSL connection, an SSLSocketFactory can be used to supply application-specific security settings. | SocketFactory | |
sslClientProps (security) | Sets the SSL properties for the connection. Note that these properties are only valid if an implementation of the Java Secure Socket Extensions (JSSE) is available. These properties are not used if a custom SocketFactory has been set. The following properties can be used: com.ibm.ssl.protocol One of: SSL, SSLv3, TLS, TLSv1, SSL_TLS. com.ibm.ssl.contextProvider Underlying JSSE provider. For example IBMJSSE2 or SunJSSE com.ibm.ssl.keyStore The name of the file that contains the KeyStore object that you want the KeyManager to use. For example /mydir/etc/key.p12 com.ibm.ssl.keyStorePassword The password for the KeyStore object that you want the KeyManager to use. The password can either be in plain-text, or may be obfuscated using the static method: com.ibm.micro.security.Password.obfuscate(char password). This obfuscates the password using a simple and insecure XOR and Base64 encoding mechanism. Note that this is only a simple scrambler to obfuscate clear-text passwords. com.ibm.ssl.keyStoreType Type of key store, for example PKCS12, JKS, or JCEKS. com.ibm.ssl.keyStoreProvider Key store provider, for example IBMJCE or IBMJCEFIPS. com.ibm.ssl.trustStore The name of the file that contains the KeyStore object that you want the TrustManager to use. com.ibm.ssl.trustStorePassword The password for the TrustStore object that you want the TrustManager to use. The password can either be in plain-text, or may be obfuscated using the static method: com.ibm.micro.security.Password.obfuscate(char password). This obfuscates the password using a simple and insecure XOR and Base64 encoding mechanism. Note that this is only a simple scrambler to obfuscate clear-text passwords. com.ibm.ssl.trustStoreType The type of KeyStore object that you want the default TrustManager to use. Same possible values as keyStoreType. com.ibm.ssl.trustStoreProvider Trust store provider, for example IBMJCE or IBMJCEFIPS. com.ibm.ssl.enabledCipherSuites A list of which ciphers are enabled. Values are dependent on the provider, for example: SSL_RSA_WITH_AES_128_CBC_SHA;SSL_RSA_WITH_3DES_EDE_CBC_SHA. com.ibm.ssl.keyManager Sets the algorithm that will be used to instantiate a KeyManagerFactory object instead of using the default algorithm available in the platform. Example values: IbmX509 or IBMJ9X509. com.ibm.ssl.trustManager Sets the algorithm that will be used to instantiate a TrustManagerFactory object instead of using the default algorithm available in the platform. Example values: PKIX or IBMJ9X509. | Properties | |
sslHostnameVerifier (security) | Sets the HostnameVerifier for the SSL connection. Note that it will be used after the handshake on a connection, so you should take action yourself if hostname verification fails. There is no default HostnameVerifier. | HostnameVerifier | |
userName (security) | Username to be used for authentication against the MQTT broker. | String |
41.5. Headers
The following headers are recognized by the Paho MQTT 5 component:
Header | Java constant | Endpoint type | Value type | Description |
---|---|---|---|---|
CamelMqttTopic | PahoConstants.MQTT_TOPIC | Consumer | String | The name of the topic |
CamelMqttQoS | PahoConstants.MQTT_QOS | Consumer | Integer | QualityOfService of the incoming message |
CamelPahoOverrideTopic | PahoConstants.CAMEL_PAHO_OVERRIDE_TOPIC | Producer | String | Name of topic to override and send to instead of topic specified on endpoint |
41.6. Default payload type
By default the Camel Paho MQTT 5 component operates on binary payloads extracted out of (or put into) the MQTT message:
// Receive payload
byte[] payload = (byte[]) consumerTemplate.receiveBody("paho-mqtt5:topic");

// Send payload
byte[] payload = "message".getBytes();
producerTemplate.sendBody("paho-mqtt5:topic", payload);
Of course, Camel's built-in type conversion API can perform automatic data type transformations for you. In the example below Camel automatically converts the binary payload into a String (and conversely):
// Receive payload
String payload = consumerTemplate.receiveBody("paho-mqtt5:topic", String.class);

// Send payload
String payload = "message";
producerTemplate.sendBody("paho-mqtt5:topic", payload);
41.7. Samples
For example, the following snippet reads messages from the MQTT broker installed on the same host as the Camel router:
from("paho:some/queue") .to("mock:test");
While the snippet below sends a message to the MQTT broker:
from("direct:test") .to("paho:some/target/queue");
For example, this is how to read messages from a remote MQTT broker:
from("paho:some/queue?brokerUrl=tcp://iot.eclipse.org:1883") .to("mock:test");
And here we override the default topic and set it to a dynamic topic:
from("direct:test") .setHeader(PahoConstants.CAMEL_PAHO_OVERRIDE_TOPIC, simple("${header.customerId}")) .to("paho:some/target/queue");
41.8. Spring Boot Auto-Configuration
When using paho-mqtt5 with Spring Boot make sure to use the following Maven dependency to have support for auto configuration:
<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-paho-mqtt5-starter</artifactId> </dependency>
The component supports 33 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
camel.component.paho-mqtt5.automatic-reconnect | Sets whether the client will automatically attempt to reconnect to the server if the connection is lost. If set to false, the client will not attempt to automatically reconnect to the server in the event that the connection is lost. If set to true, in the event that the connection is lost, the client will attempt to reconnect to the server. It will initially wait 1 second before it attempts to reconnect. For every failed reconnect attempt, the delay will double until it reaches 2 minutes, at which point the delay will stay at 2 minutes. | true | Boolean |
camel.component.paho-mqtt5.autowired-enabled | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | Boolean |
camel.component.paho-mqtt5.bridge-error-handler | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | Boolean |
camel.component.paho-mqtt5.broker-url | The URL of the MQTT broker. | tcp://localhost:1883 | String |
camel.component.paho-mqtt5.clean-start | Sets whether the client and server should remember state across restarts and reconnects. If set to false both the client and server will maintain state across restarts of the client, the server and the connection. As state is maintained: Message delivery will be reliable meeting the specified QOS even if the client, server or connection are restarted. The server will treat a subscription as durable. If set to true the client and server will not maintain state across restarts of the client, the server or the connection. This means Message delivery to the specified QOS cannot be maintained if the client, server or connection are restarted The server will treat a subscription as non-durable. | true | Boolean |
camel.component.paho-mqtt5.client | To use a shared Paho client. The option is a org.eclipse.paho.mqttv5.client.MqttClient type. | MqttClient | |
camel.component.paho-mqtt5.client-id | MQTT client identifier. The identifier must be unique. | String | |
camel.component.paho-mqtt5.configuration | To use the shared Paho configuration. The option is a org.apache.camel.component.paho.mqtt5.PahoMqtt5Configuration type. | PahoMqtt5Configuration | |
camel.component.paho-mqtt5.connection-timeout | Sets the connection timeout value. This value, measured in seconds, defines the maximum time interval the client will wait for the network connection to the MQTT server to be established. The default timeout is 30 seconds. A value of 0 disables timeout processing meaning the client will wait until the network connection is made successfully or fails. | 30 | Integer |
camel.component.paho-mqtt5.custom-web-socket-headers | Sets the Custom WebSocket Headers for the WebSocket Connection. | Map | |
camel.component.paho-mqtt5.enabled | Whether to enable auto configuration of the paho-mqtt5 component. This is enabled by default. | Boolean | |
camel.component.paho-mqtt5.executor-service-timeout | Set the time in seconds that the executor service should wait when terminating before forcefully terminating. It is not recommended to change this value unless you are absolutely sure that you need to. | 1 | Integer |
camel.component.paho-mqtt5.file-persistence-directory | Base directory used by file persistence. Will by default use user directory. | String | |
camel.component.paho-mqtt5.https-hostname-verification-enabled | Whether SSL HostnameVerifier is enabled or not. The default value is true. | true | Boolean |
camel.component.paho-mqtt5.keep-alive-interval | Sets the keep alive interval. This value, measured in seconds, defines the maximum time interval between messages sent or received. It enables the client to detect if the server is no longer available, without having to wait for the TCP/IP timeout. The client will ensure that at least one message travels across the network within each keep alive period. In the absence of a data-related message during the time period, the client sends a very small ping message, which the server will acknowledge. A value of 0 disables keepalive processing in the client. The default value is 60 seconds. | 60 | Integer |
camel.component.paho-mqtt5.lazy-start-producer | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | Boolean |
camel.component.paho-mqtt5.max-reconnect-delay | Get the maximum time (in millis) to wait between reconnects. | 128000 | Integer |
camel.component.paho-mqtt5.password | Password to be used for authentication against the MQTT broker. | String | |
camel.component.paho-mqtt5.persistence | Client persistence to be used - memory or file. | PahoMqtt5Persistence | |
camel.component.paho-mqtt5.qos | Client quality of service level (0-2). | 2 | Integer |
camel.component.paho-mqtt5.receive-maximum | Sets the Receive Maximum. This value represents the limit of QoS 1 and QoS 2 publications that the client is willing to process concurrently. There is no mechanism to limit the number of QoS 0 publications that the Server might try to send. The default value is 65535. | 65535 | Integer |
camel.component.paho-mqtt5.retained | Retain option. | false | Boolean |
camel.component.paho-mqtt5.server-u-r-is | Set a list of one or more serverURIs the client may connect to. Multiple servers can be separated by comma. Each serverURI specifies the address of a server that the client may connect to. Two types of connection are supported tcp:// for a TCP connection and ssl:// for a TCP connection secured by SSL/TLS. For example: tcp://localhost:1883 ssl://localhost:8883 If the port is not specified, it will default to 1883 for tcp:// URIs, and 8883 for ssl:// URIs. If serverURIs is set then it overrides the serverURI parameter passed in on the constructor of the MQTT client. When an attempt to connect is initiated the client will start with the first serverURI in the list and work through the list until a connection is established with a server. If a connection cannot be made to any of the servers then the connect attempt fails. Specifying a list of servers that a client may connect to has several uses: High Availability and reliable message delivery Some MQTT servers support a high availability feature where two or more equal MQTT servers share state. An MQTT client can connect to any of the equal servers and be assured that messages are reliably delivered and durable subscriptions are maintained no matter which server the client connects to. The cleansession flag must be set to false if durable subscriptions and/or reliable message delivery is required. Hunt List A set of servers may be specified that are not equal (as in the high availability option). As no state is shared across the servers reliable message delivery and durable subscriptions are not valid. The cleansession flag must be set to true if the hunt list mode is used. | String | |
camel.component.paho-mqtt5.session-expiry-interval | Sets the Session Expiry Interval. This value, measured in seconds, defines the maximum time that the broker will maintain the session for once the client disconnects. Clients should only connect with a long Session Expiry interval if they intend to connect to the server at some later point in time. By default this value is -1 and so will not be sent, in this case, the session will not expire. If a 0 is sent, the session will end immediately once the Network Connection is closed. When the client has determined that it has no longer any use for the session, it should disconnect with a Session Expiry Interval set to 0. | -1 | Long |
camel.component.paho-mqtt5.socket-factory | Sets the SocketFactory to use. This allows an application to apply its own policies around the creation of network sockets. If using an SSL connection, an SSLSocketFactory can be used to supply application-specific security settings. The option is a javax.net.SocketFactory type. | SocketFactory | |
camel.component.paho-mqtt5.ssl-client-props | Sets the SSL properties for the connection. Note that these properties are only valid if an implementation of the Java Secure Socket Extensions (JSSE) is available. These properties are not used if a custom SocketFactory has been set. The following properties can be used: com.ibm.ssl.protocol One of: SSL, SSLv3, TLS, TLSv1, SSL_TLS. com.ibm.ssl.contextProvider Underlying JSSE provider. For example IBMJSSE2 or SunJSSE com.ibm.ssl.keyStore The name of the file that contains the KeyStore object that you want the KeyManager to use. For example /mydir/etc/key.p12 com.ibm.ssl.keyStorePassword The password for the KeyStore object that you want the KeyManager to use. The password can either be in plain-text, or may be obfuscated using the static method: com.ibm.micro.security.Password.obfuscate(char password). This obfuscates the password using a simple and insecure XOR and Base64 encoding mechanism. Note that this is only a simple scrambler to obfuscate clear-text passwords. com.ibm.ssl.keyStoreType Type of key store, for example PKCS12, JKS, or JCEKS. com.ibm.ssl.keyStoreProvider Key store provider, for example IBMJCE or IBMJCEFIPS. com.ibm.ssl.trustStore The name of the file that contains the KeyStore object that you want the TrustManager to use. com.ibm.ssl.trustStorePassword The password for the TrustStore object that you want the TrustManager to use. The password can either be in plain-text, or may be obfuscated using the static method: com.ibm.micro.security.Password.obfuscate(char password). This obfuscates the password using a simple and insecure XOR and Base64 encoding mechanism. Note that this is only a simple scrambler to obfuscate clear-text passwords. com.ibm.ssl.trustStoreType The type of KeyStore object that you want the default TrustManager to use. Same possible values as keyStoreType. com.ibm.ssl.trustStoreProvider Trust store provider, for example IBMJCE or IBMJCEFIPS. com.ibm.ssl.enabledCipherSuites A list of which ciphers are enabled. Values are dependent on the provider, for example: SSL_RSA_WITH_AES_128_CBC_SHA;SSL_RSA_WITH_3DES_EDE_CBC_SHA. com.ibm.ssl.keyManager Sets the algorithm that will be used to instantiate a KeyManagerFactory object instead of using the default algorithm available in the platform. Example values: IbmX509 or IBMJ9X509. com.ibm.ssl.trustManager Sets the algorithm that will be used to instantiate a TrustManagerFactory object instead of using the default algorithm available in the platform. Example values: PKIX or IBMJ9X509. The option is a java.util.Properties type. | Properties | |
camel.component.paho-mqtt5.ssl-hostname-verifier | Sets the HostnameVerifier for the SSL connection. Note that it will be used after handshake on a connection and you should do actions by yourself when hostname is verified error. There is no default HostnameVerifier. The option is a javax.net.ssl.HostnameVerifier type. | HostnameVerifier | |
camel.component.paho-mqtt5.user-name | Username to be used for authentication against the MQTT broker. | String | |
camel.component.paho-mqtt5.will-mqtt-properties | Sets the Last Will and Testament (LWT) for the connection. In the event that this client unexpectedly loses its connection to the server, the server will publish a message to itself using the supplied details. The MQTT properties set for the message. The option is a org.eclipse.paho.mqttv5.common.packet.MqttProperties type. | MqttProperties | |
camel.component.paho-mqtt5.will-payload | Sets the Last Will and Testament (LWT) for the connection. In the event that this client unexpectedly loses its connection to the server, the server will publish a message to itself using the supplied details. The byte payload for the message. | String | |
camel.component.paho-mqtt5.will-qos | Sets the Last Will and Testament (LWT) for the connection. In the event that this client unexpectedly loses its connection to the server, the server will publish a message to itself using the supplied details. The quality of service to publish the message at (0, 1 or 2). | 1 | Integer |
camel.component.paho-mqtt5.will-retained | Sets the Last Will and Testament (LWT) for the connection. In the event that this client unexpectedly loses its connection to the server, the server will publish a message to itself using the supplied details. Whether or not the message should be retained. | false | Boolean |
camel.component.paho-mqtt5.will-topic | Sets the Last Will and Testament (LWT) for the connection. In the event that this client unexpectedly loses its connection to the server, the server will publish a message to itself using the supplied details. The topic to publish to. | String |
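With the starter on the classpath the component is auto-configured and can be used directly in a route. The following is a minimal sketch; the broker URL and topic are illustrative:
from("paho-mqtt5:sensors/temperature?brokerUrl=tcp://localhost:1883")
    .log("MQTT 5 payload: ${body}")
    .to("mock:result");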
Chapter 42. Quartz
Only consumer is supported
The Quartz component provides a scheduled delivery of messages using the Quartz Scheduler 2.x. Each endpoint represents a different timer (in Quartz terms, a Trigger and JobDetail).
Maven users will need to add the following dependency to their pom.xml
for this component:
<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-quartz</artifactId> <version>{CamelSBVersion}</version> <!-- use the same version as your Camel core version --> </dependency>
42.1. URI format
quartz://timerName?options quartz://groupName/timerName?options quartz://groupName/timerName?cron=expression quartz://timerName?cron=expression
The component uses either a CronTrigger or a SimpleTrigger. If no cron expression is provided, the component uses a simple trigger. If no groupName is provided, the quartz component uses the Camel group name.
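For example, the following two routes use a simple trigger and a cron trigger respectively. This is a minimal sketch; the timer names, repeat interval, and cron expression are illustrative:
// Simple trigger: no cron expression, fires every 5 seconds, repeating forever
from("quartz://mySimpleTimer?trigger.repeatInterval=5000&trigger.repeatCount=-1")
    .to("log:simple");

// Cron trigger in the myGroup group: fires every 5 minutes
from("quartz://myGroup/myCronTimer?cron=0+0/5+*+*+*+?")
    .to("log:cron");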
42.2. Configuring Options
Camel components are configured on two separate levels:
- component level
- endpoint level
42.2.1. Configuring Component Options
The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth.
Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all.
Configuring components can be done with the Component DSL, in a configuration file (application.properties|yaml), or directly with Java code.
42.2.2. Configuring Endpoint Options
Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both.
Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints.
A good practice when configuring options is to use Property Placeholders, which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse.
The following two sections lists all the options, firstly for the component followed by the endpoint.
42.3. Component Options
The Quartz component supports 13 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
bridgeErrorHandler (consumer) | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | boolean |
enableJmx (consumer) | Whether to enable Quartz JMX which allows you to manage the Quartz scheduler from JMX. This option is default true. | true | boolean |
prefixInstanceName (consumer) | Whether to prefix the Quartz Scheduler instance name with the CamelContext name. This is enabled by default, to let each CamelContext use its own Quartz scheduler instance by default. You can set this option to false to reuse Quartz scheduler instances between multiple CamelContext’s. | true | boolean |
prefixJobNameWithEndpointId (consumer) | Whether to prefix the quartz job with the endpoint id. This option is default false. | false | boolean |
properties (consumer) | Properties to configure the Quartz scheduler. | Map | |
propertiesFile (consumer) | File name of the properties to load from the classpath. | String | |
propertiesRef (consumer) | References to an existing Properties or Map to lookup in the registry to use for configuring quartz. | String | |
autowiredEnabled (advanced) | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | boolean |
scheduler (advanced) | To use the custom configured Quartz scheduler, instead of creating a new Scheduler. | Scheduler | |
schedulerFactory (advanced) | To use the custom SchedulerFactory which is used to create the Scheduler. | SchedulerFactory | |
autoStartScheduler (scheduler) | Whether or not the scheduler should be auto started. This option is default true. | true | boolean |
interruptJobsOnShutdown (scheduler) | Whether to interrupt jobs on shutdown which forces the scheduler to shutdown quicker and attempt to interrupt any running jobs. If this is enabled then any running jobs can fail due to being interrupted. When a job is interrupted then Camel will mark the exchange to stop continue routing and set java.util.concurrent.RejectedExecutionException as caused exception. Therefore use this with care, as its often better to allow Camel jobs to complete and shutdown gracefully. | false | boolean |
startDelayedSeconds (scheduler) | Seconds to wait before starting the quartz scheduler. | int |
42.4. Endpoint Options
The Quartz endpoint is configured using URI syntax:
quartz:groupName/triggerName
with the following path and query parameters:
42.4.1. Path Parameters (2 parameters)
Name | Description | Default | Type |
---|---|---|---|
groupName (consumer) | The quartz group name to use. The combination of group name and trigger name should be unique. | Camel | String |
triggerName (consumer) | Required The quartz trigger name to use. The combination of group name and trigger name should be unique. | String |
42.4.2. Query Parameters (17 parameters)
Name | Description | Default | Type |
---|---|---|---|
bridgeErrorHandler (consumer) | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | boolean |
cron (consumer) | Specifies a cron expression to define when to trigger. | String | |
deleteJob (consumer) | If set to true, then the trigger is automatically deleted when the route stops. If set to false, it remains in the scheduler. Setting it to false also means you may reuse a pre-configured trigger with the Camel URI; just ensure the names match. Note that you cannot have both deleteJob and pauseJob set to true. | true | boolean |
durableJob (consumer) | Whether or not the job should remain stored after it is orphaned (no triggers point to it). | false | boolean |
pauseJob (consumer) | If set to true, then the trigger is automatically paused when the route stops. If set to false, it remains in the scheduler. Setting it to false also means you may reuse a pre-configured trigger with the Camel URI; just ensure the names match. Note that you cannot have both deleteJob and pauseJob set to true. | false | boolean |
recoverableJob (consumer) | Instructs the scheduler whether or not the job should be re-executed if a 'recovery' or 'fail-over' situation is encountered. | false | boolean |
stateful (consumer) | Uses a Quartz PersistJobDataAfterExecution and DisallowConcurrentExecution instead of the default job. | false | boolean |
exceptionHandler (consumer (advanced)) | To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. | ExceptionHandler | |
exchangePattern (consumer (advanced)) | Sets the exchange pattern when the consumer creates an exchange. Enum values:
| ExchangePattern | |
customCalendar (advanced) | Specifies a custom calendar to avoid specific range of date. | Calendar | |
jobParameters (advanced) | To configure additional options on the job. | Map | |
prefixJobNameWithEndpointId (advanced) | Whether the job name should be prefixed with endpoint id. | false | boolean |
triggerParameters (advanced) | To configure additional options on the trigger. | Map | |
usingFixedCamelContextName (advanced) | If true, the JobDataMap uses the CamelContext name directly to reference the CamelContext; if false, the JobDataMap uses the CamelContext management name, which could change during deployment. | false | boolean |
autoStartScheduler (scheduler) | Whether or not the scheduler should be auto started. | true | boolean |
startDelayedSeconds (scheduler) | Seconds to wait before starting the quartz scheduler. | int | |
triggerStartDelay (scheduler) | In case the scheduler has already started, the trigger start time is shifted slightly after the current time to ensure the endpoint is fully started before the job kicks in. A negative value shifts the trigger start time into the past. | 500 | long |
42.4.3. Configuring quartz.properties file
By default Quartz will look for a quartz.properties file in the org/quartz directory of the classpath. If you are using WAR deployments this means just drop the quartz.properties in WEB-INF/classes/org/quartz.
However the Camel Quartz component also allows you to configure properties:
Parameter | Default | Type | Description |
---|---|---|---|
properties | | Map | You can configure a java.util.Properties instance to configure the Quartz scheduler. |
propertiesFile | | String | File name of the properties to load from the classpath. |
To do this you can configure this in Spring XML as follows
<bean id="quartz" class="org.apache.camel.component.quartz.QuartzComponent"> <property name="propertiesFile" value="com/mycompany/myquartz.properties"/> </bean>
42.5. Enabling Quartz scheduler in JMX
You need to configure the quartz scheduler properties to enable JMX.
That is typically done by setting the option "org.quartz.scheduler.jmx.export" to a true value in the configuration file. This option is set to true by default, unless explicitly disabled.
42.6. Starting the Quartz scheduler
The Quartz component offers an option to let the Quartz scheduler be started delayed, or not auto started at all.
This is an example:
<bean id="quartz" class="org.apache.camel.component.quartz.QuartzComponent"> <property name="startDelayedSeconds" value="5"/> </bean>
42.7. Clustering
If you use Quartz in clustered mode, i.e. the JobStore is clustered, then the Quartz component will not pause/remove triggers when a node is stopped or shut down. This allows the trigger to keep running on the other nodes in the cluster.
Note that when running in clustered mode, no checking is done to ensure unique job name/group for endpoints.
42.8. Message Headers
Camel adds the getters from the Quartz Execution Context as header values. The following headers are added: calendar, fireTime, jobDetail, jobInstance, jobRunTime, mergedJobDataMap, nextFireTime, previousFireTime, refireCount, result, scheduledFireTime, scheduler, trigger, triggerName, triggerGroup.
The fireTime header contains the java.util.Date of when the exchange was fired.
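For example, a route can log the fire time of each run. This is a minimal sketch; the timer name and cron expression are illustrative:
from("quartz://myGroup/myTimer?cron=0+0/5+*+*+*+?")
    .log("Quartz fired at ${header.fireTime}")
    .to("mock:result");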
42.9. Using Cron Triggers
Quartz supports Cron-like expressions for specifying timers in a handy format. You can use these expressions in the cron URI parameter; though to preserve valid URI encoding we allow + to be used instead of spaces.
For example, the following will fire a message every five minutes starting at 12pm (noon) to 6pm on weekdays:
from("quartz://myGroup/myTimerName?cron=0+0/5+12-18+?+*+MON-FRI") .to("activemq:Totally.Rocks");
which is equivalent to using the cron expression
0 0/5 12-18 ? * MON-FRI
The following table shows the URI character encodings we use to preserve valid URI syntax:
URI Character | Cron character |
---|---|
+ | Space |
42.10. Specifying time zone
The Quartz Scheduler allows you to configure the time zone per trigger. For example, to use the time zone of your country, you can do as follows:
quartz://groupName/timerName?cron=0+0/5+12-18+?+*+MON-FRI&trigger.timeZone=Europe/Stockholm
The timeZone value is any value accepted by java.util.TimeZone.
42.11. Configuring misfire instructions
The quartz scheduler can be configured with a misfire instruction to handle misfire situations for the trigger. The concrete trigger type that you are using will have defined a set of additional MISFIRE_INSTRUCTION_XXX constants that may be set as this property’s value.
For example to configure the simple trigger to use misfire instruction 4:
quartz://myGroup/myTimerName?trigger.repeatInterval=2000&trigger.misfireInstruction=4
And likewise you can configure the cron trigger with one of its misfire instructions as well:
quartz://myGroup/myTimerName?cron=0/2+*+*+*+*+?&trigger.misfireInstruction=2
The simple and cron triggers support the following misfire instructions:
42.11.1. SimpleTrigger.MISFIRE_INSTRUCTION_FIRE_NOW = 1 (default)
Instructs the Scheduler that upon a mis-fire situation, the SimpleTrigger wants to be fired now by Scheduler.
This instruction should typically only be used for 'one-shot' (non-repeating) Triggers. If it is used on a trigger with a repeat count > 0 then it is equivalent to the instruction MISFIRE_INSTRUCTION_RESCHEDULE_NOW_WITH_REMAINING_REPEAT_COUNT.
42.11.2. SimpleTrigger.MISFIRE_INSTRUCTION_RESCHEDULE_NOW_WITH_EXISTING_REPEAT_COUNT = 2
Instructs the Scheduler that upon a mis-fire situation, the SimpleTrigger wants to be re-scheduled to 'now' (even if the associated Calendar excludes 'now') with the repeat count left as-is. This does obey the Trigger end-time however, so if 'now' is after the end-time the Trigger will not fire again.
Use of this instruction causes the trigger to 'forget' the start-time and repeat-count that it was originally setup with (this is only an issue if you for some reason wanted to be able to tell what the original values were at some later time).
42.11.3. SimpleTrigger.MISFIRE_INSTRUCTION_RESCHEDULE_NOW_WITH_REMAINING_REPEAT_COUNT = 3
Instructs the Scheduler that upon a mis-fire situation, the SimpleTrigger wants to be re-scheduled to 'now' (even if the associated Calendar excludes 'now') with the repeat count set to what it would be, if it had not missed any firings. This does obey the Trigger end-time however, so if 'now' is after the end-time the Trigger will not fire again.
Use of this instruction causes the trigger to 'forget' the start-time and repeat-count that it was originally setup with. Instead, the repeat count on the trigger will be changed to whatever the remaining repeat count is (this is only an issue if you for some reason wanted to be able to tell what the original values were at some later time).
This instruction could cause the Trigger to go to the 'COMPLETE' state after firing 'now', if all the repeat-fire-times were missed.
42.11.4. SimpleTrigger.MISFIRE_INSTRUCTION_RESCHEDULE_NEXT_WITH_REMAINING_COUNT = 4
Instructs the Scheduler that upon a mis-fire situation, the SimpleTrigger wants to be re-scheduled to the next scheduled time after 'now' - taking into account any associated Calendar and with the repeat count set to what it would be, if it had not missed any firings.
This instruction could cause the Trigger to go directly to the 'COMPLETE' state if all fire-times were missed.
42.11.5. SimpleTrigger.MISFIRE_INSTRUCTION_RESCHEDULE_NEXT_WITH_EXISTING_COUNT = 5
Instructs the Scheduler that upon a mis-fire situation, the SimpleTrigger wants to be re-scheduled to the next scheduled time after 'now' - taking into account any associated Calendar, and with the repeat count left unchanged.
This instruction could cause the Trigger to go directly to the 'COMPLETE' state if the end-time of the trigger has arrived.
42.11.6. CronTrigger.MISFIRE_INSTRUCTION_FIRE_ONCE_NOW = 1 (default)
Instructs the Scheduler that upon a mis-fire situation, the CronTrigger wants to be fired now by Scheduler.
42.11.7. CronTrigger.MISFIRE_INSTRUCTION_DO_NOTHING = 2
Instructs the Scheduler that upon a mis-fire situation, the CronTrigger wants to have its next-fire-time updated to the next time in the schedule after the current time (taking into account any associated Calendar), but it does not want to be fired now.
42.12. Using QuartzScheduledPollConsumerScheduler
The Quartz component provides a Polling Consumer scheduler which allows you to use cron-based scheduling for Polling Consumers such as the File and FTP consumers.
For example, to use a cron-based expression to poll for files every 2 seconds, a Camel route can be defined simply as:
from("file:inbox?scheduler=quartz&scheduler.cron=0/2+*+*+*+*+?") .to("bean:process");
Notice we define scheduler=quartz to instruct Camel to use the Quartz based scheduler. Then we use scheduler.xxx options to configure the scheduler. The Quartz scheduler requires the cron option to be set.
The following options are supported:
Parameter | Default | Type | Description |
---|---|---|---|
quartzScheduler | | Scheduler | To use a custom Quartz scheduler. If none configured then the shared scheduler from the component is used. |
cron | | String | Mandatory: To define the cron expression for triggering the polls. |
triggerId | | String | To specify the trigger id. If none provided then an UUID is generated and used. |
triggerGroup | | String | To specify the trigger group. |
timeZone | | TimeZone | The time zone to use for the CRON trigger. |
Remember that when configuring these options from the endpoint URI, they must be prefixed with scheduler., for example scheduler.cron or scheduler.triggerId.
For example to configure the trigger id and group:
from("file:inbox?scheduler=quartz&scheduler.cron=0/2+*+*+*+*+?&scheduler.triggerId=myId&scheduler.triggerGroup=myGroup") .to("bean:process");
There is also a CRON scheduler in Spring, so you can use the following as well:
from("file:inbox?scheduler=spring&scheduler.cron=0/2+*+*+*+*+?") .to("bean:process");
42.13. Cron Component Support
The Quartz component can be used as the implementation of the Camel Cron component.
Maven users will need to add the following additional dependency to their pom.xml:
<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-cron</artifactId> <version>{CamelSBVersion}</version> <!-- use the same version as your Camel core version --> </dependency>
Users can then use the cron component instead of the quartz component, as in the following route:
from("cron://name?schedule=0+0/5+12-18+?+*+MON-FRI") .to("activemq:Totally.Rocks");
42.14. Spring Boot Auto-Configuration
When using quartz with Spring Boot make sure to use the following Maven dependency to have support for auto configuration:
<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-quartz-starter</artifactId> </dependency>
The component supports 14 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
camel.component.quartz.auto-start-scheduler | Whether or not the scheduler should be auto started. This option is default true. | true | Boolean |
camel.component.quartz.autowired-enabled | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | Boolean |
camel.component.quartz.bridge-error-handler | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | Boolean |
camel.component.quartz.enable-jmx | Whether to enable Quartz JMX which allows you to manage the Quartz scheduler from JMX. This option is default true. | true | Boolean |
camel.component.quartz.enabled | Whether to enable auto configuration of the quartz component. This is enabled by default. | Boolean | |
camel.component.quartz.interrupt-jobs-on-shutdown | Whether to interrupt jobs on shutdown which forces the scheduler to shutdown quicker and attempt to interrupt any running jobs. If this is enabled then any running jobs can fail due to being interrupted. When a job is interrupted then Camel will mark the exchange to stop continue routing and set java.util.concurrent.RejectedExecutionException as caused exception. Therefore use this with care, as its often better to allow Camel jobs to complete and shutdown gracefully. | false | Boolean |
camel.component.quartz.prefix-instance-name | Whether to prefix the Quartz Scheduler instance name with the CamelContext name. This is enabled by default, to let each CamelContext use its own Quartz scheduler instance by default. You can set this option to false to reuse Quartz scheduler instances between multiple CamelContext’s. | true | Boolean |
camel.component.quartz.prefix-job-name-with-endpoint-id | Whether to prefix the quartz job with the endpoint id. This option is default false. | false | Boolean |
camel.component.quartz.properties | Properties to configure the Quartz scheduler. | Map | |
camel.component.quartz.properties-file | File name of the properties to load from the classpath. | String | |
camel.component.quartz.properties-ref | References to an existing Properties or Map to lookup in the registry to use for configuring quartz. | String | |
camel.component.quartz.scheduler | To use the custom configured Quartz scheduler, instead of creating a new Scheduler. The option is a org.quartz.Scheduler type. | Scheduler | |
camel.component.quartz.scheduler-factory | To use the custom SchedulerFactory which is used to create the Scheduler. The option is a org.quartz.SchedulerFactory type. | SchedulerFactory | |
camel.component.quartz.start-delayed-seconds | Seconds to wait before starting the quartz scheduler. | Integer |
Chapter 43. Ref
Both producer and consumer are supported
The Ref component is used for lookup of existing endpoints bound in the Registry.
43.1. URI format
ref:someName[?options]
Where someName is the name of an endpoint in the Registry (usually, but not always, the Spring registry). If you are using the Spring registry, someName would be the bean ID of an endpoint in the Spring registry.
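For example, a route can consume from and produce to endpoints registered under given IDs. This is a minimal sketch; the bean IDs inputEndpoint and outputEndpoint are hypothetical:
from("ref:inputEndpoint")
    .to("ref:outputEndpoint");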
43.2. Configuring Options
Camel components are configured on two separate levels:
- component level
- endpoint level
43.2.1. Configuring Component Options
The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth.
Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all.
Configuring components can be done with the Component DSL, in a configuration file (application.properties|yaml), or directly with Java code.
43.2.2. Configuring Endpoint Options
Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both.
Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints.
A good practice when configuring options is to use Property Placeholders, which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse.
The following two sections lists all the options, firstly for the component followed by the endpoint.
43.3. Component Options
The Ref component supports 3 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
bridgeErrorHandler (consumer) | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | boolean |
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
autowiredEnabled (advanced) | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | boolean |
43.4. Endpoint Options
The Ref endpoint is configured using URI syntax:
ref:name
with the following path and query parameters:
43.4.1. Path Parameters (1 parameters)
Name | Description | Default | Type |
---|---|---|---|
name (common) | Required Name of endpoint to lookup in the registry. | String |
43.4.2. Query Parameters (4 parameters)
Name | Description | Default | Type |
---|---|---|---|
bridgeErrorHandler (consumer) | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | boolean |
exceptionHandler (consumer (advanced)) | To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. | ExceptionHandler | |
exchangePattern (consumer (advanced)) | Sets the exchange pattern when the consumer creates an exchange. Enum values:
| ExchangePattern | |
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
43.5. Runtime lookup
This component can be used when you need dynamic discovery of endpoints in the Registry where you can compute the URI at runtime. Then you can look up the endpoint using the following code:
// lookup the endpoint
String myEndpointRef = "bigspenderOrder";
Endpoint endpoint = context.getEndpoint("ref:" + myEndpointRef);
Producer producer = endpoint.createProducer();
Exchange exchange = producer.createExchange();
exchange.getIn().setBody(payloadToSend);
// send the exchange
producer.process(exchange);
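Alternatively, a ProducerTemplate can resolve the same ref: URI in a single call. This is a simpler sketch under the same assumptions:
String myEndpointRef = "bigspenderOrder";
template.sendBody("ref:" + myEndpointRef, payloadToSend);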
And you could have a list of endpoints defined in the Registry such as:
<camelContext id="camel" xmlns="http://activemq.apache.org/camel/schema/spring"> <endpoint id="normalOrder" uri="activemq:order.slow"/> <endpoint id="bigspenderOrder" uri="activemq:order.high"/> </camelContext>
43.6. Sample
In the sample below we use ref: in the URI to reference the endpoint with the Spring ID endpoint2:
<to uri="ref:endpoint2"/>
You could, of course, have used the ref attribute instead, which is the more common way to write it.
43.7. Spring Boot Auto-Configuration
When using ref with Spring Boot make sure to use the following Maven dependency to have support for auto configuration:
<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-ref-starter</artifactId> </dependency>
The component supports 4 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
camel.component.ref.autowired-enabled | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | Boolean |
camel.component.ref.bridge-error-handler | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | Boolean |
camel.component.ref.enabled | Whether to enable auto configuration of the ref component. This is enabled by default. | Boolean | |
camel.component.ref.lazy-start-producer | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | Boolean |
Chapter 44. REST
Both producer and consumer are supported
The REST component allows you to define REST endpoints (consumer) using the Rest DSL and plug in to other Camel components as the REST transport.
The rest component can also be used as a client (producer) to call REST services.
44.1. URI format
rest://method:path[:uriTemplate]?[options]
44.2. Configuring Options
Camel components are configured on two separate levels:
- component level
- endpoint level
44.2.1. Configuring Component Options
The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth.
Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all.
Configuring components can be done with the Component DSL, in a configuration file (application.properties|yaml), or directly with Java code.
44.2.2. Configuring Endpoint Options
Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both.
Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints.
A good practice when configuring options is to use Property Placeholders, which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse.
The following two sections lists all the options, firstly for the component followed by the endpoint.
44.3. Component Options
The REST component supports 8 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
bridgeErrorHandler (consumer) | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | boolean |
consumerComponentName (consumer) | The Camel Rest component to use for (consumer) the REST transport, such as jetty, servlet, undertow. If no component has been explicit configured, then Camel will lookup if there is a Camel component that integrates with the Rest DSL, or if a org.apache.camel.spi.RestConsumerFactory is registered in the registry. If either one is found, then that is being used. | String | |
apiDoc (producer) | The swagger api doc resource to use. The resource is loaded from classpath by default and must be in JSON format. | String | |
componentName (producer) | Deprecated The Camel Rest component to use for (producer) the REST transport, such as http, undertow. If no component has been explicit configured, then Camel will lookup if there is a Camel component that integrates with the Rest DSL, or if a org.apache.camel.spi.RestProducerFactory is registered in the registry. If either one is found, then that is being used. | String | |
host (producer) | Host and port of HTTP service to use (override host in swagger schema). | String | |
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
producerComponentName (producer) | The Camel Rest component to use for (producer) the REST transport, such as http, undertow. If no component has been explicit configured, then Camel will lookup if there is a Camel component that integrates with the Rest DSL, or if a org.apache.camel.spi.RestProducerFactory is registered in the registry. If either one is found, then that is being used. | String | |
autowiredEnabled (advanced) | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | boolean |
44.4. Endpoint Options
The REST endpoint is configured using URI syntax:
rest:method:path:uriTemplate
with the following path and query parameters:
44.4.1. Path Parameters (3 parameters)
Name | Description | Default | Type |
---|---|---|---|
method (common) | Required HTTP method to use. Enum values:
| String | |
path (common) | Required The base path. | String | |
uriTemplate (common) | The uri template. | String |
44.4.2. Query Parameters (16 parameters)
Name | Description | Default | Type |
---|---|---|---|
consumes (common) | Media type such as: 'text/xml', or 'application/json' this REST service accepts. By default we accept all kinds of types. | String | |
inType (common) | To declare the incoming POJO binding type as a FQN class name. | String | |
outType (common) | To declare the outgoing POJO binding type as a FQN class name. | String | |
produces (common) | Media type such as: 'text/xml', or 'application/json' this REST service returns. | String | |
routeId (common) | Name of the route this REST services creates. | String | |
bridgeErrorHandler (consumer) | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | boolean |
consumerComponentName (consumer) | The Camel Rest component to use for (consumer) the REST transport, such as jetty, servlet, undertow. If no component has been explicit configured, then Camel will lookup if there is a Camel component that integrates with the Rest DSL, or if a org.apache.camel.spi.RestConsumerFactory is registered in the registry. If either one is found, then that is being used. | String | |
description (consumer) | Human description to document this REST service. | String | |
exceptionHandler (consumer (advanced)) | To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. | ExceptionHandler | |
exchangePattern (consumer (advanced)) | Sets the exchange pattern when the consumer creates an exchange. Enum values:
| ExchangePattern | |
apiDoc (producer) | The openapi api doc resource to use. The resource is loaded from classpath by default and must be in JSON format. | String | |
bindingMode (producer) | Configures the binding mode for the producer. If set to anything other than 'off' the producer will try to convert the body of the incoming message from inType to the json or xml, and the response from json or xml to outType. Enum values:
| RestBindingMode | |
host (producer) | Host and port of HTTP service to use (override host in openapi schema). | String | |
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
producerComponentName (producer) | The Camel Rest component to use for (producer) the REST transport, such as http, undertow. If no component has been explicit configured, then Camel will lookup if there is a Camel component that integrates with the Rest DSL, or if a org.apache.camel.spi.RestProducerFactory is registered in the registry. If either one is found, then that is being used. | String | |
queryParameters (producer) | Query parameters for the HTTP service to call. The query parameters can contain multiple parameters separated by ampersand, such as foo=123&bar=456. | String |
44.5. Supported rest components
The following components support rest consumer (Rest DSL):
- camel-servlet
The following components support rest producer:
- camel-http
44.6. Path and uriTemplate syntax
The path and uriTemplate options are defined using REST syntax where you define the REST context path with support for parameters.
If no uriTemplate is configured then the path option works the same way. It does not matter if you configure only path or if you configure both options, though configuring both a path and uriTemplate is the more common practice with REST.
The following is a Camel route using a path only:
from("rest:get:hello") .transform().constant("Bye World");
And the following route uses a parameter which is mapped to a Camel header with the key "me".
from("rest:get:hello/{me}") .transform().simple("Bye ${header.me}");
The following examples have configured a base path as "hello" and then have two REST services configured using uriTemplates.
from("rest:get:hello:/{me}") .transform().simple("Hi ${header.me}"); from("rest:get:hello:/french/{me}") .transform().simple("Bonjour ${header.me}");
44.7. Rest producer examples
You can use the rest component to call REST services like any other Camel component.
For example, to call a REST service using hello/{me} you can do:
from("direct:start") .to("rest:get:hello/{me}");
The dynamic value {me} is mapped to a Camel message header with the same name. So to call this REST service you can send an empty message body and a header as shown:
template.sendBodyAndHeader("direct:start", null, "me", "Donald Duck");
The Rest producer needs to know the hostname and port of the REST service, which you can configure using the host option as shown:
from("direct:start") .to("rest:get:hello/{me}?host=myserver:8080/foo");
Instead of using the host option, you can configure the host on the restConfiguration as shown:
restConfiguration().host("myserver:8080/foo"); from("direct:start") .to("rest:get:hello/{me}");
You can use the producerComponent to select which Camel component to use as the HTTP client, for example to use http you can do:
restConfiguration().host("myserver:8080/foo").producerComponent("http"); from("direct:start") .to("rest:get:hello/{me}");
44.8. Rest producer binding
The REST producer supports binding using JSon or XML like the rest-dsl does.
For example to use jetty with json binding mode turned on you can configure this in the rest configuration:
restConfiguration().component("jetty").host("localhost").port(8080).bindingMode(RestBindingMode.json); from("direct:start") .to("rest:post:user");
Then when calling the REST service using the rest producer it will automatically bind any POJOs to JSon before calling the REST service:
UserPojo user = new UserPojo();
user.setId(123);
user.setName("Donald Duck");

template.sendBody("direct:start", user);
In the example above we send a UserPojo instance as the message body. Because JSON binding is turned on in the rest configuration, the POJO will be marshalled to JSON before the REST service is called.
However, if you also want to perform binding for the response message (i.e. what the REST service sends back as the response), you need to configure the outType option to specify the class name of the POJO to unmarshal from JSON.
For example, if the REST service returns a JSON payload that binds to com.foo.MyResponsePojo, you can configure this as shown:
restConfiguration().component("jetty").host("localhost").port(8080).bindingMode(RestBindingMode.json);

from("direct:start")
    .to("rest:post:user?outType=com.foo.MyResponsePojo");
You must configure the outType option if you want POJO binding to happen for the response messages received from calling the REST service.
44.9. More examples
See the Rest DSL, which offers more examples and shows how you can use it to define REST services in a nicer RESTful way.
There is a camel-example-servlet-rest-tomcat example in the Apache Camel distribution, that demonstrates how to use the Rest DSL with SERVLET as transport that can be deployed on Apache Tomcat, or similar web containers.
44.10. Spring Boot Auto-Configuration
When using rest with Spring Boot make sure to use the following Maven dependency to have support for auto configuration:
<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-rest-starter</artifactId> </dependency>
The component supports 12 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
camel.component.rest-api.autowired-enabled | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | Boolean |
camel.component.rest-api.bridge-error-handler | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | Boolean |
camel.component.rest-api.enabled | Whether to enable auto configuration of the rest-api component. This is enabled by default. | Boolean | |
camel.component.rest.api-doc | The swagger api doc resource to use. The resource is loaded from classpath by default and must be in JSON format. | String | |
camel.component.rest.autowired-enabled | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | Boolean |
camel.component.rest.bridge-error-handler | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | Boolean |
camel.component.rest.consumer-component-name | The Camel Rest component to use for (consumer) the REST transport, such as jetty, servlet, undertow. If no component has been explicit configured, then Camel will lookup if there is a Camel component that integrates with the Rest DSL, or if a org.apache.camel.spi.RestConsumerFactory is registered in the registry. If either one is found, then that is being used. | String | |
camel.component.rest.enabled | Whether to enable auto configuration of the rest component. This is enabled by default. | Boolean | |
camel.component.rest.host | Host and port of HTTP service to use (override host in swagger schema). | String | |
camel.component.rest.lazy-start-producer | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | Boolean |
camel.component.rest.producer-component-name | The Camel Rest component to use for (producer) the REST transport, such as http, undertow. If no component has been explicit configured, then Camel will lookup if there is a Camel component that integrates with the Rest DSL, or if a org.apache.camel.spi.RestProducerFactory is registered in the registry. If either one is found, then that is being used. | String | |
camel.component.rest.component-name | Deprecated The Camel Rest component to use for (producer) the REST transport, such as http, undertow. If no component has been explicit configured, then Camel will lookup if there is a Camel component that integrates with the Rest DSL, or if a org.apache.camel.spi.RestProducerFactory is registered in the registry. If either one is found, then that is being used. | String |
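For illustration only, a minimal application.properties sketch (all values are placeholders, not from this guide) that sets a few of the options above could look like this:

# call REST services on this host, using the http component as the producer
camel.component.rest.host = myserver:8080/foo
camel.component.rest.producer-component-name = http
camel.component.rest.lazy-start-producer = true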
Chapter 45. Saga
Only producer is supported
The Saga component provides a bridge to execute custom actions within a route using the Saga EIP.
The component should be used for advanced tasks, such as deciding to complete or compensate a Saga with completionMode set to MANUAL.
Refer to the Saga EIP documentation for help on using sagas in common scenarios.
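As an illustration only (a minimal sketch, not from this guide), the following routes declare a saga with completionMode set to MANUAL and later complete it through the saga endpoint. The direct endpoint names are assumptions, a SagaService (for example the in-memory one) is expected to be registered in the CamelContext, and the exchange reaching saga:complete must still carry the saga identifier that Camel propagates for the current saga:

// route that starts a saga which is only completed manually
from("direct:newOrder")
    .saga()
        .completionMode(SagaCompletionMode.MANUAL)   // org.apache.camel.model.SagaCompletionMode
        .compensation("direct:cancelOrder")
        .timeout(10, TimeUnit.MINUTES)
    .to("direct:reserveCredit");

// called later by the application once the order has been finalized
from("direct:confirmOrder")
    .to("saga:complete");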
45.1. URI format
saga:action
45.2. Configuring Options
Camel components are configured on two separate levels:
- component level
- endpoint level
45.2.1. Configuring Component Options
The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth.
Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all.
Configuring components can be done with the Component DSL, in a configuration file (application.properties|yaml), or directly with Java code.
45.2.2. Configuring Endpoint Options
Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both.
Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints.
A good practice when configuring options is to use Property Placeholders, which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse.
The following two sections lists all the options, firstly for the component followed by the endpoint.
45.3. Component Options
The Saga component supports 2 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
autowiredEnabled (advanced) | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | boolean |
45.4. Endpoint Options
The Saga endpoint is configured using URI syntax:
saga:action
with the following path and query parameters:
45.4.1. Path Parameters (1 parameters)
Name | Description | Default | Type |
---|---|---|---|
action (producer) | Required Action to execute (complete or compensate). Enum values:
| SagaEndpointAction |
45.4.2. Query Parameters (1 parameters)
Name | Description | Default | Type |
---|---|---|---|
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
45.5. Spring Boot Auto-Configuration
When using saga with Spring Boot make sure to use the following Maven dependency to have support for auto configuration:
<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-saga-starter</artifactId> </dependency>
The component supports 3 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
camel.component.saga.autowired-enabled | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | Boolean |
camel.component.saga.enabled | Whether to enable auto configuration of the saga component. This is enabled by default. | Boolean | |
camel.component.saga.lazy-start-producer | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | Boolean |
Chapter 46. Salesforce
Both producer and consumer are supported
This component supports producer and consumer endpoints to communicate with Salesforce using Java DTOs.
There is a companion maven plugin Camel Salesforce Plugin that generates these DTOs (see further below).
Maven users will need to add the following dependency to their pom.xml
for this component:
<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-salesforce</artifactId> <version>${camel.version}</version> <!-- use the same version as your Camel core version --> </dependency>
Developers wishing to contribute to the component should look at the README.md file for instructions on how to get started and set up an environment for running the integration tests.
46.1. Configuring Options
Camel components are configured on two separate levels:
- component level
- endpoint level
46.1.1. Configuring Component Options
The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth.
Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all.
Configuring components can be done with the Component DSL, in a configuration file (application.properties|yaml), or directly with Java code.
46.1.2. Configuring Endpoint Options
Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both.
Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints.
A good practice when configuring options is to use Property Placeholders, which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse.
The following two sections lists all the options, firstly for the component followed by the endpoint.
46.2. Component Options
The Salesforce component supports 90 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
apexMethod (common) | APEX method name. | String | |
apexQueryParams (common) | Query params for APEX method. | Map | |
apiVersion (common) | Salesforce API version. | 53.0 | String |
backoffIncrement (common) | Backoff interval increment for Streaming connection restart attempts for failures beyond CometD auto-reconnect. | 1000 | long |
batchId (common) | Bulk API Batch ID. | String | |
contentType (common) | Bulk API content type, one of XML, CSV, ZIP_XML, ZIP_CSV. Enum values:
| ContentType | |
defaultReplayId (common) | Default replayId setting if no value is found in initialReplayIdMap. | -1 | Long |
fallBackReplayId (common) | ReplayId to fall back to after an Invalid Replay Id response. | -1 | Long |
format (common) | Payload format to use for Salesforce API calls, either JSON or XML, defaults to JSON. As of Camel 3.12, this option only applies to the Raw operation. Enum values:
| PayloadFormat | |
httpClient (common) | Custom Jetty Http Client to use to connect to Salesforce. | SalesforceHttpClient | |
httpClientConnectionTimeout (common) | Connection timeout used by the HttpClient when connecting to the Salesforce server. | 60000 | long |
httpClientIdleTimeout (common) | Timeout used by the HttpClient when waiting for response from the Salesforce server. | 10000 | long |
httpMaxContentLength (common) | Max content length of an HTTP response. | Integer | |
httpRequestBufferSize (common) | HTTP request buffer size. May need to be increased for large SOQL queries. | 8192 | Integer |
includeDetails (common) | Include details in Salesforce1 Analytics report, defaults to false. | Boolean | |
initialReplayIdMap (common) | Replay IDs to start from per channel name. | Map | |
instanceId (common) | Salesforce1 Analytics report execution instance ID. | String | |
jobId (common) | Bulk API Job ID. | String | |
limit (common) | Limit on number of returned records. Applicable to some of the API, check the Salesforce documentation. | Integer | |
locator (common) | Locator provided by salesforce Bulk 2.0 API for use in getting results for a Query job. | String | |
maxBackoff (common) | Maximum backoff interval for Streaming connection restart attempts for failures beyond CometD auto-reconnect. | 30000 | long |
maxRecords (common) | The maximum number of records to retrieve per set of results for a Bulk 2.0 Query. The request is still subject to the size limits. If you are working with a very large number of query results, you may experience a timeout before receiving all the data from Salesforce. To prevent a timeout, specify the maximum number of records your client is expecting to receive in the maxRecords parameter. This splits the results into smaller sets with this value as the maximum size. | Integer | |
notFoundBehaviour (common) | Sets the behaviour of 404 not found status received from Salesforce API. Should the body be set to NULL NotFoundBehaviour#NULL or should an exception be signaled on the exchange NotFoundBehaviour#EXCEPTION - the default. Enum values:
| EXCEPTION | NotFoundBehaviour |
notifyForFields (common) | Notify for fields, options are ALL, REFERENCED, SELECT, WHERE. Enum values:
| NotifyForFieldsEnum | |
notifyForOperationCreate (common) | Notify for create operation, defaults to false (API version >= 29.0). | Boolean |
notifyForOperationDelete (common) | Notify for delete operation, defaults to false (API version >= 29.0). | Boolean |
notifyForOperations (common) | Notify for operations, options are ALL, CREATE, EXTENDED, UPDATE (API version < 29.0). Enum values:
| NotifyForOperationsEnum | |
notifyForOperationUndelete (common) | Notify for un-delete operation, defaults to false (API version >= 29.0). | Boolean |
notifyForOperationUpdate (common) | Notify for update operation, defaults to false (API version >= 29.0). | Boolean |
objectMapper (common) | Custom Jackson ObjectMapper to use when serializing/deserializing Salesforce objects. | ObjectMapper | |
packages (common) | In what packages are the generated DTO classes. Typically the classes would be generated using camel-salesforce-maven-plugin. Set it if using the generated DTOs to gain the benefit of using short SObject names in parameters/header values. Multiple packages can be separated by comma. | String | |
pkChunking (common) | Use PK Chunking. Only for use in original Bulk API. Bulk 2.0 API performs PK chunking automatically, if necessary. | Boolean | |
pkChunkingChunkSize (common) | Chunk size for use with PK Chunking. If unspecified, salesforce default is 100,000. Maximum size is 250,000. | Integer | |
pkChunkingParent (common) | Specifies the parent object when you’re enabling PK chunking for queries on sharing objects. The chunks are based on the parent object’s records rather than the sharing object’s records. For example, when querying on AccountShare, specify Account as the parent object. PK chunking is supported for sharing objects as long as the parent object is supported. | String | |
pkChunkingStartRow (common) | Specifies the 15-character or 18-character record ID to be used as the lower boundary for the first chunk. Use this parameter to specify a starting ID when restarting a job that failed between batches. | String | |
queryLocator (common) | Query Locator provided by salesforce for use when a query results in more records than can be retrieved in a single call. Use this value in a subsequent call to retrieve additional records. | String | |
rawPayload (common) | Use raw payload String for request and response (either JSON or XML depending on format), instead of DTOs, false by default. | false | boolean |
reportId (common) | Salesforce1 Analytics report Id. | String | |
reportMetadata (common) | Salesforce1 Analytics report metadata for filtering. | ReportMetadata | |
resultId (common) | Bulk API Result ID. | String | |
sObjectBlobFieldName (common) | SObject blob field name. | String | |
sObjectClass (common) | Fully qualified SObject class name, usually generated using camel-salesforce-maven-plugin. | String | |
sObjectFields (common) | SObject fields to retrieve. | String | |
sObjectId (common) | SObject ID if required by API. | String | |
sObjectIdName (common) | SObject external ID field name. | String | |
sObjectIdValue (common) | SObject external ID field value. | String | |
sObjectName (common) | SObject name if required or supported by API. | String | |
sObjectQuery (common) | Salesforce SOQL query string. | String | |
sObjectSearch (common) | Salesforce SOSL search string. | String | |
updateTopic (common) | Whether to update an existing Push Topic when using the Streaming API, defaults to false. | false | boolean |
config (common (advanced)) | Global endpoint configuration - use to set values that are common to all endpoints. | SalesforceEndpointConfig | |
httpClientProperties (common (advanced)) | Used to set any properties that can be configured on the underlying HTTP client. Have a look at properties of SalesforceHttpClient and the Jetty HttpClient for all available options. | Map | |
longPollingTransportProperties (common (advanced)) | Used to set any properties that can be configured on the LongPollingTransport used by the BayeuxClient (CometD) used by the streaming api. | Map | |
workerPoolMaxSize (common (advanced)) | Maximum size of the thread pool used to handle HTTP responses. | 20 | int |
workerPoolSize (common (advanced)) | Size of the thread pool used to handle HTTP responses. | 10 | int |
bridgeErrorHandler (consumer) | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | boolean |
allOrNone (producer) | Composite API option to indicate to rollback all records if any are not successful. | false | boolean |
apexUrl (producer) | APEX method URL. | String | |
compositeMethod (producer) | Composite (raw) method. | String | |
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
rawHttpHeaders (producer) | Comma separated list of message headers to include as HTTP parameters for Raw operation. | String | |
rawMethod (producer) | HTTP method to use for the Raw operation. | String | |
rawPath (producer) | The portion of the endpoint URL after the domain name. E.g., '/services/data/v52.0/sobjects/Account/'. | String | |
rawQueryParameters (producer) | Comma separated list of message headers to include as query parameters for Raw operation. Do not url-encode values as this will be done automatically. | String | |
autowiredEnabled (advanced) | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | boolean |
httpProxyExcludedAddresses (proxy) | A list of addresses for which HTTP proxy server should not be used. | Set | |
httpProxyHost (proxy) | Hostname of the HTTP proxy server to use. | String | |
httpProxyIncludedAddresses (proxy) | A list of addresses for which HTTP proxy server should be used. | Set | |
httpProxyPort (proxy) | Port number of the HTTP proxy server to use. | Integer | |
httpProxySocks4 (proxy) | If set to true, configures the HTTP proxy to be used as a SOCKS4 proxy. | false | boolean |
authenticationType (security) | Explicit authentication method to be used, one of USERNAME_PASSWORD, REFRESH_TOKEN or JWT. Salesforce component can auto-determine the authentication method to use from the properties set, set this property to eliminate any ambiguity. Enum values:
| AuthenticationType | |
clientId (security) | Required OAuth Consumer Key of the connected app configured in the Salesforce instance setup. Typically a connected app needs to be configured but one can be provided by installing a package. | String | |
clientSecret (security) | OAuth Consumer Secret of the connected app configured in the Salesforce instance setup. | String | |
httpProxyAuthUri (security) | Used in authentication against the HTTP proxy server, needs to match the URI of the proxy server in order for the httpProxyUsername and httpProxyPassword to be used for authentication. | String | |
httpProxyPassword (security) | Password to use to authenticate against the HTTP proxy server. | String | |
httpProxyRealm (security) | Realm of the proxy server, used in preemptive Basic/Digest authentication methods against the HTTP proxy server. | String | |
httpProxySecure (security) | If set to false disables the use of TLS when accessing the HTTP proxy. | true | boolean |
httpProxyUseDigestAuth (security) | If set to true Digest authentication will be used when authenticating to the HTTP proxy, otherwise Basic authorization method will be used. | false | boolean |
httpProxyUsername (security) | Username to use to authenticate against the HTTP proxy server. | String | |
instanceUrl (security) | URL of the Salesforce instance used after authentication, by default received from Salesforce on successful authentication. | String | |
jwtAudience (security) | Value to use for the Audience claim (aud) when using OAuth JWT flow. If not set, the login URL will be used, which is appropriate in most cases. | String | |
keystore (security) | KeyStore parameters to use in OAuth JWT flow. The KeyStore should contain only one entry with private key and certificate. Salesforce does not verify the certificate chain, so this can easily be a selfsigned certificate. Make sure that you upload the certificate to the corresponding connected app. | KeyStoreParameters | |
lazyLogin (security) | If set to true prevents the component from authenticating to Salesforce with the start of the component. You would generally set this to the (default) false and authenticate early and be immediately aware of any authentication issues. | false | boolean |
loginConfig (security) | All authentication configuration in one nested bean, all properties set there can be set directly on the component as well. | SalesforceLoginConfig | |
loginUrl (security) | Required URL of the Salesforce instance used for authentication, by default set to https://login.salesforce.com. | String | |
password (security) | Password used in OAuth flow to gain access to access token. It’s easy to get started with password OAuth flow, but in general one should avoid it as it is deemed less secure than other flows. Make sure that you append security token to the end of the password if using one. | String | |
refreshToken (security) | Refresh token already obtained in the refresh token OAuth flow. One needs to set up a web application and configure a callback URL to receive the refresh token, or configure using the built-in callback at https://login.salesforce.com/services/oauth2/success or https://test.salesforce.com/services/oauth2/success and then retrieve the refresh_token from the URL at the end of the flow. Note that in development organizations Salesforce allows hosting the callback web application at localhost. | String |
sslContextParameters (security) | SSL parameters to use, see SSLContextParameters class for all available options. | SSLContextParameters | |
useGlobalSslContextParameters (security) | Enable usage of global SSL context parameters. | false | boolean |
userName (security) | Username used in OAuth flow to gain access to access token. It’s easy to get started with password OAuth flow, but in general one should avoid it as it is deemed less secure than other flows. | String |
46.3. Endpoint Options
The Salesforce endpoint is configured using URI syntax:
salesforce:operationName:topicName
with the following path and query parameters:
46.3.1. Path Parameters (2 parameters)
Name | Description | Default | Type |
---|---|---|---|
operationName (producer) | The operation to use. Enum values:
| OperationName | |
topicName (consumer) | The name of the topic/channel to use. | String |
46.3.2. Query Parameters (57 parameters)
Name | Description | Default | Type |
---|---|---|---|
apexMethod (common) | APEX method name. | String | |
apexQueryParams (common) | Query params for APEX method. | Map | |
apiVersion (common) | Salesforce API version. | 53.0 | String |
backoffIncrement (common) | Backoff interval increment for Streaming connection restart attempts for failures beyond CometD auto-reconnect. | 1000 | long |
batchId (common) | Bulk API Batch ID. | String | |
contentType (common) | Bulk API content type, one of XML, CSV, ZIP_XML, ZIP_CSV. Enum values:
| ContentType | |
defaultReplayId (common) | Default replayId setting if no value is found in initialReplayIdMap. | -1 | Long |
fallBackReplayId (common) | ReplayId to fall back to after an Invalid Replay Id response. | -1 | Long |
format (common) | Payload format to use for Salesforce API calls, either JSON or XML, defaults to JSON. As of Camel 3.12, this option only applies to the Raw operation. Enum values:
| PayloadFormat | |
httpClient (common) | Custom Jetty Http Client to use to connect to Salesforce. | SalesforceHttpClient | |
includeDetails (common) | Include details in Salesforce1 Analytics report, defaults to false. | Boolean | |
initialReplayIdMap (common) | Replay IDs to start from per channel name. | Map | |
instanceId (common) | Salesforce1 Analytics report execution instance ID. | String | |
jobId (common) | Bulk API Job ID. | String | |
limit (common) | Limit on number of returned records. Applicable to some of the API, check the Salesforce documentation. | Integer | |
locator (common) | Locator provided by salesforce Bulk 2.0 API for use in getting results for a Query job. | String | |
maxBackoff (common) | Maximum backoff interval for Streaming connection restart attempts for failures beyond CometD auto-reconnect. | 30000 | long |
maxRecords (common) | The maximum number of records to retrieve per set of results for a Bulk 2.0 Query. The request is still subject to the size limits. If you are working with a very large number of query results, you may experience a timeout before receiving all the data from Salesforce. To prevent a timeout, specify the maximum number of records your client is expecting to receive in the maxRecords parameter. This splits the results into smaller sets with this value as the maximum size. | Integer | |
notFoundBehaviour (common) | Sets the behaviour of 404 not found status received from Salesforce API. Should the body be set to NULL NotFoundBehaviour#NULL or should an exception be signaled on the exchange NotFoundBehaviour#EXCEPTION - the default. Enum values:
| EXCEPTION | NotFoundBehaviour |
notifyForFields (common) | Notify for fields, options are ALL, REFERENCED, SELECT, WHERE. Enum values:
| NotifyForFieldsEnum | |
notifyForOperationCreate (common) | Notify for create operation, defaults to false (API version >= 29.0). | Boolean |
notifyForOperationDelete (common) | Notify for delete operation, defaults to false (API version >= 29.0). | Boolean |
notifyForOperations (common) | Notify for operations, options are ALL, CREATE, EXTENDED, UPDATE (API version < 29.0). Enum values:
| NotifyForOperationsEnum | |
notifyForOperationUndelete (common) | Notify for un-delete operation, defaults to false (API version >= 29.0). | Boolean |
notifyForOperationUpdate (common) | Notify for update operation, defaults to false (API version >= 29.0). | Boolean |
objectMapper (common) | Custom Jackson ObjectMapper to use when serializing/deserializing Salesforce objects. | ObjectMapper | |
pkChunking (common) | Use PK Chunking. Only for use in original Bulk API. Bulk 2.0 API performs PK chunking automatically, if necessary. | Boolean | |
pkChunkingChunkSize (common) | Chunk size for use with PK Chunking. If unspecified, salesforce default is 100,000. Maximum size is 250,000. | Integer | |
pkChunkingParent (common) | Specifies the parent object when you’re enabling PK chunking for queries on sharing objects. The chunks are based on the parent object’s records rather than the sharing object’s records. For example, when querying on AccountShare, specify Account as the parent object. PK chunking is supported for sharing objects as long as the parent object is supported. | String | |
pkChunkingStartRow (common) | Specifies the 15-character or 18-character record ID to be used as the lower boundary for the first chunk. Use this parameter to specify a starting ID when restarting a job that failed between batches. | String | |
queryLocator (common) | Query Locator provided by salesforce for use when a query results in more records than can be retrieved in a single call. Use this value in a subsequent call to retrieve additional records. | String | |
rawPayload (common) | Use raw payload String for request and response (either JSON or XML depending on format), instead of DTOs, false by default. | false | boolean |
reportId (common) | Salesforce1 Analytics report Id. | String | |
reportMetadata (common) | Salesforce1 Analytics report metadata for filtering. | ReportMetadata | |
resultId (common) | Bulk API Result ID. | String | |
sObjectBlobFieldName (common) | SObject blob field name. | String | |
sObjectClass (common) | Fully qualified SObject class name, usually generated using camel-salesforce-maven-plugin. | String | |
sObjectFields (common) | SObject fields to retrieve. | String | |
sObjectId (common) | SObject ID if required by API. | String | |
sObjectIdName (common) | SObject external ID field name. | String | |
sObjectIdValue (common) | SObject external ID field value. | String | |
sObjectName (common) | SObject name if required or supported by API. | String | |
sObjectQuery (common) | Salesforce SOQL query string. | String | |
sObjectSearch (common) | Salesforce SOSL search string. | String | |
updateTopic (common) | Whether to update an existing Push Topic when using the Streaming API, defaults to false. | false | boolean |
bridgeErrorHandler (consumer) | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | boolean |
replayId (consumer) | The replayId value to use when subscribing. | Long | |
exceptionHandler (consumer (advanced)) | To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. | ExceptionHandler | |
exchangePattern (consumer (advanced)) | Sets the exchange pattern when the consumer creates an exchange. Enum values:
| ExchangePattern | |
allOrNone (producer) | Composite API option to indicate to rollback all records if any are not successful. | false | boolean |
apexUrl (producer) | APEX method URL. | String | |
compositeMethod (producer) | Composite (raw) method. | String | |
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
rawHttpHeaders (producer) | Comma separated list of message headers to include as HTTP parameters for Raw operation. | String | |
rawMethod (producer) | HTTP method to use for the Raw operation. | String | |
rawPath (producer) | The portion of the endpoint URL after the domain name. E.g., '/services/data/v52.0/sobjects/Account/'. | String | |
rawQueryParameters (producer) | Comma separated list of message headers to include as query parameters for Raw operation. Do not url-encode values as this will be done automatically. | String |
46.4. Authenticating to Salesforce
The component supports three OAuth authentication flows: Username-Password, Refresh Token, and JWT Bearer Token.
A different set of properties needs to be set for each flow:
Property | Where to find it on Salesforce | Flow |
---|---|---|
clientId | Connected App, Consumer Key | All flows |
clientSecret | Connected App, Consumer Secret | Username-Password, Refresh Token |
userName | Salesforce user username | Username-Password, JWT Bearer Token |
password | Salesforce user password | Username-Password |
refreshToken | From OAuth flow callback | Refresh Token |
keystore | Connected App, Digital Certificate | JWT Bearer Token |
The component auto-determines which flow you are trying to configure. To remove any ambiguity, set the authenticationType property.
Using Username-Password Flow in production is not encouraged.
The certificate used in the JWT Bearer Token Flow can be a self-signed certificate. The KeyStore holding the certificate and the private key must contain only a single certificate-private key entry.
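For illustration only, a minimal application.properties sketch for the Username-Password flow might look like the following; all values are placeholders and the property names assume the standard camel.component.salesforce.* Spring Boot auto-configuration naming of the component options above:

camel.component.salesforce.authentication-type = USERNAME_PASSWORD
camel.component.salesforce.client-id = <connected app consumer key>
camel.component.salesforce.client-secret = <connected app consumer secret>
camel.component.salesforce.user-name = user@example.com
camel.component.salesforce.password = passwordWithSecurityTokenAppended
camel.component.salesforce.login-url = https://login.salesforce.com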
46.5. URI format
When used as a consumer, receiving streaming events, the URI scheme is:
salesforce:topic?options
When used as a producer, invoking the Salesforce REST APIs, the URI scheme is:
salesforce:operationName?options
46.6. Passing in Salesforce headers and fetching Salesforce response headers
There is support for passing Salesforce headers via inbound message headers: header names that start with Sforce or x-sfdc on the Camel message are passed on in the request, and response headers that start with Sforce are present in the outbound message headers.
For example to fetch API limits you can specify:
// in your Camel route set the header before the Salesforce endpoint
//...
    .setHeader("Sforce-Limit-Info", constant("api-usage"))
    .to("salesforce:getGlobalObjects")
    .to(myProcessor);

// myProcessor will receive the `Sforce-Limit-Info` header on the outbound
// message
class MyProcessor implements Processor {
    public void process(Exchange exchange) throws Exception {
        Message in = exchange.getIn();
        String apiLimits = in.getHeader("Sforce-Limit-Info", String.class);
    }
}
In addition, the HTTP response status code and text are available as the headers Exchange.HTTP_RESPONSE_CODE and Exchange.HTTP_RESPONSE_TEXT.
46.7. Supported Salesforce APIs
The component supports the following Salesforce APIs.
Producer endpoints can use the following APIs. Most of the APIs process one record at a time; the Query API can retrieve multiple records.
46.7.1. Rest API
You can use the following for operationName:
- getVersions - Gets supported Salesforce REST API versions
- getResources - Gets available Salesforce REST Resource endpoints
- getGlobalObjects - Gets metadata for all available SObject types
- getBasicInfo - Gets basic metadata for a specific SObject type
- getDescription - Gets comprehensive metadata for a specific SObject type
- getSObject - Gets an SObject using its Salesforce Id
- createSObject - Creates an SObject
- updateSObject - Updates an SObject using Id
- deleteSObject - Deletes an SObject using Id
- getSObjectWithId - Gets an SObject using an external (user defined) id field
- upsertSObject - Updates or inserts an SObject using an external id
- deleteSObjectWithId - Deletes an SObject using an external id
- query - Runs a Salesforce SOQL query
- queryMore - Retrieves more results (in case of large number of results) using result link returned from the 'query' API
- search - Runs a Salesforce SOSL query
- limits - fetching organization API usage limits
- recent - fetching recent items
- approval - submit a record or records (batch) for approval process
- approvals - fetch a list of all approval processes
- composite - submit up to 25 possibly related REST requests and receive individual responses. It’s also possible to use "raw" composite without limitation.
- composite-tree - create up to 200 records with parent-child relationships (up to 5 levels) in one go
- composite-batch - submit a composition of requests in batch
- compositeRetrieveSObjectCollections - Retrieve one or more records of the same object type.
- compositeCreateSObjectCollections - Add up to 200 records, returning a list of SaveSObjectResult objects.
- compositeUpdateSObjectCollections - Update up to 200 records, returning a list of SaveSObjectResult objects.
- compositeUpsertSObjectCollections - Create or update (upsert) up to 200 records based on an external ID field. Returns a list of UpsertSObjectResult objects.
- compositeDeleteSObjectCollections - Delete up to 200 records, returning a list of SaveSObjectResult objects.
- queryAll - Runs a SOQL query. It returns the results that are deleted because of a merge (merges up to three records into one of the records, deletes the others, and reparents any related records) or delete. Also returns the information about archived Task and Event records.
- getBlobField - Retrieves the specified blob field from an individual record.
- apexCall - Executes a user defined APEX REST API call.
- raw - Send requests to salesforce and have full, raw control over endpoint, parameters, body, etc.
For example, the following producer endpoint uses the upsertSObject API, with the sObjectIdName parameter specifying 'Name' as the external id field. The request message body should be an SObject DTO generated using the maven plugin. The response message will be either null if an existing record was updated, a CreateSObjectResult with the id of the new record, or a list of errors encountered while creating the new object.
...to("salesforce:upsertSObject?sObjectIdName=Name")...
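As a further illustration (a sketch, not from this guide), a SOQL query can be issued with the query operation; the QueryRecordsAccount class below stands for a query-records DTO generated by the camel-salesforce-maven-plugin, and its package name is an assumption:

from("direct:queryAccounts")
    .to("salesforce:query?sObjectQuery=SELECT Id, Name FROM Account"
        + "&sObjectClass=org.example.salesforce.dto.QueryRecordsAccount");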
46.7.2. Bulk 2.0 API
The Bulk 2.0 API has a simplified model over the original Bulk API. Use it to quickly load a large amount of data into Salesforce, or query a large amount of data out of Salesforce. Data must be provided in CSV format. The minimum API version for Bulk 2.0 is v41.0. The minimum API version for Bulk Queries is v47.0. DTO classes mentioned below are from the org.apache.camel.component.salesforce.api.dto.bulkv2 package. The following operations are supported (a short usage sketch follows the list):
- bulk2CreateJob - Create a bulk job. Supply an instance of Job in the message body.
- bulk2GetJob - Get an existing Job. jobId parameter is required.
- bulk2CreateBatch - Add a Batch of CSV records to a job. Supply CSV data in the message body. The first row must contain headers. jobId parameter is required.
- bulk2CloseJob - Close a job. You must close the job in order for it to be processed or aborted/deleted. jobId parameter is required.
- bulk2AbortJob - Abort a job. jobId parameter is required.
- bulk2DeleteJob - Delete a job. jobId parameter is required.
- bulk2GetSuccessfulResults - Get successful results for a job. Returned message body will contain an InputStream of CSV data. jobId parameter is required.
- bulk2GetFailedResults - Get failed results for a job. Returned message body will contain an InputStream of CSV data. jobId parameter is required.
- bulk2GetUnprocessedRecords - Get unprocessed records for a job. Returned message body will contain an InputStream of CSV data. jobId parameter is required.
- bulk2GetAllJobs - Get all jobs. Response body is an instance of Jobs. If the done property is false, there are additional pages to fetch, and the nextRecordsUrl property contains the value to be set in the queryLocator parameter on subsequent calls.
- bulk2CreateQueryJob - Create a bulk query job. Supply an instance of QueryJob in the message body.
- bulk2GetQueryJob - Get a bulk query job. jobId parameter is required.
- bulk2GetQueryJobResults - Get bulk query job results. jobId parameter is required. Accepts maxRecords and locator parameters. Response message headers include the Sforce-NumberOfRecords and Sforce-Locator headers. The value of Sforce-Locator can be passed into subsequent calls via the locator parameter.
- bulk2AbortQueryJob - Abort a bulk query job. jobId parameter is required.
- bulk2DeleteQueryJob - Delete a bulk query job. jobId parameter is required.
- bulk2GetAllQueryJobs - Get all jobs. Response body is an instance of QueryJobs. If the done property is false, there are additional pages to fetch, and the nextRecordsUrl property contains the value to be set in the queryLocator parameter on subsequent calls.
46.7.3. Rest Bulk (original) API
Producer endpoints can use the following APIs. All Job data formats, i.e. xml, csv, zip/xml, and zip/csv, are supported.
The request and response have to be marshalled/unmarshalled by the route. Usually the request will be some stream source, like a CSV file, and the response may also be saved to a file to be correlated with the request.
You can use the following for operationName:
- createJob - Creates a Salesforce Bulk Job. Must supply a JobInfo instance in body. PK Chunking is supported via the pkChunking* options. See an explanation here.
- getJob - Gets a Job using its Salesforce Id
- closeJob - Closes a Job
- abortJob - Aborts a Job
- createBatch - Submits a Batch within a Bulk Job
- getBatch - Gets a Batch using Id
- getAllBatches - Gets all Batches for a Bulk Job Id
- getRequest - Gets Request data (XML/CSV) for a Batch
- getResults - Gets the results of the Batch when its complete
- createBatchQuery - Creates a Batch from an SOQL query
- getQueryResultIds - Gets a list of Result Ids for a Batch Query
- getQueryResult - Gets results for a Result Id
- getRecentReports - Gets up to 200 of the reports you most recently viewed by sending a GET request to the Report List resource.
- getReportDescription - Retrieves the report, report type, and related metadata for a report, either in a tabular or summary or matrix format.
- executeSyncReport - Runs a report synchronously with or without changing filters and returns the latest summary data.
- executeAsyncReport - Runs an instance of a report asynchronously with or without filters and returns the summary data with or without details.
- getReportInstances - Returns a list of instances for a report that you requested to be run asynchronously. Each item in the list is treated as a separate instance of the report.
- getReportResults: Contains the results of running a report.
For example, the following producer endpoint uses the createBatch API to create a Job Batch. The in message must contain a body that can be converted into an InputStream (usually UTF-8 CSV or XML content from a file, etc.) and header fields 'jobId' for the Job and 'contentType' for the Job content type, which can be XML, CSV, ZIP_XML or ZIP_CSV. The out message body will contain BatchInfo on success, or a SalesforceException will be thrown on error.
...to("salesforce:createBatch")...
46.7.4. Rest Streaming API
Consumer endpoints can use the following syntax for streaming endpoints to receive Salesforce notifications on create/update.
To create and subscribe to a topic
from("salesforce:CamelTestTopic?notifyForFields=ALL&notifyForOperations=ALL&sObjectName=Merchandise__c&updateTopic=true&sObjectQuery=SELECT Id, Name FROM Merchandise__c")...
To subscribe to an existing topic
from("salesforce:CamelTestTopic?sObjectName=Merchandise__c")...
46.7.5. Platform events
To emit a platform event, use the createSObject operation and set the message body either to a JSON string or an InputStream with key-value data (in that case, sObjectName needs to be set to the API name of the event), or to a class that extends AbstractDTOBase with the appropriate class name for the event.
For example using a DTO:
class Order_Event__e extends AbstractDTOBase {
    @JsonProperty("OrderNumber")
    private String orderNumber;

    // ... other properties and getters/setters
}

from("timer:tick")
    .process(exchange -> {
        final Message in = exchange.getIn();
        String orderNumber = "ORD" + exchange.getProperty(Exchange.TIMER_COUNTER);

        Order_Event__e event = new Order_Event__e();
        event.setOrderNumber(orderNumber);

        in.setBody(event);
    })
    .to("salesforce:createSObject");
Or using JSON event data:
from("timer:tick")
    .process(exchange -> {
        final Message in = exchange.getIn();
        String orderNumber = "ORD" + exchange.getProperty(Exchange.TIMER_COUNTER);
        in.setBody("{\"OrderNumber\":\"" + orderNumber + "\"}");
    })
    .to("salesforce:createSObject?sObjectName=Order_Event__e");
To receive platform events, use the consumer endpoint with the API name of the platform event prefixed with event/ (or /event/), e.g. salesforce:event/Order_Event__e. The processor consuming from that endpoint will receive either an org.apache.camel.component.salesforce.api.dto.PlatformEvent object or an org.cometd.bayeux.Message in the body, depending on rawPayload being false or true respectively.
For example, in the simplest form to consume one event:
PlatformEvent event = consumer.receiveBody("salesforce:event/Order_Event__e", PlatformEvent.class);
46.7.6. Change data capture events
On the one hand, Salesforce could be configured to emit notifications for record changes of select objects. On the other hand, the Camel Salesforce component could react to such notifications, allowing for instance to synchronize those changes into an external system.
The notifications of interest can be specified in the from("salesforce:XXX") clause of a Camel route via the subscription channel, e.g.:
from("salesforce:data/ChangeEvents?replayId=-1").log("being notified of all change events");
from("salesforce:data/AccountChangeEvent?replayId=-1").log("being notified of change events for Account records");
from("salesforce:data/Employee__ChangeEvent?replayId=-1").log("being notified of change events for Employee__c custom object");
The received message contains either a java.util.Map<String,Object> or an org.cometd.bayeux.Message in the body, depending on rawPayload being false or true respectively. The CamelSalesforceChangeType header can have one of the values CREATE, UPDATE, DELETE or UNDELETE.
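For illustration only (a sketch, not from this guide, assuming rawPayload is left at its default of false so the body is a Map), a route could branch on that header:

from("salesforce:data/AccountChangeEvent?replayId=-1")
    .choice()
        .when(header("CamelSalesforceChangeType").isEqualTo("CREATE"))
            .log("Account created: ${body}")
        .when(header("CamelSalesforceChangeType").isEqualTo("DELETE"))
            .log("Account deleted: ${body}")
        .otherwise()
            .log("Account changed (${header.CamelSalesforceChangeType}): ${body}");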
More details about how to use the change data capture capabilities of the Camel Salesforce component can be found in the ChangeEventsConsumerIntegrationTest.
The Salesforce developer guide is a good resource for learning the subtleties of implementing a change data capture integration application. The dynamic nature of change event body fields, the high-level replication steps, as well as the security considerations may be of interest.
46.8. Examples
46.8.1. Uploading a document to a ContentWorkspace
Create the ContentVersion in Java, using a Processor instance:
public class ContentProcessor implements Processor {
    public void process(Exchange exchange) throws Exception {
        Message message = exchange.getIn();

        ContentVersion cv = new ContentVersion();
        ContentWorkspace cw = getWorkspace(exchange);
        cv.setFirstPublishLocationId(cw.getId());
        cv.setTitle("test document");
        cv.setPathOnClient("test_doc.html");

        byte[] document = message.getBody(byte[].class);
        ObjectMapper mapper = new ObjectMapper();
        String enc = mapper.convertValue(document, String.class);
        cv.setVersionDataUrl(enc);

        message.setBody(cv);
    }

    protected ContentWorkspace getWorkspace(Exchange exchange) {
        // Look up the content workspace somehow, maybe use enrich() to add it to a
        // header that can be extracted here
        ----
    }
}
Give the output from the processor to the Salesforce component:
from("file:///home/camel/library")
    .process(new ContentProcessor()) // convert bytes from the file into a ContentVersion SObject
                                     // for the salesforce component
    .to("salesforce:createSObject");
46.9. Using Salesforce Limits API
With the salesforce:limits operation you can fetch the API limits from Salesforce and then act upon the data received. The result of the salesforce:limits operation is mapped to the org.apache.camel.component.salesforce.api.dto.Limits class and can be used in custom processors or expressions.
For instance, consider that you need to limit the API usage of Salesforce so that 10% of daily API requests are left for other routes. The body of the output message contains an instance of the org.apache.camel.component.salesforce.api.dto.Limits object that can be used in conjunction with the Content Based Router and the Spring Expression Language (SpEL) to choose when to perform queries.
Notice how multiplying 1.0 with the integer value held in body.dailyApiRequests.remaining makes the expression evaluate using floating point arithmetic; without it, integer division would result in either 0 (some API limits consumed) or 1 (no API limits consumed).
from("direct:querySalesforce")
    .to("salesforce:limits")
    .choice()
        .when(spel("#{1.0 * body.dailyApiRequests.remaining / body.dailyApiRequests.max < 0.1}"))
            .setBody(constant("Used up Salesforce API limits, leaving 10% for critical routes"))
        .otherwise()
            .to("salesforce:query?...")
    .endChoice()
46.10. Working with approvals
All the properties are named exactly the same as in the Salesforce REST API, prefixed with approval. (for example, approval.ContextId). You can set approval properties by setting approval.PropertyName on the Endpoint; these will be used as a template, meaning that any property not present in either the body or the headers will be taken from the Endpoint configuration. Alternatively, you can set the approval template on the Endpoint by assigning the approval property to a reference to a bean in the Registry.
You can also provide header values using the same approval.PropertyName in the incoming message headers.
Finally, the body can contain one ApprovalRequest or an Iterable of ApprovalRequest objects to process as a batch.
The important thing to remember is the priority of the values specified in these three mechanisms:
- the value in the body takes precedence over any other
- a value in a message header takes precedence over the template value
- the value in the template is used if no other value was given in a header or the body
For example, to send one record for approval using values in headers:
Given a route:
from("direct:example1")//
    .setHeader("approval.ContextId", simple("${body['contextId']}"))
    .setHeader("approval.NextApproverIds", simple("${body['nextApproverIds']}"))
    .to("salesforce:approval?"//
        + "approval.actionType=Submit"//
        + "&approval.comments=this is a test"//
        + "&approval.processDefinitionNameOrId=Test_Account_Process"//
        + "&approval.skipEntryCriteria=true");
You could send a record for approval using:
final Map<String, String> body = new HashMap<>();
body.put("contextId", accountIds.iterator().next());
body.put("nextApproverIds", userId);

final ApprovalResult result = template.requestBody("direct:example1", body, ApprovalResult.class);
46.11. Using Salesforce Recent Items API
To fetch the recent items, use the salesforce:recent operation. This operation returns a java.util.List of org.apache.camel.component.salesforce.api.dto.RecentItem objects (List<RecentItem>) that in turn contain the Id, Name and Attributes (with type and url properties). You can limit the number of returned items by setting the limit parameter to the maximum number of records to return. For example:
from("direct:fetchRecentItems")
    .to("salesforce:recent")
    .split().body()
        .log("${body.name} at ${body.attributes.url}");
46.12. Using Salesforce Composite API to submit SObject tree
To create up to 200 records, including parent-child relationships, use the salesforce:composite-tree operation. This requires an instance of org.apache.camel.component.salesforce.api.dto.composite.SObjectTree in the input message and returns the same tree of objects in the output message. The org.apache.camel.component.salesforce.api.dto.AbstractSObjectBase instances within the tree get updated with the identifier values (Id property), or their corresponding org.apache.camel.component.salesforce.api.dto.composite.SObjectNode is populated with errors on failure.
Note that the operation can succeed for some records and fail for others, so you need to manually check for errors.
The easiest way to use this functionality is to use the DTOs generated by the camel-salesforce-maven-plugin, but you also have the option of customizing the references that identify each object in the tree, for instance primary keys from your database.
Let's look at an example:
Account account = ...
Contact president = ...
Contact marketing = ...

Account anotherAccount = ...
Contact sales = ...
Asset someAsset = ...

// build the tree
SObjectTree request = new SObjectTree();
request.addObject(account).addChildren(president, marketing);
request.addObject(anotherAccount).addChild(sales).addChild(someAsset);

final SObjectTree response = template.requestBody("salesforce:composite-tree", request, SObjectTree.class);
final Map<Boolean, List<SObjectNode>> result = response.allNodes()
    .collect(Collectors.groupingBy(SObjectNode::hasErrors));

final List<SObjectNode> withErrors = result.get(true);
final List<SObjectNode> succeeded = result.get(false);

final String firstId = succeeded.get(0).getId();
46.13. Using Salesforce Composite API to submit multiple requests in a batch
The Composite API batch operation (composite-batch
) allows you to accumulate multiple requests in a batch and then submit them in one go, saving the round trip cost of multiple individual requests. Each response is then received in a list of responses with the order preserved, so that the n-th request's response is in the n-th place of the response list.
The results can vary from API to API, so the result of the request is given as a java.lang.Object. In most cases the result will be a java.util.Map with string keys and values, or another java.util.Map as a value. Requests are made in JSON format and hold some type information (i.e. it is known which values are strings and which are numbers).
Let's look at an example:
final String accountId = ...

final SObjectBatch batch = new SObjectBatch("38.0");

final Account updates = new Account();
updates.setName("NewName");
batch.addUpdate("Account", accountId, updates);

final Account newAccount = new Account();
newAccount.setName("Account created from Composite batch API");
batch.addCreate(newAccount);

batch.addGet("Account", accountId, "Name", "BillingPostalCode");

batch.addDelete("Account", accountId);

final SObjectBatchResponse response = template.requestBody("salesforce:composite-batch", batch, SObjectBatchResponse.class);

boolean hasErrors = response.hasErrors(); // if any of the requests has resulted in either 4xx or 5xx HTTP status
final List<SObjectBatchResult> results = response.getResults(); // results of the four operations sent in the batch

final SObjectBatchResult updateResult = results.get(0); // update result
final int updateStatus = updateResult.getStatusCode(); // probably 204
final Object updateResultData = updateResult.getResult(); // probably null

final SObjectBatchResult createResult = results.get(1); // create result
@SuppressWarnings("unchecked")
final Map<String, Object> createData = (Map<String, Object>) createResult.getResult();
final String newAccountId = (String) createData.get("id"); // id of the new account, this is for JSON, for XML it would be createData.get("Result").get("id")

final SObjectBatchResult retrieveResult = results.get(2); // retrieve result
@SuppressWarnings("unchecked")
final Map<String, Object> retrieveData = (Map<String, Object>) retrieveResult.getResult();
final String accountName = (String) retrieveData.get("Name"); // Name of the retrieved account, this is for JSON, for XML it would be retrieveData.get("Account").get("Name")
final String accountBillingPostalCode = (String) retrieveData.get("BillingPostalCode"); // BillingPostalCode of the retrieved account, this is for JSON, for XML it would be retrieveData.get("Account").get("BillingPostalCode")

final SObjectBatchResult deleteResult = results.get(3); // delete result
final int deleteStatus = deleteResult.getStatusCode(); // probably 204
final Object deleteResultData = deleteResult.getResult(); // probably null
46.14. Using Salesforce Composite API to submit multiple chained requests
The composite
operation allows submitting up to 25 requests that can be chained together; for instance, an identifier generated in a previous request can be used in a subsequent request. Individual requests and responses are linked with the provided reference.
Composite API supports only JSON payloads.
As with the batch API, the results can vary from API to API, so the result of the request is given as a java.lang.Object. In most cases the result will be a java.util.Map with string keys and values, or another java.util.Map as a value. Requests are made in JSON format and hold some type information (i.e. it is known which values are strings and which are numbers).
Let's look at an example:
SObjectComposite composite = new SObjectComposite("38.0", true);

// first, an update operation via an external id
final Account updateAccount = new TestAccount();
updateAccount.setName("Salesforce");
updateAccount.setBillingStreet("Landmark @ 1 Market Street");
updateAccount.setBillingCity("San Francisco");
updateAccount.setBillingState("California");
updateAccount.setIndustry(Account_IndustryEnum.TECHNOLOGY);
composite.addUpdate("Account", "001xx000003DIpcAAG", updateAccount, "UpdatedAccount");

final Contact newContact = new TestContact();
newContact.setLastName("John Doe");
newContact.setPhone("1234567890");
composite.addCreate(newContact, "NewContact");

final AccountContactJunction__c junction = new AccountContactJunction__c();
junction.setAccount__c("001xx000003DIpcAAG");
junction.setContactId__c("@{NewContact.id}");
composite.addCreate(junction, "JunctionRecord");

final SObjectCompositeResponse response = template.requestBody("salesforce:composite", composite, SObjectCompositeResponse.class);
final List<SObjectCompositeResult> results = response.getCompositeResponse();

final SObjectCompositeResult accountUpdateResult = results.stream().filter(r -> "UpdatedAccount".equals(r.getReferenceId())).findFirst().get();
final int statusCode = accountUpdateResult.getHttpStatusCode(); // should be 200
final Map<String, ?> accountUpdateBody = accountUpdateResult.getBody();

final SObjectCompositeResult contactCreationResult = results.stream().filter(r -> "JunctionRecord".equals(r.getReferenceId())).findFirst().get();
46.15. Using "raw" Salesforce composite
It’s possible to directly call Salesforce composite by preparing the Salesforce JSON request in the route thanks to the rawPayload
option.
For instance, you can have the following route:
from("timer:fire?period=2000").setBody(constant("{\n" + " \"allOrNone\" : true,\n" + " \"records\" : [ { \n" + " \"attributes\" : {\"type\" : \"FOO\"},\n" + " \"Name\" : \"123456789\",\n" + " \"FOO\" : \"XXXX\",\n" + " \"ACCOUNT\" : 2100.0\n" + " \"ExternalID\" : \"EXTERNAL\"\n" " }]\n" + "}") .to("salesforce:composite?rawPayload=true") .log("${body}");
The route builds the body as JSON directly and submits it to the salesforce endpoint using the rawPayload=true option.
With this approach, you have complete control over the Salesforce request.
POST
is the default HTTP method used to send raw Composite requests to salesforce. Use the compositeMethod
option to override it with the other supported value, GET
, which returns a list of other available composite resources.
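For example, a route along these lines could list the available composite resources. This is a minimal sketch; the timer endpoint and the log message are illustrative only, while rawPayload and compositeMethod are the options described above:
from("timer:listCompositeResources?repeatCount=1")
    .to("salesforce:composite?rawPayload=true&compositeMethod=GET")
    .log("Available composite resources: ${body}");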
46.16. Using Raw Operation
Send HTTP requests to salesforce with full, raw control of all aspects of the call. Any serialization or deserialization of request and response bodies must be performed in the route. The Content-Type
HTTP header will be automatically set based on the format
option, but this can be overridden with the rawHttpHeaders
option.
Parameter | Type | Description | Default | Required |
---|---|---|---|---|
request body | | Body of the HTTP request | | |
rawPath | String | The portion of the endpoint URL after the domain name, e.g., '/services/data/v51.0/sobjects/Account/' | | x |
rawMethod | String | The HTTP method | | x |
rawQueryParameters | String | Comma separated list of message headers to include as query parameters. Do not url-encode values as this will be done automatically. | | |
rawHttpHeaders | String | Comma separated list of message headers to include as HTTP headers | | |
46.16.1. Query example
In this example we’ll send a query to the REST API. The query must be passed in a URL parameter called "q", so we’ll create a message header called q and tell the raw operation to include that message header as a URL parameter:
from("direct:queryExample") .setHeader("q", "SELECT Id, LastName FROM Contact") .to("salesforce:raw?format=JSON&rawMethod=GET&rawQueryParameters=q&rawPath=/services/data/v51.0/query") // deserialize JSON results or handle in some other way
46.16.2. SObject example
In this example, we'll pass a Contact to the REST API in a create operation. Since the raw operation does not perform any serialization, we make sure to pass XML in the message body:
from("direct:createAContact") .setBody(constant("<Contact><LastName>TestLast</LastName></Contact>")) .to("salesforce:raw?format=XML&rawMethod=POST&rawPath=/services/data/v51.0/sobjects/Contact")
The response is:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?> <Result> <id>0034x00000RnV6zAAF</id> <success>true</success> </Result>
46.17. Using Composite SObject Collections
The SObject Collections API executes actions on multiple records in one request. Use sObject Collections to reduce the number of round-trips between the client and server. The entire request counts as a single call toward your API limits. This resource is available in API version 42.0 and later. SObject
records (aka DTOs) supplied to these operations must be instances of subclasses of AbstractDescribedSObjectBase
. See the Maven Plugin section for information on generating these DTO classes. These operations serialize supplied DTOs to JSON.
46.17.1. compositeRetrieveSObjectCollections
Retrieve one or more records of the same object type.
Parameter | Type | Description | Default | Required |
---|---|---|---|---|
ids | List of String or comma-separated string | A list of one or more IDs of the objects to return. All IDs must belong to the same object type. | | x |
fields | List of String or comma-separated string | A list of fields to include in the response. The field names you specify must be valid, and you must have read-level permissions to each field. | | x |
sObjectName | String | Type of SObject, e.g. Account | | x |
sObjectClass | String | Fully-qualified class name of DTO class to use for deserializing the response. | | Required if |
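A minimal sketch of calling this operation, assuming the IDs are supplied as a comma-separated string in the message body and the remaining parameters are set as endpoint options; the sample IDs and the DTO package org.example.dto are placeholders:
from("direct:retrieveAccounts")
    // comma-separated record IDs (sample values)
    .setBody(constant("001xx000003DIpcAAG,001xx000003DIpdAAG"))
    .to("salesforce:compositeRetrieveSObjectCollections"
        + "?sObjectName=Account"
        + "&sObjectFields=Id,Name"
        + "&sObjectClass=org.example.dto.Account")
    .log("Retrieved records: ${body}");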
46.17.2. compositeCreateSObjectCollections
Add up to 200 records, returning a list of SaveSObjectResult objects. Mixed SObject types are supported.
Parameter | Type | Description | Default | Required |
---|---|---|---|---|
request body | List of SObject | A list of SObjects to create | | x |
allOrNone | boolean | Indicates whether to roll back the entire request when the creation of any object fails (true) or to continue with the independent creation of other objects in the request. | false | |
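A minimal sketch using a producer template and DTOs generated by the Maven plugin; the Account class and the cast of the response to a List of SaveSObjectResult are assumptions based on the description above:
// sketch: create two Accounts in a single call
final Account first = new Account();
first.setName("First account");

final Account second = new Account();
second.setName("Second account");

@SuppressWarnings("unchecked")
final List<SaveSObjectResult> results = (List<SaveSObjectResult>) template.requestBody(
        "salesforce:compositeCreateSObjectCollections?allOrNone=true",
        Arrays.asList(first, second), List.class);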
46.17.3. compositeUpdateSObjectCollections
Update up to 200 records, returning a list of SaveSObjectResult objects. Mixed SObject types are supported.
Parameter | Type | Description | Default | Required |
---|---|---|---|---|
request body | List of SObject | A list of SObjects to update | | x |
allOrNone | boolean | Indicates whether to roll back the entire request when the update of any object fails (true) or to continue with the independent update of other objects in the request. | false | |
46.17.4. compositeUpsertSObjectCollections
Create or update (upsert) up to 200 records based on an external ID field, returning a list of UpsertSObjectResult objects. Mixed SObject types are not supported.
Parameter | Type | Description | Default | Required |
---|---|---|---|---|
request body | List of SObject | A list of SObjects to upsert | | x |
allOrNone | boolean | Indicates whether to roll back the entire request when the upsert of any object fails (true) or to continue with the independent upsert of other objects in the request. | false | |
sObjectName | String | Type of SObject, e.g. Account | | x |
sObjectIdName | String | Name of External ID field | | x |
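A minimal sketch, assuming a custom external ID field named External_Id__c on the generated Account DTO; both that field and its generated setter are assumptions:
// sketch: upsert an Account by an assumed external ID field
final Account account = new Account();
account.setName("Upserted account");
account.setExternal_Id__c("EXT-0001"); // assumed generated setter for the external ID field

final Object results = template.requestBody(
        "salesforce:compositeUpsertSObjectCollections"
            + "?sObjectName=Account&sObjectIdName=External_Id__c&allOrNone=false",
        Collections.singletonList(account));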
46.17.5. compositeDeleteSObjectCollections
Delete up to 200 records, returning a list of DeleteSObjectResult objects. Mixed SObject types are supported.
Parameter | Type | Description | Default | Required |
---|---|---|---|---|
ids | List of String or comma-separated string | A list of up to 200 IDs of objects to be deleted. | | x |
allOrNone | boolean | Indicates whether to roll back the entire request when the deletion of any object fails (true) or to continue with the independent deletion of other objects in the request. | false | |
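A minimal sketch, assuming the IDs to delete are supplied as a list in the message body; the sample IDs are placeholders:
from("direct:deleteAccounts")
    .setBody(constant(Arrays.asList("001xx000003DIpcAAG", "001xx000003DIpdAAG")))
    .to("salesforce:compositeDeleteSObjectCollections?allOrNone=false")
    .log("Delete results: ${body}");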
46.18. Sending null values to salesforce
By default, SObject fields with null values are not sent to salesforce. In order to send null values to salesforce, use the fieldsToNull
property, as follows:
accountSObject.getFieldsToNull().add("Site");
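As a sketch, clearing the Site field while updating an existing Account could look like the following; the record ID is a sample value, and the updateSObject operation name and sObjectName option are assumptions about the standard SObject update producer call rather than something shown elsewhere in this section:
// sketch: clear the Site field on an existing Account
final Account account = new Account();
account.setId("001xx000003DIpcAAG"); // sample record ID
account.getFieldsToNull().add("Site");

template.sendBody("salesforce:updateSObject?sObjectName=Account", account);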
46.19. Generating SOQL query strings
org.apache.camel.component.salesforce.api.utils.QueryHelper
contains helper methods to generate SOQL queries. For instance, to fetch all custom fields from the Account SObject, you can generate the SOQL SELECT by invoking:
String allCustomFieldsQuery = QueryHelper.queryToFetchFilteredFieldsOf(new Account(), SObjectField::isCustom);
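The generated SOQL could then be passed to the query operation. The sketch below assumes DTOs generated by the Maven plugin, including a QueryRecordsAccount wrapper class, and assumes the query string may be supplied in the message body:
// sketch: run the generated SOQL through the query operation
final String soql = QueryHelper.queryToFetchFilteredFieldsOf(new Account(), SObjectField::isCustom);

final QueryRecordsAccount accounts = template.requestBody(
        "salesforce:query?sObjectClass=" + QueryRecordsAccount.class.getName(),
        soql, QueryRecordsAccount.class);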
46.20. Camel Salesforce Maven Plugin
This Maven plugin generates DTOs for the Camel Salesforce component.
For obvious security reasons, it is recommended that the clientId, clientSecret, userName and password fields not be set in the pom.xml. The plugin should be configured for the rest of the properties, and can be executed using the following command:
mvn camel-salesforce:generate -DcamelSalesforce.clientId=<clientid> -DcamelSalesforce.clientSecret=<clientsecret> \ -DcamelSalesforce.userName=<username> -DcamelSalesforce.password=<password>
The generated DTOs use Jackson annotations. All Salesforce field types are supported. Date and time fields are mapped to java.time.ZonedDateTime
by default, and picklist fields are mapped to generated Java Enumerations.
Please refer to README.md for details on how to generate the DTO.
46.21. Spring Boot Auto-Configuration
When using salesforce with Spring Boot make sure to use the following Maven dependency to have support for auto configuration:
<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-salesforce-starter</artifactId> </dependency>
The component supports 91 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
camel.component.salesforce.all-or-none | Composite API option to indicate to rollback all records if any are not successful. | false | Boolean |
camel.component.salesforce.apex-method | APEX method name. | String | |
camel.component.salesforce.apex-query-params | Query params for APEX method. | Map | |
camel.component.salesforce.apex-url | APEX method URL. | String | |
camel.component.salesforce.api-version | Salesforce API version. | 53.0 | String |
camel.component.salesforce.authentication-type | Explicit authentication method to be used, one of USERNAME_PASSWORD, REFRESH_TOKEN or JWT. Salesforce component can auto-determine the authentication method to use from the properties set, set this property to eliminate any ambiguity. | AuthenticationType | |
camel.component.salesforce.autowired-enabled | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | Boolean |
camel.component.salesforce.backoff-increment | Backoff interval increment for Streaming connection restart attempts for failures beyond CometD auto-reconnect. The option is a long type. | 1000 | Long |
camel.component.salesforce.batch-id | Bulk API Batch ID. | String | |
camel.component.salesforce.bridge-error-handler | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | Boolean |
camel.component.salesforce.client-id | OAuth Consumer Key of the connected app configured in the Salesforce instance setup. Typically a connected app needs to be configured but one can be provided by installing a package. | String | |
camel.component.salesforce.client-secret | OAuth Consumer Secret of the connected app configured in the Salesforce instance setup. | String | |
camel.component.salesforce.composite-method | Composite (raw) method. | String | |
camel.component.salesforce.config | Global endpoint configuration - use to set values that are common to all endpoints. The option is a org.apache.camel.component.salesforce.SalesforceEndpointConfig type. | SalesforceEndpointConfig | |
camel.component.salesforce.content-type | Bulk API content type, one of XML, CSV, ZIP_XML, ZIP_CSV. | ContentType | |
camel.component.salesforce.default-replay-id | Default replayId setting if no value is found in initialReplayIdMap. | -1 | Long |
camel.component.salesforce.enabled | Whether to enable auto configuration of the salesforce component. This is enabled by default. | Boolean | |
camel.component.salesforce.fall-back-replay-id | ReplayId to fall back to after an Invalid Replay Id response. | -1 | Long |
camel.component.salesforce.format | Payload format to use for Salesforce API calls, either JSON or XML, defaults to JSON. As of Camel 3.12, this option only applies to the Raw operation. | PayloadFormat | |
camel.component.salesforce.http-client | Custom Jetty Http Client to use to connect to Salesforce. The option is a org.apache.camel.component.salesforce.SalesforceHttpClient type. | SalesforceHttpClient | |
camel.component.salesforce.http-client-connection-timeout | Connection timeout used by the HttpClient when connecting to the Salesforce server. | 60000 | Long |
camel.component.salesforce.http-client-idle-timeout | Timeout used by the HttpClient when waiting for response from the Salesforce server. | 10000 | Long |
camel.component.salesforce.http-client-properties | Used to set any properties that can be configured on the underlying HTTP client. Have a look at properties of SalesforceHttpClient and the Jetty HttpClient for all available options. | Map | |
camel.component.salesforce.http-max-content-length | Max content length of an HTTP response. | Integer | |
camel.component.salesforce.http-proxy-auth-uri | Used in authentication against the HTTP proxy server, needs to match the URI of the proxy server in order for the httpProxyUsername and httpProxyPassword to be used for authentication. | String | |
camel.component.salesforce.http-proxy-excluded-addresses | A list of addresses for which HTTP proxy server should not be used. | Set | |
camel.component.salesforce.http-proxy-host | Hostname of the HTTP proxy server to use. | String | |
camel.component.salesforce.http-proxy-included-addresses | A list of addresses for which HTTP proxy server should be used. | Set | |
camel.component.salesforce.http-proxy-password | Password to use to authenticate against the HTTP proxy server. | String | |
camel.component.salesforce.http-proxy-port | Port number of the HTTP proxy server to use. | Integer | |
camel.component.salesforce.http-proxy-realm | Realm of the proxy server, used in preemptive Basic/Digest authentication methods against the HTTP proxy server. | String | |
camel.component.salesforce.http-proxy-secure | If set to false disables the use of TLS when accessing the HTTP proxy. | true | Boolean |
camel.component.salesforce.http-proxy-socks4 | If set to true, configures the HTTP proxy to be used as a SOCKS4 proxy. | false | Boolean |
camel.component.salesforce.http-proxy-use-digest-auth | If set to true Digest authentication will be used when authenticating to the HTTP proxy, otherwise Basic authorization method will be used. | false | Boolean |
camel.component.salesforce.http-proxy-username | Username to use to authenticate against the HTTP proxy server. | String | |
camel.component.salesforce.http-request-buffer-size | HTTP request buffer size. May need to be increased for large SOQL queries. | 8192 | Integer |
camel.component.salesforce.include-details | Include details in Salesforce1 Analytics report, defaults to false. | Boolean | |
camel.component.salesforce.initial-replay-id-map | Replay IDs to start from per channel name. | Map | |
camel.component.salesforce.instance-id | Salesforce1 Analytics report execution instance ID. | String | |
camel.component.salesforce.instance-url | URL of the Salesforce instance used after authentication, by default received from Salesforce on successful authentication. | String | |
camel.component.salesforce.job-id | Bulk API Job ID. | String | |
camel.component.salesforce.jwt-audience | Value to use for the Audience claim (aud) when using OAuth JWT flow. If not set, the login URL will be used, which is appropriate in most cases. | String | |
camel.component.salesforce.keystore | KeyStore parameters to use in OAuth JWT flow. The KeyStore should contain only one entry with private key and certificate. Salesforce does not verify the certificate chain, so this can easily be a selfsigned certificate. Make sure that you upload the certificate to the corresponding connected app. The option is a org.apache.camel.support.jsse.KeyStoreParameters type. | KeyStoreParameters | |
camel.component.salesforce.lazy-login | If set to true prevents the component from authenticating to Salesforce with the start of the component. You would generally set this to the (default) false and authenticate early and be immediately aware of any authentication issues. | false | Boolean |
camel.component.salesforce.lazy-start-producer | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | Boolean |
camel.component.salesforce.limit | Limit on number of returned records. Applicable to some of the API, check the Salesforce documentation. | Integer | |
camel.component.salesforce.locator | Locator provided by salesforce Bulk 2.0 API for use in getting results for a Query job. | String | |
camel.component.salesforce.login-config | All authentication configuration in one nested bean, all properties set there can be set directly on the component as well. The option is a org.apache.camel.component.salesforce.SalesforceLoginConfig type. | SalesforceLoginConfig | |
camel.component.salesforce.login-url | URL of the Salesforce instance used for authentication, by default set to . | String | |
camel.component.salesforce.long-polling-transport-properties | Used to set any properties that can be configured on the LongPollingTransport used by the BayeuxClient (CometD) used by the streaming api. | Map | |
camel.component.salesforce.max-backoff | Maximum backoff interval for Streaming connection restart attempts for failures beyond CometD auto-reconnect. The option is a long type. | 30000 | Long |
camel.component.salesforce.max-records | The maximum number of records to retrieve per set of results for a Bulk 2.0 Query. The request is still subject to the size limits. If you are working with a very large number of query results, you may experience a timeout before receiving all the data from Salesforce. To prevent a timeout, specify the maximum number of records your client is expecting to receive in the maxRecords parameter. This splits the results into smaller sets with this value as the maximum size. | Integer | |
camel.component.salesforce.not-found-behaviour | Sets the behaviour of 404 not found status received from Salesforce API. Should the body be set to NULL NotFoundBehaviour#NULL or should an exception be signaled on the exchange NotFoundBehaviour#EXCEPTION - the default. | NotFoundBehaviour | |
camel.component.salesforce.notify-for-fields | Notify for fields, options are ALL, REFERENCED, SELECT, WHERE. | NotifyForFieldsEnum | |
camel.component.salesforce.notify-for-operation-create | Notify for create operation, defaults to false (API version = 29.0). | Boolean | |
camel.component.salesforce.notify-for-operation-delete | Notify for delete operation, defaults to false (API version = 29.0). | Boolean | |
camel.component.salesforce.notify-for-operation-undelete | Notify for un-delete operation, defaults to false (API version = 29.0). | Boolean | |
camel.component.salesforce.notify-for-operation-update | Notify for update operation, defaults to false (API version = 29.0). | Boolean | |
camel.component.salesforce.notify-for-operations | Notify for operations, options are ALL, CREATE, EXTENDED, UPDATE (API version 29.0). | NotifyForOperationsEnum | |
camel.component.salesforce.object-mapper | Custom Jackson ObjectMapper to use when serializing/deserializing Salesforce objects. The option is a com.fasterxml.jackson.databind.ObjectMapper type. | ObjectMapper | |
camel.component.salesforce.packages | In what packages are the generated DTO classes. Typically the classes would be generated using camel-salesforce-maven-plugin. Set it if using the generated DTOs to gain the benefit of using short SObject names in parameters/header values. Multiple packages can be separated by comma. | String | |
camel.component.salesforce.password | Password used in OAuth flow to gain access to access token. It’s easy to get started with password OAuth flow, but in general one should avoid it as it is deemed less secure than other flows. Make sure that you append security token to the end of the password if using one. | String | |
camel.component.salesforce.pk-chunking | Use PK Chunking. Only for use in original Bulk API. Bulk 2.0 API performs PK chunking automatically, if necessary. | Boolean | |
camel.component.salesforce.pk-chunking-chunk-size | Chunk size for use with PK Chunking. If unspecified, salesforce default is 100,000. Maximum size is 250,000. | Integer | |
camel.component.salesforce.pk-chunking-parent | Specifies the parent object when you’re enabling PK chunking for queries on sharing objects. The chunks are based on the parent object’s records rather than the sharing object’s records. For example, when querying on AccountShare, specify Account as the parent object. PK chunking is supported for sharing objects as long as the parent object is supported. | String | |
camel.component.salesforce.pk-chunking-start-row | Specifies the 15-character or 18-character record ID to be used as the lower boundary for the first chunk. Use this parameter to specify a starting ID when restarting a job that failed between batches. | String | |
camel.component.salesforce.query-locator | Query Locator provided by salesforce for use when a query results in more records than can be retrieved in a single call. Use this value in a subsequent call to retrieve additional records. | String | |
camel.component.salesforce.raw-http-headers | Comma separated list of message headers to include as HTTP parameters for Raw operation. | String | |
camel.component.salesforce.raw-method | HTTP method to use for the Raw operation. | String | |
camel.component.salesforce.raw-path | The portion of the endpoint URL after the domain name. E.g., '/services/data/v52.0/sobjects/Account/'. | String | |
camel.component.salesforce.raw-payload | Use raw payload String for request and response (either JSON or XML depending on format), instead of DTOs, false by default. | false | Boolean |
camel.component.salesforce.raw-query-parameters | Comma separated list of message headers to include as query parameters for Raw operation. Do not url-encode values as this will be done automatically. | String | |
camel.component.salesforce.refresh-token | Refresh token already obtained in the refresh token OAuth flow. One needs to set up a web application and configure a callback URL to receive the refresh token, or configure using the builtin callback at and then retrieve the refresh_token from the URL at the end of the flow. Note that in development organizations Salesforce allows hosting the callback web application at localhost. | String | |
camel.component.salesforce.report-id | Salesforce1 Analytics report Id. | String | |
camel.component.salesforce.report-metadata | Salesforce1 Analytics report metadata for filtering. The option is a org.apache.camel.component.salesforce.api.dto.analytics.reports.ReportMetadata type. | ReportMetadata | |
camel.component.salesforce.result-id | Bulk API Result ID. | String | |
camel.component.salesforce.s-object-blob-field-name | SObject blob field name. | String | |
camel.component.salesforce.s-object-class | Fully qualified SObject class name, usually generated using camel-salesforce-maven-plugin. | String | |
camel.component.salesforce.s-object-fields | SObject fields to retrieve. | String | |
camel.component.salesforce.s-object-id | SObject ID if required by API. | String | |
camel.component.salesforce.s-object-id-name | SObject external ID field name. | String | |
camel.component.salesforce.s-object-id-value | SObject external ID field value. | String | |
camel.component.salesforce.s-object-name | SObject name if required or supported by API. | String | |
camel.component.salesforce.s-object-query | Salesforce SOQL query string. | String | |
camel.component.salesforce.s-object-search | Salesforce SOSL search string. | String | |
camel.component.salesforce.ssl-context-parameters | SSL parameters to use, see SSLContextParameters class for all available options. The option is a org.apache.camel.support.jsse.SSLContextParameters type. | SSLContextParameters | |
camel.component.salesforce.update-topic | Whether to update an existing Push Topic when using the Streaming API, defaults to false. | false | Boolean |
camel.component.salesforce.use-global-ssl-context-parameters | Enable usage of global SSL context parameters. | false | Boolean |
camel.component.salesforce.user-name | Username used in OAuth flow to gain access to access token. It’s easy to get started with password OAuth flow, but in general one should avoid it as it is deemed less secure than other flows. | String | |
camel.component.salesforce.worker-pool-max-size | Maximum size of the thread pool used to handle HTTP responses. | 20 | Integer |
camel.component.salesforce.worker-pool-size | Size of the thread pool used to handle HTTP responses. | 10 | Integer |
Chapter 47. Scheduler
Only consumer is supported
The Scheduler component is used to generate message exchanges when a scheduler fires. This component is similar to the Timer component, but it offers more functionality in terms of scheduling. Also, this component uses a JDK ScheduledExecutorService, whereas the Timer component uses a JDK Timer.
You can only consume events from this endpoint.
47.1. URI format
scheduler:name[?options]
Where name
is the name of the scheduler, which is created and shared across endpoints. So if you use the same name for all your scheduler endpoints, only one scheduler thread pool and thread will be used - but you can configure the thread pool to allow more concurrent threads.
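For example, two routes can share one scheduler by using the same name. This is a minimal sketch; the delays, pool size and direct endpoints are arbitrary:
// both routes share the scheduler named "shared", backed by one thread pool with 2 threads
from("scheduler:shared?delay=5000&poolSize=2").to("direct:taskA");
from("scheduler:shared?delay=10000&poolSize=2").to("direct:taskB");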
The IN body of the generated exchange is null
. So exchange.getIn().getBody()
returns null
.
47.2. Configuring Options
Camel components are configured on two separate levels:
- component level
- endpoint level
47.2.1. Configuring Component Options
The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth.
Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all.
Configuring components can be done with the Component DSL, in a configuration file (application.properties|yaml), or directly with Java code.
47.2.2. Configuring Endpoint Options
Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both.
Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints.
A good practice when configuring options is to use Property Placeholders, which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse.
The following two sections lists all the options, firstly for the component followed by the endpoint.
47.3. Component Options
The Scheduler component supports 3 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
bridgeErrorHandler (consumer) | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | boolean |
autowiredEnabled (advanced) | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | boolean |
poolSize (scheduler) | Number of core threads in the thread pool used by the scheduling thread pool. Is by default using a single thread. | 1 | int |
47.4. Endpoint Options
The Scheduler endpoint is configured using URI syntax:
scheduler:name
with the following path and query parameters:
47.4.1. Path Parameters (1 parameters)
Name | Description | Default | Type |
---|---|---|---|
name (consumer) | Required The name of the scheduler. | String |
47.4.2. Query Parameters (21 parameters)
Name | Description | Default | Type |
---|---|---|---|
bridgeErrorHandler (consumer) | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | boolean |
sendEmptyMessageWhenIdle (consumer) | If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead. | false | boolean |
exceptionHandler (consumer (advanced)) | To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. | ExceptionHandler | |
exchangePattern (consumer (advanced)) | Sets the exchange pattern when the consumer creates an exchange. Enum values:
| ExchangePattern | |
pollStrategy (consumer (advanced)) | A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel. | PollingConsumerPollStrategy | |
synchronous (advanced) | Sets whether synchronous processing should be strictly used. | false | boolean |
backoffErrorThreshold (scheduler) | The number of subsequent error polls (failed due some error) that should happen before the backoffMultipler should kick-in. | int | |
backoffIdleThreshold (scheduler) | The number of subsequent idle polls that should happen before the backoffMultipler should kick-in. | int | |
backoffMultiplier (scheduler) | To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the next actual attempt is happening again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured. | int | |
delay (scheduler) | Milliseconds before the next poll. | 500 | long |
greedy (scheduler) | If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the previous run polled 1 or more messages. | false | boolean |
initialDelay (scheduler) | Milliseconds before the first poll starts. | 1000 | long |
poolSize (scheduler) | Number of core threads in the thread pool used by the scheduling thread pool. Is by default using a single thread. | 1 | int |
repeatCount (scheduler) | Specifies a maximum limit of number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever. | 0 | long |
runLoggingLevel (scheduler) | The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that. Enum values:
| TRACE | LoggingLevel |
scheduledExecutorService (scheduler) | Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool. | ScheduledExecutorService | |
scheduler (scheduler) | To use a cron scheduler from either camel-spring or camel-quartz component. Use value spring or quartz for built in scheduler. | none | Object |
schedulerProperties (scheduler) | To configure additional properties when using a custom scheduler or any of the Quartz, Spring based scheduler. | Map | |
startScheduler (scheduler) | Whether the scheduler should be auto started. | true | boolean |
timeUnit (scheduler) | Time unit for initialDelay and delay options. Enum values:
| MILLISECONDS | TimeUnit |
useFixedDelay (scheduler) | Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details. | true | boolean |
47.5. More information
This component is a scheduler Polling Consumer where you can find more information about the options above, and examples at the Polling Consumer page.
47.6. Exchange Properties
When the timer is fired, it adds the following information as properties to the Exchange
:
Name | Type | Description |
---|---|---|
| | The value of the |
| | The time when the consumer fired. |
47.7. Sample
To set up a route that generates an event every 60 seconds:
from("scheduler://foo?delay=60000").to("bean:myBean?method=someMethodName");
The above route will generate an event and then invoke the someMethodName
method on the bean called myBean
in the Registry such as JNDI or Spring.
And the route in Spring DSL:
<route> <from uri="scheduler://foo?delay=60000"/> <to uri="bean:myBean?method=someMethodName"/> </route>
47.8. Forcing the scheduler to trigger immediately when completed
To let the scheduler trigger as soon as the previous task is complete, you can set the option greedy=true
. But beware that the scheduler will then keep firing all the time, so use this with caution.
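For example, reusing the bean endpoint from the earlier sample:
// trigger again immediately after each completed run
from("scheduler://foo?greedy=true&delay=60000").to("bean:myBean?method=someMethodName");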
47.9. Forcing the scheduler to be idle
There can be use cases where you want the scheduler to trigger and be greedy. But sometimes you want to "tell the scheduler" that there was no task to poll, so the scheduler can change into idle mode using the backoff options. To do this, set a property on the exchange with the key Exchange.SCHEDULER_POLLED_MESSAGES to a boolean value of false. This will cause the consumer to indicate that no messages were polled.
Otherwise, by default the consumer reports 1 message polled to the scheduler every time it has completed processing the exchange.
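A minimal sketch of signalling an idle run from a processor; doWork is a hypothetical helper that returns whether any real work was found, and the backoff values are arbitrary:
from("scheduler:foo?delay=5000&backoffIdleThreshold=3&backoffMultiplier=5")
    .process(exchange -> {
        boolean workDone = doWork(exchange); // hypothetical helper
        if (!workDone) {
            // report this run as idle so the backoff options can kick in
            exchange.setProperty(Exchange.SCHEDULER_POLLED_MESSAGES, false);
        }
    });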
47.10. Spring Boot Auto-Configuration
When using scheduler with Spring Boot make sure to use the following Maven dependency to have support for auto configuration:
<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-scheduler-starter</artifactId> </dependency>
The component supports 4 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
camel.component.scheduler.autowired-enabled | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | Boolean |
camel.component.scheduler.bridge-error-handler | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | Boolean |
camel.component.scheduler.enabled | Whether to enable auto configuration of the scheduler component. This is enabled by default. | Boolean | |
camel.component.scheduler.pool-size | Number of core threads in the thread pool used by the scheduling thread pool. Is by default using a single thread. | 1 | Integer |
Chapter 48. SEDA
Both producer and consumer are supported
The SEDA component provides asynchronous SEDA behavior, so that messages are exchanged on a BlockingQueue and consumers are invoked in a separate thread from the producer.
Note that queues are only visible within a single CamelContext. If you want to communicate across CamelContext
instances (for example, communicating between Web applications), see the VM component.
This component does not implement any kind of persistence or recovery, if the VM terminates while messages are yet to be processed. If you need persistence, reliability or distributed SEDA, try using either JMS or ActiveMQ.
Synchronous
The Direct component provides synchronous invocation of any consumers when a producer sends a message exchange.
48.1. URI format
seda:someName[?options]
Where someName can be any string that uniquely identifies the endpoint within the current CamelContext.
48.2. Configuring Options
Camel components are configured on two separate levels:
- component level
- endpoint level
48.2.1. Configuring Component Options
The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth.
Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all.
Configuring components can be done with the Component DSL, in a configuration file (application.properties|yaml), or directly with Java code.
48.2.2. Configuring Endpoint Options
Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both.
Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints.
A good practice when configuring options is to use Property Placeholders, which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse.
The following two sections lists all the options, firstly for the component followed by the endpoint.
48.3. Component Options
The SEDA component supports 10 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
bridgeErrorHandler (consumer) | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | boolean |
concurrentConsumers (consumer) | Sets the default number of concurrent threads processing exchanges. | 1 | int |
defaultPollTimeout (consumer (advanced)) | The timeout (in milliseconds) used when polling. When a timeout occurs, the consumer can check whether it is allowed to continue running. Setting a lower value allows the consumer to react more quickly upon shutdown. | 1000 | int |
defaultBlockWhenFull (producer) | Whether a thread that sends messages to a full SEDA queue will block until the queue’s capacity is no longer exhausted. By default, an exception will be thrown stating that the queue is full. By enabling this option, the calling thread will instead block and wait until the message can be accepted. | false | boolean |
defaultDiscardWhenFull (producer) | Whether a thread that sends messages to a full SEDA queue will be discarded. By default, an exception will be thrown stating that the queue is full. By enabling this option, the calling thread will give up sending and continue, meaning that the message was not sent to the SEDA queue. | false | boolean |
defaultOfferTimeout (producer) | Whether a thread that sends messages to a full SEDA queue will block until the queue’s capacity is no longer exhausted. By default, an exception will be thrown stating that the queue is full. By enabling this option, a configured timeout can be added to the block case, utilizing the .offer(timeout) method of the underlying java queue. | long |
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
autowiredEnabled (advanced) | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | boolean |
defaultQueueFactory (advanced) | Sets the default queue factory. | BlockingQueueFactory | |
queueSize (advanced) | Sets the default maximum capacity of the SEDA queue (i.e., the number of messages it can hold). | 1000 | int |
48.4. Endpoint Options
The SEDA endpoint is configured using URI syntax:
seda:name
with the following path and query parameters:
48.4.1. Path Parameters (1 parameters)
Name | Description | Default | Type |
---|---|---|---|
name (common) | Required Name of queue. | String |
48.4.2. Query Parameters (18 parameters)
Name | Description | Default | Type |
---|---|---|---|
size (common) | The maximum capacity of the SEDA queue (i.e., the number of messages it can hold). Will by default use the defaultSize set on the SEDA component. | 1000 | int |
bridgeErrorHandler (consumer) | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | boolean |
concurrentConsumers (consumer) | Number of concurrent threads processing exchanges. | 1 | int |
exceptionHandler (consumer (advanced)) | To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. | ExceptionHandler | |
exchangePattern (consumer (advanced)) | Sets the exchange pattern when the consumer creates an exchange. Enum values:
| ExchangePattern | |
limitConcurrentConsumers (consumer (advanced)) | Whether to limit the number of concurrentConsumers to the maximum of 500. By default, an exception will be thrown if an endpoint is configured with a greater number. You can disable that check by turning this option off. | true | boolean |
multipleConsumers (consumer (advanced)) | Specifies whether multiple consumers are allowed. If enabled, you can use SEDA for Publish-Subscribe messaging. That is, you can send a message to the SEDA queue and have each consumer receive a copy of the message. When enabled, this option should be specified on every consumer endpoint. | false | boolean |
pollTimeout (consumer (advanced)) | The timeout (in milliseconds) used when polling. When a timeout occurs, the consumer can check whether it is allowed to continue running. Setting a lower value allows the consumer to react more quickly upon shutdown. | 1000 | int |
purgeWhenStopping (consumer (advanced)) | Whether to purge the task queue when stopping the consumer/route. This allows to stop faster, as any pending messages on the queue is discarded. | false | boolean |
blockWhenFull (producer) | Whether a thread that sends messages to a full SEDA queue will block until the queue’s capacity is no longer exhausted. By default, an exception will be thrown stating that the queue is full. By enabling this option, the calling thread will instead block and wait until the message can be accepted. | false | boolean |
discardIfNoConsumers (producer) | Whether the producer should discard the message (do not add the message to the queue), when sending to a queue with no active consumers. Only one of the options discardIfNoConsumers and failIfNoConsumers can be enabled at the same time. | false | boolean |
discardWhenFull (producer) | Whether a thread that sends messages to a full SEDA queue will be discarded. By default, an exception will be thrown stating that the queue is full. By enabling this option, the calling thread will give up sending and continue, meaning that the message was not sent to the SEDA queue. | false | boolean |
failIfNoConsumers (producer) | Whether the producer should fail by throwing an exception, when sending to a queue with no active consumers. Only one of the options discardIfNoConsumers and failIfNoConsumers can be enabled at the same time. | false | boolean |
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
offerTimeout (producer) | Offer timeout (in milliseconds) can be added to the block case when queue is full. You can disable timeout by using 0 or a negative value. | long | |
timeout (producer) | Timeout (in milliseconds) before a SEDA producer will stop waiting for an asynchronous task to complete. You can disable timeout by using 0 or a negative value. | 30000 | long |
waitForTaskToComplete (producer) | Option to specify whether the caller should wait for the async task to complete or not before continuing. The following three options are supported: Always, Never or IfReplyExpected. The first two values are self-explanatory. The last value, IfReplyExpected, will only wait if the message is Request Reply based. The default option is IfReplyExpected. Enum values:
| IfReplyExpected | WaitForTaskToComplete |
queue (advanced) | Define the queue instance which will be used by the endpoint. | BlockingQueue |
48.5. Choosing BlockingQueue implementation
By default, the SEDA component always instantiates a LinkedBlockingQueue, but you can use a different implementation: you can reference your own BlockingQueue implementation, in which case the size option is not used.
<bean id="arrayQueue" class="java.util.ArrayBlockingQueue"> <constructor-arg index="0" value="10" ><!-- size --> <constructor-arg index="1" value="true" ><!-- fairness --> </bean> <!-- ... and later --> <from>seda:array?queue=#arrayQueue</from>
Or you can reference a BlockingQueueFactory implementation; three implementations are provided: LinkedBlockingQueueFactory, ArrayBlockingQueueFactory and PriorityBlockingQueueFactory:
<bean id="priorityQueueFactory" class="org.apache.camel.component.seda.PriorityBlockingQueueFactory"> <property name="comparator"> <bean class="org.apache.camel.demo.MyExchangeComparator" /> </property> </bean> <!-- ... and later --> <from>seda:priority?queueFactory=#priorityQueueFactory&size=100</from>
48.6. Use of Request Reply
The SEDA component supports using Request Reply, where the caller will wait for the Async route to complete. For instance:
from("mina:tcp://0.0.0.0:9876?textline=true&sync=true").to("seda:input"); from("seda:input").to("bean:processInput").to("bean:createResponse");
In the route above, we have a TCP listener on port 9876 that accepts incoming requests. The request is routed to the seda:input
queue. As it is a Request Reply message, we wait for the response. When the consumer on the seda:input
queue is complete, it copies the response to the original message response.
48.7. Concurrent consumers
By default, the SEDA endpoint uses a single consumer thread, but you can configure it to use concurrent consumer threads. So instead of thread pools you can use:
from("seda:stageName?concurrentConsumers=5").process(...)
As for the difference between the two, note a thread pool can increase/shrink dynamically at runtime depending on load, whereas the number of concurrent consumers is always fixed.
48.8. Thread pools
Be aware that adding a thread pool to a SEDA endpoint by doing something like:
from("seda:stageName").thread(5).process(...)
can wind up with two BlockingQueues: one from the SEDA endpoint, and one from the work queue of the thread pool, which may not be what you want. Instead, you might wish to configure a Direct endpoint with a thread pool, which can process messages both synchronously and asynchronously. For example:
from("direct:stageName").thread(5).process(...)
You can also directly configure number of threads that process messages on a SEDA endpoint using the concurrentConsumers
option.
48.9. Sample
In the route below, we use a SEDA queue to send the request to an asynchronous queue so that we can send a fire-and-forget message for further processing in another thread, while returning a constant reply in this thread to the original caller.
We send a Hello World message and expect the reply to be OK.
@Test
public void testSendAsync() throws Exception {
    MockEndpoint mock = getMockEndpoint("mock:result");
    mock.expectedBodiesReceived("Hello World");

    // START SNIPPET: e2
    Object out = template.requestBody("direct:start", "Hello World");
    assertEquals("OK", out);
    // END SNIPPET: e2

    assertMockEndpointsSatisfied();
}

@Override
protected RouteBuilder createRouteBuilder() throws Exception {
    return new RouteBuilder() {
        // START SNIPPET: e1
        public void configure() throws Exception {
            from("direct:start")
                // send it to the seda queue that is async
                .to("seda:next")
                // return a constant response
                .transform(constant("OK"));

            from("seda:next").to("mock:result");
        }
        // END SNIPPET: e1
    };
}
The "Hello World" message will be consumed from the SEDA queue from another thread for further processing. Since this is from a unit test, it will be sent to a mock
endpoint where we can do assertions in the unit test.
48.10. Using multipleConsumers
In this example we have defined two consumers.
@Test
public void testSameOptionsProducerStillOkay() throws Exception {
    getMockEndpoint("mock:foo").expectedBodiesReceived("Hello World");
    getMockEndpoint("mock:bar").expectedBodiesReceived("Hello World");

    template.sendBody("seda:foo", "Hello World");

    assertMockEndpointsSatisfied();
}

@Override
protected RouteBuilder createRouteBuilder() throws Exception {
    return new RouteBuilder() {
        @Override
        public void configure() throws Exception {
            from("seda:foo?multipleConsumers=true").routeId("foo").to("mock:foo");
            from("seda:foo?multipleConsumers=true").routeId("bar").to("mock:bar");
        }
    };
}
Since we have specified multipleConsumers=true on the seda:foo endpoint, we can have those two consumers receive their own copy of the message, as a kind of pub-sub style messaging.
As the beans are part of a unit test they simply send the message to a mock endpoint.
48.11. Extracting queue information
If needed, information such as queue size, etc. can be obtained without using JMX in this fashion:
SedaEndpoint seda = context.getEndpoint("seda:xxxx", SedaEndpoint.class);
int size = seda.getExchanges().size();
48.12. Spring Boot Auto-Configuration
When using seda with Spring Boot make sure to use the following Maven dependency to have support for auto configuration:
<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-seda-starter</artifactId> </dependency>
The component supports 11 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
camel.component.seda.autowired-enabled | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | Boolean |
camel.component.seda.bridge-error-handler | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | Boolean |
camel.component.seda.concurrent-consumers | Sets the default number of concurrent threads processing exchanges. | 1 | Integer |
camel.component.seda.default-block-when-full | Whether a thread that sends messages to a full SEDA queue will block until the queue’s capacity is no longer exhausted. By default, an exception will be thrown stating that the queue is full. By enabling this option, the calling thread will instead block and wait until the message can be accepted. | false | Boolean |
camel.component.seda.default-discard-when-full | Whether a thread that sends messages to a full SEDA queue will be discarded. By default, an exception will be thrown stating that the queue is full. By enabling this option, the calling thread will give up sending and continue, meaning that the message was not sent to the SEDA queue. | false | Boolean |
camel.component.seda.default-offer-timeout | Whether a thread that sends messages to a full SEDA queue will block until the queue’s capacity is no longer exhausted. By default, an exception will be thrown stating that the queue is full. By enabling this option, a configured timeout can be added to the block case, utilizing the .offer(timeout) method of the underlying java queue. | Long | |
camel.component.seda.default-poll-timeout | The timeout (in milliseconds) used when polling. When a timeout occurs, the consumer can check whether it is allowed to continue running. Setting a lower value allows the consumer to react more quickly upon shutdown. | 1000 | Integer |
camel.component.seda.default-queue-factory | Sets the default queue factory. The option is a org.apache.camel.component.seda.BlockingQueueFactory<org.apache.camel.Exchange> type. | BlockingQueueFactory | |
camel.component.seda.enabled | Whether to enable auto configuration of the seda component. This is enabled by default. | Boolean | |
camel.component.seda.lazy-start-producer | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | Boolean |
camel.component.seda.queue-size | Sets the default maximum capacity of the SEDA queue (i.e., the number of messages it can hold). | 1000 | Integer |
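The same options can also be set directly with Java code. A minimal sketch (assuming you have access to the CamelContext, for example inside a RouteBuilder's configure() method), mirroring the queue-size and concurrent-consumers properties above:

// org.apache.camel.component.seda.SedaComponent
SedaComponent seda = getContext().getComponent("seda", SedaComponent.class);
seda.setQueueSize(1000);          // corresponds to camel.component.seda.queue-size
seda.setConcurrentConsumers(1);   // corresponds to camel.component.seda.concurrent-consumers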
Chapter 49. Servlet
Only consumer is supported
The Servlet component provides HTTP-based endpoints for consuming HTTP requests that arrive at an HTTP endpoint bound to a published Servlet.
Maven users will need to add the following dependency to their pom.xml
for this component:
<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-servlet</artifactId> <version>{CamelSBVersion}</version> <!-- use the same version as your Camel core version --> </dependency>
Stream
Servlet is stream-based, which means the input it receives is submitted to Camel as a stream. That means you will only be able to read the content of the stream once. If you find a situation where the message body appears to be empty, or you need to access the data multiple times (e.g. doing multicasting or redelivery error handling), you should use Stream caching or convert the message body to a String
which is safe to be read multiple times.
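For example, a minimal sketch (the context-path and endpoint names are illustrative only) that converts the streamed body to a String so it can safely be read more than once:

from("servlet:hello")
    // read the servlet input stream once and keep it as a String for repeated access
    .convertBodyTo(String.class)
    // the body can now be read again, for example logged and then transformed
    .to("log:incoming")
    .transform(simple("Received: ${body}"));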
49.1. URI format
servlet://relative_path[?options]
49.2. Configuring Options
Camel components are configured on two separate levels:
- component level
- endpoint level
49.2.1. Configuring Component Options
The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth.
Some components only have a few options, and others may have many. Because components typically have pre-configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all.
Configuring components can be done with the Component DSL, in a configuration file (application.properties|yaml), or directly with Java code.
49.2.2. Configuring Endpoint Options
Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both.
Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints.
A good practice when configuring options is to use Property Placeholders, which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse.
The following two sections lists all the options, firstly for the component followed by the endpoint.
49.3. Component Options
The Servlet component supports 11 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
bridgeErrorHandler (consumer) | Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | boolean |
muteException (consumer) | If enabled and an Exchange failed processing on the consumer side the response’s body won’t contain the exception’s stack trace. | false | boolean |
servletName (consumer) | Default name of servlet to use. The default name is CamelServlet. | CamelServlet | String |
attachmentMultipartBinding (consumer (advanced)) | Whether to automatic bind multipart/form-data as attachments on the Camel Exchange. The options attachmentMultipartBinding=true and disableStreamCache=false cannot work together. Remove disableStreamCache to use AttachmentMultipartBinding. This is turned off by default as this may require servlet specific configuration to enable this when using Servlets. | false | boolean |
fileNameExtWhitelist (consumer (advanced)) | Whitelist of accepted filename extensions for accepting uploaded files. Multiple extensions can be separated by comma, such as txt,xml. | String | |
httpRegistry (consumer (advanced)) | To use a custom org.apache.camel.component.servlet.HttpRegistry. | HttpRegistry | |
allowJavaSerializedObject (advanced) | Whether to allow java serialization when a request uses content-type=application/x-java-serialized-object. This is by default turned off. If you enable this then be aware that Java will deserialize the incoming data from the request to Java and that can be a potential security risk. | false | boolean |
autowiredEnabled (advanced) | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | boolean |
httpBinding (advanced) | To use a custom HttpBinding to control the mapping between Camel message and HttpClient. | HttpBinding | |
httpConfiguration (advanced) | To use the shared HttpConfiguration as base configuration. | HttpConfiguration | |
headerFilterStrategy (filter) | To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter header to and from Camel message. | HeaderFilterStrategy |
49.4. Endpoint Options
The Servlet endpoint is configured using URI syntax:
servlet:contextPath
with the following path and query parameters:
49.4.1. Path Parameters (1 parameter)
Name | Description | Default | Type |
---|---|---|---|
contextPath (consumer) | Required The context-path to use. | String |
49.4.2. Query Parameters (22 parameters)
Name | Description | Default | Type |
---|---|---|---|
chunked (consumer) | If this option is false the Servlet will disable the HTTP streaming and set the content-length header on the response. | true | boolean |
disableStreamCache (common) | Determines whether or not the raw input stream from Servlet is cached or not (Camel will read the stream into a in memory/overflow to file, Stream caching) cache. By default Camel will cache the Servlet input stream to support reading it multiple times to ensure Camel can retrieve all data from the stream. However you can set this option to true when you for example need to access the raw stream, such as streaming it directly to a file or other persistent store. DefaultHttpBinding will copy the request input stream into a stream cache and put it into message body if this option is false to support reading the stream multiple times. If you use Servlet to bridge/proxy an endpoint then consider enabling this option to improve performance, in case you do not need to read the message payload multiple times. The http producer will by default cache the response body stream. If this option is set to true, then the producers will not cache the response body stream but use the response stream as-is as the message body. | false | boolean |
headerFilterStrategy (common) | To use a custom HeaderFilterStrategy to filter header to and from Camel message. | HeaderFilterStrategy | |
httpBinding (common (advanced)) | To use a custom HttpBinding to control the mapping between Camel message and HttpClient. | HttpBinding | |
async (consumer) | Configure the consumer to work in async mode. | false | boolean |
bridgeErrorHandler (consumer) | Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | boolean |
httpMethodRestrict (consumer) | Used to only allow consuming if the HttpMethod matches, such as GET/POST/PUT etc. Multiple methods can be specified separated by comma. | String | |
matchOnUriPrefix (consumer) | Whether or not the consumer should try to find a target consumer by matching the URI prefix if no exact match is found. | false | boolean |
muteException (consumer) | If enabled and an Exchange failed processing on the consumer side the response’s body won’t contain the exception’s stack trace. | false | boolean |
responseBufferSize (consumer) | To use a custom buffer size on the javax.servlet.ServletResponse. | Integer | |
servletName (consumer) | Name of the servlet to use. | CamelServlet | String |
transferException (consumer) | If enabled and an Exchange failed processing on the consumer side, and if the caused Exception was sent back serialized in the response as an application/x-java-serialized-object content type. On the producer side the exception will be deserialized and thrown as is, instead of the HttpOperationFailedException. The caused exception is required to be serialized. This is by default turned off. If you enable this then be aware that Java will deserialize the incoming data from the request to Java and that can be a potential security risk. | false | boolean |
attachmentMultipartBinding (consumer (advanced)) | Whether to automatic bind multipart/form-data as attachments on the Camel Exchange. The options attachmentMultipartBinding=true and disableStreamCache=false cannot work together. Remove disableStreamCache to use AttachmentMultipartBinding. This is turned off by default as this may require servlet specific configuration to enable this when using Servlets. | false | boolean |
eagerCheckContentAvailable (consumer (advanced)) | Whether to eager check whether the HTTP requests has content if the content-length header is 0 or not present. This can be turned on in case HTTP clients do not send streamed data. | false | boolean |
exceptionHandler (consumer (advanced)) | To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. | ExceptionHandler | |
exchangePattern (consumer (advanced)) | Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly, InOut, InOptionalOut. | ExchangePattern | |
fileNameExtWhitelist (consumer (advanced)) | Whitelist of accepted filename extensions for accepting uploaded files. Multiple extensions can be separated by comma, such as txt,xml. | String | |
mapHttpMessageBody (consumer (advanced)) | If this option is true then IN exchange Body of the exchange will be mapped to HTTP body. Setting this to false will avoid the HTTP mapping. | true | boolean |
mapHttpMessageFormUrlEncodedBody (consumer (advanced)) | If this option is true then IN exchange Form Encoded body of the exchange will be mapped to HTTP. Setting this to false will avoid the HTTP Form Encoded body mapping. | true | boolean |
mapHttpMessageHeaders (consumer (advanced)) | If this option is true then IN exchange Headers of the exchange will be mapped to HTTP headers. Setting this to false will avoid the HTTP Headers mapping. | true | boolean |
optionsEnabled (consumer (advanced)) | Specifies whether to enable HTTP OPTIONS for this Servlet consumer. By default OPTIONS is turned off. | false | boolean |
traceEnabled (consumer (advanced)) | Specifies whether to enable HTTP TRACE for this Servlet consumer. By default TRACE is turned off. | false | boolean |
49.5. Message Headers
Camel will apply the same Message Headers as the HTTP component.
Camel will also populate all request.parameter
and request.headers
. For example, if a client request has the URL, http://myserver/myserver?orderid=123, the exchange will contain a header named orderid
with the value 123.
49.6. Usage
You can consume only from
endpoints generated by the Servlet component. Therefore, it should be used only as input into your Camel routes. To issue HTTP requests against other HTTP endpoints, use the HTTP component.
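For example, a minimal sketch (the context-path and the backend URL are illustrative only): the servlet endpoint is the input of the route, and the outgoing call is made with the HTTP component:

from("servlet:orders")
    // query parameters such as orderid are available as headers
    .log("received order ${header.orderid}")
    // use the HTTP component (not the servlet component) for outgoing requests
    .to("http://backend.example.com/orders");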
49.7. Spring Boot Auto-Configuration
When using servlet with Spring Boot make sure to use the following Maven dependency to have support for auto configuration:
<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-servlet-starter</artifactId> </dependency>
The component supports 15 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
camel.component.servlet.allow-java-serialized-object | Whether to allow java serialization when a request uses content-type=application/x-java-serialized-object. This is by default turned off. If you enable this then be aware that Java will deserialize the incoming data from the request to Java and that can be a potential security risk. | false | Boolean |
camel.component.servlet.attachment-multipart-binding | Whether to automatic bind multipart/form-data as attachments on the Camel Exchange. The options attachmentMultipartBinding=true and disableStreamCache=false cannot work together. Remove disableStreamCache to use AttachmentMultipartBinding. This is turned off by default as this may require servlet specific configuration to enable this when using Servlets. | false | Boolean |
camel.component.servlet.autowired-enabled | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | Boolean |
camel.component.servlet.bridge-error-handler | Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | Boolean |
camel.component.servlet.enabled | Whether to enable auto configuration of the servlet component. This is enabled by default. | Boolean | |
camel.component.servlet.file-name-ext-whitelist | Whitelist of accepted filename extensions for accepting uploaded files. Multiple extensions can be separated by comma, such as txt,xml. | String | |
camel.component.servlet.header-filter-strategy | To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter header to and from Camel message. The option is a org.apache.camel.spi.HeaderFilterStrategy type. | HeaderFilterStrategy | |
camel.component.servlet.http-binding | To use a custom HttpBinding to control the mapping between Camel message and HttpClient. The option is a org.apache.camel.http.common.HttpBinding type. | HttpBinding | |
camel.component.servlet.http-configuration | To use the shared HttpConfiguration as base configuration. The option is a org.apache.camel.http.common.HttpConfiguration type. | HttpConfiguration | |
camel.component.servlet.http-registry | To use a custom org.apache.camel.component.servlet.HttpRegistry. The option is a org.apache.camel.http.common.HttpRegistry type. | HttpRegistry | |
camel.component.servlet.mute-exception | If enabled and an Exchange failed processing on the consumer side the response’s body won’t contain the exception’s stack trace. | false | Boolean |
camel.component.servlet.servlet-name | Default name of servlet to use. The default name is CamelServlet. | CamelServlet | String |
camel.servlet.mapping.context-path | Context path used by the servlet component for automatic mapping. | /camel/* | String |
camel.servlet.mapping.enabled | Enables the automatic mapping of the servlet component into the Spring web context. | true | Boolean |
camel.servlet.mapping.servlet-name | The name of the Camel servlet. | CamelServlet | String |
Chapter 50. Slack
Both producer and consumer are supported
The Slack component allows you to connect to an instance of Slack and deliver a message contained in the message body via a pre-established Slack incoming webhook.
Maven users will need to add the following dependency to their pom.xml
for this component:
<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-slack</artifactId> <version>{CamelSBVersion}</version> <!-- use the same version as your Camel core version --> </dependency>
50.1. URI format
To send a message to a channel:
slack:#channel[?options]
To send a direct message to a Slack user:
slack:@userID[?options]
50.2. Configuring Options
Camel components are configured on two separate levels:
- component level
- endpoint level
50.2.1. Configuring Component Options
The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth.
Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all.
Configuring components can be done with the Component DSL, in a configuration file (application.properties|yaml), or directly with Java code.
50.2.2. Configuring Endpoint Options
Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both.
Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints.
A good practice when configuring options is to use Property Placeholders, which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse.
The following two sections lists all the options, firstly for the component followed by the endpoint.
50.3. Component Options
The Slack component supports 5 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
bridgeErrorHandler (consumer) | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | boolean |
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
autowiredEnabled (advanced) | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | boolean |
token (token) | The token to use. | String | |
webhookUrl (webhook) | The incoming webhook URL. | String |
50.4. Endpoint Options
The Slack endpoint is configured using URI syntax:
slack:channel
with the following path and query parameters:
50.4.1. Path Parameters (1 parameter)
Name | Description | Default | Type |
---|---|---|---|
channel (common) | Required The channel name (syntax #name) or Slack user (syntax @userName) to send a message directly to a user. | String |
50.4.2. Query Parameters (29 parameters)
Name | Description | Default | Type |
---|---|---|---|
token (common) | The token to use. | String | |
bridgeErrorHandler (consumer) | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | boolean |
conversationType (consumer) | Type of conversation. Enum values: PUBLIC_CHANNEL, PRIVATE_CHANNEL, MPIM, IM. | PUBLIC_CHANNEL | ConversationType |
maxResults (consumer) | The Max Result for the poll. | 10 | String |
naturalOrder (consumer) | Create exchanges in natural order (oldest to newest) or not. | false | boolean |
sendEmptyMessageWhenIdle (consumer) | If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead. | false | boolean |
serverUrl (consumer) | The Server URL of the Slack instance. | String | |
exceptionHandler (consumer (advanced)) | To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. | ExceptionHandler | |
exchangePattern (consumer (advanced)) | Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly, InOut, InOptionalOut. | ExchangePattern | |
pollStrategy (consumer (advanced)) | A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel. | PollingConsumerPollStrategy | |
iconEmoji (producer) | Deprecated Use a Slack emoji as an avatar. | String | |
iconUrl (producer) | Deprecated The avatar that the component will use when sending message to a channel or user. | String | |
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
username (producer) | Deprecated This is the username that the bot will have when sending messages to a channel or user. | String | |
webhookUrl (producer) | The incoming webhook URL. | String | |
backoffErrorThreshold (scheduler) | The number of subsequent error polls (failed due to some error) that should happen before the backoffMultiplier should kick-in. | int | |
backoffIdleThreshold (scheduler) | The number of subsequent idle polls that should happen before the backoffMultiplier should kick-in. | int | |
backoffMultiplier (scheduler) | To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the next actual attempt is happening again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured. | int | |
delay (scheduler) | Milliseconds before the next poll. | 500 | long |
greedy (scheduler) | If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the previous run polled 1 or more messages. | false | boolean |
initialDelay (scheduler) | Milliseconds before the first poll starts. | 1000 | long |
repeatCount (scheduler) | Specifies a maximum limit of number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever. | 0 | long |
runLoggingLevel (scheduler) | The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that. Enum values: TRACE, DEBUG, INFO, WARN, ERROR, OFF. | TRACE | LoggingLevel |
scheduledExecutorService (scheduler) | Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool. | ScheduledExecutorService | |
scheduler (scheduler) | To use a cron scheduler from either camel-spring or camel-quartz component. Use value spring or quartz for built in scheduler. | none | Object |
schedulerProperties (scheduler) | To configure additional properties when using a custom scheduler or any of the Quartz, Spring based scheduler. | Map | |
startScheduler (scheduler) | Whether the scheduler should be auto started. | true | boolean |
timeUnit (scheduler) | Time unit for initialDelay and delay options. Enum values: NANOSECONDS, MICROSECONDS, MILLISECONDS, SECONDS, MINUTES, HOURS, DAYS. | MILLISECONDS | TimeUnit |
useFixedDelay (scheduler) | Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details. | true | boolean |
50.5. Configuring in Spring XML
The Slack component with XML must be configured as a Spring or Blueprint bean that contains the incoming webhook URL or the app token for the integration as a parameter.
<bean id="slack" class="org.apache.camel.component.slack.SlackComponent"> <property name="webhookUrl" value="https://hooks.slack.com/services/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"/> <property name="token" value="xoxb-12345678901-1234567890123-xxxxxxxxxxxxxxxxxxxxxxxx"/> </bean>
For Java, you can configure this using Java code, for example as in the sketch below.
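A minimal sketch (the webhook URL is a placeholder; this assumes you have access to the CamelContext before the routes are started):

// org.apache.camel.component.slack.SlackComponent
SlackComponent slack = new SlackComponent();
slack.setWebhookUrl("<YOUR_WEBHOOK_URL>");
camelContext.addComponent("slack", slack);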
50.6. Example
A CamelContext with Blueprint could look as follows:
<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0" default-activation="lazy">

    <bean id="slack" class="org.apache.camel.component.slack.SlackComponent">
        <property name="webhookUrl" value="https://hooks.slack.com/services/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"/>
    </bean>

    <camelContext xmlns="http://camel.apache.org/schema/blueprint">
        <route>
            <from uri="direct:test"/>
            <to uri="slack:#channel?iconEmoji=:camel:&amp;username=CamelTest"/>
        </route>
    </camelContext>

</blueprint>
50.7. Producer
You can now use a token to send a message instead of a webhook URL.
from("direct:test") .to("slack:#random?token=RAW(<YOUR_TOKEN>)");
You can now use the Slack API model to create blocks. You can read more about it here https://api.slack.com/block-kit.
public void testSlackAPIModelMessage() {
    Message message = new Message();
    message.setBlocks(Collections.singletonList(SectionBlock
        .builder()
        .text(MarkdownTextObject
            .builder()
            .text("*Hello from Camel!*")
            .build())
        .build()));

    template.sendBody(test, message);
}
50.8. Consumer
You can also use a consumer to read messages from a channel.
from("slack://general?token=RAW(<YOUR_TOKEN>)&maxResults=1") .to("mock:result");
In this way you’ll get the last message from the general channel. The consumer keeps track of the timestamp of the last message consumed, and on the next poll it will check from that timestamp.
You’ll need to create a Slack app and use it on your workspace.
Use the 'Bot User OAuth Access Token' as token for the consumer endpoint.
Add the corresponding history (channels:history or groups:history or mpim:history or im:history) and read (channels:read or groups:read or mpim:read or im:read) user token scope to your app to grant it permission to view messages in the corresponding channel. You will need to use the conversationType option to set it up too (PUBLIC_CHANNEL, PRIVATE_CHANNEL, MPIM, IM).
The naturalOrder option allows consuming messages from the oldest to the newest. Otherwise you would get the newest first and consume backward (message 3 ⇒ message 2 ⇒ message 1).
You can use the conversationType option to read history and messages from a channel that is not only public (PUBLIC_CHANNEL, PRIVATE_CHANNEL, MPIM, IM).
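For example, a minimal sketch (the channel name and token are placeholders) that consumes a private channel from the oldest message to the newest:

from("slack://private-channel?token=RAW(<YOUR_TOKEN>)&conversationType=PRIVATE_CHANNEL&naturalOrder=true&maxResults=5")
    .to("mock:result");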
50.9. Spring Boot Auto-Configuration
When using slack with Spring Boot make sure to use the following Maven dependency to have support for auto configuration:
<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-slack-starter</artifactId> </dependency>
The component supports 6 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
camel.component.slack.autowired-enabled | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | Boolean |
camel.component.slack.bridge-error-handler | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | Boolean |
camel.component.slack.enabled | Whether to enable auto configuration of the slack component. This is enabled by default. | Boolean | |
camel.component.slack.lazy-start-producer | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | Boolean |
camel.component.slack.token | The token to use. | String | |
camel.component.slack.webhook-url | The incoming webhook URL. | String |
Chapter 51. SQL
Both producer and consumer are supported
The SQL component allows you to work with databases using JDBC queries. The difference between this component and the JDBC component is that, in the case of SQL, the query is a property of the endpoint, and it uses the message payload as parameters passed to the query.
This component uses spring-jdbc
behind the scenes for the actual SQL handling.
Maven users will need to add the following dependency to their pom.xml
for this component:
<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-sql</artifactId> <version>{CamelSBVersion}</version> <!-- use the same version as your Camel core version --> </dependency>
The SQL component also supports:
- a JDBC based repository for the Idempotent Consumer EIP pattern. See further below.
- a JDBC based repository for the Aggregator EIP pattern. See further below.
51.1. URI format
This component can be used as a Transactional Client.
The SQL component uses the following endpoint URI notation:
sql:select * from table where id=# order by name[?options]
You can use named parameters by using the :#name_of_the_parameter style as shown:
sql:select * from table where id=:#myId order by name[?options]
When using named parameters, Camel will look up the names in the given precedence:
1. from the message body, if the body is a java.util.Map
2. from the message headers
If a named parameter cannot be resolved, then an exception is thrown.
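For example, a minimal sketch (the endpoint names are illustrative; the query is the one shown above) where the named parameter is resolved from a Map body:

from("direct:query")
    .process(exchange -> {
        // :#myId is looked up in this Map body (it could also come from a header named myId)
        exchange.getIn().setBody(java.util.Collections.singletonMap("myId", 123));
    })
    .to("sql:select * from table where id=:#myId order by name");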
You can use Simple expressions as parameters as shown:
sql:select * from table where id=:#${exchangeProperty.myId} order by name[?options]
Notice that the standard ? symbol that denotes the parameters to an SQL query is substituted with the # symbol, because the ? symbol is used to specify options for the endpoint. The ? symbol replacement can be configured on a per-endpoint basis.
You can externalize your SQL queries to files in the classpath or file system as shown:
sql:classpath:sql/myquery.sql[?options]
And the myquery.sql file is in the classpath and is just plain text:
-- this is a comment
select * from table where id = :#${exchangeProperty.myId} order by name
In the file you can use multiple lines and format the SQL as you wish, and you can also use comments, such as the -- dash line.
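For example, a minimal sketch (the direct endpoint name and the id value are illustrative) that sets the exchange property used by the query and then executes the externalized SQL:

from("direct:find")
    // :#${exchangeProperty.myId} in myquery.sql reads this exchange property
    .setProperty("myId", constant(123))
    .to("sql:classpath:sql/myquery.sql");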
51.2. Configuring Options
Camel components are configured on two separate levels:
- component level
- endpoint level
51.2.1. Configuring Component Options
The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth.
Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all.
Configuring components can be done with the Component DSL, in a configuration file (application.properties|yaml), or directly with Java code.
51.2.2. Configuring Endpoint Options
Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both.
Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints.
A good practice when configuring options is to use Property Placeholders, which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse.
The following two sections lists all the options, firstly for the component followed by the endpoint.
51.3. Component Options
The SQL component supports 5 options, which are listed below.
Name | Description | Default | Type |
---|---|---|---|
dataSource (common) | Autowired Sets the DataSource to use to communicate with the database. | DataSource | |
bridgeErrorHandler (consumer) | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | boolean |
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
autowiredEnabled (advanced) | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. | true | boolean |
usePlaceholder (advanced) | Sets whether to use placeholder and replace all placeholder characters with the ? sign in the SQL queries. This option is default true. | true | boolean |
51.4. Endpoint Options
The SQL endpoint is configured using URI syntax:
sql:query
with the following path and query parameters:
51.4.1. Path Parameters (1 parameter)
Name | Description | Default | Type |
---|---|---|---|
query (common) | Required Sets the SQL query to perform. You can externalize the query by using file: or classpath: as prefix and specify the location of the file. | String |
51.4.2. Query Parameters (45 parameters)
Name | Description | Default | Type |
---|---|---|---|
allowNamedParameters (common) | Whether to allow using named parameters in the queries. | true | boolean |
dataSource (common) | Autowired Sets the DataSource to use to communicate with the database at endpoint level. | DataSource | |
outputClass (common) | Specify the full package and class name to use as conversion when outputType=SelectOne. | String | |
outputHeader (common) | Store the query result in a header instead of the message body. By default, outputHeader == null and the query result is stored in the message body, any existing content in the message body is discarded. If outputHeader is set, the value is used as the name of the header to store the query result and the original message body is preserved. | String | |
outputType (common) | Make the output of consumer or producer to SelectList as List of Map, or SelectOne as single Java object in the following way: a) If the query has only a single column, then that JDBC Column object is returned (such as SELECT COUNT(*) FROM PROJECT, which will return a Long object). b) If the query has more than one column, then it will return a Map of that result. c) If the outputClass is set, then it will convert the query result into a Java bean object by calling all the setters that match the column names. It will assume your class has a default constructor to create instance with. d) If the query resulted in more than one row, it throws a non-unique result exception. StreamList streams the result of the query using an Iterator. This can be used with the Splitter EIP in streaming mode to process the ResultSet in streaming fashion. Enum values: SelectOne, SelectList, StreamList. | SelectList | SqlOutputType |
separator (common) | The separator to use when parameter values is taken from message body (if the body is a String type), to be inserted at # placeholders. Notice if you use named parameters, then a Map type is used instead. The default value is comma. | , | char |
breakBatchOnConsumeFail (consumer) | Sets whether to break batch if onConsume failed. | false | boolean |
bridgeErrorHandler (consumer) | Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. | false | boolean |
expectedUpdateCount (consumer) | Sets an expected update count to validate when using onConsume. | -1 | int |
maxMessagesPerPoll (consumer) | Sets the maximum number of messages to poll. | int | |
onConsume (consumer) | After processing each row then this query can be executed, if the Exchange was processed successfully, for example to mark the row as processed. The query can have parameter. | String | |
onConsumeBatchComplete (consumer) | After processing the entire batch, this query can be executed to bulk update rows etc. The query cannot have parameters. | String | |
onConsumeFailed (consumer) | After processing each row then this query can be executed, if the Exchange failed, for example to mark the row as failed. The query can have parameter. | String | |
routeEmptyResultSet (consumer) | Sets whether empty resultset should be allowed to be sent to the next hop. Defaults to false. So the empty resultset will be filtered out. | false | boolean |
sendEmptyMessageWhenIdle (consumer) | If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead. | false | boolean |
transacted (consumer) | Enables or disables transaction. If enabled then if processing an exchange failed then the consumer breaks out processing any further exchanges to cause a rollback eager. | false | boolean |
useIterator (consumer) | Sets how resultset should be delivered to route. Indicates delivery as either a list or individual object. defaults to true. | true | boolean |
exceptionHandler (consumer (advanced)) | To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. | ExceptionHandler | |
exchangePattern (consumer (advanced)) | Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly, InOut, InOptionalOut. | ExchangePattern | |
pollStrategy (consumer (advanced)) | A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel. | PollingConsumerPollStrategy | |
processingStrategy (consumer (advanced)) | Allows to plugin to use a custom org.apache.camel.component.sql.SqlProcessingStrategy to execute queries when the consumer has processed the rows/batch. | SqlProcessingStrategy | |
batch (producer) | Enables or disables batch mode. | false | boolean |
lazyStartProducer (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. | false | boolean |
noop (producer) | If set, will ignore the results of the SQL query and use the existing IN message as the OUT message for the continuation of processing. | false | boolean |
useMessageBodyForSql (producer) | Whether to use the message body as the SQL and then headers for parameters. If this option is enabled then the SQL in the uri is not used. Note that query parameters in the message body are represented by a question mark instead of a # symbol. | false | boolean |
alwaysPopulateStatement (advanced) | If enabled then the populateStatement method from org.apache.camel.component.sql.SqlPrepareStatementStrategy is always invoked, also if there is no expected parameters to be prepared. When this is false then the populateStatement is only invoked if there is 1 or more expected parameters to be set; for example this avoids reading the message body/headers for SQL queries with no parameters. | false | boolean |
parametersCount (advanced) | If set greater than zero, then Camel will use this count value of parameters to replace instead of querying via JDBC metadata API. This is useful if the JDBC vendor could not return correct parameters count, then user may override instead. | int | |
placeholder (advanced) | Specifies a character that will be replaced to ? in the SQL query. Notice that it is a simple String.replaceAll() operation and no SQL parsing is involved (quoted strings will also change). | # | String |
prepareStatementStrategy (advanced) | Allows to plugin to use a custom org.apache.camel.component.sql.SqlPrepareStatementStrategy to control preparation of the query and prepared statement. | SqlPrepareStatementStrategy | |
templateOptions (advanced) | Configures the Spring JdbcTemplate with the key/values from the Map. | Map | |
usePlaceholder (advanced) | Sets whether to use placeholder and replace all placeholder characters with the ? sign in the SQL queries. | true | boolean |
backoffErrorThreshold (scheduler) | The number of subsequent error polls (failed due to some error) that should happen before the backoffMultiplier should kick-in. | int | |
backoffIdleThreshold (scheduler) | The number of subsequent idle polls that should happen before the backoffMultiplier should kick-in. | int | |
backoffMultiplier (scheduler) | To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the next actual attempt is happening again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured. | int | |
delay (scheduler) | Milliseconds before the next poll. | 500 | long |
greedy (scheduler) | If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the previous run polled 1 or more messages. | false | boolean |
initialDelay (scheduler) | Milliseconds before the first poll starts. | 1000 | long |
repeatCount (scheduler) | Specifies a maximum limit of number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever. | 0 | long |
runLoggingLevel (scheduler) | The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that. Enum values: |