Chapter 7. Upgrading Fuse applications on Karaf standalone


To upgrade your Fuse applications on Karaf, review the Camel migration considerations and then update your project’s Maven dependencies, as described in the following sections.

Typically, you use Maven to build Fuse applications. Maven is a free and open source build tool from Apache. Maven configuration is defined in a Fuse application project’s pom.xml file. When you build a Fuse project, Maven by default searches external repositories and downloads the required artifacts. You add a dependency for the Fuse Bill of Materials (BOM) to the pom.xml file so that the Maven build process picks up the correct set of Fuse-supported artifacts.

The following sections provide information on Maven dependencies and how to update them in your Fuse projects.

7.1. Camel migration considerations

Creating a connection to MongoDB using the MongoClients factory

From Fuse 7.12, use com.mongodb.client.MongoClient instead of com.mongodb.MongoClient to create a connection to MongoDB (note the extra .client sub-package in the full path).

If any of your existing Fuse applications use the camel-mongodb component, you must:

  • Update your applications to create the connection bean as a com.mongodb.client.MongoClient instance.

    For example, create a connection to MongoDB as follows:

    import com.mongodb.client.MongoClient;
    import com.mongodb.client.MongoClients;

    You can then create the MongoClient bean as shown in the following example (a more complete sketch follows this list):

    return MongoClients.create("mongodb://admin:password@192.168.99.102:32553");
  • Evaluate and, if needed, refactor any code related to the methods exposed by the MongoClient class.
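The following is a minimal sketch of such a connection bean, based on the fragments above. The class and method names are illustrative only; register the returned MongoClient instance in your bean registry (for example, Blueprint or Spring) under the name that your camel-mongodb endpoints reference, and replace the placeholder connection string with your own host, port, and credentials.

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;

public class MongoDbConnectionFactory {

    // Creates the connection bean as a com.mongodb.client.MongoClient instance.
    // The connection string is the placeholder value from the example above;
    // replace it with your own MongoDB host, port, and credentials.
    public static MongoClient createMongoClient() {
        return MongoClients.create("mongodb://admin:password@192.168.99.102:32553");
    }
}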

Camel 2.23

Red Hat Fuse uses Apache Camel 2.23. Consider the following Camel 2.22 and 2.23 updates when you upgrade.

Camel 2.22 updates

  • Camel has upgraded from Spring Boot v1 to v2 and therefore v1 is no longer supported.
  • Upgraded to Spring Framework 5. Camel should continue to work with Spring 4.3.x, but Spring 5.x will be the minimum supported Spring version in future releases.
  • Upgraded to Karaf 4.2. You may run Camel on Karaf 4.1 but we only officially support Karaf 4.2 in this release.
  • Optimized the toD DSL to reuse endpoints and producers for components where possible. For example, HTTP-based components now reuse producers (HTTP clients) when dynamic URIs send to the same host (see the sketch after this list).
  • The File2 consumer with read-lock idempotent/idempotent-changed can now be configured to delay the release tasks, which expands the window during which a file is regarded as in-process. This is useful in active/active cluster settings with a shared idempotent repository, to ensure that other nodes do not prematurely see a processed file as one they can process (only needed if you have readLockRemoveOnCommit=true).
  • You can now plug in a custom request/reply correlation ID manager implementation on the Netty4 producer in request/reply mode.
  • The Twitter component now uses extended mode by default to support tweets longer than 140 characters.
  • The Rest DSL producer now supports being configured in the REST configuration by using endpointProperties.
  • The Kafka component now supports HeaderFilterStrategy so that you can plug in custom implementations for controlling header mappings between Camel and Kafka messages.
  • The REST DSL now supports client request validation, which validates that the Content-Type/Accept headers are possible for the REST service.
  • Camel now has a Service Registry SPI, which allows you to register routes to a service registry (such as Consul, etcd, or ZooKeeper) by using a Camel implementation or Spring Cloud.
  • The SEDA component now has a default queue size of 1000 instead of unlimited.
  • The following noteworthy issues have been fixed:

    • Fixed a CXF continuation timeout issue with the camel-cxf consumer that could cause the consumer to return a response with data instead of triggering a timeout to the calling SOAP client.
    • Fixed an issue where the camel-cxf consumer did not release the UnitOfWork when using a robust one-way operation.
    • Fixed an issue where using AdviceWith with weave methods on onException, and similar constructs, did not work.
    • Fixed an issue where the Splitter in parallel processing and streaming mode could block while iterating the message body, when the iterator throws an exception in the first next() call.
    • Fixed the Kafka consumer to not auto-commit if autoCommitEnable=false.
    • Fixed an issue where the file consumer used markerFile as the default read-lock, when the default should have been none.
    • Fixed manual commit with Kafka to provide the current record offset rather than the previous one (and -1 for the first record).
    • Fixed an issue where the Content Based Router in the Java DSL might not resolve property placeholders in when predicates.
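The following is a minimal sketch of the toD optimization described above, assuming a hypothetical backend host and an orderId header; because the host part of the dynamic URI is static, Camel can reuse the same HTTP producer (client) for every message.

import org.apache.camel.builder.RouteBuilder;

public class DynamicHttpRoute extends RouteBuilder {
    @Override
    public void configure() {
        // toD resolves the target URI per message from the orderId header.
        // The host (backend.example.com) stays the same, so Camel 2.22 and later
        // reuse the same underlying HTTP client instead of creating one per URI.
        from("direct:invoke")
            .toD("http4://backend.example.com/orders/${header.orderId}");
    }
}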

Camel 2.23 updates

  • Upgraded to Spring Boot 2.1.
  • Additional component-level options can now be configured by using Spring Boot auto-configuration. These options are included in the Spring Boot component metadata JSON file descriptor for tooling assistance.
  • Added a documentation section that includes all the Spring Boot auto-configuration options for all the components, data formats, and languages.
  • All the Camel Spring Boot starter JARs now include a META-INF/spring-autoconfigure-metadata.properties file to optimize Spring Boot auto-configuration.
  • The Throttler now supports correlation groups based on a dynamic expression so that you can group messages into different throttled sets (see the sketch after this list).
  • The Hystrix EIP now allows inheritance for Camel’s error handler so that you can retry the entire Hystrix EIP block again if you have enabled error handling with redeliveries.
  • SQL and ElSql consumers now support dynamic query parameters in route form. Note that this feature is limited to calling beans by using simple expressions.
  • The swagger-restdsl Maven plugin now supports generating DTO model classes from the Swagger specification file.
  • The following noteworthy issues have been fixed:

    • The Aggregator2 has been fixed to not propagate control headers that force completion of all groups, so that forced completion does not happen again if another Aggregator EIP is used later during routing.
    • Fixed an issue where the Tracer did not work if redelivery was activated in the error handler.
    • The built-in type converter for XML Documents could output parsing errors to stdout; this has been fixed so that errors are output by using the logging API.
    • Fixed an issue where SFTP writing files by using the charset option did not work if the message body was streaming-based.
    • Fixed the Zipkin root id to not be reused when routing over multiple routes, so that they are grouped together into a single parent span.
    • Fixed a bug in the optimized toD with HTTP endpoints that occurred when the hostname contains an IP address with digits.
    • Fixed an issue with RabbitMQ request/reply over temporary queues when using manual acknowledge mode, where the temporary queue was not acknowledged (which is needed to make request/reply possible).
    • Fixed various HTTP consumer components that might not return all allowed HTTP verbs in the Allow header for OPTIONS requests (such as when using the REST DSL).
    • Fixed a thread-safety issue with FluentProducerTemplate.
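The following is a minimal sketch of the Throttler correlation groups described above, assuming hypothetical endpoint URIs and a region header; each correlation group is throttled independently instead of sharing one global limit. The exact DSL may vary slightly between Camel versions.

import org.apache.camel.builder.RouteBuilder;

public class ThrottlePerRegionRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Messages are grouped by the (hypothetical) region header; each group
        // is limited to at most 10 messages per second, rather than applying
        // a single limit across all messages.
        from("seda:incoming")
            .throttle(10).timePeriodMillis(1000)
                .correlationExpression(header("region"))
            .to("direct:process");
    }
}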

7.2. About Maven dependencies

The purpose of a Maven Bill of Materials (BOM) file is to provide a curated set of Maven dependency versions that work well together, saving you from having to define versions individually for every Maven artifact.

There is a dedicated BOM file for each container in which Fuse runs.

Note

You can find these BOM files here: https://github.com/jboss-fuse/redhat-fuse. Alternatively, go to the latest Release Notes for information on BOM file updates.

The Fuse BOM offers the following advantages:

  • Defines versions for Maven dependencies, so that you do not need to specify the version when you add a dependency to your pom.xml file.
  • Defines a set of curated dependencies that are fully tested and supported for a specific version of Fuse.
  • Simplifies upgrades of Fuse.

Important

Only the set of dependencies defined by a Fuse BOM is supported by Red Hat.

7.3. Updating your Fuse project’s Maven dependencies

To upgrade your Fuse application for Karaf, update your project’s Maven dependencies.

Procedure

  1. Open your project’s pom.xml file.
  2. Add a dependencyManagement element in your project’s pom.xml file (or, possibly, in a parent pom.xml file), as shown in the following example:

    <?xml version="1.0" encoding="UTF-8" standalone="no"?>
    <project ...>
      ...
      <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    
        <!-- configure the versions you want to use here -->
        <fuse.version>7.13.0.fuse-7_13_0-00012-redhat-00001</fuse.version>
    
      </properties>
    
      <dependencyManagement>
        <dependencies>
          <dependency>
            <groupId>org.jboss.redhat-fuse</groupId>
            <artifactId>fuse-karaf-bom</artifactId>
            <version>${fuse.version}</version>
            <type>pom</type>
            <scope>import</scope>
          </dependency>
        </dependencies>
      </dependencyManagement>
      ...
    </project>
  3. Save your pom.xml file.

After you specify the BOM as a dependency in your pom.xml file, you can add Maven dependencies to your pom.xml file without specifying the artifact version. For example, to add a dependency for the camel-velocity component, add the following XML fragment to the dependencies element in your pom.xml file:

<dependency>
  <groupId>org.apache.camel</groupId>
  <artifactId>camel-velocity</artifactId>
  <scope>provided</scope>
</dependency>

Note how the version element is omitted from this dependency definition.
