Red Hat Camel K is deprecated
Red Hat Camel K is deprecated and the End of Life date for this product is June 30, 2025. For help migrating to the current go-to solution, Red Hat build of Apache Camel, see the Migration Guide.

Getting Started with Camel K
Develop and run your first Camel K application
Preface
Making open source more inclusive
Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.
Chapter 1. Introduction to Camel K
This chapter introduces the concepts, features, and cloud-native architecture provided by Red Hat Integration - Camel K.
1.1. Camel K overview
Red Hat Integration - Camel K is a lightweight integration framework built from Apache Camel K that runs natively in the cloud on OpenShift. Camel K is specifically designed for serverless and microservice architectures. You can use Camel K to instantly run your integration code written in Camel Domain Specific Language (DSL) directly on OpenShift. Camel K is a subproject of the Apache Camel open source community: https://github.com/apache/camel-k.
Camel K is implemented in the Go programming language and uses the Kubernetes Operator SDK to automatically deploy integrations in the cloud. For example, this includes automatically creating services and routes on OpenShift. This provides much faster turnaround times when deploying and redeploying integrations in the cloud, often a few seconds instead of minutes.
The Camel K runtime provides significant performance optimizations. The Quarkus cloud-native Java framework is enabled by default to provide faster startup times, and lower memory and CPU footprints. When running Camel K in developer mode, you can make live updates to your integration DSL and view results instantly in the cloud on OpenShift, without waiting for your integration to redeploy.
Using Camel K with OpenShift Serverless and Knative Serving, containers are created only as needed and are automatically scaled up and down based on load, including down to zero. This reduces cost by removing the overhead of server provisioning and maintenance and enables you to focus on application development instead.
Using Camel K with OpenShift Serverless and Knative Eventing, you can manage how components in your system communicate in an event-driven architecture for serverless applications. This provides flexibility and creates efficiencies through decoupled relationships between event producers and consumers using a publish-subscribe or event-streaming model.
1.2. Camel K features
Camel K includes the following main platforms and features:
1.2.1. Platform and component versions
- OpenShift Container Platform 4.13, 4.14
- OpenShift Serverless 1.31.1
- Red Hat Build of Quarkus 2.13.8.Final-redhat-00006
- Red Hat Camel Extensions for Quarkus 2.13.3.redhat-00008
- Apache Camel K 1.10.5.redhat-00002
- Apache Camel 3.18.6.redhat-00007
- OpenJDK 11
1.2.2. Camel K features
- Knative Serving for autoscaling and scale-to-zero
- Knative Eventing for event-driven architectures
- Performance optimizations using Quarkus runtime by default
- Camel integrations written in Java or YAML DSL
- Development tooling with Visual Studio Code
- Monitoring of integrations using Prometheus in OpenShift
- Quickstart tutorials
- Kamelet Catalog of connectors to external systems such as AWS, Jira, and Salesforce
The following diagram shows a simplified view of the Camel K cloud-native architecture:

1.2.3. Kamelets
Kamelets hide the complexity of connecting to external systems behind a simple interface, which contains all the information needed to instantiate them, even for users who are not familiar with Camel.
Kamelets are implemented as custom resources that you can install on an OpenShift cluster and use in Camel K integrations. Kamelets are route templates that use Camel components designed to connect to external systems without requiring deep understanding of the component. Kamelets abstract the details of connecting to external systems. You can also combine Kamelets to create complex Camel integrations, just like using standard Camel components.
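For example, the following is a minimal sketch of a Camel K YAML integration that consumes from a Kamelet endpoint; it assumes the timer-source Kamelet from the Kamelet Catalog is installed on the cluster:

- from:
    uri: "kamelet:timer-source?message=Hello from a Kamelet"
    steps:
      - to: "log:info"

Running such an integration logs the message produced by the Kamelet, without the integration author configuring the underlying timer component directly.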
1.3. Camel K development tooling
Camel K provides development tooling extensions for Visual Studio (VS) Code, Red Hat CodeReady Workspaces, and Eclipse Che. The Camel-based tooling extensions include features such as automatic completion of Camel DSL code, Camel K modeline configuration, and Camel K traits.
The following VS Code development tooling extensions are available:
VS Code Extension Pack for Apache Camel by Red Hat
- Tooling for Apache Camel K extension
- Language Support for Apache Camel extension
- Debug Adapter for Apache Camel K
- Additional extensions for OpenShift, Java and more
For details on how to set up these VS Code extensions for Camel K, see Setting up your Camel K development environment.
- The VS Code Language Support for Apache Camel plugin, which is part of the Camel extension pack, provides content assist when editing Camel routes and `application.properties` files.
- To install a supported Camel K tooling extension for VS Code to create, run, and operate Camel K integrations on OpenShift, see the VS Code Tooling for Apache Camel K by Red Hat extension.
- To install a supported Camel debug tool extension for VS Code to debug Camel integrations written in Java, YAML, or XML locally, see Debug Adapter for Apache Camel by Red Hat.
- For details about configurations and components to use the developer tool with specific product versions, see Camel K Supported Configurations and Camel K Component Details.
Note: The Camel K VS Code extensions are community features. Eclipse Che also provides these features using the vscode-camelk plug-in. For more information about the scope of development support, see Development Support Scope of Coverage.
1.4. Camel K distributions
Distribution | Description | Location
---|---|---
Operator image | Container image for the Red Hat Integration - Camel K Operator | `registry.redhat.io/integration/camel-k-rhel8-operator`
Maven repository | Maven artifacts for Red Hat Integration - Camel K. Red Hat provides Maven repositories that host the content we ship with our products; these repositories are available to download from the Software Downloads page. Installation of Red Hat Integration - Camel K in a disconnected environment (offline mode) is not supported. | Software Downloads page
Source code | Source code for Red Hat Integration - Camel K | Software Downloads page
Quickstarts | Quick start tutorials |
You must have a subscription for Red Hat Integration - Camel K and be logged into the Red Hat Customer Portal to access the Red Hat Integration - Camel K distributions.
Chapter 2. Preparing your OpenShift cluster
This chapter explains how to install Red Hat Integration - Camel K and OpenShift Serverless on OpenShift, and how to install the required Camel K and OpenShift Serverless command-line client tools in your development environment.
2.1. Installing Camel K
You can install the Red Hat Integration - Camel K Operator on your OpenShift cluster from the OperatorHub. The OperatorHub is available from the OpenShift Container Platform web console and provides an interface for cluster administrators to discover and install Operators.
After you install the Camel K Operator, you can install the Camel K CLI tool for command line access to all Camel K features.
Prerequisites
You have access to an OpenShift 4.6 (or later) cluster with the correct access level, the ability to create projects and install operators, and the ability to install CLI tools on your local system.
Note: You do not need to create a pull secret when installing Camel K from the OpenShift OperatorHub. The Camel K Operator automatically reuses the OpenShift cluster-level authentication to pull the Camel K image from `registry.redhat.io`.
- You installed the OpenShift CLI tool (`oc`) so that you can interact with the OpenShift cluster at the command line. For details on how to install the OpenShift CLI, see Installing the OpenShift CLI.
Procedure
- In the OpenShift Container Platform web console, log in by using an account with cluster administrator privileges.
Create a new OpenShift project:
- In the left navigation menu, click Home > Project > Create Project.
- Enter a project name, for example, `my-camel-k-project`, and then click Create.
- In the left navigation menu, click Operators > OperatorHub.
- In the Filter by keyword text box, type `Camel K` and then click the Red Hat Integration - Camel K Operator card.
- Read the information about the operator and then click Install. The Operator installation page opens.
Select the following subscription settings:
- Update Channel > latest
Choose one of the following two options:
- Installation Mode > A specific namespace on the cluster > my-camel-k-project
- Installation Mode > All namespaces on the cluster (default) > openshift-operators
Note: If you do not choose one of these two options, the system defaults to a global namespace on the cluster, leading to the openshift-operators namespace.
Approval Strategy > Automatic
Note: The Installation Mode > All namespaces on the cluster and Approval Strategy > Manual settings are also available if required by your environment.
- Click Install, and wait a few moments until the Camel K Operator is ready for use.
Download and install the Camel K CLI tool:
- From the Help menu (?) at the top of the OpenShift web console, select Command line tools.
- Scroll down to the kamel - Red Hat Integration - Camel K - Command Line Interface section.
- Click the link to download the binary for your local operating system (Linux, Mac, Windows).
- Unzip and install the CLI in your system path.
To verify that you can access the Camel K CLI, open a command window and then type the following:
kamel --help
This command shows information about Camel K CLI commands.
If you uninstall the Camel K operator from OperatorHub by using OLM, the CRDs are not removed. To revert to a previous Camel K operator, you must remove the CRDs manually by using the following command:
oc get crd -l app=camel-k -o name | xargs oc delete
Next step
(optional) Specifying Camel K resource limits
2.1.1. Consistent integration platform settings
You can create namespace-local IntegrationPlatform resources to override settings used in the operator.

These namespace-local platform settings must be derived from the IntegrationPlatform used by the operator by default. That is, only explicitly specified settings override the platform defaults used in the operator. Therefore, you must use a consistent platform settings hierarchy in which the global operator platform settings always represent the basis for user-specified platform settings.
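For illustration, the following is a minimal sketch of a namespace-local IntegrationPlatform that explicitly overrides only the build strategy; the namespace name is a placeholder, and all unspecified settings continue to be inherited from the global operator platform:

apiVersion: camel.apache.org/v1
kind: IntegrationPlatform
metadata:
  name: camel-k
  namespace: my-camel-k-project   # placeholder namespace
spec:
  build:
    buildStrategy: pod            # only this explicitly specified setting overrides the global default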
- In the case of a global Camel K operator, if its IntegrationPlatform specifies a non-default spec.build.buildStrategy, this value is also propagated to namespaced Camel K operators installed thereafter. The default value for buildStrategy is routine.
$ oc get itp camel-k -o yaml -n openshift-operators

apiVersion: camel.apache.org/v1
kind: IntegrationPlatform
metadata:
  labels:
    app: camel-k
  name: camel-k
  namespace: openshift-operators
spec:
  build:
    buildStrategy: pod
The buildStrategy parameter of the global operator IntegrationPlatform can be edited in one of the following ways:
From the Dashboard
- Administrator view: Operators → Installed Operators → in namespace openshift-operators (that is, globally installed operators), select Red Hat Integration - Camel K → Integration Platform → YAML
- Add or edit (if already present) spec.build.buildStrategy: pod
- Click Save
Using the following command. Any namespaced Camel K operators installed subsequently inherit the settings from the global IntegrationPlatform.
oc patch itp/camel-k -p '{"spec":{"build":{"buildStrategy": "pod"}}}' --type merge -n openshift-operators
2.1.2. Specifying Camel K resource limits
When you install Camel K, the OpenShift pod for Camel K does not have any limits set for CPU and memory (RAM) resources. If you want to define resource limits for Camel K, you must edit the Camel K subscription resource that was created during the installation process.
Prerequisites
- You have cluster administrator access to an OpenShift project in which the Camel K Operator is installed as described in Installing Camel K.
You know the resource limits that you want to apply to the Camel K subscription. For more information about resource limits, see the following documentation:
- Setting deployment resources in the OpenShift documentation.
- Managing Resources for Containers in the Kubernetes documentation.
Procedure
- Log in to the OpenShift Web console.
- Select Operators > Installed Operators > Operator Details > Subscription.
Select Actions > Edit Subscription.
The file for the subscription opens in the YAML editor.
Under the `spec` section, add a `config.resources` section and provide values for memory and cpu as shown in the following example:

spec:
  channel: default
  config:
    resources:
      limits:
        memory: 512Mi
        cpu: 500m
      requests:
        cpu: 200m
        memory: 128Mi
- Save your changes.
OpenShift updates the subscription and applies the resource limits that you specified.
We recommend that you install the Camel K Operator through the global installation only.
2.2. Installing OpenShift Serverless
You can install the OpenShift Serverless Operator on your OpenShift cluster from the OperatorHub. The OperatorHub is available from the OpenShift Container Platform web console and provides an interface for cluster administrators to discover and install Operators.
The OpenShift Serverless Operator supports both Knative Serving and Knative Eventing features. For more details, see Installing the OpenShift Serverless Operator.
Prerequisites
- You have cluster administrator access to an OpenShift project in which the Camel K Operator is installed.
- You installed the OpenShift CLI tool (`oc`) so that you can interact with the OpenShift cluster at the command line. For details on how to install the OpenShift CLI, see Installing the OpenShift CLI.
Procedure
- In the OpenShift Container Platform web console, log in by using an account with cluster administrator privileges.
- In the left navigation menu, click Operators > OperatorHub.
- In the Filter by keyword text box, enter `Serverless` to find the OpenShift Serverless Operator.
- Read the information about the Operator and then click Install to display the Operator subscription page.
Select the default subscription settings:
- Update Channel > Select the channel that matches your OpenShift version, for example, 4.16
- Installation Mode > All namespaces on the cluster
Approval Strategy > Automatic
Note: The Approval Strategy > Manual setting is also available if required by your environment.
- Click Install, and wait a few moments until the Operator is ready for use.
Install the required Knative components (Knative Serving and Knative Eventing) by using the steps in the OpenShift documentation.
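For reference, the following is a minimal sketch of the custom resources that trigger those installations; the operator.knative.dev/v1beta1 API version matches recent OpenShift Serverless releases and may differ in your environment:

apiVersion: operator.knative.dev/v1beta1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving    # namespace created by the Serverless Operator
---
apiVersion: operator.knative.dev/v1beta1
kind: KnativeEventing
metadata:
  name: knative-eventing
  namespace: knative-eventing

You can save these resources to a file and apply them with `oc apply -f <file>`.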
(Optional) Download and install the OpenShift Serverless CLI tool:
- From the Help menu (?) at the top of the OpenShift web console, select Command line tools.
- Scroll down to the kn - OpenShift Serverless - Command Line Interface section.
- Click the link to download the binary for your local operating system (Linux, Mac, Windows).
- Unzip and install the CLI in your system path.
To verify that you can access the `kn` CLI, open a command window and then type the following:

kn --help
This command shows information about OpenShift Serverless CLI commands.
For more details, see the OpenShift Serverless CLI documentation.
Additional resources
- Installing OpenShift Serverless in the OpenShift documentation
2.3. Configuring Maven repository for Camel K
For the Camel K operator, you can provide the Maven settings in a ConfigMap or a Secret.
Procedure
To create a `ConfigMap` from a file, run the following command:

oc create configmap maven-settings --from-file=settings.xml
The created `ConfigMap` can then be referenced in the `IntegrationPlatform` resource, from the `spec.build.maven.settings` field.

Example

apiVersion: camel.apache.org/v1
kind: IntegrationPlatform
metadata:
  name: camel-k
spec:
  build:
    maven:
      settings:
        configMapKeyRef:
          key: settings.xml
          name: maven-settings
Alternatively, you can edit the `IntegrationPlatform` resource directly to reference the ConfigMap that contains the Maven settings, by using the following command:

oc edit itp camel-k
Configuring CA certificates for remote Maven repositories
You can provide the CA certificates, used by the Maven commands to connect to the remote Maven repositories, in a Secret.
Procedure
Create a Secret from a file by using the following command:
oc create secret generic maven-ca-certs --from-file=ca.crt
Reference the created Secret in the `IntegrationPlatform` resource, from the `spec.build.maven.caSecret` field, as shown below.

apiVersion: camel.apache.org/v1
kind: IntegrationPlatform
metadata:
  name: camel-k
spec:
  build:
    maven:
      caSecret:
        key: ca.crt
        name: maven-ca-certs
2.4. Camel K offline
Camel K is designed to fit an "open world" cluster model: the default installation assumes that it can pull and push resources from the Internet. However, in certain domains or use cases this is a limitation. The following sections describe how to set up Camel K in an offline (disconnected, or air-gapped) cluster environment.
Requirements
- Install Camel K 1.10.7 in a disconnected OpenShift 4.14+ cluster in OLM mode.
- Run integrations in the same cluster by using a Maven repository manager to serve the Maven artifacts.
Out of scope
- Installation and management of a Maven repository manager.
- Mounting volumes with the Maven artifacts.
- Using the `--offline` Maven parameter.
- How to install or configure OCP in disconnected environments.
Assumptions
- An existing OCP 4.14+ cluster configured in a disconnected environment.
- An existing container image registry.
- An existing Maven repository manager.
- The command steps are executed on a Linux machine.
Prerequisites
- You are familiar with the Camel K network architecture, shown in the following diagram:

We can identify the components that require access to the Internet and treat them separately: the image registry and the Maven builds.
2.4.1. Container images registry
The container registry is the component in charge of hosting the container images that are built by the operator and used by the cluster to run the Camel applications. This component can be provided out of the box by the cluster, or it must be operated by you (see the guide on how to run your own registry).
As we are in a disconnected environment, we assume this component is accessible by the cluster (through an IP or URL). However, the cluster must be able to use the Camel K container image in order to install Camel K. You must ensure that the cluster registry has preloaded the Camel K container image, which should be similar to registry.redhat.io/integration/camel-k-rhel8-operator-bundle:1.10.7
Red Hat container images are listed in the Red Hat Ecosystem Catalog. On the Get this image tab, you can find the container image digest address in the Manifest List Digest field.
OpenShift provides documentation for mirroring container images. When mirroring the container images, you must include the following container images, which are required by Camel K during its operations. Note that in a disconnected cluster you must use the digest URLs, not the tags. For container images provided by Red Hat, visit the Red Hat Ecosystem Catalog, find the container image, and copy its digest URL.
- registry.redhat.io/integration/camel-k-rhel8-operator:1.10.7
- registry.redhat.io/integration/camel-k-rhel8-operator-bundle:1.10.7
- registry.redhat.io/quarkus/mandrel-23-rhel8:23.0
- registry.access.redhat.com/ubi8/openjdk-11:1.20
An example of a digest URL of Camel K 1.10.7: registry.redhat.io/integration/camel-k-rhel8-operator-bundle@sha256:a043af04c9b816f0dfd5db64ba69bae192d73dd726df83aaf2002559a111a786
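For example, a hypothetical mirroring invocation using that digest URL; the target registry host and repository path are assumptions for your environment:

oc image mirror \
  registry.redhat.io/integration/camel-k-rhel8-operator-bundle@sha256:a043af04c9b816f0dfd5db64ba69bae192d73dd726df83aaf2002559a111a786 \
  my-custom-registry/integration/camel-k-rhel8-operator-bundle:1.10.7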
If all of the above is in place, you are ready to pull from and push to the container registry in Camel K as well.
2.4.1.1. Creating your own CatalogSource
The only supported way to install the Camel K operator is from OperatorHub. Because we are in a disconnected environment, the container images from the redhat-operators CatalogSource cannot be downloaded. For that reason, we must create our own CatalogSource to install the Camel K operator from OperatorHub.

You can mirror the entire Operator catalog to obtain a CatalogSource. However, the following procedure shows how to set up a custom CatalogSource that contains only the Camel K bundle metadata container image.
Prerequisites
- Required tools
- Images accessible on your registry
- Camel K Operator and Camel K Operator Bundle images with their sha256 digests.
2.4.1.1.1. Creating your own Index Image Bundle (IIB)
First, we need to create our own IIB image with only one operator inside.

With a single-operator index, no operator upgrade is possible.

Setup
- Create the IIB locally with only one bundle (use the sha256 digest):
opm index add --bundles {CAMEL_K_OPERATOR_BUNDLE_WITH_SHA_TAG} --tag {YOUR_CONTAINER_STORAGE}/{USERNAME}/{IMAGE_NAME}:{TAG} --mode=semver
For example:
opm index add --bundles registry.redhat.io/integration/camel-k-rhel8-operator-bundle@sha256:a043af04c9b816f0dfd5db64ba69bae192d73dd726df83aaf2002559a111a786 --tag my-custom-registry/mygroup/ck-iib:1.10.7 --mode=semver
- Check if the image was created.
podman images
- Push to your registry.
podman push {YOUR_CONTAINER_STORAGE}/{USERNAME}/{IMAGE_NAME}:{TAG}
For example:
podman push my-custom-registry/mygroup/ck-iib:1.10.7
2.4.1.1.2. Creating your own CatalogSource
Create a YAML file, for example myCatalogSource.yaml, with the following content:

apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: {CATALOG_SOURCE_NAME}
  namespace: openshift-marketplace
spec:
  displayName: {NAME_WHICH_WILL_BE_DISPLAYED_IN_OCP}
  publisher: grpc
  sourceType: grpc
  image: {YOUR_CONTAINER_STORAGE}/{USERNAME}/{IMAGE_NAME}:{SHA_TAG}
For example:

apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: camel-k-source-1.10
  namespace: openshift-marketplace
spec:
  displayName: Camel K Offline
  publisher: grpc
  sourceType: grpc
  image: ec2-1-111-111-111.us-east-1.compute.amazonaws.com:5000/myrepository/ckiib@sha256:f67fc953b10729e49bf012329fbfb6352b91bbc7d4b1bcdf5779f6d31c397c5c
Log in to your OpenShift cluster with the oc tool:

oc login -u {USER} -p {PASS} https://api.{YOUR_CLUSTER_API_URL}:6443
Deploy CatalogSource to openshift-marketplace namespace:
oc apply -f myCatalogSource.yaml -n openshift-marketplace
Open your OpenShift web console, navigate to OperatorHub, select the Camel K Offline source, then on the right side, select "Red Hat Integration - Camel K" and install it.
2.4.2. Maven build configuration
This guide is a best-effort aid to help the final user create a Maven offline bundle and run Camel K in offline mode. However, because of the high degree of flexibility in the installation topology, we cannot provide any level of support, only guidance on the possible configurations to adopt. Also, given the quantity of third-party dependencies downloaded during the procedure, we cannot ensure any protection against possible CVEs affecting these third-party libraries. Use at your own discretion.
The procedure contains a script that resolves, downloads, and packages the entire set of Camel K Runtime artifacts and their transitive dependencies required by Maven to build and run the Camel integration.

It requires that the Maven version from which you run the script (likely your machine) is the same as the one used in the Camel K operator, that is, Maven 3.6.3. This ensures that the correct dependency versions are resolved.

The operator expects the dependencies to be owned by user 1001, so ensure that the script is executed by such a user to avoid Maven build failures due to privilege faults.

The output of the script is a tar.gz file containing the entire tree of dependencies expected by Maven, ready to be consumed by the target build system (that is, the Camel K operator).
It may not work in Quarkus native mode as the native build may require additional dependencies not available in the bundle.
2.4.2.1. Offliner script
The script is available in the Camel K GitHub repository. You can run it like this:

./offline_dependencies.sh
usage: ./script/offline_dependencies.sh -v <Camel K Runtime version> [optional parameters]
  -v <Camel K Runtime version> - Camel K Runtime version
  -m </usr/share/bin/mvn>      - Path to the mvn command
  -r <http://my-repo.com>      - URL address of the Maven repository manager
  -d </var/tmp/offline-1.2>    - Local directory to add the offline dependencies
  -s                           - Skip certificate validation
An example run:
./offline_dependencies.sh -v 1.15.6.redhat-00029 -r https://maven.repository.redhat.com/ga -d camel-k-offline -s
To find the correct Camel K Runtime version, look at the IntegrationPlatform/camel-k custom resource object in the OpenShift cluster and check the runtimeVersion field, for example:
oc get integrationplatform/camel-k -oyaml|grep runtimeVersion
It may take about 30 minutes to resolve the dependencies when going through a Maven proxy. All the packaged dependencies are available in a tar.gz file. It is a big file, as it contains all the transitive dependencies required by all the Camel components configured in the camel-k-catalog.
2.4.2.2. Upload dependencies to the Maven Proxy Manager
The best practice we suggest is to always use a Maven proxy, and this is also the case for an offline installation. Check your Maven repository manager documentation to verify how to upload dependencies using the file created in the section above.

When you build the integration route, the build should use the Maven proxy, which requires a custom Maven settings.xml configured to mirror all Maven repositories through the proxy.
Get the URL of the Maven proxy and set it in the url field, as in the example below:

<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 https://maven.apache.org/xsd/settings-1.0.0.xsd">
  <mirrors>
    <mirror>
      <id>local-central-mirror</id>
      <name>local-central-mirror</name>
      <mirrorOf>*</mirrorOf>
      <url>http://my-maven-proxy:8080/releases</url>
    </mirror>
  </mirrors>
</settings>
Create a ConfigMap from this Maven settings.xml:
kubectl create configmap local-maven-settings-offline --from-file=settings.xml=maven-settings-offline.xml
Now you must inform Camel K to use this settings.xml when building the integrations. If you have already installed Camel K, you can patch the IntegrationPlatform/camel-k (verify your environment for a custom name and namespace):
kubectl patch itp/camel-k --type=merge -p '{"spec": {"build": {"maven": {"settings": {"configMapKeyRef": {"key": "settings.xml", "name": "local-maven-settings-offline"}}}}}}'
Then you should be able to run the integration with `kamel run my-integration.java` and follow the camel-k-operator log:
kubectl logs -f `kubectl get pod -l app=camel-k -oname`
2.4.2.3. Troubleshooting
2.4.2.3.1. Errors downloading dependencies
Check whether the Maven repository manager is reachable from the camel-k-operator pod.

Open a shell in the camel-k-operator pod:
kubectl -n openshift-operators exec -i -t `kubectl -n openshift-operators get pod -l app=camel-k -oname` -- bash
Then use curl to try downloading a Maven artifact:
curl http://my-maven-proxy:8080/my-artifact.jar -o file
2.4.2.3.2. Dependency not found
It may happen that a specific Maven artifact is not found in the Maven proxy manager, because the offline script was unable to resolve and download the dependency. In that case, you must download that dependency yourself and upload it to the Maven proxy manager.
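For example, a hypothetical upload using the Maven deploy plugin; the artifact coordinates, file name, repository URL, and repository ID are placeholders, and most repository managers also provide their own upload UI or API:

mvn deploy:deploy-file \
  -Dfile=my-artifact-1.0.0.jar \
  -DgroupId=org.example \
  -DartifactId=my-artifact \
  -Dversion=1.0.0 \
  -Dpackaging=jar \
  -Durl=http://my-maven-proxy:8080/releases \
  -DrepositoryId=local-releases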
Chapter 3. Developing and running Camel K integrations
This chapter explains how to set up your development environment and how to develop and deploy simple Camel K integrations written in Java and YAML. It also shows how to use the `kamel` command line to manage Camel K integrations at runtime. For example, this includes running, describing, logging, and deleting integrations.
- Section 3.1, “Setting up your Camel K development environment”
- Section 3.2, “Developing Camel K integrations in Java”
- Section 3.3, “Developing Camel K integrations in YAML”
- Section 3.4, “Running Camel K integrations”
- Section 3.5, “Running Camel K integrations in development mode”
- Section 3.6, “Running Camel K integrations using modeline”
- Section 3.7, “Camel Runtimes (aka "sourceless" Integrations)”
- Section 3.8, “Importing existing Camel applications”
- Section 3.9, “Build”
- Section 3.10, “Promoting across environments”
3.1. Setting up your Camel K development environment
You must set up your environment with the recommended development tooling before you can automatically deploy the Camel K quick start tutorials. This section explains how to install the recommended Visual Studio (VS) Code IDE and the extensions that it provides for Camel K.
- The Camel K VS Code extensions are community features.
- VS Code is recommended for ease of use and the best developer experience of Camel K. This includes automatic completion of Camel DSL code and Camel K traits. However, you can manually enter your code and tutorial commands using your chosen IDE instead of VS Code.
Prerequisites
You must have access to an OpenShift cluster on which the Camel K Operator and OpenShift Serverless Operator are installed.
Procedure
Install VS Code on your development platform. For example, on Red Hat Enterprise Linux:
Install the required key and repository:
$ sudo rpm --import https://packages.microsoft.com/keys/microsoft.asc
$ sudo sh -c 'echo -e "[code]\nname=Visual Studio Code\nbaseurl=https://packages.microsoft.com/yumrepos/vscode\nenabled=1\ngpgcheck=1\ngpgkey=https://packages.microsoft.com/keys/microsoft.asc" > /etc/yum.repos.d/vscode.repo'
Update the cache and install the VS Code package:
$ yum check-update
$ sudo yum install code
For details on installing on other platforms, see the VS Code installation documentation.
- Enter the `code` command to launch the VS Code editor. For more details, see the VS Code command line documentation.
- Install the VS Code Camel Extension Pack, which includes the extensions required for Camel K. For example, in VS Code:
- In the left navigation bar, click Extensions.
- In the search box, enter Apache Camel.
Select the Extension Pack for Apache Camel by Red Hat, and click Install.
For more details, see the instructions for the Extension Pack for Apache Camel by Red Hat.
Additional resources
- VS Code Getting Started documentation
- VS Code Tooling for Apache Camel K by Red Hat extension
- VS Code Language Support for Apache Camel by Red Hat extension
- Apache Camel K and VS Code tooling example
- To upgrade your Camel application from Camel 3.x to 3.y, see the Camel 3.x Upgrade Guide.
3.2. Developing Camel K integrations in Java
This section shows how to develop a simple Camel K integration in Java DSL. Writing an integration in Java to be deployed using Camel K is the same as defining your routing rules in Camel. However, you do not need to build and package the integration as a JAR when using Camel K.
You can use any Camel component directly in your integration routes. Camel K automatically handles the dependency management and imports all the required libraries from the Camel catalog using code inspection.
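For instance, the following sketch (the HTTP endpoint URL is a hypothetical placeholder) combines the timer, HTTP, and log components; when you run it, Camel K inspects the code and imports the corresponding component libraries without any build configuration on your part:

import org.apache.camel.builder.RouteBuilder;

public class FetchStatus extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // Camel K code inspection detects the timer, https, and log
        // endpoints below and resolves their dependencies automatically.
        from("timer:status?period=10s")
            .to("https://example.com/api/status")   // hypothetical endpoint
            .to("log:info");
    }
}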
Prerequisites
- Setting up your Camel K development environment.
Procedure
Enter the `camel init` command to generate a simple Java integration file. For example:

$ camel init HelloCamelK.java
Open the generated integration file in your IDE and edit as appropriate. For example, the `HelloCamelK.java` integration automatically includes the Camel `timer` and `log` components to help you get started:

// camel-k: language=java

import org.apache.camel.builder.RouteBuilder;

public class HelloCamelK extends RouteBuilder {

  @Override
  public void configure() throws Exception {
      // Write your routes here, for example:
      from("timer:java?period=1s")
          .routeId("java")
          .setBody()
              .simple("Hello Camel K from ${routeId}")
          .to("log:info");
  }
}
Next steps
3.3. Developing Camel K integrations in YAML
This section explains how to develop a simple Camel K integration in YAML DSL. Writing an integration in YAML to be deployed using Camel K is the same as defining your routing rules in Camel.
You can use any Camel component directly in your integration routes. Camel K automatically handles the dependency management and imports all the required libraries from the Camel catalog using code inspection.
Prerequisites
- Setting up your Camel K development environment.
Procedure
Enter the `camel init` command to generate a simple YAML integration file. For example:

$ camel init hello.camelk.yaml
Open the generated integration file in your IDE and edit as appropriate. For example, the `hello.camelk.yaml` integration automatically includes the Camel `timer` and `log` components to help you get started:

# Write your routes here, for example:
- from:
    uri: "timer:yaml"
    parameters:
      period: "1s"
    steps:
      - set-body:
          constant: "Hello Camel K from yaml"
      - to: "log:info"
3.4. Running Camel K integrations
You can run Camel K integrations in the cloud on your OpenShift cluster from the command line by using the `kamel run` command.
Prerequisites
- Setting up your Camel K development environment.
- You must already have a Camel integration written in Java or YAML DSL.
Procedure
Log into your OpenShift cluster using the `oc` client tool, for example:

$ oc login --token=my-token --server=https://my-cluster.example.com:6443
Ensure that the Camel K Operator is running, for example:
$ oc get pod
NAME                               READY   STATUS    RESTARTS   AGE
camel-k-operator-86b8d94b4-pk7d6   1/1     Running   0          6m28s
Enter the `kamel run` command to run your integration in the cloud on OpenShift. For example:

Java example

$ kamel run HelloCamelK.java
integration "hello-camel-k" created
YAML example
$ kamel run hello.camelk.yaml
integration "hello" created
Enter the `kamel get` command to check the status of the integration:

$ kamel get
NAME    PHASE          KIT
hello   Building Kit   myproject/kit-bq666mjej725sk8sn12g
When the integration runs for the first time, Camel K builds the integration kit for the container image, which downloads all the required Camel modules and adds them to the image classpath.
Enter `kamel get` again to verify that the integration is running:

$ kamel get
NAME    PHASE     KIT
hello   Running   myproject/kit-bq666mjej725sk8sn12g
Enter the `kamel log` command to print the log to `stdout`:

$ kamel log hello
[1] 2021-08-11 17:58:40,573 INFO  [org.apa.cam.k.Runtime] (main) Apache Camel K Runtime 1.7.1.fuse-800025-redhat-00001
[1] 2021-08-11 17:58:40,653 INFO  [org.apa.cam.qua.cor.CamelBootstrapRecorder] (main) bootstrap runtime: org.apache.camel.quarkus.main.CamelMainRuntime
[1] 2021-08-11 17:58:40,844 INFO  [org.apa.cam.k.lis.SourcesConfigurer] (main) Loading routes from: SourceDefinition{name='camel-k-embedded-flow', language='yaml', location='file:/etc/camel/sources/camel-k-embedded-flow.yaml', }
[1] 2021-08-11 17:58:41,216 INFO  [org.apa.cam.imp.eng.AbstractCamelContext] (main) Routes startup summary (total:1 started:1)
[1] 2021-08-11 17:58:41,217 INFO  [org.apa.cam.imp.eng.AbstractCamelContext] (main) Started route1 (timer://yaml)
[1] 2021-08-11 17:58:41,217 INFO  [org.apa.cam.imp.eng.AbstractCamelContext] (main) Apache Camel 3.10.0.fuse-800010-redhat-00001 (camel-1) started in 136ms (build:0ms init:100ms start:36ms)
[1] 2021-08-11 17:58:41,268 INFO  [io.quarkus] (main) camel-k-integration 1.6.6 on JVM (powered by Quarkus 1.11.7.Final-redhat-00009) started in 2.064s.
[1] 2021-08-11 17:58:41,269 INFO  [io.quarkus] (main) Profile prod activated.
[1] 2021-08-11 17:58:41,269 INFO  [io.quarkus] (main) Installed features: [camel-bean, camel-core, camel-k-core, camel-k-runtime, camel-log, camel-support-common, camel-timer, camel-yaml-dsl, cdi]
[1] 2021-08-11 17:58:42,423 INFO  [info] (Camel (camel-1) thread #0 - timer://yaml) Exchange[ExchangePattern: InOnly, BodyType: String, Body: Hello Camel K from yaml]
...
- Press `Ctrl-C` to terminate logging in the terminal.
Additional resources
- For more details on the `kamel run` command, enter `kamel run --help`
- For faster deployment turnaround times, see Running Camel K integrations in development mode
- For details of development tools to run integrations, see VS Code Tooling for Apache Camel K by Red Hat
- See also Managing Camel K integrations
Running an integration without the CLI

You can run an integration without the CLI (command line interface) by creating an Integration custom resource with the configuration needed to run your application.

For example, generate the Integration custom resource for the following sample route:

kamel run Sample.java -o yaml

It returns the expected Integration custom resource:
apiVersion: camel.apache.org/v1
kind: Integration
metadata:
  creationTimestamp: null
  name: my-integration
  namespace: default
spec:
  sources:
  - content: "
      import org.apache.camel.builder.RouteBuilder;
      public class Sample extends RouteBuilder {
        @Override
        public void configure() throws Exception {
          from(\"timer:tick\")
            .log(\"Hello Integration!\");
        }
      }"
    name: Sample.java
status: {}
Save this custom resource in a YAML file, `my-integration.yaml`. Now, run the integration that contains the Integration custom resource by using the `oc` command line, the UI, or the API to call the OpenShift cluster. In the following example, the `oc` CLI is used from the command line.
oc apply -f my-integration.yaml
...
integration.camel.apache.org/my-integration created
The operator runs the Integration.
- Kubernetes supports Structural Schemas for CustomResourceDefinitions.
- For more details about Camel K traits, see the Camel K trait configuration reference.
Schema changes on Custom Resources
The strongly-typed Trait API imposes changes on the following CustomResourceDefinitions: `integrations`, `integrationkits`, and `integrationplatforms`.

Trait properties under `spec.traits.<trait-id>.configuration` are now defined directly under `spec.traits.<trait-id>`:
traits:
  container:
    configuration:
      enabled: true
      name: my-integration

↓↓↓

traits:
  container:
    enabled: true
    name: my-integration
This implementation preserves backward compatibility. To achieve it, a `Configuration` field with `RawMessage` type is provided for each trait type, so that existing integrations and resources can still be read by the new Red Hat build of Apache Camel K version.

When the old integrations and resources are read, the legacy configuration in each trait (if any) is migrated to the new Trait API fields. If values are predefined on the new API fields, they take precedence over the legacy ones.
type Trait struct {
	// Can be used to enable or disable a trait. All traits share this common property.
	Enabled *bool `property:"enabled" json:"enabled,omitempty"`

	// Legacy trait configuration parameters.
	// Deprecated: for backward compatibility.
	Configuration *Configuration `json:"configuration,omitempty"`
}

// Deprecated: for backward compatibility.
type Configuration struct {
	RawMessage `json:",inline"`
}
3.5. Running Camel K integrations in development mode
You can run Camel K integrations in development mode on your OpenShift cluster from the command line. Using development mode, you can iterate quickly on integrations in development and get fast feedback on your code.
When you specify the `kamel run` command with the `--dev` option, the integration is deployed in the cloud immediately, and the integration logs are shown in the terminal. You can then change the code and see the changes automatically applied instantly to the remote integration Pod on OpenShift. The terminal automatically displays all redeployments of the remote integration in the cloud.
The artifacts generated by Camel K in development mode are identical to those that you run in production. The purpose of development mode is faster development.
Prerequisites
- Setting up your Camel K development environment.
- You must already have a Camel integration written in Java or YAML DSL.
Procedure
Log into your OpenShift cluster using the `oc` client tool, for example:

$ oc login --token=my-token --server=https://my-cluster.example.com:6443
Ensure that the Camel K Operator is running, for example:
$ oc get pod
NAME                               READY   STATUS    RESTARTS   AGE
camel-k-operator-86b8d94b4-pk7d6   1/1     Running   0          6m28s
Enter the `kamel run` command with `--dev` to run your integration in development mode on OpenShift in the cloud. The following shows a simple Java example:

$ kamel run HelloCamelK.java --dev
Condition "IntegrationPlatformAvailable" is "True" for Integration hello-camel-k: test/camel-k
Integration hello-camel-k in phase "Initialization"
Integration hello-camel-k in phase "Building Kit"
Condition "IntegrationKitAvailable" is "True" for Integration hello-camel-k: kit-c49sqn4apkb4qgn55ak0
Integration hello-camel-k in phase "Deploying"
Progress: integration "hello-camel-k" in phase Initialization
Progress: integration "hello-camel-k" in phase Building Kit
Progress: integration "hello-camel-k" in phase Deploying
Integration hello-camel-k in phase "Running"
Condition "DeploymentAvailable" is "True" for Integration hello-camel-k: deployment name is hello-camel-k
Progress: integration "hello-camel-k" in phase Running
Condition "CronJobAvailable" is "False" for Integration hello-camel-k: different controller strategy used (deployment)
Condition "KnativeServiceAvailable" is "False" for Integration hello-camel-k: different controller strategy used (deployment)
Condition "Ready" is "False" for Integration hello-camel-k
Condition "Ready" is "True" for Integration hello-camel-k
[1] Monitoring pod hello-camel-k-7f85df47b8-js7cb
...
[1] 2021-08-11 18:34:44,069 INFO  [org.apa.cam.k.Runtime] (main) Apache Camel K Runtime 1.7.1.fuse-800025-redhat-00001
[1] 2021-08-11 18:34:44,167 INFO  [org.apa.cam.qua.cor.CamelBootstrapRecorder] (main) bootstrap runtime: org.apache.camel.quarkus.main.CamelMainRuntime
[1] 2021-08-11 18:34:44,362 INFO  [org.apa.cam.k.lis.SourcesConfigurer] (main) Loading routes from: SourceDefinition{name='HelloCamelK', language='java', location='file:/etc/camel/sources/HelloCamelK.java', }
[1] 2021-08-11 18:34:46,180 INFO  [org.apa.cam.imp.eng.AbstractCamelContext] (main) Routes startup summary (total:1 started:1)
[1] 2021-08-11 18:34:46,180 INFO  [org.apa.cam.imp.eng.AbstractCamelContext] (main) Started java (timer://java)
[1] 2021-08-11 18:34:46,180 INFO  [org.apa.cam.imp.eng.AbstractCamelContext] (main) Apache Camel 3.10.0.fuse-800010-redhat-00001 (camel-1) started in 243ms (build:0ms init:213ms start:30ms)
[1] 2021-08-11 18:34:46,190 INFO  [io.quarkus] (main) camel-k-integration 1.6.6 on JVM (powered by Quarkus 1.11.7.Final-redhat-00009) started in 3.457s.
[1] 2021-08-11 18:34:46,190 INFO  [io.quarkus] (main) Profile prod activated.
[1] 2021-08-11 18:34:46,191 INFO  [io.quarkus] (main) Installed features: [camel-bean, camel-core, camel-java-joor-dsl, camel-k-core, camel-k-runtime, camel-log, camel-support-common, camel-timer, cdi]
[1] 2021-08-11 18:34:47,200 INFO  [info] (Camel (camel-1) thread #0 - timer://java) Exchange[ExchangePattern: InOnly, BodyType: String, Body: Hello Camel K from java]
[1] 2021-08-11 18:34:48,180 INFO  [info] (Camel (camel-1) thread #0 - timer://java) Exchange[ExchangePattern: InOnly, BodyType: String, Body: Hello Camel K from java]
[1] 2021-08-11 18:34:49,180 INFO  [info] (Camel (camel-1) thread #0 - timer://java) Exchange[ExchangePattern: InOnly, BodyType: String, Body: Hello Camel K from java]
...
Edit the content of your integration DSL file, save your changes, and see the changes displayed instantly in the terminal. For example:

...
integration "hello-camel-k" updated
...
[2] 2021-08-11 18:40:54,173 INFO  [org.apa.cam.k.Runtime] (main) Apache Camel K Runtime 1.7.1.fuse-800025-redhat-00001
[2] 2021-08-11 18:40:54,209 INFO  [org.apa.cam.qua.cor.CamelBootstrapRecorder] (main) bootstrap runtime: org.apache.camel.quarkus.main.CamelMainRuntime
[2] 2021-08-11 18:40:54,301 INFO  [org.apa.cam.k.lis.SourcesConfigurer] (main) Loading routes from: SourceDefinition{name='HelloCamelK', language='java', location='file:/etc/camel/sources/HelloCamelK.java', }
[2] 2021-08-11 18:40:55,796 INFO  [org.apa.cam.imp.eng.AbstractCamelContext] (main) Routes startup summary (total:1 started:1)
[2] 2021-08-11 18:40:55,796 INFO  [org.apa.cam.imp.eng.AbstractCamelContext] (main) Started java (timer://java)
[2] 2021-08-11 18:40:55,797 INFO  [org.apa.cam.imp.eng.AbstractCamelContext] (main) Apache Camel 3.10.0.fuse-800010-redhat-00001 (camel-1) started in 174ms (build:0ms init:147ms start:27ms)
[2] 2021-08-11 18:40:55,803 INFO  [io.quarkus] (main) camel-k-integration 1.6.6 on JVM (powered by Quarkus 1.11.7.Final-redhat-00009) started in 3.025s.
[2] 2021-08-11 18:40:55,808 INFO  [io.quarkus] (main) Profile prod activated.
[2] 2021-08-11 18:40:55,809 INFO  [io.quarkus] (main) Installed features: [camel-bean, camel-core, camel-java-joor-dsl, camel-k-core, camel-k-runtime, camel-log, camel-support-common, camel-timer, cdi]
[2] 2021-08-11 18:40:56,810 INFO  [info] (Camel (camel-1) thread #0 - timer://java) Exchange[ExchangePattern: InOnly, BodyType: String, Body: Hello Camel K from java]
[2] 2021-08-11 18:40:57,793 INFO  [info] (Camel (camel-1) thread #0 - timer://java) Exchange[ExchangePattern: InOnly, BodyType: String, Body: Hello Camel K from java]
...
- Press `Ctrl-C` to terminate logging in the terminal.
Additional resources
- For more details on the `kamel run` command, enter `kamel run --help`
- For details of development tools to run integrations, see VS Code Tooling for Apache Camel K by Red Hat
- Managing Camel K integrations
- Configuring Camel K integration dependencies
3.6. Running Camel K integrations using modeline
You can use the Camel K modeline to specify multiple configuration options in a Camel K integration source file, which are executed at runtime. This creates efficiencies by saving you the time of re-entering multiple command line options and helps to prevent input errors.
The following example shows a modeline entry from a Java integration file that enables 3scale and limits the integration container memory.
Prerequisites
- Setting up your Camel K development environment
- You must already have a Camel integration written in Java or YAML DSL.
Procedure
Add a Camel K modeline entry to your integration file. For example:

ThreeScaleRest.java

// camel-k: trait=3scale.enabled=true trait=container.limit-memory=256Mi   1

import org.apache.camel.builder.RouteBuilder;

public class ThreeScaleRest extends RouteBuilder {

  @Override
  public void configure() throws Exception {
      rest().get("/")
          .to("direct:x");

      from("direct:x")
          .setBody().constant("Hello");
  }
}
1 Enables both the container and 3scale traits, to expose the route through 3scale and to limit the container memory.
Run the integration, for example:
kamel run ThreeScaleRest.java
The `kamel run` command outputs any modeline options specified in the integration, for example:

Modeline options have been loaded from source files
Full command: kamel run ThreeScaleRest.java --trait=3scale.enabled=true --trait=container.limit-memory=256Mi
Additional resources
- Camel K modeline options
- For details of development tools to run modeline integrations, see Introducing IDE support for Apache Camel K Modeline.
3.7. Camel Runtimes (aka "sourceless" Integrations)
Camel K can run any runtime available in Apache Camel. However, this is possible only when the Camel application was previously built and packaged into a container image. Also, if you use this option, some of the features offered by the operator may not be available. For example, the operator cannot discover Camel capabilities, because the source is not available to it but is embedded in the container image.

This option is a good fit if you build your applications externally, that is, via a CI/CD technology, and you want to delegate only the "operational" part to the operator, taking care of the building and publishing part on your own.

You may lose other features, such as incremental image builds and container kit reusability.
3.7.1. Build externally, run via Operator
Let us see the following example.
You can use your own Camel application, or create a basic one for this purpose via Camel JBang (`camel init test.yaml`). Once your development is over, you can test locally via `camel run test.yaml` and export it to the runtime of your choice via `camel export test.yaml --runtime …`.
The above step is a quick way to create a basic Camel application in any of the available runtimes. Let us imagine we have done this for Camel Main, or that we already have a Camel application as a Maven project. As we want to take care of the build part ourselves, we create a pipeline to build, containerize, and push the container image to a registry (see as a reference the Camel K Tekton example).
At this stage we have a container image with our Camel application. We can use the `kamel` CLI to run it via `kamel run --image docker.io/my-org/my-app:1.0.0`, tuning it, if needed, with any of the traits or configuration required. Remember that when you run an Integration with this option, the operator creates a synthetic IntegrationKit.
Certain traits (that is, builder traits) are not available when running an application built externally.
In a few seconds (there is no build involved) your application should be up and running, and you can monitor and operate it with Camel K as usual.
3.7.2. Traits and dependencies
Certain Camel K operational aspects may be driven by traits. When you build the application outside the operator, some of those traits are not executed, because they apply during the building phase, which is skipped when running sourceless Integrations.
3.8. Importing existing Camel applications
You may already have a Camel application running on your cluster, created via a manual deployment, a CI/CD pipeline, or any other deployment mechanism you have in place. Since the Camel K operator is meant to operate any Camel application, you can import it and monitor it in a similar way to any other Camel K managed Integration.
This feature is disabled by default. To enable it, you must run the operator deployment with the environment variable `CAMEL_K_SYNTHETIC_INTEGRATIONS` set to `true`.
You are only able to monitor the synthetic Integrations. Camel K does not alter the lifecycle of non-managed Integrations (that is, it does not rebuild the original application).
The operator does not alter any field of the original application, to avoid breaking any deployment procedure that is already in place. As it cannot make any assumptions about the way the application is built and deployed, it only watches for changes happening around it.
3.8.1. Deploy externally, monitor via Camel K Operator
An imported Integration is known as a synthetic Integration. You can import any Camel application deployed as a Deployment, CronJob, or Knative Service. This behavior is controlled via a label (`camel.apache.org/integration`) that the user must apply on the Camel application (either manually or by introducing it into the deployment process, that is, via CI/CD).
As an example, we show how to import a Camel application that was deployed with the Deployment kind; the process works in a similar way for CronJob and Knative Service. Let us assume the Deployment is called `my-camel-sb-svc`.
$ oc label deploy my-camel-sb-svc camel.apache.org/integration=my-it
The operator immediately creates a synthetic Integration.
$ oc get it
NAMESPACE                                   NAME    PHASE     RUNTIME PROVIDER   RUNTIME VERSION   KIT   REPLICAS
test-79c385c3-d58e-4c28-826d-b14b6245f908   my-it   Running
You can see that it is in the `Running` status phase. However, after checking the conditions, you see that the Integration is not yet fully monitored. This is expected because of the way the Camel K operator monitors Pods: it requires that the same label applied to the Deployment is inherited by the generated Pods. For this reason, besides labelling the Deployment, we must also add the label to the Deployment template.
$ oc patch deployment my-camel-sb-svc --patch '{"spec": {"template": {"metadata": {"labels": {"camel.apache.org/integration": "my-it"}}}}}'
This operation can also be performed manually or automated in the deployment procedure. We can now see that the operator is able to monitor the status of the Pods.
$ oc get it
NAMESPACE                                   NAME    PHASE     RUNTIME PROVIDER   RUNTIME VERSION   KIT   REPLICAS
test-79c385c3-d58e-4c28-826d-b14b6245f908   my-it   Running                                              1
From now on, you can monitor the status of the synthetic Integration in a similar way as you do with managed Integrations. If, for example, your Deployment scales up or down, you see this information reflected accordingly.
$ oc scale deployment my-camel-sb-svc --replicas 2
$ oc get it
NAMESPACE                                   NAME    PHASE     RUNTIME PROVIDER   RUNTIME VERSION   KIT   REPLICAS
test-79c385c3-d58e-4c28-826d-b14b6245f908   my-it   Running                                              2
3.9. Build
A Build resource describes the process of assembling a container image that fulfills the requirements of an Integration or IntegrationKit.

The result of a build is an IntegrationKit that can be reused for multiple Integrations.
type Build struct {
	Spec   BuildSpec   1
	Status BuildStatus 2
}

type BuildSpec struct {
	Tasks []Task 3
}
The full Go definition can be found here.

3.9.1. Build strategy
You can choose from different build strategies. The build strategy defines how a build is executed. The following strategies are available:

- buildStrategy: pod (each build is run in a separate pod; the operator monitors the pod state)
- buildStrategy: routine (each build is run as a goroutine inside the operator pod)

Routine is the default strategy.
The following descriptions help you decide when to use which strategy.

Routine: provides slightly faster builds because no additional pod is started, and loaded build dependencies (for example, Maven dependencies) are cached between builds. Good for a normal number of builds being executed, with only a few builds running in parallel.

Pod: prevents memory pressure on the operator, as the build does not consume CPU and memory from the operator's Go runtime. Good for many builds being executed and many parallel builds.
3.9.2. Build queues
IntegrationKits and their base images can be reused for multiple Integrations to accomplish efficient resource management and to optimize build and startup times for Camel K Integrations.

To reuse images, the operator queues builds in sequential order. This way, the operator is able to use efficient image layering for Integrations.
By default, builds are queued sequentially based on their layout (for example, native or fast-jar) and the build namespace. However, builds may run in parallel to each other based on certain criteria:

- Native builds always run in parallel to other builds.
- When a build requires a custom IntegrationPlatform, it may run in parallel to other builds that run with the default operator IntegrationPlatform.
- In general, when there is no chance to reuse the build's image layers, the build runs in parallel to other builds.
Therefore, to avoid having too many builds running in parallel, the operator uses a maximum number of running builds setting that limits the number of builds running at a given time. You can set this limit in the IntegrationPlatform settings; an example follows the list below. The default value for this limit depends on the build strategy:
- buildStrategy: pod (MaxRunningBuilds=10)
- buildStrategy: routine (MaxRunningBuilds=3)
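For example, the following is a minimal sketch that overrides the limit in the IntegrationPlatform; it assumes the spec.build.maxRunningBuilds field as defined in upstream Camel K, and the value shown is arbitrary:

apiVersion: camel.apache.org/v1
kind: IntegrationPlatform
metadata:
  name: camel-k
spec:
  build:
    buildStrategy: pod
    maxRunningBuilds: 15   # assumed upstream field; arbitrary example value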
3.10. Promoting across environments
As soon as you have an Integration running in your cluster, you can move that integration to a higher environment. That is, you can test your integration in a development environment and, once satisfied with the result, move it into a production environment.
Camel K achieves this goal by using the `kamel promote` command. With this command, you can move an integration from one namespace to another.
Prerequisites
- Setting up your Camel K development environment
- You must already have a Camel integration written in Java or YAML DSL.
- Ensure that both the source operator and the destination operator use the same container registry. The default registry (if the Camel K operator is installed via OperatorHub) is `registry.redhat.io`.
- Also ensure that the destination namespace provides the ConfigMaps, Secrets, or Kamelets required by the integration.
To use the same container registry, you can use the `--registry` option during the installation phase, or change the IntegrationPlatform to reflect it accordingly.
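For illustration, a minimal sketch of pointing an IntegrationPlatform at a shared registry; the registry address is a placeholder, and the spec.build.registry field is as defined in upstream Camel K:

apiVersion: camel.apache.org/v1
kind: IntegrationPlatform
metadata:
  name: camel-k
spec:
  build:
    registry:
      address: my-registry.example.com/camel-k   # placeholder shared registry address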
Code example
The following is a simple integration that uses a ConfigMap to expose a message on an HTTP endpoint. You can start by creating such an integration and testing it in a namespace called `development`.

kubectl create configmap my-cm --from-literal=greeting="hello, I am development!" -n development
PromoteServer.java
import org.apache.camel.builder.RouteBuilder; public class PromoteServer extends RouteBuilder { @Override public void configure() throws Exception { from("platform-http:/hello?httpMethodRestrict=GET").setBody(simple("resource:classpath:greeting")); } }
Now run it:
kamel run --dev -n development PromoteServer.java --config configmap:my-cm [-t service.node-port=true]
You may need to tweak the service trait, depending on the Kubernetes platform and the level of exposure you want to provide. After that, you can test it:
curl http://192.168.49.2:32116/hello
hello, I am development!
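The IP address and port in this example are specific to the test cluster. One way to look up the assigned node port is sketched below, assuming the service exposed for the integration is named after the integration:
# Print the node port assigned to the integration's service
kubectl get service promote-server -n development \
  -o jsonpath='{.spec.ports[0].nodePort}'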
After testing your integration, you can move it to a production environment. You must have the destination environment (an OpenShift namespace) ready with an operator (sharing the same operator source container registry) and any configuration the integration requires, such as the Configmap used here. Create it in the destination namespace:
kubectl create configmap my-cm --from-literal=greeting="hello, I am production!" -n production
Note: For security reasons, there is a check to ensure that the expected resources such as Configmaps, Secrets, and Kamelets are present on the destination. If any of these resources are missing, the integration does not move.
You can now promote your integration.
kamel promote promote-server -n development --to production
kamel logs promote-server -n production
Test the promoted integration.
curl http://192.168.49.2:30764/hello
hello, I am production!
Because the Integration reuses the same container image, the new application starts immediately. The immutability of the Integration is also assured, as the container used in production is exactly the same as the one tested in development; only the configuration changes. You can verify this as shown below.
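A sketch of such a check, assuming the Camel K 1.x Integration CRD, where the resolved image is reported in the status.image field:
# Both commands should print the same container image reference
kubectl get integration promote-server -n development -o jsonpath='{.status.image}'
kubectl get integration promote-server -n production -o jsonpath='{.status.image}'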
Note: The integration running in the test environment is not altered in any way and keeps running until you stop it.
Chapter 4. Upgrading Camel K
You can upgrade the installed Camel K operator automatically, but the upgrade does not automatically include the Camel K integrations. You must manually trigger the upgrade for the Camel K integrations. This chapter explains how to upgrade both the Camel K operator and the Camel K integrations.
4.1. Upgrading Camel K operator
The subscription of an installed Camel K operator specifies an update channel, for example, the 1.10.x channel, which is used to track and receive updates for the operator. To upgrade the operator to start tracking and receiving updates from a newer channel, change the update channel in the subscription. See Upgrading installed Operators for more information about changing the update channel for an installed operator.
- Installed Operators cannot change to a channel that is older than the current channel.
If the approval strategy in the subscription is set to Automatic, the upgrade process initiates as soon as a new Operator version is available in the selected channel. If the approval strategy is set to Manual, you must manually approve pending upgrades.
Prerequisites
- Camel K operator is installed using Operator Lifecycle Manager (OLM).
Procedure
- In the Administrator perspective of the OpenShift Container Platform web console, navigate to Operators → Installed Operators.
- Click the Camel K Operator.
- Click the Subscription tab.
- Click the name of the update channel under Channel.
- Click the newer update channel that you want to change to, for example, latest, and click Save. This starts the upgrade to the latest Camel K version.
For subscriptions with an Automatic approval strategy, the upgrade begins automatically. Navigate back to the Operators → Installed Operators page to monitor the progress of the upgrade. When complete, the status changes to Succeeded and Up to date.
For subscriptions with a Manual approval strategy, you can manually approve the upgrade from the Subscription tab.
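You can also inspect the subscription from the CLI. The following is a sketch; the subscription name red-hat-camel-k and the openshift-operators namespace are assumptions that depend on how the operator was installed:
# Print the update channel the subscription currently tracks
oc get subscription red-hat-camel-k -n openshift-operators \
  -o jsonpath='{.spec.channel}'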
4.2. Upgrading Camel K integrations
When you trigger the upgrade for the Camel K operator, the operator prepares the integrations to be upgraded, but does not trigger an upgrade for each one, to avoid service interruptions. When upgrading the operator, integration custom resources are not automatically upgraded to the newer version. For example, the operator may be at version 1.10.3, while integrations report the previous version 1.8.2 in the status.version field of the custom resource.
Prerequisites
- Camel K operator is installed and upgraded using Operator Lifecycle Manager (OLM).
Procedure
- Open the terminal and run the following command to upgrade the Camel K integrations.
kamel rebuild myintegration
This clears the status of the integration resource, and the operator starts redeploying the integration using the artifacts from the upgraded version, for example, version 1.10.3. You can confirm the reported version as shown below.
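A sketch of such a check, using the status.version field described above:
# Print the runtime version the integration currently reports
kubectl get integration myintegration -o jsonpath='{.status.version}'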
4.3. Downgrading Camel K
You can downgrade to an older version of the Camel K operator by installing a previous version of the operator. This must be triggered manually using the oc CLI. For more information about installing a specific version of the operator using the CLI, see Installing a specific version of an Operator.
Because downgrading is not supported in OLM, you must remove the existing Camel K operator and then install the specific version of the operator.
Once you install the older version of the operator, use the kamel rebuild command to downgrade the integrations to the operator version. For example:
kamel rebuild myintegration
Chapter 5. Camel K quick start developer tutorials
Red Hat Integration - Camel K provides quick start developer tutorials based on integration use cases available from https://github.com/openshift-integration. This chapter provides details on how to set up and deploy the following tutorials:
- Section 5.1, “Deploying a basic Camel K Java integration”
- Section 5.2, “Deploying a Camel K Serverless integration with Knative”
- Section 5.3, “Deploying a Camel K transformations integration”
- Section 5.4, “Deploying a Camel K Serverless event streaming integration”
- Section 5.5, “Deploying a Camel K Serverless API-based integration”
- Section 5.6, “Deploying a Camel K SaaS integration”
- Section 5.7, “Deploying a Camel K JDBC integration”
- Section 5.8, “Deploying a Camel K JMS integration”
- Section 5.9, “Deploying a Camel K Kafka integration”
5.1. Deploying a basic Camel K Java integration
This tutorial demonstrates how to run a simple Java integration in the cloud on OpenShift, apply configuration and routing to an integration, and run an integration as a Kubernetes CronJob.
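For orientation, a minimal Java integration of the kind used in this tutorial might look like the following. This is an illustrative sketch, not the tutorial's actual source; see the readme for the real code.
// Basic.java: a hypothetical minimal Camel K Java integration
import org.apache.camel.builder.RouteBuilder;

public class Basic extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // Fire every 10 seconds and log a greeting
        from("timer:tick?period=10000")
            .setBody(constant("Hello from Camel K!"))
            .to("log:info");
    }
}
You would run it with kamel run Basic.java -n <namespace>.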
Prerequisites
- See the tutorial readme in GitHub.
- You must have installed the Camel K operator and the kamel CLI. See Installing Camel K.
- Visual Studio (VS) Code is optional but recommended for the best developer experience. See Setting up your Camel K development environment.
Procedure
- Clone the tutorial Git repository:
$ git clone git@github.com:openshift-integration/camel-k-example-basic.git
- In VS Code, select File → Open Folder → camel-k-example-basic.
- In the VS Code navigation tree, click the readme.md file. This opens a new tab in VS Code to display the tutorial instructions.
- Follow the tutorial instructions.
Alternatively, if you do not have VS Code installed, you can manually enter the commands from deploying basic Camel K Java integration.
Additional resources
5.2. Deploying a Camel K Serverless integration with Knative
This tutorial demonstrates how to deploy Camel K integrations with OpenShift Serverless in an event-driven architecture. This tutorial uses a Knative Eventing broker to communicate using an event publish-subscribe pattern in a Bitcoin trading demonstration.
This tutorial also shows how to use Camel K integrations to connect to a Knative event mesh with multiple external systems. The Camel K integrations also use Knative Serving to automatically scale up and down to zero as needed.
Prerequisites
- See the tutorial readme in GitHub.
- You must have cluster administrator access to an OpenShift cluster to install Camel K and OpenShift Serverless.
- Visual Studio (VS) Code is optional but recommended for the best developer experience. See Setting up your Camel K development environment.
Procedure
- Clone the tutorial Git repository:
$ git clone git@github.com:openshift-integration/camel-k-example-knative.git
- In VS Code, select File → Open Folder → camel-k-example-knative.
- In the VS Code navigation tree, click the readme.md file. This opens a new tab in VS Code to display the tutorial instructions.
- Follow the tutorial instructions.
If you do not have VS Code installed, you can manually enter the commands from deploying Camel K Knative integration.
Additional resources
5.3. Deploying a Camel K transformations integration
This tutorial demonstrates how to run a Camel K Java integration on OpenShift that transforms data such as XML to JSON, and stores it in a database such as PostgreSQL.
The tutorial example uses a CSV file to query an XML API and uses the data collected to build a valid GeoJSON file, which is stored in a PostgreSQL database.
Prerequisites
- See the tutorial readme in GitHub.
- You must have cluster administrator access to an OpenShift cluster to install Camel K. See Installing Camel K.
- You must follow the instructions in the tutorial readme to install Crunchy Postgres for Kubernetes, which is required on your OpenShift cluster.
- Visual Studio (VS) Code is optional but recommended for the best developer experience. See Setting up your Camel K development environment.
Procedure
- Clone the tutorial Git repository:
$ git clone git@github.com:openshift-integration/camel-k-example-transformations.git
- In VS Code, select File → Open Folder → camel-k-example-transformations.
- In the VS Code navigation tree, click the readme.md file. This opens a new tab in VS Code to display the tutorial instructions.
- Follow the tutorial instructions.
If you do not have VS Code installed, you can manually enter the commands from deploying Camel K transformations integration.
Additional resources
5.4. Deploying a Camel K Serverless event streaming integration
This tutorial demonstrates using Camel K and OpenShift Serverless with Knative Eventing for an event-driven architecture.
The tutorial shows how to install Camel K and Serverless with Knative in an AMQ Streams cluster with an AMQ Broker cluster, and how to deploy an event streaming project to run a global hazard alert demonstration application.
Prerequisites
- See the tutorial readme in GitHub.
- You must have cluster administrator access to an OpenShift cluster to install Camel K and OpenShift Serverless.
- You must follow the instructions in the tutorial readme to install the additional required Operators on your OpenShift cluster:
- AMQ Streams Operator
- AMQ Broker Operator
- Visual Studio (VS) Code is optional but recommended for the best developer experience. See Setting up your Camel K development environment.
Procedure
- Clone the tutorial Git repository:
$ git clone git@github.com:openshift-integration/camel-k-example-event-streaming.git
- In VS Code, select File → Open Folder → camel-k-example-event-streaming.
- In the VS Code navigation tree, click the readme.md file. This opens a new tab in VS Code to display the tutorial instructions.
- Follow the tutorial instructions.
Alternatively, if you do not have VS Code installed, you can manually enter the commands from deploying Camel K event stream integration.
Additional resources
5.5. Deploying a Camel K Serverless API-based integration
This tutorial demonstrates using Camel K and OpenShift Serverless with Knative Serving for an API-based integration, and managing an API with 3scale API Management on OpenShift.
The tutorial shows how to configure Amazon S3-based storage, design an OpenAPI definition, and run an integration that calls the demonstration API endpoints.
Prerequisites
- See the tutorial readme in GitHub.
- You must have cluster administrator access to an OpenShift cluster to install Camel K and OpenShift Serverless.
- You can also install the optional Red Hat Integration - 3scale Operator on your OpenShift system to manage the API. See Deploying 3scale using the Operator.
- Visual Studio (VS) Code is optional but recommended for the best developer experience. See Setting up your Camel K development environment.
Procedure
- Clone the tutorial Git repository:
$ git clone git@github.com:openshift-integration/camel-k-example-api.git
- In VS Code, select File → Open Folder → camel-k-example-api.
- In the VS Code navigation tree, click the readme.md file. This opens a new tab in VS Code to display the tutorial instructions.
- Follow the tutorial instructions.
Alternatively, if you do not have VS Code installed, you can manually enter the commands from deploying Camel K API integration.
Additional resources
5.6. Deploying a Camel K SaaS integration
This tutorial demonstrates how to run a Camel K Java integration on OpenShift that connects two widely-used Software as a Service (SaaS) providers.
The tutorial example shows how to integrate the Salesforce and ServiceNow SaaS providers using REST-based Camel components. In this simple example, each new Salesforce Case is copied to a corresponding ServiceNow Incident that includes the Salesforce Case Number.
Prerequisites
- See the tutorial readme in GitHub.
- You must have cluster administrator access to an OpenShift cluster to install Camel K. See Installing Camel K.
- You must have Salesforce login credentials and ServiceNow login credentials.
- Visual Studio (VS) Code is optional but recommended for the best developer experience. See Setting up your Camel K development environment.
Procedure
- Clone the tutorial Git repository:
$ git clone git@github.com:openshift-integration/camel-k-example-saas.git
- In VS Code, select File → Open Folder → camel-k-example-saas.
- In the VS Code navigation tree, click the readme.md file. This opens a new tab in VS Code to display the tutorial instructions.
- Follow the tutorial instructions.
If you do not have VS Code installed, you can manually enter the commands from deploying Camel K SaaS integration.
Additional resources
5.7. Deploying a Camel K JDBC integration
This tutorial demonstrates how to get started with Camel K and an SQL database via JDBC drivers. It shows how to set up an integration that produces data into a PostgreSQL database (you can use any relational database of your choice) and how to read data from the same database.
Prerequisites
- See the tutorial readme in GitHub.
- You must have cluster administrator access to an OpenShift cluster to install Camel K.
- You must follow the instructions in the tutorial readme to install Crunchy Postgres for Kubernetes, which is required on your OpenShift cluster.
- Visual Studio (VS) Code is optional but recommended for the best developer experience. See Setting up your Camel K development environment.
Procedure
- Clone the tutorial Git repository:
$ git clone git@github.com:openshift-integration/camel-k-example-jdbc.git
- In VS Code, select File → Open Folder → camel-k-example-jdbc.
- In the VS Code navigation tree, click the readme.md file. This opens a new tab in VS Code to display the tutorial instructions.
- Follow the tutorial instructions.
Alternatively, if you do not have VS Code installed, you can manually enter the commands from deploying Camel K JDBC integration.
5.8. Deploying a Camel K JMS integration
This tutorial demonstrates how to use JMS to connect to a message broker in order to consume and produce messages. There are two examples:
- JMS Sink: this tutorial demonstrates how to produce a message to a JMS broker.
- JMS Source: this tutorial demonstrates how to consume a message from a JMS broker.
Prerequisites
- See the tutorial readme in GitHub.
- You must have cluster administrator access to an OpenShift cluster to install Camel K.
- Visual Studio (VS) Code is optional but recommended for the best developer experience. See Setting up your Camel K development environment.
Procedure
- Clone the tutorial Git repository:
$ git clone git@github.com:openshift-integration/camel-k-example-jms.git
- In VS Code, select File → Open Folder → camel-k-example-jms.
- In the VS Code navigation tree, click the readme.md file. This opens a new tab in VS Code to display the tutorial instructions.
- Follow the tutorial instructions.
If you do not have VS Code installed, you can manually enter the commands from deploying Camel K JMS integration.
Additional resources
5.9. Deploying a Camel K Kafka integration
This tutorial demonstrates how to use Camel K with Apache Kafka. It shows how to set up a Kafka Topic by using Red Hat OpenShift Streams for Apache Kafka and how to use it in conjunction with Camel K.
Prerequisites
- See the tutorial readme in GitHub.
- You must have cluster administrator access to an OpenShift cluster to install Camel K.
- Visual Studio (VS) Code is optional but recommended for the best developer experience. See Setting up your Camel K development environment.
Procedure
- Clone the tutorial Git repository:
$ git clone git@github.com:openshift-integration/camel-k-example-kafka.git
- In VS Code, select File → Open Folder → camel-k-example-kafka.
- In the VS Code navigation tree, click the readme.md file. This opens a new tab in VS Code to display the tutorial instructions.
- Follow the tutorial instructions.
If you do not have VS Code installed, you can manually enter the commands from deploying Camel K Kafka integration.