
Testing guide Camel K


Red Hat build of Apache Camel K 1.10.7

Test your Camel K integration locally and on cloud infrastructure

Abstract

How to test Red Hat build of Apache Camel K locally using JBang, and using YAKS on cloud infrastructure.

Preface

Making open source more inclusive

Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.

Chapter 1. Testing Camel K integration locally

This chapter provides details on how to use Camel jBang to locally test a Camel K integration.

1.1. Using Camel jBang to locally test a Camel K integration

Testing is one of the main operations performed repeatedly while building any application. With the advent of Camel JBang, we have a unified place that can be used to perform testing and fine tuning locally before moving to a higher environment.

Testing or fine-tuning an integration directly connected to a Cloud Native environment is a bit cumbersome. You must be connected to the cluster, or alternatively, you need a local Kubernetes cluster running on your machine (Minikube, Kind, etc.). Most of the time, the aspects of cluster fine-tuning arrive late in development.

Therefore, it is good to have a lighter way of testing our applications locally, and then move to a deployment stage where we can apply the tuning typical of a cloud native environment.

In the past, kamel local was the command used to test an Integration locally. However, it overlapped with the Camel community's effort to provide a single CLI that can be used to locally test any Camel application, independently of where it is going to be deployed.

1.1.1. Camel JBang installation

First, we need to install and get familiar with the jbang and camel CLIs. You can follow the official Camel JBang documentation to install the CLIs in your local environment. After this, we can see how to test a Camel K Integration with Camel JBang.

1.1.2. Simple application development

The first application we develop is a simple one, and it defines the process you must follow when testing any Integration that is eventually deployed to Kubernetes via Camel K. First, verify the target version of Camel in your Camel K installation. With this information, we can make sure to test locally against the same version that we will later deploy to a cluster.

$ kamel version -a -v | grep Runtime
Runtime Version: 1.15.2

$ kubectl get camelcatalog camel-catalog-1.15.2 -o yaml | grep camel\.version
      camel.version: 3.18.3

The commands above show the Camel version used by the runtime of the Camel K installation in your cluster. Our target is Camel version 3.18.3.

The easiest way to initialize a Camel route is to run the camel init command:

$ camel init HelloJBang.java
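The generated HelloJBang.java is a minimal RouteBuilder. It looks roughly like the following sketch (the exact template may vary between Camel JBang versions, but the timer route matches the log output shown later):

```java
// Sketch of the file generated by `camel init` (template may vary by version)
import org.apache.camel.builder.RouteBuilder;

public class HelloJBang extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        from("timer:java?period=1000")
            .setBody()
                .simple("Hello Camel from ${routeId}")
            .log("${body}");
    }
}
```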

At this stage, we can edit the file with the logic we need for our integration, or simply run it:

$ camel run HelloJBang.java
2022-11-23 12:11:05.407  INFO 52841 --- [           main] org.apache.camel.main.MainSupport        : Apache Camel (JBang) 3.18.1 is starting
2022-11-23 12:11:05.470  INFO 52841 --- [           main] org.apache.camel.main.MainSupport        : Using Java 11.0.17 with PID 52841. Started by squake in /home/squake/workspace/jbang/camel-blog
2022-11-23 12:11:07.537  INFO 52841 --- [           main] e.camel.impl.engine.AbstractCamelContext : Apache Camel 3.18.1 (CamelJBang) is starting
2022-11-23 12:11:07.675  INFO 52841 --- [           main] e.camel.impl.engine.AbstractCamelContext : Routes startup (started:1)
2022-11-23 12:11:07.676  INFO 52841 --- [           main] e.camel.impl.engine.AbstractCamelContext :     Started java (timer://java)
2022-11-23 12:11:07.676  INFO 52841 --- [           main] e.camel.impl.engine.AbstractCamelContext : Apache Camel 3.18.1 (CamelJBang) started in 397ms (build:118ms init:140ms start:139ms JVM-uptime:3s)
2022-11-23 12:11:08.705  INFO 52841 --- [ - timer://java] HelloJBang.java:14                       : Hello Camel from java
2022-11-23 12:11:09.676  INFO 52841 --- [ - timer://java] HelloJBang.java:14                       : Hello Camel from java
...

A local Java process starts with a Camel application running. There is no need to create a Maven project; all the boilerplate is handled by Camel JBang! However, you may notice that the Camel version used is different from the one we want to target. This is because your Camel JBang installation defaults to a different Camel version. No worries: we can re-run this application, specifying the Camel version we want to run:

$ jbang run -Dcamel.jbang.version=3.18.3 camel@apache/camel run HelloJBang.java
...
[1] 2022-11-23 11:13:02,825 INFO  [org.apa.cam.imp.eng.AbstractCamelContext] (main) Apache Camel 3.18.3 (camel-1) started in 70ms (build:0ms init:61ms start:9ms)
...
Note

Camel JBang uses a default Camel version. You can use the -Dcamel.jbang.version option to explicitly set a Camel version, overriding the default.

The next step is to run it in a Kubernetes cluster where Camel K is installed.

Let us use the Camel K plugin for Camel JBang here instead of the kamel CLI. This way, you can use the same JBang tooling to run the Camel K integration both locally and on the Kubernetes cluster with the operator. The JBang plugin documentation can be found here: Camel JBang Kubernetes.

The Camel K operator takes care of the necessary transformations and builds the Integration and related resources according to the expected lifecycle. Once it is live, you can follow up with the operations you usually perform on a deployed Integration.

The benefit of this process is that you need not worry about the remote cluster until you are satisfied with your Integration tuned locally.

1.1.3. Fine tuning for Cloud

Once your Integration is ready, you must take care of the tuning related to cluster deployment. This way, you need not worry about deployment details at an early stage of development. You can even have a separation of roles in your company, where the domain expert develops the integration locally and the cluster expert handles the deployment at a later stage.

Let us see an example of how to develop an integration that will later need some fine-tuning in the cluster.

import org.apache.camel.builder.RouteBuilder;

public class MyJBangRoute extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        from("file:/tmp/input")
            .convertBodyTo(String.class)
            .log("Processing file ${headers.CamelFileName} with content: ${body}")
            /*
            .filter(simple("${body} !contains 'checked'"))
                .log("WARN not checked: ${body}")
                .to("file:/tmp/discarded")
            .end()
            .to("file:/tmp/output");
            */
            .choice()
                .when(simple("${body} !contains 'checked'"))
                    .log("WARN not checked!")
                    .to("file:/tmp/discarded")
                .otherwise()
                    .to("file:/tmp/output")
            .end();
    }
}

There is a process in charge of writing files into a directory, and we must filter those files based on their content. We have left the commented code on purpose, because it shows how we developed iteratively, testing locally with Camel JBang until we reached the final version of the integration. We first tried the Filter EIP, but while testing we realized we needed a Content Based Router EIP instead. This should sound like a familiar process, as it happens almost every time we develop something.
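The difference between the two EIPs can be sketched in plain Java (an illustration only, with hypothetical helper names, not the Camel API): a Filter lets matching messages continue and drops the rest, while a Content Based Router gives every message a destination.

```java
// Illustration only (plain Java, not the Camel API): Filter vs Content Based Router.
public class FilterVsChoice {

    // Filter EIP analogue: only messages matching the predicate continue;
    // everything else silently stops, so "checked" files would go nowhere.
    static boolean passesFilter(String body) {
        return !body.contains("checked");
    }

    // Content Based Router analogue: every message gets a destination,
    // mirroring the .choice()/.when()/.otherwise() block of the route above.
    static String route(String body) {
        return body.contains("checked") ? "/tmp/output" : "/tmp/discarded";
    }

    public static void main(String[] args) {
        System.out.println(route("some entry"));          // /tmp/discarded
        System.out.println(route("some entry checked"));  // /tmp/output
    }
}
```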

Now that we are ready, we run a last round of testing locally via Camel JBang:

$ jbang run -Dcamel.jbang.version=3.18.3 camel@apache/camel run MyJBangRoute.java
2022-11-23 12:19:11.516  INFO 55909 --- [           main] org.apache.camel.main.MainSupport        : Apache Camel (JBang) 3.18.3 is starting
2022-11-23 12:19:11.592  INFO 55909 --- [           main] org.apache.camel.main.MainSupport        : Using Java 11.0.17 with PID 55909. Started by squake in /home/squake/workspace/jbang/camel-blog
2022-11-23 12:19:14.020  INFO 55909 --- [           main] e.camel.impl.engine.AbstractCamelContext : Apache Camel 3.18.3 (CamelJBang) is starting
2022-11-23 12:19:14.220  INFO 55909 --- [           main] e.camel.impl.engine.AbstractCamelContext : Routes startup (started:1)
2022-11-23 12:19:14.220  INFO 55909 --- [           main] e.camel.impl.engine.AbstractCamelContext :     Started route1 (file:///tmp/input)
2022-11-23 12:19:14.220  INFO 55909 --- [           main] e.camel.impl.engine.AbstractCamelContext : Apache Camel 3.18.3 (CamelJBang) started in 677ms (build:133ms init:344ms start:200ms JVM-uptime:3s)
2022-11-23 12:19:27.757  INFO 55909 --- [le:///tmp/input] MyJBangRoute.java:11                     : Processing file file_1669202367381 with content: some entry

2022-11-23 12:19:27.758  INFO 55909 --- [le:///tmp/input] MyJBangRoute:21                          : WARN not checked!
2022-11-23 12:19:32.276  INFO 55909 --- [le:///tmp/input] MyJBangRoute.java:11                     : Processing file file_1669202372252 with content: some entry checked

We have tested by adding files to the input directory. Now we are ready to promote the integration to the development cluster!

Use the Camel K JBang plugin here to run the integration on K8s so you do not need to switch tooling.

Run the following command:

$ camel k run MyJBangRoute.java

The Integration starts correctly, but it uses a file system that is local to the Pod where the Integration is running.

1.1.3.1. Kubernetes fine tuning

Now, let us configure our application for the cloud. Cloud Native development must take into consideration a series of challenges that are implicit in how this new paradigm works (as a reference, see the twelve factors).

Kubernetes can sometimes be difficult to fine-tune, with many resources to edit and check. Camel K provides a user-friendly way to apply most of the tuning your application needs directly in the kamel run command (or in the modeline). You should get familiar with Camel K Traits.

In this case, we want to use certain volumes that are available in our cluster. We can use the --volume option (syntactic sugar for the mount trait) to enable them easily. Some other Pod can read from and write to those volumes, depending on the architecture of our Integration process.
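The same tuning can also be declared in the integration source itself via a modeline comment on the first line of the file. A sketch, under the assumption that the --volume flag maps to the mount trait's volumes property as described in the Camel K Traits documentation:

```java
// camel-k: trait=mount.volumes=my-pv-claim-input:/tmp/input trait=mount.volumes=my-pv-claim-output:/tmp/output trait=mount.volumes=my-pv-claim-discarded:/tmp/discarded
```

With such a modeline at the top of MyJBangRoute.java, a plain kamel run MyJBangRoute.java would apply the same volumes without extra CLI flags.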

$ kamel run MyJBangRoute.java --volume my-pv-claim-input:/tmp/input --volume my-pv-claim-output:/tmp/output --volume my-pv-claim-discarded:/tmp/discarded --dev
...
[1] 2022-11-23 11:39:26,281 INFO  [route1] (Camel (camel-1) thread #1 - file:///tmp/input) Processing file file_1669203565971 with content: some entry
[1]
[1] 2022-11-23 11:39:26,303 INFO  [route1] (Camel (camel-1) thread #1 - file:///tmp/input) WARN not checked!
[1] 2022-11-23 11:39:32,322 INFO  [route1] (Camel (camel-1) thread #1 - file:///tmp/input) Processing file file_1669203571981 with content: some entry checked

You may need to iterate on this tuning as well, but now that the internals of the route have been polished locally, you can focus on deployment aspects only. Once you are ready, take advantage of kamel promote to move your Integration through the various stages of development.

1.1.4. How to test a Kamelet locally

Another benefit of Camel JBang is the ability to test a Kamelet locally. Until now, the easiest way to test a Kamelet was to upload it to a Kubernetes cluster and run some Integration using it via Camel K.

Let us develop a simple Kamelet for this purpose: a coffee source that generates random coffee events.

apiVersion: camel.apache.org/v1
kind: Kamelet
metadata:
  name: coffee-source
  annotations:
    camel.apache.org/kamelet.support.level: "Stable"
    camel.apache.org/catalog.version: "4.7.0-SNAPSHOT"
    camel.apache.org/kamelet.icon: "data:image/svg+xml;base64,..."
    camel.apache.org/provider: "Apache Software Foundation"
    camel.apache.org/kamelet.group: "Coffees"
    camel.apache.org/kamelet.namespace: "Dataset"
  labels:
    camel.apache.org/kamelet.type: "source"
spec:
  definition:
    title: "Coffee Source"
    description: "Produces periodic events about coffees!"
    type: object
    properties:
      period:
        title: Period
        description: The time interval between two events
        type: integer
        default: 5000
  types:
    out:
      mediaType: application/json
  dependencies:
    - "camel:timer"
    - "camel:http"
    - "camel:kamelet"
  template:
    from:
      uri: "timer:coffee"
      parameters:
        period: "{{period}}"
      steps:
        - to: https://random-data-api.com/api/coffee/random_coffee
        - removeHeaders:
            pattern: '*'
        - to: "kamelet:sink"

To test it, we can use a simple Integration to log its content:

- from:
    uri: "kamelet:coffee-source?period=5000"
    steps:
      - log: "${body}"

Now we can run:

$ camel run --local-kamelet-dir=</path/to/local/kamelets/dir> coffee-integration.yaml
2022-11-24 11:27:29.634  INFO 39527 --- [           main] org.apache.camel.main.MainSupport        : Apache Camel (JBang) 3.18.1 is starting
2022-11-24 11:27:29.706  INFO 39527 --- [           main] org.apache.camel.main.MainSupport        : Using Java 11.0.17 with PID 39527. Started by squake in /home/squake/workspace/jbang/camel-blog
2022-11-24 11:27:31.391  INFO 39527 --- [           main] e.camel.impl.engine.AbstractCamelContext : Apache Camel 3.18.1 (CamelJBang) is starting
2022-11-24 11:27:31.590  INFO 39527 --- [           main] org.apache.camel.main.BaseMainSupport    : Property-placeholders summary
2022-11-24 11:27:31.590  INFO 39527 --- [           main] org.apache.camel.main.BaseMainSupport    :     [coffee-source.kamelet.yaml]     period=5000
2022-11-24 11:27:31.590  INFO 39527 --- [           main] org.apache.camel.main.BaseMainSupport    :     [coffee-source.kamelet.yaml]     templateId=coffee-source
2022-11-24 11:27:31.591  INFO 39527 --- [           main] e.camel.impl.engine.AbstractCamelContext : Routes startup (started:2)
2022-11-24 11:27:31.591  INFO 39527 --- [           main] e.camel.impl.engine.AbstractCamelContext :     Started route1 (kamelet://coffee-source)
2022-11-24 11:27:31.591  INFO 39527 --- [           main] e.camel.impl.engine.AbstractCamelContext :     Started coffee-source-1 (timer://coffee)
2022-11-24 11:27:31.591  INFO 39527 --- [           main] e.camel.impl.engine.AbstractCamelContext : Apache Camel 3.18.1 (CamelJBang) started in 1s143ms (build:125ms init:819ms start:199ms JVM-uptime:2s)
2022-11-24 11:27:33.297  INFO 39527 --- [ - timer://coffee] coffee-integration.yaml:4                  : {"id":3648,"uid":"712d4f54-3314-4129-844e-9915002ecbb7","blend_name":"Winter Cowboy","origin":"Lekempti, Ethiopia","variety":"Agaro","notes":"delicate, juicy, sundried tomato, fresh bread, lemonade","intensifier":"juicy"}

This is a boost while you are developing a Kamelet, because you get quick feedback without the need for a cluster. Once ready, you can continue your development as usual, uploading the Kamelet to the cluster and using it in your Camel K integrations.

Chapter 2. Testing Camel K locally and on cloud infrastructure

This chapter describes the steps to test a Camel K integration with YAKS, both locally and on the cloud infrastructure (Kubernetes platform).

2.1. Testing Camel K with YAKS

2.1.1. What is YAKS?

YAKS is an open source test automation platform that leverages Behavior Driven Development concepts for running tests locally and on Cloud infrastructure (for example, Kubernetes or OpenShift). This means that the testing tool is able to run your tests both locally and natively on Kubernetes. The framework is specifically designed to verify Serverless and Microservice applications, and aims for integration testing with the application under test up and running in a production-like environment. A typical YAKS test uses the same infrastructure as the application under test and exchanges data/events over different messaging transports (for example, HTTP REST, Knative eventing, Kafka, JMS, and many more).

As YAKS itself is written in Java, the runtime uses a Java virtual machine with build tools such as Maven, and integrates with well-known Java testing frameworks such as JUnit, Cucumber, and Citrus to run the tests.

2.1.2. Understanding the Camel K example

Here is a sample Camel K integration that we want to test. The integration exposes an HTTP service to the user. The service accepts client HTTP POST requests that add fruit model objects. The Camel K route applies content-based routing to store the fruits in different AWS S3 buckets.

Test scenario

In the test scenario, YAKS invokes the Camel K service and verifies that the message content has been sent to the right AWS S3 bucket.

Here is a sample fruit model object that is to be stored in AWS S3:

{
  "id": 1000,
  "name": "Pineapple",
  "category":{
    "id": "1",
    "name":"tropical"
  },
  "nutrition":{
    "calories": 50,
    "sugar": 9
  },
  "status": "AVAILABLE",
  "price": 1.59,
  "tags": ["sweet"]
}

Here is the Camel K integration route:

from('platform-http:/fruits')
    .log('received fruit ${body}')
    .unmarshal().json()
    .removeHeaders("*")
    .setHeader("CamelAwsS3Key", constant("fruit.json"))
    .choice()
        .when().simple('${body[nutrition][sugar]} <= 5')
            .setHeader("CamelAwsS3BucketName", constant("low-sugar"))
        .when().simple('${body[nutrition][sugar]} > 5 && ${body[nutrition][sugar]} <= 10')
            .setHeader("CamelAwsS3BucketName", constant("medium-sugar"))
        .otherwise()
            .setHeader("CamelAwsS3BucketName", constant("high-sugar"))
    .end()
    .marshal().json()
    .log('sending ${body}')
    .to("aws2-s3://noop?$parameters")

The route uses the Content Based Router EIP, based on the nutrition sugar rating of a given fruit, to send the fruits to different AWS S3 buckets (low-sugar, medium-sugar, high-sugar).
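The routing thresholds can be restated as a small plain-Java function (an illustration of the rules above, with hypothetical names; not part of the integration itself):

```java
// Plain-Java restatement of the route's Content Based Router thresholds.
public class SugarRouter {

    // Returns the AWS S3 bucket name the route would pick for a sugar rating.
    static String bucketFor(int sugar) {
        if (sugar <= 5) {
            return "low-sugar";
        } else if (sugar <= 10) {
            return "medium-sugar";
        }
        return "high-sugar";
    }

    public static void main(String[] args) {
        // The sample pineapple has a sugar rating of 9.
        System.out.println(bucketFor(9)); // medium-sugar
    }
}
```

The sample pineapple, with a sugar rating of 9, therefore lands in the medium-sugar bucket, which is exactly what the test verifies.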

In the following, the test case for this integration needs to invoke the exposed service with different fruits and verify the outcome on AWS S3.

2.1.3. How to test locally with YAKS

In the beginning, let us just write the test and run it locally. For now, we do not care about how the application under test is deployed to the Cloud infrastructure, as everything runs on the local machine using JBang.

JBang is a fantastic way to just start coding and running Java code and also Camel K integrations (also, see this former blog post about how JBang integrates with Camel K).

YAKS as a framework brings a set of ready-to-use domain specific languages (XML, YAML, Groovy, BDD Cucumber steps) for writing tests to verify your deployed services.

This guide uses the Behavior Driven Development integration via Cucumber, so the YAKS test is a single feature file that uses the BDD Gherkin syntax, like this:

Feature: Camel K Fruit Store

  Background:
    Given URL: http://localhost:8080

  Scenario: Create infrastructure
    # Start AWS S3 container
    Given Enable service S3
    Given start LocalStack container

    # Create Camel K integration
    Given Camel K integration property file aws-s3-credentials.properties
    When load Camel K integration fruit-service.groovy
    Then Camel K integration fruit-service should print Started route1 (platform-http:///fruits)

  Scenario: Verify fruit service
    # Invoke Camel K service
    Given HTTP request body: yaks:readFile('pineapple.json')
    And HTTP request header Content-Type="application/json"
    When send POST /fruits
    Then receive HTTP 200 OK

    # Verify uploaded S3 file
    Given New global Camel context
    Given load to Camel registry amazonS3Client.groovy
    Given Camel exchange message header CamelAwsS3Key="fruit.json"
    Given receive Camel exchange from("aws2-s3://medium-sugar?amazonS3Client=#amazonS3Client&deleteAfterRead=true") with body: yaks:readFile('pineapple.json')

Let us walk through the test step by step. First, the feature file uses the usual Given-When-Then BDD syntax to give context, describe the actions, and verify the outcome. Each step calls a specific YAKS action that is provided out of the box by the framework. The user can choose from a huge set of steps that automatically perform actions such as sending/receiving HTTP requests/responses, starting Testcontainers, running Camel routes, connecting to a database, publishing events on Kafka or Knative brokers, and many more.

In the first scenario, the test automatically prepares the required infrastructure. The YAKS test starts a LocalStack Testcontainer to have an AWS S3 test instance running (Given start LocalStack container). Then the test loads and starts the Camel K integration under test (When load Camel K integration fruit-service.groovy) and waits for it to start properly. In local testing, this step starts the Camel K integration using JBang. Later, this guide also runs the test in a Kubernetes environment.

Now the infrastructure is up and running, and the test is able to load the fruit model object as the HTTP request body (Given HTTP request body: yaks:readFile('pineapple.json')) and invoke the Camel K service (When send POST /fruits). The test waits for the HTTP response and verifies its 200 OK status.
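Under the hood, these two steps amount to a plain HTTP call. Here is a sketch with Java's built-in java.net.http client (hypothetical class and method names; YAKS performs the equivalent internally):

```java
import java.net.URI;
import java.net.http.HttpRequest;

// Sketch of the HTTP call the YAKS steps describe (hypothetical helper).
public class FruitClient {

    // Builds the POST /fruits request with a JSON payload.
    static HttpRequest buildRequest(String baseUrl, String json) {
        return HttpRequest.newBuilder()
                .uri(URI.create(baseUrl + "/fruits"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(json))
                .build();
    }

    public static void main(String[] args) {
        HttpRequest request = buildRequest("http://localhost:8080",
                "{\"id\":1000,\"name\":\"Pineapple\"}");
        System.out.println(request.method() + " " + request.uri());
        // HttpClient.newHttpClient().send(request, ...) would perform the
        // actual call once the fruit-service integration is running.
    }
}
```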

In the last step, the test verifies that the fruit object has been added to the right AWS S3 bucket (medium-sugar). As YAKS itself is not able to connect to AWS S3, the test uses Apache Camel for this step. The test creates a Camel context, loads an AWS client, and connects to AWS S3 with a temporary Camel route (Given receive Camel exchange from("aws2-s3://medium-sugar?amazonS3Client=#amazonS3Client&deleteAfterRead=true")). With this Apache Camel integration, YAKS is able to use the complete set of 300+ Camel components for sending and receiving messages over various messaging transports. The Camel exchange body must be the same fruit model object (yaks:readFile('pineapple.json')) as posted in the initial HTTP request.

YAKS uses the powerful message payload validation capabilities provided by Citrus for this message content verification. The validation is able to compare message contents of type XML, JSON, plain text, and more.

This completes the test case. You can now run this test with Cucumber and JUnit, for instance. The easiest way, though, to directly run tests with YAKS is to use the YAKS command line client. You need not set up a whole project with Maven dependencies; just write the test file and run it with:

$ yaks run fruit-service.feature --local

You should see some log output like this:

INFO |
INFO | ------------------------------------------------------------------------
INFO |        .__  __
INFO |   ____ |__|/  |________ __ __  ______
INFO | _/ ___\|  \   __\_  __ \  |  \/  ___/
INFO | \  \___|  ||  |  |  | \/  |  /\___ \
INFO |  \___  >__||__|  |__|  |____//____  >
INFO |      \/                           \/
INFO |
INFO | C I T R U S  T E S T S  3.4.0
INFO |
INFO | ------------------------------------------------------------------------
INFO |

Scenario: Create infrastructure # fruit-service.feature:6
  Given URL: http://localhost:8080
  Given Enable service S3
  [...]

Scenario: Verify fruit service # fruit-service.feature:20
  Given URL: http://localhost:8080
  Given HTTP request body: yaks:readFile('pineapple.json')
  [...]

Scenario: Remove infrastructure  # fruit-service.feature:31
  Given URL: http://localhost:8080
  Given delete Camel K integration fruit-service
  Given stop LocalStack container

3 Scenarios (3 passed)
18 Steps (18 passed)
0m18,051s


INFO | ------------------------------------------------------------------------
INFO |
INFO | CITRUS TEST RESULTS
INFO |
INFO |  Create infrastructure .......................................... SUCCESS
INFO |  Verify fruit service ........................................... SUCCESS
INFO |  Remove infrastructure .......................................... SUCCESS
INFO |
INFO | TOTAL: 3
INFO | FAILED: 0 (0.0%)
INFO | SUCCESS:    3 (100.0%)
INFO |
INFO | ------------------------------------------------------------------------



Test results: Total: 0, Passed: 1, Failed: 0, Errors: 0, Skipped: 0
    fruit-service (fruit-service.feature): Passed

2.1.4. Running YAKS in the Cloud

YAKS is able to run tests both locally and as part of a Kubernetes cluster. When running tests on Cloud infrastructure YAKS leverages the Operator SDK and provides a specific operator to manage the test case resources on the cluster. Each time you declare a test in the form of a custom resource, the YAKS operator automatically takes care of preparing the proper runtime in order to execute the test as a Kubernetes Pod.

Why would you want to run tests as Cloud-native resources on the Kubernetes platform?

Kubernetes has become a standard target platform for Serverless and Microservices architectures.

Writing a Serverless or Microservices application for instance with Camel K is very declarative. As a developer you just write the Camel route and run it as an integration via the Camel K operator directly on the cluster. The declarative approach as well as the nature of Serverless applications make us rely on a given runtime infrastructure, and it is essential to verify the applications also on that infrastructure. So it is only natural to also move the verifying tests into this very same Cloud infrastructure. This is why YAKS also brings your tests to the Cloud infrastructure for integration and end-to-end testing.

So here is how it works. You can run the very same YAKS test that you ran locally as a Pod in Kubernetes.

YAKS provides a Kubernetes operator and a set of custom resource definitions (CRDs) that we need to install on the cluster. The best way to install YAKS is to use the OperatorHub or the yaks CLI tool that you can download from the YAKS GitHub release pages.

With the yaks-client binary, simply run the install command:

$ yaks install

This command prepares your Kubernetes cluster for running tests with YAKS. It will take care of installing the YAKS custom resource definitions, setting up role permissions and creating the YAKS operator in a global operator namespace.

Important

You need to be a cluster admin to install custom resource definitions. The operation needs to be done only once for the entire cluster.

Now that the YAKS operator is up and running, you can run the very same test from local testing on the Cloud infrastructure as well. The only thing that needs to be done is to adjust the HTTP endpoint URL of the Camel K integration from http://localhost:8080 to http://fruit-service.${YAKS_NAMESPACE}:
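For instance, if the feature file sets the endpoint with the YAKS HTTP step, the change might look like the following sketch (the surrounding feature content is illustrative, not taken from the demo repository):

```gherkin
Feature: Fruit service

  Background:
    # Local runs used: Given URL: http://localhost:8080
    # On the cluster, address the integration via its service name instead:
    Given URL: http://fruit-service.${YAKS_NAMESPACE}
```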

$ yaks run fruit-service.feature

Note

We have just omitted the --local CLI option.

Instead of using local JBang tooling to run the test locally now the YAKS CLI connects to the Kubernetes cluster to create the test as a custom resource. From there the YAKS operator takes over preparing the test runtime and running the test as a Pod.

The test prepared some infrastructure, in particular the Camel K integration and the AWS S3 LocalStack Testcontainers instance. How does that work inside Kubernetes? YAKS takes care of it completely. The Camel K integration is run by the Camel K operator on the same Kubernetes cluster, and the AWS S3 Testcontainers instance is automatically run as a Pod in Kubernetes. Even connection settings are handled automatically. It just works!

You will see similar test log output when running the test remotely, and the test performs its actions and validation exactly as it does locally.

You can also review the test Pod outcome with:

$ yaks ls

This is an example output you should get:

NAME           PHASE   TOTAL  PASSED  FAILED  SKIPPED  ERRORS
fruit-service  Passed  3      3       0       0        0

2.1.5. Demonstration

The whole demo code is available on this GitHub repository. It also shows how to integrate the tests in a GitHub CI actions workflow, so you can run the tests automatically with every code change.

2.1.6. Apache Camel K steps

Apache Camel K is a lightweight integration framework built from Apache Camel that runs natively on Kubernetes and is specifically designed for serverless and microservice architectures.

Users of Camel K can instantly run integration code written in Camel DSL on their preferred cloud (Kubernetes or OpenShift).

If the subject under test is a Camel K integration, you can leverage the YAKS Camel K bindings that provide useful steps for managing Camel K integrations.

Working with Camel K integrations

Given create Camel K integration helloworld.groovy
"""
from('timer:tick?period=1000')
  .setBody().constant('Hello world from Camel K!')
  .to('log:info')
"""
Given Camel K integration helloworld is running
Then Camel K integration helloworld should print Hello world from Camel K!

The YAKS framework provides the Camel K extension library by default. You can create a new Camel K integration and check the status of the integration (e.g. running).

The following sections describe the available Camel K steps in detail.

2.1.6.1. API version

The default Camel K API version used to create and manage resources is v1. You can overwrite this version with an environment variable set on the YAKS configuration.

Overwrite Camel K API version

YAKS_CAMELK_API_VERSION=v1alpha1

This sets the Camel K API version for all operations.
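If you keep environment settings in the test configuration, the override could be placed in yaks-config.yaml like this (a sketch assuming the standard config.runtime.env layout of the YAKS test configuration):

```yaml
# yaks-config.yaml (sketch): pin the Camel K API version for all tests
config:
  runtime:
    env:
      - name: YAKS_CAMELK_API_VERSION
        value: v1alpha1
```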

2.1.6.2. Create Camel K integrations

@Given("^(?:create|new) Camel K integration {name}.{type}$")

Given create Camel K integration {name}.groovy
"""
<<Camel DSL>>
"""

Creates a new Camel K integration with specified route DSL. The integration is automatically started and can be referenced with its {name} in other steps.

@Given("^(?:create|new) Camel K integration {name}.{type} with configuration:$")

Given create Camel K integration {name}.groovy with configuration:
  | dependencies | mvn:org.foo:foo:1.0,mvn:org.bar:bar:0.9 |
  | traits       | quarkus.native=true,quarkus.enabled=true,route.enabled=true |
  | properties   | foo.key=value,bar.key=value |
  | source       | <<Camel DSL>> |

You can add optional configurations to the Camel K integration such as dependencies, traits and properties.

Source

The route DSL as source for the Camel K integration.

Dependencies

List of Maven coordinates that will be added to the integration runtime as a library.

Traits

List of trait configurations that will be added to the integration spec. Each trait configuration value must be in the format traitname.key=value.

Properties

List of property bindings added to the integration. Each value must be in the format key=value.
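Putting the options together, a concrete creation step could look like the following sketch (the integration name timer-demo, the trait, and all values are illustrative):

```gherkin
Given create Camel K integration timer-demo.groovy with configuration:
  | traits     | logging.level=DEBUG |
  | properties | greeting=Hello from Camel K |
  | source     | from('timer:tick?period=1000').setBody().constant('{{greeting}}').to('log:info') |
Given Camel K integration timer-demo is running
Then Camel K integration timer-demo should print Hello from Camel K
```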

2.1.6.3. Load Camel K integrations

@Given("^load Camel K integration {name}.{type}$")

Given load Camel K integration {name}.groovy

Loads the file {name}.groovy as a Camel K integration.

2.1.6.4. Delete Camel K integrations

@Given("^delete Camel K integration {name}$")

Given delete Camel K integration {name}

Deletes the Camel K integration with given {name}.

2.1.6.5. Verify integration state

A Camel K integration is run in a normal Kubernetes pod. The pod has a state and is in a phase (e.g. running, stopped). You can verify the state with an expectation.

@Given("^Camel K integration {name} is running/stopped$")

Given Camel K integration {name} is running

Checks that the Camel K integration with given {name} is in state running and that the number of replicas is > 0. The step polls the state of the integration for a given amount of attempts with a given delay between attempts. You can adjust the polling settings with:

@Given("^Camel K resource polling configuration$")

Given Camel K resource polling configuration
    | maxAttempts          | 10   |
    | delayBetweenAttempts | 1000 |
2.1.6.6. Watch Camel K integration logs

@Given("^Camel K integration {name} should print (.*)$")

Given Camel K integration {name} should print {log-message}

Watches the log output of a Camel K integration and waits for given {log-message} to be present in the logs. The step polls the logs for a given amount of time. You can adjust the polling configuration with:

@Given("^Camel K resource polling configuration$")

Given Camel K resource polling configuration
    | maxAttempts          | 10   |
    | delayBetweenAttempts | 1000 |

You can also wait for a log message to not be present in the output. Just use this step:

@Given("^Camel K integration {name} should not print (.*)$")

Given Camel K integration {name} should not print {log-message}

You can enable YAKS to print the integration logs to the test log output while the test is running. The logging can be enabled/disabled with the environment variable setting `YAKS_CAMELK_PRINT_POD_LOGS=true/false`.

To see the log output in the test console logging, you must set the logging level to INFO (for example, in yaks-config.yaml):

config:
  runtime:
    settings:
      loggers:
        - name: INTEGRATION_STATUS
          level: INFO
        - name: INTEGRATION_LOGS
          level: INFO

2.1.6.7. Manage Camel K resources

The Camel K steps are able to create resources such as integrations. By default these resources get removed automatically after the test scenario.

The auto removal of Camel K resources can be turned off with the following step.

@Given("^Disable auto removal of Camel K resources$")

Given Disable auto removal of Camel K resources

Usually this step is a Background step for all scenarios in a feature file. This way multiple scenarios can work on the very same Camel K resources and share integrations.
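As a sketch, a feature file that shares one integration across scenarios might look like this (the feature and integration names are illustrative):

```gherkin
Feature: Shared integration

  Background:
    Given Disable auto removal of Camel K resources

  Scenario: Create the integration
    Given create Camel K integration shared.groovy
    """
    from('timer:tick?period=1000')
      .setBody().constant('ping')
      .to('log:info')
    """
    Given Camel K integration shared is running

  Scenario: Verify the integration output
    Then Camel K integration shared should print ping
```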

There is also a separate step to explicitly enable the auto removal.

@Given("^Enable auto removal of Camel K resources$")

Given Enable auto removal of Camel K resources

By default, all Camel K resources are automatically removed after each scenario.

2.1.6.7.1. Enable and disable auto removal using environment variable

You can enable/disable the auto removal via environment variable settings.

The environment variable is called YAKS_CAMELK_AUTO_REMOVE_RESOURCES=true/false and must be set on the yaks-config.yaml test configuration for all tests in the test suite.

There is also a system property called yaks.camelk.auto.remove.resources=true/false that you must set in the yaks.properties file.
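For example, the environment variable could be set for the whole suite in yaks-config.yaml (a sketch assuming the standard config.runtime.env layout of the YAKS test configuration):

```yaml
# yaks-config.yaml (sketch): keep Camel K resources after each scenario
config:
  runtime:
    env:
      - name: YAKS_CAMELK_AUTO_REMOVE_RESOURCES
        value: "false"
```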

2.1.7. Kamelet steps

Kamelets are a form of predefined Camel route templates implemented in Camel K. Usually a Kamelet encapsulates a certain functionality (e.g. sending messages to an endpoint). Additionally, Kamelets define a set of properties that the user needs to provide when using the Kamelet.

YAKS provides steps to manage Kamelets.

2.1.7.1. API version

The default Kamelet API version used to create and manage resources is v1. You can overwrite this version with an environment variable set on the YAKS configuration (for example, to use v1alpha1).

Overwrite Kamelet API version

YAKS_KAMELET_API_VERSION=v1alpha1

This sets the Kamelet API version for all operations.

2.1.7.2. Create Kamelets

A Kamelet defines a set of properties and specifications that you can set with separate steps in your feature. Each of the following steps sets a specific property on the Kamelet. Once you are done with the Kamelet specification, you can create the Kamelet in the current namespace.

Firstly, you can specify the media type of the available slots (in, out and error) in the Kamelet.

@Given("^Kamelet dataType (in|out|error)(?:=| is )\"{mediaType}\"$")

Given Kamelet dataType in="{mediaType}"

The Kamelet can use a title that you set with the following step.

@Given("^Kamelet title \"{title}\"$")

Given Kamelet title "{title}"

Each template uses an endpoint uri and defines a set of steps that get called when the Kamelet processing takes place. The following step defines a template on the current Kamelet.

@Given("^Kamelet template$")

Given Kamelet template
"""
from:
  uri: timer:tick
  parameters:
    period: "#property:period"
  steps:
  - set-body:
      constant: "{{message}}"
  - to: "kamelet:sink"
"""

The template uses two properties {{message}} and {{period}}. These placeholders need to be provided by the Kamelet user. The next step defines the property message in detail:

@Given("^Kamelet property definition {name}$")

Given Kamelet property definition message
  | type     | string        |
  | required | true          |
  | example  | "hello world" |
  | default  | "hello"       |

The property receives specification such as type, required and an example. In addition to the example you can set a default value for the property.

In addition to using a template on the Kamelet you can add multiple sources to the Kamelet.

@Given("^Kamelet source {name}.{language}$")

Given Kamelet source timer.yaml
"""
<<YAML>>
"""

The previous steps defined all properties and Kamelet specifications so now you are ready to create the Kamelet in the current namespace.

@Given("^(?:create|new) Kamelet {name}$")

Given create Kamelet {name}

The Kamelet requires a unique name. Creating a Kamelet means that a new custom resource of type Kamelet is created. As a variation you can also set the template when creating the Kamelet.

@Given("^(?:create|new) Kamelet {name} with template$")

Given create Kamelet {name} with template
"""
<<YAML>>
"""

This creates the Kamelet in the current namespace.
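Combining the steps above, a complete Kamelet creation could read as follows (a sketch; the Kamelet name timer-source and the property values are illustrative):

```gherkin
Given Kamelet title "Timer Source"
Given Kamelet property definition message
  | type     | string        |
  | required | true          |
  | example  | "hello world" |
Given create Kamelet timer-source with template
"""
from:
  uri: timer:tick
  parameters:
    period: "#property:period"
  steps:
  - set-body:
      constant: "{{message}}"
  - to: "kamelet:sink"
"""
Then Kamelet timer-source is available
```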

2.1.7.3. Load Kamelets

You can create new Kamelets by giving the complete specification in an external YAML file. The step loads the file content and creates the Kamelet in the current namespace.

@Given("^load Kamelet {name}.kamelet.yaml$")

Given load Kamelet {name}.kamelet.yaml

Loads the file {name}.kamelet.yaml as a Kamelet. At the moment, only the kamelet.yaml source file extension is supported.
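As a sketch, a loaded file such as timer-source.kamelet.yaml could contain a minimal Kamelet specification like the following (the name and property values are illustrative):

```yaml
# timer-source.kamelet.yaml (sketch of a minimal Kamelet specification)
apiVersion: camel.apache.org/v1
kind: Kamelet
metadata:
  name: timer-source
spec:
  definition:
    title: Timer Source
    properties:
      period:
        type: integer
        default: 1000
  template:
    from:
      uri: timer:tick
      parameters:
        period: "{{period}}"
      steps:
        - set-body:
            constant: "Hello"
        - to: "kamelet:sink"
```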

2.1.7.4. Delete Kamelets

@Given("^delete Kamelet {name}$")

Given delete Kamelet {name}

Deletes the Kamelet with given {name} from the current namespace.

2.1.7.5. Verify Kamelet is available

@Given("^Kamelet {name} is available$")

Given Kamelet {name} is available

Verifies that the Kamelet custom resource is available in the current namespace.

2.1.8. Pipe steps

You can bind a Kamelet as a source to a sink. This concept is described with Pipes. YAKS as a framework is able to create and verify Pipes in combination with Kamelets.

Note

Pipes are available since API version v1 in Camel K. YAKS also supports KameletBinding resources, which represent the v1alpha1 equivalent to Pipes. So in case you need to work with KameletBindings, you need to explicitly set the Kamelet API version to v1alpha1 (for example, via the environment variable setting YAKS_KAMELET_API_VERSION=v1alpha1).

2.1.8.1. Create Pipes

YAKS provides multiple steps that bind a Kamelet source to a sink. The Pipe forwards all messages processed by the source to the sink.

2.1.8.1.1. Bind to HTTP URI

@Given("^bind Kamelet {kamelet} to uri {uri}$")

Given bind Kamelet {name} to uri {uri}

This defines the Pipe with the given Kamelet name as the source and the given HTTP URI as the sink.

2.1.8.1.2. Bind to Kafka topic

You can bind a Kamelet source to a Kafka topic sink. All messages will be forwarded to the topic.

@Given("^bind Kamelet {kamelet} to Kafka topic {topic}$")

Given bind Kamelet {kamelet} to Kafka topic {topic}
2.1.8.1.3. Bind to Knative channel

Channels are part of the eventing in Knative. Similar to topics in Kafka the channels hold messages for subscribers.

@Given("^bind Kamelet {kamelet} to Knative channel {channel}$")

Given bind Kamelet {kamelet} to Knative channel {channel}

Channels can be backed with different implementations. You can explicitly set the channel type to use in the pipe.

@Given("^bind Kamelet {kamelet} to Knative channel {channel} of kind {kind}$")

Given bind Kamelet {kamelet} to Knative channel {channel} of kind {kind}
2.1.8.1.4. Specify source/sink properties

The Pipe may need to specify properties for source and sink. These properties are defined, for instance, in the Kamelet source specification.

You can set properties with values in the following step:

@Given("^Pipe source properties$")

Given Pipe source properties
  | {property}  | {value} |

The Kamelet source that we have used in the examples above has defined a property message. So you can set the property on the pipe as follows.

Given Pipe source properties
  | message  | "Hello world" |

The same approach applies to sink properties.

@Given("^Pipe sink properties$")

Given Pipe sink properties
  | {property}  | {value} |
2.1.8.1.5. Create the pipe

The previous steps have defined source and sink of the Pipe specification. Now you are ready to create the Pipe in the current namespace.

@Given("^(?:create|new) Pipe {name}$")

Given create Pipe {name}

The Pipe receives a unique name and uses the previously specified source and sink. Creating a Pipe means that a new custom resource of type Pipe is created in the current namespace.
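For example, building on a Kamelet such as the illustrative timer-source from the previous sections, a Pipe to a Kafka topic could be declared like this (a sketch; the Kamelet, topic, and Pipe names are illustrative):

```gherkin
Given bind Kamelet timer-source to Kafka topic my-topic
Given Pipe source properties
  | message | "Hello world" |
Given create Pipe timer-to-kafka
Then Pipe timer-to-kafka is available
```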

2.1.8.2. Load Pipes

You can create new Pipes by giving the complete specification in an external YAML file. The step loads the file content and creates the Pipe in the current namespace.

@Given("^load Pipe {name}.yaml$")

Given load Pipe {name}.yaml

Loads the file {name}.yaml as a Pipe. At the moment YAKS only supports .yaml source files.
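As a sketch, a loaded file such as timer-to-kafka.yaml could contain a Pipe specification along these lines (all names are illustrative, and the sink apiVersion assumes a Strimzi-managed KafkaTopic):

```yaml
# timer-to-kafka.yaml (sketch): Pipe binding a Kamelet source to a Kafka topic
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: timer-to-kafka
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: timer-source
    properties:
      message: Hello world
  sink:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta2
      name: my-topic
```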

2.1.8.3. Delete Pipes

@Given("^delete Pipe {name}$")

Given delete Pipe {name}

Deletes the Pipe with given {name} from the current namespace.

2.1.8.4. Verify Pipe is available

@Given("^Pipe {name} is available$")

Given Pipe {name} is available

Verifies that the Pipe custom resource is available in the current namespace.

2.1.8.5. Manage Kamelet and Pipe resources

The described steps are able to create Kamelet resources in the current Kubernetes namespace. By default, these resources are removed automatically after the test scenario.

The auto removal of Kamelet resources can be turned off with the following step.

@Given("^Disable auto removal of Kamelet resources$")

Given Disable auto removal of Kamelet resources

Usually this step is a Background step for all scenarios in a feature file. This way multiple scenarios can work on the very same Kamelet resources and share integrations.

There is also a separate step to explicitly enable the auto removal.

@Given("^Enable auto removal of Kamelet resources$")

Given Enable auto removal of Kamelet resources

By default, all Kamelet resources are automatically removed after each scenario.

2.1.8.5.1. Enable and disable auto removal using environment variable

You can enable/disable the auto removal via environment variable settings.

The environment variable is called YAKS_CAMELK_AUTO_REMOVE_RESOURCES=true/false and must be set on the yaks-config.yaml test configuration for all tests in the test suite.

There is also a system property called yaks.camelk.auto.remove.resources=true/false that you must set in the yaks.properties file.

Legal Notice

Copyright © 2024 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.