
Chapter 3. Kafka Bridge configuration


Configure a deployment of the Kafka Bridge with Kafka-related properties and specify the HTTP connection details needed to interact with Kafka. Additionally, you can enable metrics in Prometheus format using either the Prometheus JMX Exporter or the Streams for Apache Kafka Metrics Reporter. You can also use configuration properties to enable distributed tracing with the Kafka Bridge. Distributed tracing allows you to track the progress of transactions between applications in a distributed system.

Note

Use the KafkaBridge resource to configure properties when you are running the Kafka Bridge on OpenShift.

3.1. Configuring Kafka Bridge properties

This procedure describes how to configure the Kafka and HTTP connection properties used by the Kafka Bridge.

You configure the Kafka Bridge, as you would any other Kafka client, using the appropriate prefixes for Kafka-related properties:

  • kafka. for general configuration that applies to producers and consumers, such as server connection and security.
  • kafka.consumer. for consumer-specific configuration passed only to the consumer.
  • kafka.producer. for producer-specific configuration passed only to the producer.

As well as enabling HTTP access to a Kafka cluster, HTTP properties let you enable and define access control for the Kafka Bridge through Cross-Origin Resource Sharing (CORS). CORS is an HTTP mechanism that allows browser access to selected resources from more than one origin. To configure CORS, you define a list of allowed resource origins and the HTTP methods used to access them. Additional HTTP headers in requests describe the origins that are permitted access to the Kafka cluster.
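As a sketch of the mechanism, a browser client sends a preflight request naming its origin and intended method, and the Kafka Bridge answers with the origins and methods it allows. The path and values below are illustrative only:

```http
OPTIONS /consumers/my-group HTTP/1.1
Origin: https://strimzi.io
Access-Control-Request-Method: POST

HTTP/1.1 200 OK
Access-Control-Allow-Origin: https://strimzi.io
Access-Control-Allow-Methods: GET,POST,PUT,DELETE,OPTIONS,PATCH
```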

Procedure

  1. Edit the application.properties file provided with the Kafka Bridge installation archive.

    Use the properties file to specify Kafka and HTTP-related properties.

    1. Configure standard Kafka-related properties, including properties specific to the Kafka consumers and producers.

      Use:

      • kafka.bootstrap.servers to define the host/port connections to the Kafka cluster
      • kafka.producer.acks to provide acknowledgments to the HTTP client
      • kafka.consumer.auto.offset.reset to determine how to manage reset of the offset in Kafka

For more information on configuring Kafka properties, see the Apache Kafka website.
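For example, the three properties above might be set as follows (the hostname and values are illustrative; adjust them for your cluster):

```properties
# Host/port connections to the Kafka cluster (hypothetical address)
kafka.bootstrap.servers=my-cluster-kafka:9092
# Acknowledgments returned to the HTTP client for produced messages
kafka.producer.acks=1
# Where to start consuming when no committed offset exists
kafka.consumer.auto.offset.reset=earliest
```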

    2. Configure HTTP-related properties to enable HTTP access to the Kafka cluster.

      For example:

      bridge.id=my-bridge
      http.host=0.0.0.0
      http.port=8080 (1)
      http.cors.enabled=true (2)
      http.cors.allowedOrigins=https://strimzi.io (3)
      http.cors.allowedMethods=GET,POST,PUT,DELETE,OPTIONS,PATCH (4)

      (1) The default HTTP configuration for the Kafka Bridge to listen on port 8080.
      (2) Set to true to enable CORS.
      (3) Comma-separated list of allowed CORS origins. You can use a URL or a Java regular expression.
      (4) Comma-separated list of allowed HTTP methods for CORS.
  2. Save the configuration file.

3.2. Configuring Prometheus JMX Exporter metrics

Enable the Prometheus JMX Exporter to collect Kafka Bridge metrics by setting the bridge.metrics option to jmxPrometheusExporter.

Procedure

  1. Set the bridge.metrics configuration to jmxPrometheusExporter.

    Configuration for enabling metrics

    bridge.metrics=jmxPrometheusExporter

    Optionally, you can add a custom Prometheus JMX Exporter configuration using the bridge.metrics.exporter.config.path property. If not configured, a default embedded configuration file is used.
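A minimal sketch of the two properties together; the YAML path is hypothetical and would point at your own Prometheus JMX Exporter rules file:

```properties
bridge.metrics=jmxPrometheusExporter
# Optional: custom Prometheus JMX Exporter configuration.
# If omitted, a default embedded configuration file is used.
bridge.metrics.exporter.config.path=/path/to/my-exporter-config.yaml
```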

  2. Run the Kafka Bridge script.

    Running the Kafka Bridge

    ./bin/kafka_bridge_run.sh --config-file=<path>/application.properties

    With metrics enabled, you can scrape metrics in Prometheus format from the /metrics endpoint of the Kafka Bridge.
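The /metrics endpoint returns plain text in the Prometheus exposition format. As a minimal sketch of what a scrape response looks like and how it can be read, the sample payload and metric names below are invented for illustration; real names depend on the bridge and exporter configuration:

```python
# Sample Prometheus exposition-format payload (invented for illustration).
sample = """\
# HELP jvm_threads_current Current thread count
# TYPE jvm_threads_current gauge
jvm_threads_current 42.0
# HELP http_requests_total Total HTTP requests
# TYPE http_requests_total counter
http_requests_total{method="GET"} 128.0
"""

def parse_metrics(text):
    """Return a dict mapping metric name (with labels) to its value."""
    metrics = {}
    for line in text.splitlines():
        if not line or line.startswith("#"):
            continue  # skip blank lines and HELP/TYPE comments
        name, _, value = line.rpartition(" ")
        metrics[name] = float(value)
    return metrics

print(parse_metrics(sample))
```

In practice you would fetch the text with an HTTP GET against the bridge's /metrics endpoint and hand it to a Prometheus server rather than parse it by hand; the parser above only illustrates the format.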

3.3. Configuring Streams for Apache Kafka Metrics Reporter metrics

Important

This feature is a technology preview and is not intended for production environments. For more information, see the release notes.

Enable the Streams for Apache Kafka Metrics Reporter to collect Kafka Bridge metrics by setting the bridge.metrics option to strimziMetricsReporter.

Procedure

  1. Set the bridge.metrics configuration to strimziMetricsReporter.

    Configuration for enabling metrics

    bridge.metrics=strimziMetricsReporter

    Optionally, you can configure a comma-separated list of regular expressions to filter exposed metrics using the kafka.prometheus.metrics.reporter.allowlist property. If not configured, a default set of metrics is exposed.

    If needed, you can configure the allowlist per client type. For example, setting kafka.admin.prometheus.metrics.reporter.allowlist= (an empty value) excludes all admin client metrics.

    You can add any plugin configuration to the Kafka Bridge properties file using kafka., kafka.admin., kafka.producer., and kafka.consumer. prefixes. In the event that the same property is configured with multiple prefixes, the most specific prefix takes precedence. For example, kafka.producer.prometheus.metrics.reporter.allowlist takes precedence over kafka.prometheus.metrics.reporter.allowlist.
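A sketch of these precedence rules in a single properties file; the regular expressions are illustrative:

```properties
bridge.metrics=strimziMetricsReporter
# Applies to all clients unless overridden by a more specific prefix
kafka.prometheus.metrics.reporter.allowlist=.*_total,.*_rate
# More specific: producers expose only metrics matching this pattern
kafka.producer.prometheus.metrics.reporter.allowlist=.*_total
# Empty value: admin client metrics are excluded entirely
kafka.admin.prometheus.metrics.reporter.allowlist=
```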

  2. Run the Kafka Bridge script.

    Running the Kafka Bridge

    ./bin/kafka_bridge_run.sh --config-file=<path>/application.properties

    With metrics enabled, you can scrape metrics in Prometheus format from the /metrics endpoint of the Kafka Bridge.

3.4. Configuring distributed tracing

Enable distributed tracing to trace messages consumed and produced by the Kafka Bridge, and HTTP requests from client applications.

Properties to enable tracing are present in the application.properties file. To enable distributed tracing, do the following:

  • Set the bridge.tracing property value to enable the tracing you want to use. The only possible value is opentelemetry.
  • Set environment variables for tracing.

With the default configuration, OpenTelemetry tracing uses OTLP as the exporter protocol. By configuring the OTLP endpoint, you can still use a Jaeger backend instance to get traces.

Note

Jaeger has supported the OTLP protocol since version 1.35. Older Jaeger versions cannot get traces using the OTLP protocol.

OpenTelemetry defines an API specification for collecting tracing data as spans of metrics data. Spans represent a specific operation. A trace is a collection of one or more spans.

Traces are generated when the Kafka Bridge does the following:

  • Sends messages from Kafka to consumer HTTP clients
  • Receives messages from producer HTTP clients to send to Kafka

Jaeger implements the required APIs and presents visualizations of the trace data in its user interface for analysis.

To have end-to-end tracing, you must configure tracing in your HTTP clients.
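OpenTelemetry propagates context between services using W3C Trace Context headers, so an HTTP client participating in end-to-end tracing sends a traceparent header with its requests to the bridge. As a minimal sketch (in a real client the IDs come from its tracer rather than being generated by hand, and the content type shown is only an example):

```python
import secrets

def make_traceparent():
    """Build a W3C Trace Context 'traceparent' header value:
    version-traceid-spanid-flags (all lowercase hex)."""
    trace_id = secrets.token_hex(16)  # 16 random bytes -> 32 hex chars
    span_id = secrets.token_hex(8)    # 8 random bytes -> 16 hex chars
    return f"00-{trace_id}-{span_id}-01"  # flags 01 = sampled

# Headers a tracing-aware HTTP client might attach to a bridge request.
headers = {
    "Content-Type": "application/vnd.kafka.json.v2+json",
    "traceparent": make_traceparent(),
}
print(headers["traceparent"])
```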

Important

Streams for Apache Kafka no longer supports OpenTracing. If you were previously using OpenTracing with the bridge.tracing=jaeger option, we encourage you to transition to using OpenTelemetry instead.

Procedure

  1. Edit the application.properties file provided with the Kafka Bridge installation archive.

    Use the bridge.tracing property to enable the tracing you want to use.

    Example configuration to enable OpenTelemetry

    bridge.tracing=opentelemetry (1)

    (1) To enable OpenTelemetry, uncomment this property by removing the # at the beginning of the line.

    With tracing enabled, you initialize tracing when you run the Kafka Bridge script.

  2. Save the configuration file.
  3. Set the environment variables for tracing.

    Environment variables for OpenTelemetry

    OTEL_SERVICE_NAME=my-tracing-service (1)
    OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317 (2)

    (1) The name of the OpenTelemetry tracer service.
    (2) The gRPC-based OTLP endpoint that listens for spans on port 4317.
  4. Run the Kafka Bridge script with the property enabled for tracing.

    Running the Kafka Bridge with OpenTelemetry enabled

    ./bin/kafka_bridge_run.sh --config-file=<path>/application.properties

    The internal consumers and producers of the Kafka Bridge are now enabled for tracing.

3.4.1. Specifying tracing systems with OpenTelemetry

Instead of the default OTLP tracing system, you can specify other tracing systems that are supported by OpenTelemetry.

If you want to use another tracing system with OpenTelemetry, do the following:

  1. Add the library of the tracing system to the Kafka classpath.
  2. Add the name of the tracing system as an additional exporter environment variable.

    Additional environment variable when not using OTLP

    OTEL_SERVICE_NAME=my-tracing-service
    OTEL_TRACES_EXPORTER=zipkin (1)
    OTEL_EXPORTER_ZIPKIN_ENDPOINT=http://localhost:9411/api/v2/spans (2)

    (1) The name of the tracing system. In this example, Zipkin is specified.
    (2) The endpoint of the selected exporter that listens for spans. In this example, a Zipkin endpoint is specified.

3.4.2. Supported span attributes

In addition to the standard OpenTelemetry attributes, the Kafka Bridge adds the following attributes from the OpenTelemetry semantic conventions for HTTP to its spans.

Attribute key               Attribute value
peer.service                Hardcoded to kafka
http.request.method         The HTTP method used to make the request
url.scheme                  The URI scheme component
url.path                    The URI path component
url.query                   The URI query component
messaging.destination.name  The name of the Kafka topic being produced to or read from
messaging.system            Hardcoded to kafka
http.response.status_code   ok for HTTP responses between 200 and 300; error for all other status codes
