Chapter 3. Configuring proxies


Fine-tune your deployment by configuring proxies to include additional features according to your specific requirements.

3.1. Configuring virtual clusters

A Kafka cluster is represented by the proxy as a virtual cluster. Clients connect to the virtual cluster rather than the actual cluster. When Streams for Apache Kafka Proxy is deployed, it includes configuration to create virtual clusters.

A virtual cluster has exactly one target cluster, but many virtual clusters can target the same cluster. Each virtual cluster targets a single listener on the target cluster, so multiple listeners on the Kafka side are represented as multiple virtual clusters by the proxy. Clients connect to a virtual cluster using its bootstrap address, and the virtual cluster presents an address for each broker in the target cluster. When a client connects to the proxy, communication is proxied to the target broker by rewriting the address. Responses back to clients are rewritten to reflect the appropriate network addresses of the virtual clusters.
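To make the address rewriting concrete, the following sketch (hypothetical helper names; not part of the proxy's API) shows how a broker address pattern such as the `broker$(nodeId).my-cluster-proxy.kafka` pattern used later in this chapter could be expanded into the per-broker addresses a client receives:

```python
# Minimal sketch of deriving per-broker virtual addresses from a pattern.
# The pattern syntax follows the examples in this chapter; the function
# itself is illustrative, not an actual proxy API.

def virtual_broker_address(pattern: str, node_id: int, port: int) -> str:
    """Expand the $(nodeId) token and append the listening port."""
    host = pattern.replace("$(nodeId)", str(node_id))
    return f"{host}:{port}"

# Metadata responses proxied back to a client would then carry
# rewritten addresses such as:
addresses = [
    virtual_broker_address("broker$(nodeId).my-cluster-proxy.kafka", n, 9092)
    for n in range(3)
]
print(addresses)
```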

You can secure virtual cluster connections from clients and to target clusters.

Streams for Apache Kafka Proxy accepts keys and certificates in PEM (Privacy Enhanced Mail), PKCS #12 (Public-Key Cryptography Standards), or JKS (Java KeyStore) keystore format.

Streams for Apache Kafka Proxy configuration is defined in a ConfigMap resource. Use the data properties of the ConfigMap resource to configure the following:

  • Virtual clusters that represent the Kafka clusters
  • Network addresses for broker communication in a Kafka cluster
  • Filters to introduce additional functionality to the Kafka deployment

In this example, configuration for the Record Encryption filter is shown.

Example Streams for Apache Kafka Proxy configuration

apiVersion: v1
kind: ConfigMap
metadata:
  name: proxy-config
data:
  config.yaml: |
    adminHttp: # 1
      endpoints:
        prometheus: {}
    virtualClusters: # 2
      my-cluster-proxy: # 3
        targetCluster:
          bootstrap_servers: my-cluster-kafka-bootstrap.kafka.svc.cluster.local:9093 # 4
          tls: # 5
            trust:
              storeFile: /opt/proxy/trust/ca.p12
              storePassword:
                passwordFile: /opt/proxy/trust/ca.password
        clusterNetworkAddressConfigProvider: # 6
          type: SniRoutingClusterNetworkAddressConfigProvider # 7
          config:
            bootstrapAddress: my-cluster-proxy.kafka:9092 # 8
            brokerAddressPattern: broker$(nodeId).my-cluster-proxy.kafka
        logNetwork: false # 9
        logFrames: false
        tls: # 10
          key:
            storeFile: /opt/proxy/server/key-material/keystore.p12
            storePassword:
              passwordFile: /opt/proxy/server/keystore-password/storePassword
    filters: # 11
      - type: EnvelopeEncryption # 12
        config: # 13
          kms: VaultKmsService
          kmsConfig:
            vaultTransitEngineUrl: https://vault.vault.svc.cluster.local:8200/v1/transit
            vaultToken:
              passwordFile: /opt/proxy/server/token.txt
            tls: # 14
              key:
                storeFile: /opt/cert/server.p12
                storePassword:
                  passwordFile: /opt/cert/store.password
                keyPassword:
                  passwordFile: /opt/cert/key.password
                storeType: PKCS12
          selector: TemplateKekSelector
          selectorConfig:
            template: "${topicName}"

1. Enables metrics for the proxy.
2. Virtual cluster configuration.
3. The name of the virtual cluster.
4. The bootstrap address of the target physical Kafka cluster being proxied.
5. TLS configuration for the connection to the target cluster.
6. The configuration for the cluster network address configuration provider, which controls how the virtual cluster is presented to the network.
7. The built-in types are PortPerBrokerClusterNetworkAddressConfigProvider and SniRoutingClusterNetworkAddressConfigProvider.
8. The hostname and port of the bootstrap address used by Kafka clients. The hostname must be resolvable by the clients.
9. Logging is disabled by default. Enable logging related to network activity (logNetwork) and messages (logFrames) by setting the logging properties to true.
10. TLS encryption for securing connections with the clients.
11. Filter configuration.
12. The type of filter, which in this example is the Record Encryption filter using Vault as the KMS.
13. The configuration specific to the type of filter.
14. If required, you can also specify the credentials for TLS authentication with the KMS, with key names under which TLS certificates are stored.

3.3. Securing connections from clients

To secure client connections to virtual clusters, configure TLS on the virtual cluster by doing the following:

  • Obtain a server certificate for the virtual cluster from a Certificate Authority (CA).
    Ensure the certificate matches the names of the virtual cluster’s bootstrap and broker addresses.
    This may require wildcard certificates and Subject Alternative Names (SANs).
  • Provide the TLS configuration using the tls properties in the virtual cluster’s configuration to enable it to present the certificate to clients. Depending on your certificate format, apply one of the following examples.
Note

TLS is recommended on Kafka clients and virtual clusters for production configurations.
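To see why wildcard certificates and SANs matter here, recall that a TLS wildcard covers exactly one DNS label. The following sketch (a simplification of RFC 6125 matching, for illustration only) checks whether generated broker names fall under a wildcard SAN:

```python
def matches_wildcard_san(hostname: str, san: str) -> bool:
    """Simplified check: '*' covers exactly one leftmost DNS label."""
    if not san.startswith("*."):
        return hostname == san
    suffix = san[1:]  # e.g. ".mycluster.kafka.com"
    return (hostname.endswith(suffix)
            and "." not in hostname[: -len(suffix)])

# Broker addresses generated from a pattern like
# mybroker-$(nodeId).mycluster.kafka.com are covered by
# *.mycluster.kafka.com, but deeper names are not:
print(matches_wildcard_san("mybroker-1.mycluster.kafka.com", "*.mycluster.kafka.com"))  # True
print(matches_wildcard_san("a.b.mycluster.kafka.com", "*.mycluster.kafka.com"))         # False
```

This is why a certificate issued only for the bootstrap hostname is not sufficient: each generated broker address must also match a name on the certificate.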

Example PKCS #12 configuration

virtualClusters:
  my-cluster-proxy:
    tls:
      key:
        storeFile: <path>/server.p12 # 1
        storePassword:
          passwordFile: <path>/store.password # 2
        keyPassword:
          passwordFile: <path>/key.password # 3
        storeType: PKCS12 # 4
      # ...

1. PKCS #12 keystore containing the server certificate and private key of the virtual cluster.
2. Password to protect the PKCS #12 store.
3. (Optional) Password for the key. If a password is not specified, the keystore’s password is used to decrypt the key too.
4. (Optional) Keystore type. If a keystore type is not specified, the default JKS (Java KeyStore) type is used.

Example PEM configuration

virtualClusters:
  my-cluster-proxy:
    tls:
      key:
        privateKeyFile: <path>/server.key # 1
        certificateFile: <path>/server.crt # 2
        keyPassword:
          passwordFile: <path>/key.password # 3
      # ...

1. Private key of the virtual cluster.
2. Public certificate of the virtual cluster.
3. (Optional) Password for the key.

If required, configure the insecure property to disable trust and establish insecure connections with any Kafka cluster, irrespective of certificate validity. However, this option is intended only for use in development and testing environments where proper certificates are hard to obtain and manage.

Example to enable insecure TLS

virtualClusters:
  demo:
    targetCluster:
      bootstrap_servers: myprivatecluster:9092
      tls:
        trust:
          insecure: true # 1
      # ...

1. Enables insecure TLS.

3.4. Securing connections to target clusters

To secure a virtual cluster connection to a target cluster, configure TLS on the virtual cluster. The target cluster must already be configured to use TLS.

Specify TLS for the virtual cluster configuration using the targetCluster.tls properties.

Use an empty object ({}) to inherit trust from the underlying platform on which the cluster is running. This option is suitable if the target cluster is using a TLS certificate signed by a public CA.

Example target cluster configuration for TLS

virtualClusters:
  my-cluster-proxy:
    targetCluster:
      bootstrap_servers: my-cluster-kafka-bootstrap.kafka.svc.cluster.local:9093
      tls: {}
      #...

If the target cluster uses a TLS certificate signed by a private CA, you must add truststore configuration for the target cluster.

Example truststore configuration for a target cluster

virtualClusters:
  my-cluster-proxy:
    targetCluster:
      bootstrap_servers: my-cluster-kafka-bootstrap.kafka.svc.cluster.local:9093
      tls:
        trust:
          storeFile: <path>/trust.p12 # 1
          storePassword:
            passwordFile: <path>/store.password # 2
          storeType: PKCS12 # 3
      # ...

1. PKCS #12 store for the public CA certificate of the Kafka cluster.
2. Password to access the public Kafka cluster CA certificate.
3. (Optional) Keystore type. If a keystore type is not specified, the default JKS (Java KeyStore) type is used.

For mTLS, you can add keystore configuration for the virtual cluster too.

Example keystore and truststore configuration for mTLS

virtualClusters:
  my-cluster-proxy:
    targetCluster:
      bootstrap_servers: my-cluster-kafka-bootstrap.kafka.svc.cluster.local:9093
      tls:
        key:
          privateKeyFile: <path>/client.key # 1
          certificateFile: <path>/client.crt # 2
        trust:
          storeFile: <path>/server.crt
          storeType: PEM
# ...

1. Private key used by the virtual cluster to authenticate to the target cluster.
2. Public certificate used by the virtual cluster to authenticate to the target cluster.

For the purposes of testing outside of a production environment, you can set the insecure property to true to turn off TLS so that Streams for Apache Kafka Proxy can connect to any Kafka cluster.

Example configuration to turn off TLS

virtualClusters:
  my-cluster-proxy:
    targetCluster:
      bootstrap_servers: myprivatecluster:9092
      tls:
        trust:
          insecure: true
      #...

3.5. Configuring network addresses

Virtual cluster configuration requires a network address configuration provider that manages network communication and provides broker address information to clients.

Streams for Apache Kafka Proxy has the following built-in providers:

  • Broker address provider (PortPerBrokerClusterNetworkAddressConfigProvider)
  • Node ID ranges provider (RangeAwarePortPerNodeClusterNetworkAddressConfigProvider)
  • SNI routing address provider (SniRoutingClusterNetworkAddressConfigProvider)
Important

Make sure that the virtual cluster bootstrap address and generated broker addresses are resolvable and routable by the Kafka client.

3.5.1. Broker address provider

The per-broker network address configuration provider opens one port for a virtual cluster’s bootstrap address and one port for each broker in the target Kafka cluster. The number of open ports is maintained dynamically. For example, if a broker is removed from the cluster, the port assigned to it is closed. If you have two virtual clusters, each targeting a Kafka cluster with three brokers, eight ports are bound in total.
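The port count stated above can be checked with simple arithmetic; this sketch only restates the documented rule that each virtual cluster binds one bootstrap port plus one port per target broker:

```python
def ports_bound(virtual_clusters: int, brokers_per_cluster: int) -> int:
    # One bootstrap port plus one port per broker, for each virtual cluster.
    return virtual_clusters * (1 + brokers_per_cluster)

# Two virtual clusters, each targeting a three-broker Kafka cluster:
print(ports_bound(2, 3))  # 8
```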

This provider works best with straightforward configurations. Ideally, the target cluster should have sequential, stable broker IDs and a known minimum broker ID, such as 0, 1, 2 for a cluster with three brokers. While it can handle non-sequential broker IDs, this would require exposing ports equal to maxBrokerId - minBrokerId, which could be excessive if your cluster contains broker IDs like 0 and 20000.

The provider supports both cleartext and TLS downstream connections.

Example broker address configuration

clusterNetworkAddressConfigProvider:
  type: PortPerBrokerClusterNetworkAddressConfigProvider
  config:
    bootstrapAddress: mycluster.kafka.com:9192 # 1
    brokerAddressPattern: mybroker-$(nodeId).mycluster.kafka.com # 2
    brokerStartPort: 9193 # 3
    numberOfBrokerPorts: 3 # 4
    lowestTargetBrokerId: 1000 # 5
    bindAddress: 192.168.0.1 # 6

1. The hostname and port of the bootstrap address used by Kafka clients.
2. (Optional) The broker address pattern used to form broker addresses. If not defined, it defaults to the hostname part of the bootstrap address and the port number allocated to the broker.
3. (Optional) The starting number for the broker port range. Defaults to the port of the bootstrap address plus 1.
4. (Optional) The maximum number of broker ports that are permitted. Set this value according to the maximum number of brokers allowed by your operational rules. Defaults to 3.
5. (Optional) The lowest broker ID in the target cluster. Defaults to 0. This should match the lowest node.id (or broker.id) in the target cluster.
6. (Optional) The bind address used when binding the ports. If undefined, all network interfaces are bound.

Each broker’s ID must be greater than or equal to lowestTargetBrokerId and less than lowestTargetBrokerId + numberOfBrokerPorts. Node IDs are mapped to ports as follows: port = brokerStartPort + (nodeId - lowestTargetBrokerId). The example configuration maps broker IDs 1000, 1001, and 1002 to ports 9193, 9194, and 9195, respectively. Reconfigure numberOfBrokerPorts to accommodate the number of brokers in the cluster.
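The mapping rule can be restated as a one-line calculation; the defaults below are the values from the example configuration:

```python
def broker_port(node_id: int, broker_start_port: int = 9193,
                lowest_target_broker_id: int = 1000,
                number_of_broker_ports: int = 3) -> int:
    """port = brokerStartPort + (nodeId - lowestTargetBrokerId)"""
    offset = node_id - lowest_target_broker_id
    if not 0 <= offset < number_of_broker_ports:
        raise ValueError(f"node id {node_id} is outside the configured range")
    return broker_start_port + offset

print([broker_port(n) for n in (1000, 1001, 1002)])  # [9193, 9194, 9195]
```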

The example broker address configuration creates the following broker addresses:

mybroker-1000.mycluster.kafka.com:9193
mybroker-1001.mycluster.kafka.com:9194
mybroker-1002.mycluster.kafka.com:9195

The brokerAddressPattern configuration parameter accepts the optional $(nodeId) replacement token. If included, $(nodeId) is replaced by the broker’s node.id (or broker.id) in the target cluster. For example, with the configuration shown above, the broker with node ID 1001 is advertised to clients as mybroker-1001.mycluster.kafka.com:9194.

3.5.2. Node ID ranges provider

As an alternative to the broker address provider, the node ID ranges provider allows you to model specific ranges of node IDs in the target cluster, enabling efficient port allocation even when broker IDs are non-sequential or widely spaced. This ensures a deterministic mapping of node IDs to ports while minimizing the number of ports needed.

Example node ID ranges configuration

clusterNetworkAddressConfigProvider:
  type: RangeAwarePortPerNodeClusterNetworkAddressConfigProvider
  config:
    bootstrapAddress: mycluster.kafka.com:9192
    brokerAddressPattern: mybroker-$(nodeId).mycluster.kafka.com
    brokerStartPort: 9193
    nodeIdRanges: # 1
      - name: brokers # 2
        range:
          startInclusive: 0 # 3
          endExclusive: 3 # 4

1. The list of node ID ranges, which must be non-empty.
2. The name of the range, which must be unique within the nodeIdRanges list.
3. The start of the range (inclusive).
4. The end of the range (exclusive). It must be greater than startInclusive; empty ranges are not allowed.

Node ID ranges must be distinct, meaning a node ID cannot belong to more than one range.

KRaft roles given to cluster nodes can be accommodated in the configuration. For example, consider a target cluster using KRaft with the following node IDs and roles:

  • nodeId: 0, roles: controller
  • nodeId: 1, roles: controller
  • nodeId: 2, roles: controller
  • nodeId: 1000, roles: broker
  • nodeId: 1001, roles: broker
  • nodeId: 1002, roles: broker
  • nodeId: 99999, roles: broker

This can be modeled as three node ID ranges, as shown in the following example.

Example node ID ranges configuration with KRaft roles

clusterNetworkAddressConfigProvider:
  type: RangeAwarePortPerNodeClusterNetworkAddressConfigProvider
  config:
    bootstrapAddress: mycluster.kafka.com:9192
    nodeIdRanges:
      - name: controller
        range:
          startInclusive: 0
          endExclusive: 3
      - name: brokers
        range:
          startInclusive: 1000
          endExclusive: 1003
      - name: broker-outlier
        range:
          startInclusive: 99999
          endExclusive: 100000

This configuration results in the following mapping from node ID to port:

  • nodeId: 0 → port 9193
  • nodeId: 1 → port 9194
  • nodeId: 2 → port 9195
  • nodeId: 1000 → port 9196
  • nodeId: 1001 → port 9197
  • nodeId: 1002 → port 9198
  • nodeId: 99999 → port 9199
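The range-aware mapping walks the configured ranges in order and assigns consecutive ports. This sketch (an illustration of the documented behavior, not the proxy's implementation) reproduces the assignments listed above:

```python
def range_aware_ports(broker_start_port: int, node_id_ranges) -> dict:
    """Assign consecutive ports to node IDs, range by range."""
    mapping, port = {}, broker_start_port
    for start, end in node_id_ranges:  # [startInclusive, endExclusive)
        for node_id in range(start, end):
            mapping[node_id] = port
            port += 1
    return mapping

# Ranges from the KRaft example: controllers 0-2, brokers 1000-1002,
# plus the outlier node 99999.
ports = range_aware_ports(9193, [(0, 3), (1000, 1003), (99999, 100000)])
print(ports[0], ports[1000], ports[99999])  # 9193 9196 9199
```

Only seven ports are bound for broker traffic, even though the node IDs span 0 to 99999.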

3.5.3. SNI routing address provider

The SNI (Server Name Indication) routing provider opens a single port for all virtual clusters, or one port for each virtual cluster. Because it uses the SNI information in the TLS handshake to determine where to route the traffic, it requires downstream TLS.
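Conceptually, SNI routing inverts the broker address pattern: the proxy recovers the target node from the SNI hostname the client presents during the TLS handshake. The following sketch (illustrative only; not the proxy's implementation) shows that lookup:

```python
import re

def node_id_from_sni(sni_hostname: str,
                     pattern: str = "mybroker-$(nodeId).mycluster.kafka.com"):
    """Recover the target node ID from an SNI hostname.

    Returns None when the hostname does not match the broker pattern
    (for example, the bootstrap address).
    """
    regex = re.escape(pattern).replace(re.escape("$(nodeId)"), r"(\d+)")
    match = re.fullmatch(regex, sni_hostname)
    return int(match.group(1)) if match else None

print(node_id_from_sni("mybroker-2.mycluster.kafka.com"))  # 2
print(node_id_from_sni("mycluster.kafka.com"))             # None
```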

Example SNI routing address provider configuration

clusterNetworkAddressConfigProvider:
  type: SniRoutingClusterNetworkAddressConfigProvider
  config:
    bootstrapAddress: mycluster.kafka.com:9192 # 1
    brokerAddressPattern: mybroker-$(nodeId).mycluster.kafka.com
    bindAddress: 192.168.0.1

1. A single address for all traffic, including the bootstrap address and brokers.

In the SNI routing address configuration, the brokerAddressPattern specification is mandatory, as it is required to generate routes for each broker.

Note

Single port operation may have cost advantages when using load balancers of public clouds, as it allows a single cloud provider load balancer to be shared across all virtual clusters.
