Chapter 4. Configuring Streams for Apache Kafka Proxy


Fine-tune your deployment by configuring Streams for Apache Kafka Proxy resources to include additional features according to your specific requirements.

4.1. Example Streams for Apache Kafka Proxy configuration

Streams for Apache Kafka Proxy configuration is defined in a ConfigMap resource. Use the data properties of the ConfigMap resource to configure the following:

  • Virtual clusters that represent the Kafka clusters
  • Network addresses for broker communication in a Kafka cluster
  • Filters to introduce additional functionality to the Kafka deployment

In this example, configuration for the Record Encryption filter is shown.

Example Streams for Apache Kafka Proxy configuration

apiVersion: v1
kind: ConfigMap
metadata:
  name: proxy-config
data:
  config.yaml: |
    adminHttp: 1
      endpoints:
        prometheus: {}
    virtualClusters: 2
      my-cluster-proxy: 3
        targetCluster:
          bootstrap_servers: my-cluster-kafka-bootstrap.kafka.svc.cluster.local:9093 4
          tls: 5
            trust:
              storeFile: /opt/proxy/trust/ca.p12
              storePassword:
                passwordFile: /opt/proxy/trust/ca.password
        clusterNetworkAddressConfigProvider: 6
          type: SniRoutingClusterNetworkAddressConfigProvider
          config:
            bootstrapAddress: mycluster-proxy.kafka:9092
            brokerAddressPattern: broker$(nodeId).mycluster-proxy.kafka
        logNetwork: false 7
        logFrames: false
        tls: 8
          key:
            storeFile: /opt/proxy/server/key-material/keystore.p12
            storePassword:
              passwordFile: /opt/proxy/server/keystore-password/storePassword
    filters: 9
      - type: EnvelopeEncryption 10
        config: 11
          kms: VaultKmsService
          kmsConfig:
            vaultTransitEngineUrl: https://vault.vault.svc.cluster.local:8200/v1/transit
            vaultToken:
              passwordFile: /opt/proxy/server/token.txt
            tls: 12
              key:
                storeFile: /opt/cert/server.p12
                storePassword:
                  passwordFile: /opt/cert/store.password
                keyPassword:
                  passwordFile: /opt/cert/key.password
                storeType: PKCS12
          selector: TemplateKekSelector
          selectorConfig:
            template: "${topicName}"

1
Enables metrics for the proxy.
2
Virtual cluster configuration.
3
The name of the virtual cluster.
4
The bootstrap address of the target physical Kafka cluster being proxied.
5
TLS configuration for the connection to the target cluster.
6
The configuration for the cluster network address configuration provider that controls how the virtual cluster is presented to the network.
7
Logging is disabled by default. Enable logging related to network activity (logNetwork) and messages (logFrames) by setting the logging properties to true.
8
TLS encryption for securing connections with the clients.
9
Filter configuration.
10
The type of filter, which is the Record Encryption filter in this example.
11
The configuration specific to the type of filter.
12
The Record Encryption filter requires a connection to Vault. If required, you can also specify the credentials for TLS authentication with Vault, including the stores that hold the TLS certificates.

4.2. Configuring virtual clusters

A Kafka cluster is represented by the proxy as a virtual cluster. Clients connect to the virtual cluster rather than the actual cluster. When Streams for Apache Kafka Proxy is deployed, it includes configuration to create virtual clusters.

A virtual cluster has exactly one target cluster, but many virtual clusters can target the same cluster. Each virtual cluster targets a single listener on the target cluster, so multiple listeners on the Kafka side are represented as multiple virtual clusters by the proxy. Clients connect to a virtual cluster using a bootstrap_servers address. The virtual cluster has a bootstrap address that maps to each broker in the target cluster. When a client connects to the proxy, communication is proxied to the target broker by rewriting the address. Responses back to clients are rewritten to reflect the appropriate network addresses of the virtual clusters.
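
For example, the following sketch (using hypothetical virtual cluster names and assuming the target cluster exposes a plain listener on port 9092 and a TLS listener on port 9093) shows two virtual clusters that each proxy a different listener of the same target Kafka cluster. Each virtual cluster would also need its own network address configuration and any required TLS settings.

Example virtual clusters targeting different listeners of the same Kafka cluster (sketch)

virtualClusters:
  my-cluster-plain-proxy: # hypothetical name; proxies the plain listener
    targetCluster:
      bootstrap_servers: my-cluster-kafka-bootstrap.kafka.svc.cluster.local:9092
    # ...
  my-cluster-tls-proxy: # hypothetical name; proxies the TLS listener
    targetCluster:
      bootstrap_servers: my-cluster-kafka-bootstrap.kafka.svc.cluster.local:9093
    # ...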

You can secure virtual cluster connections from clients and to target clusters.

Streams for Apache Kafka Proxy accepts keys and certificates in PEM (Privacy Enhanced Mail), PKCS #12 (Public-Key Cryptography Standards), or JKS (Java KeyStore) keystore format.

4.2.1. Securing connections from clients

To secure client connections to virtual clusters, configure TLS on the virtual cluster by doing the following:

  • Obtain a server certificate for the virtual cluster from a Certificate Authority (CA). When requesting the certificate, ensure it matches the names of the virtual cluster’s bootstrap and broker addresses. This might require wildcard certificates and Subject Alternative Names (SANs).
  • Specify TLS credentials in the virtual cluster configuration using tls properties.

Example PKCS #12 configuration

virtualClusters:
  my-cluster-proxy:
    tls:
      key:
        storeFile: <path>/server.p12  1
        storePassword:
          passwordFile: <path>/store.password 2
        keyPassword:
          passwordFile: <path>/key.password 3
        storeType: PKCS12 4
      # ...

1
PKCS #12 store for the public CA certificate of the virtual cluster.
2
Password to protect the PKCS #12 store.
3
(Optional) Password for the key. If a password is not specified, the keystore’s password is used to decrypt the key too.
4
(Optional) Keystore type. If a keystore type is not specified, the default JKS (Java Keystore) type is used.
Note

TLS is recommended on Kafka clients and virtual clusters for production configurations.

Example PEM configuration

virtualClusters:
  my-cluster-proxy:
    tls:
      key:
        privateKeyFile: <path>/server.key   1
        certificateFile: <path>/server.crt 2
        keyPassword:
          passwordFile: <path>/key.password 3
# ...

1
Private key of the virtual cluster.
2
Public CA certificate of the virtual cluster.
3
(Optional) Password for the key.

If required, set the insecure property to true to disable trust and establish connections with any Kafka cluster, irrespective of certificate validity. However, this option is not recommended for production use.

Example to enable insecure TLS

virtualClusters:
  demo:
    targetCluster:
      bootstrap_servers: myprivatecluster:9092
      tls:
        trust:
          insecure: true 1
      #...
# ...

1
Enables insecure TLS.

4.2.2. Securing connections to target clusters

To secure a virtual cluster connection to a target cluster, configure TLS on the virtual cluster. The target cluster must already be configured to use TLS.

Specify TLS for the virtual cluster configuration using targetCluster.tls properties.

Use an empty object ({}) to inherit trust from the OpenShift platform. This option is suitable if the target cluster is using a TLS certificate signed by a public CA.

Example target cluster configuration for TLS

virtualClusters:
  my-cluster-proxy:
    targetCluster:
      bootstrap_servers: my-cluster-kafka-bootstrap.kafka.svc.cluster.local:9093
      tls: {}
      #...

If it is using a TLS certificate signed by a private CA, you must add truststore configuration for the target cluster.

Example truststore configuration for a target cluster

virtualClusters:
  my-cluster-proxy:
    targetCluster:
      bootstrap_servers: my-cluster-kafka-bootstrap.kafka.svc.cluster.local:9093
      tls:
        trust:
          storeFile: <path>/trust.p12 1
          storePassword:
            passwordFile: <path>/store.password 2
          storeType: PKCS12 3
      #...

1
PKCS #12 store for the public CA certificate of the Kafka cluster.
2
Password to access the public Kafka cluster CA certificate.
3
(Optional) Keystore type. If a keystore type is not specified, the default JKS (Java Keystore) type is used.

For mTLS, you can add keystore configuration for the virtual cluster too.

Example keystore and truststore configuration for mTLS

virtualClusters:
  my-cluster-proxy:
    targetCluster:
      bootstrap_servers: my-cluster-kafka-bootstrap.kafka.svc.cluster.local:9093
      tls:
        key:
          privateKeyFile: <path>/client.key 1
          certificateFile: <path>/client.crt 2
        trust:
          storeFile: <path>/server.crt
          storeType: PEM
# ...

1
Private key of the virtual cluster.
2
Public CA certificate of the virtual cluster.

For the purposes of testing outside of a production environment, you can set the insecure property to true to disable trust so that Streams for Apache Kafka Proxy can connect to any Kafka cluster, irrespective of certificate validity.

Example configuration to enable insecure TLS

virtualClusters:
  my-cluster-proxy:
    targetCluster:
      bootstrap_servers: myprivatecluster:9092
      tls:
        trust:
          insecure: true
      #...

4.3. Configuring network addresses

Virtual cluster configuration requires a network address configuration provider that manages network communication and provides broker address information to clients.

Streams for Apache Kafka Proxy has two built-in providers:

Broker address provider (PortPerBrokerClusterNetworkAddressConfigProvider)
The per-broker network address configuration provider opens one port for a virtual cluster’s bootstrap address and one port for each broker in the target Kafka cluster. The ports are maintained dynamically. For example, if a broker is removed from the cluster, the port assigned to it is closed.
SNI routing address provider (SniRoutingClusterNetworkAddressConfigProvider)
The SNI routing provider opens a single port for all virtual clusters, or one port for each virtual cluster. For the Kafka cluster, you can open one port for the whole cluster or one port for each broker. The provider uses SNI (Server Name Indication) information to determine where to route the traffic.

Example broker address provider configuration

clusterNetworkAddressConfigProvider:
  type: PortPerBrokerClusterNetworkAddressConfigProvider
  config:
    bootstrapAddress: mycluster.kafka.com:9192 1
    brokerAddressPattern: mybroker-$(nodeId).mycluster.kafka.com 2
    brokerStartPort: 9193 3
    numberOfBrokerPorts: 3 4
    bindAddress: 192.168.0.1 5

1
The hostname and port of the bootstrap address used by Kafka clients.
2
(Optional) The broker address pattern used to form broker addresses. If not defined, it defaults to the hostname part of the bootstrap address and the port number allocated to the broker. The $(nodeId) token is replaced by the broker’s node.id (or broker.id if node.id is not set).
3
(Optional) The starting number for broker port range. Defaults to the port of the bootstrap address plus 1.
4
(Optional) The maximum number of broker ports that are permitted. Defaults to 3.
5
(Optional) The bind address used when binding the ports. If undefined, all network interfaces are bound.

The example broker address configuration creates the following broker addresses:

mybroker-0.mycluster.kafka.com:9193
mybroker-1.mycluster.kafka.com:9194
mybroker-2.mycluster.kafka.com:9195
Note

For a configuration with multiple physical clusters, ensure that the numberOfBrokerPorts is set to (number of brokers * number of listeners per broker) + number of bootstrap listeners across all clusters. For instance, if there are two physical clusters with 3 nodes each, and each broker has one listener, the configuration requires a value of 8 (comprising 3 ports for broker listeners + 1 port for the bootstrap listener in each cluster).
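
For illustration, the following sketch (using hypothetical bootstrap addresses and port numbers) shows how two such virtual clusters could be assigned non-overlapping port ranges so that the 8 required ports do not clash:

Example non-overlapping port ranges for two virtual clusters (sketch)

virtualClusters:
  cluster-a-proxy: # hypothetical virtual cluster name
    # targetCluster configuration omitted
    clusterNetworkAddressConfigProvider:
      type: PortPerBrokerClusterNetworkAddressConfigProvider
      config:
        bootstrapAddress: cluster-a.kafka.com:9192 # 1 bootstrap port
        brokerStartPort: 9193 # broker ports 9193-9195
        numberOfBrokerPorts: 3
  cluster-b-proxy: # hypothetical virtual cluster name
    # targetCluster configuration omitted
    clusterNetworkAddressConfigProvider:
      type: PortPerBrokerClusterNetworkAddressConfigProvider
      config:
        bootstrapAddress: cluster-b.kafka.com:9196 # 1 bootstrap port
        brokerStartPort: 9197 # broker ports 9197-9199
        numberOfBrokerPorts: 3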

Example SNI routing address provider configuration

clusterNetworkAddressConfigProvider:
  type: SniRoutingClusterNetworkAddressConfigProvider
  config:
    bootstrapAddress: mycluster.kafka.com:9192 1
    brokerAddressPattern: mybroker-$(nodeId).mycluster.kafka.com
    bindAddress: 192.168.0.1

1
A single address for all traffic, including bootstrap address and brokers.

In the SNI routing address configuration, the brokerAddressPattern specification is mandatory, as it is required to generate routes for each broker.
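
For example, with the SNI routing configuration shown above, all brokers are reachable on the shared bootstrap port and are distinguished only by the SNI hostname generated from the broker address pattern:

mybroker-0.mycluster.kafka.com:9192
mybroker-1.mycluster.kafka.com:9192
mybroker-2.mycluster.kafka.com:9192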
