Chapter 3. Deploying Streams for Apache Kafka Proxy with the Record Encryption filter


Streams for Apache Kafka Proxy is designed to integrate seamlessly with Kafka clusters managed by Streams for Apache Kafka. It is also compatible with any Kafka instance, regardless of its distribution or protocol version. Whether you are deploying to a public or private cloud, or setting up a local development environment, the instructions in this guide apply in all cases.

In this procedure, the Streams for Apache Kafka Proxy is deployed with the Record Encryption filter for use with a Kafka instance managed by Streams for Apache Kafka on OpenShift.

Streams for Apache Kafka provides example installation artifacts with the necessary configuration for the Streams for Apache Kafka Proxy to connect to the Kafka cluster in the examples/proxy/record-encryption folder.

Using the example configuration files, deploy and expose the proxy with the following types of listener:

  • cluster-ip type listener using per-broker ClusterIP services to expose the proxy within the OpenShift cluster
  • loadbalancer type listener using per-broker loadbalancer services to expose the proxy outside the OpenShift cluster
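To make the distinction concrete, the two listener types differ mainly in the Service type used to expose the proxy. The following is only a minimal sketch; the Service name, labels, and port are assumptions, not the content of the provided proxy-service.yaml:

```yaml
# Sketch of a proxy Service; the name, labels, and port are assumed,
# not copied from the provided proxy-service.yaml.
apiVersion: v1
kind: Service
metadata:
  name: proxy-service
spec:
  type: ClusterIP        # use LoadBalancer instead to expose the proxy externally
  selector:
    app: proxy
  ports:
    - name: clients
      port: 9092
      targetPort: 9092
```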

For each option, the following files are provided:

  • kustomization.yaml specifies the Kubernetes customization for deploying the proxy
  • proxy-config.yaml specifies the ConfigMap resource configuration for the proxy
  • proxy-service.yaml specifies the Service resource configuration for the proxy service

The ConfigMap resource provides the configuration for setting up virtual clusters and filters. Virtual clusters represent the Kafka clusters you wish to use with the proxy.
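For orientation, a virtual cluster entry in the proxy configuration might resemble the following sketch. The property names follow the upstream proxy configuration format, but the addresses and the exact content of the example files are assumptions:

```yaml
# Sketch of a virtual cluster definition; the addresses are assumed.
virtualClusters:
  my-cluster-proxy:
    targetCluster:
      bootstrap_servers: my-cluster-kafka-bootstrap.kafka.svc.cluster.local:9092
    clusterNetworkAddressConfigProvider:
      type: PortPerBrokerClusterNetworkAddressConfigProvider
      config:
        bootstrapAddress: proxy-service:9092
        brokerAddressPattern: proxy-service
```

The brokerAddressPattern property shown here is the same one updated later in this procedure when exposing the proxy through a loadbalancer.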

Prerequisites

  • An OpenShift cluster running a supported version.
  • A Kafka cluster managed by Streams for Apache Kafka running on the OpenShift cluster.
  • Kafka binaries installed locally to verify a proxy setup for external access through a loadbalancer. The Kafka binaries are included with the installation artifacts for Streams for Apache Kafka on RHEL from the Streams for Apache Kafka software downloads page.
  • A config map that includes the configuration for creating virtual clusters and filters.
  • The oc command-line tool is installed and configured to connect to the OpenShift cluster with admin access.
  • The helm command line tool is installed and configured to connect to the OpenShift cluster with admin access.
  • HashiCorp Vault is set up for the proxy and is accessible from the Streams for Apache Kafka Proxy.

    Make sure the Vault instance is set up for the Record Encryption filter.

For information on the oc and helm command-line options used in this procedure, run the commands with the --help option.

In addition to the proxy installation files, Streams for Apache Kafka Proxy provides preconfigured files to install a Kafka cluster. These installation files offer the quickest way to set up and try the proxy, though you can use your own deployments of a Kafka cluster managed by Streams for Apache Kafka and Vault.

In this procedure, the proxy connects to a Kafka cluster named my-cluster that is deployed to the kafka namespace. To deploy the proxy to the same namespace as the cluster managed by Streams for Apache Kafka, change the namespace setting in the kustomization.yaml file. By default, the proxy is deployed to the proxy namespace. If you keep this setting, the Streams for Apache Kafka Operator must be installed cluster-wide.
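For example, to co-locate the proxy with the Kafka cluster, the namespace entry in kustomization.yaml would be edited as in this fragment (other entries in the file are omitted):

```yaml
# kustomization.yaml fragment: deploy the proxy into the kafka namespace
# instead of the default proxy namespace.
namespace: kafka
```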

Procedure

  1. Download and extract the Streams for Apache Kafka Proxy installation artifacts.

    The artifacts are included with installation and example files available from the Streams for Apache Kafka software downloads page.

    The files contain the deployment configuration required for connecting through a cluster-ip or loadbalancer type listener.

  2. Create a topic in the Kafka cluster:

    oc run -n <my_proxy_namespace> -ti proxy-producer \
      --image=registry.redhat.io/amq-streams/kafka-37-rhel9:2.7.0 \
      --rm=true \
      --restart=Never \
      -- bin/kafka-topics.sh \
      --bootstrap-server proxy-service:9092 \
      --create --topic my-topic

    This example creates a topic named my-topic through an interactive pod container.

  3. Create a key for my-topic in Vault:

    vault write -f transit/keys/KEK_my-topic
  4. Edit the ConfigMap that provides the filter configuration for the proxy.

    The Record Encryption filter config requires credentials for the HashiCorp Vault KMS.

    Example Record Encryption filter configuration

    filters:
      - type: RecordEncryption
        config:
          kms: VaultKmsService # (1)
          kmsConfig:
            vaultTransitEngineUrl: http://vault.vault.svc.cluster.local:8200/v1/transit # (2)
            vaultToken:
              passwordFile: /opt/proxy/encryption/token.txt # (3)
          selector: TemplateKekSelector # (4)
          selectorConfig:
            template: "KEK_${topicName}" # (5)
          # ...

    (1) The type of KMS (key management system) used, in this case HashiCorp Vault.
    (2) The URL of the Vault Transit Engine service.
    (3) The file containing the token required to access the Vault service. If this location changes, equivalent changes are required in the proxy deployment configuration.
    (4) The Key Encryption Key (KEK) selector to use.
    (5) The template for deriving the KEK name from a topic name. ${topicName} is a placeholder that the proxy resolves to the name of the topic.
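    The name derivation performed by the KEK template can be illustrated with plain shell substitution. This is only a local sketch of how the template maps a topic name to a KEK name, not how the proxy implements it:

    ```shell
    # Substitute the ${topicName} placeholder in the KEK template,
    # mimicking the derivation the template describes for each topic.
    template='KEK_${topicName}'
    topic='my-topic'
    kek_name=$(printf '%s' "$template" | sed "s/\${topicName}/$topic/")
    echo "$kek_name"   # KEK_my-topic
    ```

    The key created in Vault in step 3 (KEK_my-topic) matches this derivation, which is why encryption applies to records on my-topic.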
  5. Deploy Streams for Apache Kafka Proxy with the Record Encryption filter and the appropriate listener to your OpenShift cluster:

    Deploying the proxy with a cluster-ip listener

    cd /examples/proxy/record-encryption/
    oc apply -k cluster-ip

    Deploying the proxy with a loadbalancer listener

    cd /examples/proxy/record-encryption/
    oc apply -k loadbalancer

  6. If you are using a loadbalancer listener, update the proxy configuration to use the address of the loadbalancer service that was created.

    1. Get the external address of the proxy service:

      LOAD_BALANCER_ADDRESS=$(oc get service -n <my_proxy_namespace> proxy-service --template='{{(index .status.loadBalancer.ingress 0).hostname}}')
    2. Update the brokerAddressPattern property in the proxy service configuration to use the broker address:

      sed -i "s/\(brokerAddressPattern:\).*$/\1 ${LOAD_BALANCER_ADDRESS}/" loadbalancer/proxy/proxy-config.yaml
    3. Apply the change to the proxy configuration and restart the proxy pod.

       oc apply -k loadbalancer && oc delete pod -n <my_proxy_namespace> --all
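    The sed substitution used above can be rehearsed on a scratch file before touching the real configuration. The file content here is a stand-in, not the actual proxy-config.yaml:

    ```shell
    # Create a scratch stand-in for the proxy config (content is illustrative).
    cat > /tmp/proxy-config-demo.yaml <<'EOF'
            brokerAddressPattern: placeholder.example.com
    EOF

    # Apply the same substitution as the procedure, with a sample address.
    LOAD_BALANCER_ADDRESS=my-lb.example.com
    sed -i "s/\(brokerAddressPattern:\).*$/\1 ${LOAD_BALANCER_ADDRESS}/" /tmp/proxy-config-demo.yaml

    # The captured group keeps the key and replaces everything after it.
    grep brokerAddressPattern /tmp/proxy-config-demo.yaml
    ```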
  7. Check the status of the deployment:

    oc get pods -n <my_proxy_namespace>

    Output shows the deployment name and readiness

    NAME                      READY  STATUS   RESTARTS
    my-cluster-kafka-0        1/1    Running  0
    my-cluster-kafka-1        1/1    Running  0
    my-cluster-kafka-2        1/1    Running  0
    my-cluster-proxy-<pod_id> 1/1    Running  0

    my-cluster-proxy is the name of the proxy.

    A pod ID identifies the pod created.

    With the default deployment, you install a single proxy pod.

    READY shows the number of replicas that are ready/expected. The deployment is successful when the STATUS displays as Running.

  8. Verify that the encryption has been applied to the specified topics by producing messages through the proxy and then consuming directly and indirectly from the Kafka cluster.

Verify the proxy is working when using the cluster-ip type listener by running interactive pod containers for Kafka producers and consumers within the OpenShift cluster.

  1. Produce messages from the proxy:

    Producing messages through the proxy

    oc run -n <my_proxy_namespace> -ti proxy-producer \
      --image=registry.redhat.io/amq-streams/kafka-37-rhel9:2.7.0 \
      --rm=true \
      --restart=Never \
      -- bin/kafka-console-producer.sh \
      --bootstrap-server proxy-service:9092 \
      --topic my-topic

  2. Consume messages directly from the Kafka cluster to show they are encrypted:

    Consuming messages directly from the Kafka cluster

    oc run -n my-cluster -ti cluster-consumer \
      --image=registry.redhat.io/amq-streams/kafka-37-rhel9:2.7.0 \
      --rm=true \
      --restart=Never \
      -- ./bin/kafka-console-consumer.sh \
      --bootstrap-server my-cluster-kafka-bootstrap:9092 \
      --topic my-topic \
      --from-beginning \
      --timeout-ms 10000

  3. Consume messages from the proxy to show they are decrypted automatically:

    Consuming messages through the proxy

    oc run -n <my_proxy_namespace> -ti proxy-consumer \
      --image=registry.redhat.io/amq-streams/kafka-37-rhel9:2.7.0 \
      --rm=true \
      --restart=Never \
      -- ./bin/kafka-console-consumer.sh \
      --bootstrap-server proxy-service:9092 \
      --topic my-topic --from-beginning --timeout-ms 10000

Verify the proxy is working when using the loadbalancer type listener by running a Kafka producer and consumer through the proxy locally.

  1. Produce messages from the proxy using the loadbalancer address:

    Producing messages through the proxy

    kafka-console-producer \
    --bootstrap-server <load_balancer_address>:9092 \
    --topic my-topic

  2. Consume messages directly from the Kafka cluster using an interactive pod container to show they are encrypted:

    Consuming messages directly from the Kafka cluster

     oc run -n my-cluster -ti cluster-consumer \
       --image=registry.redhat.io/amq-streams/kafka-37-rhel9:2.7.0 \
       --rm=true \
       --restart=Never \
       -- ./bin/kafka-console-consumer.sh \
       --bootstrap-server my-cluster-kafka-bootstrap:9092 \
       --topic my-topic \
       --from-beginning \
       --timeout-ms 10000

  3. Consume messages from the proxy to show they are decrypted automatically:

    Consuming messages through the proxy

     kafka-console-consumer \
     --bootstrap-server <load_balancer_address>:9092 \
     --topic my-topic \
     --from-beginning \
     --timeout-ms 10000
