Deploying and Managing Streams for Apache Kafka Proxy on OpenShift


Red Hat Streams for Apache Kafka 3.1

Simplify proxy deployment and management on OpenShift

Abstract

Streams for Apache Kafka Proxy is a protocol-aware proxy that extends Kafka-based systems with flexible filtering and enhanced security. This guide describes how to deploy the proxy using its operator, configure proxy behavior, and perform key operational tasks, including setup, security integration, and monitoring.

Providing feedback on Red Hat documentation

We appreciate your feedback on our documentation.

To propose improvements, open a Jira issue and describe your suggested changes. Provide as much detail as possible to enable us to address your request quickly.

Prerequisite

  • You have a Red Hat Customer Portal account. This account enables you to log in to the Red Hat Jira Software instance. If you do not have an account, you will be prompted to create one.

Procedure

  1. Click Create issue.
  2. In the Summary text box, enter a brief description of the issue.
  3. In the Description text box, provide the following information:

    • The URL of the page where you found the issue.
    • A detailed description of the issue.
      You can leave the information in any other fields at their default values.
  4. Add a reporter name.
  5. Click Create to submit the Jira issue to the documentation team.

Thank you for taking the time to provide feedback.

About this guide

This guide covers using the Streams for Apache Kafka Proxy Operator to configure, deploy, secure, and operate the Streams for Apache Kafka Proxy on OpenShift. Refer to other Streams for Apache Kafka Proxy guides for information on running the proxy outside OpenShift or for advanced topics such as plugin development.

Streams for Apache Kafka Proxy is an Apache Kafka protocol-aware ("Layer 7") proxy designed to enhance Kafka-based systems.

The Streams for Apache Kafka Proxy Operator is an operator for OpenShift which simplifies deploying and operating the Streams for Apache Kafka Proxy.

Chapter 2. API concepts

The Streams for Apache Kafka Proxy Operator uses a declarative API based on OpenShift custom resources to manage proxy deployments.

The operator takes custom resources and core OpenShift resources as inputs:

KafkaProxy
Defines an instance of the proxy.
VirtualKafkaCluster
Represents a logical Kafka cluster that will be exposed to Kafka clients.
KafkaProxyIngress
Configures how a virtual cluster is exposed on the network to Kafka clients.
KafkaService
Specifies a backend Kafka cluster for a virtual cluster.
KafkaProtocolFilter
Specifies filter mechanisms for use with a virtual cluster.
Secret
KafkaService and KafkaProtocolFilter resources may reference a Secret to provide security-sensitive data such as TLS certificates or passwords.
ConfigMap
KafkaService and KafkaProtocolFilter resources may reference a ConfigMap to provide non-sensitive configuration such as trusted CA certificates.

Figure 2.1. Example input resources and the references between them

Based on the input resources, the operator generates the core OpenShift resources needed to deploy the Streams for Apache Kafka Proxy, such as the following:

ConfigMap
Provides the proxy configuration file mounted into the proxy container.
Deployment
Manages the proxy Pod and container.
Service
Exposes the proxy over the network to other workloads in the same OpenShift cluster.

The API is decomposed into multiple custom resources in a similar way to the OpenShift Gateway API, and for similar reasons. You can use Kubernetes Role-Based Access Control (RBAC) to divide responsibility for different aspects of the overall proxy functionality among different roles (people) in your organization.

For example, you might grant networking engineers the ability to configure KafkaProxy and KafkaProxyIngress, while giving application developers the ability to configure VirtualKafkaCluster, KafkaService, and KafkaProtocolFilter resources.
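Such a split might be sketched with standard RBAC resources. In the following illustrative Role, the role name, namespace, and lowercase plural resource names are assumptions, not values defined by the product:

```yaml
# Illustrative Role for application developers (names are assumptions).
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: proxy-app-developer
  namespace: my-proxy
rules:
  - apiGroups: ["kroxylicious.io"]   # API group used by the proxy custom resources
    resources: ["virtualkafkaclusters", "kafkaservices", "kafkaprotocolfilters"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
```

A corresponding RoleBinding then grants this Role to the application developer group, while a separate Role covering KafkaProxy and KafkaProxyIngress is bound to the networking team.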

Figure 2.2. Generated OpenShift resources and the relationships between them

2.2. Custom resource API compatibility

Streams for Apache Kafka Proxy custom resource definitions are packaged and deployed alongside the operator. Currently, there’s only a single version of the custom resource APIs: v1alpha1.

Future updates to the operator may introduce new versions of the custom resource APIs. At that time, the operator will remain backward compatible with older versions of those APIs, and an upgrade procedure will be provided to migrate existing custom resources to the new API version.

Chapter 3. Installing the proxy operator

This section provides instructions for installing the Streams for Apache Kafka Proxy Operator. Install the proxy operator using one of the following methods:

  • By applying the proxy installation files
  • From the OperatorHub in the OpenShift web console (OpenShift clusters only)

Installation options and procedures are demonstrated using the example files included with Streams for Apache Kafka Proxy.

3.1. Install prerequisites

To install Streams for Apache Kafka Proxy, you will need the following:

  • An OpenShift 4.18 or later cluster.
  • The oc command-line tool is installed and configured to connect to the running cluster.

To deploy Streams for Apache Kafka Proxy using YAML files, download the latest release archive (streams-3.1-ocp-install-examples.zip - Streams for Apache Kafka 3.1 Installation and Example files) from the Streams for Apache Kafka software downloads page.

The release archive contains sample YAML files for deploying Streams for Apache Kafka components to OpenShift using oc.

The proxy artifacts include installation and example files:

Installation files
The install directory contains the YAML manifests to install the proxy operator.
Example files
The examples directory contains example custom resources, which can be used to deploy a proxy once the operator has been installed.

3.2. Installing the operator using installation files

This procedure shows how to install the Streams for Apache Kafka Proxy Operator in your OpenShift cluster.

Prerequisites

  • You need an account with permission to create and manage CustomResourceDefinition and RBAC (ClusterRole) resources.
  • You have downloaded the release artifacts and extracted the contents into the current directory.

Procedure

  1. Edit the Streams for Apache Kafka Proxy installation files to use the namespace the operator is going to be installed into.

    For example, in this procedure the operator is installed into the namespace my-kroxylicious-operator-namespace.

    On Linux, use:

    sed -i 's/namespace: .*/namespace: my-kroxylicious-operator-namespace/' install/*.yaml

    On macOS, use:

    sed -i '' 's/namespace: .*/namespace: my-kroxylicious-operator-namespace/' install/*.yaml
  2. Deploy the Streams for Apache Kafka Proxy operator:

    oc create -f install
  3. Check the status of the deployment:

    oc get deployments -n my-kroxylicious-operator-namespace

    Output shows the deployment name and readiness

    NAME                      READY  UP-TO-DATE  AVAILABLE
    kroxylicious-operator     1/1    1           1

    READY shows the number of replicas that are ready/expected. The deployment is successful when the AVAILABLE output shows 1.

3.4. Installing the operator from the OperatorHub

This procedure describes how to install and subscribe to the Streams for Apache Kafka Proxy Operator using the OperatorHub in the OpenShift Container Platform web console.

The procedure describes how to create a project and install the operator to that project. A project is a representation of a namespace. For manageability, it is a good practice to use namespaces to separate functions.

Warning

Make sure you use the appropriate update channel. If you are on a supported version of OpenShift, installing the operator from the default alpha channel is generally safe. However, we do not recommend enabling automatic updates on the alpha channel, because an automatic upgrade might skip required preparatory steps. Use automatic upgrades only on version-specific channels.

Prerequisites

Procedure

  1. Navigate in the OpenShift web console to the Home > Projects page and create a project (namespace) for the installation.

    We use a project named streams-kafka-proxy in this example.

  2. Navigate to the Operators > OperatorHub page.
  3. Scroll or type a keyword into the Filter by keyword box to find the Streams for Apache Kafka Proxy operator.

    The operator is located in the Other category.

  4. Click Streams for Apache Kafka Proxy to display the operator information.
  5. Read the information about the operator and click Install.
  6. On the Install Operator page, choose from the following installation and update options:

    • Update Channel: Choose the update channel for the operator.

      • The (default) alpha channel contains all the latest updates and releases, including major, minor, and micro releases, which are assumed to be well tested and stable.
      • An amq-streams-X.x channel contains the minor and micro release updates for a major release, where X is the major release version number.
      • An amq-streams-X.Y.x channel contains the micro release updates for a minor release, where X is the major release version number and Y is the minor release version number.
    • Installation Mode: Install the operator to all namespaces in the OpenShift cluster.

      A single instance of the operator will watch and manage proxies created throughout the OpenShift cluster.

    • Update approval: By default, the Streams for Apache Kafka Proxy Operator is automatically upgraded to the latest proxy version by the Operator Lifecycle Manager (OLM). Optionally, select Manual if you want to manually approve future upgrades. For more information on operators, see the OpenShift documentation.
  7. Click Install to install the operator to your selected namespace.
  8. After the operator is ready for use, navigate to Operators > Installed Operators to verify that the operator has installed to the selected namespace.

    The status will show as Succeeded.

  9. Use the proxy operator to deploy a proxy.

Chapter 4. Deploying a proxy

Deploy a basic proxy instance with a single virtual cluster exposed to Kafka clients on the same OpenShift cluster.

4.1. Prerequisites

  • The Streams for Apache Kafka Proxy Operator is installed in the OpenShift cluster.
  • A Kafka cluster is available to be proxied.
  • TLS certificate generation capability is available for ingress configurations that require TLS.
  • DNS management access is available for ingress configurations that require off-cluster access.

4.2. The required resources

A KafkaProxy resource represents an instance of the Streams for Apache Kafka Proxy. Conceptually, it is the top-level resource that links together KafkaProxyIngress, VirtualKafkaCluster, KafkaService, and KafkaProtocolFilter resources to form a complete working proxy.

KafkaProxy resources are referenced by KafkaProxyIngress and VirtualKafkaCluster resources to define how the proxy is exposed and what it proxies.

Example KafkaProxy configuration

kind: KafkaProxy
apiVersion: kroxylicious.io/v1alpha1
metadata:
  namespace: my-proxy
  name: simple
spec: {} # 1

1. An empty spec creates a proxy with default configuration.

4.2.2. Networking configuration

A KafkaProxyIngress resource defines the networking configuration that allows Kafka clients to connect to a VirtualKafkaCluster.

It is uniquely associated with a single KafkaProxy instance, but it is not uniquely associated with a VirtualKafkaCluster and can be used by multiple VirtualKafkaCluster instances.

The KafkaProxyIngress resource supports the following ingress types to configure networking access to the virtual cluster:

  • clusterIP exposes the virtual cluster to applications running inside the same OpenShift cluster as the proxy.
  • loadBalancer exposes the virtual cluster to applications running outside the OpenShift cluster.

The clusterIP ingress type supports both TCP (plain) and TLS connections. The loadBalancer type supports TLS only.

When using TLS, you specify a TLS server certificate in the ingress configuration of the VirtualKafkaCluster resource.

When using loadBalancer, changes to your DNS may be required.

The following table summarizes the supported ingress types.

Table 4.1. Supported ingress types

  Ingress type    Use case                   Supported transport   Requires DNS changes?
  clusterIP       On-cluster applications    TCP/TLS               No
  loadBalancer    Off-cluster applications   TLS only              Yes

Important

TLS is recommended when connecting applications in a production environment.

4.2.2.1. clusterIP ingress type

The clusterIP ingress type exposes virtual clusters to Kafka clients running in the same OpenShift cluster as the proxy. It supports both TCP (plain) and TLS connections.

The clusterIP ingress type uses OpenShift Service resources of type ClusterIP to enable on-cluster access.

Example KafkaProxyIngress configuration for clusterIP with TCP

kind: KafkaProxyIngress
apiVersion: kroxylicious.io/v1alpha1
metadata:
  namespace: my-proxy
  name: cluster-ip
spec:
  proxyRef: # 1
    name: simple
  clusterIP: # 2
    protocol: TCP # 3

1. Identifies the KafkaProxy resource that this ingress is part of.
2. Specifies clusterIP networking.
3. Defines the connection protocol as plain TCP.

Example KafkaProxyIngress configuration for clusterIP with TLS

kind: KafkaProxyIngress
apiVersion: kroxylicious.io/v1alpha1
metadata:
  namespace: my-proxy
  name: cluster-ip
spec:
  proxyRef:
    name: simple
  clusterIP:
    protocol: TLS # 1

1. Defines the connection protocol as TLS to enable encrypted communication between clients and the proxy.

When using TLS, specify a TLS server certificate in the ingress configuration of the VirtualKafkaCluster resource using a certificateRef.

4.2.2.2. loadBalancer ingress type

The loadBalancer ingress type allows applications running off-cluster to connect to the virtual cluster. TLS must be used with this ingress type.

The loadBalancer ingress type uses OpenShift Service resources of type LoadBalancer to enable off-cluster access.

When using a loadBalancer ingress, the proxy uses SNI (Server Name Indication) to match the client’s requested host name to the correct virtual cluster and broker within the proxy. This means that every virtual cluster and every broker within the virtual cluster must be uniquely identifiable within DNS. To accomplish this, the following configuration must be provided:

  • A unique bootstrapAddress. This is the address that the clients initially use to connect to the virtual cluster.
  • An advertisedBrokerAddressPattern that generates unique broker addresses which clients use to connect to individual brokers.

You decide how to formulate the bootstrapAddress and the advertisedBrokerAddressPattern to best fit the networking conventions of your organization.

The advertisedBrokerAddressPattern must contain the token $(nodeId). The proxy replaces this token with the broker’s node ID. This ensures that client connections are correctly routed to the intended broker.

Both bootstrapAddress and advertisedBrokerAddressPattern may contain the token $(virtualClusterName). If this is present, it is replaced by the virtual cluster’s name. This token is necessary when the KafkaProxyIngress is being shared by many virtual clusters.

One possible scheme is to use the virtual cluster’s name as a subdomain within your organisation’s domain name:

$(virtualClusterName).kafkaproxy.example.com

You can then use a further subdomain for each broker:

broker-$(nodeId).$(virtualClusterName).kafkaproxy.example.com

You can use other naming schemes, as long as each address remains unique.
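The token substitution described above can be sketched as follows. The helper function is illustrative only and is not part of the product; it simply mimics how the proxy replaces $(virtualClusterName) and $(nodeId) in the configured patterns:

```python
# Sketch of the address-token substitution performed by the proxy.
# expand_addresses is an illustrative helper, not a product API.

def expand_addresses(bootstrap, broker_pattern, virtual_cluster, node_ids):
    """Substitute $(virtualClusterName) and $(nodeId) tokens."""
    bootstrap_addr = bootstrap.replace("$(virtualClusterName)", virtual_cluster)
    broker_addrs = [
        broker_pattern
        .replace("$(virtualClusterName)", virtual_cluster)
        .replace("$(nodeId)", str(node_id))
        for node_id in node_ids
    ]
    return bootstrap_addr, broker_addrs

bootstrap, brokers = expand_addresses(
    "$(virtualClusterName).kafkaproxy.example.com",
    "broker-$(nodeId).$(virtualClusterName).kafkaproxy.example.com",
    "my-cluster",
    range(3),
)
print(bootstrap)   # my-cluster.kafkaproxy.example.com
for broker in brokers:
    print(broker)  # broker-0.my-cluster.kafkaproxy.example.com, and so on
```

Because every substituted name must be unique, each broker resolves to its own DNS name while all of them can still point at the same load balancer address.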

Example KafkaProxyIngress configuration for loadBalancer

kind: KafkaProxyIngress
apiVersion: kroxylicious.io/v1alpha1
metadata:
  namespace: my-proxy
  name: load-balancer
spec:
  proxyRef: # 1
    name: simple
  loadBalancer: # 2
    bootstrapAddress: "$(virtualClusterName).kafkaproxy.example.com" # 3
    advertisedBrokerAddressPattern: "broker-$(nodeId).$(virtualClusterName).kafkaproxy.example.com" # 4

1. Identifies the KafkaProxy resource that this ingress is part of.
2. Specifies loadBalancer networking.
3. The bootstrap address for clients to connect to the virtual cluster.
4. The advertised broker address pattern used by the proxy to generate the individual broker addresses presented to the client.

When using TLS, specify a TLS server certificate in the ingress configuration of the VirtualKafkaCluster resource using a certificateRef.

You must also configure DNS so that the bootstrap and broker address resolve from the network used by the applications.

4.2.3. Configuration for proxied Kafka clusters

A proxied Kafka cluster is configured in a KafkaService resource, which specifies how the proxy connects to the cluster. The Kafka cluster may or may not be running in the same OpenShift cluster as the proxy; network connectivity is all that is required.

This example shows a KafkaService defining how to connect to a Kafka cluster at kafka.example.com.

Example KafkaService configuration

kind: KafkaService
metadata:
  # ...
spec:
  bootstrapServers: kafka.example.com:9092 # 1
  nodeIdRanges: # 2
    - name: brokers # 3
      start: 0 # 4
      end: 5 # 5
  # ...

1. The bootstrapServers property is a comma-separated list of addresses in <host>:<port> format. Including multiple broker addresses helps clients connect when one is unavailable.
2. nodeIdRanges declares the IDs of all the broker nodes in the Kafka cluster.
3. name is optional, but specifying it can make errors easier to diagnose.
4. The start of the ID range, inclusive.
5. The end of the ID range, inclusive.

4.2.4. Virtual cluster configuration

A VirtualKafkaCluster resource defines a logical Kafka cluster that is accessible to clients over the network.

The virtual cluster references the following resources, which must be in the same namespace:

  • A KafkaProxy resource that the proxy is part of.
  • One or more KafkaProxyIngress resources that expose the virtual cluster to Kafka clients and provide virtual-cluster-specific configuration to the ingress (such as TLS certificates and other parameters).
  • A KafkaService resource that defines the backend Kafka cluster.
  • Zero or more KafkaProtocolFilter resources that apply filters to the Kafka protocol traffic passing between clients and the backend Kafka cluster.

This example shows a VirtualKafkaCluster exposed to Kafka clients running on the same OpenShift cluster, using plain TCP (rather than TLS) as the transport protocol.

Example VirtualKafkaCluster configuration with single clusterIP ingress

kind: VirtualKafkaCluster
apiVersion: kroxylicious.io/v1alpha1
metadata:
  name: my-cluster
  namespace: my-proxy
spec:
  proxyRef: # 1
    name: simple
  targetKafkaServiceRef: # 2
    name: my-cluster
  ingresses:
    - ingressRef: # 3
        name: cluster-ip

1. Identifies the KafkaProxy resource that this virtual cluster is part of.
2. The KafkaService that defines the Kafka cluster proxied by the virtual cluster.
3. Ingresses that expose the virtual cluster. Each ingress references a KafkaProxyIngress by name.

This example shows a VirtualKafkaCluster exposed to Kafka clients running both on-cluster and off-cluster, with TLS used for both ingresses. Because TLS is used, each ingress configuration must reference a TLS server certificate.

Example VirtualKafkaCluster configuration with two ingresses using TLS

kind: VirtualKafkaCluster
apiVersion: kroxylicious.io/v1alpha1
metadata:
  name: my-cluster
  namespace: my-proxy
spec:
  proxyRef:
    name: simple
  targetKafkaServiceRef:
    name: my-cluster
  ingresses:
    - ingressRef:
        name: cluster-ip
      tls:
        certificateRef:
          name: 'cluster-ip-server-cert' # 1
          kind: Secret
    - ingressRef:
        name: load-balancer
      tls:
        certificateRef:
          name: 'external-server-cert' # 2
          kind: Secret

1. Reference to a Secret containing the server certificate for the clusterIP ingress.
2. Reference to a Secret containing the server certificate for the loadBalancer ingress.

4.2.4.1. Providing TLS certificates for clusterIP ingresses

When using the clusterIP ingress type with the TLS protocol, you must provide suitable TLS certificates to secure communication.

The basic steps are as follows:

  • Generate a TLS server certificate that covers the service names assigned to the virtual cluster by the ingress.
  • Provide the certificate to the virtual cluster using an OpenShift Secret of type kubernetes.io/tls.

The exact procedure for generating the certificate depends on the tooling and processes used by your organization.

The certificate must meet the following criteria:

  • The certificate needs to be signed by a CA that is trusted by the on-cluster applications that connect to the virtual cluster.
  • The format of the key must be PKCS#8 encoded PEM (Privacy Enhanced Mail). It must not be password protected.
  • The certificate must use SANs (Subject Alternative Names) to list all service names or use a wildcard TLS certificate that covers them all. Assuming a virtual cluster name of my-cluster, an ingress name of cluster-ip, and a Kafka cluster using node IDs 0-2, the following SANs must be listed in the certificate:

    my-cluster-cluster-ip-bootstrap.<namespace>.svc.cluster.local
    my-cluster-cluster-ip-0.<namespace>.svc.cluster.local
    my-cluster-cluster-ip-1.<namespace>.svc.cluster.local
    my-cluster-cluster-ip-2.<namespace>.svc.cluster.local
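For development and testing, a self-signed certificate covering these SANs can be generated with OpenSSL. This is a sketch only, assuming the namespace my-proxy; production certificates should be issued by a CA that your clients trust:

```shell
# Sketch: self-signed server certificate for testing only.
# Assumes virtual cluster my-cluster, ingress cluster-ip, namespace my-proxy.
openssl req -x509 -newkey rsa:2048 -nodes -days 30 \
  -keyout tls.key -out tls.crt \
  -subj "/CN=my-cluster-cluster-ip-bootstrap.my-proxy.svc.cluster.local" \
  -addext "subjectAltName=DNS:my-cluster-cluster-ip-bootstrap.my-proxy.svc.cluster.local,DNS:my-cluster-cluster-ip-0.my-proxy.svc.cluster.local,DNS:my-cluster-cluster-ip-1.my-proxy.svc.cluster.local,DNS:my-cluster-cluster-ip-2.my-proxy.svc.cluster.local"

# Inspect the SANs before creating the Secret
openssl x509 -in tls.crt -noout -text | grep -A1 "Subject Alternative Name"
```

The -nodes option produces an unencrypted PKCS#8 PEM key, matching the key requirements above.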

Create a secret for the certificate using the following command:

oc create secret tls <secret-name> --namespace <namespace> --cert=<path/to/cert/file> --key=<path/to/key/file>

<secret-name> is the name of the secret to be created, <namespace> is the name of the namespace where the proxy is to be deployed, and <path/to/cert/file> and <path/to/key/file> are the paths to the certificate and key files.

4.2.4.2. Providing TLS certificates for loadBalancer ingresses

When using the loadBalancer ingress type, you must provide suitable TLS certificates to secure communication.

The basic steps are as follows:

  • Generate a TLS server certificate that covers the bootstrap and broker names assigned to the virtual cluster by the ingress.
  • Provide the certificate to the virtual cluster using an OpenShift Secret of type kubernetes.io/tls.

The exact procedure for generating the certificate depends on the tooling and processes used by your organization.

The certificate must meet the following criteria:

  • The certificate needs to be signed by a CA that is trusted by the off-cluster applications that connect to the virtual cluster.
  • The format of the key must be PKCS#8 encoded PEM (Privacy Enhanced Mail). It must not be password protected.
  • The certificate must use SANs (Subject Alternative Names) to list the bootstrap and all the broker names or use a wildcard TLS certificate that covers them all. Assuming a bootstrapAddress of $(virtualClusterName).kafkaproxy.example.com, an advertisedBrokerAddressPattern of broker-$(nodeId).$(virtualClusterName).kafkaproxy.example.com, a Kafka cluster using node IDs 0-2, and a virtual cluster name of my-cluster, the following SANs must be listed in the certificate:

    my-cluster.kafkaproxy.example.com
    broker-0.my-cluster.kafkaproxy.example.com
    broker-1.my-cluster.kafkaproxy.example.com
    broker-2.my-cluster.kafkaproxy.example.com

Create a secret for the certificate using the following command:

oc create secret tls <secret-name> --namespace <namespace> --cert=<path/to/cert/file> --key=<path/to/key/file>

<secret-name> is the name of the secret to be created, <namespace> is the name of the namespace where the proxy is to be deployed, and <path/to/cert/file> and <path/to/key/file> are the paths to the certificate and key files.

4.2.4.3. Configuring DNS for load balancer ingress

When using the loadBalancer ingress type, you must ensure that both the bootstrapAddress and the names generated from advertisedBrokerAddressPattern resolve to the external address of the OpenShift Service underlying the load balancer on the network where the off-cluster applications run.

Prerequisites

  • The Streams for Apache Kafka Proxy Operator is installed.
  • KafkaProxy, VirtualKafkaCluster, and KafkaProxyIngress resources are deployed.
  • The VirtualKafkaCluster and KafkaProxyIngress resources are configured to use a loadBalancer ingress.
  • DNS can be configured on the network where the off-cluster applications run.
  • Network traffic can flow from the application network to the external addresses provided by the OpenShift cluster.

Procedure

  1. Run the following command to discover the external address being used by the load balancer:

    oc get virtualkafkacluster -n <namespace> <virtual-cluster-name> -o=jsonpath='{.status.ingresses[?(@.name == "<ingress-name>")].loadBalancerIngressPoints}'

    Replace <namespace> with the name of the OpenShift namespace where the resources are deployed, <ingress-name> with the name of the KafkaProxyIngress resource, and <virtual-cluster-name> with the name of the VirtualKafkaCluster resource.

    Depending on your OpenShift environment, the command returns an object containing an IP address or a hostname. This is the external address of the load balancer.

  2. Configure your DNS so that the bootstrap and broker names resolve to the external address.

    Assuming a bootstrapAddress of $(virtualClusterName).kafkaproxy.example.com, an advertisedBrokerAddressPattern of broker-$(nodeId).$(virtualClusterName).kafkaproxy.example.com, a Kafka cluster using node IDs 0-2, and a virtual cluster name of my-cluster, the following DNS mappings are required:

    my-cluster.kafkaproxy.example.com => <external address>
    broker-0.my-cluster.kafkaproxy.example.com => <external address>
    broker-1.my-cluster.kafkaproxy.example.com => <external address>
    broker-2.my-cluster.kafkaproxy.example.com => <external address>

    The exact steps vary by environment and network setup.

  3. Confirm that the names resolve from the application network:

    nslookup my-cluster.kafkaproxy.example.com
    nslookup broker-0.my-cluster.kafkaproxy.example.com
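If the external address is an IP address, a quick way to test from a single workstation is to add temporary /etc/hosts entries instead of changing DNS. This is for testing only, and 203.0.113.10 is a placeholder address:

```
203.0.113.10 my-cluster.kafkaproxy.example.com
203.0.113.10 broker-0.my-cluster.kafkaproxy.example.com
203.0.113.10 broker-1.my-cluster.kafkaproxy.example.com
203.0.113.10 broker-2.my-cluster.kafkaproxy.example.com
```

If the load balancer exposes a hostname rather than an IP address, use DNS CNAME records instead, because /etc/hosts cannot map one name to another name.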

4.3. Filters

A KafkaProtocolFilter resource represents a Streams for Apache Kafka Proxy filter. It is not uniquely associated with a VirtualKafkaCluster or KafkaProxy instance; it can be used in a number of VirtualKafkaCluster instances in the same namespace.

A KafkaProtocolFilter is similar to one of the items in a proxy configuration’s filterDefinitions:

  • The resource’s metadata.name corresponds directly to the name of a filterDefinitions item.
  • The resource’s spec.type corresponds directly to the type of a filterDefinitions item.
  • The resource’s spec.configTemplate corresponds to the config of a filterDefinitions item, but is subject to interpolation by the operator.
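Putting this together, a KafkaProtocolFilter resource has the following shape. The filter type and its configuration shown here are hypothetical placeholders, not a real filter shipped with the proxy:

```yaml
kind: KafkaProtocolFilter
apiVersion: kroxylicious.io/v1alpha1
metadata:
  name: my-filter            # corresponds to the filterDefinitions item name
  namespace: my-proxy
spec:
  type: ExampleFilter        # hypothetical filter type
  configTemplate:            # corresponds to the item's config, after interpolation
    exampleProperty: example-value
```

A VirtualKafkaCluster in the same namespace can then reference this filter by name.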

Chapter 5. Operating a proxy

Monitor the operational status of the proxy and configure resource usage. This section explains how to check the status of the KafkaProxyIngress and VirtualKafkaCluster resources, and how to set CPU and memory requests and limits for the proxy container.

This section assumes you have a running Streams for Apache Kafka Proxy instance.

5.1. Checking VirtualKafkaCluster status

The status of a VirtualKafkaCluster resource provides feedback on its configuration through a set of conditions. These include the ResolvedRefs condition, which indicates whether all referenced resources exist, and the Accepted condition, which indicates whether the cluster’s configuration was successfully applied to the proxy.

5.1.1. ResolvedRefs conditions

When you create a VirtualKafkaCluster, the operator checks whether the following exist:

  • A KafkaProxy matching spec.proxyRef.
  • Each KafkaProxyIngress specified in spec.ingresses, and whether they refer to the same KafkaProxy as the virtual cluster.
  • A Secret referred to in the tls property.

The result is reported in status.conditions with a ResolvedRefs condition accordingly.

Example VirtualKafkaCluster status when all referenced resources exist

kind: VirtualKafkaCluster
apiVersion: kroxylicious.io/v1alpha1
metadata:
  # ...
  generation: 12
spec:
  # ...
status:
  observedGeneration: 12 # 1
  conditions:
    - type: ResolvedRefs # 2
      status: True # 3
      observedGeneration: 12

1. The observedGeneration in the status matches the metadata.generation, indicating that the status is up-to-date for the latest spec.
2. The ResolvedRefs condition type reports any issues with referenced resources.
3. A status value of True means that all referenced resources exist.

A status value of False means that one or more of the referenced resources is missing. In this case, the condition includes reason and message properties with more details.

5.1.2. Accepted conditions

When a VirtualKafkaCluster has a valid spec, the operator attempts to configure the proxy instance accordingly. This might not be possible. For example, the spec may be valid but incompatible with other virtual clusters running in the same proxy instance.

The operator sets a condition type of Accepted in status.conditions to indicate whether or not a virtual cluster has been successfully configured within a proxy instance.

5.2. Checking KafkaProxyIngress status

The status of a KafkaProxyIngress resource provides feedback on its configuration through a set of conditions. These include the ResolvedRefs condition, which indicates whether all referenced resources exist.

When you create a KafkaProxyIngress, the operator checks whether a KafkaProxy corresponding to the spec.proxyRef exists. The result is reported in status.conditions with a ResolvedRefs condition accordingly.

Example KafkaProxyIngress status when spec.proxyRef exists

kind: KafkaProxyIngress
apiVersion: kroxylicious.io/v1alpha1
metadata:
  name: cluster-ip
  namespace: my-proxy
  generation: 12
spec:
  # ...
status:
  observedGeneration: 12 # 1
  conditions:
    - type: ResolvedRefs # 2
      status: True # 3
      observedGeneration: 12

1. The observedGeneration in the status matches the metadata.generation, indicating that the status is up-to-date for the latest spec.
2. The ResolvedRefs condition type reports any issues with referenced resources.
3. A status value of True means that all referenced resources exist.

A status value of False means that the KafkaProxy resource is missing. In this case, the condition includes reason and message properties with more details.

5.3. Configuring proxy container resources

When you define a KafkaProxy resource, a number of OpenShift Pods are created, each with a proxy container. Each of these containers runs a single Streams for Apache Kafka Proxy process.

By default, these proxy containers are defined without resource limits. To manage CPU and memory consumption in your environment, modify the proxyContainer section within your KafkaProxy specification.

Example KafkaProxy configuration with proxy container resource specification

kind: KafkaProxy
apiVersion: kroxylicious.io/v1alpha1
metadata:
  namespace: my-proxy
  name: simple
spec:
  infrastructure:
    proxyContainer:
      resources:
        requests:
          cpu: '400m'
          memory: '656Mi'
        limits:
          cpu: '500m'
          memory: '756Mi'

Chapter 6. Securing a proxy

Secure proxies by using TLS and storing sensitive values in external resources.

6.1. Prerequisites

  • A running Streams for Apache Kafka Proxy instance

6.2. Securing the client-to-proxy connection

Secure client-to-proxy communications using TLS.

This example shows a VirtualKafkaCluster exposed to Kafka clients running on the same OpenShift cluster. It uses TLS as the transport protocol so that communication between Kafka clients and the proxy is encrypted.

Example VirtualKafkaCluster configuration

kind: VirtualKafkaCluster
apiVersion: kroxylicious.io/v1alpha1
metadata:
  name: my-cluster
  namespace: my-proxy
spec:
  proxyRef:  # (1)
    name: simple
  targetKafkaServiceRef:  # (2)
    name: my-cluster
  ingresses:
    - ingressRef:  # (3)
        name: cluster-ip
      tls:  # (4)
        certificateRef:
          name: server-certificate
          kind: Secret

(1) Identifies the KafkaProxy resource that this virtual cluster is part of. It must be in the same namespace as the VirtualKafkaCluster.
(2) The virtual cluster names the KafkaService to be proxied. It must be in the same namespace as the VirtualKafkaCluster.
(3) The virtual cluster can be exposed by one or more ingresses. Each ingress must reference a KafkaProxyIngress in the same namespace as the VirtualKafkaCluster.
(4) If the ingress supports TLS, the tls property configures the TLS server certificate to use.

Within a VirtualKafkaCluster, an ingress’s tls property configures TLS for that ingress. The tls.certificateRef specifies the Secret resource holding the TLS server certificate that the proxy uses for clients connecting through this ingress. The referenced KafkaProxyIngress also needs to be configured for TLS.
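For example, assuming you already have a server certificate and key in PEM form, the referenced Secret might be a standard kubernetes.io/tls Secret such as the following sketch (the data values are placeholders):

```yaml
kind: Secret
apiVersion: v1
metadata:
  name: server-certificate
  namespace: my-proxy
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded server certificate chain>
  tls.key: <base64-encoded private key>
```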

Example KafkaProxyIngress configuration for TLS

kind: KafkaProxyIngress
apiVersion: kroxylicious.io/v1alpha1
metadata:
  name: cluster-ip
  namespace: my-proxy
spec:
  proxyRef:  # (1)
    name: simple
  clusterIP:  # (2)
    protocol: TLS  # (3)

(1) The ingress must reference a KafkaProxy in the same namespace as the KafkaProxyIngress.
(2) Exposes the proxy to Kafka clients inside the same OpenShift cluster using a ClusterIP service.
(3) The ingress uses TLS as the transport protocol.

You can configure a virtual cluster ingress to request or require Kafka clients to authenticate to the proxy using TLS. This configuration is known as mutual TLS (mTLS), because both the client and the proxy authenticate each other using TLS.

Example VirtualKafkaCluster configuration requiring clients to present a trusted certificate

kind: VirtualKafkaCluster
metadata:
  # ...
spec:
  # ...
  ingresses:
    - ingressRef:
        name: cluster-ip
      tls:
        certificateRef:
          # ...
        trustAnchorRef:  # (1)
          kind: ConfigMap  # (2)
          name: trusted-cas  # (3)
          key: trusted-cas.pem  # (4)
        tlsClientAuthentication: REQUIRED  # (5)

(1) References a separate OpenShift resource containing the trusted CA certificates.
(2) The kind is optional and defaults to ConfigMap.
(3) Name of the resource of the given kind, which must exist in the same namespace as the VirtualKafkaCluster.
(4) Key identifying the entry in the given resource. The corresponding value must be a set of CA certificates. Supported formats for the bundle are: PEM, PKCS#12, and JKS.
(5) Specifies whether client authentication is required (REQUIRED), requested (REQUESTED), or disabled (NONE). If a trustAnchorRef is specified, the default is REQUIRED.
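As a sketch, a referenced ConfigMap using the PEM format could look like the following, with the CA bundle stored under the trusted-cas.pem key (the certificate content is a placeholder):

```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: trusted-cas
  namespace: my-proxy
data:
  trusted-cas.pem: |
    -----BEGIN CERTIFICATE-----
    ... PEM-encoded CA certificate ...
    -----END CERTIFICATE-----
```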

Some older versions of TLS (and SSL before it) are now considered insecure. These versions remain enabled by default in order to maximize interoperability between TLS clients and servers that only support older versions.

If the Kafka clients that connect to the proxy support newer TLS versions, you can disable the proxy’s support for older, insecure versions. For example, if the clients support TLSv1.1, TLSv1.2, and TLSv1.3, you might choose to enable only TLSv1.3 support. This reduces susceptibility to a TLS downgrade attack.

Important

It is good practice to disable insecure protocol versions.

You can restrict which TLS protocol versions the proxy supports for client-to-proxy connections by configuring the protocols property.

Example VirtualKafkaCluster with restricted TLS protocol versions

kind: VirtualKafkaCluster
metadata:
  # ...
spec:
  # ...
  ingresses:
    - ingressRef:
        name: cluster-ip
      tls:
        certificateRef:
          # ...
        protocols:  # (1)
          allow:  # (2)
            - TLSv1.3

(1) Configures the TLS protocol versions used by the proxy.
(2) Lists the protocol versions explicitly allowed for TLS negotiation.

Alternatively, you can use deny to specify protocol versions to exclude.
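For example, a deny list might exclude older protocol versions while leaving the remaining JVM defaults enabled. This is a sketch; the versions listed are examples.

```yaml
tls:
  certificateRef:
    # ...
  protocols:
    deny:
      - TLSv1
      - TLSv1.1
```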

The names of the TLS protocol versions supported depend on the JVM in the proxy container image.

A cipher suite is a set of cryptographic algorithms that together provide the security guarantees offered by TLS. During TLS negotiation, a server and client agree on a common cipher suite that they both support.

Some older cipher suites are now considered insecure, but may be enabled on the Kafka cluster to allow older clients to connect.

The cipher suites enabled by default in the proxy depend on the JVM used in the proxy image and the TLS protocol version that is negotiated.

To prevent TLS downgrade attacks, you can disable cipher suites known to be insecure or no longer recommended. However, the proxy and the cluster must support at least one cipher suite in common.

Important

It is good practice to disable insecure cipher suites.

You can restrict which TLS cipher suites the proxy uses when negotiating client-to-proxy connections by configuring the cipherSuites property.

Example VirtualKafkaCluster configuration using cipherSuites to allow specific ciphers

kind: VirtualKafkaCluster
metadata:
  # ...
spec:
  # ...
  ingresses:
    - ingressRef:
        name: cluster-ip
      tls:
        certificateRef:
          # ...
        cipherSuites:  # (1)
          allow:  # (2)
            - TLS_AES_128_GCM_SHA256
            - TLS_AES_256_GCM_SHA384

(1) Configures the cipher suites used by the proxy.
(2) Lists the cipher suites explicitly allowed for TLS negotiation.

Alternatively, you can use deny to specify cipher suites to exclude. The names of the cipher suites supported depend on the JVM in the proxy container image.

6.3. Securing the proxy-to-broker connection

Secure proxy-to-broker communication using TLS.

By default, the proxy uses the platform’s default trust store when connecting to the proxied cluster over TLS. This works if the cluster’s TLS certificates are signed by a well-known public Certificate Authority (CA), but fails if they’re signed by a private CA instead.

Important

It is good practice to configure trust explicitly, even when the proxied cluster’s TLS certificates are signed by a public CA.

This example configures a KafkaService to trust TLS certificates signed by any Certificate Authority (CA) listed in the trusted-cas.pem entry of the ConfigMap named trusted-cas.

Example KafkaService configuration for trusting certificates

kind: KafkaService
metadata:
  # ...
spec:
  bootstrapServers: kafka.example.com:9092
  tls:
    trustAnchorRef:  # (1)
      kind: ConfigMap  # (2)
      name: trusted-cas  # (3)
      key: trusted-cas.pem  # (4)
    # ...

(1) The trustAnchorRef property references a separate OpenShift resource that contains the CA certificates to be trusted.
(2) The kind is optional and defaults to ConfigMap.
(3) The name of the resource of the given kind. This resource must exist in the same namespace as the KafkaService.
(4) The key identifies the entry in the given resource. The corresponding value must be a PEM-encoded set of CA certificates.

Some Kafka clusters require mutual TLS (mTLS) authentication. You can configure the proxy to present a TLS client certificate using the KafkaService resource.

The TLS client certificate you provide must have been issued by a Certificate Authority (CA) that’s trusted by the proxied cluster.

This example configures a KafkaService to use a TLS client certificate stored in a Secret named tls-cert-for-kafka.example.com.

Example KafkaService configuration with TLS client authentication

kind: KafkaService
metadata:
  # ...
spec:
  bootstrapServers: kafka.example.com:9092
  tls:
    trustAnchorRef:
      kind: ConfigMap
      name: trusted-cas
      key: trusted-cas.pem
    certificateRef:  # (1)
      kind: Secret  # (2)
      name: tls-cert-for-kafka.example.com  # (3)
    # ...

(1) The certificateRef property identifies the TLS client certificate to use.
(2) The kind is optional and defaults to Secret. The Secret should have type: kubernetes.io/tls.
(3) The name of the resource of the given kind. This resource must exist in the same namespace as the KafkaService.

Some older versions of TLS (and SSL before it) are now considered insecure. These versions remain enabled by default in order to maximize interoperability between TLS clients and servers that only support older versions.

If the Kafka cluster that you want to connect to supports newer TLS versions, you can disable the proxy’s support for older, insecure versions. For example, if the Kafka cluster supports TLSv1.1, TLSv1.2, and TLSv1.3, you might choose to enable only TLSv1.3 support. This reduces susceptibility to a TLS downgrade attack.

Important

It is good practice to disable insecure protocol versions.

This example configures a KafkaService to allow only TLS v1.3 when connecting to kafka.example.com.

Example KafkaService with restricted TLS protocol versions

kind: KafkaService
metadata:
  # ...
spec:
  bootstrapServers: kafka.example.com:9092
  tls:
    # ...
    protocols:  # (1)
      allow:  # (2)
        - TLSv1.3

(1) The protocols property configures the TLS protocol versions.
(2) allow lists the versions of TLS that are permitted.

The protocols property also supports deny, if you prefer to list the versions to exclude instead. The names of the TLS protocol versions supported depend on the JVM in the proxy container image.

A cipher suite is a set of cryptographic algorithms that together provide the security guarantees offered by TLS. During TLS negotiation, a server and client agree on a common cipher suite that they both support.

Some older cipher suites are now considered insecure, but may be enabled on the Kafka cluster to allow older clients to connect.

The cipher suites enabled by default in the proxy depend on the JVM used in the proxy image and the TLS protocol version that is negotiated.

To prevent TLS downgrade attacks, you can disable cipher suites known to be insecure or no longer recommended. However, the proxy and the cluster must support at least one cipher suite in common.

Important

It is good practice to disable insecure cipher suites.

Example KafkaService configured so that the proxy negotiates TLS connections using only the listed cipher suites

kind: KafkaService
metadata:
  # ...
spec:
  bootstrapServers: kafka.example.com:9092
  tls:
    # ...
    cipherSuites:  # (1)
      allow:  # (2)
        - TLS_AES_128_GCM_SHA256
        - TLS_AES_256_GCM_SHA384

(1) The cipherSuites object configures the cipher suites.
(2) allow lists the cipher suites that are permitted.

The cipherSuites property also supports deny, if you prefer to list the cipher suites to exclude instead. The names of the cipher suites supported depend on the JVM in the proxy container image.

6.4. Securing filters

Secure filters by using the security features provided by each filter and storing sensitive values in external resources such as an OpenShift Secret.

Filter resources can be configured to handle security-sensitive values like passwords or keys by referencing OpenShift Secret and ConfigMap resources.

6.4.1.1. Template use and value interpolation

Interpolation is supported in spec.configTemplate for the automatic substitution of placeholder values at runtime. This allows security-sensitive values, such as passwords or keys, to be specified in OpenShift Secret resources rather than directly in the KafkaProtocolFilter resource. Likewise, values such as trusted CA certificates can be defined in ConfigMap resources.

The operator determines which Secret and ConfigMap resources are referenced by a KafkaProtocolFilter resource and declares them as volumes in the proxy Pod, mounted into the proxy container. This example shows how to configure the RecordEncryptionFilter using a Vault KMS deployed in the same OpenShift cluster.

Example KafkaProtocolFilter configuration

kind: KafkaProtocolFilter
metadata:
  # ...
spec:
  type: RecordEncryption  # (1)
  configTemplate:  # (2)
    kms: VaultKmsService
    kmsConfig:
      vaultTransitEngineUrl: http://vault.vault.svc.cluster.local:8200/v1/transit
      vaultToken:
        password: ${secret:vault:token}  # (3)
    selector: TemplateKekSelector
    selectorConfig:
      template: "$(topicName)"  # (4)

(1) The type is the Java class name of the proxy filter. If the unqualified name is ambiguous, it must be qualified by the filter package name.
(2) The KafkaProtocolFilter requires a configTemplate, which supports interpolation references.
(3) The password uses an interpolation reference, enclosed by ${ and }, instead of a literal value. The operator supplies the value at runtime from the specified Secret.
(4) The selector template is interpreted by the proxy. It uses different delimiters, $( and ), than the interpolation reference.

6.4.1.2. Structure of interpolation references

Let’s look at the example interpolation reference ${secret:vault:token} in more detail.

It starts with ${ and ends with }. Between these, it is broken into three parts, separated by colons (:):

  • secret is a provider. Supported providers are secret and configmap (note the use of lower case).
  • vault is a path. The interpretation of the path depends on the provider.
  • token is a key. The interpretation of the key also depends on the provider.

For both secret and configmap providers:

  • The path is interpreted as the name of a Secret or ConfigMap resource in the same namespace as the KafkaProtocolFilter resource.
  • The key is interpreted as a key in the data property of the Secret or ConfigMap resource.
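For instance, the ${secret:vault:token} reference in the earlier example would resolve against a Secret named vault with a token entry, such as the following sketch (the token value is a placeholder):

```yaml
kind: Secret
apiVersion: v1
metadata:
  name: vault
  namespace: my-proxy   # same namespace as the KafkaProtocolFilter
type: Opaque
stringData:
  token: <vault-token-value>
```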

Chapter 7. Monitoring

Streams for Apache Kafka Proxy supports key observability features to help you understand the performance and health of your proxy instances.

The Streams for Apache Kafka Proxy and Streams for Apache Kafka Proxy Operator generate metrics for real-time monitoring and alerting, as well as logs that capture their actions and behavior. You can integrate these metrics with a monitoring system like Prometheus for ingestion and analysis, while configuring log levels to control the granularity of logged information.

7.1. Overview of proxy metrics

The proxy provides metrics for both connections and messages. These metrics are categorized into downstream (client-side) and upstream (broker-side) groups. They allow users to assess the impact of the proxy and its filters on their Kafka system.

  • Connection metrics count the connections made from the downstream (incoming connections from the clients) and the connections made by the proxy to the upstream (outgoing connections to the Kafka brokers).
  • Message metrics count the number of Kafka protocol request and response messages that flow through the proxy.

7.1.1. Connection metrics

Connection metrics count the TCP connections made from the client to the proxy (kroxylicious_client_to_proxy_connections_total) and from the proxy to the broker (kroxylicious_proxy_to_server_connections_total). These metrics count connection attempts, so the connection count is incremented even if the connection attempt ultimately fails.

In addition to the count metrics, there are active connection gauge metrics that track the current number of open connections, and error metrics.

  • If an error occurs while the proxy is accepting a connection from the client, the kroxylicious_client_to_proxy_errors_total metric is incremented by one.
  • If an error occurs while the proxy is attempting a connection to a broker, the kroxylicious_proxy_to_server_errors_total metric is incremented by one.

Connection and connection error metrics include the following labels: virtual_cluster (the virtual cluster’s name) and node_id (the broker’s node ID). When the client connects to the bootstrap endpoint of the virtual cluster, a node ID value of bootstrap is recorded.

The kroxylicious_client_to_proxy_errors_total metric also counts connection errors that occur before a virtual cluster has been identified. For these specific errors, the virtual_cluster and node_id labels are set to an empty string ("").

Note

Error conditions signaled within the Kafka protocol response (such as RESOURCE_NOT_FOUND or UNKNOWN_TOPIC_ID) are not classed as errors by these metrics.

The proxy provides both counter and gauge metrics for connections:

  • Connection counters (kroxylicious_*_connections_total) track the total number of connection attempts over time. These values only increase and provide a historical view of connection activity.
  • Active connection gauges (kroxylicious_*_active_connections) show the current number of open connections at any given moment. These values increase when connections are established and decrease when connections are closed.
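Once ingested into Prometheus, these metrics can be queried with PromQL. The queries below are illustrative examples, assuming the metrics are being scraped as described later in this chapter.

```
# Connection attempt rate per virtual cluster over the last 5 minutes
sum by (virtual_cluster) (rate(kroxylicious_client_to_proxy_connections_total[5m]))

# Currently open client connections per virtual cluster
sum by (virtual_cluster) (kroxylicious_client_to_proxy_active_connections)
```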
Table 7.1. Connection metrics for client and broker interactions

Each entry lists the metric name, type, labels, and description.

kroxylicious_client_to_proxy_connections_total

Counter

virtual_cluster, node_id

Incremented by one every time a connection is accepted from a client by the proxy.
This metric counts all connection attempts that reach the proxy, even those that end in error.

kroxylicious_client_to_proxy_errors_total

Counter

virtual_cluster, node_id

Incremented by one every time a connection is closed due to any downstream error.

kroxylicious_proxy_to_server_connections_total

Counter

virtual_cluster, node_id

Incremented by one every time a connection is made to the server from the proxy.
This metric counts all connections attempted to the broker, even those that end in error.

kroxylicious_proxy_to_server_errors_total

Counter

virtual_cluster, node_id

Incremented by one every time a connection is closed due to any upstream error.

kroxylicious_client_to_proxy_active_connections

Gauge

virtual_cluster, node_id

Shows the current number of active TCP connections from clients to the proxy.
This gauge reflects real-time connection state and decreases when connections are closed.

kroxylicious_proxy_to_server_active_connections

Gauge

virtual_cluster, node_id

Shows the current number of active TCP connections from the proxy to servers.
This gauge reflects real-time connection state and decreases when connections are closed.

7.1.2. Message metrics

Message metrics count and record the sizes of the Kafka protocol requests and responses that flow through the proxy.

Use these metrics to help understand:

  • the number of messages flowing through the proxy.
  • the overall volume of data through the proxy.
  • the effect the filters are having on the messages.
  • Downstream metrics

    • kroxylicious_client_to_proxy_request_total counts requests as they arrive from the client.
    • kroxylicious_proxy_to_client_response_total counts responses as they are returned to the client.
    • kroxylicious_client_to_proxy_request_size_bytes is incremented by the size of each request as it arrives from the client.
    • kroxylicious_proxy_to_client_response_size_bytes is incremented by the size of each response as it is returned to the client.
  • Upstream metrics

    • kroxylicious_proxy_to_server_request_total counts requests as they go to the broker.
    • kroxylicious_server_to_proxy_response_total counts responses as they are returned by the broker.
    • kroxylicious_proxy_to_server_request_size_bytes is incremented by the size of each request as it goes to the broker.
    • kroxylicious_server_to_proxy_response_size_bytes is incremented by the size of each response as it is returned by the broker.

The size recorded is the encoded size of the protocol message, including the 4-byte message size prefix.

Filters can alter the flow of messages through the proxy or the content of the messages. This is apparent in the metrics.

  • If a filter sends a short-circuit response or closes a connection, the downstream message counters will exceed the upstream counters.
  • If a filter changes the size of a message, the downstream size metrics will differ from the upstream size metrics.
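For example, an illustrative PromQL expression can surface the difference between downstream and upstream request counts, which indicates requests answered or dropped by filters rather than forwarded to the broker:

```
sum by (virtual_cluster) (rate(kroxylicious_client_to_proxy_request_total[5m]))
  - sum by (virtual_cluster) (rate(kroxylicious_proxy_to_server_request_total[5m]))
```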

Figure 7.1. Downstream and upstream message metrics in the proxy

Message metrics include the following labels: virtual_cluster (the virtual cluster’s name), node_id (the broker’s node ID), api_key (the message type), api_version, and decoded (a flag indicating if the message was decoded by the proxy).

When the client connects to the bootstrap endpoint of the virtual cluster, metrics are recorded with a node ID value of bootstrap.

Table 7.2. Kafka message metrics for proxy request and response flow

Each entry lists the metric name, type, labels, and description.

kroxylicious_client_to_proxy_request_total

Counter

virtual_cluster, node_id, api_key, api_version, decoded

Incremented by one every time a request arrives at the proxy from a client.

kroxylicious_proxy_to_server_request_total

Counter

virtual_cluster, node_id, api_key, api_version, decoded

Incremented by one every time a request goes from the proxy to a server.

kroxylicious_server_to_proxy_response_total

Counter

virtual_cluster, node_id, api_key, api_version, decoded

Incremented by one every time a response arrives at the proxy from a server.

kroxylicious_proxy_to_client_response_total

Counter

virtual_cluster, node_id, api_key, api_version, decoded

Incremented by one every time a response goes from the proxy to a client.

kroxylicious_client_to_proxy_request_size_bytes

Distribution

virtual_cluster, node_id, api_key, api_version, decoded

Incremented by the size of the message each time a request arrives at the proxy from a client.

kroxylicious_proxy_to_server_request_size_bytes

Distribution

virtual_cluster, node_id, api_key, api_version, decoded

Incremented by the size of the message each time a request goes from the proxy to a server.

kroxylicious_server_to_proxy_response_size_bytes

Distribution

virtual_cluster, node_id, api_key, api_version, decoded

Incremented by the size of the message each time a response arrives at the proxy from a server.

kroxylicious_proxy_to_client_response_size_bytes

Distribution

virtual_cluster, node_id, api_key, api_version, decoded

Incremented by the size of the message each time a response goes from the proxy to a client.

7.2. Overview of operator metrics

The Streams for Apache Kafka Proxy Operator is implemented using the Java Operator SDK. The Java Operator SDK exposes metrics that allow its behavior to be understood. These metrics are enabled by default in the Streams for Apache Kafka Proxy Operator.

Refer to the Java Operator SDK metric documentation to learn more about metrics.

7.3. Collecting metrics on OpenShift

When Streams for Apache Kafka Proxy is deployed on OpenShift Container Platform, Prometheus metrics can be collected through OpenShift’s monitoring for user-defined projects.

This OpenShift feature provides a dedicated Prometheus instance that enables developers to monitor workloads in their own projects (namespaces). The monitoring stack automatically scrapes metrics from eligible targets defined by custom resources such as PodMonitor.

Streams for Apache Kafka Proxy integrates with this monitoring stack by using a PodMonitor custom resource. To enable metric scraping, define a PodMonitor in the same namespace as the proxy deployment. This resource identifies the proxy pods and exposes their /metrics endpoints to the Prometheus instance, without requiring manual Prometheus configuration.

7.4. Ingesting metrics

Metrics from the Streams for Apache Kafka Proxy and Streams for Apache Kafka Proxy Operator can be ingested into your Prometheus instance. The proxy and the operator each expose an HTTP endpoint for Prometheus metrics at the /metrics path. The endpoint does not require authentication.

For the Proxy, the port that exposes the scrape endpoint is named management. For the Operator, the port is named http.

Prometheus can be configured to ingest the metrics from the scrape endpoints.

Note

This guide assumes monitoring for user-defined projects is enabled on your OpenShift cluster. For more information, see the OpenShift Monitoring guide.

7.4.1. Ingesting operator metrics

This procedure describes how to ingest metrics from the Streams for Apache Kafka Proxy Operator into Prometheus.

Prerequisites

  • Streams for Apache Kafka Proxy Operator is installed.
  • Monitoring for user-defined projects is enabled on your OpenShift cluster and a Prometheus instance has been created. For more information, see the OpenShift Monitoring guide.

Procedure

  1. Apply the PodMonitor configuration:

    apiVersion: monitoring.coreos.com/v1
    kind: PodMonitor
    metadata:
      name: operator
    spec:
      selector:
        matchLabels:
          app.kubernetes.io/name: kroxylicious
          app.kubernetes.io/component: operator
      podMetricsEndpoints:
      - path: /metrics
        port: http

    The Prometheus Operator reconfigures Prometheus automatically. Prometheus begins to scrape the Streams for Apache Kafka Proxy Operator’s metrics regularly.

  2. Check the metrics are being ingested using a PromQL query such as:

    operator_sdk_reconciliations_queue_size_kafkaproxyreconciler{kind="KafkaProxy", group="kroxylicious.io"}

7.4.2. Ingesting proxy metrics

This procedure describes how to ingest metrics from the Streams for Apache Kafka Proxy into Prometheus.

Prerequisites

  • Streams for Apache Kafka Proxy Operator is installed.
  • An instance of Streams for Apache Kafka Proxy deployed by the operator.
  • Monitoring for user-defined projects is enabled on your OpenShift cluster and a Prometheus instance has been created. For more information, see the OpenShift Monitoring guide.

Procedure

  1. Apply the PodMonitor configuration:

    apiVersion: monitoring.coreos.com/v1
    kind: PodMonitor
    metadata:
      name: proxy
    spec:
      selector:
        matchLabels:
          app.kubernetes.io/name: kroxylicious
          app.kubernetes.io/component: proxy
      podMetricsEndpoints:
      - path: /metrics
        port: management

    The Prometheus Operator reconfigures Prometheus automatically. Prometheus begins to scrape the proxy’s metrics regularly.

  2. Check the metrics are being ingested using a PromQL query such as:

    kroxylicious_build_info

7.5. Setting log levels

You can independently control the logging level of both the Streams for Apache Kafka Proxy Operator and the Streams for Apache Kafka Proxy.

In both cases, logging levels are controlled using two environment variables:

  • KROXYLICIOUS_APP_LOG_LEVEL controls the logging of the application (io.kroxylicious loggers). It defaults to INFO.
  • KROXYLICIOUS_ROOT_LOG_LEVEL controls the logging level at the root. It defaults to WARN.

When trying to diagnose a problem, start by raising the logging level of KROXYLICIOUS_APP_LOG_LEVEL. If more detailed diagnostics are required, raise the KROXYLICIOUS_ROOT_LOG_LEVEL. Both the proxy and the operator use Apache Log4j 2 and support the logging levels it understands: TRACE, DEBUG, INFO, WARN, and ERROR.

Warning

Running the operator or the proxy at elevated logging levels, such as DEBUG or TRACE, can generate a large volume of logs, which may consume significant storage and affect performance. Run at these levels only for as long as necessary.

7.5.1. Overriding proxy logging levels

This procedure describes how to override the logging level of the Streams for Apache Kafka Proxy.

Prerequisites

  • An instance of Streams for Apache Kafka Proxy deployed by the Streams for Apache Kafka Proxy Operator.

Procedure

  1. Apply the KROXYLICIOUS_APP_LOG_LEVEL or KROXYLICIOUS_ROOT_LOG_LEVEL environment variable to the proxy’s OpenShift Deployment resource:

    oc set env -n <namespace> deployment <deployment_name> KROXYLICIOUS_APP_LOG_LEVEL=DEBUG

    The Deployment resource has the same name as the KafkaProxy.

    OpenShift recreates the proxy pod automatically.

  2. Verify that the new logging level has taken effect:

    oc logs -f -n <namespace> deployment/<deployment_name>
7.5.1.1. Reverting proxy logging levels

This procedure describes how to revert the logging level of the Streams for Apache Kafka Proxy back to its defaults.

Prerequisites

  • An instance of Streams for Apache Kafka Proxy deployed by the Streams for Apache Kafka Proxy Operator.

Procedure

  1. Remove the KROXYLICIOUS_APP_LOG_LEVEL or KROXYLICIOUS_ROOT_LOG_LEVEL environment variable from the proxy’s OpenShift Deployment:

    oc set env -n <namespace> deployment <deployment_name> KROXYLICIOUS_APP_LOG_LEVEL-

    OpenShift recreates the proxy pod automatically.

  2. Verify that the logging level has reverted to its default:

    oc logs -f -n <namespace> deployment/<deployment_name>

7.5.2. Overriding operator logging levels (OLM)

This procedure describes how to override the logging level of the Streams for Apache Kafka Proxy Operator. It applies when the operator was installed by OLM.

Prerequisites

  • Streams for Apache Kafka Proxy Operator installed using OLM.

Procedure

  1. Identify the name of the Subscription resource that has installed the operator and its namespace:

    oc get subscriptions.operators.coreos.com --all-namespaces | grep kroxylicious
  2. Apply the KROXYLICIOUS_APP_LOG_LEVEL or KROXYLICIOUS_ROOT_LOG_LEVEL environment variable to the Subscription:

    oc patch subscription -n <namespace> <subscription_name> -p '{"spec":{"config":{"env":[{"name":"KROXYLICIOUS_APP_LOG_LEVEL","value":"DEBUG"}]}}}' --type=merge

    OpenShift recreates the operator pod automatically.

  3. Verify that the new logging level has taken effect:

    oc logs -f -n <namespace> deployment/kroxylicious-operator
7.5.2.1. Reverting operator logging levels

This procedure describes how to revert the logging level of the Streams for Apache Kafka Proxy Operator back to its defaults.

Prerequisites

  • Streams for Apache Kafka Proxy Operator installed using OLM.

Procedure

  1. Remove the KROXYLICIOUS_APP_LOG_LEVEL or KROXYLICIOUS_ROOT_LOG_LEVEL environment variable from the Subscription:

    oc patch subscription -n <namespace> <subscription_name> -p '{"spec":{"config":{"env":[]}}}' --type=merge

    OpenShift recreates the operator pod automatically.

  2. Verify that the logging level has reverted to its default:

    oc logs -f -n <namespace> deployment/kroxylicious-operator

7.5.3. Overriding operator logging levels (YAML bundle)

This procedure describes how to override the logging level of the Streams for Apache Kafka Proxy Operator. It applies when the operator was installed from the YAML bundle.

Prerequisites

  • Streams for Apache Kafka Proxy Operator installed from the YAML bundle.

Procedure

  1. Apply the KROXYLICIOUS_APP_LOG_LEVEL or KROXYLICIOUS_ROOT_LOG_LEVEL environment variable to the operator’s OpenShift Deployment:

    oc set env -n kroxylicious-operator deployment kroxylicious-operator KROXYLICIOUS_APP_LOG_LEVEL=DEBUG

    OpenShift recreates the operator pod automatically.
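    To confirm which logging variables are currently set on the Deployment, you can list its environment using the standard `--list` option of `oc set env`:

    ```
    oc set env -n kroxylicious-operator deployment/kroxylicious-operator --list
    ```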

  2. Verify that the new logging level has taken effect:

    oc logs -f -n kroxylicious-operator deployment/kroxylicious-operator
7.5.3.1. Reverting operator logging levels

This procedure describes how to revert the logging level of the Streams for Apache Kafka Proxy Operator to its default.

Prerequisites

  • Streams for Apache Kafka Proxy Operator installed from the YAML bundle.

Procedure

  1. Remove the KROXYLICIOUS_APP_LOG_LEVEL or KROXYLICIOUS_ROOT_LOG_LEVEL environment variable from the operator’s OpenShift Deployment:

    oc set env -n kroxylicious-operator deployment kroxylicious-operator KROXYLICIOUS_APP_LOG_LEVEL-

    The trailing hyphen (-) after the variable name instructs oc set env to remove the variable.

    OpenShift recreates the operator pod automatically.

  2. Verify that the logging level has reverted to its default:

    oc logs -f -n kroxylicious-operator deployment/kroxylicious-operator

Chapter 8. Glossary

Glossary of terms used in the Streams for Apache Kafka Proxy documentation.

API
Application Programming Interface.
CA
Certificate Authority. An organization that issues certificates.
CR
Custom Resource. An instance of a CRD; that is, a resource of a kind that is not built into OpenShift.
CRD
Custom Resource Definition. A mechanism for defining extensions to the OpenShift API.
KMS
Key Management System. A dedicated system for controlling access to cryptographic material, and providing operations which use that material.
mTLS
Mutual Transport Layer Security. A configuration of TLS where the client presents a certificate to a server, which the server authenticates.
TLS
Transport Layer Security. A secure transport protocol in which a server presents a certificate to a client, which the client authenticates. TLS was previously known as the Secure Sockets Layer (SSL).
TCP
Transmission Control Protocol.

Appendix A. Using your subscription

Streams for Apache Kafka is provided through a software subscription. To manage your subscriptions, access your account at the Red Hat Customer Portal.

A.1. Accessing Your Account

  1. Go to access.redhat.com.
  2. If you do not already have an account, create one.
  3. Log in to your account.

A.2. Activating a Subscription

  1. Go to access.redhat.com.
  2. Navigate to My Subscriptions.
  3. Navigate to Activate a subscription and enter your 16-digit activation number.

A.3. Downloading Zip and Tar Files

To access zip or tar files, use the customer portal to find the relevant files for download. If you are using RPM packages, this step is not required.

  1. Open a browser and log in to the Red Hat Customer Portal Product Downloads page at access.redhat.com/downloads.
  2. Locate the Streams for Apache Kafka entries in the INTEGRATION AND AUTOMATION category.
  3. Select the desired Streams for Apache Kafka product. The Software Downloads page opens.
  4. Click the Download link for your component.

A.4. Installing packages with DNF

To install a package and all its dependencies, use:

dnf install <package_name>

To install a previously downloaded package from a local directory, use:

dnf install <path_to_download_package>
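For example, to install a package downloaded to the current directory (the filename below is hypothetical; substitute the RPM you downloaded):

```
sudo dnf install ./streams-for-apache-kafka-proxy-3.1.rpm
```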

Revised on 2025-12-16 10:57:42 UTC

Legal Notice

Copyright © Red Hat.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.