Chapter 6. Deploying Streams for Apache Kafka using installation artifacts

Having prepared your environment for a deployment of Streams for Apache Kafka, you can deploy Streams for Apache Kafka to an OpenShift cluster. Use the installation files provided with the release artifacts.

Streams for Apache Kafka is based on Strimzi 0.40.x. You can deploy Streams for Apache Kafka 2.7 on OpenShift 4.12 to 4.15.

The steps to deploy Streams for Apache Kafka using the installation files are as follows:

  1. Deploy the Cluster Operator
  2. Use the Cluster Operator to deploy the following:

    • Kafka cluster
    • Topic Operator
    • User Operator

  3. Optionally, deploy the following Kafka components according to your requirements:

    • Kafka Connect
    • Kafka MirrorMaker
    • Kafka Bridge
    • Cruise Control

Note

To run the commands in this guide, an OpenShift user must have the rights to manage role-based access control (RBAC) and CRDs.

6.1. Basic deployment path

You can set up a deployment where Streams for Apache Kafka manages a single Kafka cluster in the same namespace. You might use this configuration for development or testing. Or you can use Streams for Apache Kafka in a production environment to manage a number of Kafka clusters in different namespaces.

The first step for any deployment of Streams for Apache Kafka is to install the Cluster Operator using the install/cluster-operator files.

A single command applies all the installation files in the cluster-operator folder: oc apply -f ./install/cluster-operator.

The command sets up everything you need to be able to create and manage a Kafka deployment, including the following:

  • Cluster Operator (Deployment, ConfigMap)
  • Streams for Apache Kafka CRDs (CustomResourceDefinition)
  • RBAC resources (ClusterRole, ClusterRoleBinding, RoleBinding)
  • Service account (ServiceAccount)

The basic deployment path is as follows:

  1. Download the release artifacts
  2. Create an OpenShift namespace in which to deploy the Cluster Operator
  3. Deploy the Cluster Operator

    1. Update the install/cluster-operator files to use the namespace created for the Cluster Operator
    2. Install the Cluster Operator to watch one, multiple, or all namespaces
  4. Create a Kafka cluster

After which, you can deploy other Kafka components and set up monitoring of your deployment.
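
For orientation, the complete path might look like the following shell sketch (the namespace and example file names are illustrative; the detailed procedures are in the sections that follow):

# Assumes the release artifacts are downloaded and unpacked in the current directory
oc new-project my-cluster-operator-namespace                                   # create the namespace
sed -i 's/namespace: .*/namespace: my-cluster-operator-namespace/' \
  install/cluster-operator/*RoleBinding*.yaml                                  # update the installation files
oc apply -f ./install/cluster-operator -n my-cluster-operator-namespace        # deploy the Cluster Operator
oc apply -f examples/kafka/kafka-persistent.yaml -n my-cluster-operator-namespace  # create a Kafka cluster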

6.2. Deploying the Cluster Operator

The Cluster Operator is responsible for deploying and managing Kafka clusters within an OpenShift cluster.

When the Cluster Operator is running, it starts to watch for updates of Kafka resources.

By default, a single replica of the Cluster Operator is deployed. You can add replicas with leader election so that additional Cluster Operators are on standby in case of disruption. For more information, see Section 9.5.4, “Running multiple Cluster Operator replicas with leader election”.

6.2.1. Specifying the namespaces the Cluster Operator watches

The Cluster Operator watches for updates in the namespaces where the Kafka resources are deployed. When you deploy the Cluster Operator, you specify which namespaces to watch in the OpenShift cluster. You can specify the following namespaces:

  • A single selected namespace
  • Multiple selected namespaces
  • All namespaces in the OpenShift cluster

Watching multiple selected namespaces has the greatest impact on performance because of the increased processing overhead. To optimize performance, it is generally recommended to watch either a single namespace or the entire cluster. Watching a single namespace allows focused monitoring of namespace-specific resources, while watching all namespaces provides a comprehensive view of resources across the cluster.

The Cluster Operator watches for changes to the following resources:

  • Kafka for the Kafka cluster.
  • KafkaConnect for the Kafka Connect cluster.
  • KafkaConnector for creating and managing connectors in a Kafka Connect cluster.
  • KafkaMirrorMaker for the Kafka MirrorMaker instance.
  • KafkaMirrorMaker2 for the Kafka MirrorMaker 2 instance.
  • KafkaBridge for the Kafka Bridge instance.
  • KafkaRebalance for the Cruise Control optimization requests.

When one of these resources is created in the OpenShift cluster, the operator gets the cluster description from the resource and starts creating a new cluster for it by creating the necessary OpenShift resources, such as Deployments, Pods, Services, and ConfigMaps.

Each time a Kafka resource is updated, the operator performs corresponding updates on the OpenShift resources that make up the cluster for the resource.

Resources are patched, or deleted and then recreated, so that the cluster reflects the desired state declared in the resource. This operation might cause a rolling update that can lead to service disruption.

When a resource is deleted, the operator undeploys the cluster and deletes all related OpenShift resources.

Note

While the Cluster Operator can watch one, multiple, or all namespaces in an OpenShift cluster, the Topic Operator and User Operator watch for KafkaTopic and KafkaUser resources in a single namespace. For more information, see Section 1.2.1, “Watching Streams for Apache Kafka resources in OpenShift namespaces”.

6.2.2. Deploying the Cluster Operator to watch a single namespace

This procedure shows how to deploy the Cluster Operator to watch Streams for Apache Kafka resources in a single namespace in your OpenShift cluster.

Prerequisites

  • You need an account with permission to create and manage CustomResourceDefinition and RBAC (ClusterRole, and RoleBinding) resources.

Procedure

  1. Edit the Streams for Apache Kafka installation files to use the namespace the Cluster Operator is going to be installed into.

    For example, in this procedure the Cluster Operator is installed into the namespace my-cluster-operator-namespace.

    On Linux, use:

    sed -i 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml

    On MacOS, use:

    sed -i '' 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml
  2. Deploy the Cluster Operator:

    oc create -f install/cluster-operator -n my-cluster-operator-namespace
  3. Check the status of the deployment:

    oc get deployments -n my-cluster-operator-namespace

    Output shows the deployment name and readiness

    NAME                      READY  UP-TO-DATE  AVAILABLE
    strimzi-cluster-operator  1/1    1           1

    READY shows the number of replicas that are ready/expected. The deployment is successful when the AVAILABLE output shows 1.
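
As an optional follow-up check, not part of the procedure above, you can confirm that the Streams for Apache Kafka CRDs were installed and inspect the operator log:

oc get crd | grep strimzi.io
oc logs deployment/strimzi-cluster-operator -n my-cluster-operator-namespace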

6.2.3. Deploying the Cluster Operator to watch multiple namespaces

This procedure shows how to deploy the Cluster Operator to watch Streams for Apache Kafka resources across multiple namespaces in your OpenShift cluster.

Prerequisites

  • You need an account with permission to create and manage CustomResourceDefinition and RBAC (ClusterRole, and RoleBinding) resources.

Procedure

  1. Edit the Streams for Apache Kafka installation files to use the namespace the Cluster Operator is going to be installed into.

    For example, in this procedure the Cluster Operator is installed into the namespace my-cluster-operator-namespace.

    On Linux, use:

    sed -i 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml

    On MacOS, use:

    sed -i '' 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml
  2. Edit the install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml file to add a list of all the namespaces the Cluster Operator will watch to the STRIMZI_NAMESPACE environment variable.

    For example, in this procedure the Cluster Operator will watch the namespaces watched-namespace-1, watched-namespace-2, watched-namespace-3.

    apiVersion: apps/v1
    kind: Deployment
    spec:
      # ...
      template:
        spec:
          serviceAccountName: strimzi-cluster-operator
          containers:
          - name: strimzi-cluster-operator
            image: registry.redhat.io/amq-streams/strimzi-rhel9-operator:2.7.0
            imagePullPolicy: IfNotPresent
            env:
            - name: STRIMZI_NAMESPACE
              value: watched-namespace-1,watched-namespace-2,watched-namespace-3
  3. For each namespace listed, install the RoleBindings.

    Replace <watched_namespace> in the following commands with a namespace from the previous step, repeating them for watched-namespace-1, watched-namespace-2, and watched-namespace-3 (a scripted alternative is shown after this procedure):

    oc create -f install/cluster-operator/020-RoleBinding-strimzi-cluster-operator.yaml -n <watched_namespace>
    oc create -f install/cluster-operator/023-RoleBinding-strimzi-cluster-operator.yaml -n <watched_namespace>
    oc create -f install/cluster-operator/031-RoleBinding-strimzi-cluster-operator-entity-operator-delegation.yaml -n <watched_namespace>
  4. Deploy the Cluster Operator:

    oc create -f install/cluster-operator -n my-cluster-operator-namespace
  5. Check the status of the deployment:

    oc get deployments -n my-cluster-operator-namespace

    Output shows the deployment name and readiness

    NAME                      READY  UP-TO-DATE  AVAILABLE
    strimzi-cluster-operator  1/1    1           1

    READY shows the number of replicas that are ready/expected. The deployment is successful when the AVAILABLE output shows 1.
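
For convenience, the RoleBinding commands in step 3 can be scripted. A minimal bash sketch, assuming the three example namespaces above:

for ns in watched-namespace-1 watched-namespace-2 watched-namespace-3; do
  oc create -f install/cluster-operator/020-RoleBinding-strimzi-cluster-operator.yaml -n "$ns"
  oc create -f install/cluster-operator/023-RoleBinding-strimzi-cluster-operator.yaml -n "$ns"
  oc create -f install/cluster-operator/031-RoleBinding-strimzi-cluster-operator-entity-operator-delegation.yaml -n "$ns"
done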

6.2.4. Deploying the Cluster Operator to watch all namespaces

This procedure shows how to deploy the Cluster Operator to watch Streams for Apache Kafka resources across all namespaces in your OpenShift cluster.

When running in this mode, the Cluster Operator automatically manages clusters in any new namespaces that are created.

Prerequisites

  • You need an account with permission to create and manage CustomResourceDefinition and RBAC (ClusterRole, and RoleBinding) resources.

Procedure

  1. Edit the Streams for Apache Kafka installation files to use the namespace the Cluster Operator is going to be installed into.

    For example, in this procedure the Cluster Operator is installed into the namespace my-cluster-operator-namespace.

    On Linux, use:

    sed -i 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml

    On MacOS, use:

    sed -i '' 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml
  2. Edit the install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml file to set the value of the STRIMZI_NAMESPACE environment variable to *.

    apiVersion: apps/v1
    kind: Deployment
    spec:
      # ...
      template:
        spec:
          # ...
          serviceAccountName: strimzi-cluster-operator
          containers:
          - name: strimzi-cluster-operator
            image: registry.redhat.io/amq-streams/strimzi-rhel9-operator:2.7.0
            imagePullPolicy: IfNotPresent
            env:
            - name: STRIMZI_NAMESPACE
              value: "*"
            # ...
  3. Create ClusterRoleBindings that grant cluster-wide access for all namespaces to the Cluster Operator.

    oc create clusterrolebinding strimzi-cluster-operator-namespaced --clusterrole=strimzi-cluster-operator-namespaced --serviceaccount my-cluster-operator-namespace:strimzi-cluster-operator
    oc create clusterrolebinding strimzi-cluster-operator-watched --clusterrole=strimzi-cluster-operator-watched --serviceaccount my-cluster-operator-namespace:strimzi-cluster-operator
    oc create clusterrolebinding strimzi-cluster-operator-entity-operator-delegation --clusterrole=strimzi-entity-operator --serviceaccount my-cluster-operator-namespace:strimzi-cluster-operator
  4. Deploy the Cluster Operator to your OpenShift cluster.

    oc create -f install/cluster-operator -n my-cluster-operator-namespace
  5. Check the status of the deployment:

    oc get deployments -n my-cluster-operator-namespace

    Output shows the deployment name and readiness

    NAME                      READY  UP-TO-DATE  AVAILABLE
    strimzi-cluster-operator  1/1    1           1

    READY shows the number of replicas that are ready/expected. The deployment is successful when the AVAILABLE output shows 1.

6.3. Deploying Kafka

To be able to manage a Kafka cluster with the Cluster Operator, you must deploy it as a Kafka resource. Streams for Apache Kafka provides example deployment files to do this. You can use these files to deploy the Topic Operator and User Operator at the same time.

After you have deployed the Cluster Operator, use a Kafka resource to deploy the following components:

  • Kafka cluster (with node pools in KRaft or ZooKeeper mode, or ZooKeeper-based without node pools)
  • Topic Operator
  • User Operator

Node pools provide configuration for a set of Kafka nodes. By using node pools, nodes can have different configuration within the same Kafka cluster.

If you haven’t deployed a Kafka cluster as a Kafka resource, you can’t use the Cluster Operator to manage it. This applies, for example, to a Kafka cluster running outside of OpenShift. However, you can use the Topic Operator and User Operator with a Kafka cluster that is not managed by Streams for Apache Kafka, by deploying them as standalone components. You can also deploy and use other Kafka components with a Kafka cluster not managed by Streams for Apache Kafka.

6.3.1. Deploying a Kafka cluster with node pools

This procedure shows how to deploy Kafka with node pools to your OpenShift cluster using the Cluster Operator. Node pools represent a distinct group of Kafka nodes within a Kafka cluster that share the same configuration. For each Kafka node in the node pool, any configuration not defined in the node pool is inherited from the cluster configuration in the Kafka resource.

The deployment uses a YAML file to provide the specification to create a KafkaNodePool resource. You can use node pools with Kafka clusters that use KRaft (Kafka Raft metadata) mode or ZooKeeper for cluster management. To deploy a Kafka cluster in KRaft mode, you must use the KafkaNodePool resources.
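
For reference, a minimal sketch of a KafkaNodePool resource (the pool name, replica count, and storage are illustrative; the example files listed below contain complete configurations):

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: pool-a
  labels:
    strimzi.io/cluster: my-cluster # links the pool to the Kafka resource named my-cluster
spec:
  replicas: 3
  roles:
    - broker # add "controller" as well for dual-role KRaft nodes
  storage:
    type: jbod
    volumes:
      - id: 0
        type: persistent-claim
        size: 100Gi
        deleteClaim: false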

Streams for Apache Kafka provides the following example files that you can use to create a Kafka cluster that uses node pools:

kafka-with-dual-role-kraft-nodes.yaml
Deploys a Kafka cluster with one pool of KRaft nodes that share the broker and controller roles.
kafka-with-kraft.yaml
Deploys a persistent Kafka cluster with one pool of controller nodes and one pool of broker nodes.
kafka-with-kraft-ephemeral.yaml
Deploys an ephemeral Kafka cluster with one pool of controller nodes and one pool of broker nodes.
kafka.yaml
Deploys ZooKeeper with 3 nodes, and 2 different pools of Kafka brokers. Each of the pools has 3 brokers. The pools in the example use different storage configurations.
Note

You can perform the steps outlined here to deploy a new Kafka cluster with KafkaNodePool resources or migrate your existing Kafka cluster.

Procedure

  1. Deploy a KRaft-based Kafka cluster.

    • To deploy a Kafka cluster in KRaft mode with a single node pool that uses dual-role nodes:

      oc apply -f examples/kafka/kraft/kafka-with-dual-role-nodes.yaml
    • To deploy a persistent Kafka cluster in KRaft mode with separate node pools for broker and controller nodes:

      oc apply -f examples/kafka/kraft/kafka.yaml
    • To deploy an ephemeral Kafka cluster in KRaft mode with separate node pools for broker and controller nodes:

      oc apply -f examples/kafka/kraft/kafka-ephemeral.yaml
    • To deploy a Kafka cluster and ZooKeeper cluster with two node pools of three brokers:

      oc apply -f examples/kafka/kafka-with-node-pools.yaml
  2. Check the status of the deployment:

    oc get pods -n <my_cluster_operator_namespace>

    Output shows the node pool names and readiness

    NAME                        READY  STATUS   RESTARTS
    my-cluster-entity-operator  3/3    Running  0
    my-cluster-pool-a-0         1/1    Running  0
    my-cluster-pool-a-1         1/1    Running  0
    my-cluster-pool-a-4         1/1    Running  0

    • my-cluster is the name of the Kafka cluster.
    • pool-a is the name of the node pool.

      A sequential index number starting with 0 identifies each Kafka pod created. If you are using ZooKeeper, you’ll also see the ZooKeeper pods.

      READY shows the number of replicas that are ready/expected. The deployment is successful when the STATUS displays as Running.

      Information on the deployment is also shown in the status of the KafkaNodePool resource, including a list of IDs for nodes in the pool.

      Note

      Node IDs are assigned sequentially starting at 0 (zero) across all node pools within a cluster. This means that node IDs might not run sequentially within a specific node pool. If there are gaps in the sequence of node IDs across the cluster, the next node to be added is assigned an ID that fills the gap. When scaling down, the node with the highest node ID within a pool is removed.

Additional resources

Node pool configuration

6.3.2. Deploying a ZooKeeper-based Kafka cluster without node pools

This procedure shows how to deploy a ZooKeeper-based Kafka cluster to your OpenShift cluster using the Cluster Operator.

The deployment uses a YAML file to provide the specification to create a Kafka resource.

Streams for Apache Kafka provides the following example files to create a Kafka cluster that uses ZooKeeper for cluster management:

kafka-persistent.yaml
Deploys a persistent cluster with three ZooKeeper and three Kafka nodes.
kafka-jbod.yaml
Deploys a persistent cluster with three ZooKeeper and three Kafka nodes (each using multiple persistent volumes).
kafka-persistent-single.yaml
Deploys a persistent cluster with a single ZooKeeper node and a single Kafka node.
kafka-ephemeral.yaml
Deploys an ephemeral cluster with three ZooKeeper and three Kafka nodes.
kafka-ephemeral-single.yaml
Deploys an ephemeral cluster with three ZooKeeper nodes and a single Kafka node.

In this procedure, we use the examples for an ephemeral and persistent Kafka cluster deployment.

Ephemeral cluster
In general, an ephemeral (or temporary) Kafka cluster is suitable for development and testing purposes, not for production. This deployment uses emptyDir volumes for storing broker information (for ZooKeeper) and topics or partitions (for Kafka). Using an emptyDir volume means that its content is strictly related to the pod life cycle and is deleted when the pod goes down.
Persistent cluster

A persistent Kafka cluster uses persistent volumes to store ZooKeeper and Kafka data. A PersistentVolume is acquired using a PersistentVolumeClaim to make it independent of the actual type of the PersistentVolume. The PersistentVolumeClaim can use a StorageClass to trigger automatic volume provisioning. When no StorageClass is specified, OpenShift will try to use the default StorageClass.

The following examples show some common types of persistent volumes:

  • If your OpenShift cluster runs on Amazon AWS, OpenShift can provision Amazon EBS volumes
  • If your OpenShift cluster runs on Microsoft Azure, OpenShift can provision Azure Disk Storage volumes
  • If your OpenShift cluster runs on Google Cloud, OpenShift can provision Persistent Disk volumes
  • If your OpenShift cluster runs on bare metal, OpenShift can provision local persistent volumes
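
For example, a specific StorageClass can be requested in the Kafka storage configuration (my-storage-class is an illustrative name):

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    storage:
      type: persistent-claim
      size: 100Gi
      class: my-storage-class # StorageClass used to provision the persistent volumes
  # ...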

The example YAML files specify the latest supported Kafka version, and configuration for its supported log message format version and inter-broker protocol version. The inter.broker.protocol.version property for the Kafka config must be the version supported by the specified Kafka version (spec.kafka.version). The property represents the version of Kafka protocol used in a Kafka cluster.

From Kafka 3.0.0, when the inter.broker.protocol.version is set to 3.0 or higher, the log.message.format.version option is ignored and doesn’t need to be set.

The example clusters are named my-cluster by default. The cluster name is defined by the name of the resource and cannot be changed after the cluster has been deployed. To change the cluster name before you deploy the cluster, edit the Kafka.metadata.name property of the Kafka resource in the relevant YAML file.

Default cluster name and specified Kafka versions

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    version: 3.7.0
    #...
    config:
      #...
      log.message.format.version: "3.7"
      inter.broker.protocol.version: "3.7"
  # ...
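
To change the cluster name before deploying, a one-line edit such as the following can be used (my-kafka is an illustrative name):

sed -i 's/name: my-cluster/name: my-kafka/' examples/kafka/kafka-persistent.yaml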

Procedure

  1. Deploy a ZooKeeper-based Kafka cluster.

    • To deploy an ephemeral cluster:

      oc apply -f examples/kafka/kafka-ephemeral.yaml
    • To deploy a persistent cluster:

      oc apply -f examples/kafka/kafka-persistent.yaml
  2. Check the status of the deployment:

    oc get pods -n <my_cluster_operator_namespace>

    Output shows the pod names and readiness

    NAME                        READY   STATUS    RESTARTS
    my-cluster-entity-operator  3/3     Running   0
    my-cluster-kafka-0          1/1     Running   0
    my-cluster-kafka-1          1/1     Running   0
    my-cluster-kafka-2          1/1     Running   0
    my-cluster-zookeeper-0      1/1     Running   0
    my-cluster-zookeeper-1      1/1     Running   0
    my-cluster-zookeeper-2      1/1     Running   0

    my-cluster is the name of the Kafka cluster.

    A sequential index number starting with 0 identifies each Kafka and ZooKeeper pod created.

    With the default deployment, you create an Entity Operator pod, 3 Kafka pods, and 3 ZooKeeper pods.

    READY shows the number of replicas that are ready/expected. The deployment is successful when the STATUS displays as Running.

Additional resources

Kafka cluster configuration

6.3.3. Deploying the Topic Operator using the Cluster Operator

This procedure describes how to deploy the Topic Operator using the Cluster Operator. The Topic Operator can be deployed for use in either bidirectional mode or unidirectional mode. To learn more about bidirectional and unidirectional topic management, see Section 10.1, “Topic management modes”.

You configure the entityOperator property of the Kafka resource to include the topicOperator. By default, the Topic Operator watches for KafkaTopic resources in the namespace of the Kafka cluster deployed by the Cluster Operator. You can also specify a namespace using watchedNamespace in the Topic Operator spec. A single Topic Operator can watch a single namespace. One namespace should be watched by only one Topic Operator.

If you use Streams for Apache Kafka to deploy multiple Kafka clusters into the same namespace, enable the Topic Operator for only one Kafka cluster or use the watchedNamespace property to configure the Topic Operators to watch other namespaces.

If you want to use the Topic Operator with a Kafka cluster that is not managed by Streams for Apache Kafka, you must deploy the Topic Operator as a standalone component.

For more information about configuring the entityOperator and topicOperator properties, see Configuring the Entity Operator.
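
For example, a sketch of a Topic Operator configured to watch a namespace other than the one containing the Kafka cluster (the namespace name is illustrative):

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  # ...
  entityOperator:
    topicOperator:
      watchedNamespace: my-topic-namespace # namespace in which KafkaTopic resources are watched
    userOperator: {}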

Procedure

  1. Edit the entityOperator properties of the Kafka resource to include topicOperator:

    apiVersion: kafka.strimzi.io/v1beta2
    kind: Kafka
    metadata:
      name: my-cluster
    spec:
      #...
      entityOperator:
        topicOperator: {}
        userOperator: {}
  2. Configure the Topic Operator spec using the properties described in the EntityTopicOperatorSpec schema reference.

    Use an empty object ({}) if you want all properties to use their default values.

  3. Create or update the resource:

    oc apply -f <kafka_configuration_file>
  4. Check the status of the deployment:

    oc get pods -n <my_cluster_operator_namespace>

    Output shows the pod name and readiness

    NAME                        READY   STATUS    RESTARTS
    my-cluster-entity-operator  3/3     Running   0
    # ...

    my-cluster is the name of the Kafka cluster.

    READY shows the number of replicas that are ready/expected. The deployment is successful when the STATUS displays as Running.

6.3.4. Deploying the User Operator using the Cluster Operator

This procedure describes how to deploy the User Operator using the Cluster Operator.

You configure the entityOperator property of the Kafka resource to include the userOperator. By default, the User Operator watches for KafkaUser resources in the namespace of the Kafka cluster deployment. You can also specify a namespace using watchedNamespace in the User Operator spec. A single User Operator can watch a single namespace. One namespace should be watched by only one User Operator.

If you want to use the User Operator with a Kafka cluster that is not managed by Streams for Apache Kafka, you must deploy the User Operator as a standalone component.

For more information about configuring the entityOperator and userOperator properties, see Configuring the Entity Operator.
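
Similarly, a sketch of a User Operator configured with a watched namespace (the namespace name is illustrative):

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  # ...
  entityOperator:
    topicOperator: {}
    userOperator:
      watchedNamespace: my-user-namespace # namespace in which KafkaUser resources are watched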

Procedure

  1. Edit the entityOperator properties of the Kafka resource to include userOperator:

    apiVersion: kafka.strimzi.io/v1beta2
    kind: Kafka
    metadata:
      name: my-cluster
    spec:
      #...
      entityOperator:
        topicOperator: {}
        userOperator: {}
  2. Configure the User Operator spec using the properties described in EntityUserOperatorSpec schema reference.

    Use an empty object ({}) if you want all properties to use their default values.

  3. Create or update the resource:

    oc apply -f <kafka_configuration_file>
  4. Check the status of the deployment:

    oc get pods -n <my_cluster_operator_namespace>

    Output shows the pod name and readiness

    NAME                        READY   STATUS    RESTARTS
    my-cluster-entity-operator  3/3     Running   0
    # ...

    my-cluster is the name of the Kafka cluster.

    READY shows the number of replicas that are ready/expected. The deployment is successful when the STATUS displays as Running.

6.3.5. Connecting to ZooKeeper from a terminal

ZooKeeper services are secured with encryption and authentication and are not intended to be used by external applications that are not part of Streams for Apache Kafka.

However, if you want to use CLI tools that require a connection to ZooKeeper, you can use a terminal inside a ZooKeeper pod and connect to localhost:12181 as the ZooKeeper address.

Prerequisites

  • An OpenShift cluster is available.
  • A Kafka cluster is running.
  • The Cluster Operator is running.

Procedure

  1. Open the terminal using the OpenShift console or run the exec command from your CLI.

    For example:

    oc exec -ti my-cluster-zookeeper-0 -- bin/zookeeper-shell.sh localhost:12181 ls /

    Be sure to use localhost:12181.

6.3.6. List of Kafka cluster resources

The following resources are created by the Cluster Operator in the OpenShift cluster.

Shared resources

<kafka_cluster_name>-cluster-ca
Secret with the Cluster CA private key used to encrypt the cluster communication.
<kafka_cluster_name>-cluster-ca-cert
Secret with the Cluster CA public key. This key can be used to verify the identity of the Kafka brokers.
<kafka_cluster_name>-clients-ca
Secret with the Clients CA private key used to sign user certificates.
<kafka_cluster_name>-clients-ca-cert
Secret with the Clients CA public key. This key can be used to verify the identity of the Kafka users.
<kafka_cluster_name>-cluster-operator-certs
Secret with Cluster operators keys for communication with Kafka and ZooKeeper.

ZooKeeper nodes

<kafka_cluster_name>-zookeeper

Name given to the following ZooKeeper resources:

  • StrimziPodSet for managing the ZooKeeper node pods.
  • Service account used by the ZooKeeper nodes.
  • PodDisruptionBudget configured for the ZooKeeper nodes.
<kafka_cluster_name>-zookeeper-<pod_id>
Pods created by the StrimziPodSet.
<kafka_cluster_name>-zookeeper-nodes
Headless Service needed to have DNS resolve the ZooKeeper pods IP addresses directly.
<kafka_cluster_name>-zookeeper-client
Service used by Kafka brokers to connect to ZooKeeper nodes as clients.
<kafka_cluster_name>-zookeeper-config
ConfigMap that contains the ZooKeeper ancillary configuration, and is mounted as a volume by the ZooKeeper node pods.
<kafka_cluster_name>-zookeeper-nodes
Secret with ZooKeeper node keys.
<kafka_cluster_name>-network-policy-zookeeper
Network policy managing access to the ZooKeeper services.
data-<kafka_cluster_name>-zookeeper-<pod_id>
Persistent Volume Claim for the volume used for storing data for a specific ZooKeeper node. This resource will be created only if persistent storage is selected for provisioning persistent volumes to store data.

Kafka brokers

<kafka_cluster_name>-kafka

Name given to the following Kafka resources:

  • StrimziPodSet for managing the Kafka broker pods.
  • Service account used by the Kafka pods.
  • PodDisruptionBudget configured for the Kafka brokers.
<kafka_cluster_name>-kafka-<pod_id>

Name given to the following Kafka resources:

  • Pods created by the StrimziPodSet.
  • ConfigMaps with Kafka broker configuration.
<kafka_cluster_name>-kafka-brokers
Service needed to have DNS resolve the Kafka broker pods IP addresses directly.
<kafka_cluster_name>-kafka-bootstrap
Service can be used as bootstrap servers for Kafka clients connecting from within the OpenShift cluster.
<kafka_cluster_name>-kafka-external-bootstrap
Bootstrap service for clients connecting from outside the OpenShift cluster. This resource is created only when an external listener is enabled. The old service name will be used for backwards compatibility when the listener name is external and port is 9094.
<kafka_cluster_name>-kafka-<pod_id>
Service used to route traffic from outside the OpenShift cluster to individual pods. This resource is created only when an external listener is enabled. The old service name will be used for backwards compatibility when the listener name is external and port is 9094.
<kafka_cluster_name>-kafka-external-bootstrap
Bootstrap route for clients connecting from outside the OpenShift cluster. This resource is created only when an external listener is enabled and set to type route. The old route name will be used for backwards compatibility when the listener name is external and port is 9094.
<kafka_cluster_name>-kafka-<pod_id>
Route for traffic from outside the OpenShift cluster to individual pods. This resource is created only when an external listener is enabled and set to type route. The old route name will be used for backwards compatibility when the listener name is external and port is 9094.
<kafka_cluster_name>-kafka-<listener_name>-bootstrap
Bootstrap service for clients connecting from outside the OpenShift cluster. This resource is created only when an external listener is enabled. The new service name will be used for all other external listeners.
<kafka_cluster_name>-kafka-<listener_name>-<pod_id>
Service used to route traffic from outside the OpenShift cluster to individual pods. This resource is created only when an external listener is enabled. The new service name will be used for all other external listeners.
<kafka_cluster_name>-kafka-<listener_name>-bootstrap
Bootstrap route for clients connecting from outside the OpenShift cluster. This resource is created only when an external listener is enabled and set to type route. The new route name will be used for all other external listeners.
<kafka_cluster_name>-kafka-<listener_name>-<pod_id>
Route for traffic from outside the OpenShift cluster to individual pods. This resource is created only when an external listener is enabled and set to type route. The new route name will be used for all other external listeners.
<kafka_cluster_name>-kafka-config
ConfigMap containing the Kafka ancillary configuration, which is mounted as a volume by the broker pods when the UseStrimziPodSets feature gate is disabled.
<kafka_cluster_name>-kafka-brokers
Secret with Kafka broker keys.
<kafka_cluster_name>-network-policy-kafka
Network policy managing access to the Kafka services.
strimzi-<namespace-name>-<kafka_cluster_name>-kafka-init
Cluster role binding used by the Kafka brokers.
<kafka_cluster_name>-jmx
Secret with JMX username and password used to secure the Kafka broker port. This resource is created only when JMX is enabled in Kafka.
data-<kafka_cluster_name>-kafka-<pod_id>
Persistent Volume Claim for the volume used for storing data for a specific Kafka broker. This resource is created only if persistent storage is selected for provisioning persistent volumes to store data.
data-<id>-<kafka_cluster_name>-kafka-<pod_id>
Persistent Volume Claim for the volume id used for storing data for a specific Kafka broker. This resource is created only if persistent storage is selected for JBOD volumes when provisioning persistent volumes to store data.

Kafka node pools

If you are using Kafka node pools, the resources created apply to the nodes managed in the node pools whether they are operating as brokers, controllers, or both. The naming convention includes the name of the Kafka cluster and the node pool: <kafka_cluster_name>-<pool_name>.

<kafka_cluster_name>-<pool_name>
Name given to the StrimziPodSet for managing the Kafka node pool.
<kafka_cluster_name>-<pool_name>-<pod_id>

Name given to the following Kafka node pool resources:

  • Pods created by the StrimziPodSet.
  • ConfigMaps with Kafka node configuration.
data-<kafka_cluster_name>-<pool_name>-<pod_id>
Persistent Volume Claim for the volume used for storing data for a specific node. This resource is created only if persistent storage is selected for provisioning persistent volumes to store data.
data-<id>-<kafka_cluster_name>-<pool_name>-<pod_id>
Persistent Volume Claim for the volume id used for storing data for a specific node. This resource is created only if persistent storage is selected for JBOD volumes when provisioning persistent volumes to store data.

Entity Operator

These resources are only created if the Entity Operator is deployed using the Cluster Operator.

<kafka_cluster_name>-entity-operator

Name given to the following Entity Operator resources:

  • Deployment with Topic and User Operators.
  • Service account used by the Entity Operator.
  • Network policy managing access to the Entity Operator metrics.
<kafka_cluster_name>-entity-operator-<random_string>
Pod created by the Entity Operator deployment.
<kafka_cluster_name>-entity-topic-operator-config
ConfigMap with ancillary configuration for Topic Operators.
<kafka_cluster_name>-entity-user-operator-config
ConfigMap with ancillary configuration for User Operators.
<kafka_cluster_name>-entity-topic-operator-certs
Secret with Topic Operator keys for communication with Kafka and ZooKeeper.
<kafka_cluster_name>-entity-user-operator-certs
Secret with User Operator keys for communication with Kafka and ZooKeeper.
strimzi-<kafka_cluster_name>-entity-topic-operator
Role binding used by the Entity Topic Operator.
strimzi-<kafka_cluster_name>-entity-user-operator
Role binding used by the Entity User Operator.

Kafka Exporter

These resources are only created if the Kafka Exporter is deployed using the Cluster Operator.

<kafka_cluster_name>-kafka-exporter

Name given to the following Kafka Exporter resources:

  • Deployment with Kafka Exporter.
  • Service used to collect consumer lag metrics.
  • Service account used by the Kafka Exporter.
  • Network policy managing access to the Kafka Exporter metrics.
<kafka_cluster_name>-kafka-exporter-<random_string>
Pod created by the Kafka Exporter deployment.

Cruise Control

These resources are only created if Cruise Control was deployed using the Cluster Operator.

<kafka_cluster_name>-cruise-control

Name given to the following Cruise Control resources:

  • Deployment with Cruise Control.
  • Service used to communicate with Cruise Control.
  • Service account used by the Cruise Control.
<kafka_cluster_name>-cruise-control-<random_string>
Pod created by the Cruise Control deployment.
<kafka_cluster_name>-cruise-control-config
ConfigMap that contains the Cruise Control ancillary configuration, and is mounted as a volume by the Cruise Control pods.
<kafka_cluster_name>-cruise-control-certs
Secret with Cruise Control keys for communication with Kafka and ZooKeeper.
<kafka_cluster_name>-network-policy-cruise-control
Network policy managing access to the Cruise Control service.

6.4. Deploying Kafka Connect

Kafka Connect is an integration toolkit for streaming data between Kafka brokers and other systems using connector plugins. Kafka Connect provides a framework for integrating Kafka with an external data source or target, such as a database or messaging system, for import or export of data using connectors. Connectors are plugins that provide the connection configuration needed.

In Streams for Apache Kafka, Kafka Connect is deployed in distributed mode. Kafka Connect can also work in standalone mode, but this is not supported by Streams for Apache Kafka.

Using the concept of connectors, Kafka Connect provides a framework for moving large amounts of data into and out of your Kafka cluster while maintaining scalability and reliability.

The Cluster Operator manages Kafka Connect clusters deployed using the KafkaConnect resource and connectors created using the KafkaConnector resource.

In order to use Kafka Connect, you need to do the following:

  • Deploy a Kafka Connect cluster
  • Add connectors to integrate with other systems

Note

The term connector is used interchangeably to mean a connector instance running within a Kafka Connect cluster, or a connector class. In this guide, the term connector is used when the meaning is clear from the context.

6.4.1. Deploying Kafka Connect to your OpenShift cluster

This procedure shows how to deploy a Kafka Connect cluster to your OpenShift cluster using the Cluster Operator.

A Kafka Connect cluster deployment is implemented with a configurable number of nodes (also called workers) that distribute the workload of connectors as tasks so that the message flow is highly scalable and reliable.

The deployment uses a YAML file to provide the specification to create a KafkaConnect resource.

Streams for Apache Kafka provides example configuration files. In this procedure, we use the following example file:

  • examples/connect/kafka-connect.yaml
Important

If deploying Kafka Connect clusters to run in parallel, each instance must use unique names for internal Kafka Connect topics. To do this, configure each Kafka Connect instance to replace the defaults.
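
For reference, a minimal sketch along the lines of the example file (the bootstrap address and topic names are illustrative). It also shows how the internal topic names can be overridden so that parallel Kafka Connect clusters use unique names:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect-cluster
spec:
  version: 3.7.0
  replicas: 1
  bootstrapServers: my-cluster-kafka-bootstrap:9093
  tls:
    trustedCertificates:
      - secretName: my-cluster-cluster-ca-cert
        certificate: ca.crt
  config:
    group.id: my-connect-cluster
    offset.storage.topic: my-connect-cluster-offsets
    config.storage.topic: my-connect-cluster-configs
    status.storage.topic: my-connect-cluster-status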

Procedure

  1. Deploy Kafka Connect to your OpenShift cluster. Use the examples/connect/kafka-connect.yaml file to deploy Kafka Connect.

    oc apply -f examples/connect/kafka-connect.yaml
  2. Check the status of the deployment:

    oc get pods -n <my_cluster_operator_namespace>

    Output shows the deployment name and readiness

    NAME                                 READY  STATUS   RESTARTS
    my-connect-cluster-connect-<pod_id>  1/1    Running  0

    my-connect-cluster is the name of the Kafka Connect cluster.

    A pod ID identifies each pod created.

    With the default deployment, you create a single Kafka Connect pod.

    READY shows the number of replicas that are ready/expected. The deployment is successful when the STATUS displays as Running.

6.4.2. List of Kafka Connect cluster resources

The following resources are created by the Cluster Operator in the OpenShift cluster:

<connect_cluster_name>-connect

Name given to the following Kafka Connect resources:

  • StrimziPodSet that creates the Kafka Connect worker node pods.
  • Headless service that provides stable DNS names to the Kafka Connect pods.
  • Service account used by the Kafka Connect pods.
  • Pod disruption budget configured for the Kafka Connect worker nodes.
  • Network policy managing access to the Kafka Connect REST API.
<connect_cluster_name>-connect-<pod_id>
Pods created by the Kafka Connect StrimziPodSet.
<connect_cluster_name>-connect-api
Service which exposes the REST interface for managing the Kafka Connect cluster.
<connect_cluster_name>-connect-config
ConfigMap which contains the Kafka Connect ancillary configuration and is mounted as a volume by the Kafka Connect pods.
strimzi-<namespace-name>-<connect_cluster_name>-connect-init
Cluster role binding used by the Kafka Connect cluster.
<connect_cluster_name>-connect-build
Pod used to build a new container image with additional connector plugins (only when Kafka Connect Build feature is used).
<connect_cluster_name>-connect-dockerfile
ConfigMap with the Dockerfile generated to build the new container image with additional connector plugins (only when the Kafka Connect build feature is used).

6.5. Adding Kafka Connect connectors

Kafka Connect uses connectors to integrate with other systems to stream data. A connector is an instance of a Kafka Connector class, which can be one of the following types:

Source connector
A source connector is a runtime entity that fetches data from an external system and feeds it to Kafka as messages.
Sink connector
A sink connector is a runtime entity that fetches messages from Kafka topics and feeds them to an external system.

Kafka Connect uses a plugin architecture to provide the implementation artifacts for connectors. Plugins allow connections to other systems and provide additional configuration to manipulate data. Plugins include connectors and other components, such as data converters and transforms. A connector operates with a specific type of external system. Each connector defines a schema for its configuration. You supply the configuration to Kafka Connect to create a connector instance within Kafka Connect. Connector instances then define a set of tasks for moving data between systems.

Add connector plugins to Kafka Connect in one of the following ways:

  • Configure Kafka Connect to build a new container image with the plugins automatically
  • Create a new container image with the plugins from the Kafka Connect base image

After plugins have been added to the container image, you can start, stop, and manage connector instances in the following ways:

  • Using KafkaConnector custom resources
  • Using the Kafka Connect REST API

You can also create new connector instances using these options.

6.5.1. Building a new container image with connector plugins automatically

Configure Kafka Connect so that Streams for Apache Kafka automatically builds a new container image with additional connectors. You define the connector plugins using the .spec.build.plugins property of the KafkaConnect custom resource. Streams for Apache Kafka automatically downloads and adds the connector plugins to a new container image. The new image is pushed to the container registry specified in .spec.build.output and automatically used in the Kafka Connect deployment.

Prerequisites

You need to provide your own container registry where images can be pushed to, stored, and pulled from. Streams for Apache Kafka supports private container registries as well as public registries such as Quay or Docker Hub.

Procedure

  1. Configure the KafkaConnect custom resource by specifying the container registry in .spec.build.output, and additional connectors in .spec.build.plugins:

    apiVersion: kafka.strimzi.io/v1beta2
    kind: KafkaConnect
    metadata:
      name: my-connect-cluster
    spec: 1
      #...
      build:
        output: 2
          type: docker
          image: my-registry.io/my-org/my-connect-cluster:latest
          pushSecret: my-registry-credentials
        plugins: 3
          - name: connector-1
            artifacts:
              - type: tgz
                url: <url_to_download_connector_1_artifact>
                sha512sum: <SHA-512_checksum_of_connector_1_artifact>
          - name: connector-2
            artifacts:
              - type: jar
                url: <url_to_download_connector_2_artifact>
                sha512sum: <SHA-512_checksum_of_connector_2_artifact>
      #...
    1
    The specification for the Kafka Connect cluster.
    2
    (Required) Configuration of the container registry where new images are pushed.
    3
    (Required) List of connector plugins and their artifacts to add to the new container image. Each plugin must be configured with at least one artifact.
  2. Create or update the resource:

    oc apply -f <kafka_connect_configuration_file>
  3. Wait for the new container image to build, and for the Kafka Connect cluster to be deployed.
  4. Use the Kafka Connect REST API or KafkaConnector custom resources to use the connector plugins you added.

6.5.2. Building a new container image with connector plugins from the Kafka Connect base image

Create a custom Docker image with connector plugins from the Kafka Connect base image. Add the connector plugins to the /opt/kafka/plugins directory of the image.

You can use the Kafka container image on Red Hat Ecosystem Catalog as a base image for creating your own custom image with additional connector plugins.

At startup, the Streams for Apache Kafka version of Kafka Connect loads any third-party connector plugins contained in the /opt/kafka/plugins directory.

Procedure

  1. Create a new Dockerfile using registry.redhat.io/amq-streams/kafka-37-rhel9:2.7.0 as the base image:

    FROM registry.redhat.io/amq-streams/kafka-37-rhel9:2.7.0
    USER root:root
    COPY ./my-plugins/ /opt/kafka/plugins/
    USER 1001

    Example plugins file

    $ tree ./my-plugins/
    ./my-plugins/
    ├── debezium-connector-mongodb
    │   ├── bson-<version>.jar
    │   ├── CHANGELOG.md
    │   ├── CONTRIBUTE.md
    │   ├── COPYRIGHT.txt
    │   ├── debezium-connector-mongodb-<version>.jar
    │   ├── debezium-core-<version>.jar
    │   ├── LICENSE.txt
    │   ├── mongodb-driver-core-<version>.jar
    │   ├── README.md
    │   └── # ...
    ├── debezium-connector-mysql
    │   ├── CHANGELOG.md
    │   ├── CONTRIBUTE.md
    │   ├── COPYRIGHT.txt
    │   ├── debezium-connector-mysql-<version>.jar
    │   ├── debezium-core-<version>.jar
    │   ├── LICENSE.txt
    │   ├── mysql-binlog-connector-java-<version>.jar
    │   ├── mysql-connector-java-<version>.jar
    │   ├── README.md
    │   └── # ...
    └── debezium-connector-postgres
        ├── CHANGELOG.md
        ├── CONTRIBUTE.md
        ├── COPYRIGHT.txt
        ├── debezium-connector-postgres-<version>.jar
        ├── debezium-core-<version>.jar
        ├── LICENSE.txt
        ├── postgresql-<version>.jar
        ├── protobuf-java-<version>.jar
        ├── README.md
        └── # ...

    The COPY command points to the plugin files to copy to the container image.

    This example adds plugins for Debezium connectors (MongoDB, MySQL, and PostgreSQL), though not all files are listed for brevity. Debezium running in Kafka Connect looks the same as any other Kafka Connect task.

  2. Build the container image.
  3. Push your custom image to your container registry.
  4. Point to the new container image.

    You can point to the image in one of the following ways:

    • Edit the KafkaConnect.spec.image property of the KafkaConnect custom resource.

      If set, this property overrides the STRIMZI_KAFKA_CONNECT_IMAGES environment variable in the Cluster Operator.

      apiVersion: kafka.strimzi.io/v1beta2
      kind: KafkaConnect
      metadata:
        name: my-connect-cluster
      spec: 1
        #...
        image: my-new-container-image 2
        config: 3
          #...
      1
      The specification for the Kafka Connect cluster.
      2
      The docker image for Kafka Connect pods.
      3
      Configuration of the Kafka Connect workers (not connectors).
    • Edit the STRIMZI_KAFKA_CONNECT_IMAGES environment variable in the install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml file to point to the new container image, and then reinstall the Cluster Operator.

6.5.3. Deploying KafkaConnector resources

Deploy KafkaConnector resources to manage connectors. The KafkaConnector custom resource offers an OpenShift-native approach to management of connectors by the Cluster Operator. You don’t need to send HTTP requests to manage connectors, as with the Kafka Connect REST API. You manage a running connector instance by updating its corresponding KafkaConnector resource, and then applying the updates. The Cluster Operator updates the configurations of the running connector instances. You remove a connector by deleting its corresponding KafkaConnector.

KafkaConnector resources must be deployed to the same namespace as the Kafka Connect cluster they link to.

In the configuration shown in this procedure, the autoRestart feature is enabled (enabled: true) for automatic restarts of failed connectors and tasks. You can also annotate the KafkaConnector resource to restart a connector or restart a connector task manually.
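
For example, a manual restart can be triggered by annotating the KafkaConnector resource (the connector name and task ID are illustrative); the Cluster Operator removes the annotation once the restart has been performed:

oc annotate kafkaconnector my-source-connector strimzi.io/restart="true"
oc annotate kafkaconnector my-source-connector strimzi.io/restart-task="0"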

Example connectors

You can use your own connectors or try the examples provided by Streams for Apache Kafka. Up until Apache Kafka 3.1.0, example file connector plugins were included with Apache Kafka. Starting from the 3.1.1 and 3.2.0 releases of Apache Kafka, the examples need to be added to the plugin path as any other connector.

Streams for Apache Kafka provides an example KafkaConnector configuration file (examples/connect/source-connector.yaml) for the example file connector plugins, which creates the following connector instances as KafkaConnector resources:

  • A FileStreamSourceConnector instance that reads each line from the Kafka license file (the source) and writes the data as messages to a single Kafka topic.
  • A FileStreamSinkConnector instance that reads messages from the Kafka topic and writes the messages to a temporary file (the sink).

We use the example file to create connectors in this procedure.

Note

The example connectors are not intended for use in a production environment.

Prerequisites

  • A Kafka Connect deployment
  • The Cluster Operator is running

Procedure

  1. Add the FileStreamSourceConnector and FileStreamSinkConnector plugins to Kafka Connect in one of the following ways:

    • Build a new container image with the connector plugins automatically (see Section 6.5.1)
    • Build a new container image with the connector plugins from the Kafka Connect base image (see Section 6.5.2)

  2. Set the strimzi.io/use-connector-resources annotation to true in the Kafka Connect configuration.

    apiVersion: kafka.strimzi.io/v1beta2
    kind: KafkaConnect
    metadata:
      name: my-connect-cluster
      annotations:
        strimzi.io/use-connector-resources: "true"
    spec:
        # ...

    With the KafkaConnector resources enabled, the Cluster Operator watches for them.

  3. Edit the examples/connect/source-connector.yaml file:

    Example KafkaConnector source connector configuration

    apiVersion: kafka.strimzi.io/v1beta2
    kind: KafkaConnector
    metadata:
      name: my-source-connector  1
      labels:
        strimzi.io/cluster: my-connect-cluster 2
    spec:
      class: org.apache.kafka.connect.file.FileStreamSourceConnector 3
      tasksMax: 2 4
      autoRestart: 5
        enabled: true
      config: 6
        file: "/opt/kafka/LICENSE" 7
        topic: my-topic 8
        # ...

    1
    Name of the KafkaConnector resource, which is used as the name of the connector. Use any name that is valid for an OpenShift resource.
    2
    Name of the Kafka Connect cluster to create the connector instance in. Connectors must be deployed to the same namespace as the Kafka Connect cluster they link to.
    3
    Full name of the connector class. This should be present in the image being used by the Kafka Connect cluster.
    4
    Maximum number of Kafka Connect tasks that the connector can create.
    5
    Enables automatic restarts of failed connectors and tasks. By default, the number of restarts is indefinite, but you can set a maximum on the number of automatic restarts using the maxRestarts property.
    6
    Connector configuration as key-value pairs.
    7
    Location of the external data file. In this example, we’re configuring the FileStreamSourceConnector to read from the /opt/kafka/LICENSE file.
    8
    Kafka topic to publish the source data to.
  4. Create the source KafkaConnector in your OpenShift cluster:

    oc apply -f examples/connect/source-connector.yaml
  5. Create an examples/connect/sink-connector.yaml file:

    touch examples/connect/sink-connector.yaml
  6. Paste the following YAML into the sink-connector.yaml file:

    apiVersion: kafka.strimzi.io/v1beta2
    kind: KafkaConnector
    metadata:
      name: my-sink-connector
      labels:
        strimzi.io/cluster: my-connect-cluster
    spec:
      class: org.apache.kafka.connect.file.FileStreamSinkConnector 1
      tasksMax: 2
      config: 2
        file: "/tmp/my-file" 3
        topics: my-topic 4
    1
    Full name or alias of the connector class. This should be present in the image being used by the Kafka Connect cluster.
    2
    Connector configuration as key-value pairs.
    3
    Temporary file to write the messages to (the sink).
    4
    Kafka topic to read the source data from.
  7. Create the sink KafkaConnector in your OpenShift cluster:

    oc apply -f examples/connect/sink-connector.yaml
  8. Check that the connector resources were created:

    oc get kctr --selector strimzi.io/cluster=<my_connect_cluster> -o name
    
    my-source-connector
    my-sink-connector

    Replace <my_connect_cluster> with the name of your Kafka Connect cluster.

  9. From a Kafka broker pod, run kafka-console-consumer.sh to read the messages that were written to the topic by the source connector:

    oc exec <my_kafka_cluster>-kafka-0 -i -t -- bin/kafka-console-consumer.sh --bootstrap-server <my_kafka_cluster>-kafka-bootstrap.NAMESPACE.svc:9092 --topic my-topic --from-beginning

    Replace <my_kafka_cluster> with the name of your Kafka cluster.

Source and sink connector configuration options

The connector configuration is defined in the spec.config property of the KafkaConnector resource.

The FileStreamSourceConnector and FileStreamSinkConnector classes support the same configuration options as the Kafka Connect REST API. Other connectors support different configuration options.

Table 6.1. Configuration options for the FileStreamSourceConnector class

Name   | Type   | Default value | Description
-------|--------|---------------|--------------------------------------------------------------
file   | String | Null          | Source file to read messages from. If not specified, the standard input is used.
topic  | List   | Null          | The Kafka topic to publish data to.

Table 6.2. Configuration options for the FileStreamSinkConnector class

Name         | Type   | Default value | Description
-------------|--------|---------------|--------------------------------------------------------------
file         | String | Null          | Destination file to write messages to. If not specified, the standard output is used.
topics       | List   | Null          | One or more Kafka topics to read data from.
topics.regex | String | Null          | A regular expression matching one or more Kafka topics to read data from.

6.5.4. Exposing the Kafka Connect API

Use the Kafka Connect REST API as an alternative to using KafkaConnector resources to manage connectors. The Kafka Connect REST API is available as a service running on <connect_cluster_name>-connect-api:8083, where <connect_cluster_name> is the name of your Kafka Connect cluster. The service is created when you create a Kafka Connect instance.

The operations supported by the Kafka Connect REST API are described in the Apache Kafka Connect API documentation.
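
For example, to check the status of a deployed connector from inside the OpenShift cluster (the connector name is illustrative):

curl http://my-connect-cluster-connect-api:8083/connectors/my-source-connector/status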

Note

The strimzi.io/use-connector-resources annotation enables KafkaConnectors. If you applied the annotation to your KafkaConnect resource configuration, you need to remove it to use the Kafka Connect API. Otherwise, manual changes made directly using the Kafka Connect REST API are reverted by the Cluster Operator.

You can add the connector configuration as a JSON object.

Example curl request to add connector configuration

curl -X POST \
  http://my-connect-cluster-connect-api:8083/connectors \
  -H 'Content-Type: application/json' \
  -d '{ "name": "my-source-connector",
    "config":
    {
      "connector.class":"org.apache.kafka.connect.file.FileStreamSourceConnector",
      "file": "/opt/kafka/LICENSE",
      "topic":"my-topic",
      "tasksMax": "4",
      "type": "source"
    }
}'

The API is only accessible within the OpenShift cluster. If you want to make the Kafka Connect API accessible to applications running outside of the OpenShift cluster, you can expose it manually by creating one of the following features:

  • LoadBalancer or NodePort type services
  • Ingress resources (Kubernetes only)
  • OpenShift routes (OpenShift only)
Note

The connection is insecure, so enable external access with caution.

If you decide to create services, use the labels from the selector of the <connect_cluster_name>-connect-api service to configure the pods to which the service will route the traffic:

Selector configuration for the service

# ...
selector:
  strimzi.io/cluster: my-connect-cluster 1
  strimzi.io/kind: KafkaConnect
  strimzi.io/name: my-connect-cluster-connect 2
#...

1
Name of the Kafka Connect custom resource in your OpenShift cluster.
2
Name of the Kafka Connect deployment created by the Cluster Operator.

You must also create a NetworkPolicy that allows HTTP requests from external clients.

Example NetworkPolicy to allow requests to the Kafka Connect API

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: my-custom-connect-network-policy
spec:
  ingress:
  - from:
    - podSelector: 1
        matchLabels:
          app: my-connector-manager
    ports:
    - port: 8083
      protocol: TCP
  podSelector:
    matchLabels:
      strimzi.io/cluster: my-connect-cluster
      strimzi.io/kind: KafkaConnect
      strimzi.io/name: my-connect-cluster-connect
  policyTypes:
  - Ingress

1
The label of the pod that is allowed to connect to the API.

To add the connector configuration from outside the cluster, use the URL of the resource that exposes the API in your curl command.

6.5.5. Limiting access to the Kafka Connect API

It is crucial to restrict access to the Kafka Connect API only to trusted users to prevent unauthorized actions and potential security issues. The Kafka Connect API provides extensive capabilities for altering connector configurations, which makes it all the more important to take security precautions. Someone with access to the Kafka Connect API could potentially obtain sensitive information that an administrator may assume is secure.

The Kafka Connect REST API can be accessed by anyone who has authenticated access to the OpenShift cluster and knows the endpoint URL, which includes the hostname/IP address and port number.

For example, suppose an organization uses a Kafka Connect cluster and connectors to stream sensitive data from a customer database to a central database. The administrator uses a configuration provider plugin to store sensitive information related to connecting to the customer database and the central database, such as database connection details and authentication credentials. The configuration provider protects this sensitive information from being exposed to unauthorized users. However, someone who has access to the Kafka Connect API can still obtain access to the customer database without the consent of the administrator. They can do this by setting up a fake database and configuring a connector to connect to it. They then modify the connector configuration to point to the customer database, but instead of sending the data to the central database, they send it to the fake database. By configuring the connector to connect to the fake database, the login details and credentials for connecting to the customer database are intercepted, even though they are stored securely in the configuration provider.

If you are using the KafkaConnector custom resources, then by default the OpenShift RBAC rules permit only OpenShift cluster administrators to make changes to connectors. You can also designate non-cluster administrators to manage Streams for Apache Kafka resources. With KafkaConnector resources enabled in your Kafka Connect configuration, changes made directly using the Kafka Connect REST API are reverted by the Cluster Operator. If you are not using the KafkaConnector resource, the default RBAC rules do not limit access to the Kafka Connect API. If you want to limit direct access to the Kafka Connect REST API using OpenShift RBAC, you need to enable and use the KafkaConnector resources.

For improved security, we recommend configuring the following properties for the Kafka Connect API:

org.apache.kafka.disallowed.login.modules

(Kafka 3.4 or later) Set the org.apache.kafka.disallowed.login.modules Java system property to prevent the use of insecure login modules. For example, specifying com.sun.security.auth.module.JndiLoginModule prevents the use of the Kafka JndiLoginModule.

Example configuration for disallowing login modules

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect-cluster
  annotations:
    strimzi.io/use-connector-resources: "true"
spec:
  # ...
  jvmOptions:
    javaSystemProperties:
      - name: org.apache.kafka.disallowed.login.modules
        value: com.sun.security.auth.module.JndiLoginModule, org.apache.kafka.common.security.kerberos.KerberosLoginModule
# ...

Only allow trusted login modules and follow the latest advice from Kafka for the version you are using. As a best practice, you should explicitly disallow insecure login modules in your Kafka Connect configuration by using the org.apache.kafka.disallowed.login.modules system property.

connector.client.config.override.policy

Set the connector.client.config.override.policy property to None to prevent connector configurations from overriding the Kafka Connect configuration and the consumers and producers it uses.

Example configuration to specify connector override policy

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect-cluster
  annotations:
    strimzi.io/use-connector-resources: "true"
spec:
  # ...
  config:
    connector.client.config.override.policy: None
# ...

6.5.6. Switching from using the Kafka Connect API to using KafkaConnector custom resources

You can switch from using the Kafka Connect API to using KafkaConnector custom resources to manage your connectors. To make the switch, do the following in the order shown:

  1. Deploy KafkaConnector resources with the configuration to create your connector instances.
  2. Enable KafkaConnector resources in your Kafka Connect configuration by setting the strimzi.io/use-connector-resources annotation to true, as shown in the example command after the warning.
Warning

If you enable KafkaConnector resources before creating them, the Cluster Operator deletes all existing connectors, because none of them are yet described by KafkaConnector resources.
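
For example, assuming a KafkaConnect resource named my-connect-cluster, you might enable KafkaConnector resources with the following command:

oc annotate kafkaconnect my-connect-cluster strimzi.io/use-connector-resources=true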

To switch from using KafkaConnector resources to using the Kafka Connect API, first remove the annotation that enables the KafkaConnector resources from your Kafka Connect configuration. Otherwise, manual changes made directly using the Kafka Connect REST API are reverted by the Cluster Operator.

When making the switch, check the status of the KafkaConnect resource. The value of metadata.generation (the current version of the deployment) must match status.observedGeneration (the latest reconciliation of the resource). When the Kafka Connect cluster is Ready, you can delete the KafkaConnector resources.
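
For example, assuming a KafkaConnect resource named my-connect-cluster, you might compare the two values as follows:

oc get kafkaconnect my-connect-cluster -o jsonpath='{.metadata.generation}{"\n"}{.status.observedGeneration}{"\n"}'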

6.6. Deploying Kafka MirrorMaker

Kafka MirrorMaker replicates data between two or more Kafka clusters, within or across data centers. This process is called mirroring to avoid confusion with the concept of Kafka partition replication. MirrorMaker consumes messages from a source cluster and republishes those messages to a target cluster.

Data replication across clusters supports scenarios that require the following:

  • Recovery of data in the event of a system failure
  • Consolidation of data from multiple source clusters for centralized analysis
  • Restriction of data access to a specific cluster
  • Provision of data at a specific location to improve latency

6.6.1. Deploying Kafka MirrorMaker to your OpenShift cluster

This procedure shows how to deploy a Kafka MirrorMaker cluster to your OpenShift cluster using the Cluster Operator.

The deployment uses a YAML file to provide the specification to create a KafkaMirrorMaker or KafkaMirrorMaker2 resource depending on the version of MirrorMaker deployed. MirrorMaker 2 is based on Kafka Connect and uses its configuration properties.

Important

Kafka MirrorMaker 1 (referred to as just MirrorMaker in the documentation) has been deprecated in Apache Kafka 3.0.0 and will be removed in Apache Kafka 4.0.0. As a result, the KafkaMirrorMaker custom resource which is used to deploy Kafka MirrorMaker 1 has been deprecated in Streams for Apache Kafka as well. The KafkaMirrorMaker resource will be removed from Streams for Apache Kafka when we adopt Apache Kafka 4.0.0. As a replacement, use the KafkaMirrorMaker2 custom resource with the IdentityReplicationPolicy.

Streams for Apache Kafka provides example configuration files. In this procedure, we use the following example files:

  • examples/mirror-maker/kafka-mirror-maker.yaml
  • examples/mirror-maker/kafka-mirror-maker-2.yaml
Important

If deploying MirrorMaker 2 clusters to run in parallel, using the same target Kafka cluster, each instance must use unique names for internal Kafka Connect topics. To do this, configure each MirrorMaker 2 instance to replace the defaults.
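
The internal topics are the Kafka Connect topics that store connector offsets, configuration, and status. The following sketch shows the kind of overrides involved, set in the config of the cluster used as the connectCluster (the target cluster); the my-mm2-a resource name, cluster alias, bootstrap address, group ID, and topic names are example values only.

Example configuration for unique internal Kafka Connect topic names

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaMirrorMaker2
metadata:
  name: my-mm2-a
spec:
  # ...
  connectCluster: "my-target-cluster"
  clusters:
    - alias: "my-target-cluster"
      bootstrapServers: my-target-cluster-kafka-bootstrap:9092
      config:
        group.id: mirrormaker2-cluster-a
        offset.storage.topic: mirrormaker2-cluster-a-offsets
        config.storage.topic: mirrormaker2-cluster-a-configs
        status.storage.topic: mirrormaker2-cluster-a-status
  # ...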

Procedure

  1. Deploy Kafka MirrorMaker to your OpenShift cluster:

    For MirrorMaker:

    oc apply -f examples/mirror-maker/kafka-mirror-maker.yaml

    For MirrorMaker 2:

    oc apply -f examples/mirror-maker/kafka-mirror-maker-2.yaml
  2. Check the status of the deployment:

    oc get pods -n <my_cluster_operator_namespace>

    Output shows the deployment name and readiness

    NAME                                    READY  STATUS   RESTARTS
    my-mirror-maker-mirror-maker-<pod_id>   1/1    Running  1
    my-mm2-cluster-mirrormaker2-<pod_id>    1/1    Running  1

    my-mirror-maker is the name of the Kafka MirrorMaker cluster. my-mm2-cluster is the name of the Kafka MirrorMaker 2 cluster.

    A pod ID identifies each pod created.

    With the default deployment, you install a single MirrorMaker or MirrorMaker 2 pod.

    READY shows the number of replicas that are ready/expected. The deployment is successful when the STATUS displays as Running.

6.6.2. List of Kafka MirrorMaker 2 cluster resources

The following resources are created by the Cluster Operator in the OpenShift cluster:

<mirrormaker2_cluster_name>-mirrormaker2

Name given to the following MirrorMaker 2 resources:

  • StrimziPodSet that creates the MirrorMaker 2 worker node pods.
  • Headless service that provides stable DNS names to the MirrorMaker 2 pods.
  • Service account used by the MirrorMaker 2 pods.
  • Pod disruption budget configured for the MirrorMaker 2 worker nodes.
  • Network Policy managing access to the MirrorMaker 2 REST API.
<mirrormaker2_cluster_name>-mirrormaker2-<pod_id>
Pods created by the MirrorMaker 2 StrimziPodSet.
<mirrormaker2_cluster_name>-mirrormaker2-api
Service which exposes the REST interface for managing the MirrorMaker 2 cluster.
<mirrormaker2_cluster_name>-mirrormaker2-config
ConfigMap which contains the MirrorMaker 2 ancillary configuration and is mounted as a volume by the MirrorMaker 2 pods.
strimzi-<namespace-name>-<mirrormaker2_cluster_name>-mirrormaker2-init
Cluster role binding used by the MirrorMaker 2 cluster.

6.6.3. List of Kafka MirrorMaker cluster resources

The following resources are created by the Cluster Operator in the OpenShift cluster:

<mirrormaker_cluster_name>-mirror-maker

Name given to the following MirrorMaker resources:

  • Deployment which is responsible for creating the MirrorMaker pods.
  • Service account used by the MirrorMaker nodes.
  • Pod Disruption Budget configured for the MirrorMaker worker nodes.
<mirrormaker_cluster_name>-mirror-maker-config
ConfigMap which contains ancillary configuration for MirrorMaker, and is mounted as a volume by the MirrorMaker pods.

6.7. Deploying Kafka Bridge

Kafka Bridge provides an API for integrating HTTP-based clients with a Kafka cluster.

6.7.1. Deploying Kafka Bridge to your OpenShift cluster

This procedure shows how to deploy a Kafka Bridge cluster to your OpenShift cluster using the Cluster Operator.

The deployment uses a YAML file to provide the specification to create a KafkaBridge resource.

Streams for Apache Kafka provides example configuration files. In this procedure, we use the following example file:

  • examples/bridge/kafka-bridge.yaml

Procedure

  1. Deploy Kafka Bridge to your OpenShift cluster:

    oc apply -f examples/bridge/kafka-bridge.yaml
  2. Check the status of the deployment:

    oc get pods -n <my_cluster_operator_namespace>

    Output shows the deployment name and readiness

    NAME                       READY  STATUS   RESTARTS
    my-bridge-bridge-<pod_id>  1/1    Running  0

    my-bridge is the name of the Kafka Bridge cluster.

    A pod ID identifies each pod created.

    With the default deployment, you install a single Kafka Bridge pod.

    READY shows the number of replicas that are ready/expected. The deployment is successful when the STATUS displays as Running.

6.7.2. Exposing the Kafka Bridge service to your local machine

Use port forwarding to expose the Streams for Apache Kafka Bridge service to your local machine on http://localhost:8080.

Note

Port forwarding is only suitable for development and testing purposes.

Procedure

  1. List the names of the pods in your OpenShift cluster:

    oc get pods -o name
    
    pod/kafka-consumer
    # ...
    pod/my-bridge-bridge-<pod_id>
  2. Connect to the Kafka Bridge pod on port 8080:

    oc port-forward pod/my-bridge-bridge-<pod_id> 8080:8080 &
    Note

    If port 8080 on your local machine is already in use, use an alternative HTTP port, such as 8008.

API requests are now forwarded from port 8080 on your local machine to port 8080 in the Kafka Bridge pod.
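
For example, with the port forward in place, you can call the Kafka Bridge API from your local machine. The /topics endpoint returns the list of topics available through the bridge:

curl -X GET http://localhost:8080/topics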

6.7.3. Accessing the Kafka Bridge outside of OpenShift

After deployment, the Streams for Apache Kafka Bridge can only be accessed by applications running in the same OpenShift cluster. These applications use the <kafka_bridge_name>-bridge-service service to access the API.

If you want to make the Kafka Bridge accessible to applications running outside of the OpenShift cluster, you can expose it manually by creating one of the following resources:

  • LoadBalancer or NodePort type services
  • Ingress resources (Kubernetes only)
  • OpenShift routes (OpenShift only)

If you decide to create services, use the labels from the selector of the <kafka_bridge_name>-bridge-service service to configure the pods to which the service will route the traffic:

  # ...
  selector:
    strimzi.io/cluster: kafka-bridge-name 1
    strimzi.io/kind: KafkaBridge
  #...
1
Name of the Kafka Bridge custom resource in your OpenShift cluster.
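
For illustration, on OpenShift you might expose the Kafka Bridge service with a route. The route name my-bridge-route is an example value, and the service name assumes a KafkaBridge resource named my-bridge:

Example route for the Kafka Bridge service

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: my-bridge-route
spec:
  to:
    kind: Service
    name: my-bridge-bridge-service
  port:
    targetPort: 8080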

6.7.4. List of Kafka Bridge cluster resources

The following resources are created by the Cluster Operator in the OpenShift cluster:

<bridge_cluster_name>-bridge
Deployment which is responsible for creating the Kafka Bridge worker node pods.
<bridge_cluster_name>-bridge-service
Service which exposes the REST interface of the Kafka Bridge cluster.
<bridge_cluster_name>-bridge-config
ConfigMap which contains the Kafka Bridge ancillary configuration and is mounted as a volume by the Kafka Bridge pods.
<bridge_cluster_name>-bridge
Pod Disruption Budget configured for the Kafka Bridge worker nodes.

6.8. Alternative standalone deployment options for Streams for Apache Kafka operators

You can perform a standalone deployment of the Topic Operator and User Operator. Consider a standalone deployment of these operators if you are using a Kafka cluster that is not managed by the Cluster Operator.

You deploy the operators to OpenShift, but the Kafka cluster they connect to can be running outside of OpenShift. For example, you might be using Kafka as a managed service. You adjust the deployment configuration of the standalone operator to match the address of your Kafka cluster.

6.8.1. Deploying the standalone Topic Operator

This procedure shows how to deploy the Topic Operator in unidirectional mode as a standalone component for topic management. You can use a standalone Topic Operator with a Kafka cluster that is not managed by the Cluster Operator. Unidirectional topic management maintains topics solely through KafkaTopic resources. For more information on unidirectional topic management, see Section 10.1, “Topic management modes”. Alternate configuration is also shown for deploying the Topic Operator in bidirectional mode.

Standalone deployment files are provided with Streams for Apache Kafka. Use the 05-Deployment-strimzi-topic-operator.yaml deployment file to deploy the Topic Operator. Add or set the environment variables needed to make a connection to a Kafka cluster.

The Topic Operator watches for KafkaTopic resources in a single namespace. You specify the namespace to watch, and the connection to the Kafka cluster, in the Topic Operator configuration. A single Topic Operator can watch a single namespace. One namespace should be watched by only one Topic Operator. If you want to use more than one Topic Operator, configure each of them to watch different namespaces. In this way, you can use Topic Operators with multiple Kafka clusters.

Prerequisites

  • You are running a Kafka cluster for the Topic Operator to connect to.

    As long as the standalone Topic Operator is correctly configured for connection, the Kafka cluster can be running on a bare-metal environment, a virtual machine, or as a managed cloud application service.

Procedure

  1. Edit the env properties in the install/topic-operator/05-Deployment-strimzi-topic-operator.yaml standalone deployment file.

    Example standalone Topic Operator deployment configuration

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: strimzi-topic-operator
      labels:
        app: strimzi
    spec:
      # ...
      template:
        # ...
        spec:
          # ...
          containers:
            - name: strimzi-topic-operator
              # ...
              env:
                - name: STRIMZI_NAMESPACE 1
                  valueFrom:
                    fieldRef:
                      fieldPath: metadata.namespace
                - name: STRIMZI_KAFKA_BOOTSTRAP_SERVERS 2
                  value: my-kafka-bootstrap-address:9092
                - name: STRIMZI_RESOURCE_LABELS 3
                  value: "strimzi.io/cluster=my-cluster"
                - name: STRIMZI_FULL_RECONCILIATION_INTERVAL_MS 4
                  value: "120000"
                - name: STRIMZI_LOG_LEVEL 5
                  value: INFO
                - name: STRIMZI_TLS_ENABLED 6
                  value: "false"
                - name: STRIMZI_JAVA_OPTS 7
                  value: "-Xmx=512M -Xms=256M"
                - name: STRIMZI_JAVA_SYSTEM_PROPERTIES 8
                  value: "-Djavax.net.debug=verbose -DpropertyName=value"
                - name: STRIMZI_PUBLIC_CA 9
                  value: "false"
                - name: STRIMZI_TLS_AUTH_ENABLED 10
                  value: "false"
                - name: STRIMZI_SASL_ENABLED 11
                  value: "false"
                - name: STRIMZI_SASL_USERNAME 12
                  value: "admin"
                - name: STRIMZI_SASL_PASSWORD 13
                  value: "password"
                - name: STRIMZI_SASL_MECHANISM 14
                  value: "scram-sha-512"
                - name: STRIMZI_SECURITY_PROTOCOL 15
                  value: "SSL"
                - name: STRIMZI_USE_FINALIZERS
                  value: "false" 16

    1
    The OpenShift namespace for the Topic Operator to watch for KafkaTopic resources. Specify the namespace of the Kafka cluster.
    2
    The host and port pair of the bootstrap broker address to discover and connect to all brokers in the Kafka cluster. Use a comma-separated list to specify two or three broker addresses in case a server is down.
    3
    The label to identify the KafkaTopic resources managed by the Topic Operator. This does not have to be the name of the Kafka cluster. It can be the label assigned to the KafkaTopic resource. If you deploy more than one Topic Operator, the labels must be unique for each. That is, the operators cannot manage the same resources.
    4
    The interval between periodic reconciliations, in milliseconds. The default is 120000 (2 minutes).
    5
    The level for printing logging messages. You can set the level to ERROR, WARNING, INFO, DEBUG, or TRACE.
    6
    Enables TLS support for encrypted communication with the Kafka brokers.
    7
    (Optional) The Java options used by the JVM running the Topic Operator.
    8
    (Optional) The debugging (-D) options set for the Topic Operator.
    9
    (Optional) Skips the generation of trust store certificates if TLS is enabled through STRIMZI_TLS_ENABLED. If this environment variable is enabled, the brokers must use a public trusted certificate authority for their TLS certificates. The default is false.
    10
    (Optional) Generates key store certificates for mTLS authentication. Setting this to false disables client authentication with mTLS to the Kafka brokers. The default is true.
    11
    (Optional) Enables SASL support for client authentication when connecting to Kafka brokers. The default is false.
    12
    (Optional) The SASL username for client authentication. Mandatory only if SASL is enabled through STRIMZI_SASL_ENABLED.
    13
    (Optional) The SASL password for client authentication. Mandatory only if SASL is enabled through STRIMZI_SASL_ENABLED.
    14
    (Optional) The SASL mechanism for client authentication. Mandatory only if SASL is enabled through STRIMZI_SASL_ENABLED. You can set the value to plain, scram-sha-256, or scram-sha-512.
    15
    (Optional) The security protocol used for communication with Kafka brokers. The default value is "PLAINTEXT". You can set the value to PLAINTEXT, SSL, SASL_PLAINTEXT, or SASL_SSL.
    16
    Set STRIMZI_USE_FINALIZERS to false if you do not want to use finalizers to control topic deletion.
  2. If you want to connect to Kafka brokers that are using certificates from a public certificate authority, set STRIMZI_PUBLIC_CA to true. For example, set this property to true if you are using the Amazon MSK (Managed Streaming for Apache Kafka) service.
  3. If you enabled mTLS with the STRIMZI_TLS_ENABLED environment variable, specify the keystore and truststore used to authenticate connection to the Kafka cluster.

    Example mTLS configuration

    # ....
    env:
      - name: STRIMZI_TRUSTSTORE_LOCATION 1
        value: "/path/to/truststore.p12"
      - name: STRIMZI_TRUSTSTORE_PASSWORD 2
        value: "TRUSTSTORE-PASSWORD"
      - name: STRIMZI_KEYSTORE_LOCATION 3
        value: "/path/to/keystore.p12"
      - name: STRIMZI_KEYSTORE_PASSWORD 4
        value: "KEYSTORE-PASSWORD"
    # ...

    1
    The truststore contains the public keys of the Certificate Authorities used to sign the Kafka and ZooKeeper server certificates.
    2
    The password for accessing the truststore.
    3
    The keystore contains the private key for mTLS authentication.
    4
    The password for accessing the keystore.
  4. Apply the changes to the Deployment configuration to deploy the Topic Operator.
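
    For example, assuming the standard layout of the release artifacts, you can apply the installation files with:

    oc create -f install/topic-operator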
  5. Check the status of the deployment:

    oc get deployments

    Output shows the deployment name and readiness

    NAME                    READY  UP-TO-DATE  AVAILABLE
    strimzi-topic-operator  1/1    1           1

    READY shows the number of replicas that are ready/expected. The deployment is successful when the AVAILABLE output shows 1.

6.8.1.1. Deploying the standalone Topic Operator for bidirectional topic management

Bidirectional topic management requires ZooKeeper for cluster management, and maintains topics through KafkaTopic resources and within the Kafka cluster. If you want to switch to using the Topic Operator in this mode, follow these steps to deploy the standalone Topic Operator.

Note

As the feature gate enabling the Topic Operator to run in unidirectional mode progresses to General Availability, bidirectional mode will be phased out. This transition is aimed at enhancing the user experience, particularly in supporting Kafka in KRaft mode.

  1. Undeploy the current standalone Topic Operator.

    Retain the KafkaTopic resources, which are picked up by the Topic Operator when it is deployed again.

  2. Edit the Deployment configuration for the standalone Topic Operator to include ZooKeeper-related environment variables:

    • STRIMZI_ZOOKEEPER_CONNECT
    • STRIMZI_ZOOKEEPER_SESSION_TIMEOUT_MS
    • TC_ZK_CONNECTION_TIMEOUT_MS
    • STRIMZI_USE_ZOOKEEPER_TOPIC_STORE

      It is the presence or absence of the ZooKeeper variables that defines whether the bidirectional Topic Operator is used. Unidirectional topic management does not use ZooKeeper. If ZooKeeper environment variables are not present, the unidirectional Topic Operator is used. Otherwise, the bidirectional Topic Operator is used.

      Other environment variables that are not used in unidirectional mode can be added if required:

    • STRIMZI_REASSIGN_THROTTLE
    • STRIMZI_REASSIGN_VERIFY_INTERVAL_MS
    • STRIMZI_TOPIC_METADATA_MAX_ATTEMPTS
    • STRIMZI_TOPICS_PATH
    • STRIMZI_STORE_TOPIC
    • STRIMZI_STORE_NAME
    • STRIMZI_APPLICATION_ID
    • STRIMZI_STALE_RESULT_TIMEOUT_MS

      Example standalone Topic Operator deployment configuration for bidirectional topic management

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: strimzi-topic-operator
        labels:
          app: strimzi
      spec:
        # ...
        template:
          # ...
          spec:
            # ...
            containers:
              - name: strimzi-topic-operator
                # ...
                env:
                  - name: STRIMZI_NAMESPACE
                    valueFrom:
                      fieldRef:
                        fieldPath: metadata.namespace
                  - name: STRIMZI_KAFKA_BOOTSTRAP_SERVERS
                    value: my-kafka-bootstrap-address:9092
                  - name: STRIMZI_RESOURCE_LABELS
                    value: "strimzi.io/cluster=my-cluster"
                  - name: STRIMZI_ZOOKEEPER_CONNECT 1
                    value: my-cluster-zookeeper-client:2181
                  - name: STRIMZI_ZOOKEEPER_SESSION_TIMEOUT_MS 2
                    value: "18000"
                  - name: STRIMZI_TOPIC_METADATA_MAX_ATTEMPTS 3
                    value: "6"
                  - name: STRIMZI_FULL_RECONCILIATION_INTERVAL_MS
                    value: "120000"
                  - name: STRIMZI_LOG_LEVEL
                    value: INFO
                  - name: STRIMZI_TLS_ENABLED
                    value: "false"
                  - name: STRIMZI_JAVA_OPTS
                    value: "-Xmx=512M -Xms=256M"
                  - name: STRIMZI_JAVA_SYSTEM_PROPERTIES
                    value: "-Djavax.net.debug=verbose -DpropertyName=value"
                  - name: STRIMZI_PUBLIC_CA
                    value: "false"
                  - name: STRIMZI_TLS_AUTH_ENABLED
                    value: "false"
                  - name: STRIMZI_SASL_ENABLED
                    value: "false"
                  - name: STRIMZI_SASL_USERNAME
                    value: "admin"
                  - name: STRIMZI_SASL_PASSWORD
                    value: "password"
                  - name: STRIMZI_SASL_MECHANISM
                    value: "scram-sha-512"
                  - name: STRIMZI_SECURITY_PROTOCOL
                    value: "SSL"

      1
      (ZooKeeper) The host and port pair of the address to connect to the ZooKeeper cluster. This must be the same ZooKeeper cluster that your Kafka cluster is using.
      2
      (ZooKeeper) The ZooKeeper session timeout, in milliseconds. The default is 18000 (18 seconds).
      3
      The number of attempts at getting topic metadata from Kafka. The time between each attempt is defined as an exponential backoff. Consider increasing this value when topic creation takes more time due to the number of partitions or replicas. The default is 6 attempts.
  3. Apply the changes to the Deployment configuration to deploy the Topic Operator.

6.8.2. Deploying the standalone User Operator

This procedure shows how to deploy the User Operator as a standalone component for user management. You can use a standalone User Operator with a Kafka cluster that is not managed by the Cluster Operator.

A standalone deployment can operate with any Kafka cluster.

Standalone deployment files are provided with Streams for Apache Kafka. Use the 05-Deployment-strimzi-user-operator.yaml deployment file to deploy the User Operator. Add or set the environment variables needed to make a connection to a Kafka cluster.

The User Operator watches for KafkaUser resources in a single namespace. You specify the namespace to watch, and the connection to the Kafka cluster, in the User Operator configuration. A single User Operator can watch a single namespace. One namespace should be watched by only one User Operator. If you want to use more than one User Operator, configure each of them to watch different namespaces. In this way, you can use the User Operator with multiple Kafka clusters.

Prerequisites

  • You are running a Kafka cluster for the User Operator to connect to.

    As long as the standalone User Operator is correctly configured for connection, the Kafka cluster can be running on a bare-metal environment, a virtual machine, or as a managed cloud application service.

Procedure

  1. Edit the following env properties in the install/user-operator/05-Deployment-strimzi-user-operator.yaml standalone deployment file.

    Example standalone User Operator deployment configuration

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: strimzi-user-operator
      labels:
        app: strimzi
    spec:
      # ...
      template:
        # ...
        spec:
          # ...
          containers:
            - name: strimzi-user-operator
              # ...
              env:
                - name: STRIMZI_NAMESPACE 1
                  valueFrom:
                    fieldRef:
                      fieldPath: metadata.namespace
                - name: STRIMZI_KAFKA_BOOTSTRAP_SERVERS 2
                  value: my-kafka-bootstrap-address:9092
                - name: STRIMZI_CA_CERT_NAME 3
                  value: my-cluster-clients-ca-cert
                - name: STRIMZI_CA_KEY_NAME 4
                  value: my-cluster-clients-ca
                - name: STRIMZI_LABELS 5
                  value: "strimzi.io/cluster=my-cluster"
                - name: STRIMZI_FULL_RECONCILIATION_INTERVAL_MS 6
                  value: "120000"
                - name: STRIMZI_WORK_QUEUE_SIZE 7
                  value: "10000"
                - name: STRIMZI_CONTROLLER_THREAD_POOL_SIZE 8
                  value: "10"
                - name: STRIMZI_USER_OPERATIONS_THREAD_POOL_SIZE 9
                  value: "4"
                - name: STRIMZI_LOG_LEVEL 10
                  value: INFO
                - name: STRIMZI_GC_LOG_ENABLED 11
                  value: "true"
                - name: STRIMZI_CA_VALIDITY 12
                  value: "365"
                - name: STRIMZI_CA_RENEWAL 13
                  value: "30"
                - name: STRIMZI_JAVA_OPTS 14
                  value: "-Xmx=512M -Xms=256M"
                - name: STRIMZI_JAVA_SYSTEM_PROPERTIES 15
                  value: "-Djavax.net.debug=verbose -DpropertyName=value"
                - name: STRIMZI_SECRET_PREFIX 16
                  value: "kafka-"
                - name: STRIMZI_ACLS_ADMIN_API_SUPPORTED 17
                  value: "true"
                - name: STRIMZI_MAINTENANCE_TIME_WINDOWS 18
                  value: '* * 8-10 * * ?;* * 14-15 * * ?'
                - name: STRIMZI_KAFKA_ADMIN_CLIENT_CONFIGURATION 19
                  value: |
                    default.api.timeout.ms=120000
                    request.timeout.ms=60000

    1
    The OpenShift namespace for the User Operator to watch for KafkaUser resources. Only one namespace can be specified.
    2
    The host and port pair of the bootstrap broker address to discover and connect to all brokers in the Kafka cluster. Use a comma-separated list to specify two or three broker addresses in case a server is down.
    3
    The OpenShift Secret that contains the public key (ca.crt) value of the CA (certificate authority) that signs new user certificates for mTLS authentication.
    4
    The OpenShift Secret that contains the private key (ca.key) value of the CA that signs new user certificates for mTLS authentication.
    5
    The label to identify the KafkaUser resources managed by the User Operator. This does not have to be the name of the Kafka cluster. It can be the label assigned to the KafkaUser resource. If you deploy more than one User Operator, the labels must be unique for each. That is, the operators cannot manage the same resources.
    6
    The interval between periodic reconciliations, in milliseconds. The default is 120000 (2 minutes).
    7
    The size of the controller event queue. The size of the queue should be at least as big as the maximum number of users you expect the User Operator to manage. The default is 1024.
    8
    The size of the worker pool for reconciling users. A bigger pool might require more resources, but it will also handle more KafkaUser resources in parallel. The default is 50.
    9
    The size of the worker pool for Kafka Admin API and OpenShift operations. A bigger pool might require more resources, but it will also handle more KafkaUser resources in parallel. The default is 4.
    10
    The level for printing logging messages. You can set the level to ERROR, WARNING, INFO, DEBUG, or TRACE.
    11
    Enables garbage collection (GC) logging. The default is true.
    12
    The validity period for the CA. The default is 365 days.
    13
    The renewal period for the CA. The renewal period is measured backwards from the expiry date of the current certificate. The default is 30 days to initiate certificate renewal before the old certificates expire.
    14
    (Optional) The Java options used by the JVM running the User Operator.
    15
    (Optional) The debugging (-D) options set for the User Operator.
    16
    (Optional) Prefix for the names of OpenShift secrets created by the User Operator.
    17
    (Optional) Indicates whether the Kafka cluster supports management of authorization ACL rules using the Kafka Admin API. When set to false, the User Operator will reject all resources with simple authorization ACL rules. This helps to avoid unnecessary exceptions in the Kafka cluster logs. The default is true.
    18
    (Optional) Semicolon-separated list of cron expressions defining the maintenance time windows during which expiring user certificates are renewed.
    19
    (Optional) Configuration options for configuring the Kafka Admin client used by the User Operator in the properties format.
  2. If you are using mTLS to connect to the Kafka cluster, specify the secrets used to authenticate connection. Otherwise, go to the next step.

    Example mTLS configuration

    # ....
    env:
      - name: STRIMZI_CLUSTER_CA_CERT_SECRET_NAME 1
        value: my-cluster-cluster-ca-cert
      - name: STRIMZI_EO_KEY_SECRET_NAME 2
        value: my-cluster-entity-operator-certs
    # ..."

    1
    The OpenShift Secret that contains the public key (ca.crt) value of the CA that signs Kafka broker certificates.
    2
    The OpenShift Secret that contains the certificate public key (entity-operator.crt) and private key (entity-operator.key) that is used for mTLS authentication against the Kafka cluster.
  3. Deploy the User Operator.

    oc create -f install/user-operator
  4. Check the status of the deployment:

    oc get deployments

    Output shows the deployment name and readiness

    NAME                   READY  UP-TO-DATE  AVAILABLE
    strimzi-user-operator  1/1    1           1

    READY shows the number of replicas that are ready/expected. The deployment is successful when the AVAILABLE output shows 1.
