Using AMQ Streams on OpenShift

Red Hat AMQ 7.6

For use with AMQ Streams 1.4 on OpenShift Container Platform

Abstract

This guide describes how to install, configure, and manage Red Hat AMQ Streams to build a large-scale messaging network.

Chapter 1. Overview of AMQ Streams

AMQ Streams simplifies the process of running Apache Kafka in an OpenShift cluster.

1.1. Kafka capabilities

The underlying data stream-processing capabilities and component architecture of Kafka can deliver:

  • Microservices and other applications to share data with extremely high throughput and low latency
  • Message ordering guarantees
  • Message rewind/replay from data storage to reconstruct an application state
  • Message compaction to remove old records when using a key-value log
  • Horizontal scalability in a cluster configuration
  • Replication of data to control fault tolerance
  • Retention of high volumes of data for immediate access

1.2. Kafka use cases

Kafka’s capabilities make it suitable for:

  • Event-driven architectures
  • Event sourcing to capture changes to the state of an application as a log of events
  • Message brokering
  • Website activity tracking
  • Operational monitoring through metrics
  • Log collection and aggregation
  • Commit logs for distributed systems
  • Stream processing so that applications can respond to data in real time

1.3. How AMQ Streams supports Kafka

AMQ Streams provides container images and Operators for running Kafka on OpenShift. AMQ Streams Operators are fundamental to the running of AMQ Streams. The Operators provided with AMQ Streams are purpose-built with specialist operational knowledge to effectively manage Kafka.

Operators simplify the process of:

  • Deploying and running Kafka clusters
  • Deploying and running Kafka components
  • Configuring access to Kafka
  • Securing access to Kafka
  • Upgrading Kafka
  • Managing brokers
  • Creating and managing topics
  • Creating and managing users

1.4. Operators

AMQ Streams provides Operators for managing a Kafka cluster running within an OpenShift cluster.

Cluster Operator
Deploys and manages Apache Kafka clusters, Kafka Connect, Kafka MirrorMaker, Kafka Bridge, Kafka Exporter, and the Entity Operator
Entity Operator
Comprises the Topic Operator and User Operator
Topic Operator
Manages Kafka topics
User Operator
Manages Kafka users

The Cluster Operator can deploy the Topic Operator and User Operator as part of an Entity Operator configuration at the same time as a Kafka cluster.

Operators within the AMQ Streams architecture

1.5. AMQ Streams installation methods

There are two ways to install AMQ Streams on OpenShift.

Installation artifacts (YAML files)

Download the amq-streams-x.y.z-ocp-install-examples.zip file from the AMQ Streams download site. Next, deploy the YAML installation artifacts to your OpenShift cluster using oc. You start by deploying the Cluster Operator from install/cluster-operator to a single namespace, multiple namespaces, or all namespaces.

Supported versions: OpenShift 3.11 and later

OperatorHub

Use the AMQ Streams Operator in the OperatorHub to deploy the Cluster Operator to a single namespace or all namespaces.

Supported versions: OpenShift 4.x only

For the greatest flexibility, choose the installation artifacts method. Choose the OperatorHub method if you want to install AMQ Streams to OpenShift 4 in a standard configuration using the OpenShift 4 web console. The OperatorHub also allows you to take advantage of automatic updates.

With both methods, the Cluster Operator is deployed to your OpenShift cluster, ready for you to deploy the other components of AMQ Streams, starting with a Kafka cluster, using the example YAML files provided.

AMQ Streams installation artifacts

The AMQ Streams installation artifacts contain various YAML files that can be deployed to OpenShift, using oc, to create custom resources, including:

  • Deployments
  • Custom resource definitions (CRDs)
  • Roles and role bindings
  • Service accounts

YAML installation files are provided for the Cluster Operator, Topic Operator, User Operator, and the Strimzi Admin role.

OperatorHub

In OpenShift 4, the Operator Lifecycle Manager (OLM) helps cluster administrators to install, update, and manage the lifecycle of all Operators and their associated services running across their clusters. The OLM is part of the Operator Framework, an open source toolkit designed to manage Kubernetes-native applications (Operators) in an effective, automated, and scalable way.

The OperatorHub is part of the OpenShift 4 web console. Cluster administrators can use it to discover, install, and upgrade Operators. Operators can be pulled from the OperatorHub, installed on the OpenShift cluster to a single (project) namespace or all (projects) namespaces, and managed by the OLM. Engineering teams can then independently manage the software in development, test, and production environments using the OLM.

Note

The OperatorHub is not available in versions of OpenShift earlier than version 4.

AMQ Streams Operator

The AMQ Streams Operator is available to install from the OperatorHub. Once installed, the AMQ Streams Operator deploys the Cluster Operator to your OpenShift cluster, along with the necessary CRDs and role-based access control (RBAC) resources.

Additional resources

Installing AMQ Streams using the installation artifacts

Installing AMQ Streams from the OperatorHub

1.6. Document Conventions

Replaceables

In this document, replaceable text is styled in monospace and italics.

For example, in the following code, you will want to replace my-namespace with the name of your namespace:

sed -i 's/namespace: .*/namespace: my-namespace/' install/cluster-operator/*RoleBinding*.yaml

Chapter 2. Getting started with AMQ Streams

AMQ Streams is designed to work on all types of OpenShift cluster regardless of distribution, from public and private clouds to local deployments intended for development. AMQ Streams supports a few features which are specific to OpenShift, where such integration benefits OpenShift users and cannot be implemented equivalently using standard OpenShift.

This guide assumes that an OpenShift cluster is available and the oc command-line tool is installed and configured to connect to the running cluster.

AMQ Streams is based on Strimzi 0.17.x. This chapter describes the procedures to deploy AMQ Streams on OpenShift 3.11 and later.

Note

To run the commands in this guide, your cluster user must have the rights to manage role-based access control (RBAC) and CRDs.

2.1. Installing AMQ Streams and deploying components

To install AMQ Streams, download and extract the amq-streams-x.y.z-ocp-install-examples.zip file from the AMQ Streams download site.

The folder contains several YAML files to help you deploy the components of AMQ Streams to OpenShift, perform common operations, and configure your Kafka cluster. The YAML files are referenced throughout this documentation.

The remainder of this chapter provides an overview of each component and instructions for deploying the components to OpenShift using the YAML files provided.

Note

Although container images for AMQ Streams are available in the Red Hat Container Catalog, we recommend that you use the YAML files provided instead.

2.2. Custom resources

Custom resources allow you to configure and introduce changes to a default AMQ Streams deployment. In order to use custom resources, custom resource definitions must first be defined.

Custom resource definitions (CRDs) extend the Kubernetes API, providing definitions to add custom resources to an OpenShift cluster. Custom resources are created as instances of the APIs added by CRDs.

In AMQ Streams, CRDs introduce custom resources specific to AMQ Streams to an OpenShift cluster, such as Kafka, Kafka Connect, Kafka MirrorMaker, and users and topics custom resources. CRDs provide configuration instructions, defining the schemas used to instantiate and manage the AMQ Streams-specific resources. CRDs also allow AMQ Streams resources to benefit from native OpenShift features like CLI accessibility and configuration validation.

CRDs require a one-time installation in a cluster. Depending on the cluster setup, installation typically requires cluster admin privileges.

Note

Access to manage custom resources is limited to AMQ Streams administrators.

CRDs and custom resources are defined as YAML files.

A CRD defines a new kind of resource, such as kind:Kafka, within an OpenShift cluster.

The Kubernetes API server allows custom resources to be created based on the kind and understands from the CRD how to validate and store the custom resource when it is added to the OpenShift cluster.

Warning

When CRDs are deleted, custom resources of that type are also deleted. Additionally, the resources created by the custom resource, such as pods and statefulsets, are also deleted.

2.2.1. AMQ Streams custom resource example

Each AMQ Streams-specific custom resource conforms to the schema defined by the CRD for the resource’s kind.

To understand the relationship between a CRD and a custom resource, let’s look at a sample of the CRD for a Kafka topic.

Kafka topic CRD

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata: 1
  name: kafkatopics.kafka.strimzi.io
  labels:
    app: strimzi
spec: 2
  group: kafka.strimzi.io
  version: v1beta1
  scope: Namespaced
  names:
    # ...
    singular: kafkatopic
    plural: kafkatopics
    shortNames:
    - kt 3
  additionalPrinterColumns: 4
      # ...
  subresources:
    status: {} 5
  validation: 6
    openAPIV3Schema:
      properties:
        spec:
          type: object
          properties:
            partitions:
              type: integer
              minimum: 1
            replicas:
              type: integer
              minimum: 1
              maximum: 32767
      # ...

1
The metadata for the topic CRD, its name and a label to identify the CRD.
2
The specification for this CRD, including the group (domain) name, the plural name and the supported schema version, which are used in the URL to access the API of the topic. The other names are used to identify instance resources in the CLI. For example, oc get kafkatopic my-topic or oc get kafkatopics.
3
The shortname can be used in CLI commands. For example, oc get kt can be used as an abbreviation instead of oc get kafkatopic.
4
The information presented when using a get command on the custom resource.
5
The current status of the CRD as described in the schema reference for the resource.
6
openAPIV3Schema validation provides validation for the creation of topic custom resources. For example, a topic requires at least one partition and one replica.
Note

You can identify the CRD YAML files supplied with the AMQ Streams installation files, because the file names contain an index number followed by ‘Crd’.

Here is a corresponding example of a KafkaTopic custom resource.

Kafka topic custom resource

apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaTopic 1
metadata:
  name: my-topic
  labels:
    strimzi.io/cluster: my-cluster 2
spec: 3
  partitions: 1
  replicas: 1
  config:
    retention.ms: 7200000
    segment.bytes: 1073741824
status:
  conditions: 4
  - lastTransitionTime: "2019-08-20T11:37:00.706Z"
    status: "True"
    type: Ready
  observedGeneration: 1
  # ...

1
The kind and apiVersion identify the CRD of which the custom resource is an instance.
2
A label, applicable only to KafkaTopic and KafkaUser resources, that defines the name of the Kafka cluster (which is the same as the name of the Kafka resource) to which a topic or user belongs.

The name is used by the Topic Operator and User Operator to identify the Kafka cluster when creating a topic or user.

3
The spec shows the number of partitions and replicas for the topic as well as the configuration parameters for the topic itself. In this example, the retention period for a message to remain in the topic and the segment file size for the log are specified.
4
Status conditions for the KafkaTopic resource. The type condition changed to Ready at the lastTransitionTime.

Custom resources can be applied to a cluster through the platform CLI. When the custom resource is created, it uses the same validation as the built-in resources of the Kubernetes API.

After a KafkaTopic custom resource is created, the Topic Operator is notified and corresponding Kafka topics are created in AMQ Streams.
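
For example, assuming the KafkaTopic custom resource above is saved in a file named my-topic.yaml (a hypothetical file name), you can apply it and then list the KafkaTopic resources:

# Create the KafkaTopic custom resource
oc apply -f my-topic.yaml

# List the KafkaTopic resources known to the Topic Operator
oc get kafkatopics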

2.2.2. AMQ Streams custom resource status

The status property of an AMQ Streams custom resource publishes information about the resource to users and tools that need it.

Several resources have a status property, as described in the following list.

  • Kafka (Section B.68, “KafkaStatus schema reference”): the Kafka cluster.
  • KafkaConnect (Section B.86, “KafkaConnectStatus schema reference”): the Kafka Connect cluster, if deployed.
  • KafkaConnectS2I (Section B.90, “KafkaConnectS2IStatus schema reference”): the Kafka Connect cluster with Source-to-Image support, if deployed.
  • KafkaConnector (Section B.123, “KafkaConnectorStatus schema reference”): KafkaConnector resources, if deployed.
  • KafkaMirrorMaker (Section B.112, “KafkaMirrorMakerStatus schema reference”): the Kafka MirrorMaker tool, if deployed.
  • KafkaTopic (Section B.93, “KafkaTopicStatus schema reference”): Kafka topics in your Kafka cluster.
  • KafkaUser (Section B.105, “KafkaUserStatus schema reference”): Kafka users in your Kafka cluster.
  • KafkaBridge (Section B.120, “KafkaBridgeStatus schema reference”): the AMQ Streams Kafka Bridge, if deployed.

The status property of a resource provides information on the resource’s:

  • Current state, in the status.conditions property
  • Last observed generation, in the status.observedGeneration property

The status property also provides resource-specific information. For example:

  • KafkaConnectStatus provides the REST API endpoint for Kafka Connect connectors.
  • KafkaUserStatus provides the user name of the Kafka user and the Secret in which their credentials are stored.
  • KafkaBridgeStatus provides the HTTP address at which external client applications can access the Bridge service.

A resource’s current state is useful for tracking progress related to the resource achieving its desired state, as defined by the spec property. The status conditions provide the time and reason the state of the resource changed and details of events preventing or delaying the operator from realizing the resource’s desired state.

The last observed generation is the generation of the resource that was last reconciled by the Cluster Operator. If the value of observedGeneration is different from the value of metadata.generation, the operator has not yet processed the latest update to the resource. If these values are the same, the status information reflects the most recent changes to the resource.
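
For illustration, you can compare the two generation values from the command line; a minimal sketch, assuming a Kafka cluster named my-cluster:

# Print the latest generation of the resource and the generation last reconciled by the Cluster Operator
oc get kafka my-cluster -o jsonpath='{.metadata.generation}{" "}{.status.observedGeneration}{"\n"}'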

AMQ Streams creates and maintains the status of custom resources, periodically evaluating the current state of the custom resource and updating its status accordingly. When performing an update on a custom resource using oc edit, for example, its status is not editable. Moreover, changing the status would not affect the configuration of the Kafka cluster.

Here we see the status property specified for a Kafka custom resource.

Kafka custom resource with status

apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
spec:
  # ...
status:
  conditions: 1
  - lastTransitionTime: 2019-07-23T23:46:57+0000
    status: "True"
    type: Ready 2
  observedGeneration: 4 3
  listeners: 4
  - addresses:
    - host: my-cluster-kafka-bootstrap.myproject.svc
      port: 9092
    type: plain
  - addresses:
    - host: my-cluster-kafka-bootstrap.myproject.svc
      port: 9093
    certificates:
    - |
      -----BEGIN CERTIFICATE-----
      ...
      -----END CERTIFICATE-----
    type: tls
  - addresses:
    - host: 172.29.49.180
      port: 9094
    certificates:
    - |
      -----BEGIN CERTIFICATE-----
      ...
      -----END CERTIFICATE-----
    type: external
    # ...

1
Status conditions describe criteria related to the status that cannot be deduced from the existing resource information, or are specific to the instance of a resource.
2
The Ready condition indicates whether the Cluster Operator currently considers the Kafka cluster able to handle traffic.
3
The observedGeneration indicates the generation of the Kafka custom resource that was last reconciled by the Cluster Operator.
4
The listeners describe the current Kafka bootstrap addresses by type.
Important

The address in the custom resource status for external listeners with type nodeport is currently not supported.

Note

The Kafka bootstrap addresses listed in the status do not signify that those endpoints or the Kafka cluster is in a ready state.

Accessing status information

You can access status information for a resource from the command line. For more information, see Section 16.1, “Checking the status of a custom resource”.
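
For example, the following commands retrieve parts of the status for a Kafka cluster named my-cluster; this is a minimal sketch of what Section 16.1 describes in full:

# Show the status conditions of the Kafka resource
oc get kafka my-cluster -o jsonpath='{.status.conditions}'

# Show the bootstrap addresses published in the listeners status
oc get kafka my-cluster -o jsonpath='{.status.listeners}'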

2.3. Cluster Operator

The Cluster Operator is responsible for deploying and managing Apache Kafka clusters within an OpenShift cluster.

2.3.1. Cluster Operator

AMQ Streams uses the Cluster Operator to deploy and manage clusters for:

  • Kafka (including ZooKeeper, Entity Operator and Kafka Exporter)
  • Kafka Connect
  • Kafka MirrorMaker
  • Kafka Bridge

Custom resources are used to deploy the clusters.

For example, to deploy a Kafka cluster:

  • A Kafka resource with the cluster configuration is created within the OpenShift cluster.
  • The Cluster Operator deploys a corresponding Kafka cluster, based on what is declared in the Kafka resource.

The Cluster Operator can also deploy (through configuration of the Kafka resource):

  • A Topic Operator to provide operator-style topic management through KafkaTopic custom resources
  • A User Operator to provide operator-style user management through KafkaUser custom resources

The Topic Operator and User Operator function within the Entity Operator on deployment.

Example architecture for the Cluster Operator

2.3.2. Watch options for a Cluster Operator deployment

When the Cluster Operator is running, it starts to watch for updates of Kafka resources.

Depending on the deployment, the Cluster Operator can watch Kafka resources from:

  • A single namespace
  • Multiple namespaces
  • All namespaces

Note

AMQ Streams provides example YAML files to make the deployment process easier.

The Cluster Operator watches for changes to the following resources:

  • Kafka for the Kafka cluster.
  • KafkaConnect for the Kafka Connect cluster.
  • KafkaConnectS2I for the Kafka Connect cluster with Source-to-Image (S2I) support.
  • KafkaConnector for creating and managing connectors in a Kafka Connect cluster.
  • KafkaMirrorMaker for the Kafka MirrorMaker instance.
  • KafkaBridge for the Kafka Bridge instance.

When one of these resources is created in the OpenShift cluster, the operator gets the cluster description from the resource and starts creating a new cluster for the resource by creating the necessary OpenShift resources, such as StatefulSets, Services and ConfigMaps.

Each time a Kafka resource is updated, the operator performs corresponding updates on the OpenShift resources that make up the cluster for the resource.

Resources are either patched or deleted, and then recreated in order to make the cluster for the resource reflect the desired state of the cluster. This operation might cause a rolling update that might lead to service disruption.

When a resource is deleted, the operator undeploys the cluster and deletes all related OpenShift resources.

2.3.3. Deploying the Cluster Operator to watch a single namespace

Prerequisites

  • This procedure requires use of an OpenShift user account which is able to create CustomResourceDefinitions, ClusterRoles and ClusterRoleBindings. Use of Role-Based Access Control (RBAC) in the OpenShift cluster usually means that permission to create, edit, and delete these resources is limited to OpenShift cluster administrators, such as system:admin.
  • Modify the installation files according to the namespace the Cluster Operator is going to be installed in.

    On Linux, use:

    sed -i 's/namespace: .*/namespace: my-namespace/' install/cluster-operator/*RoleBinding*.yaml

    On MacOS, use:

    sed -i '' 's/namespace: .*/namespace: my-namespace/' install/cluster-operator/*RoleBinding*.yaml

Procedure

  • Deploy the Cluster Operator:

    oc apply -f install/cluster-operator -n my-namespace

2.3.4. Deploying the Cluster Operator to watch multiple namespaces

Prerequisites

  • This procedure requires use of an OpenShift user account which is able to create CustomResourceDefinitions, ClusterRoles and ClusterRoleBindings. Use of Role-Based Access Control (RBAC) in the OpenShift cluster usually means that permission to create, edit, and delete these resources is limited to OpenShift cluster administrators, such as system:admin.
  • Edit the installation files according to the namespace the Cluster Operator is going to be installed in.

    On Linux, use:

    sed -i 's/namespace: .*/namespace: my-namespace/' install/cluster-operator/*RoleBinding*.yaml

    On MacOS, use:

    sed -i '' 's/namespace: .*/namespace: my-namespace/' install/cluster-operator/*RoleBinding*.yaml

Procedure

  1. Edit the file install/cluster-operator/050-Deployment-strimzi-cluster-operator.yaml and in the environment variable STRIMZI_NAMESPACE list all the namespaces where Cluster Operator should watch for resources. For example:

    apiVersion: apps/v1
    kind: Deployment
    spec:
      # ...
      template:
        spec:
          serviceAccountName: strimzi-cluster-operator
          containers:
          - name: strimzi-cluster-operator
            image: registry.redhat.io/amq7/amq-streams-rhel7-operator:1.4.0
            imagePullPolicy: IfNotPresent
            env:
            - name: STRIMZI_NAMESPACE
              value: watched-namespace-1,watched-namespace-2,watched-namespace-3
  2. For all namespaces which should be watched by the Cluster Operator (watched-namespace-1, watched-namespace-2, watched-namespace-3 in the above example), install the RoleBindings. Replace the watched-namespace with the namespace used in the previous step.

    This can be done using oc apply:

    oc apply -f install/cluster-operator/020-RoleBinding-strimzi-cluster-operator.yaml -n watched-namespace
    oc apply -f install/cluster-operator/031-RoleBinding-strimzi-cluster-operator-entity-operator-delegation.yaml -n watched-namespace
    oc apply -f install/cluster-operator/032-RoleBinding-strimzi-cluster-operator-topic-operator-delegation.yaml -n watched-namespace
  3. Deploy the Cluster Operator

    This can be done using oc apply:

    oc apply -f install/cluster-operator -n my-namespace

2.3.5. Deploying the Cluster Operator to watch all namespaces

You can configure the Cluster Operator to watch AMQ Streams resources across all namespaces in your OpenShift cluster. When running in this mode, the Cluster Operator automatically manages clusters in any new namespaces that are created.

Prerequisites

  • This procedure requires use of an OpenShift user account which is able to create CustomResourceDefinitions, ClusterRoles and ClusterRoleBindings. Use of Role-Based Access Control (RBAC) in the OpenShift cluster usually means that permission to create, edit, and delete these resources is limited to OpenShift cluster administrators, such as system:admin.
  • Your OpenShift cluster is running.

Procedure

  1. Configure the Cluster Operator to watch all namespaces:

    1. Edit the 050-Deployment-strimzi-cluster-operator.yaml file.
    2. Set the value of the STRIMZI_NAMESPACE environment variable to *.

      apiVersion: apps/v1
      kind: Deployment
      spec:
        # ...
        template:
          spec:
            # ...
            serviceAccountName: strimzi-cluster-operator
            containers:
            - name: strimzi-cluster-operator
              image: registry.redhat.io/amq7/amq-streams-rhel7-operator:1.4.0
              imagePullPolicy: IfNotPresent
              env:
              - name: STRIMZI_NAMESPACE
                value: "*"
              # ...
  2. Create ClusterRoleBindings that grant cluster-wide access to all namespaces to the Cluster Operator.

    Use the oc create clusterrolebinding command:

    oc create clusterrolebinding strimzi-cluster-operator-namespaced --clusterrole=strimzi-cluster-operator-namespaced --serviceaccount my-namespace:strimzi-cluster-operator
    oc create clusterrolebinding strimzi-cluster-operator-entity-operator-delegation --clusterrole=strimzi-entity-operator --serviceaccount my-namespace:strimzi-cluster-operator
    oc create clusterrolebinding strimzi-cluster-operator-topic-operator-delegation --clusterrole=strimzi-topic-operator --serviceaccount my-namespace:strimzi-cluster-operator

    Replace my-namespace with the namespace in which you want to install the Cluster Operator.

  3. Deploy the Cluster Operator to your OpenShift cluster.

    Use the oc apply command:

    oc apply -f install/cluster-operator -n my-namespace

2.3.6. Deploying the Cluster Operator from the OperatorHub

You can deploy the Cluster Operator to your OpenShift cluster by installing the AMQ Streams Operator from the OperatorHub. The OperatorHub is available in OpenShift 4 only.

Prerequisites

  • The Red Hat Operators OperatorSource is enabled in your OpenShift cluster. If you can see Red Hat Operators in the OperatorHub, the correct OperatorSource is enabled. For more information, see the Operators guide.
  • Installation requires a user with sufficient privileges to install Operators from the OperatorHub.

Procedure

  1. In the OpenShift 4 web console, click Operators > OperatorHub.
  2. Search or browse for the AMQ Streams Operator, in the Streaming & Messaging category.

    Image: The AMQ Streams Operator in the OperatorHub in OpenShift 4
  3. Click the AMQ Streams tile and then, in the sidebar on the right, click Install.
  4. On the Create Operator Subscription screen, choose from the following installation and update options:

    • Installation Mode: Choose to install the AMQ Streams Operator to all (projects) namespaces in the cluster (the default option) or a specific (project) namespace. It is good practice to use namespaces to separate functions. We recommend that you install the Operator to its own namespace, separate from the namespace that will contain the Kafka cluster and other AMQ Streams components.
    • Approval Strategy: By default, the AMQ Streams Operator is automatically upgraded to the latest AMQ Streams version by the Operator Lifecycle Manager (OLM). Optionally, select Manual if you want to manually approve future upgrades. For more information, see the Operators guide in the OpenShift documentation.
  5. Click Subscribe; the AMQ Streams Operator is installed to your OpenShift cluster.

    The AMQ Streams Operator deploys the Cluster Operator, CRDs, and role-based access control (RBAC) resources to the selected namespace, or to all namespaces.

  6. On the Installed Operators screen, check the progress of the installation. The AMQ Streams Operator is ready to use when its status changes to InstallSucceeded.

Next, you can deploy the other components of AMQ Streams, starting with a Kafka cluster, using the YAML example files.

2.4. Kafka cluster

You can use AMQ Streams to deploy an ephemeral or persistent Kafka cluster to OpenShift. When installing Kafka, AMQ Streams also installs a ZooKeeper cluster and adds the necessary configuration to connect Kafka with ZooKeeper.

You can also use it to deploy Kafka Exporter.

Ephemeral cluster
In general, an ephemeral (that is, temporary) Kafka cluster is suitable for development and testing purposes, not for production. This deployment uses emptyDir volumes for storing broker information (for ZooKeeper) and topics or partitions (for Kafka). Using an emptyDir volume means that its content is strictly related to the pod life cycle and is deleted when the pod goes down.
Persistent cluster
A persistent Kafka cluster uses PersistentVolumes to store ZooKeeper and Kafka data. The PersistentVolume is acquired using a PersistentVolumeClaim to make it independent of the actual type of the PersistentVolume. For example, it can use Amazon EBS volumes in Amazon AWS deployments without any changes in the YAML files. The PersistentVolumeClaim can use a StorageClass to trigger automatic volume provisioning.

AMQ Streams includes several examples for deploying a Kafka cluster.

  • kafka-persistent.yaml deploys a persistent cluster with three ZooKeeper and three Kafka nodes.
  • kafka-jbod.yaml deploys a persistent cluster with three ZooKeeper and three Kafka nodes (each using multiple persistent volumes).
  • kafka-persistent-single.yaml deploys a persistent cluster with a single ZooKeeper node and a single Kafka node.
  • kafka-ephemeral.yaml deploys an ephemeral cluster with three ZooKeeper and three Kafka nodes.
  • kafka-ephemeral-single.yaml deploys an ephemeral cluster with three ZooKeeper nodes and a single Kafka node.

The example clusters are named my-cluster by default. The cluster name is defined by the name of the resource and cannot be changed after the cluster has been deployed. To change the cluster name before you deploy the cluster, edit the Kafka.metadata.name property of the resource in the relevant YAML file.

apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
# ...

2.4.1. Deploying the Kafka cluster

You can deploy an ephemeral or persistent Kafka cluster to OpenShift on the command line.

Prerequisites

  • The Cluster Operator is deployed.

Procedure

  1. If you plan to use the cluster for development or testing purposes, you can create and deploy an ephemeral cluster using oc apply.

    oc apply -f examples/kafka/kafka-ephemeral.yaml
  2. If you plan to use the cluster in production, create and deploy a persistent cluster using oc apply.

    oc apply -f examples/kafka/kafka-persistent.yaml

2.5. Kafka Connect

Kafka Connect is a tool for streaming data between Apache Kafka and external systems. It provides a framework for moving large amounts of data into and out of your Kafka cluster while maintaining scalability and reliability. Kafka Connect is typically used to integrate Kafka with external databases and storage and messaging systems.

In Kafka Connect, a source connector is a runtime entity that fetches data from an external system and feeds it to Kafka as messages. A sink connector is a runtime entity that fetches messages from Kafka topics and feeds them to an external system. The workload of connectors is divided into tasks. Tasks are distributed among nodes (also called workers), which form a Connect cluster. This allows the message flow to be highly scalable and reliable.

Each connector is an instance of a particular connector class that knows how to communicate with the relevant external system in terms of messages. Connectors are available for many external systems, or you can develop your own.

The term connector is used interchangeably to mean a connector instance running within a Kafka Connect cluster, or a connector class. This guide uses the term connector when the meaning is clear from the context.

AMQ Streams allows you to:

  • Create a Kafka Connect image containing the connectors you want
  • Deploy and manage a Kafka Connect cluster running within OpenShift using a KafkaConnect resource
  • Run connectors within your Kafka Connect cluster, optionally managed using KafkaConnector resources

Kafka Connect includes the following built-in connectors for moving file-based data into and out of your Kafka cluster.

  • FileStreamSourceConnector: transfers data to your Kafka cluster from a file (the source).
  • FileStreamSinkConnector: transfers data from your Kafka cluster to a file (the sink).

To use other connector classes, you need to prepare connector images by following one of these procedures:

  • Creating a container image from the Kafka Connect base image
  • Creating a container image using OpenShift builds and Source-to-Image

The Cluster Operator can use images that you create to deploy a Kafka Connect cluster to your OpenShift cluster.

A Kafka Connect cluster is implemented as a Deployment with a configurable number of workers.

You can create and manage connectors using KafkaConnector resources or manually using the Kafka Connect REST API, which is available on port 8083 as the <connect-cluster-name>-connect-api service. The operations supported by the REST API are described in the Apache Kafka documentation.
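
For example, from a pod inside the OpenShift cluster you can reach the REST API through the service; a minimal sketch, assuming a Kafka Connect cluster named my-connect-cluster in the same namespace:

# List the connectors running in the Kafka Connect cluster
curl http://my-connect-cluster-connect-api:8083/connectors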

2.5.1. Deploying Kafka Connect to your cluster

You can deploy a Kafka Connect cluster to your OpenShift cluster by using the Cluster Operator.

Procedure

  • Use the oc apply command to create a KafkaConnect resource based on the kafka-connect.yaml file:

    oc apply -f examples/kafka-connect/kafka-connect.yaml

2.5.2. Extending Kafka Connect with connector plug-ins

The AMQ Streams container images for Kafka Connect include the two built-in file connectors: FileStreamSourceConnector and FileStreamSinkConnector. You can add your own connectors by:

  • Creating a container image from the Kafka Connect base image (manually or using your CI (continuous integration), for example).
  • Creating a container image using OpenShift builds and Source-to-Image (S2I) - available only on OpenShift.

2.5.2.1. Creating a Docker image from the Kafka Connect base image

You can use the Kafka container image on Red Hat Container Catalog as a base image for creating your own custom image with additional connector plug-ins.

The following procedure explains how to create your custom image and add it to the /opt/kafka/plugins directory. At startup, the AMQ Streams version of Kafka Connect loads any third-party connector plug-ins contained in the /opt/kafka/plugins directory.

Procedure

  1. Create a new Dockerfile using registry.redhat.io/amq7/amq-streams-kafka-24-rhel7:1.4.0 as the base image:

    FROM registry.redhat.io/amq7/amq-streams-kafka-24-rhel7:1.4.0
    USER root:root
    COPY ./my-plugins/ /opt/kafka/plugins/
    USER 1001
  2. Build the container image.
  3. Push your custom image to your container registry.
  4. Point to the new container image.

    You can either:

    • Edit the KafkaConnect.spec.image property of the KafkaConnect custom resource.

      If set, this property overrides the STRIMZI_KAFKA_CONNECT_IMAGES variable in the Cluster Operator.

      apiVersion: kafka.strimzi.io/v1beta1
      kind: KafkaConnect
      metadata:
        name: my-connect-cluster
      spec:
        #...
        image: my-new-container-image

      or

    • In the install/cluster-operator/050-Deployment-strimzi-cluster-operator.yaml file, edit the STRIMZI_KAFKA_CONNECT_IMAGES variable to point to the new container image, and then reinstall the Cluster Operator.

2.5.2.2. Creating a container image using OpenShift builds and Source-to-Image

You can use OpenShift builds and the Source-to-Image (S2I) framework to create new container images. An OpenShift build takes a builder image with S2I support, together with source code and binaries provided by the user, and uses them to build a new container image. Once built, container images are stored in OpenShift’s local container image repository and are available for use in deployments.

A Kafka Connect builder image with S2I support is provided on the Red Hat Container Catalog as part of the registry.redhat.io/amq7/amq-streams-kafka-24-rhel7:1.4.0 image. This S2I image takes your binaries (with plug-ins and connectors) and stores them in the /tmp/kafka-plugins/s2i directory. It creates a new Kafka Connect image from this directory, which can then be used with the Kafka Connect deployment. When started using the enhanced image, Kafka Connect loads any third-party plug-ins from the /tmp/kafka-plugins/s2i directory.

Procedure

  1. On the command line, use the oc apply command to create and deploy a Kafka Connect S2I cluster:

    oc apply -f examples/kafka-connect/kafka-connect-s2i.yaml
  2. Create a directory with Kafka Connect plug-ins:

    $ tree ./my-plugins/
    ./my-plugins/
    ├── debezium-connector-mongodb
    │   ├── bson-3.4.2.jar
    │   ├── CHANGELOG.md
    │   ├── CONTRIBUTE.md
    │   ├── COPYRIGHT.txt
    │   ├── debezium-connector-mongodb-0.7.1.jar
    │   ├── debezium-core-0.7.1.jar
    │   ├── LICENSE.txt
    │   ├── mongodb-driver-3.4.2.jar
    │   ├── mongodb-driver-core-3.4.2.jar
    │   └── README.md
    ├── debezium-connector-mysql
    │   ├── CHANGELOG.md
    │   ├── CONTRIBUTE.md
    │   ├── COPYRIGHT.txt
    │   ├── debezium-connector-mysql-0.7.1.jar
    │   ├── debezium-core-0.7.1.jar
    │   ├── LICENSE.txt
    │   ├── mysql-binlog-connector-java-0.13.0.jar
    │   ├── mysql-connector-java-5.1.40.jar
    │   ├── README.md
    │   └── wkb-1.0.2.jar
    └── debezium-connector-postgres
        ├── CHANGELOG.md
        ├── CONTRIBUTE.md
        ├── COPYRIGHT.txt
        ├── debezium-connector-postgres-0.7.1.jar
        ├── debezium-core-0.7.1.jar
        ├── LICENSE.txt
        ├── postgresql-42.0.0.jar
        ├── protobuf-java-2.6.1.jar
        └── README.md
  3. Use the oc start-build command to start a new build of the image using the prepared directory:

    oc start-build my-connect-cluster-connect --from-dir ./my-plugins/
    Note

    The name of the build is the same as the name of the deployed Kafka Connect cluster.

  4. Once the build has finished, the new image is used automatically by the Kafka Connect deployment.

2.5.3. Creating and managing connectors

When you have created a container image for your connector plug-in, you need to create a connector instance in your Kafka Connect cluster. You can then configure, monitor, and manage a running connector instance.

AMQ Streams provides two APIs for creating and managing connectors:

  • KafkaConnector resources (referred to as KafkaConnectors)
  • Kafka Connect REST API

Using the APIs, you can perform the following operations (an example follows the list):

  • Check the status of a connector instance
  • Reconfigure a running connector
  • Increase or decrease the number of tasks for a connector instance
  • Restart failed tasks (not supported by KafkaConnector resource)
  • Pause a connector instance
  • Resume a previously paused connector instance
  • Delete a connector instance
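
For example, to increase the number of tasks for a connector instance managed through a KafkaConnector resource, you can patch the resource; a minimal sketch, assuming the example connector name my-source-connector used later in this chapter:

# Scale the connector to 4 tasks; the Cluster Operator applies the change to Kafka Connect
oc patch kafkaconnector my-source-connector --type merge -p '{"spec":{"tasksMax":4}}'
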
2.5.3.1. KafkaConnector resources

KafkaConnectors allow you to create and manage connector instances for Kafka Connect in an OpenShift-native way, so an HTTP client such as cURL is not required. Like other Kafka resources, you declare a connector’s desired state in a KafkaConnector YAML file that is deployed to your OpenShift cluster to create the connector instance.

You manage a running connector instance by updating its corresponding KafkaConnector, and then applying the updates. You remove a connector by deleting its corresponding KafkaConnector.

To ensure compatibility with earlier versions of AMQ Streams, KafkaConnectors are disabled by default. To enable them for a Kafka Connect cluster, you must use annotations on the KafkaConnect resource. For instructions, see Section 3.2.14, “Enabling KafkaConnector resources”.
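
As a minimal sketch, enabling KafkaConnectors uses the strimzi.io/use-connector-resources annotation on the KafkaConnect resource; see Section 3.2.14 for the full procedure:

apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaConnect
metadata:
  name: my-connect-cluster
  annotations:
    strimzi.io/use-connector-resources: "true"
spec:
  # ...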

When KafkaConnectors are enabled, the Cluster Operator begins to watch for them. It updates the configurations of running connector instances to match the configurations defined in their KafkaConnectors.

AMQ Streams includes an example KafkaConnector, named examples/connector/source-connector.yaml. You can use this example to create and manage a FileStreamSourceConnector.

2.5.3.2. Availability of the Kafka Connect REST API

The Kafka Connect REST API is available on port 8083 as the <connect-cluster-name>-connect-api service.

If KafkaConnectors are enabled, manual changes made directly using the Kafka Connect REST API are reverted by the Cluster Operator.

2.5.4. Deploying a KafkaConnector resource to Kafka Connect

Deploy the example KafkaConnector to a Kafka Connect cluster. The example YAML will create a FileStreamSourceConnector to send each line of the license file to Kafka as a message in a topic named my-topic.

Procedure

  1. Edit the examples/connector/source-connector.yaml file:

    apiVersion: kafka.strimzi.io/v1alpha1
    kind: KafkaConnector
    metadata:
      name: my-source-connector 1
      labels:
        strimzi.io/cluster: my-connect-cluster 2
    spec:
      class: org.apache.kafka.connect.file.FileStreamSourceConnector 3
      tasksMax: 2 4
      config: 5
        file: "/opt/kafka/LICENSE"
        topic: my-topic
        # ...
    1
    Enter a name for the KafkaConnector resource. This will be used as the name of the connector within Kafka Connect. You can choose any name that is valid for an OpenShift resource.
    2
    Enter the name of the Kafka Connect cluster in which to create the connector.
    3
    The name or alias of the connector class. This should be present in the image being used by the Kafka Connect cluster.
    4
    The maximum number of tasks that the connector can create.
    5
    Configuration settings for the connector. Available configuration options depend on the connector class.
  2. Create the KafkaConnector in your OpenShift cluster:

    oc apply -f examples/connector/source-connector.yaml
  3. Check that the resource was created:

    oc get kctr --selector strimzi.io/cluster=my-connect-cluster -o name

2.6. Kafka MirrorMaker

The Cluster Operator deploys one or more Kafka MirrorMaker replicas to replicate data between Kafka clusters. This process is called mirroring to avoid confusion with the Kafka partitions replication concept. The MirrorMaker consumes messages from the source cluster and republishes those messages to the target cluster.

For information about example resources and the format for deploying Kafka MirrorMaker, see Kafka MirrorMaker configuration.
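
A minimal sketch of a KafkaMirrorMaker resource is shown below; the bootstrap addresses, consumer group ID, and whitelist are placeholder values, and the full schema is described in Kafka MirrorMaker configuration:

apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaMirrorMaker
metadata:
  name: my-mirror-maker
spec:
  replicas: 1
  consumer:
    bootstrapServers: my-source-cluster-kafka-bootstrap:9092
    groupId: my-source-group-id
  producer:
    bootstrapServers: my-target-cluster-kafka-bootstrap:9092
  whitelist: ".*"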

2.6.1. Deploying Kafka MirrorMaker

Prerequisites

  • Before deploying Kafka MirrorMaker, the Cluster Operator must be deployed.

Procedure

  • Create a Kafka MirrorMaker cluster from the command-line:

    oc apply -f examples/kafka-mirror-maker/kafka-mirror-maker.yaml

2.7. Kafka Bridge

The Cluster Operator deploys one or more Kafka Bridge replicas to send data between Kafka clusters and clients via an HTTP API.

For information about example resources and the format for deploying Kafka Bridge, see Kafka Bridge configuration.
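
A minimal sketch of a KafkaBridge resource is shown below; the apiVersion, bootstrap address, and HTTP port are assumptions based on the examples in this guide, and the full schema is described in Kafka Bridge configuration:

apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaBridge
metadata:
  name: my-bridge
spec:
  replicas: 1
  bootstrapServers: my-cluster-kafka-bootstrap:9092
  http:
    port: 8080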

2.7.1. Deploying Kafka Bridge to your OpenShift cluster

You can deploy a Kafka Bridge cluster to your OpenShift cluster by using the Cluster Operator.

Procedure

  • Use the oc apply command to create a KafkaBridge resource based on the kafka-bridge.yaml file:

    oc apply -f examples/kafka-bridge/kafka-bridge.yaml

2.8. Deploying example clients

Prerequisites

  • An existing Kafka cluster for the client to connect to.

Procedure

  1. Deploy the producer.

    Use oc run:

    oc run kafka-producer -ti --image=registry.redhat.io/amq7/amq-streams-kafka-24-rhel7:1.4.0 --rm=true --restart=Never -- bin/kafka-console-producer.sh --broker-list cluster-name-kafka-bootstrap:9092 --topic my-topic
  2. Type your message into the console where the producer is running.
  3. Press Enter to send the message.
  4. Deploy the consumer.

    Use oc run:

    oc run kafka-consumer -ti --image=registry.redhat.io/amq7/amq-streams-kafka-24-rhel7:1.4.0 --rm=true --restart=Never -- bin/kafka-console-consumer.sh --bootstrap-server cluster-name-kafka-bootstrap:9092 --topic my-topic --from-beginning
  5. Confirm that you see the incoming messages in the consumer console.

2.9. Topic Operator

The Topic Operator is responsible for managing Kafka topics within a Kafka cluster running within an OpenShift cluster.

2.9.1. Topic Operator

The Topic Operator provides a way of managing topics in a Kafka cluster through OpenShift resources.

Example architecture for the Topic Operator

The role of the Topic Operator is to keep a set of KafkaTopic OpenShift resources describing Kafka topics in-sync with corresponding Kafka topics.

Specifically, if a KafkaTopic is:

  • Created, the Topic Operator creates the topic
  • Deleted, the Topic Operator deletes the topic
  • Changed, the Topic Operator updates the topic

Working in the other direction, if a topic is:

  • Created within the Kafka cluster, the Operator creates a KafkaTopic
  • Deleted from the Kafka cluster, the Operator deletes the KafkaTopic
  • Changed in the Kafka cluster, the Operator updates the KafkaTopic

This allows you to declare a KafkaTopic as part of your application’s deployment and the Topic Operator will take care of creating the topic for you. Your application just needs to deal with producing or consuming from the necessary topics.

If the topic is reconfigured or reassigned to different Kafka nodes, the KafkaTopic will always be up to date.
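
For example, increasing the declared partition count on the my-topic example from Section 2.2.1 causes the Topic Operator to update the corresponding Kafka topic; a minimal sketch:

# Raise the number of partitions declared in the KafkaTopic; the Topic Operator applies the change in Kafka
oc patch kafkatopic my-topic --type merge -p '{"spec":{"partitions":3}}'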

2.9.2. Deploying the Topic Operator using the Cluster Operator

This procedure describes how to deploy the Topic Operator using the Cluster Operator. If you want to use the Topic Operator with a Kafka cluster that is not managed by AMQ Streams, you must deploy the Topic Operator as a standalone component. For more information, see Section 4.2.6, “Deploying the standalone Topic Operator”.

Prerequisites

  • A running Cluster Operator
  • A Kafka resource to be created or updated

Procedure

  1. Ensure that the Kafka.spec.entityOperator object exists in the Kafka resource. This configures the Entity Operator.

    apiVersion: kafka.strimzi.io/v1beta1
    kind: Kafka
    metadata:
      name: my-cluster
    spec:
      #...
      entityOperator:
        topicOperator: {}
        userOperator: {}
  2. Configure the Topic Operator using the properties described in Section B.62, “EntityTopicOperatorSpec schema reference”.
  3. Create or update the Kafka resource in OpenShift.

    Use oc apply:

    oc apply -f your-file

2.10. User Operator

The User Operator is responsible for managing Kafka users within a Kafka cluster running within an OpenShift cluster.

2.10.1. User Operator

The User Operator manages Kafka users for a Kafka cluster by watching for KafkaUser resources that describe Kafka users, and ensuring that they are configured properly in the Kafka cluster.

For example, if a KafkaUser is:

  • Created, the User Operator creates the user it describes
  • Deleted, the User Operator deletes the user it describes
  • Changed, the User Operator updates the user it describes

Unlike the Topic Operator, the User Operator does not sync any changes from the Kafka cluster with the OpenShift resources. Although Kafka topics can be created by applications directly in Kafka, users are not expected to be managed directly in the Kafka cluster in parallel with the User Operator.

The User Operator allows you to declare a KafkaUser resource as part of your application’s deployment. You can specify the authentication and authorization mechanism for the user. You can also configure user quotas that control usage of Kafka resources to ensure, for example, that a user does not monopolize access to a broker.

When the user is created, the user credentials are created in a Secret. Your application needs to use the user and its credentials for authentication and to produce or consume messages.

In addition to managing credentials for authentication, the User Operator also manages authorization rules by including a description of the user’s access rights in the KafkaUser declaration.
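
A minimal sketch of a KafkaUser declaration with mutual TLS authentication and a single ACL rule; the user, cluster, and topic names are placeholder values:

apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaUser
metadata:
  name: my-user
  labels:
    strimzi.io/cluster: my-cluster
spec:
  authentication:
    type: tls
  authorization:
    type: simple
    acls:
    - resource:
        type: topic
        name: my-topic
        patternType: literal
      operation: Read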

2.10.2. Deploying the User Operator using the Cluster Operator

Prerequisites

  • A running Cluster Operator
  • A Kafka resource to be created or updated.

Procedure

  1. Edit the Kafka resource to ensure that it has a Kafka.spec.entityOperator.userOperator object that configures the User Operator as required.
  2. Create or update the Kafka resource in OpenShift.

    This can be done using oc apply:

    oc apply -f your-file

2.11. Strimzi Administrators

AMQ Streams includes several custom resources. By default, permission to create, edit, and delete these resources is limited to OpenShift cluster administrators. If you want to allow non-cluster administrators to manage AMQ Streams resources, you must assign them the Strimzi Administrator role.

2.11.1. Designating Strimzi Administrators

Prerequisites

  • AMQ Streams CustomResourceDefinitions are installed.

Procedure

  1. Create the strimzi-admin cluster role in OpenShift.

    Use oc apply:

    oc apply -f install/strimzi-admin
  2. Assign the strimzi-admin ClusterRole to one or more existing users in the OpenShift cluster.

    Use oc create:

    oc create clusterrolebinding strimzi-admin --clusterrole=strimzi-admin --user=user1 --user=user2

2.12. Container images

Container images for AMQ Streams are available in the Red Hat Container Catalog. The installation YAML files provided by AMQ Streams will pull the images directly from the Red Hat Container Catalog.

If you do not have access to the Red Hat Container Catalog or want to use your own container registry (a command sketch follows this list):

  1. Pull all container images listed below
  2. Push them into your own registry
  3. Update the image names in the installation YAML files
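
A minimal sketch of these steps using docker and a hypothetical internal registry (my-registry.example.com); the image name is one of those listed below:

# Pull the Kafka image from the Red Hat Container Catalog
docker pull registry.redhat.io/amq7/amq-streams-kafka-24-rhel7:1.4.0

# Tag and push the image to your own registry
docker tag registry.redhat.io/amq7/amq-streams-kafka-24-rhel7:1.4.0 my-registry.example.com/amq7/amq-streams-kafka-24-rhel7:1.4.0
docker push my-registry.example.com/amq7/amq-streams-kafka-24-rhel7:1.4.0

# Update the image references in the installation YAML files
sed -i 's#registry.redhat.io/#my-registry.example.com/#g' install/cluster-operator/*.yaml
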
Note

Each Kafka version supported for the release has a separate image.

The available container images, their repositories, and their contents are described below.

Kafka

  • registry.redhat.io/amq7/amq-streams-kafka-24-rhel7:1.4.0
  • registry.redhat.io/amq7/amq-streams-kafka-23-rhel7:1.4.0

AMQ Streams image for running Kafka, including:

  • Kafka Broker
  • Kafka Connect / S2I
  • Kafka Mirror Maker
  • ZooKeeper
  • TLS Sidecars

Operator

  • registry.redhat.io/amq7/amq-streams-rhel7-operator:1.4.0

AMQ Streams image for running the operators:

  • Cluster Operator
  • Topic Operator
  • User Operator
  • Kafka Initializer

Kafka Bridge

  • registry.redhat.io/amq7/amq-streams-bridge-rhel7:1.4.0

AMQ Streams image for running the AMQ Streams Kafka Bridge

Chapter 3. Deployment configuration

This chapter describes how to configure different aspects of the supported deployments:

  • Kafka clusters
  • Kafka Connect clusters
  • Kafka Connect clusters with Source2Image support
  • Kafka Mirror Maker
  • Kafka Bridge
  • OAuth 2.0 token-based authentication
  • OAuth 2.0 token-based authorization

3.1. Kafka cluster configuration

The full schema of the Kafka resource is described in the Section B.1, “Kafka schema reference”. All labels that are applied to the desired Kafka resource will also be applied to the OpenShift resources making up the Kafka cluster. This provides a convenient mechanism for resources to be labeled as required.

3.1.1. Sample Kafka YAML configuration

For help in understanding the configuration options available for your Kafka deployment, refer to the following sample YAML file.

The sample shows only some of the possible configuration options, but those that are particularly important include:

  • Resource requests (CPU / Memory)
  • JVM options for maximum and minimum memory allocation
  • Listeners (and authentication)
  • Authentication
  • Storage
  • Rack awareness
  • Metrics
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 3 1
    version: 2.4.0 2
    resources: 3
      requests:
        memory: 64Gi
        cpu: "8"
      limits: 4
        memory: 64Gi
        cpu: "12"
    jvmOptions: 5
      -Xms: 8192m
      -Xmx: 8192m
    listeners: 6
      tls:
        authentication: 7
          type: tls
      external: 8
        type: route
        authentication:
          type: tls
        configuration:
          brokerCertChainAndKey: 9
            secretName: my-secret
            certificate: my-certificate.crt
            key: my-key.key
    authorization: 10
      type: simple
    config: 11
      auto.create.topics.enable: "false"
      offsets.topic.replication.factor: 3
      transaction.state.log.replication.factor: 3
      transaction.state.log.min.isr: 2
    storage: 12
      type: persistent-claim 13
      size: 10000Gi 14
    rack: 15
      topologyKey: failure-domain.beta.kubernetes.io/zone
    metrics: 16
      lowercaseOutputName: true
      rules: 17
      # Special cases and very specific rules
      - pattern : kafka.server<type=(.+), name=(.+), clientId=(.+), topic=(.+), partition=(.*)><>Value
        name: kafka_server_$1_$2
        type: GAUGE
        labels:
          clientId: "$3"
          topic: "$4"
          partition: "$5"
        # ...
  zookeeper: 18
    replicas: 3
    resources:
      requests:
        memory: 8Gi
        cpu: "2"
      limits:
        memory: 8Gi
        cpu: "2"
    jvmOptions:
      -Xms: 4096m
      -Xmx: 4096m
    storage:
      type: persistent-claim
      size: 1000Gi
    metrics:
      # ...
  entityOperator: 19
    topicOperator:
      resources:
        requests:
          memory: 512Mi
          cpu: "1"
        limits:
          memory: 512Mi
          cpu: "1"
    userOperator:
      resources:
        requests:
          memory: 512Mi
          cpu: "1"
        limits:
          memory: 512Mi
          cpu: "1"
  kafkaExporter: 20
    # ...
1
The number of Kafka broker nodes (replicas).
2
The Kafka version.
3
Resource requests specify the resources, such as CPU and memory, to reserve for a container.
4
Resource limits specify the maximum resources that can be consumed by a container.
5
JVM options specify the minimum (-Xms) and maximum (-Xmx) memory allocation for the JVM running Kafka.
6
Listeners configure how clients connect to the Kafka cluster via bootstrap addresses. Listeners are configured as plain (without encryption), tls or external.
7
Listener authentication mechanisms may be configured for each listener, and specified as mutual TLS or SCRAM-SHA.
8
External listener configuration for client access from outside the OpenShift cluster, in this example using an OpenShift route.
9
Optional configuration for a Kafka listener certificate managed by an external Certificate Authority. The brokerCertChainAndKey property specifies a Secret that holds a server certificate and a private key. Kafka listener certificates can also be configured for TLS listeners.
10
Authorization for the Kafka cluster, in this example using simple authorization.
11
Kafka broker configuration, specified as key-value pairs in the config property.
12
Storage configuration for the Kafka brokers.
13
The storage type, in this example persistent storage backed by Persistent Volume Claims (persistent-claim).
14
Persistent storage has additional configuration options, such as a storage id and class for dynamic volume provisioning.
15
Rack awareness is configured to spread replicas across different racks. A topology key must match the label of a cluster node.
16
Metrics configuration, which enables the Prometheus JMX Exporter.
17
Kafka rules for exporting metrics to a Grafana dashboard through the JMX Exporter. A set of rules provided with AMQ Streams may be copied to your Kafka resource configuration.
18
ZooKeeper-specific configuration, which contains properties similar to the Kafka configuration.
19
Entity Operator configuration, which specifies the resources for the Topic Operator and User Operator.
20
Kafka Exporter configuration, which is used to expose data as Prometheus metrics.

3.1.2. Data storage considerations

An efficient data storage infrastructure is essential to the optimal performance of AMQ Streams.

Block storage is required. File storage, such as NFS, does not work with Kafka.

For your block storage, you can choose, for example, cloud provider block storage such as Amazon Elastic Block Store (EBS), local persistent volumes, or SAN volumes accessed by a protocol such as Fibre Channel or iSCSI.

Note

AMQ Streams does not require OpenShift raw block volumes.

3.1.2.1. File systems

It is recommended that you configure your storage system to use the XFS file system. AMQ Streams is also compatible with the ext4 file system, but this might require additional configuration for best results.

3.1.2.2. Apache Kafka and ZooKeeper storage

Use separate disks for Apache Kafka and ZooKeeper.

Three types of data storage are supported:

  • Ephemeral (Recommended for development only)
  • Persistent
  • JBOD (Just a Bunch of Disks, suitable for Kafka only)

For more information, see Kafka and ZooKeeper storage.

Solid-state drives (SSDs), though not essential, can improve the performance of Kafka in large clusters where data is sent to and received from multiple topics asynchronously. SSDs are particularly effective with ZooKeeper, which requires fast, low latency data access.

Note

You do not need to provision replicated storage because Kafka and ZooKeeper both have built-in data replication.

3.1.3. Kafka and ZooKeeper storage types

As stateful applications, Kafka and ZooKeeper need to store data on disk. AMQ Streams supports three storage types for this data:

  • Ephemeral
  • Persistent
  • JBOD storage
Note

JBOD storage is supported only for Kafka, not for ZooKeeper.

When configuring a Kafka resource, you can specify the type of storage used by the Kafka broker and its corresponding ZooKeeper node. You configure the storage type using the storage property in the following resources:

  • Kafka.spec.kafka
  • Kafka.spec.zookeeper

The storage type is configured in the type field.

Warning

The storage type cannot be changed after a Kafka cluster is deployed.

3.1.3.1. Ephemeral storage

Ephemeral storage uses emptyDir volumes to store data. To use ephemeral storage, the type field should be set to ephemeral.

Important

emptyDir volumes are not persistent and the data stored in them will be lost when the Pod is restarted. After the new pod is started, it has to recover all data from other nodes of the cluster. Ephemeral storage is not suitable for use with single node ZooKeeper clusters and for Kafka topics with replication factor 1, because it will lead to data loss.

An example of Ephemeral storage

apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    storage:
      type: ephemeral
    # ...
  zookeeper:
    # ...
    storage:
      type: ephemeral
    # ...

3.1.3.1.1. Log directories

The ephemeral volume will be used by the Kafka brokers as log directories mounted into the following path:

/var/lib/kafka/data/kafka-logidx
Where idx is the Kafka broker pod index. For example /var/lib/kafka/data/kafka-log0.
3.1.3.2. Persistent storage

Persistent storage uses Persistent Volume Claims to provision persistent volumes for storing data. Persistent Volume Claims can be used to provision volumes of many different types, depending on the Storage Class that provisions the volume. The volume types that can be used with Persistent Volume Claims include many types of SAN storage as well as local persistent volumes.

To use persistent storage, the type has to be set to persistent-claim. Persistent storage supports additional configuration options:

id (optional)
Storage identification number. This option is mandatory for storage volumes defined in a JBOD storage declaration. Default is 0.
size (required)
Defines the size of the persistent volume claim, for example, "1000Gi".
class (optional)
The OpenShift Storage Class to use for dynamic volume provisioning.
selector (optional)
Allows selecting a specific persistent volume to use. It contains key:value pairs representing labels for selecting such a volume.
deleteClaim (optional)
Boolean value which specifies if the Persistent Volume Claim has to be deleted when the cluster is undeployed. Default is false.
Warning

Increasing the size of persistent volumes in an existing AMQ Streams cluster is only supported in OpenShift versions that support persistent volume resizing. The persistent volume to be resized must use a storage class that supports volume expansion. For other versions of OpenShift and storage classes which do not support volume expansion, you must decide the necessary storage size before deploying the cluster. Decreasing the size of existing persistent volumes is not possible.

Example fragment of persistent storage configuration with 1000Gi size

# ...
storage:
  type: persistent-claim
  size: 1000Gi
# ...

The following example demonstrates the use of a storage class.

Example fragment of persistent storage configuration with specific Storage Class

# ...
storage:
  type: persistent-claim
  size: 1Gi
  class: my-storage-class
# ...

Finally, a selector can be used to select a specific labeled persistent volume that provides needed features, such as an SSD.

Example fragment of persistent storage configuration with selector

# ...
storage:
  type: persistent-claim
  size: 1Gi
  selector:
    hdd-type: ssd
  deleteClaim: true
# ...

3.1.3.2.1. Storage class overrides

You can specify a different storage class for one or more Kafka brokers, instead of using the default storage class. This is useful if, for example, storage classes are restricted to different availability zones or data centers. You can use the overrides field for this purpose.

In this example, the default storage class is named my-storage-class:

Example AMQ Streams cluster using storage class overrides

apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  labels:
    app: my-cluster
  name: my-cluster
  namespace: myproject
spec:
  # ...
  kafka:
    replicas: 3
    storage:
      deleteClaim: true
      size: 100Gi
      type: persistent-claim
      class: my-storage-class
      overrides:
        - broker: 0
          class: my-storage-class-zone-1a
        - broker: 1
          class: my-storage-class-zone-1b
        - broker: 2
          class: my-storage-class-zone-1c
  # ...

As a result of the configured overrides property, the broker volumes use the following storage classes:

  • The persistent volumes of broker 0 will use my-storage-class-zone-1a.
  • The persistent volumes of broker 1 will use my-storage-class-zone-1b.
  • The persistent volumes of broker 2 will use my-storage-class-zone-1c.

The overrides property is currently used only to override the storage class configuration. Overriding other storage configuration fields is not currently supported.

3.1.3.2.2. Persistent Volume Claim naming

When persistent storage is used, the Cluster Operator creates Persistent Volume Claims with the following names:

data-cluster-name-kafka-idx
Persistent Volume Claim for the volume used for storing data for the Kafka broker pod idx.
data-cluster-name-zookeeper-idx
Persistent Volume Claim for the volume used for storing data for the ZooKeeper node pod idx.
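
For example, for a cluster named my-cluster with three Kafka brokers and three ZooKeeper nodes, you might list the claims created by the Cluster Operator as follows. This is a sketch only; the strimzi.io/cluster label selector and the claim names shown are illustrative:

oc get pvc -l strimzi.io/cluster=my-cluster

The output would include claims such as:

data-my-cluster-kafka-0
data-my-cluster-kafka-1
data-my-cluster-kafka-2
data-my-cluster-zookeeper-0
data-my-cluster-zookeeper-1
data-my-cluster-zookeeper-2
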
3.1.3.2.3. Log directories

The persistent volume will be used by the Kafka brokers as log directories mounted into the following path:

/var/lib/kafka/data/kafka-logidx
Where idx is the Kafka broker pod index. For example /var/lib/kafka/data/kafka-log0.
3.1.3.3. Resizing persistent volumes

You can provision increased storage capacity by increasing the size of the persistent volumes used by an existing AMQ Streams cluster. Resizing persistent volumes is supported in clusters that use either a single persistent volume or multiple persistent volumes in a JBOD storage configuration.

Note

You can increase but not decrease the size of persistent volumes. Decreasing the size of persistent volumes is not currently supported in OpenShift.

Prerequisites

  • An OpenShift cluster with support for volume resizing.
  • The Cluster Operator is running.
  • A Kafka cluster using persistent volumes created using a storage class that supports volume expansion.

Procedure

  1. In a Kafka resource, increase the size of the persistent volume allocated to the Kafka cluster, the ZooKeeper cluster, or both.

    • To increase the volume size allocated to the Kafka cluster, edit the spec.kafka.storage property.
    • To increase the volume size allocated to the ZooKeeper cluster, edit the spec.zookeeper.storage property.

      For example, to increase the volume size from 1000Gi to 2000Gi:

      apiVersion: kafka.strimzi.io/v1beta1
      kind: Kafka
      metadata:
        name: my-cluster
      spec:
        kafka:
          # ...
          storage:
            type: persistent-claim
            size: 2000Gi
            class: my-storage-class
          # ...
        zookeeper:
          # ...
  2. Create or update the resource.

    Use oc apply:

    oc apply -f your-file

    OpenShift increases the capacity of the selected persistent volumes in response to a request from the Cluster Operator. When the resizing is complete, the Cluster Operator restarts all pods that use the resized persistent volumes. This happens automatically.
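    To verify the new capacity, you can check the status of a resized claim. The following is a sketch only; the claim name assumes a cluster named my-cluster and broker 0:

    oc get pvc data-my-cluster-kafka-0 -o jsonpath='{.status.capacity.storage}{"\n"}'

    The command prints the capacity currently recorded in the claim status, for example 2000Gi.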

Additional resources

For more information about resizing persistent volumes in OpenShift, see Resizing Persistent Volumes using Kubernetes.

3.1.3.4. JBOD storage overview

You can configure AMQ Streams to use JBOD, a data storage configuration of multiple disks or volumes. JBOD is one approach to providing increased data storage for Kafka brokers. It can also improve performance.

A JBOD configuration is described by one or more volumes, each of which can be either ephemeral or persistent. The rules and constraints for JBOD volume declarations are the same as those for ephemeral and persistent storage. For example, you cannot change the size of a persistent storage volume after it has been provisioned.

3.1.3.4.1. JBOD configuration

To use JBOD with AMQ Streams, the storage type must be set to jbod. The volumes property allows you to describe the disks that make up your JBOD storage array or configuration. The following fragment shows an example JBOD configuration:

# ...
storage:
  type: jbod
  volumes:
  - id: 0
    type: persistent-claim
    size: 100Gi
    deleteClaim: false
  - id: 1
    type: persistent-claim
    size: 100Gi
    deleteClaim: false
# ...

The ids cannot be changed once the JBOD volumes are created.

Users can add or remove volumes from the JBOD configuration.

3.1.3.4.2. JBOD and Persistent Volume Claims

When persistent storage is used to declare JBOD volumes, the naming scheme of the resulting Persistent Volume Claims is as follows:

data-id-cluster-name-kafka-idx
Where id is the ID of the volume used for storing data for Kafka broker pod idx.
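For example, for a cluster named my-cluster, the claim for JBOD volume 0 of Kafka broker pod 0 is named data-0-my-cluster-kafka-0.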
3.1.3.4.3. Log directories

The JBOD volumes will be used by the Kafka brokers as log directories mounted into the following path:

/var/lib/kafka/data-id/kafka-logidx
Where id is the ID of the volume used for storing data for Kafka broker pod idx. For example /var/lib/kafka/data-0/kafka-log0.
3.1.3.5. Adding volumes to JBOD storage

This procedure describes how to add volumes to a Kafka cluster configured to use JBOD storage. It cannot be applied to Kafka clusters configured to use any other storage type.

Note

When adding a new volume under an id that was used and removed in the past, you must make sure that the previously used PersistentVolumeClaims have been deleted.

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator
  • A Kafka cluster with JBOD storage

Procedure

  1. Edit the spec.kafka.storage.volumes property in the Kafka resource. Add the new volumes to the volumes array. For example, add the new volume with id 2:

    apiVersion: kafka.strimzi.io/v1beta1
    kind: Kafka
    metadata:
      name: my-cluster
    spec:
      kafka:
        # ...
        storage:
          type: jbod
          volumes:
          - id: 0
            type: persistent-claim
            size: 100Gi
            deleteClaim: false
          - id: 1
            type: persistent-claim
            size: 100Gi
            deleteClaim: false
          - id: 2
            type: persistent-claim
            size: 100Gi
            deleteClaim: false
        # ...
      zookeeper:
        # ...
  2. Create or update the resource.

    This can be done using oc apply:

    oc apply -f your-file
  3. Create new topics or reassign existing partitions to the new disks.

Additional resources

For more information about reassigning topics, see Section 3.1.25.2, “Partition reassignment”.

3.1.3.6. Removing volumes from JBOD storage

This procedure describes how to remove volumes from a Kafka cluster configured to use JBOD storage. It cannot be applied to Kafka clusters configured to use any other storage type. The JBOD storage always has to contain at least one volume.

Important

To avoid data loss, you have to move all partitions before removing the volumes.

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator
  • A Kafka cluster with JBOD storage with two or more volumes

Procedure

  1. Reassign all partitions from the disks that you are going to remove. Any data in partitions still assigned to the disks that are going to be removed might be lost.
  2. Edit the spec.kafka.storage.volumes property in the Kafka resource. Remove one or more volumes from the volumes array. For example, remove the volumes with ids 1 and 2:

    apiVersion: kafka.strimzi.io/v1beta1
    kind: Kafka
    metadata:
      name: my-cluster
    spec:
      kafka:
        # ...
        storage:
          type: jbod
          volumes:
          - id: 0
            type: persistent-claim
            size: 100Gi
            deleteClaim: false
        # ...
      zookeeper:
        # ...
  3. Create or update the resource.

    This can be done using oc apply:

    oc apply -f your-file

Additional resources

For more information about reassigning topics, see Section 3.1.25.2, “Partition reassignment”.

3.1.4. Kafka broker replicas

A Kafka cluster can run with many brokers. You can configure the number of brokers used for the Kafka cluster in Kafka.spec.kafka.replicas. The best number of brokers for your cluster has to be determined based on your specific use case.

3.1.4.1. Configuring the number of broker nodes

This procedure describes how to configure the number of Kafka broker nodes in a new cluster. It only applies to new clusters with no partitions. If your cluster already has topics defined, see Section 3.1.25, “Scaling clusters”.

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator
  • A Kafka cluster with no topics defined yet

Procedure

  1. Edit the replicas property in the Kafka resource. For example:

    apiVersion: kafka.strimzi.io/v1beta1
    kind: Kafka
    metadata:
      name: my-cluster
    spec:
      kafka:
        # ...
        replicas: 3
        # ...
      zookeeper:
        # ...
  2. Create or update the resource.

    This can be done using oc apply:

    oc apply -f your-file

Additional resources

If your cluster already has topics defined, see Section 3.1.25, “Scaling clusters”.

3.1.5. Kafka broker configuration

AMQ Streams allows you to customize the configuration of the Kafka brokers in your Kafka cluster. You can specify and configure most of the options listed in the "Broker Configs" section of the Apache Kafka documentation. You cannot configure options that are related to the following areas:

  • Security (Encryption, Authentication, and Authorization)
  • Listener configuration
  • Broker ID configuration
  • Configuration of log data directories
  • Inter-broker communication
  • ZooKeeper connectivity

These options are automatically configured by AMQ Streams.

3.1.5.1. Kafka broker configuration

The config property in Kafka.spec.kafka contains Kafka broker configuration options as keys with values in one of the following JSON types:

  • String
  • Number
  • Boolean

You can specify and configure all of the options in the "Broker Configs" section of the Apache Kafka documentation apart from those managed directly by AMQ Streams. Specifically, you are prevented from modifying all configuration options with keys equal to or starting with one of the following strings:

  • listeners
  • advertised.
  • broker.
  • listener.
  • host.name
  • port
  • inter.broker.listener.name
  • sasl.
  • ssl.
  • security.
  • password.
  • principal.builder.class
  • log.dir
  • zookeeper.connect
  • zookeeper.set.acl
  • authorizer.
  • super.user

If the config property specifies a restricted option, it is ignored and a warning message is printed to the Cluster Operator log file. All other supported options are passed to Kafka.

An example Kafka broker configuration

apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    config:
      num.partitions: 1
      num.recovery.threads.per.data.dir: 1
      default.replication.factor: 3
      offsets.topic.replication.factor: 3
      transaction.state.log.replication.factor: 3
      transaction.state.log.min.isr: 1
      log.retention.hours: 168
      log.segment.bytes: 1073741824
      log.retention.check.interval.ms: 300000
      num.network.threads: 3
      num.io.threads: 8
      socket.send.buffer.bytes: 102400
      socket.receive.buffer.bytes: 102400
      socket.request.max.bytes: 104857600
      group.initial.rebalance.delay.ms: 0
    # ...

3.1.5.2. Configuring Kafka brokers

You can configure an existing Kafka broker, or create a new Kafka broker with a specified configuration.

Prerequisites

  • An OpenShift cluster is available.
  • The Cluster Operator is running.

Procedure

  1. Open the YAML configuration file that contains the Kafka resource specifying the cluster deployment.
  2. In the spec.kafka.config property in the Kafka resource, enter one or more Kafka configuration settings. For example:

    apiVersion: kafka.strimzi.io/v1beta1
    kind: Kafka
    spec:
      kafka:
        # ...
        config:
          default.replication.factor: 3
          offsets.topic.replication.factor: 3
          transaction.state.log.replication.factor: 3
          transaction.state.log.min.isr: 1
        # ...
      zookeeper:
        # ...
  3. Apply the new configuration to create or update the resource.

    Use oc apply:

    oc apply -f kafka.yaml

    where kafka.yaml is the YAML configuration file for the resource that you want to configure; for example, kafka-persistent.yaml.
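    To check which broker options are currently set in the resource, you can query the custom resource directly. This is a sketch only and assumes a cluster named my-cluster:

    oc get kafka my-cluster -o jsonpath='{.spec.kafka.config}{"\n"}'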

3.1.6. Kafka broker listeners

You can configure the listeners enabled in Kafka brokers. The following types of listeners are supported:

  • Plain listener on port 9092 (without TLS encryption)
  • TLS listener on port 9093 (with TLS encryption)
  • External listener on port 9094 for access from outside of OpenShift

OAuth 2.0

If you are using OAuth 2.0 token-based authentication, you can configure the listeners to connect to your authorization server. For more information, see Using OAuth 2.0 token-based authentication.

Listener certificates

You can provide your own server certificates, called Kafka listener certificates, for TLS listeners or external listeners which have TLS encryption enabled. For more information, see Section 13.8, “Kafka listener certificates”.

3.1.6.1. Kafka listeners

You can configure Kafka broker listeners using the listeners property in the Kafka.spec.kafka resource. The listeners property contains three sub-properties:

  • plain
  • tls
  • external

Each listener will only be defined when the listeners object has the given property.

An example of listeners property with all listeners enabled

# ...
listeners:
  plain: {}
  tls: {}
  external:
    type: loadbalancer
# ...

An example of listeners property with only the plain listener enabled

# ...
listeners:
  plain: {}
# ...

3.1.6.2. Configuring Kafka listeners

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Edit the listeners property in the Kafka.spec.kafka resource.

    An example configuration of the plain (unencrypted) listener without authentication:

    apiVersion: kafka.strimzi.io/v1beta1
    kind: Kafka
    spec:
      kafka:
        # ...
        listeners:
          plain: {}
        # ...
      zookeeper:
        # ...
  2. Create or update the resource.

    This can be done using oc apply:

    oc apply -f your-file

Additional resources

3.1.6.3. Listener authentication

The listener authentication property is used to specify an authentication mechanism specific to that listener:

  • Mutual TLS authentication (only on the listeners with TLS encryption)
  • SCRAM-SHA authentication

If no authentication property is specified then the listener does not authenticate clients which connect through that listener.

Authentication must be configured when using the User Operator to manage KafkaUsers.

3.1.6.3.1. Authentication configuration for a listener

The following example shows:

  • A plain listener configured for SCRAM-SHA authentication
  • A tls listener with mutual TLS authentication
  • An external listener with mutual TLS authentication

An example showing listener authentication configuration

# ...
listeners:
  plain:
    authentication:
      type: scram-sha-512
  tls:
    authentication:
      type: tls
  external:
    type: loadbalancer
    tls: true
    authentication:
      type: tls
# ...

3.1.6.3.2. Mutual TLS authentication

Mutual TLS authentication is always used for the communication between Kafka brokers and ZooKeeper pods.

Mutual authentication or two-way authentication is when both the server and the client present certificates. AMQ Streams can configure Kafka to use TLS (Transport Layer Security) to provide encrypted communication between Kafka brokers and clients either with or without mutual authentication. When you configure mutual authentication, the broker authenticates the client and the client authenticates the broker.

Note

TLS authentication is more commonly one-way, with one party authenticating the identity of another. For example, when HTTPS is used between a web browser and a web server, the browser obtains proof of the identity of the web server.

3.1.6.3.2.1. When to use mutual TLS authentication for clients

Mutual TLS authentication is recommended for authenticating Kafka clients when:

  • The client supports authentication using mutual TLS authentication
  • It is necessary to use the TLS certificates rather than passwords
  • You can reconfigure and restart client applications periodically so that they do not use expired certificates.
3.1.6.3.3. SCRAM-SHA authentication

SCRAM (Salted Challenge Response Authentication Mechanism) is an authentication protocol that can establish mutual authentication using passwords. AMQ Streams can configure Kafka to use SASL (Simple Authentication and Security Layer) SCRAM-SHA-512 to provide authentication on both unencrypted and TLS-encrypted client connections. TLS authentication is always used internally between Kafka brokers and ZooKeeper nodes. When used with a TLS client connection, the TLS protocol provides encryption, but is not used for authentication.

The following properties of SCRAM make it safe to use SCRAM-SHA even on unencrypted connections:

  • The passwords are not sent in the clear over the communication channel. Instead the client and the server are each challenged by the other to offer proof that they know the password of the authenticating user.
  • The server and client each generate a new challenge for each authentication exchange. This means that the exchange is resilient against replay attacks.
3.1.6.3.3.1. Supported SCRAM credentials

AMQ Streams supports SCRAM-SHA-512 only. When KafkaUser.spec.authentication.type is configured with scram-sha-512, the User Operator generates a random 12-character password consisting of uppercase and lowercase ASCII letters and numbers.
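
The following sketch shows a KafkaUser resource that requests SCRAM-SHA-512 credentials, and one way to read the generated password. The user name and cluster label are examples, and the assumption that the credentials are stored in a Secret named after the user with a password key is illustrative:

apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaUser
metadata:
  name: my-user
  labels:
    strimzi.io/cluster: my-cluster
spec:
  authentication:
    type: scram-sha-512

oc get secret my-user -o jsonpath='{.data.password}' | base64 -d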

3.1.6.3.3.2. When to use SCRAM-SHA authentication for clients

SCRAM-SHA is recommended for authenticating Kafka clients when:

  • The client supports authentication using SCRAM-SHA-512
  • It is necessary to use passwords rather than the TLS certificates
  • Authentication for unencrypted communication is required
3.1.6.4. External listeners

Use an external listener to expose your AMQ Streams Kafka cluster to a client outside an OpenShift environment.

Additional resources

3.1.6.4.1. Customizing advertised addresses on external listeners

By default, AMQ Streams tries to automatically determine the hostnames and ports that your Kafka cluster advertises to its clients. This is not sufficient in all situations, because the infrastructure on which AMQ Streams is running might not provide the right hostname or port through which Kafka can be accessed. You can customize the advertised hostname and port in the overrides property of the external listener. AMQ Streams will then automatically configure the advertised address in the Kafka brokers and add it to the broker certificates so it can be used for TLS hostname verification. Overriding the advertised host and ports is available for all types of external listeners.

Example of an external listener configured with overrides for advertised addresses

# ...
listeners:
  external:
    type: route
    authentication:
      type: tls
    overrides:
      brokers:
      - broker: 0
        advertisedHost: example.hostname.0
        advertisedPort: 12340
      - broker: 1
        advertisedHost: example.hostname.1
        advertisedPort: 12341
      - broker: 2
        advertisedHost: example.hostname.2
        advertisedPort: 12342
# ...

Additionally, you can specify the name of the bootstrap service. This name will be added to the broker certificates and can be used for TLS hostname verification. Adding the additional bootstrap address is available for all types of external listeners.

Example of an external listener configured with an additional bootstrap address

# ...
listeners:
  external:
    type: route
    authentication:
      type: tls
    overrides:
      bootstrap:
        address: example.hostname
# ...

3.1.6.4.2. Route external listeners

An external listener of type route exposes Kafka using OpenShift Routes and the HAProxy router.

Note

The route type is supported only on OpenShift.

3.1.6.4.2.1. Exposing Kafka using OpenShift Routes

When exposing Kafka using OpenShift Routes and the HAProxy router, a dedicated Route is created for every Kafka broker pod. An additional Route is created to serve as a Kafka bootstrap address. Kafka clients can use these Routes to connect to Kafka on port 443.

TLS encryption is always used with Routes.

By default, the route hosts are automatically assigned by OpenShift. However, you can override the assigned route hosts by specifying the requested hosts in the overrides property. AMQ Streams will not perform any validation that the requested hosts are available; you must ensure that they are free and can be used.

Example of an external listener of type route configured with overrides for OpenShift route hosts

# ...
listeners:
  external:
    type: route
    authentication:
      type: tls
    overrides:
      bootstrap:
        host: bootstrap.myrouter.com
      brokers:
      - broker: 0
        host: broker-0.myrouter.com
      - broker: 1
        host: broker-1.myrouter.com
      - broker: 2
        host: broker-2.myrouter.com
# ...

For more information on using Routes to access Kafka, see Section 3.1.6.4.2.2, “Accessing Kafka using OpenShift routes”.

3.1.6.4.2.2. Accessing Kafka using OpenShift routes

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Deploy a Kafka cluster with an external listener enabled and configured to the type route.

    An example configuration with an external listener configured to use Routes:

    apiVersion: kafka.strimzi.io/v1beta1
    kind: Kafka
    spec:
      kafka:
        # ...
        listeners:
          external:
            type: route
            # ...
        # ...
      zookeeper:
        # ...
  2. Create or update the resource.

    oc apply -f your-file
  3. Find the address of the bootstrap Route.

    oc get routes cluster-name-kafka-bootstrap -o=jsonpath='{.status.ingress[0].host}{"\n"}'

    Use the address together with port 443 in your Kafka client as the bootstrap address.

  4. Extract the public certificate of the broker certification authority.

    oc get secret cluster-name-cluster-ca-cert -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt

    Use the extracted certificate in your Kafka client to configure the TLS connection, as shown in the sketch below. If you enabled any authentication, you will also need to configure SASL or TLS authentication.
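
    As a minimal sketch (the truststore file name, alias, and password are placeholders, and <route-address> stands for the bootstrap Route host found in step 3), you might import the CA certificate into a Java truststore and point a Kafka client at it:

    keytool -importcert -trustcacerts -noprompt -alias cluster-ca -file ca.crt -keystore truststore.jks -storepass changeit

    bootstrap.servers=<route-address>:443
    security.protocol=SSL
    ssl.truststore.location=./truststore.jks
    ssl.truststore.password=changeit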

Additional resources

3.1.6.4.3. Loadbalancer external listeners

External listeners of type loadbalancer expose Kafka by using Loadbalancer type Services.

3.1.6.4.3.1. Exposing Kafka using loadbalancers

When exposing Kafka using Loadbalancer type Services, a new loadbalancer service is created for every Kafka broker pod. An additional loadbalancer is created to serve as a Kafka bootstrap address. Loadbalancers listen to connections on port 9094.

By default, TLS encryption is enabled. To disable it, set the tls field to false.
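
For example, a fragment that disables TLS on the loadbalancer listener might look like the following sketch:

# ...
listeners:
  external:
    type: loadbalancer
    tls: false
# ...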

Example of an external listener of type loadbalancer

# ...
listeners:
  external:
    type: loadbalancer
    authentication:
      type: tls
# ...

For more information on using loadbalancers to access Kafka, see Section 3.1.6.4.3.4, “Accessing Kafka using loadbalancers”.

3.1.6.4.3.2. Customizing the DNS names of external loadbalancer listeners

On loadbalancer listeners, you can use the dnsAnnotations property to add additional annotations to the loadbalancer services. You can use these annotations to instrument DNS tooling such as External DNS, which automatically assigns DNS names to the loadbalancer services.

Example of an external listener of type loadbalancer using dnsAnnotations

# ...
listeners:
  external:
    type: loadbalancer
    authentication:
      type: tls
    overrides:
      bootstrap:
        dnsAnnotations:
          external-dns.alpha.kubernetes.io/hostname: kafka-bootstrap.mydomain.com.
          external-dns.alpha.kubernetes.io/ttl: "60"
      brokers:
      - broker: 0
        dnsAnnotations:
          external-dns.alpha.kubernetes.io/hostname: kafka-broker-0.mydomain.com.
          external-dns.alpha.kubernetes.io/ttl: "60"
      - broker: 1
        dnsAnnotations:
          external-dns.alpha.kubernetes.io/hostname: kafka-broker-1.mydomain.com.
          external-dns.alpha.kubernetes.io/ttl: "60"
      - broker: 2
        dnsAnnotations:
          external-dns.alpha.kubernetes.io/hostname: kafka-broker-2.mydomain.com.
          external-dns.alpha.kubernetes.io/ttl: "60"
# ...

3.1.6.4.3.3. Customizing the loadbalancer IP addresses

On loadbalancer listeners, you can use the loadBalancerIP property to request a specific IP address when creating a loadbalancer. Use this property when you need to use a loadbalancer with a specific IP address. The loadBalancerIP field is ignored if the cloud provider does not support the feature.

Example of an external listener of type loadbalancer with specific loadbalancer IP address requests

# ...
listeners:
  external:
    type: loadbalancer
    authentication:
      type: tls
    overrides:
      bootstrap:
        loadBalancerIP: 172.29.3.10
      brokers:
      - broker: 0
        loadBalancerIP: 172.29.3.1
      - broker: 1
        loadBalancerIP: 172.29.3.2
      - broker: 2
        loadBalancerIP: 172.29.3.3
# ...

3.1.6.4.3.4. Accessing Kafka using loadbalancers

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Deploy a Kafka cluster with an external listener enabled and configured to the type loadbalancer.

    An example configuration with an external listener configured to use loadbalancers:

    apiVersion: kafka.strimzi.io/v1beta1
    kind: Kafka
    spec:
      kafka:
        # ...
        listeners:
          external:
            type: loadbalancer
            authentication:
              type: tls
            # ...
        # ...
      zookeeper:
        # ...
  2. Create or update the resource.

    This can be done using oc apply:

    oc apply -f your-file
  3. Find the hostname of the bootstrap loadbalancer.

    This can be done using oc get:

    oc get service cluster-name-kafka-external-bootstrap -o=jsonpath='{.status.loadBalancer.ingress[0].hostname}{"\n"}'

    If no hostname was found (nothing was returned by the command), use the loadbalancer IP address.

    This can be done using oc get:

    oc get service cluster-name-kafka-external-bootstrap -o=jsonpath='{.status.loadBalancer.ingress[0].ip}{"\n"}'

    Use the hostname or IP address together with port 9094 in your Kafka client as the bootstrap address.

  4. Unless TLS encryption was disabled, extract the public certificate of the broker certification authority.

    This can be done using oc get:

    oc get secret cluster-name-cluster-ca-cert -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt

    Use the extracted certificate in your Kafka client to configure the TLS connection. If you enabled any authentication, you will also need to configure SASL or TLS authentication.

Additional resources

3.1.6.4.4. Node Port external listeners

External listeners of type nodeport expose Kafka by using NodePort type Services.

3.1.6.4.4.1. Exposing Kafka using node ports

When exposing Kafka using NodePort type Services, Kafka clients connect directly to the nodes of OpenShift. You must enable access to the ports on the OpenShift nodes for each client (for example, in firewalls or security groups). Each Kafka broker pod is then accessible on a separate port. An additional NodePort type Service is created to serve as a Kafka bootstrap address.

When configuring the advertised addresses for the Kafka broker pods, AMQ Streams uses the address of the node on which the given pod is running. When selecting the node address, the different address types are used with the following priority:

  1. ExternalDNS
  2. ExternalIP
  3. Hostname
  4. InternalDNS
  5. InternalIP

By default, TLS encryption is enabled. To disable it, set the tls field to false.

Note

TLS hostname verification is not currently supported when exposing Kafka clusters using node ports.

By default, the port numbers used for the bootstrap and broker services are automatically assigned by OpenShift. However, you can override the assigned node ports by specifying the requested port numbers in the overrides property. AMQ Streams does not perform any validation on the requested ports; you must ensure that they are free and available for use.

Example of an external listener configured with overrides for node ports

# ...
listeners:
  external:
    type: nodeport
    tls: true
    authentication:
      type: tls
    overrides:
      bootstrap:
        nodePort: 32100
      brokers:
      - broker: 0
        nodePort: 32000
      - broker: 1
        nodePort: 32001
      - broker: 2
        nodePort: 32002
# ...

For more information on using node ports to access Kafka, see Section 3.1.6.4.4.3, “Accessing Kafka using node ports”.

3.1.6.4.4.2. Customizing the DNS names of external node port listeners

On nodeport listeners, you can use the dnsAnnotations property to add additional annotations to the nodeport services. You can use these annotations to instrument DNS tooling such as External DNS, which automatically assigns DNS names to the cluster nodes.

Example of an external listener of type nodeport using dnsAnnotations

# ...
listeners:
  external:
    type: nodeport
    tls: true
    authentication:
      type: tls
    overrides:
      bootstrap:
        dnsAnnotations:
          external-dns.alpha.kubernetes.io/hostname: kafka-bootstrap.mydomain.com.
          external-dns.alpha.kubernetes.io/ttl: "60"
      brokers:
      - broker: 0
        dnsAnnotations:
          external-dns.alpha.kubernetes.io/hostname: kafka-broker-0.mydomain.com.
          external-dns.alpha.kubernetes.io/ttl: "60"
      - broker: 1
        dnsAnnotations:
          external-dns.alpha.kubernetes.io/hostname: kafka-broker-1.mydomain.com.
          external-dns.alpha.kubernetes.io/ttl: "60"
      - broker: 2
        dnsAnnotations:
          external-dns.alpha.kubernetes.io/hostname: kafka-broker-2.mydomain.com.
          external-dns.alpha.kubernetes.io/ttl: "60"
# ...

3.1.6.4.4.3. Accessing Kafka using node ports

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Deploy a Kafka cluster with an external listener enabled and configured to the type nodeport.

    An example configuration with an external listener configured to use node ports:

    apiVersion: kafka.strimzi.io/v1beta1
    kind: Kafka
    spec:
      kafka:
        # ...
        listeners:
          external:
            type: nodeport
            tls: true
            # ...
        # ...
      zookeeper:
        # ...
  2. Create or update the resource.

    This can be done using oc apply:

    oc apply -f your-file
  3. Find the port number of the bootstrap service.

    This can be done using oc get:

    oc get service cluster-name-kafka-external-bootstrap -o=jsonpath='{.spec.ports[0].nodePort}{"\n"}'

    The port should be used in the Kafka bootstrap address.

  4. Find the address of the OpenShift node.

    This can be done using oc get:

    oc get node node-name -o=jsonpath='{range .status.addresses[*]}{.type}{"\t"}{.address}{"\n"}'

    If several different addresses are returned, select the address type you want based on the following order:

    1. ExternalDNS
    2. ExternalIP
    3. Hostname
    4. InternalDNS
    5. InternalIP

      Use the address with the port found in the previous step in the Kafka bootstrap address.

  5. Unless TLS encryption was disabled, extract the public certificate of the broker certification authority.

    This can be done using oc get:

    oc get secret cluster-name-cluster-ca-cert -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt

    Use the extracted certificate in your Kafka client to configure the TLS connection. If you enabled any authentication, you will also need to configure SASL or TLS authentication.

Additional resources

3.1.6.4.5. OpenShift Ingress external listeners

External listeners of type ingress expose Kafka by using Kubernetes Ingress and the NGINX Ingress Controller for Kubernetes.

3.1.6.4.5.1. Exposing Kafka using Kubernetes Ingress

When exposing Kafka using Kubernetes Ingress and the NGINX Ingress Controller for Kubernetes, a dedicated Ingress resource is created for every Kafka broker pod. An additional Ingress resource is created to serve as a Kafka bootstrap address. Kafka clients can use these Ingress resources to connect to Kafka on port 443.

Note

External listeners using Ingress have been currently tested only with the NGINX Ingress Controller for Kubernetes.

AMQ Streams uses the TLS passthrough feature of the NGINX Ingress Controller for Kubernetes. Make sure TLS passthrough is enabled in your NGINX Ingress Controller for Kubernetes deployment. For more information about enabling TLS passthrough, see the TLS passthrough documentation. Because it uses the TLS passthrough functionality, TLS encryption cannot be disabled when exposing Kafka using Ingress.
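
As an illustrative sketch only, with the community NGINX Ingress Controller, TLS passthrough is typically enabled by adding the --enable-ssl-passthrough argument to the controller container. The deployment name and namespace below are assumptions and depend on how the controller was installed:

oc -n ingress-nginx patch deployment ingress-nginx-controller --type=json \
  -p='[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--enable-ssl-passthrough"}]'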

The Ingress controller does not assign any hostnames automatically. You have to specify the hostnames which should be used by the bootstrap and per-broker services in the spec.kafka.listeners.external.configuration section. You also have to make sure that the hostnames resolve to the Ingress endpoints. AMQ Streams will not perform any validation that the requested hosts are available and properly routed to the Ingress endpoints.

Example of an external listener of type ingress

# ...
listeners:
  external:
    type: ingress
    authentication:
      type: tls
    configuration:
      bootstrap:
        host: bootstrap.myingress.com
      brokers:
      - broker: 0
        host: broker-0.myingress.com
      - broker: 1
        host: broker-1.myingress.com
      - broker: 2
        host: broker-2.myingress.com
# ...

For more information on using Ingress to access Kafka, see Section 3.1.6.4.5.4, “Accessing Kafka using ingress”.

3.1.6.4.5.2. Configuring the Ingress class

By default, the Ingress class is set to nginx. You can change the Ingress class using the class property.

Example of an external listener of type ingress using Ingress class nginx-internal

# ...
listeners:
  external:
    type: ingress
    class: nginx-internal
    # ...
# ...

3.1.6.4.5.3. Customizing the DNS names of external ingress listeners

On ingress listeners, you can use the dnsAnnotations property to add additional annotations to the ingress resources. You can use these annotations to instrument DNS tooling such as External DNS, which automatically assigns DNS names to the ingress resources.

Example of an external listener of type ingress using dnsAnnotations

# ...
listeners:
  external:
    type: ingress
    authentication:
      type: tls
    configuration:
      bootstrap:
        dnsAnnotations:
          external-dns.alpha.kubernetes.io/hostname: bootstrap.myingress.com.
          external-dns.alpha.kubernetes.io/ttl: "60"
        host: bootstrap.myingress.com
      brokers:
      - broker: 0
        dnsAnnotations:
          external-dns.alpha.kubernetes.io/hostname: broker-0.myingress.com.
          external-dns.alpha.kubernetes.io/ttl: "60"
        host: broker-0.myingress.com
      - broker: 1
        dnsAnnotations:
          external-dns.alpha.kubernetes.io/hostname: broker-1.myingress.com.
          external-dns.alpha.kubernetes.io/ttl: "60"
        host: broker-1.myingress.com
      - broker: 2
        dnsAnnotations:
          external-dns.alpha.kubernetes.io/hostname: broker-2.myingress.com.
          external-dns.alpha.kubernetes.io/ttl: "60"
        host: broker-2.myingress.com
# ...

3.1.6.4.5.4. Accessing Kafka using ingress

This procedure shows how to access AMQ Streams Kafka clusters from outside of OpenShift using Ingress.

Prerequisites

Procedure

  1. Deploy a Kafka cluster with an external listener enabled and configured to the type ingress.

    An example configuration with an external listener configured to use Ingress:

    apiVersion: kafka.strimzi.io/v1beta1
    kind: Kafka
    spec:
      kafka:
        # ...
        listeners:
          external:
            type: ingress
            authentication:
              type: tls
            configuration:
              bootstrap:
                host: bootstrap.myingress.com
              brokers:
              - broker: 0
                host: broker-0.myingress.com
              - broker: 1
                host: broker-1.myingress.com
              - broker: 2
                host: broker-2.myingress.com
        # ...
      zookeeper:
        # ...
  2. Make sure the hosts in the configuration section properly resolve to the Ingress endpoints.
  3. Create or update the resource.

    oc apply -f your-file
  4. Extract the public certificate of the broker certificate authority

    oc get secret cluster-name-cluster-ca-cert -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt
  5. Use the extracted certificate in your Kafka client to configure the TLS connection. If you enabled any authentication, you will also need to configure SASL or TLS authentication. Connect with your client to the host you specified in the configuration on port 443.

Additional resources

3.1.6.5. Network policies

AMQ Streams automatically creates a NetworkPolicy resource for every listener that is enabled on a Kafka broker. By default, a NetworkPolicy grants access to a listener to all applications and namespaces.

If you want to restrict access to a listener at the network level to only selected applications or namespaces, use the networkPolicyPeers field.

Use network policies in conjunction with authentication and authorization.

Each listener can have a different networkPolicyPeers configuration.

3.1.6.5.1. Network policy configuration for a listener

The following example shows a networkPolicyPeers configuration for a plain and a tls listener:

# ...
listeners:
  plain:
    authentication:
      type: scram-sha-512
    networkPolicyPeers:
      - podSelector:
          matchLabels:
            app: kafka-sasl-consumer
      - podSelector:
          matchLabels:
            app: kafka-sasl-producer
  tls:
    authentication:
      type: tls
    networkPolicyPeers:
      - namespaceSelector:
          matchLabels:
            project: myproject
      - namespaceSelector:
          matchLabels:
            project: myproject2
# ...

In the example:

  • Only application pods matching the labels app: kafka-sasl-consumer and app: kafka-sasl-producer can connect to the plain listener. The application pods must be running in the same namespace as the Kafka broker.
  • Only application pods running in namespaces matching the labels project: myproject and project: myproject2 can connect to the tls listener.

The syntax of the networkPolicyPeers field is the same as the from field in NetworkPolicy resources. For more information about the schema, see NetworkPolicyPeer API reference and the KafkaListeners schema reference.

Note

Your configuration of OpenShift must support ingress NetworkPolicies in order to use network policies in AMQ Streams.

3.1.6.5.2. Restricting access to Kafka listeners using networkPolicyPeers

You can restrict access to a listener to only selected applications by using the networkPolicyPeers field.

Prerequisites

  • An OpenShift cluster with support for Ingress NetworkPolicies.
  • The Cluster Operator is running.

Procedure

  1. Open the Kafka resource.
  2. In the networkPolicyPeers field, define the application pods or namespaces that will be allowed to access the Kafka cluster.

    For example, to configure a tls listener to allow connections only from application pods with the label app set to kafka-client:

    apiVersion: kafka.strimzi.io/v1beta1
    kind: Kafka
    spec:
      kafka:
        # ...
        listeners:
          tls:
            networkPolicyPeers:
              - podSelector:
                  matchLabels:
                    app: kafka-client
        # ...
      zookeeper:
        # ...
  3. Create or update the resource.

    Use oc apply:

    oc apply -f your-file

Additional resources

3.1.7. Authentication and Authorization

AMQ Streams supports authentication and authorization. Authentication can be configured independently for each listener. Authorization is always configured for the whole Kafka cluster.

3.1.7.1. Authentication

Authentication is configured as part of the listener configuration in the authentication property. The authentication mechanism is defined by the type field.

When the authentication property is missing, no authentication is enabled on a given listener. The listener will accept all connections without authentication.

Supported authentication mechanisms:

  • TLS client authentication
  • SASL SCRAM-SHA-512
  • OAuth 2.0 token-based authentication

3.1.7.1.1. TLS client authentication

TLS client authentication is enabled by specifying the type as tls. TLS client authentication is supported only on the tls listener.

An example of authentication with type tls

# ...
authentication:
  type: tls
# ...

3.1.7.2. Configuring authentication in Kafka brokers

Prerequisites

  • An OpenShift cluster is available.
  • The Cluster Operator is running.

Procedure

  1. Open the YAML configuration file that contains the Kafka resource specifying the cluster deployment.
  2. In the spec.kafka.listeners property in the Kafka resource, add the authentication field to the listeners for which you want to enable authentication. For example:

    apiVersion: kafka.strimzi.io/v1beta1
    kind: Kafka
    spec:
      kafka:
        # ...
        listeners:
          tls:
            authentication:
              type: tls
        # ...
      zookeeper:
        # ...
  3. Apply the new configuration to create or update the resource.

    Use oc apply:

    oc apply -f kafka.yaml

    where kafka.yaml is the YAML configuration file for the resource that you want to configure; for example, kafka-persistent.yaml.

Additional resources

3.1.7.3. Authorization

You can configure authorization for Kafka brokers using the authorization property in the Kafka.spec.kafka resource. If the authorization property is missing, no authorization is enabled. When enabled, authorization is applied to all enabled listeners. The authorization method is defined in the type field.

You can configure simple authorization using ACL rules, as described in the following section.

3.1.7.3.1. Simple authorization

Simple authorization in AMQ Streams uses the SimpleAclAuthorizer plugin, the default Access Control Lists (ACLs) authorization plugin provided with Apache Kafka. ACLs allow you to define which users have access to which resources at a granular level. To enable simple authorization, set the type field to simple.

An example of Simple authorization

# ...
authorization:
  type: simple
# ...

Access rules for users are defined using Access Control Lists (ACLs). You can optionally designate a list of super users in the superUsers field.
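
For illustration, access rules might be declared in a KafkaUser resource similar to the following sketch; the user name, topic name, and operations are examples only:

apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaUser
metadata:
  name: my-user
  labels:
    strimzi.io/cluster: my-cluster
spec:
  authentication:
    type: tls
  authorization:
    type: simple
    acls:
      - resource:
          type: topic
          name: my-topic
          patternType: literal
        operation: Read
        host: "*"
      - resource:
          type: topic
          name: my-topic
          patternType: literal
        operation: Describe
        host: "*"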

3.1.7.3.2. Super users

Super users can access all resources in your Kafka cluster regardless of any access restrictions defined in ACLs. To designate super users for a Kafka cluster, enter a list of user principals in the superUsers field. If a user uses TLS client authentication, the username will be the common name from their certificate subject prefixed with CN=.

An example of designating super users

# ...
authorization:
  type: simple
  superUsers:
    - CN=fred
    - sam
    - CN=edward
# ...

Note

The super.user configuration option in the config property in Kafka.spec.kafka is ignored. Designate super users in the authorization property instead. For more information, see Kafka broker configuration.

3.1.7.4. Configuring authorization in Kafka brokers

Configure authorization and designate super users for a particular Kafka broker.

Prerequisites

  • An OpenShift cluster
  • The Cluster Operator is running

Procedure

  1. Add or edit the authorization property in the Kafka.spec.kafka resource. For example:

    apiVersion: kafka.strimzi.io/v1beta1
    kind: Kafka
    spec:
      kafka:
        # ...
        authorization:
          type: simple
          superUsers:
            - CN=fred
            - sam
            - CN=edward
        # ...
      zookeeper:
        # ...
  2. Create or update the resource.

    This can be done using oc apply:

    oc apply -f your-file

Additional resources

3.1.8. ZooKeeper replicas

ZooKeeper clusters or ensembles usually run with an odd number of nodes, typically three, five, or seven.

The majority of nodes must be available in order to maintain an effective quorum. If the ZooKeeper cluster loses its quorum, it will stop responding to clients and the Kafka brokers will stop working. Having a stable and highly available ZooKeeper cluster is crucial for AMQ Streams.

Three-node cluster
A three-node ZooKeeper cluster requires at least two nodes to be up and running in order to maintain the quorum. It can tolerate only one node being unavailable.
Five-node cluster
A five-node ZooKeeper cluster requires at least three nodes to be up and running in order to maintain the quorum. It can tolerate two nodes being unavailable.
Seven-node cluster
A seven-node ZooKeeper cluster requires at least four nodes to be up and running in order to maintain the quorum. It can tolerate three nodes being unavailable.
Note

For development purposes, it is also possible to run ZooKeeper with a single node.

Having more nodes does not necessarily mean better performance, as the costs to maintain the quorum rise with the number of nodes in the cluster. In general, an ensemble of 2F+1 nodes can tolerate the failure of F nodes while maintaining the quorum. Depending on your availability requirements, decide on the number of nodes to use.

3.1.8.1. Number of ZooKeeper nodes

The number of ZooKeeper nodes can be configured using the replicas property in Kafka.spec.zookeeper.

An example showing replicas configuration

apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
  zookeeper:
    # ...
    replicas: 3
    # ...

3.1.8.2. Changing the number of ZooKeeper replicas

Prerequisites

  • An OpenShift cluster is available.
  • The Cluster Operator is running.

Procedure

  1. Open the YAML configuration file that contains the Kafka resource specifying the cluster deployment.
  2. In the spec.zookeeper.replicas property in the Kafka resource, enter the number of replicated ZooKeeper servers. For example:

    apiVersion: kafka.strimzi.io/v1beta1
    kind: Kafka
    metadata:
      name: my-cluster
    spec:
      kafka:
        # ...
      zookeeper:
        # ...
        replicas: 3
        # ...
  3. Apply the new configuration to create or update the resource.

    Use oc apply:

    oc apply -f kafka.yaml

    where kafka.yaml is the YAML configuration file for the resource that you want to configure; for example, kafka-persistent.yaml.

3.1.9. ZooKeeper configuration

AMQ Streams allows you to customize the configuration of Apache ZooKeeper nodes. You can specify and configure most of the options listed in the ZooKeeper documentation.

Options which cannot be configured are those related to the following areas:

  • Security (Encryption, Authentication, and Authorization)
  • Listener configuration
  • Configuration of data directories
  • ZooKeeper cluster composition

These options are automatically configured by AMQ Streams.

3.1.9.1. ZooKeeper configuration

ZooKeeper nodes are configured using the config property in Kafka.spec.zookeeper. This property contains the ZooKeeper configuration options as keys. The values can be described using one of the following JSON types:

  • String
  • Number
  • Boolean

Users can specify and configure the options listed in ZooKeeper documentation with the exception of those options which are managed directly by AMQ Streams. Specifically, all configuration options with keys equal to or starting with one of the following strings are forbidden:

  • server.
  • dataDir
  • dataLogDir
  • clientPort
  • authProvider
  • quorum.auth
  • requireClientAuthScheme

When one of the forbidden options is present in the config property, it is ignored and a warning message is printed to the Cluster Operator log file. All other options are passed to ZooKeeper.

Important

The Cluster Operator does not validate keys or values in the provided config object. When invalid configuration is provided, the ZooKeeper cluster might not start or might become unstable. In such cases, the configuration in the Kafka.spec.zookeeper.config object should be fixed and the Cluster Operator will roll out the new configuration to all ZooKeeper nodes.

Selected options have default values:

  • timeTick with default value 2000
  • initLimit with default value 5
  • syncLimit with default value 2
  • autopurge.purgeInterval with default value 1

These options will be automatically configured when they are not present in the Kafka.spec.zookeeper.config property.

An example showing ZooKeeper configuration

apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
spec:
  kafka:
    # ...
  zookeeper:
    # ...
    config:
      autopurge.snapRetainCount: 3
      autopurge.purgeInterval: 1
    # ...

3.1.9.2. Configuring ZooKeeper

Prerequisites

  • An OpenShift cluster is available.
  • The Cluster Operator is running.

Procedure

  1. Open the YAML configuration file that contains the Kafka resource specifying the cluster deployment.
  2. In the spec.zookeeper.config property in the Kafka resource, enter one or more ZooKeeper configuration settings. For example:

    apiVersion: kafka.strimzi.io/v1beta1
    kind: Kafka
    spec:
      kafka:
        # ...
      zookeeper:
        # ...
        config:
          autopurge.snapRetainCount: 3
          autopurge.purgeInterval: 1
        # ...
  3. Apply the new configuration to create or update the resource.

    Use oc apply:

    oc apply -f kafka.yaml

    where kafka.yaml is the YAML configuration file for the resource that you want to configure; for example, kafka-persistent.yaml.

3.1.10. ZooKeeper connection

ZooKeeper services are secured with encryption and authentication and are not intended to be used by external applications that are not part of AMQ Streams.

However, if you want to use Kafka CLI tools that require a connection to ZooKeeper, such as the kafka-topics tool, you can use a terminal inside a Kafka container and connect to the local end of the TLS tunnel to ZooKeeper by using localhost:2181 as the ZooKeeper address.

3.1.10.1. Connecting to ZooKeeper from a terminal

Open a terminal inside a Kafka container to use Kafka CLI tools that require a ZooKeeper connection.

Prerequisites

  • An OpenShift cluster is available.
  • A Kafka cluster is running.
  • The Cluster Operator is running.

Procedure

  1. Open the terminal using the OpenShift console or run the exec command from your CLI.

    For example:

    oc exec -it my-cluster-kafka-0 -- bin/kafka-topics.sh --list --zookeeper localhost:2181

    Be sure to use localhost:2181.

    You can now run Kafka commands that require a ZooKeeper connection, as in the example below.
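
    For instance, to describe an existing topic (the topic name my-topic is an example):

    oc exec -it my-cluster-kafka-0 -- bin/kafka-topics.sh --describe --topic my-topic --zookeeper localhost:2181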

3.1.11. Entity Operator

The Entity Operator is responsible for managing Kafka-related entities in a running Kafka cluster.

The Entity Operator comprises the:

  • Topic Operator to manage Kafka topics
  • User Operator to manage Kafka users

Through Kafka resource configuration, the Cluster Operator can deploy the Entity Operator, including one or both operators, when deploying a Kafka cluster.

Note

When deployed, the Entity Operator contains the operators according to the deployment configuration.

The operators are automatically configured to manage the topics and users of the Kafka cluster.

3.1.11.1. Entity Operator configuration properties

The Entity Operator can be configured using the entityOperator property in Kafka.spec.

The entityOperator property supports several sub-properties:

  • tlsSidecar
  • topicOperator
  • userOperator
  • template

The tlsSidecar property can be used to configure the TLS sidecar container which is used to communicate with ZooKeeper. For more details about configuring the TLS sidecar, see Section 3.1.20, “TLS sidecar”.

The template property can be used to configure details of the Entity Operator pod, such as labels, annotations, affinity, tolerations and so on.

The topicOperator property contains the configuration of the Topic Operator. When this option is missing, the Entity Operator is deployed without the Topic Operator.

The userOperator property contains the configuration of the User Operator. When this option is missing, the Entity Operator is deployed without the User Operator.

Example of basic configuration enabling both operators

apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
  zookeeper:
    # ...
  entityOperator:
    topicOperator: {}
    userOperator: {}

When both topicOperator and userOperator properties are missing, the Entity Operator is not deployed.

3.1.11.2. Topic Operator configuration properties

Topic Operator deployment can be configured using additional options inside the topicOperator object. The following properties are supported:

watchedNamespace
The OpenShift namespace in which the Topic Operator watches for KafkaTopics. Default is the namespace where the Kafka cluster is deployed.
reconciliationIntervalSeconds
The interval between periodic reconciliations in seconds. Default 90.
zookeeperSessionTimeoutSeconds
The ZooKeeper session timeout in seconds. Default 20.
topicMetadataMaxAttempts
The number of attempts at getting topic metadata from Kafka. The time between each attempt is defined as an exponential back-off. Consider increasing this value when topic creation could take more time due to the number of partitions or replicas. Default 6.
image
The image property can be used to configure the container image which will be used. For more details about configuring custom container images, see Section 3.1.19, “Container images”.
resources
The resources property configures the amount of resources allocated to the Topic Operator. For more details about resource request and limit configuration, see Section 3.1.12, “CPU and memory resources”.
logging
The logging property configures the logging of the Topic Operator. For more details, see Section 3.1.11.4, “Operator loggers”.

Example of Topic Operator configuration

apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
  zookeeper:
    # ...
  entityOperator:
    # ...
    topicOperator:
      watchedNamespace: my-topic-namespace
      reconciliationIntervalSeconds: 60
    # ...

3.1.11.3. User Operator configuration properties

User Operator deployment can be configured using additional options inside the userOperator object. The following properties are supported:

watchedNamespace
The OpenShift namespace in which the User Operator watches for KafkaUsers. Default is the namespace where the Kafka cluster is deployed.
reconciliationIntervalSeconds
The interval between periodic reconciliations in seconds. Default 120.
zookeeperSessionTimeoutSeconds
The ZooKeeper session timeout in seconds. Default 6.
image
The image property can be used to configure the container image which will be used. For more details about configuring custom container images, see Section 3.1.19, “Container images”.
resources
The resources property configures the amount of resources allocated to the User Operator. For more details about resource request and limit configuration, see Section 3.1.12, “CPU and memory resources”.
logging
The logging property configures the logging of the User Operator. For more details, see Section 3.1.11.4, “Operator loggers”.

Example of User Operator configuration

apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
  zookeeper:
    # ...
  entityOperator:
    # ...
    userOperator:
      watchedNamespace: my-user-namespace
      reconciliationIntervalSeconds: 60
    # ...

3.1.11.4. Operator loggers

The Topic Operator and User Operator have a configurable logger:

  • rootLogger.level

The operators use the Apache log4j2 logger implementation.

Use the logging property in the Kafka resource to configure loggers and logger levels.

You can set the log levels by specifying the loggers and levels directly (inline) or by using a custom (external) ConfigMap. If a ConfigMap is used, you set the logging.name property to the name of the ConfigMap containing the external logging configuration. Inside the ConfigMap, the logging configuration is described using log4j2.properties.

The following examples show inline and external logging.

Inline logging

apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
  zookeeper:
    # ...
  entityOperator:
    # ...
    topicOperator:
      watchedNamespace: my-topic-namespace
      reconciliationIntervalSeconds: 60
      logging:
        type: inline
        loggers:
          rootLogger.level: INFO
    # ...
    userOperator:
      watchedNamespace: my-topic-namespace
      reconciliationIntervalSeconds: 60
      logging:
        type: inline
        loggers:
          rootLogger.level: INFO
# ...

External logging

apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
  zookeeper:
    # ...
  entityOperator:
    # ...
    topicOperator:
      watchedNamespace: my-topic-namespace
      reconciliationIntervalSeconds: 60
      logging:
        type: external
        name: customConfigMap
# ...
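
The logging.name value refers to a ConfigMap that you create separately. The following is a minimal sketch of such a ConfigMap, assuming the name customConfigMap used in the example above and the log4j2.properties key described earlier; a complete configuration typically also defines appenders:

apiVersion: v1
kind: ConfigMap
metadata:
  name: customConfigMap
data:
  log4j2.properties: |
    # Root logger level for the operator; appenders omitted for brevity
    rootLogger.level = INFO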

3.1.11.5. Configuring Entity Operator

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Edit the entityOperator property in the Kafka resource. For example:

    apiVersion: kafka.strimzi.io/v1beta1
    kind: Kafka
    metadata:
      name: my-cluster
    spec:
      kafka:
        # ...
      zookeeper:
        # ...
      entityOperator:
        topicOperator:
          watchedNamespace: my-topic-namespace
          reconciliationIntervalSeconds: 60
        userOperator:
          watchedNamespace: my-user-namespace
          reconciliationIntervalSeconds: 60
  2. Create or update the resource.

    This can be done using oc apply:

    oc apply -f your-file

3.1.12. CPU and memory resources

For every deployed container, AMQ Streams allows you to request specific resources and define the maximum consumption of those resources.

AMQ Streams supports two types of resources:

  • CPU
  • Memory

AMQ Streams uses the OpenShift syntax for specifying CPU and memory resources.

3.1.12.1. Resource limits and requests

Resource limits and requests are configured using the resources property in the following resources:

  • Kafka.spec.kafka
  • Kafka.spec.kafka.tlsSidecar
  • Kafka.spec.zookeeper
  • Kafka.spec.zookeeper.tlsSidecar
  • Kafka.spec.entityOperator.topicOperator
  • Kafka.spec.entityOperator.userOperator
  • Kafka.spec.entityOperator.tlsSidecar
  • Kafka.spec.kafkaExporter
  • KafkaConnect.spec
  • KafkaConnectS2I.spec
  • KafkaBridge.spec

3.1.12.1.1. Resource requests

Requests specify the resources to reserve for a given container. Reserving the resources ensures that they are always available.

Important

If the resource request is for more than the available free resources in the OpenShift cluster, the pod is not scheduled.

Resource requests are specified in the requests property. Resource requests currently supported by AMQ Streams:

  • cpu
  • memory

A request may be configured for one or more supported resources.

Example resource request configuration with all resources

# ...
resources:
  requests:
    cpu: 12
    memory: 64Gi
# ...

3.1.12.1.2. Resource limits

Limits specify the maximum resources that can be consumed by a given container. The limit is not reserved and might not always be available. A container can use the resources up to the limit only when they are available. Resource limits should always be higher than the resource requests.

Resource limits are specified in the limits property. Resource limits currently supported by AMQ Streams:

  • cpu
  • memory

A limit may be configured for one or more supported resources.

Example resource limits configuration

# ...
resources:
  limits:
    cpu: 12
    memory: 64Gi
# ...

3.1.12.1.3. Supported CPU formats

CPU requests and limits are supported in the following formats:

  • Number of CPU cores as an integer (5 CPU cores) or decimal (2.5 CPU cores).
  • Number of millicpus/millicores (100m), where 1000 millicores is the same as 1 CPU core.

Example CPU units

# ...
resources:
  requests:
    cpu: 500m
  limits:
    cpu: 2.5
# ...

Note

The computing power of 1 CPU core may differ depending on the platform where OpenShift is deployed.

3.1.12.1.4. Supported memory formats

Memory requests and limits are specified in megabytes, gigabytes, mebibytes, and gibibytes.

  • To specify memory in megabytes, use the M suffix. For example 1000M.
  • To specify memory in gigabytes, use the G suffix. For example 1G.
  • To specify memory in mebibytes, use the Mi suffix. For example 1000Mi.
  • To specify memory in gibibytes, use the Gi suffix. For example 1Gi.

An example of using different memory units

# ...
resources:
  requests:
    memory: 512Mi
  limits:
    memory: 2Gi
# ...

Additional resources

  • For more details about memory specification and additional supported units, see Meaning of memory.
3.1.12.2. Configuring resource requests and limits

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Edit the resources property in the resource specifying the cluster deployment. For example:

    apiVersion: kafka.strimzi.io/v1beta1
    kind: Kafka
    spec:
      kafka:
        # ...
        resources:
          requests:
            cpu: "8"
            memory: 64Gi
          limits:
            cpu: "12"
            memory: 128Gi
        # ...
      zookeeper:
        # ...
  2. Create or update the resource.

    This can be done using oc apply:

    oc apply -f your-file

3.1.13. Kafka loggers

Kafka has its own configurable loggers:

  • kafka.root.logger.level
  • log4j.logger.org.I0Itec.zkclient.ZkClient
  • log4j.logger.org.apache.zookeeper
  • log4j.logger.kafka
  • log4j.logger.org.apache.kafka
  • log4j.logger.kafka.request.logger
  • log4j.logger.kafka.network.Processor
  • log4j.logger.kafka.server.KafkaApis
  • log4j.logger.kafka.network.RequestChannel$
  • log4j.logger.kafka.controller
  • log4j.logger.kafka.log.LogCleaner
  • log4j.logger.state.change.logger
  • log4j.logger.kafka.authorizer.logger

ZooKeeper also has a configurable logger:

  • zookeeper.root.logger

Kafka and ZooKeeper use the Apache log4j logger implementation.

Use the logging property to configure loggers and logger levels.

You can set the log levels by specifying the loggers and levels directly (inline) or by using a custom (external) ConfigMap. If a ConfigMap is used, you set the logging.name property to the name of the ConfigMap containing the external logging configuration. Inside the ConfigMap, the logging configuration is described using log4j.properties.

The following examples show inline and external logging.

Inline logging

apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
spec:
  kafka:
    # ...
    logging:
      type: inline
      loggers:
        kafka.root.logger.level: "INFO"
    # ...
  zookeeper:
    # ...
    logging:
      type: inline
      loggers:
        zookeeper.root.logger: "INFO"
  # ...
  entityOperator:
    # ...
    topicOperator:
      # ...
      logging:
        type: inline
        loggers:
          rootLogger.level: INFO
    # ...
    userOperator:
      # ...
      logging:
        type: inline
        loggers:
          rootLogger.level: INFO
    # ...

External logging

apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
spec:
  kafka:
    # ...
    logging:
      type: external
      name: customConfigMap
    # ...
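
For Kafka and ZooKeeper, the referenced ConfigMap describes the logging configuration in log4j.properties format. A minimal sketch for the Kafka brokers, assuming the ConfigMap name customConfigMap from the example above and using logger names listed at the start of this section; a complete configuration also defines appenders:

apiVersion: v1
kind: ConfigMap
metadata:
  name: customConfigMap
data:
  log4j.properties: |
    # Logger levels only; appenders omitted for brevity
    kafka.root.logger.level=INFO
    log4j.logger.kafka.controller=TRACE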

Operators use the Apache log4j2 logger implementation, so the logging configuration is described inside the ConfigMap using log4j2.properties. For more information, see Section 3.1.11.4, “Operator loggers”.

3.1.14. Kafka rack awareness

The rack awareness feature in AMQ Streams helps to spread the Kafka broker pods and Kafka topic replicas across different racks. Enabling rack awareness helps to improve availability of Kafka brokers and the topics they are hosting.

Note

"Rack" might represent an availability zone, data center, or an actual rack in your data center.

3.1.14.1. Configuring rack awareness in Kafka brokers

Kafka rack awareness can be configured in the rack property of Kafka.spec.kafka. The rack object has one mandatory field named topologyKey. This key must match one of the labels assigned to the OpenShift cluster nodes. The label is used by OpenShift when scheduling the Kafka broker pods to nodes. If the OpenShift cluster is running on a cloud provider platform, that label should represent the availability zone where the node is running. Usually, the nodes are labeled with failure-domain.beta.kubernetes.io/zone, which can be easily used as the topologyKey value. This has the effect of spreading the broker pods across zones, and also setting the brokers' broker.rack configuration parameter inside the Kafka brokers.

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Consult your OpenShift administrator regarding the node label that represents the zone / rack into which the node is deployed.
  2. Edit the rack property in the Kafka resource using the label as the topology key.

    apiVersion: kafka.strimzi.io/v1beta1
    kind: Kafka
    metadata:
      name: my-cluster
    spec:
      kafka:
        # ...
        rack:
          topologyKey: failure-domain.beta.kubernetes.io/zone
        # ...
  3. Create or update the resource.

    This can be done using oc apply:

    oc apply -f your-file

3.1.15. Healthchecks

Healthchecks are periodic tests that verify the health of an application. When a healthcheck probe fails, OpenShift assumes that the application is not healthy and attempts to fix it.

OpenShift supports two types of Healthcheck probes:

  • Liveness probes
  • Readiness probes

For more details about the probes, see Configure Liveness and Readiness Probes. Both types of probes are used in AMQ Streams components.

Users can configure selected options for liveness and readiness probes.

3.1.15.1. Healthcheck configurations

Liveness and readiness probes can be configured using the livenessProbe and readinessProbe properties in the following resources:

  • Kafka.spec.kafka
  • Kafka.spec.kafka.tlsSidecar
  • Kafka.spec.zookeeper
  • Kafka.spec.zookeeper.tlsSidecar
  • Kafka.spec.entityOperator.tlsSidecar
  • Kafka.spec.entityOperator.topicOperator
  • Kafka.spec.entityOperator.userOperator
  • Kafka.spec.kafkaExporter
  • KafkaConnect.spec
  • KafkaConnectS2I.spec
  • KafkaMirrorMaker.spec
  • KafkaBridge.spec

Both livenessProbe and readinessProbe support the following options:

  • initialDelaySeconds
  • timeoutSeconds
  • periodSeconds
  • successThreshold
  • failureThreshold

For more information about the livenessProbe and readinessProbe options, see Section B.39, “Probe schema reference”.

An example of liveness and readiness probe configuration

# ...
readinessProbe:
  initialDelaySeconds: 15
  timeoutSeconds: 5
livenessProbe:
  initialDelaySeconds: 15
  timeoutSeconds: 5
# ...

3.1.15.2. Configuring healthchecks

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Edit the livenessProbe or readinessProbe property in the Kafka, KafkaConnect or KafkaConnectS2I resource. For example:

    apiVersion: kafka.strimzi.io/v1beta1
    kind: Kafka
    metadata:
      name: my-cluster
    spec:
      kafka:
        # ...
        readinessProbe:
          initialDelaySeconds: 15
          timeoutSeconds: 5
        livenessProbe:
          initialDelaySeconds: 15
          timeoutSeconds: 5
        # ...
      zookeeper:
        # ...
  2. Create or update the resource.

    This can be done using oc apply:

    oc apply -f your-file

3.1.16. Prometheus metrics

AMQ Streams supports Prometheus metrics by using the Prometheus JMX Exporter to convert the JMX metrics supported by Apache Kafka and ZooKeeper to Prometheus metrics. When metrics are enabled, they are exposed on port 9404.

For more information about configuring Prometheus and Grafana, see Metrics.

3.1.16.1. Metrics configuration

Prometheus metrics are enabled by configuring the metrics property in the following resources:

  • Kafka.spec.kafka
  • Kafka.spec.zookeeper
  • KafkaConnect.spec
  • KafkaConnectS2I.spec

When the metrics property is not defined in the resource, Prometheus metrics are disabled. To enable Prometheus metrics export without any further configuration, set the property to an empty object ({}).

Example of enabling metrics without any further configuration

apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    metrics: {}
    # ...
  zookeeper:
    # ...

The metrics property might contain additional configuration for the Prometheus JMX exporter.

Example of enabling metrics with additional Prometheus JMX Exporter configuration

apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    metrics:
      lowercaseOutputName: true
      rules:
        - pattern: "kafka.server<type=(.+), name=(.+)PerSec\\w*><>Count"
          name: "kafka_server_$1_$2_total"
        - pattern: "kafka.server<type=(.+), name=(.+)PerSec\\w*, topic=(.+)><>Count"
          name: "kafka_server_$1_$2_total"
          labels:
            topic: "$3"
    # ...
  zookeeper:
    # ...

3.1.16.2. Configuring Prometheus metrics

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Edit the metrics property in the Kafka, KafkaConnect or KafkaConnectS2I resource. For example:

    apiVersion: kafka.strimzi.io/v1beta1
    kind: Kafka
    metadata:
      name: my-cluster
    spec:
      kafka:
        # ...
      zookeeper:
        # ...
        metrics:
          lowercaseOutputName: true
        # ...
  2. Create or update the resource.

    This can be done using oc apply:

    oc apply -f your-file

3.1.17. JMX Options

AMQ Streams supports obtaining JMX metrics from the Kafka brokers by opening a JMX port on 9999. You can obtain various metrics about each Kafka broker, for example, usage data such as the BytesPerSecond value or the request rate of the broker's network. AMQ Streams supports opening a JMX port that is protected with a username and password, or an unprotected JMX port.

3.1.17.1. Configuring JMX options

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

You can configure JMX options by using the jmxOptions property in the following resources:

  • Kafka.spec.kafka

You can configure username and password protection for the JMX port that is opened on the Kafka brokers.

Securing the JMX Port

You can secure the JMX port to prevent unauthorized pods from accessing the port. Currently, the JMX port can only be secured using a username and password. To enable security for the JMX port, set the type parameter in the authentication field to password:

apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    jmxOptions:
      authentication:
        type: "password"
    # ...
  zookeeper:
    # ...

This allows you to deploy a pod inside the cluster and obtain JMX metrics by using the headless service and specifying which broker you want to address. To get JMX metrics from broker 0, address the headless service by prepending broker 0 to the headless service name:

"<cluster-name>-kafka-0-<cluster-name>-<headless-service-name>"

If the JMX port is secured, you can get the username and password by referencing them from the JMX secret in the deployment of your pod.
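
For example, a client pod can obtain the credentials as environment variables by referencing the secret. The following is a minimal sketch; the secret name my-cluster-kafka-jmx and the key names jmx-username and jmx-password are assumptions based on a cluster named my-cluster, so verify the exact names of the secrets created in your namespace:

apiVersion: v1
kind: Pod
metadata:
  name: jmx-client
spec:
  containers:
    - name: client
      image: my-org/my-jmx-client:latest  # hypothetical client image
      env:
        - name: JMX_USERNAME
          valueFrom:
            secretKeyRef:
              name: my-cluster-kafka-jmx  # assumed secret name
              key: jmx-username           # assumed key name
        - name: JMX_PASSWORD
          valueFrom:
            secretKeyRef:
              name: my-cluster-kafka-jmx  # assumed secret name
              key: jmx-password           # assumed key name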

Using an open JMX port

To disable security for the JMX port, do not fill in the authentication field:

apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    jmxOptions: {}
    # ...
  zookeeper:
    # ...

This opens the JMX port on the headless service, and you can follow a similar approach to the one described above to deploy a pod into the cluster. The only difference is that any pod is able to read from the JMX port.

3.1.18. JVM Options

The following components of AMQ Streams run inside a Virtual Machine (VM):

  • Apache Kafka
  • Apache ZooKeeper
  • Apache Kafka Connect
  • Apache Kafka MirrorMaker
  • AMQ Streams Kafka Bridge

JVM configuration options optimize the performance for different platforms and architectures. AMQ Streams allows you to configure some of these options.

3.1.18.1. JVM configuration

JVM options can be configured using the jvmOptions property in following resources:

  • Kafka.spec.kafka
  • Kafka.spec.zookeeper
  • KafkaConnect.spec
  • KafkaConnectS2I.spec
  • KafkaMirrorMaker.spec
  • KafkaBridge.spec

Only a selected subset of available JVM options can be configured. The following options are supported:

-Xms and -Xmx

-Xms configures the minimum initial allocation heap size when the JVM starts. -Xmx configures the maximum heap size.

Note

The units accepted by JVM settings such as -Xmx and -Xms are those accepted by the JDK java binary in the corresponding image. Accordingly, 1g or 1G means 1,073,741,824 bytes, and Gi is not a valid unit suffix. This is in contrast to the units used for memory requests and limits, which follow the OpenShift convention where 1G means 1,000,000,000 bytes, and 1Gi means 1,073,741,824 bytes.

The default values used for -Xms and -Xmx depend on whether there is a memory limit configured for the container:

  • If there is a memory limit then the JVM’s minimum and maximum memory will be set to a value corresponding to the limit.
  • If there is no memory limit then the JVM’s minimum memory will be set to 128M and the JVM’s maximum memory will not be defined. This allows for the JVM’s memory to grow as-needed, which is ideal for single node environments in test and development.
Important

Setting -Xmx explicitly requires some care:

  • The JVM’s overall memory usage will be approximately 4 × the maximum heap, as configured by -Xmx.
  • If -Xmx is set without also setting an appropriate OpenShift memory limit, it is possible that the container will be killed should the OpenShift node experience memory pressure (from other Pods running on it).
  • If -Xmx is set without also setting an appropriate OpenShift memory request, it is possible that the container will be scheduled to a node with insufficient memory. In this case, the container will not start but crash (immediately if -Xms is set to -Xmx, or some later time if not).

When setting -Xmx explicitly, it is recommended to:

  • set the memory request and the memory limit to the same value,
  • use a memory request that is at least 4.5 × the -Xmx,
  • consider setting -Xms to the same value as -Xmx.
Important

Containers doing lots of disk I/O (such as Kafka broker containers) will need to leave some memory available for use as operating system page cache. On such containers, the requested memory should be significantly higher than the memory used by the JVM.

Example fragment configuring -Xmx and -Xms

# ...
jvmOptions:
  "-Xmx": "2g"
  "-Xms": "2g"
# ...

In the above example, the JVM uses 2 GiB (=2,147,483,648 bytes) for its heap. Its total memory usage is approximately 8 GiB.

Setting the same value for initial (-Xms) and maximum (-Xmx) heap sizes avoids the JVM having to allocate memory after startup, at the cost of possibly allocating more heap than is really needed. For Kafka and ZooKeeper pods such allocation could cause unwanted latency. For Kafka Connect, avoiding over-allocation may be the most important concern, especially in distributed mode where the effects of over-allocation are multiplied by the number of consumers.
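
Putting the recommendations above together, a fragment that sets -Xms equal to -Xmx and pairs the heap with matching memory requests and limits might look like the following sketch; the values shown are illustrative and depend on your workload:

# ...
jvmOptions:
  "-Xms": "2g"
  "-Xmx": "2g"
resources:
  requests:
    memory: 10Gi  # at least 4.5 x the -Xmx value, leaving room for the page cache
  limits:
    memory: 10Gi  # same value as the request, as recommended
# ...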

-server

-server enables the server JVM. This option can be set to true or false.

Example fragment configuring -server

# ...
jvmOptions:
  "-server": true
# ...

Note

When neither of the two options (-server and -XX) is specified, the default Apache Kafka configuration of KAFKA_JVM_PERFORMANCE_OPTS will be used.

-XX

The -XX object can be used to configure advanced runtime options of a JVM. The -server and -XX options are used to configure the KAFKA_JVM_PERFORMANCE_OPTS option of Apache Kafka.

Example showing the use of the -XX object

jvmOptions:
  "-XX":
    "UseG1GC": true
    "MaxGCPauseMillis": 20
    "InitiatingHeapOccupancyPercent": 35
    "ExplicitGCInvokesConcurrent": true
    "UseParNewGC": false

The example configuration above will result in the following JVM options:

-XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:+ExplicitGCInvokesConcurrent -XX:-UseParNewGC
Note

When neither of the two options (-server and -XX) is specified, the default Apache Kafka configuration of KAFKA_JVM_PERFORMANCE_OPTS will be used.

3.1.18.1.1. Garbage collector logging

The jvmOptions section also allows you to enable and disable garbage collector (GC) logging. GC logging is disabled by default. To enable it, set the gcLoggingEnabled property as follows:

Example of enabling GC logging

# ...
jvmOptions:
  gcLoggingEnabled: true
# ...

3.1.18.2. Configuring JVM options

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Edit the jvmOptions property in the Kafka, KafkaConnect, KafkaConnectS2I, KafkaMirrorMaker, or KafkaBridge resource. For example:

    apiVersion: kafka.strimzi.io/v1beta1
    kind: Kafka
    metadata:
      name: my-cluster
    spec:
      kafka:
        # ...
        jvmOptions:
          "-Xmx": "8g"
          "-Xms": "8g"
        # ...
      zookeeper:
        # ...
  2. Create or update the resource.

    This can be done using oc apply:

    oc apply -f your-file

3.1.19. Container images

AMQ Streams allows you to configure the container images used for its components. Overriding container images is recommended only in special situations where you need to use a different container registry, for example, because your network does not allow access to the container repository used by AMQ Streams. In such a case, you should either copy the AMQ Streams images or build them from source. If the configured image is not compatible with the AMQ Streams images, it might not work properly.

3.1.19.1. Container image configurations

You can specify which container image to use for each component using the image property in the following resources:

  • Kafka.spec.kafka
  • Kafka.spec.kafka.tlsSidecar
  • Kafka.spec.zookeeper
  • Kafka.spec.zookeeper.tlsSidecar
  • Kafka.spec.entityOperator.topicOperator
  • Kafka.spec.entityOperator.userOperator
  • Kafka.spec.entityOperator.tlsSidecar
  • KafkaConnect.spec
  • KafkaConnectS2I.spec
  • KafkaBridge.spec
3.1.19.1.1. Configuring the image property for Kafka, Kafka Connect, and Kafka MirrorMaker

Kafka, Kafka Connect (including Kafka Connect with S2I support), and Kafka MirrorMaker support multiple versions of Kafka. Each component requires its own image. The default images for the different Kafka versions are configured in the following environment variables:

  • STRIMZI_KAFKA_IMAGES
  • STRIMZI_KAFKA_CONNECT_IMAGES
  • STRIMZI_KAFKA_CONNECT_S2I_IMAGES
  • STRIMZI_KAFKA_MIRROR_MAKER_IMAGES

These environment variables contain mappings between the Kafka versions and their corresponding images. The mappings are used together with the image and version properties:

  • If neither image nor version is given in the custom resource, the version defaults to the Cluster Operator’s default Kafka version, and the image is the one corresponding to this version in the environment variable.
  • If image is given but version is not, then the given image is used and the version is assumed to be the Cluster Operator’s default Kafka version.
  • If version is given but image is not, then the image that corresponds to the given version in the environment variable is used.
  • If both version and image are given, then the given image is used. The image is assumed to contain a Kafka image with the given version.

The image and version for the different components can be configured in the following properties:

  • For Kafka in spec.kafka.image and spec.kafka.version.
  • For Kafka Connect, Kafka Connect S2I, and Kafka MirrorMaker in spec.image and spec.version.
Warning

It is recommended to provide only the version and leave the image property unspecified. This reduces the chance of making a mistake when configuring the custom resource. If you need to change the images used for different versions of Kafka, it is preferable to configure the Cluster Operator’s environment variables.
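
For example, following this recommendation you specify only the version and leave image unset, as in the following sketch; the version 2.4.0 is an assumption based on the default images for this release, so use a Kafka version supported by your Cluster Operator:

apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    version: 2.4.0  # assumed version; the image is resolved from the Cluster Operator environment variables
    # ...
  zookeeper:
    # ...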

3.1.19.1.2. Configuring the image property in other resources

For the image property in the other custom resources, the given value will be used during deployment. If the image property is missing, the image specified in the Cluster Operator configuration will be used. If the image name is not defined in the Cluster Operator configuration, then the default value will be used.

  • For Kafka broker TLS sidecar:

    1. Container image specified in the STRIMZI_DEFAULT_TLS_SIDECAR_KAFKA_IMAGE environment variable from the Cluster Operator configuration.
    2. registry.redhat.io/amq7/amq-streams-kafka-24-rhel7:1.4.0 container image.
  • For ZooKeeper nodes:
  • For ZooKeeper node TLS sidecar:

    1. Container image specified in the STRIMZI_DEFAULT_TLS_SIDECAR_ZOOKEEPER_IMAGE environment variable from the Cluster Operator configuration.
    2. registry.redhat.io/amq7/amq-streams-kafka-24-rhel7:1.4.0 container image.
  • For Topic Operator:

    1. Container image specified in the STRIMZI_DEFAULT_TOPIC_OPERATOR_IMAGE environment variable from the Cluster Operator configuration.
    2. registry.redhat.io/amq7/amq-streams-rhel7-operator:1.4.0 container image.
  • For User Operator:

    1. Container image specified in the STRIMZI_DEFAULT_USER_OPERATOR_IMAGE environment variable from the Cluster Operator configuration.
    2. registry.redhat.io/amq7/amq-streams-rhel7-operator:1.4.0 container image.
  • For Entity Operator TLS sidecar:

    1. Container image specified in the STRIMZI_DEFAULT_TLS_SIDECAR_ENTITY_OPERATOR_IMAGE environment variable from the Cluster Operator configuration.
    2. registry.redhat.io/amq7/amq-streams-kafka-24-rhel7:1.4.0 container image.
  • For Kafka Exporter:

    1. Container image specified in the STRIMZI_DEFAULT_KAFKA_EXPORTER_IMAGE environment variable from the Cluster Operator configuration.
    2. registry.redhat.io/amq7/amq-streams-kafka-24-rhel7:1.4.0 container image.
  • For Kafka Bridge:

    1. Container image specified in the STRIMZI_DEFAULT_KAFKA_BRIDGE_IMAGE environment variable from the Cluster Operator configuration.
    2. registry.redhat.io/amq7/amq-streams-bridge-rhel7:1.4.0 container image.
  • For Kafka broker initializer:

    1. Container image specified in the STRIMZI_DEFAULT_KAFKA_INIT_IMAGE environment variable from the Cluster Operator configuration.
    2. registry.redhat.io/amq7/amq-streams-rhel7-operator:1.4.0 container image.
Warning

Overriding container images is recommended only in special situations where you need to use a different container registry, for example, because your network does not allow access to the container repository used by AMQ Streams. In such a case, you should either copy the AMQ Streams images or build them from source. If the configured image is not compatible with the AMQ Streams images, it might not work properly.

Example of container image configuration

apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    image: my-org/my-image:latest
    # ...
  zookeeper:
    # ...

3.1.19.2. Configuring container images

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Edit the image property in the Kafka, KafkaConnect or KafkaConnectS2I resource. For example:

    apiVersion: kafka.strimzi.io/v1beta1
    kind: Kafka
    metadata:
      name: my-cluster
    spec:
      kafka:
        # ...
        image: my-org/my-image:latest
        # ...
      zookeeper:
        # ...
  2. Create or update the resource.

    This can be done using oc apply:

    oc apply -f your-file

3.1.20. TLS sidecar

A sidecar is a container that runs in a pod but serves a supporting purpose. In AMQ Streams, the TLS sidecar uses TLS to encrypt and decrypt all communication between the various components and ZooKeeper. ZooKeeper does not have native TLS support.

The TLS sidecar is used in:

  • Kafka brokers
  • ZooKeeper nodes
  • Entity Operator
3.1.20.1. TLS sidecar configuration

The TLS sidecar can be configured using the tlsSidecar property in:

  • Kafka.spec.kafka
  • Kafka.spec.zookeeper
  • Kafka.spec.entityOperator

The TLS sidecar supports the following additional options:

  • image
  • resources
  • logLevel
  • readinessProbe
  • livenessProbe

The resources property can be used to specify the memory and CPU resources allocated for the TLS sidecar.

The image property can be used to configure the container image which will be used. For more details about configuring custom container images, see Section 3.1.19, “Container images”.

The logLevel property is used to specify the logging level. The following logging levels are supported:

  • emerg
  • alert
  • crit
  • err
  • warning
  • notice
  • info
  • debug

The default value is notice.

For more information about configuring the readinessProbe and livenessProbe properties for the healthchecks, see Section 3.1.15.1, “Healthcheck configurations”.

Example of TLS sidecar configuration

apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    tlsSidecar:
      image: my-org/my-image:latest
      resources:
        requests:
          cpu: 200m
          memory: 64Mi
        limits:
          cpu: 500m
          memory: 128Mi
      logLevel: debug
      readinessProbe:
        initialDelaySeconds: 15
        timeoutSeconds: 5
      livenessProbe:
        initialDelaySeconds: 15
        timeoutSeconds: 5
    # ...
  zookeeper:
    # ...

3.1.20.2. Configuring TLS sidecar

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Edit the tlsSidecar property in the Kafka resource. For example:

    apiVersion: kafka.strimzi.io/v1beta1
    kind: Kafka
    metadata:
      name: my-cluster
    spec:
      kafka:
        # ...
        tlsSidecar:
          resources:
            requests:
              cpu: 200m
              memory: 64Mi
            limits:
              cpu: 500m
              memory: 128Mi
        # ...
      zookeeper:
        # ...
  2. Create or update the resource.

    This can be done using oc apply:

    oc apply -f your-file

3.1.21. Configuring pod scheduling

Important

When two applications are scheduled to the same OpenShift node, both applications might compete for the same resources, such as disk I/O, which can degrade performance. The best ways to avoid such problems are to schedule Kafka pods so that they do not share nodes with other critical workloads, to use the right nodes, or to dedicate a set of nodes to Kafka only.

3.1.21.1. Scheduling pods based on other applications
3.1.21.1.1. Avoid critical applications sharing the node

Pod anti-affinity can be used to ensure that critical applications are never scheduled on the same node. When running a Kafka cluster, it is recommended to use pod anti-affinity to ensure that the Kafka brokers do not share nodes with other workloads, such as databases.

3.1.21.1.2. Affinity

Affinity can be configured using the affinity property in the following resources:

  • Kafka.spec.kafka.template.pod
  • Kafka.spec.zookeeper.template.pod
  • Kafka.spec.entityOperator.template.pod
  • KafkaConnect.spec.template.pod
  • KafkaConnectS2I.spec.template.pod
  • KafkaBridge.spec.template.pod

The affinity configuration can include different types of affinity:

  • Pod affinity and anti-affinity
  • Node affinity

The format of the affinity property follows the OpenShift specification. For more details, see the Kubernetes node and pod affinity documentation.

3.1.21.1.3. Configuring pod anti-affinity in Kafka components

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Edit the affinity property in the resource specifying the cluster deployment. Use labels to specify the pods which should not be scheduled on the same nodes. The topologyKey should be set to kubernetes.io/hostname to specify that the selected pods should not be scheduled on nodes with the same hostname. For example:

    apiVersion: kafka.strimzi.io/v1beta1
    kind: Kafka
    spec:
      kafka:
        # ...
        template:
          pod:
            affinity:
              podAntiAffinity:
                requiredDuringSchedulingIgnoredDuringExecution:
                  - labelSelector:
                      matchExpressions:
                        - key: application
                          operator: In
                          values:
                            - postgresql
                            - mongodb
                    topologyKey: "kubernetes.io/hostname"
        # ...
      zookeeper:
        # ...
  2. Create or update the resource.

    This can be done using oc apply:

    oc apply -f your-file
3.1.21.2. Scheduling pods to specific nodes
3.1.21.2.1. Node scheduling

An OpenShift cluster usually consists of many different types of worker nodes. Some are optimized for CPU-heavy workloads, some for memory, while others might be optimized for storage (fast local SSDs) or network. Using different nodes helps to optimize both costs and performance. To achieve the best possible performance, it is important to allow scheduling of AMQ Streams components onto the right nodes.

OpenShift uses node affinity to schedule workloads onto specific nodes. Node affinity allows you to create a scheduling constraint for the node on which the pod will be scheduled. The constraint is specified as a label selector. You can specify the label using either the built-in node label like beta.kubernetes.io/instance-type or custom labels to select the right node.
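
For example, to constrain scheduling using the built-in beta.kubernetes.io/instance-type label mentioned above, you might use a node affinity rule like the following sketch; the instance type m5.xlarge is a placeholder for whatever node types exist in your cluster:

apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
spec:
  kafka:
    # ...
    template:
      pod:
        affinity:
          nodeAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
              nodeSelectorTerms:
                - matchExpressions:
                    - key: beta.kubernetes.io/instance-type
                      operator: In
                      values:
                        - m5.xlarge  # placeholder instance type
    # ...
  zookeeper:
    # ...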

3.1.21.2.2. Affinity

Affinity can be configured using the affinity property in the following resources:

  • Kafka.spec.kafka.template.pod
  • Kafka.spec.zookeeper.template.pod
  • Kafka.spec.entityOperator.template.pod
  • KafkaConnect.spec.template.pod
  • KafkaConnectS2I.spec.template.pod
  • KafkaBridge.spec.template.pod

The affinity configuration can include different types of affinity:

  • Pod affinity and anti-affinity
  • Node affinity

The format of the affinity property follows the OpenShift specification. For more details, see the Kubernetes node and pod affinity documentation.

3.1.21.2.3. Configuring node affinity in Kafka components

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Label the nodes where AMQ Streams components should be scheduled.

    This can be done using oc label:

    oc label node your-node node-type=fast-network

    Alternatively, some of the existing labels might be reused.

  2. Edit the affinity property in the resource specifying the cluster deployment. For example:

    apiVersion: kafka.strimzi.io/v1beta1
    kind: Kafka
    spec:
      kafka:
        # ...
        template:
          pod:
            affinity:
              nodeAffinity:
                requiredDuringSchedulingIgnoredDuringExecution:
                  nodeSelectorTerms:
                    - matchExpressions:
                      - key: node-type
                        operator: In
                        values:
                        - fast-network
        # ...
      zookeeper:
        # ...
  3. Create or update the resource.

    This can be done using oc apply:

    oc apply -f your-file
3.1.21.3. Using dedicated nodes
3.1.21.3.1. Dedicated nodes

Cluster administrators can mark selected OpenShift nodes as tainted. Nodes with taints are excluded from regular scheduling and normal pods will not be scheduled to run on them. Only services which can tolerate the taint set on the node can be scheduled on it. The only other services running on such nodes will be system services such as log collectors or software defined networks.

Taints can be used to create dedicated nodes. Running Kafka and its components on dedicated nodes can have many advantages. There will be no other applications running on the same nodes which could cause disturbance or consume the resources needed for Kafka. That can lead to improved performance and stability.

To schedule Kafka pods on the dedicated nodes, configure node affinity and tolerations.

3.1.21.3.2. Affinity

Affinity can be configured using the affinity property in the following resources:

  • Kafka.spec.kafka.template.pod
  • Kafka.spec.zookeeper.template.pod
  • Kafka.spec.entityOperator.template.pod
  • KafkaConnect.spec.template.pod
  • KafkaConnectS2I.spec.template.pod
  • KafkaBridge.spec.template.pod

The affinity configuration can include different types of affinity:

  • Pod affinity and anti-affinity
  • Node affinity

The format of the affinity property follows the OpenShift specification. For more details, see the Kubernetes node and pod affinity documentation.

3.1.21.3.3. Tolerations

Tolerations can be configured using the tolerations property in the following resources:

  • Kafka.spec.kafka.template.pod
  • Kafka.spec.zookeeper.template.pod
  • Kafka.spec.entityOperator.template.pod
  • KafkaConnect.spec.template.pod
  • KafkaConnectS2I.spec.template.pod
  • KafkaBridge.spec.template.pod

The format of the tolerations property follows the OpenShift specification. For more details, see the Kubernetes taints and tolerations documentation.

3.1.21.3.4. Setting up dedicated nodes and scheduling pods on them

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Select the nodes which should be used as dedicated nodes.
  2. Make sure there are no workloads scheduled on these nodes.
  3. Set the taints on the selected nodes:

    This can be done using oc adm taint:

    oc adm taint node your-node dedicated=Kafka:NoSchedule
  4. Additionally, add a label to the selected nodes.

    This can be done using oc label:

    oc label node your-node dedicated=Kafka
  5. Edit the affinity and tolerations properties in the resource specifying the cluster deployment. For example:

    apiVersion: kafka.strimzi.io/v1beta1
    kind: Kafka
    spec:
      kafka:
        # ...
        template:
          pod:
            tolerations:
              - key: "dedicated"
                operator: "Equal"
                value: "Kafka"
                effect: "NoSchedule"
            affinity:
              nodeAffinity:
                requiredDuringSchedulingIgnoredDuringExecution:
                  nodeSelectorTerms:
                  - matchExpressions:
                    - key: dedicated
                      operator: In
                      values:
                      - Kafka
        # ...
      zookeeper:
        # ...
  6. Create or update the resource.

    This can be done using oc apply:

    oc apply -f your-file

3.1.22. Kafka Exporter

You can configure the Kafka resource to automatically deploy Kafka Exporter in your cluster.

Kafka Exporter extracts data for analysis as Prometheus metrics, primarily data relating to offsets, consumer groups, consumer lag and topics.

For information on Kafka Exporter and why it is important to monitor consumer lag for performance, see Kafka Exporter.

3.1.22.1. Configuring Kafka Exporter

Configure Kafka Exporter in the Kafka resource through KafkaExporter properties.

Refer to the sample Kafka YAML configuration for an overview of the Kafka resource and its properties.

The properties relevant to the Kafka Exporter configuration are shown in this procedure.

You can configure these properties as part of a deployment or redeployment of the Kafka cluster.

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Edit the KafkaExporter properties for the Kafka resource.

    The properties you can configure are shown in this example configuration:

    apiVersion: kafka.strimzi.io/v1beta1
    kind: Kafka
    metadata:
      name: my-cluster
    spec:
      # ...
      kafkaExporter:
        image: my-org/my-image:latest 1
        groupRegex: ".*" 2
        topicRegex: ".*" 3
        resources: 4
          requests:
            cpu: 200m
            memory: 64Mi
          limits:
            cpu: 500m
            memory: 128Mi
        logging: debug 5
        enableSaramaLogging: true 6
        template: 7
          pod:
            metadata:
              labels:
                label1: value1
            imagePullSecrets:
              - name: my-docker-credentials
            securityContext:
              runAsUser: 1000001
              fsGroup: 0
            terminationGracePeriodSeconds: 120
        readinessProbe: 8
          initialDelaySeconds: 15
          timeoutSeconds: 5
        livenessProbe: 9
          initialDelaySeconds: 15
          timeoutSeconds: 5
    # ...
    1
    ADVANCED OPTION: Container image configuration, which is recommended only in special situations.
    2
    A regular expression to specify the consumer groups to include in the metrics.
    3
    A regular expression to specify the topics to include in the metrics.
    4
    CPU and memory resources reserved for Kafka Exporter.
    5
    Logging configuration, to log messages with a given severity (debug, info, warn, error, fatal) or above.
    6
    Boolean to enable Sarama logging, a Go client library used by Kafka Exporter.
    7
    Template customization of the Kafka Exporter pod, such as labels, image pull secrets, and security context.
    8
    Readiness probe configuration.
    9
    Liveness probe configuration.
  2. Create or update the resource:

    oc apply -f kafka.yaml

What to do next

After configuring and deploying Kafka Exporter, you can enable Grafana to present the Kafka Exporter dashboards.

3.1.23. Performing a rolling update of a Kafka cluster

This procedure describes how to manually trigger a rolling update of an existing Kafka cluster by using an OpenShift annotation.

Prerequisites

  • A running Kafka cluster.
  • A running Cluster Operator.

Procedure

  1. Find the name of the StatefulSet that controls the Kafka pods you want to manually update.

    For example, if your Kafka cluster is named my-cluster, the corresponding StatefulSet is named my-cluster-kafka.

  2. Annotate the StatefulSet resource in OpenShift. For example, using oc annotate:

    oc annotate statefulset cluster-name-kafka strimzi.io/manual-rolling-update=true
  3. Wait for the next reconciliation to occur (every two minutes by default). A rolling update of all pods within the annotated StatefulSet is triggered, as long as the annotation was detected by the reconciliation process. When the rolling update of all the pods is complete, the annotation is removed from the StatefulSet.

3.1.24. Performing a rolling update of a ZooKeeper cluster

This procedure describes how to manually trigger a rolling update of an existing ZooKeeper cluster by using an OpenShift annotation.

Prerequisites

  • A running ZooKeeper cluster.
  • A running Cluster Operator.

Procedure

  1. Find the name of the StatefulSet that controls the ZooKeeper pods you want to manually update.

    For example, if your Kafka cluster is named my-cluster, the corresponding StatefulSet is named my-cluster-zookeeper.

  2. Annotate the StatefulSet resource in OpenShift. For example, using oc annotate:

    oc annotate statefulset cluster-name-zookeeper strimzi.io/manual-rolling-update=true
  3. Wait for the next reconciliation to occur (every two minutes by default). A rolling update of all pods within the annotated StatefulSet is triggered, as long as the annotation was detected by the reconciliation process. When the rolling update of all the pods is complete, the annotation is removed from the StatefulSet.

3.1.25. Scaling clusters

3.1.25.1. Scaling Kafka clusters
3.1.25.1.1. Adding brokers to a cluster

The primary way of increasing throughput for a topic is to increase the number of partitions for that topic. That works because the extra partitions allow the load of the topic to be shared between the different brokers in the cluster. However, in situations where every broker is constrained by a particular resource (typically I/O) using more partitions will not result in increased throughput. Instead, you need to add brokers to the cluster.

When you add an extra broker to the cluster, Kafka does not assign any partitions to it automatically. You must decide which partitions to move from the existing brokers to the new broker.

Once the partitions have been redistributed between all the brokers, the resource utilization of each broker should be reduced.
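
To add brokers, you increase the number of broker replicas in the Kafka resource and then reassign partitions, as described in the procedures later in this section. A minimal sketch of the replica change:

apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    replicas: 4  # increased from the previous broker count, for example 3
    # ...
  zookeeper:
    # ...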

3.1.25.1.2. Removing brokers from a cluster

Because AMQ Streams uses StatefulSets to manage broker pods, you cannot remove an arbitrary pod from the cluster. You can only remove one or more of the highest numbered pods from the cluster. For example, in a cluster of 12 brokers the pods are named cluster-name-kafka-0 up to cluster-name-kafka-11. If you decide to scale down by one broker, the cluster-name-kafka-11 pod will be removed.

Before you remove a broker from a cluster, ensure that it is not assigned to any partitions. You should also decide which of the remaining brokers will be responsible for each of the partitions on the broker being decommissioned. Once the broker has no assigned partitions, you can scale the cluster down safely.

3.1.25.2. Partition reassignment

The Topic Operator does not currently support reassigning replicas to different brokers, so it is necessary to connect directly to broker pods to reassign replicas to brokers.

Within a broker pod, the kafka-reassign-partitions.sh utility allows you to reassign partitions to different brokers.

It has three different modes:

--generate
Takes a set of topics and brokers and generates a reassignment JSON file which will result in the partitions of those topics being assigned to those brokers. Because this operates on whole topics, it cannot be used when you just need to reassign some of the partitions of some topics.
--execute
Takes a reassignment JSON file and applies it to the partitions and brokers in the cluster. Brokers that gain partitions as a result become followers of the partition leader. For a given partition, once the new broker has caught up and joined the ISR (in-sync replicas) the old broker will stop being a follower and will delete its replica.
--verify
Using the same reassignment JSON file as the --execute step, --verify checks whether all of the partitions in the file have been moved to their intended brokers. If the reassignment is complete, --verify also removes any throttles that are in effect. Unless removed, throttles will continue to affect the cluster even after the reassignment has finished.

It is only possible to have one reassignment running in a cluster at any given time, and it is not possible to cancel a running reassignment. If you need to cancel a reassignment, wait for it to complete and then perform another reassignment to revert the effects of the first reassignment. The kafka-reassign-partitions.sh tool will print the reassignment JSON for this reversion as part of its output. Very large reassignments should be broken down into a number of smaller reassignments in case there is a need to stop an in-progress reassignment.

3.1.25.2.1. Reassignment JSON file

The reassignment JSON file has a specific structure:

{
  "version": 1,
  "partitions": [
    <PartitionObjects>
  ]
}

Where <PartitionObjects> is a comma-separated list of objects like:

{
  "topic": <TopicName>,
  "partition": <Partition>,
  "replicas": [ <AssignedBrokerIds> ]
}
Note

Although Kafka also supports a "log_dirs" property, this should not be used in Red Hat AMQ Streams.

The following is an example reassignment JSON file that assigns topic topic-a, partition 4 to brokers 2, 4 and 7, and topic topic-b partition 2 to brokers 1, 5 and 7:

{
  "version": 1,
  "partitions": [
    {
      "topic": "topic-a",
      "partition": 4,
      "replicas": [2,4,7]
    },
    {
      "topic": "topic-b",
      "partition": 2,
      "replicas": [1,5,7]
    }
  ]
}

Partitions not included in the JSON are not changed.

3.1.25.2.2. Reassigning partitions between JBOD volumes

When using JBOD storage in your Kafka cluster, you can choose to reassign the partitions between specific volumes and their log directories (each volume has a single log directory). To reassign a partition to a specific volume, add the log_dirs option to <PartitionObjects> in the reassignment JSON file.

{
  "topic": <TopicName>,
  "partition": <Partition>,
  "replicas": [ <AssignedBrokerIds> ],
  "log_dirs": [ <AssignedLogDirs> ]
}

The log_dirs object should contain the same number of log directories as the number of replicas specified in the replicas object. The value should be either an absolute path to the log directory, or the any keyword.

For example:

{
      "topic": "topic-a",
      "partition": 4,
      "replicas": [2,4,7].
      "log_dirs": [ "/var/lib/kafka/data-0/kafka-log2", "/var/lib/kafka/data-0/kafka-log4", "/var/lib/kafka/data-0/kafka-log7" ]
}
3.1.25.3. Generating reassignment JSON files

This procedure describes how to generate a reassignment JSON file that reassigns all the partitions for a given set of topics using the kafka-reassign-partitions.sh tool.

Prerequisites

  • A running Cluster Operator
  • A Kafka resource
  • A set of topics to reassign the partitions of

Procedure

  1. Prepare a JSON file named topics.json that lists the topics to move. It must have the following structure:

    {
      "version": 1,
      "topics": [
        <TopicObjects>
      ]
    }

    where <TopicObjects> is a comma-separated list of objects like:

    {
      "topic": <TopicName>
    }

    For example if you want to reassign all the partitions of topic-a and topic-b, you would need to prepare a topics.json file like this:

    {
      "version": 1,
      "topics": [
        { "topic": "topic-a"},
        { "topic": "topic-b"}
      ]
    }
  2. Copy the topics.json file to one of the broker pods:

    cat topics.json | oc exec -c kafka <BrokerPod> -i -- \
      /bin/bash -c \
      'cat > /tmp/topics.json'
  3. Use the kafka-reassign-partitions.sh command to generate the reassignment JSON.

    oc exec <BrokerPod> -c kafka -it -- \
      bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
      --topics-to-move-json-file /tmp/topics.json \
      --broker-list <BrokerList> \
      --generate

    For example, to move all the partitions of topic-a and topic-b to brokers 4 and 7:

    oc exec <BrokerPod> -c kafka -it -- \
      bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
      --topics-to-move-json-file /tmp/topics.json \
      --broker-list 4,7 \
      --generate
3.1.25.4. Creating reassignment JSON files manually

You can manually create the reassignment JSON file if you want to move specific partitions.
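
For example, a manually created file that moves only partition 1 of topic-a to brokers 0, 1, and 2, leaving all other partitions unchanged, follows the structure described in the previous section:

{
  "version": 1,
  "partitions": [
    {
      "topic": "topic-a",
      "partition": 1,
      "replicas": [0,1,2]
    }
  ]
}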

3.1.25.5. Reassignment throttles

Partition reassignment can be a slow process because it involves transferring large amounts of data between brokers. To avoid a detrimental impact on clients, you can throttle the reassignment process. This might cause the reassignment to take longer to complete.

  • If the throttle is too low then the newly assigned brokers will not be able to keep up with records being published and the reassignment will never complete.
  • If the throttle is too high then clients will be impacted.

For example, for producers, this could manifest as higher than normal latency waiting for acknowledgement. For consumers, this could manifest as a drop in throughput caused by higher latency between polls.

3.1.25.6. Scaling up a Kafka cluster

This procedure describes how to increase the number of brokers in a Kafka cluster.

Prerequisites

  • An existing Kafka cluster.
  • A reassignment JSON file named reassignment.json that describes how partitions should be reassigned to brokers in the enlarged cluster.

Procedure

  1. Add as many new brokers as you need by increasing the Kafka.spec.kafka.replicas configuration option.
  2. Verify that the new broker pods have started.
  3. Copy the reassignment.json file to the broker pod on which you will later execute the commands:

    cat reassignment.json | \
      oc exec broker-pod -c kafka -i -- /bin/bash -c \
      'cat > /tmp/reassignment.json'

    For example:

    cat reassignment.json | \
      oc exec my-cluster-kafka-0 -c kafka -i -- /bin/bash -c \
      'cat > /tmp/reassignment.json'
  4. Execute the partition reassignment using the kafka-reassign-partitions.sh command line tool from the same broker pod.

    oc exec broker-pod -c kafka -it -- \
      bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
      --reassignment-json-file /tmp/reassignment.json \
      --execute

    If you are going to throttle replication you can also pass the --throttle option with an inter-broker throttled rate in bytes per second. For example:

    oc exec my-cluster-kafka-0 -c kafka -it -- \
      bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
      --reassignment-json-file /tmp/reassignment.json \
      --throttle 5000000 \
      --execute

    This command will print out two reassignment JSON objects. The first records the current assignment for the partitions being moved. You should save this to a local file (not a file in the pod) in case you need to revert the reassignment later on. The second JSON object is the target reassignment you have passed in your reassignment JSON file.

  5. If you need to change the throttle during reassignment you can use the same command line with a different throttled rate. For example:

    oc exec my-cluster-kafka-0 -c kafka -it -- \
      bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
      --reassignment-json-file /tmp/reassignment.json \
      --throttle 10000000 \
      --execute
  6. Periodically verify whether the reassignment has completed using the kafka-reassign-partitions.sh command line tool from any of the broker pods. This is the same command as the previous step but with the --verify option instead of the --execute option.

    oc exec broker-pod -c kafka -it -- \
      bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
      --reassignment-json-file /tmp/reassignment.json \
      --verify

    For example:

    oc exec my-cluster-kafka-0 -c kafka -it -- \
      bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
      --reassignment-json-file /tmp/reassignment.json \
      --verify
  7. The reassignment has finished when the --verify command reports each of the partitions being moved as completed successfully. This final --verify will also have the effect of removing any reassignment throttles. You can now delete the revert file if you saved the JSON for reverting the assignments to their original brokers.
3.1.25.7. Scaling down a Kafka cluster

This procedure describes how to decrease the number of brokers in a Kafka cluster.

Prerequisites

  • An existing Kafka cluster.
  • A reassignment JSON file named reassignment.json describing how partitions should be reassigned to brokers in the cluster once the broker(s) in the highest numbered Pod(s) have been removed.

Procedure

  1. Copy the reassignment.json file to the broker pod on which you will later execute the commands:

    cat reassignment.json | \
      oc exec broker-pod -c kafka -i -- /bin/bash -c \
      'cat > /tmp/reassignment.json'

    For example:

    cat reassignment.json | \
      oc exec my-cluster-kafka-0 -c kafka -i -- /bin/bash -c \
      'cat > /tmp/reassignment.json'
  2. Execute the partition reassignment using the kafka-reassign-partitions.sh command line tool from the same broker pod.

    oc exec broker-pod -c kafka -it -- \
      bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
      --reassignment-json-file /tmp/reassignment.json \
      --execute

    If you are going to throttle replication you can also pass the --throttle option with an inter-broker throttled rate in bytes per second. For example:

    oc exec my-cluster-kafka-0 -c kafka -it -- \
      bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
      --reassignment-json-file /tmp/reassignment.json \
      --throttle 5000000 \
      --execute

    This command will print out two reassignment JSON objects. The first records the current assignment for the partitions being moved. You should save this to a local file (not a file in the pod) in case you need to revert the reassignment later on. The second JSON object is the target reassignment you have passed in your reassignment JSON file.

  3. If you need to change the throttle during reassignment you can use the same command line with a different throttled rate. For example:

    oc exec my-cluster-kafka-0 -c kafka -it -- \
      bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
      --reassignment-json-file /tmp/reassignment.json \
      --throttle 10000000 \
      --execute
  4. Periodically verify whether the reassignment has completed using the kafka-reassign-partitions.sh command line tool from any of the broker pods. This is the same command as the previous step but with the --verify option instead of the --execute option.

    oc exec broker-pod -c kafka -it -- \
      bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
      --reassignment-json-file /tmp/reassignment.json \
      --verify

    For example:

    oc exec my-cluster-kafka-0 -c kafka -it -- \
      bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
      --reassignment-json-file /tmp/reassignment.json \
      --verify
  5. The reassignment has finished when the --verify command reports each of the partitions being moved as completed successfully. This final --verify will also have the effect of removing any reassignment throttles. You can now delete the revert file if you saved the JSON for reverting the assignments to their original brokers.
  6. Once all the partition reassignments have finished, the broker(s) being removed should not have responsibility for any of the partitions in the cluster. You can verify this by checking that the broker’s data log directory does not contain any live partition logs. If the log directory on the broker contains a directory that does not match the extended regular expression \.[a-z0-9]+-delete$ then the broker still has live partitions and it should not be stopped.

    You can check this by executing the command:

    oc exec my-cluster-kafka-0 -c kafka -it -- \
      /bin/bash -c \
      "ls -l /var/lib/kafka/kafka-log_<N>_ | grep -E '^d' | grep -vE '[a-zA-Z0-9.-]+\.[a-z0-9]+-delete$'"

    where N is the number of the Pod(s) being deleted.

    If the above command prints any output then the broker still has live partitions. In this case, either the reassignment has not finished, or the reassignment JSON file was incorrect.

  7. Once you have confirmed that the broker has no live partitions you can edit the Kafka.spec.kafka.replicas of your Kafka resource, which will scale down the StatefulSet, deleting the highest numbered broker Pod(s).

3.1.26. Deleting Kafka nodes manually

This procedure describes how to delete an existing Kafka node by using an OpenShift annotation. Deleting a Kafka node consists of deleting both the Pod on which the Kafka broker is running and the related PersistentVolumeClaim (if the cluster was deployed with persistent storage). After deletion, the Pod and its related PersistentVolumeClaim are recreated automatically.

Warning

Deleting a PersistentVolumeClaim can cause permanent data loss. The following procedure should only be performed if you have encountered storage issues.

Prerequisites

  • A running Kafka cluster.
  • A running Cluster Operator.

Procedure

  1. Find the name of the Pod that you want to delete.

    For example, if the cluster is named cluster-name, the pods are named cluster-name-kafka-index, where index starts at zero and ends at the total number of replicas minus one.

  2. Annotate the Pod resource in OpenShift.

    Use oc annotate:

    oc annotate pod cluster-name-kafka-index strimzi.io/delete-pod-and-pvc=true
  3. Wait for the next reconciliation, when the annotated pod with the underlying persistent volume claim will be deleted and then recreated.

3.1.27. Deleting ZooKeeper nodes manually

This procedure describes how to delete an existing ZooKeeper node by using an OpenShift annotation. Deleting a ZooKeeper node consists of deleting both the Pod on which ZooKeeper is running and the related PersistentVolumeClaim (if the cluster was deployed with persistent storage). After deletion, the Pod and its related PersistentVolumeClaim are recreated automatically.

Warning

Deleting a PersistentVolumeClaim can cause permanent data loss. The following procedure should only be performed if you have encountered storage issues.

Prerequisites

  • A running ZooKeeper cluster.
  • A running Cluster Operator.

Procedure

  1. Find the name of the Pod that you want to delete.

    For example, if the cluster is named cluster-name, the pods are named cluster-name-zookeeper-index, where index starts at zero and ends at the total number of replicas minus one.

  2. Annotate the Pod resource in OpenShift.

    Use oc annotate:

    oc annotate pod cluster-name-zookeeper-index strimzi.io/delete-pod-and-pvc=true
  3. Wait for the next reconciliation, when the annotated pod with the underlying persistent volume claim will be deleted and then recreated.

3.1.28. Maintenance time windows for rolling updates

Maintenance time windows allow you to schedule certain rolling updates of your Kafka and ZooKeeper clusters to start at a convenient time.

3.1.28.1. Maintenance time windows overview

In most cases, the Cluster Operator only updates your Kafka or ZooKeeper clusters in response to changes to the corresponding Kafka resource. This enables you to plan when to apply changes to a Kafka resource to minimize the impact on Kafka client applications.

However, some updates to your Kafka and ZooKeeper clusters can happen without any corresponding change to the Kafka resource. For example, the Cluster Operator will need to perform a rolling restart if a CA (Certificate Authority) certificate that it manages is close to expiry.

While a rolling restart of the pods should not affect availability of the service (assuming correct broker and topic configurations), it could affect performance of the Kafka client applications. Maintenance time windows allow you to schedule such spontaneous rolling updates of your Kafka and ZooKeeper clusters to start at a convenient time. If maintenance time windows are not configured for a cluster then it is possible that such spontaneous rolling updates will happen at an inconvenient time, such as during a predictable period of high load.

3.1.28.2. Maintenance time window definition

You configure maintenance time windows by entering an array of strings in the Kafka.spec.maintenanceTimeWindows property. Each string is a cron expression interpreted as being in UTC (Coordinated Universal Time, which for practical purposes is the same as Greenwich Mean Time).

The following example configures a single maintenance time window that starts at midnight and ends at 01:59am (UTC), on Sundays, Mondays, Tuesdays, Wednesdays, and Thursdays:

# ...
maintenanceTimeWindows:
  - "* * 0-1 ? * SUN,MON,TUE,WED,THU *"
# ...

In practice, maintenance windows should be set in conjunction with the Kafka.spec.clusterCa.renewalDays and Kafka.spec.clientsCa.renewalDays properties of the Kafka resource, to ensure that the necessary CA certificate renewal can be completed in the configured maintenance time windows.
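For example, a fragment that combines a maintenance window with explicit CA renewal periods might look as follows; the values shown here are illustrative:

apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
  zookeeper:
    # ...
  clusterCa:
    renewalDays: 30
  clientsCa:
    renewalDays: 30
  maintenanceTimeWindows:
    - "* * 0-1 ? * SUN,MON,TUE,WED,THU *"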

Note

AMQ Streams does not schedule maintenance operations exactly according to the given windows. Instead, for each reconciliation, it checks whether a maintenance window is currently "open". This means that the start of maintenance operations within a given time window can be delayed by up to the Cluster Operator reconciliation interval. Maintenance time windows must therefore be at least this long.

3.1.28.3. Configuring a maintenance time window

You can configure a maintenance time window for rolling updates triggered by supported processes.

Prerequisites

  • An OpenShift cluster.
  • The Cluster Operator is running.

Procedure

  1. Add or edit the maintenanceTimeWindows property in the Kafka resource. For example, to allow maintenance between 08:00 and 10:59 and between 14:00 and 15:59, set the maintenanceTimeWindows property as shown below:

    apiVersion: kafka.strimzi.io/v1beta1
    kind: Kafka
    metadata:
      name: my-cluster
    spec:
      kafka:
        # ...
      zookeeper:
        # ...
      maintenanceTimeWindows:
        - "* * 8-10 * * ?"
        - "* * 14-15 * * ?"
  2. Create or update the resource.

    This can be done using oc apply:

    oc apply -f your-file

3.1.29. Renewing CA certificates manually

Unless the Kafka.spec.clusterCa.generateCertificateAuthority and Kafka.spec.clientsCa.generateCertificateAuthority objects are set to false, the cluster and clients CA certificates will auto-renew at the start of their respective certificate renewal periods. You can manually renew one or both of these certificates before the certificate renewal period starts, if required for security reasons. A renewed certificate uses the same private key as the old certificate.
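For reference, a minimal fragment showing where these properties are set; true is the default and enables auto-renewal:

apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  # ...
  clusterCa:
    generateCertificateAuthority: true
  clientsCa:
    generateCertificateAuthority: true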

Prerequisites

  • The Cluster Operator is running.
  • A Kafka cluster in which CA certificates and private keys are installed.

Procedure

  • Apply the strimzi.io/force-renew annotation to the Secret that contains the CA certificate that you want to renew.

    Cluster CA certificate
    Secret: <cluster-name>-cluster-ca-cert
    Annotate command:

    oc annotate secret <cluster-name>-cluster-ca-cert strimzi.io/force-renew=true

    Clients CA certificate
    Secret: <cluster-name>-clients-ca-cert
    Annotate command:

    oc annotate secret <cluster-name>-clients-ca-cert strimzi.io/force-renew=true

At the next reconciliation the Cluster Operator will generate a new CA certificate for the Secret that you annotated. If maintenance time windows are configured, the Cluster Operator will generate the new CA certificate at the first reconciliation within the next maintenance time window.

Client applications must reload the cluster and clients CA certificates that were renewed by the Cluster Operator.
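For example, one way to retrieve the renewed cluster CA certificate for client truststores is to extract it from the Secret; this is a sketch, and the output file name is illustrative:

oc get secret <cluster-name>-cluster-ca-cert -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt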

3.1.30. Replacing private keys

You can replace the private keys used by the cluster CA and clients CA certificates. When a private key is replaced, the Cluster Operator generates a new CA certificate for the new private key.

Prerequisites

  • The Cluster Operator is running.
  • A Kafka cluster in which CA certificates and private keys are installed.

Procedure

  • Apply the strimzi.io/force-replace annotation to the Secret that contains the private key that you want to renew.

    Private key for the Cluster CA
    Secret: <cluster-name>-cluster-ca
    Annotate command:

    oc annotate secret <cluster-name>-cluster-ca strimzi.io/force-replace=true

    Private key for the Clients CA
    Secret: <cluster-name>-clients-ca
    Annotate command:

    oc annotate secret <cluster-name>-clients-ca strimzi.io/force-replace=true

At the next reconciliation the Cluster Operator will:

  • Generate a new private key for the Secret that you annotated
  • Generate a new CA certificate

If maintenance time windows are configured, the Cluster Operator will generate the new private key and CA certificate at the first reconciliation within the next maintenance time window.

Client applications must reload the cluster and clients CA certificates that were renewed by the Cluster Operator.

3.1.31. List of resources created as part of Kafka cluster

The following resources are created by the Cluster Operator in the OpenShift cluster:

cluster-name-kafka
StatefulSet which is in charge of managing the Kafka broker pods.
cluster-name-kafka-brokers
Service needed to have DNS resolve the Kafka broker pods' IP addresses directly.
cluster-name-kafka-bootstrap
Service which can be used as bootstrap servers for Kafka clients.
cluster-name-kafka-external-bootstrap
Bootstrap service for clients connecting from outside of the OpenShift cluster. This resource will be created only when external listener is enabled.
cluster-name-kafka-pod-id
Service used to route traffic from outside of the OpenShift cluster to individual pods. This resource will be created only when external listener is enabled.
cluster-name-kafka-external-bootstrap
Bootstrap route for clients connecting from outside of the OpenShift cluster. This resource will be created only when external listener is enabled and set to type route.
cluster-name-kafka-pod-id
Route for traffic from outside of the OpenShift cluster to individual pods. This resource will be created only when external listener is enabled and set to type route.
cluster-name-kafka-config
ConfigMap which contains the Kafka ancillary configuration and is mounted as a volume by the Kafka broker pods.
cluster-name-kafka-brokers
Secret with Kafka broker keys.
cluster-name-kafka
Service account used by the Kafka brokers.
cluster-name-kafka
Pod Disruption Budget configured for the Kafka brokers.
strimzi-namespace-name-cluster-name-kafka-init
Cluster role binding used by the Kafka brokers.
cluster-name-zookeeper
StatefulSet which is in charge of managing the ZooKeeper node pods.
cluster-name-zookeeper-nodes
Service needed to have DNS resolve the ZooKeeper pods' IP addresses directly.
cluster-name-zookeeper-client
Service used by Kafka brokers to connect to ZooKeeper nodes as clients.
cluster-name-zookeeper-config
ConfigMap which contains the ZooKeeper ancillary configuration and is mounted as a volume by the ZooKeeper node pods.
cluster-name-zookeeper-nodes
Secret with ZooKeeper node keys.
cluster-name-zookeeper
Pod Disruption Budget configured for the ZooKeeper nodes.
cluster-name-entity-operator
Deployment with the Topic and User Operators. This resource will be created only if the Cluster Operator deployed the Entity Operator.
cluster-name-entity-topic-operator-config
ConfigMap with ancillary configuration for the Topic Operator. This resource will be created only if the Cluster Operator deployed the Entity Operator.
cluster-name-entity-user-operator-config
ConfigMap with ancillary configuration for the User Operator. This resource will be created only if the Cluster Operator deployed the Entity Operator.
cluster-name-entity-operator-certs
Secret with Entity Operator keys for communication with Kafka and ZooKeeper. This resource will be created only if the Cluster Operator deployed the Entity Operator.
cluster-name-entity-operator
Service account used by the Entity Operator.
strimzi-cluster-name-topic-operator
Role binding used by the Entity Operator.
strimzi-cluster-name-user-operator
Role binding used by the Entity Operator.
cluster-name-cluster-ca
Secret with the Cluster CA used to encrypt the cluster communication.
cluster-name-cluster-ca-cert
Secret with the Cluster CA public key. This key can be used to verify the identity of the Kafka brokers.
cluster-name-clients-ca
Secret with the Clients CA used to encrypt the communication between Kafka brokers and Kafka clients.
cluster-name-clients-ca-cert
Secret with the Clients CA public key. This key can be used to verify the identity of the Kafka users.
cluster-name-cluster-operator-certs
Secret with Cluster Operator keys for communication with Kafka and ZooKeeper.
data-cluster-name-kafka-idx
Persistent Volume Claim for the volume used for storing data for the Kafka broker pod idx. This resource will be created only if persistent storage is selected for provisioning persistent volumes to store data.
data-id-cluster-name-kafka-idx
Persistent Volume Claim for the volume id used for storing data for the Kafka broker pod idx. This resource is only created if persistent storage is selected for JBOD volumes when provisioning persistent volumes to store data.
data-cluster-name-zookeeper-idx
Persistent Volume Claim for the volume used for storing data for the ZooKeeper node pod idx. This resource will be created only if persistent storage is selected for provisioning persistent volumes to store data.
cluster-name-jmx
Secret with JMX username and password used to secure the Kafka broker port.

3.2. Kafka Connect cluster configuration

The full schema of the KafkaConnect resource is described in Section B.72, “KafkaConnect schema reference”. All labels that are applied to the desired KafkaConnect resource will also be applied to the OpenShift resources making up the Kafka Connect cluster. This provides a convenient mechanism for resources to be labeled as required.
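For example, a label added to the KafkaConnect metadata (the label shown is illustrative) is propagated to the OpenShift resources created for the Kafka Connect cluster:

apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaConnect
metadata:
  name: my-connect
  labels:
    app: my-connect
spec:
  # ...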

3.2.1. Replicas

Kafka Connect clusters can consist of one or more nodes. The number of nodes is defined in the KafkaConnect and KafkaConnectS2I resources. Running a Kafka Connect cluster with multiple nodes can provide better availability and scalability. However, when running Kafka Connect on OpenShift it is not necessary to run multiple nodes of Kafka Connect for high availability: if the node where Kafka Connect is deployed crashes, OpenShift automatically reschedules the Kafka Connect pod to a different node. Running Kafka Connect with multiple nodes can nevertheless provide faster failover times, because the other nodes will already be up and running.

3.2.1.1. Configuring the number of nodes

The number of Kafka Connect nodes is configured using the replicas property in KafkaConnect.spec and KafkaConnectS2I.spec.

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Edit the replicas property in the KafkaConnect or KafkaConnectS2I resource. For example:

    apiVersion: kafka.strimzi.io/v1beta1
    kind: KafkaConnectS2I
    metadata:
      name: my-cluster
    spec:
      # ...
      replicas: 3
      # ...
  2. Create or update the resource.

    This can be done using oc apply:

    oc apply -f your-file

3.2.2. Bootstrap servers

A Kafka Connect cluster always works in combination with a Kafka cluster. A Kafka cluster is specified as a list of bootstrap servers. On OpenShift, the list should ideally contain the Kafka cluster bootstrap service named cluster-name-kafka-bootstrap, with port 9092 for plain traffic or 9093 for encrypted traffic.

The list of bootstrap servers is configured in the bootstrapServers property in KafkaConnect.spec and KafkaConnectS2I.spec. The servers must be defined as a comma-separated list of one or more Kafka brokers, or a service pointing to Kafka brokers, specified as hostname:port pairs.

When using Kafka Connect with a Kafka cluster not managed by AMQ Streams, you can specify the bootstrap servers list according to the configuration of the cluster.

3.2.2.1. Configuring bootstrap servers

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Edit the bootstrapServers property in the KafkaConnect or KafkaConnectS2I resource. For example:

    apiVersion: kafka.strimzi.io/v1beta1
    kind: KafkaConnect
    metadata:
      name: my-cluster
    spec:
      # ...
      bootstrapServers: my-cluster-kafka-bootstrap:9092
      # ...
  2. Create or update the resource.

    This can be done using oc apply:

    oc apply -f your-file

3.2.3. Connecting to Kafka brokers using TLS

By default, Kafka Connect tries to connect to Kafka brokers using a plain text connection. If you prefer to use TLS, additional configuration is required.

3.2.3.1. TLS support in Kafka Connect

TLS support is configured in the tls property in KafkaConnect.spec and KafkaConnectS2I.spec. The tls property contains a list of secrets with key names under which the certificates are stored. The certificates must be stored in X509 format.

An example showing TLS configuration with multiple certificates

apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaConnect
metadata:
  name: my-cluster
spec:
  # ...
  tls:
    trustedCertificates:
      - secretName: my-secret
        certificate: ca.crt
      - secretName: my-other-secret
        certificate: certificate.crt
  # ...

When multiple certificates are stored in the same secret, the secret can be listed multiple times.

An example showing TLS configuration with multiple certificates from the same secret

apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaConnectS2I
metadata:
  name: my-cluster
spec:
  # ...
  tls:
    trustedCertificates:
      - secretName: my-secret
        certificate: ca.crt
      - secretName: my-secret
        certificate: ca2.crt
  # ...

3.2.3.2. Configuring TLS in Kafka Connect

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator
  • If they exist, the name of the Secret for the certificate used for TLS Server Authentication, and the key under which the certificate is stored in the Secret

Procedure

  1. (Optional) If they do not already exist, prepare the TLS certificate used in authentication in a file and create a Secret.

    Note

    The secrets created by the Cluster Operator for the Kafka cluster may be used directly.

    This can be done using oc create:

    oc create secret generic my-secret --from-file=my-file.crt
  2. Edit the tls property in the KafkaConnect or KafkaConnectS2I resource. For example:

    apiVersion: kafka.strimzi.io/v1beta1
    kind: KafkaConnect
    metadata:
      name: my-connect
    spec:
      # ...
      tls:
        trustedCertificates:
          - secretName: my-cluster-cluster-cert
            certificate: ca.crt
      # ...
  3. Create or update the resource.

    This can be done using oc apply:

    oc apply -f your-file

3.2.4. Connecting to Kafka brokers with Authentication

By default, Kafka Connect will try to connect to Kafka brokers without authentication. Authentication is enabled through the KafkaConnect and KafkaConnectS2I resources.

3.2.4.1. Authentication support in Kafka Connect

Authentication is configured through the authentication property in KafkaConnect.spec and KafkaConnectS2I.spec. The authentication property specifies the type of authentication mechanism to use and additional configuration details depending on the mechanism. The supported authentication types are:

3.2.4.1.1. TLS Client Authentication

To use TLS client authentication, set the type property to the value tls. TLS client authentication uses a TLS certificate to authenticate. The certificate is specified in the certificateAndKey property and is always loaded from an OpenShift secret. In the secret, the certificate must be stored in X509 format under two different keys: public and private.

Note

TLS client authentication can be used only with TLS connections. For more details about TLS configuration in Kafka Connect see Section 3.2.3, “Connecting to Kafka brokers using TLS”.

An example TLS client authentication configuration

apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaConnect
metadata:
  name: my-cluster
spec:
  # ...
  authentication:
    type: tls
    certificateAndKey:
      secretName: my-secret
      certificate: public.crt
      key: private.key
  # ...

3.2.4.1.2. SASL based SCRAM-SHA-512 authentication

To configure Kafka Connect to use SASL-based SCRAM-SHA-512 authentication, set the type property to scram-sha-512. This authentication mechanism requires a username and password.

  • Specify the username in the username property.
  • In the passwordSecret property, specify a link to a Secret containing the password. The secretName property contains the name of the Secret and the password property contains the name of the key under which the password is stored inside the Secret.
Important

Do not specify the actual password in the password field.

An example SASL based SCRAM-SHA-512 client authentication configuration

apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaConnect
metadata:
  name: my-cluster
spec:
  # ...
  authentication:
    type: scram-sha-512
    username: my-connect-user
    passwordSecret:
      secretName: my-connect-user
      password: my-connect-password-key
  # ...

3.2.4.1.3. SASL based PLAIN authentication

To configure Kafka Connect to use SASL-based PLAIN authentication, set the type property to plain. This authentication mechanism requires a username and password.

Warning

The SASL PLAIN mechanism will transfer the username and password across the network in cleartext. Only use SASL PLAIN authentication if TLS encryption is enabled.

  • Specify the username in the username property.
  • In the passwordSecret property, specify a link to a Secret containing the password. The secretName property contains the name of such a Secret and the password property contains the name of the key under which the password is stored inside the Secret.
Important

Do not specify the actual password in the password field.

An example showing SASL based PLAIN client authentication configuration

apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaConnect
metadata:
  name: my-cluster
spec:
  # ...
  authentication:
    type: plain
    username: my-connect-user
    passwordSecret:
      secretName: my-connect-user
      password: my-connect-password-key
  # ...

3.2.4.2. Configuring TLS client authentication in Kafka Connect

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator
  • If they exist, the name of the Secret with the public and private keys used for TLS Client Authentication, and the keys under which they are stored in the Secret

Procedure

  1. (Optional) If they do not already exist, prepare the keys used for authentication in a file and create the Secret.

    Note

    Secrets created by the User Operator may be used.

    This can be done using oc create:

    oc create secret generic my-secret --from-file=my-public.crt --from-file=my-private.key
  2. Edit the authentication property in the KafkaConnect or KafkaConnectS2I resource. For example:

    apiVersion: kafka.strimzi.io/v1beta1
    kind: KafkaConnect
    metadata:
      name: my-connect
    spec:
      # ...
      authentication:
        type: tls
        certificateAndKey:
          secretName: my-secret
          certificate: my-public.crt
          key: my-private.key
      # ...
  3. Create or update the resource.

    This can be done using oc apply:

    oc apply -f your-file
3.2.4.3. Configuring SCRAM-SHA-512 authentication in Kafka Connect

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator
  • Username of the user which should be used for authentication
  • If they exist, the name of the Secret with the password used for authentication and the key under which the password is stored in the Secret

Procedure

  1. (Optional) If they do not already exist, prepare a file with the password used in authentication and create the Secret.

    Note

    Secrets created by the User Operator may be used.

    This can be done using oc create:

    echo -n '<password>' > <my-password.txt>
    oc create secret generic <my-secret> --from-file=<my-password.txt>
  2. Edit the authentication property in the KafkaConnect or KafkaConnectS2I resource. For example:

    apiVersion: kafka.strimzi.io/v1beta1
    kind: KafkaConnect
    metadata:
      name: my-connect
    spec:
      # ...
      authentication:
        type: scram-sha-512
        username: <my-username>
        passwordSecret:
          secretName: <my-secret>
          password: <my-password.txt>
      # ...
  3. Create or update the resource.

    On OpenShift this can be done using oc apply:

    oc apply -f your-file

3.2.5. Kafka Connect configuration

AMQ Streams allows you to customize the configuration of Apache Kafka Connect nodes by editing certain options listed in Apache Kafka documentation.

Configuration options that cannot be configured relate to:

  • Kafka cluster bootstrap address
  • Security (Encryption, Authentication, and Authorization)
  • Listener / REST interface configuration
  • Plugin path configuration

These options are automatically configured by AMQ Streams.

3.2.5.1. Kafka Connect configuration

Kafka Connect is configured using the config property in KafkaConnect.spec and KafkaConnectS2I.spec. This property contains the Kafka Connect configuration options as keys. The values can be one of the following JSON types:

  • String
  • Number
  • Boolean

You can specify and configure the options listed in the Apache Kafka documentation with the exception of those options that are managed directly by AMQ Streams. Specifically, configuration options with keys equal to or starting with one of the following strings are forbidden:

  • ssl.
  • sasl.
  • security.
  • listeners
  • plugin.path
  • rest.
  • bootstrap.servers

When a forbidden option is present in the config property, it is ignored and a warning message is printed to the Cluster Operator log file. All other options are passed to Kafka Connect.

Important

The Cluster Operator does not validate keys or values in the config object provided. When an invalid configuration is provided, the Kafka Connect cluster might not start or might become unstable. In this circumstance, fix the configuration in the KafkaConnect.spec.config or KafkaConnectS2I.spec.config object, then the Cluster Operator can roll out the new configuration to all Kafka Connect nodes.

Certain options have default values:

  • group.id with default value connect-cluster
  • offset.storage.topic with default value connect-cluster-offsets
  • config.storage.topic with default value connect-cluster-configs
  • status.storage.topic with default value connect-cluster-status
  • key.converter with default value org.apache.kafka.connect.json.JsonConverter
  • value.converter with default value org.apache.kafka.connect.json.JsonConverter

These options are automatically configured in case they are not present in the KafkaConnect.spec.config or KafkaConnectS2I.spec.config properties.

Example Kafka Connect configuration

apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaConnect
metadata:
  name: my-connect
spec:
  # ...
  config:
    group.id: my-connect-cluster
    offset.storage.topic: my-connect-cluster-offsets
    config.storage.topic: my-connect-cluster-configs
    status.storage.topic: my-connect-cluster-status
    key.converter: org.apache.kafka.connect.json.JsonConverter
    value.converter: org.apache.kafka.connect.json.JsonConverter
    key.converter.schemas.enable: true
    value.converter.schemas.enable: true
    config.storage.replication.factor: 3
    offset.storage.replication.factor: 3
    status.storage.replication.factor: 3
  # ...

3.2.5.2. Kafka Connect configuration for multiple instances

If you are running multiple instances of Kafka Connect, pay attention to the default configuration of the following properties:

# ...
  group.id: connect-cluster 1
  offset.storage.topic: connect-cluster-offsets 2
  config.storage.topic: connect-cluster-configs 3
  status.storage.topic: connect-cluster-status  4
# ...
1
Kafka Connect cluster group the instance belongs to.
2
Kafka topic that stores connector offsets.
3
Kafka topic that stores connector and task configurations.
4
Kafka topic that stores connector and task status updates.
Note

Values for the three topics must be the same for all Kafka Connect instances with the same group.id.

Unless you change the default settings, each Kafka Connect instance connecting to the same Kafka cluster is deployed with the same values. What happens, in effect, is all instances are coupled to run in a cluster and use the same topics.

If multiple Kafka Connect clusters try to use the same topics, Kafka Connect will not work as expected and generate errors.

If you wish to run multiple Kafka Connect instances, change the values of these properties for each instance.
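For example, a second Kafka Connect cluster connecting to the same Kafka cluster might override all four properties; the names below are illustrative:

apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaConnect
metadata:
  name: my-second-connect
spec:
  # ...
  config:
    group.id: my-second-connect-cluster
    offset.storage.topic: my-second-connect-cluster-offsets
    config.storage.topic: my-second-connect-cluster-configs
    status.storage.topic: my-second-connect-cluster-status
  # ...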

3.2.5.3. Configuring Kafka Connect

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Edit the config property in the KafkaConnect or KafkaConnectS2I resource. For example:

    apiVersion: kafka.strimzi.io/v1beta1
    kind: KafkaConnect
    metadata:
      name: my-connect
    spec:
      # ...
      config:
        group.id: my-connect-cluster
        offset.storage.topic: my-connect-cluster-offsets
        config.storage.topic: my-connect-cluster-configs
        status.storage.topic: my-connect-cluster-status
        key.converter: org.apache.kafka.connect.json.JsonConverter
        value.converter: org.apache.kafka.connect.json.JsonConverter
        key.converter.schemas.enable: true
        value.converter.schemas.enable: true
        config.storage.replication.factor: 3
        offset.storage.replication.factor: 3
        status.storage.replication.factor: 3
      # ...
  2. Create or update the resource.

    This can be done using oc apply:

    oc apply -f your-file

3.2.6. CPU and memory resources

For every deployed container, AMQ Streams allows you to request specific resources and define the maximum consumption of those resources.

AMQ Streams supports two types of resources:

  • CPU
  • Memory

AMQ Streams uses the OpenShift syntax for specifying CPU and memory resources.

3.2.6.1. Resource limits and requests

Resource limits and requests are configured using the resources property in the following resources:

  • Kafka.spec.kafka
  • Kafka.spec.kafka.tlsSidecar
  • Kafka.spec.zookeeper
  • Kafka.spec.zookeeper.tlsSidecar
  • Kafka.spec.entityOperator.topicOperator
  • Kafka.spec.entityOperator.userOperator
  • Kafka.spec.entityOperator.tlsSidecar
  • Kafka.spec.KafkaExporter
  • KafkaConnect.spec
  • KafkaConnectS2I.spec
  • KafkaBridge.spec

3.2.6.1.1. Resource requests

Requests specify the resources to reserve for a given container. Reserving the resources ensures that they are always available.

Important

If the resource request is for more than the available free resources in the OpenShift cluster, the pod is not scheduled.

Resource requests are specified in the requests property. Resource requests currently supported by AMQ Streams:

  • cpu
  • memory

A request may be configured for one or more supported resources.

Example resource request configuration with all resources

# ...
resources:
  requests:
    cpu: 12
    memory: 64Gi
# ...

3.2.6.1.2. Resource limits

Limits specify the maximum resources that can be consumed by a given container. The limit is not reserved and might not always be available. A container can use the resources up to the limit only when they are available. Resource limits should be always higher than the resource requests.

Resource limits are specified in the limits property. Resource limits currently supported by AMQ Streams:

  • cpu
  • memory

A limit may be configured for one or more supported resources.

Example resource limits configuration

# ...
resources:
  limits:
    cpu: 12
    memory: 64Gi
# ...

3.2.6.1.3. Supported CPU formats

CPU requests and limits are supported in the following formats:

  • Number of CPU cores as an integer (5 CPU cores) or decimal (2.5 CPU cores).
  • Number of millicpus / millicores (100m), where 1000 millicores is the same as 1 CPU core.

Example CPU units

# ...
resources:
  requests:
    cpu: 500m
  limits:
    cpu: 2.5
# ...

Note

The computing power of 1 CPU core may differ depending on the platform where OpenShift is deployed.

3.2.6.1.4. Supported memory formats

Memory requests and limits are specified in megabytes, gigabytes, mebibytes, and gibibytes.

  • To specify memory in megabytes, use the M suffix. For example 1000M.
  • To specify memory in gigabytes, use the G suffix. For example 1G.
  • To specify memory in mebibytes, use the Mi suffix. For example 1000Mi.
  • To specify memory in gibibytes, use the Gi suffix. For example 1Gi.

An example of using different memory units

# ...
resources:
  requests:
    memory: 512Mi
  limits:
    memory: 2Gi
# ...

Additional resources

  • For more details about memory specification and additional supported units, see Meaning of memory.
3.2.6.2. Configuring resource requests and limits

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Edit the resources property in the resource specifying the cluster deployment. For example:

    apiVersion: kafka.strimzi.io/v1beta1
    kind: Kafka
    spec:
      kafka:
        # ...
        resources:
          requests:
            cpu: "8"
            memory: 64Gi
          limits:
            cpu: "12"
            memory: 128Gi
        # ...
      zookeeper:
        # ...
  2. Create or update the resource.

    This can be done using oc apply:

    oc apply -f your-file

3.2.7. Kafka Connect loggers

Kafka Connect has its own configurable loggers:

  • connect.root.logger.level
  • log4j.logger.org.reflections

Kafka Connect uses the Apache log4j logger implementation.

Use the logging property to configure loggers and logger levels.

You can set the log levels by specifying the logger and level directly (inline) or use a custom (external) ConfigMap. If a ConfigMap is used, you set the logging.name property to the name of the ConfigMap containing the external logging configuration. Inside the ConfigMap, the logging configuration is described using log4j.properties.

Here we see examples of inline and external logging.

Inline logging

apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaConnect
spec:
  # ...
  logging:
    type: inline
    loggers:
      connect.root.logger.level: "INFO"
  # ...

External logging

apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaConnect
spec:
  # ...
  logging:
    type: external
    name: customConfigMap
  # ...
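The referenced ConfigMap is expected to hold the log4j configuration under a log4j.properties key. A minimal sketch, with illustrative logger levels:

apiVersion: v1
kind: ConfigMap
metadata:
  name: customConfigMap
data:
  log4j.properties: |
    log4j.rootLogger=INFO, CONSOLE
    log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender
    log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout
    log4j.appender.CONSOLE.layout.ConversionPattern=%d{ISO8601} %p %m (%c) [%t]%n
    log4j.logger.org.reflections=ERROR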

3.2.8. Healthchecks

Healthchecks are periodic tests which verify the health of an application. When a Healthcheck probe fails, OpenShift assumes that the application is not healthy and attempts to fix it.

OpenShift supports two types of Healthcheck probes:

  • Liveness probes
  • Readiness probes

For more details about the probes, see Configure Liveness and Readiness Probes. Both types of probes are used in AMQ Streams components.

Users can configure selected options for liveness and readiness probes.

3.2.8.1. Healthcheck configurations

Liveness and readiness probes can be configured using the livenessProbe and readinessProbe properties in following resources:

  • Kafka.spec.kafka
  • Kafka.spec.kafka.tlsSidecar
  • Kafka.spec.zookeeper
  • Kafka.spec.zookeeper.tlsSidecar
  • Kafka.spec.entityOperator.tlsSidecar
  • Kafka.spec.entityOperator.topicOperator
  • Kafka.spec.entityOperator.userOperator
  • Kafka.spec.KafkaExporter
  • KafkaConnect.spec
  • KafkaConnectS2I.spec
  • KafkaMirrorMaker.spec
  • KafkaBridge.spec

Both livenessProbe and readinessProbe support the following options:

  • initialDelaySeconds
  • timeoutSeconds
  • periodSeconds
  • successThreshold
  • failureThreshold

For more information about the livenessProbe and readinessProbe options, see Section B.39, “Probe schema reference”.

An example of liveness and readiness probe configuration

# ...
readinessProbe:
  initialDelaySeconds: 15
  timeoutSeconds: 5
livenessProbe:
  initialDelaySeconds: 15
  timeoutSeconds: 5
# ...

3.2.8.2. Configuring healthchecks

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Edit the livenessProbe or readinessProbe property in the Kafka, KafkaConnect or KafkaConnectS2I resource. For example:

    apiVersion: kafka.strimzi.io/v1beta1
    kind: Kafka
    metadata:
      name: my-cluster
    spec:
      kafka:
        # ...
        readinessProbe:
          initialDelaySeconds: 15
          timeoutSeconds: 5
        livenessProbe:
          initialDelaySeconds: 15
          timeoutSeconds: 5
        # ...
      zookeeper:
        # ...
  2. Create or update the resource.

    This can be done using oc apply:

    oc apply -f your-file

3.2.9. Prometheus metrics

AMQ Streams supports Prometheus metrics using Prometheus JMX exporter to convert the JMX metrics supported by Apache Kafka and ZooKeeper to Prometheus metrics. When metrics are enabled, they are exposed on port 9404.

For more information about configuring Prometheus and Grafana, see Metrics.
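Once metrics are enabled, one quick way to check the exposed metrics is to port-forward to a pod and scrape the endpoint locally; this is a sketch, and the pod name is illustrative:

oc port-forward my-cluster-kafka-0 9404:9404 &
curl -s http://localhost:9404/metrics | head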

3.2.9.1. Metrics configuration

Prometheus metrics are enabled by configuring the metrics property in following resources:

  • Kafka.spec.kafka
  • Kafka.spec.zookeeper
  • KafkaConnect.spec
  • KafkaConnectS2I.spec

When the metrics property is not defined in the resource, the Prometheus metrics will be disabled. To enable Prometheus metrics export without any further configuration, you can set it to an empty object ({}).

Example of enabling metrics without any further configuration

apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    metrics: {}
    # ...
  zookeeper:
    # ...

The metrics property might contain additional configuration for the Prometheus JMX exporter.

Example of enabling metrics with additional Prometheus JMX Exporter configuration

apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    metrics:
      lowercaseOutputName: true
      rules:
        - pattern: "kafka.server<type=(.+), name=(.+)PerSec\\w*><>Count"
          name: "kafka_server_$1_$2_total"
        - pattern: "kafka.server<type=(.+), name=(.+)PerSec\\w*, topic=(.+)><>Count"
          name: "kafka_server_$1_$2_total"
          labels:
            topic: "$3"
    # ...
  zookeeper:
    # ...

3.2.9.2. Configuring Prometheus metrics

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Edit the metrics property in the Kafka, KafkaConnect or KafkaConnectS2I resource. For example:

    apiVersion: kafka.strimzi.io/v1beta1
    kind: Kafka
    metadata:
      name: my-cluster
    spec:
      kafka:
        # ...
      zookeeper:
        # ...
        metrics:
          lowercaseOutputName: true
        # ...
  2. Create or update the resource.

    This can be done using oc apply:

    oc apply -f your-file

3.2.10. JVM Options

The following components of AMQ Streams run inside a Virtual Machine (VM):

  • Apache Kafka
  • Apache ZooKeeper
  • Apache Kafka Connect
  • Apache Kafka MirrorMaker
  • AMQ Streams Kafka Bridge

JVM configuration options optimize the performance for different platforms and architectures. AMQ Streams allows you to configure some of these options.

3.2.10.1. JVM configuration

JVM options can be configured using the jvmOptions property in following resources:

  • Kafka.spec.kafka
  • Kafka.spec.zookeeper
  • KafkaConnect.spec
  • KafkaConnectS2I.spec
  • KafkaMirrorMaker.spec
  • KafkaBridge.spec

Only a selected subset of available JVM options can be configured. The following options are supported:

-Xms and -Xmx

-Xms configures the minimum initial allocation heap size when the JVM starts. -Xmx configures the maximum heap size.

Note

The units accepted by JVM settings such as -Xmx and -Xms are those accepted by the JDK java binary in the corresponding image. Accordingly, 1g or 1G means 1,073,741,824 bytes, and Gi is not a valid unit suffix. This is in contrast to the units used for memory requests and limits, which follow the OpenShift convention where 1G means 1,000,000,000 bytes, and 1Gi means 1,073,741,824 bytes.

The default values used for -Xms and -Xmx depend on whether there is a memory limit configured for the container:

  • If there is a memory limit then the JVM’s minimum and maximum memory will be set to a value corresponding to the limit.
  • If there is no memory limit then the JVM’s minimum memory will be set to 128M and the JVM’s maximum memory will not be defined. This allows for the JVM’s memory to grow as-needed, which is ideal for single node environments in test and development.
Important

Setting -Xmx explicitly requires some care:

  • The JVM’s overall memory usage will be approximately 4 × the maximum heap, as configured by -Xmx.
  • If -Xmx is set without also setting an appropriate OpenShift memory limit, it is possible that the container will be killed should the OpenShift node experience memory pressure (from other Pods running on it).
  • If -Xmx is set without also setting an appropriate OpenShift memory request, it is possible that the container will be scheduled to a node with insufficient memory. In this case, the container will not start but crash (immediately if -Xms is set to -Xmx, or some later time if not).

When setting -Xmx explicitly, it is recommended to:

  • set the memory request and the memory limit to the same value,
  • use a memory request that is at least 4.5 × the -Xmx,
  • consider setting -Xms to the same value as -Xmx.
Important

Containers doing lots of disk I/O (such as Kafka broker containers) will need to leave some memory available for use as operating system page cache. On such containers, the requested memory should be significantly higher than the memory used by the JVM.
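Putting these recommendations together, a sketch for a Kafka broker with a 2 GiB heap might reserve around five times the heap for the container; the figures are illustrative:

# ...
resources:
  requests:
    memory: 10Gi
  limits:
    memory: 10Gi
jvmOptions:
  "-Xms": "2g"
  "-Xmx": "2g"
# ...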

Example fragment configuring -Xmx and -Xms

# ...
jvmOptions:
  "-Xmx": "2g"
  "-Xms": "2g"
# ...

In the above example, the JVM will use 2 GiB (=2,147,483,648 bytes) for its heap. Its total memory usage will be approximately 8GiB.

Setting the same value for initial (-Xms) and maximum (-Xmx) heap sizes avoids the JVM having to allocate memory after startup, at the cost of possibly allocating more heap than is really needed. For Kafka and ZooKeeper pods such allocation could cause unwanted latency. For Kafka Connect avoiding over allocation may be the most important concern, especially in distributed mode where the effects of over-allocation will be multiplied by the number of consumers.

-server

-server enables the server JVM. This option can be set to true or false.

Example fragment configuring -server

# ...
jvmOptions:
  "-server": true
# ...

Note

When neither of the two options (-server and -XX) is specified, the default Apache Kafka configuration of KAFKA_JVM_PERFORMANCE_OPTS will be used.

-XX

The -XX object can be used to configure advanced runtime options of a JVM. The -server and -XX options are used to configure the KAFKA_JVM_PERFORMANCE_OPTS option of Apache Kafka.

Example showing the use of the -XX object

jvmOptions:
  "-XX":
    "UseG1GC": true
    "MaxGCPauseMillis": 20
    "InitiatingHeapOccupancyPercent": 35
    "ExplicitGCInvokesConcurrent": true
    "UseParNewGC": false

The example configuration above will result in the following JVM options:

-XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:+ExplicitGCInvokesConcurrent -XX:-UseParNewGC
Note

When neither of the two options (-server and -XX) is specified, the default Apache Kafka configuration of KAFKA_JVM_PERFORMANCE_OPTS will be used.

3.2.10.1.1. Garbage collector logging

The jvmOptions section also allows you to enable and disable garbage collector (GC) logging. GC logging is disabled by default. To enable it, set the gcLoggingEnabled property as follows:

Example of enabling GC logging

# ...
jvmOptions:
  gcLoggingEnabled: true
# ...

3.2.10.2. Configuring JVM options

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Edit the jvmOptions property in the Kafka, KafkaConnect, KafkaConnectS2I, KafkaMirrorMaker, or KafkaBridge resource. For example:

    apiVersion: kafka.strimzi.io/v1beta1
    kind: Kafka
    metadata:
      name: my-cluster
    spec:
      kafka:
        # ...
        jvmOptions:
          "-Xmx": "8g"
          "-Xms": "8g"
        # ...
      zookeeper:
        # ...
  2. Create or update the resource.

    This can be done using oc apply:

    oc apply -f your-file

3.2.11. Container images

AMQ Streams allows you to configure the container images which will be used for its components. Overriding container images is recommended only in special situations where you need to use a different container registry, for example, because your network does not allow access to the container repository used by AMQ Streams. In such a case, you should either copy the AMQ Streams images or build them from source. If the configured image is not compatible with AMQ Streams images, it might not work properly.

3.2.11.1. Container image configurations

You can specify which container image to use for each component using the image property in the following resources:

  • Kafka.spec.kafka
  • Kafka.spec.kafka.tlsSidecar
  • Kafka.spec.zookeeper
  • Kafka.spec.zookeeper.tlsSidecar
  • Kafka.spec.entityOperator.topicOperator
  • Kafka.spec.entityOperator.userOperator
  • Kafka.spec.entityOperator.tlsSidecar
  • KafkaConnect.spec
  • KafkaConnectS2I.spec
  • KafkaBridge.spec
3.2.11.1.1. Configuring the image property for Kafka, Kafka Connect, and Kafka MirrorMaker

Kafka, Kafka Connect (including Kafka Connect with S2I support), and Kafka MirrorMaker support multiple versions of Kafka. Each component requires its own image. The default images for the different Kafka versions are configured in the following environment variables:

  • STRIMZI_KAFKA_IMAGES
  • STRIMZI_KAFKA_CONNECT_IMAGES
  • STRIMZI_KAFKA_CONNECT_S2I_IMAGES
  • STRIMZI_KAFKA_MIRROR_MAKER_IMAGES

These environment variables contain mappings between the Kafka versions and their corresponding images. The mappings are used together with the image and version properties:

  • If neither image nor version are given in the custom resource then the version will default to the Cluster Operator’s default Kafka version, and the image will be the one corresponding to this version in the environment variable.
  • If image is given but version is not, then the given image is used and the version is assumed to be the Cluster Operator’s default Kafka version.
  • If version is given but image is not, then the image that corresponds to the given version in the environment variable is used.
  • If both version and image are given, then the given image is used. The image is assumed to contain a Kafka image with the given version.

The image and version for the different components can be configured in the following properties:

  • For Kafka in spec.kafka.image and spec.kafka.version.
  • For Kafka Connect, Kafka Connect S2I, and Kafka MirrorMaker in spec.image and spec.version.
Warning

It is recommended to provide only the version and leave the image property unspecified. This reduces the chance of making a mistake when configuring the custom resource. If you need to change the images used for different versions of Kafka, it is preferable to configure the Cluster Operator’s environment variables.
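For example, specifying only the version and letting the Cluster Operator select the matching image might look like this; the version shown is illustrative:

apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    version: 2.4.0
    # no image property, so the image mapped to this version is used
    # ...
  zookeeper:
    # ...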

3.2.11.1.2. Configuring the image property in other resources

For the image property in the other custom resources, the given value will be used during deployment. If the image property is missing, the image specified in the Cluster Operator configuration will be used. If the image name is not defined in the Cluster Operator configuration, then the default value will be used.

  • For Kafka broker TLS sidecar:

    1. Container image specified in the STRIMZI_DEFAULT_TLS_SIDECAR_KAFKA_IMAGE environment variable from the Cluster Operator configuration.
    2. registry.redhat.io/amq7/amq-streams-kafka-24-rhel7:1.4.0 container image.
  • For ZooKeeper nodes:
  • For ZooKeeper node TLS sidecar:

    1. Container image specified in the STRIMZI_DEFAULT_TLS_SIDECAR_ZOOKEEPER_IMAGE environment variable from the Cluster Operator configuration.
    2. registry.redhat.io/amq7/amq-streams-kafka-24-rhel7:1.4.0 container image.
  • For Topic Operator:

    1. Container image specified in the STRIMZI_DEFAULT_TOPIC_OPERATOR_IMAGE environment variable from the Cluster Operator configuration.
    2. registry.redhat.io/amq7/amq-streams-rhel7-operator:1.4.0 container image.
  • For User Operator:

    1. Container image specified in the STRIMZI_DEFAULT_USER_OPERATOR_IMAGE environment variable from the Cluster Operator configuration.
    2. registry.redhat.io/amq7/amq-streams-rhel7-operator:1.4.0 container image.
  • For Entity Operator TLS sidecar:

    1. Container image specified in the STRIMZI_DEFAULT_TLS_SIDECAR_ENTITY_OPERATOR_IMAGE environment variable from the Cluster Operator configuration.
    2. registry.redhat.io/amq7/amq-streams-kafka-24-rhel7:1.4.0 container image.
  • For Kafka Exporter:

    1. Container image specified in the STRIMZI_DEFAULT_KAFKA_EXPORTER_IMAGE environment variable from the Cluster Operator configuration.
    2. registry.redhat.io/amq7/amq-streams-kafka-24-rhel7:1.4.0 container image.
  • For Kafka Bridge:

    1. Container image specified in the STRIMZI_DEFAULT_KAFKA_BRIDGE_IMAGE environment variable from the Cluster Operator configuration.
    2. registry.redhat.io/amq7/amq-streams-bridge-rhel7:1.4.0 container image.
  • For Kafka broker initializer:

    1. Container image specified in the STRIMZI_DEFAULT_KAFKA_INIT_IMAGE environment variable from the Cluster Operator configuration.
    2. registry.redhat.io/amq7/amq-streams-rhel7-operator:1.4.0 container image.
Warning

Overriding container images is recommended only in special situations where you need to use a different container registry, for example, because your network does not allow access to the container repository used by AMQ Streams. In such a case, you should either copy the AMQ Streams images or build them from source. If the configured image is not compatible with AMQ Streams images, it might not work properly.

Example of container image configuration

apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    image: my-org/my-image:latest
    # ...
  zookeeper:
    # ...

3.2.11.2. Configuring container images

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Edit the image property in the Kafka, KafkaConnect or KafkaConnectS2I resource. For example:

    apiVersion: kafka.strimzi.io/v1beta1
    kind: Kafka
    metadata:
      name: my-cluster
    spec:
      kafka:
        # ...
        image: my-org/my-image:latest
        # ...
      zookeeper:
        # ...
  2. Create or update the resource.

    This can be done using oc apply:

    oc apply -f your-file

3.2.12. Configuring pod scheduling

Important

When two applications are scheduled to the same OpenShift node, both applications might use the same resources, such as disk I/O, and impact each other's performance. That can lead to performance degradation. The best way to avoid such problems is to schedule Kafka pods so that they do not share nodes with other critical workloads, by using the right nodes or by dedicating a set of nodes only to Kafka.

3.2.12.1. Scheduling pods based on other applications
3.2.12.1.1. Preventing critical applications from sharing nodes

Pod anti-affinity can be used to ensure that critical applications are never scheduled on the same node. When running a Kafka cluster, it is recommended to use pod anti-affinity to ensure that the Kafka brokers do not share nodes with other workloads, such as databases.

3.2.12.1.2. Affinity

Affinity can be configured using the affinity property in the following resources:

  • Kafka.spec.kafka.template.pod
  • Kafka.spec.zookeeper.template.pod
  • Kafka.spec.entityOperator.template.pod
  • KafkaConnect.spec.template.pod
  • KafkaConnectS2I.spec.template.pod
  • KafkaBridge.spec.template.pod

The affinity configuration can include different types of affinity:

  • Pod affinity and anti-affinity
  • Node affinity

The format of the affinity property follows the OpenShift specification. For more details, see the Kubernetes node and pod affinity documentation.

3.2.12.1.3. Configuring pod anti-affinity in Kafka components

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Edit the affinity property in the resource specifying the cluster deployment. Use labels to specify the pods which should not be scheduled on the same nodes. The topologyKey should be set to kubernetes.io/hostname to specify that the selected pods should not be scheduled on nodes with the same hostname. For example:

    apiVersion: kafka.strimzi.io/v1beta1
    kind: Kafka
    spec:
      kafka:
        # ...
        template:
          pod:
            affinity:
              podAntiAffinity:
                requiredDuringSchedulingIgnoredDuringExecution:
                  - labelSelector:
                      matchExpressions:
                        - key: application
                          operator: In
                          values:
                            - postgresql
                            - mongodb
                    topologyKey: "kubernetes.io/hostname"
        # ...
      zookeeper:
        # ...
  2. Create or update the resource.

    This can be done using oc apply:

    oc apply -f your-file
3.2.12.2. Scheduling pods to specific nodes
3.2.12.2.1. Node scheduling

An OpenShift cluster usually consists of many different types of worker nodes. Some are optimized for CPU-heavy workloads, some for memory, while others might be optimized for storage (fast local SSDs) or network. Using different nodes helps to optimize both costs and performance. To achieve the best possible performance, it is important to allow AMQ Streams components to be scheduled on the right nodes.

OpenShift uses node affinity to schedule workloads onto specific nodes. Node affinity allows you to create a scheduling constraint for the node on which a pod will be scheduled. The constraint is specified as a label selector. You can specify the label using either a built-in node label, such as beta.kubernetes.io/instance-type, or custom labels to select the right node.

3.2.12.2.2. Affinity

Affinity can be configured using the affinity property in the following resources:

  • Kafka.spec.kafka.template.pod
  • Kafka.spec.zookeeper.template.pod
  • Kafka.spec.entityOperator.template.pod
  • KafkaConnect.spec.template.pod
  • KafkaConnectS2I.spec.template.pod
  • KafkaBridge.spec.template.pod

The affinity configuration can include different types of affinity:

  • Pod affinity and anti-affinity
  • Node affinity

The format of the affinity property follows the OpenShift specification. For more details, see the Kubernetes node and pod affinity documentation.

3.2.12.2.3. Configuring node affinity in Kafka components

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Label the nodes where AMQ Streams components should be scheduled.

    This can be done using oc label:

    oc label node your-node node-type=fast-network

    Alternatively, you can reuse some of the existing node labels.

  2. Edit the affinity property in the resource specifying the cluster deployment. For example:

    apiVersion: kafka.strimzi.io/v1beta1
    kind: Kafka
    spec:
      kafka:
        # ...
        template:
          pod:
            affinity:
              nodeAffinity:
                requiredDuringSchedulingIgnoredDuringExecution:
                  nodeSelectorTerms:
                    - matchExpressions:
                      - key: node-type
                        operator: In
                        values:
                        - fast-network
        # ...
      zookeeper:
        # ...
  3. Create or update the resource.

    This can be done using oc apply:

    oc apply -f your-file
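
As an optional check, you can list the Kafka pods together with the nodes they were scheduled on. The command below assumes that the pods carry the strimzi.io/cluster label applied by the Cluster Operator and that the Kafka cluster is named my-cluster:

    oc get pods -l strimzi.io/cluster=my-cluster -o wide
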
3.2.12.3. Using dedicated nodes
3.2.12.3.1. Dedicated nodes

Cluster administrators can mark selected OpenShift nodes as tainted. Nodes with taints are excluded from regular scheduling, and normal pods are not scheduled to run on them. Only services that can tolerate the taint set on the node can be scheduled on it. The only other services running on such nodes are system services, such as log collectors or software-defined networks.

Taints can be used to create dedicated nodes. Running Kafka and its components on dedicated nodes can have many advantages. No other applications run on the same nodes to cause disturbance or consume the resources needed for Kafka, which can lead to improved performance and stability.

To schedule Kafka pods on the dedicated nodes, configure node affinity and tolerations.

3.2.12.3.2. Affinity

Affinity can be configured using the affinity property in the following resources:

  • Kafka.spec.kafka.template.pod
  • Kafka.spec.zookeeper.template.pod
  • Kafka.spec.entityOperator.template.pod
  • KafkaConnect.spec.template.pod
  • KafkaConnectS2I.spec.template.pod
  • KafkaBridge.spec.template.pod

The affinity configuration can include different types of affinity:

  • Pod affinity and anti-affinity
  • Node affinity

The format of the affinity property follows the OpenShift specification. For more details, see the Kubernetes node and pod affinity documentation.

3.2.12.3.3. Tolerations

Tolerations can be configured using the tolerations property in the following resources:

  • Kafka.spec.kafka.template.pod
  • Kafka.spec.zookeeper.template.pod
  • Kafka.spec.entityOperator.template.pod
  • KafkaConnect.spec.template.pod
  • KafkaConnectS2I.spec.template.pod
  • KafkaBridge.spec.template.pod

The format of the tolerations property follows the OpenShift specification. For more details, see the Kubernetes taints and tolerations documentation.

3.2.12.3.4. Setting up dedicated nodes and scheduling pods on them

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Select the nodes to be used as dedicated nodes.
  2. Make sure there are no workloads scheduled on these nodes.
  3. Set the taints on the selected nodes:

    This can be done using oc adm taint:

    oc adm taint node your-node dedicated=Kafka:NoSchedule
  4. Additionally, add a label to the selected nodes.

    This can be done using oc label:

    oc label node your-node dedicated=Kafka
  5. Edit the affinity and tolerations properties in the resource specifying the cluster deployment. For example:

    apiVersion: kafka.strimzi.io/v1beta1
    kind: Kafka
    spec:
      kafka:
        # ...
        template:
          pod:
            tolerations:
              - key: "dedicated"
                operator: "Equal"
                value: "Kafka"
                effect: "NoSchedule"
            affinity:
              nodeAffinity:
                requiredDuringSchedulingIgnoredDuringExecution:
                  nodeSelectorTerms:
                  - matchExpressions:
                    - key: dedicated
                      operator: In
                      values:
                      - Kafka
        # ...
      zookeeper:
        # ...
  6. Create or update the resource.

    This can be done using oc apply:

    oc apply -f your-file
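
As an optional check, you can confirm that the taint and the label were applied; your-node is the node name used in the steps above:

    oc describe node your-node

The Taints and Labels fields in the output should both show dedicated=Kafka, with the taint carrying the NoSchedule effect.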

3.2.13. Using external configuration and secrets

Connectors are created, reconfigured, and deleted using the Kafka Connect HTTP REST interface, or by using KafkaConnectors. For more information on these methods, see Section 2.5.3, “Creating and managing connectors”. The connector configuration is passed to Kafka Connect as part of an HTTP request and stored within Kafka itself.

ConfigMaps and Secrets are standard OpenShift resources used for storing configurations and confidential data. Whichever method you use to manage connectors, you can use ConfigMaps and Secrets to configure certain elements of a connector. You can then reference the configuration values in HTTP REST commands (this keeps the configuration separate and more secure, if needed). This method applies especially to confidential data, such as usernames, passwords, or certificates.

3.2.13.1. Storing connector configurations externally

You can mount ConfigMaps or Secrets into a Kafka Connect pod as volumes or environment variables. Volumes and environment variables are configured in the externalConfiguration property in KafkaConnect.spec and KafkaConnectS2I.spec.

3.2.13.1.1. External configuration as environment variables

The env property is used to specify one or more environment variables. These variables can contain a value from either a ConfigMap or a Secret.

Note

The names of user-defined environment variables cannot start with KAFKA_ or STRIMZI_.

To mount a value from a Secret to an environment variable, use the valueFrom property and the secretKeyRef as shown in the following example.

Example of an environment variable set to a value from a Secret

apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaConnect
metadata:
  name: my-connect
spec:
  # ...
  externalConfiguration:
    env:
      - name: MY_ENVIRONMENT_VARIABLE
        valueFrom:
          secretKeyRef:
            name: my-secret
            key: my-key

A common use case for mounting Secrets to environment variables is when your connector needs to communicate with Amazon AWS and needs to read the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables that contain the credentials.

To mount a value from a ConfigMap to an environment variable, use configMapKeyRef in the valueFrom property as shown in the following example.

Example of an environment variable set to a value from a ConfigMap

apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaConnect
metadata:
  name: my-connect
spec:
  # ...
  externalConfiguration:
    env:
      - name: MY_ENVIRONMENT_VARIABLE
        valueFrom:
          configMapKeyRef:
            name: my-config-map
            key: my-key

3.2.13.1.2. External configuration as volumes

You can also mount ConfigMaps or Secrets to a Kafka Connect pod as volumes. Using volumes instead of environment variables is useful in the following scenarios:

  • Mounting truststores or keystores with TLS certificates
  • Mounting a properties file that is used to configure Kafka Connect connectors

In the volumes property of the externalConfiguration resource, list the ConfigMaps or Secrets that will be mounted as volumes. Each volume must specify a name in the name property and a reference to a ConfigMap or Secret.

Example of volumes with external configuration

apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaConnect
metadata:
  name: my-connect
spec:
  # ...
  externalConfiguration:
    volumes:
      - name: connector1
        configMap:
          name: connector1-configuration
      - name: connector1-certificates
        secret:
          secretName: connector1-certificates

The volumes will be mounted inside the Kafka Connect containers in the path /opt/kafka/external-configuration/<volume-name>. For example, the files from a volume named connector1 would appear in the directory /opt/kafka/external-configuration/connector1.

Use the FileConfigProvider to read values from the mounted properties files in connector configurations.
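
For illustration, a connector option referencing a value from a properties file mounted from the connector1 volume above might look as follows; the file name connector1.properties and the option key dbUsername are assumed names, and a complete example is shown in Section 3.2.13.3, “Mounting Secrets as volumes”:

    "username": "${file:/opt/kafka/external-configuration/connector1/connector1.properties:dbUsername}"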

3.2.13.2. Mounting Secrets as environment variables

You can create an OpenShift Secret and mount it to Kafka Connect as an environment variable.

Prerequisites

  • A running Cluster Operator.

Procedure

  1. Create a secret containing the information that will be mounted as an environment variable. For example:

    apiVersion: v1
    kind: Secret
    metadata:
      name: aws-creds
    type: Opaque
    data:
      awsAccessKey: QUtJQVhYWFhYWFhYWFhYWFg=
      awsSecretAccessKey: Ylhsd1lYTnpkMjl5WkE=
  2. Create or edit the Kafka Connect resource. Configure the externalConfiguration section of the KafkaConnect or KafkaConnectS2I custom resource to reference the secret. For example:

    apiVersion: kafka.strimzi.io/v1beta1
    kind: KafkaConnect
    metadata:
      name: my-connect
    spec:
      # ...
      externalConfiguration:
        env:
          - name: AWS_ACCESS_KEY_ID
            valueFrom:
              secretKeyRef:
                name: aws-creds
                key: awsAccessKey
          - name: AWS_SECRET_ACCESS_KEY
            valueFrom:
              secretKeyRef:
                name: aws-creds
                key: awsSecretAccessKey
  3. Apply the changes to your Kafka Connect deployment.

    Use oc apply:

    oc apply -f your-file

The environment variables are now available for use when developing your connectors.
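
As an optional check, you can confirm that the variables are present in a running Kafka Connect pod; the pod name below is a placeholder:

    oc exec <kafka-connect-pod-name> -- env | grep AWS_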

3.2.13.3. Mounting Secrets as volumes

You can create an OpenShift Secret, mount it as a volume to Kafka Connect, and then use it to configure a Kafka Connect connector.

Prerequisites

  • A running Cluster Operator.

Procedure

  1. Create a secret containing a properties file that defines the configuration options for your connector. For example:

    apiVersion: v1
    kind: Secret
    metadata:
      name: mysecret
    type: Opaque
    stringData:
      connector.properties: |-
        dbUsername: my-user
        dbPassword: my-password
  2. Create or edit the Kafka Connect resource. Configure the FileConfigProvider in the config section and the externalConfiguration section of the KafkaConnect or KafkaConnectS2I custom resource to reference the secret. For example:

    apiVersion: kafka.strimzi.io/v1beta1
    kind: KafkaConnect
    metadata:
      name: my-connect
    spec:
      # ...
      config:
        config.providers: file
        config.providers.file.class: org.apache.kafka.common.config.provider.FileConfigProvider
      #...
      externalConfiguration:
        volumes:
          - name: connector-config
            secret:
              secretName: mysecret
  3. Apply the changes to your Kafka Connect deployment.

    Use oc apply:

    oc apply -f your-file
  4. Use the values from the mounted properties file in your JSON payload with connector configuration. For example:

    {
       "name":"my-connector",
       "config":{
          "connector.class":"MyDbConnector",
          "tasks.max":"3",
          "database": "my-postgresql:5432"
          "username":"${file:/opt/kafka/external-configuration/connector-config/connector.properties:dbUsername}",
          "password":"${file:/opt/kafka/external-configuration/connector-config/connector.properties:dbPassword}",
          # ...
       }
    }

3.2.14. Enabling KafkaConnector resources

To enable KafkaConnectors for a Kafka Connect cluster, add the strimzi.io/use-connector-resources annotation to the KafkaConnect or KafkaConnectS2I custom resource.

Prerequisites

  • A running Cluster Operator

Procedure

  1. Edit the KafkaConnect or KafkaConnectS2I resource. Add the strimzi.io/use-connector-resources annotation. For example:

    apiVersion: kafka.strimzi.io/v1beta1
    kind: KafkaConnect
    metadata:
      name: my-connect-cluster
      annotations:
        strimzi.io/use-connector-resources: "true"
    spec:
      # ...
  2. Create or update the resource using oc apply:

    oc apply -f kafka-connect.yaml
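
With the annotation in place, connectors for this Kafka Connect cluster can be managed declaratively as KafkaConnector resources. The following is a minimal sketch of such a resource; the connector class, file path, and topic name are placeholder values, and the KafkaConnector API version is assumed to be kafka.strimzi.io/v1alpha1:

    apiVersion: kafka.strimzi.io/v1alpha1
    kind: KafkaConnector
    metadata:
      name: my-source-connector
      labels:
        strimzi.io/cluster: my-connect-cluster
    spec:
      class: org.apache.kafka.connect.file.FileStreamSourceConnector
      tasksMax: 1
      config:
        file: /opt/kafka/LICENSE
        topic: my-topic

The strimzi.io/cluster label must match the name of the KafkaConnect resource on which the annotation was enabled.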

3.2.15. List of resources created as part of Kafka Connect cluster

The following resources are created by the Cluster Operator in the OpenShift cluster:

connect-cluster-name-connect
Deployment in charge of creating the Kafka Connect worker node pods.
connect-cluster-name-connect-api
Service which exposes the REST interface for managing the Kafka Connect cluster.
connect-cluster-name-config
ConfigMap which contains the Kafka Connect ancillary configuration and is mounted as a volume by the Kafka Connect pods.
connect-cluster-name-connect
Pod Disruption Budget configured for the Kafka Connect worker nodes.

3.3. Kafka Connect cluster with Source2Image support

The full schema of the KafkaConnectS2I resource is described in the Section B.88, “KafkaConnectS2I schema reference”. All labels that are applied to the desired KafkaConnectS2I resource will also be applied to the OpenShift resources making up the Kafka Connect cluster with Source2Image support. This provides a convenient mechanism for resources to be labeled as required.

3.3.1. Replicas

Kafka Connect clusters can consist of one or more nodes. The number of nodes is defined in the KafkaConnect and KafkaConnectS2I resources. Running a Kafka Connect cluster with multiple nodes can provide better availability and scalability. However, when running Kafka Connect on OpenShift it is not necessary to run multiple nodes of Kafka Connect for high availability. If the node where Kafka Connect is deployed crashes, OpenShift automatically reschedules the Kafka Connect pod to a different node. However, running Kafka Connect with multiple nodes can provide faster failover times, because the other nodes will already be up and running.

3.3.1.1. Configuring the number of nodes

The number of Kafka Connect nodes is configured using the replicas property in KafkaConnect.spec and KafkaConnectS2I.spec.

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Edit the replicas property in the KafkaConnect or KafkaConnectS2I resource. For example:

    apiVersion: kafka.strimzi.io/v1beta1
    kind: KafkaConnectS2I
    metadata:
      name: my-cluster
    spec:
      # ...
      replicas: 3
      # ...
  2. Create or update the resource.

    This can be done using oc apply:

    oc apply -f your-file

3.3.2. Bootstrap servers

A Kafka Connect cluster always works in combination with a Kafka cluster. A Kafka cluster is specified as a list of bootstrap servers. On OpenShift, the list should ideally contain the Kafka cluster bootstrap service named cluster-name-kafka-bootstrap, with port 9092 for plain traffic or port 9093 for encrypted traffic.

The list of bootstrap servers is configured in the bootstrapServers property in KafkaConnect.spec and KafkaConnectS2I.spec. The servers must be defined as a comma-separated list of one or more Kafka brokers, or a service pointing to Kafka brokers, specified as hostname:port pairs.

When using Kafka Connect with a Kafka cluster not managed by AMQ Streams, you can specify the bootstrap servers list according to the configuration of the cluster.
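
For example, a sketch pointing Kafka Connect at a Kafka cluster that is not managed by AMQ Streams might look as follows; the hostnames are placeholders:

    apiVersion: kafka.strimzi.io/v1beta1
    kind: KafkaConnect
    metadata:
      name: my-connect
    spec:
      # ...
      bootstrapServers: kafka-broker-1.example.com:9092,kafka-broker-2.example.com:9092
      # ...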

3.3.2.1. Configuring bootstrap servers

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator

Procedure

  1. Edit the bootstrapServers property in the KafkaConnect or KafkaConnectS2I resource. For example:

    apiVersion: kafka.strimzi.io/v1beta1
    kind: KafkaConnect
    metadata:
      name: my-cluster
    spec:
      # ...
      bootstrapServers: my-cluster-kafka-bootstrap:9092
      # ...
  2. Create or update the resource.

    This can be done using oc apply:

    oc apply -f your-file

3.3.3. Connecting to Kafka brokers using TLS

By default, Kafka Connect tries to connect to Kafka brokers using a plain text connection. If you prefer to use TLS, additional configuration is required.

3.3.3.1. TLS support in Kafka Connect

TLS support is configured in the tls property in KafkaConnect.spec and KafkaConnectS2I.spec. The tls property contains a list of secrets with key names under which the certificates are stored. The certificates must be stored in X509 format.

An example showing TLS configuration with multiple certificates

apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaConnect
metadata:
  name: my-cluster
spec:
  # ...
  tls:
    trustedCertificates:
      - secretName: my-secret
        certificate: ca.crt
      - secretName: my-other-secret
        certificate: certificate.crt
  # ...

When multiple certificates are stored in the same secret, the secret can be listed multiple times.

An example showing TLS configuration with multiple certificates from the same secret

apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaConnectS2I
metadata:
  name: my-cluster
spec:
  # ...
  tls:
    trustedCertificates:
      - secretName: my-secret
        certificate: ca.crt
      - secretName: my-secret
        certificate: ca2.crt
  # ...

3.3.3.2. Configuring TLS in Kafka Connect

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator
  • If they exist, the name of the Secret for the certificate used for TLS Server Authentication, and the key under which the certificate is stored in the Secret

Procedure

  1. (Optional) If they do not already exist, prepare the TLS certificate used in authentication in a file and create a Secret.

    Note

    The secrets created by the Cluster Operator for the Kafka cluster may be used directly.

    This can be done using oc create:

    oc create secret generic my-secret --from-file=my-file.crt
  2. Edit the tls property in the KafkaConnect or KafkaConnectS2I resource. For example:

    apiVersion: kafka.strimzi.io/v1beta1
    kind: KafkaConnect
    metadata:
      name: my-connect
    spec:
      # ...
      tls:
        trustedCertificates:
          - secretName: my-cluster-cluster-ca-cert
            certificate: ca.crt
      # ...
  3. Create or update the resource.

    This can be done using oc apply:

    oc apply -f your-file

3.3.4. Connecting to Kafka brokers with Authentication

By default, Kafka Connect will try to connect to Kafka brokers without authentication. Authentication is enabled through the KafkaConnect and KafkaConnectS2I resources.

3.3.4.1. Authentication support in Kafka Connect

Authentication is configured through the authentication property in KafkaConnect.spec and KafkaConnectS2I.spec. The authentication property specifies the type of authentication mechanism to use and additional configuration details depending on the mechanism. The supported authentication types are:

3.3.4.1.1. TLS Client Authentication

To use TLS client authentication, set the type property to the value tls. TLS client authentication uses a TLS certificate to authenticate. The certificate is specified in the certificateAndKey property and is always loaded from an OpenShift secret. In the secret, the certificate must be stored in X509 format under two different keys: public and private.

Note

TLS client authentication can be used only with TLS connections. For more details about TLS configuration in Kafka Connect see Section 3.3.3, “Connecting to Kafka brokers using TLS”.

An example TLS client authentication configuration

apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaConnect
metadata:
  name: my-cluster
spec:
  # ...
  authentication:
    type: tls
    certificateAndKey:
      secretName: my-secret
      certificate: public.crt
      key: private.key
  # ...

3.3.4.1.2. SASL-based SCRAM-SHA-512 authentication

To configure Kafka Connect to use SASL-based SCRAM-SHA-512 authentication, set the type property to scram-sha-512. This authentication mechanism requires a username and password.

  • Specify the username in the username property.
  • In the passwordSecret property, specify a link to a Secret containing the password. The secretName property contains the name of the Secret and the password property contains the name of the key under which the password is stored inside the Secret.
Important

Do not specify the actual password in the password field.

An example SASL-based SCRAM-SHA-512 client authentication configuration

apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaConnect
metadata:
  name: my-cluster
spec:
  # ...
  authentication:
    type: scram-sha-512
    username: my-connect-user
    passwordSecret:
      secretName: my-connect-user
      password: my-connect-password-key
  # ...

3.3.4.1.3. SASL-based PLAIN authentication

To configure Kafka Connect to use SASL-based PLAIN authentication, set the type property to plain. This authentication mechanism requires a username and password.

Warning

The SASL PLAIN mechanism will transfer the username and password across the network in cleartext. Only use SASL PLAIN authentication if TLS encryption is enabled.

  • Specify the username in the username property.
  • In the passwordSecret property, specify a link to a Secret containing the password. The secretName property contains the name of such a Secret and the password property contains the name of the key under which the password is stored inside the Secret.
Important

Do not specify the actual password in the password field.

An example showing SASL-based PLAIN client authentication configuration

apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaConnect
metadata:
  name: my-cluster
spec:
  # ...
  authentication:
    type: plain
    username: my-connect-user
    passwordSecret:
      secretName: my-connect-user
      password: my-connect-password-key
  # ...

3.3.4.2. Configuring TLS client authentication in Kafka Connect

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator
  • If they exist, the name of the Secret with the public and private keys used for TLS Client Authentication, and the keys under which they are stored in the Secret

Procedure

  1. (Optional) If they do not already exist, prepare the keys used for authentication in a file and create the Secret.

    Note

    Secrets created by the User Operator may be used.

    This can be done using oc create:

    oc create secret generic my-secret --from-file=my-public.crt --from-file=my-private.key
  2. Edit the authentication property in the KafkaConnect or KafkaConnectS2I resource. For example:

    apiVersion: kafka.strimzi.io/v1beta1
    kind: KafkaConnect
    metadata:
      name: my-connect
    spec:
      # ...
      authentication:
        type: tls
        certificateAndKey:
          secretName: my-secret
          certificate: my-public.crt
          key: my-private.key
      # ...
  3. Create or update the resource.

    This can be done using oc apply:

    oc apply -f your-file
3.3.4.3. Configuring SCRAM-SHA-512 authentication in Kafka Connect

Prerequisites

  • An OpenShift cluster
  • A running Cluster Operator
  • Username of the user which should be used for authentication
  • If they exist, the name of the Secret with the password used for authentication and the key under which the password is stored in the Secret

Procedure

  1. (Optional) If they do not already exist, prepare a file with the password used in authentication and create the Secret.

    Note

    Secrets created by the User Operator may be used.

    This can be done using oc create:

    echo -n '<password>' > <my-password.txt>
    oc create secret generic <my-secret> --from-file=<my-password.txt>
  2. Edit the authentication property in the KafkaConnect or KafkaConnectS2I resource. For example:

    apiVersion: kafka.strimzi.io/v1beta1
    kind: KafkaConnect
    metadata:
      name: my-connect
    spec:
      # ...
      authentication:
        type: scram-sha-512
        username: <my-username>
        passwordSecret:
          secretName: <my-secret>
          password: <my-password.txt>
      # ...
  3. Create or update the resource.

    This can be done using oc apply:

    oc apply -f your-file

3.3.5. Kafka Connect configuration

AMQ Streams allows you to customize the configuration of Apache Kafka Connect nodes by editing certain options listed in the Apache Kafka documentation.

Configuration options that cannot be configured relate to:

  • Kafka cluster bootstrap address
  • Security (Encryption, Authentication, and Authorization)
  • Listener / REST interface configuration
  • Plugin path configuration

These options are automatically configured by AMQ Streams.

3.3.5.1. Kafka Connect configuration

Kafka Connect is configured using the config property in KafkaConnect.spec and KafkaConnectS2I.spec. This property contains the Kafka Connect configuration options as keys. The values can be one of the following JSON types:

  • String
  • Number
  • Boolean

You can specify and configure the options listed in the Apache Kafka documentation with the exception of those options that are managed directly by AMQ Streams. Specifically, configuration options with keys equal to or starting with one of the following strings are forbidden:

  • ssl.
  • sasl.
  • security.
  • listeners
  • plugin.path
  • rest.
  • bootstrap.servers

When a forbidden option is present in the config property, it is ignored and a warning message is printed to the Cluster Operator log file. All other options are passed to Kafka Connect.

Important

The Cluster Operator does not validate keys or values in the config object provided. When an invalid configuration is provided, the Kafka Connect cluster might not start or might become unstable. In this circumstance, fix the configuration in the KafkaConnect.spec.config or KafkaConnectS2I.spec.config object, then the Cluster Operator can roll out the new configuration to all Kafka Connect nodes.

Certain options have default values:

  • group.id with default value connect-cluster
  • offset.storage.topic with default value connect-cluster-offsets
  • config.storage.topic with default value connect-cluster-configs
  • status.storage.topic with default value connect-cluster-status
  • key.converter with default value org.apache.kafka.connect.json.JsonConverter
  • value.converter with default value org.apache.kafka.connect.json.JsonConverter

These options are automatically configured if they are not present in the KafkaConnect.spec.config or KafkaConnectS2I.spec.config properties.

Example Kafka Connect configuration

apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaConnect
metadata:
  name: my-connect
spec:
  # ...
  config:
    group.id: my-connect-cluster
    offset.storage.topic: my-connect-cluster-offsets
    config.storage.topic: my-connect-cluster-configs
    status.storage.topic: my-connect-cluster-status
    key.converter: org.apache.kafka.connect.json.JsonConverter
    value.converter: org.apache.kafka.connect.json.JsonConverter
    key.converter.schemas.enable: true
    value.converter.schemas.enable: true
    config.storage.replication.factor: 3
    offset.storage.replication.factor: 3
    status.storage.replication.factor: 3
  # ...

3.3.5.2. Kafka Connect configuration for multiple instances

If you are running multiple instances of Kafka Connect, pay attention to the default configuration of the following properties:

# ...
  group.id: connect-cluster 1
  offset.storage.topic: connect-cluster-offsets 2
  config.storage.topic: connect-cluster-configs 3
  status.storage.topic: connect-cluster-status  4
# ...
1
Kafka Connect cluster group that the instance belongs to.
2
Kafka topic that stores connector offsets.