Deploying and Managing AMQ Streams on OpenShift
Deploy and manage AMQ Streams 2.6 on OpenShift Container Platform
Abstract
Making open source more inclusive
Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.
Chapter 1. Deployment overview
AMQ Streams simplifies the process of running Apache Kafka in an OpenShift cluster.
This guide provides instructions for deploying and managing AMQ Streams. Deployment options and steps are covered using the example installation files included with AMQ Streams. While the guide highlights important configuration considerations, it does not cover all available options. For a deeper understanding of the Kafka component configuration options, refer to the AMQ Streams Custom Resource API Reference.
In addition to deployment instructions, the guide offers pre- and post-deployment guidance. It covers setting up and securing client access to your Kafka cluster. Furthermore, it explores additional deployment options such as metrics integration, distributed tracing, and cluster management tools like Cruise Control and the AMQ Streams Drain Cleaner. You’ll also find recommendations on managing AMQ Streams and fine-tuning Kafka configuration for optimal performance.
Upgrade instructions are provided for both AMQ Streams and Kafka, to help keep your deployment up to date.
AMQ Streams is designed to be compatible with all types of OpenShift clusters, irrespective of their distribution. Whether your deployment involves public or private clouds, or if you are setting up a local development environment, the instructions in this guide are applicable in all cases.
1.1. AMQ Streams custom resources
Deployment of Kafka components to an OpenShift cluster using AMQ Streams is highly configurable through the application of custom resources. These custom resources are created as instances of APIs added by Custom Resource Definitions (CRDs) to extend OpenShift resources.
CRDs act as configuration instructions to describe the custom resources in an OpenShift cluster, and are provided with AMQ Streams for each Kafka component used in a deployment, as well as users and topics. CRDs and custom resources are defined as YAML files. Example YAML files are provided with the AMQ Streams distribution.
CRDs also allow AMQ Streams resources to benefit from native OpenShift features like CLI accessibility and configuration validation.
1.1.1. AMQ Streams custom resource example
CRDs require a one-time installation in a cluster to define the schemas used to instantiate and manage AMQ Streams-specific resources.
After a new custom resource type is added to your cluster by installing a CRD, you can create instances of the resource based on its specification.
Depending on the cluster setup, installation typically requires cluster admin privileges.
Access to manage custom resources is limited to AMQ Streams administrators. For more information, see Section 4.6, “Designating AMQ Streams administrators”.
A CRD defines a new kind of resource, such as kind: Kafka, within an OpenShift cluster. The Kubernetes API server allows custom resources to be created based on the kind and understands from the CRD how to validate and store the custom resource when it is added to the OpenShift cluster.
Each AMQ Streams-specific custom resource conforms to the schema defined by the CRD for the resource’s kind. The custom resources for AMQ Streams components have common configuration properties, which are defined under spec.
To understand the relationship between a CRD and a custom resource, let’s look at a sample of the CRD for a Kafka topic.
Kafka topic CRD
apiVersion: kafka.strimzi.io/v1beta2
kind: CustomResourceDefinition
metadata: 1
  name: kafkatopics.kafka.strimzi.io
  labels:
    app: strimzi
spec: 2
  group: kafka.strimzi.io
  versions:
    v1beta2
  scope: Namespaced
  names:
    # ...
    singular: kafkatopic
    plural: kafkatopics
    shortNames:
    - kt 3
  additionalPrinterColumns: 4
    # ...
  subresources:
    status: {} 5
  validation: 6
    openAPIV3Schema:
      properties:
        spec:
          type: object
          properties:
            partitions:
              type: integer
              minimum: 1
            replicas:
              type: integer
              minimum: 1
              maximum: 32767
  # ...
1. The metadata for the topic CRD, its name and a label to identify the CRD.
2. The specification for this CRD, including the group (domain) name, the plural name and the supported schema version, which are used in the URL to access the API of the topic. The other names are used to identify instance resources in the CLI. For example, oc get kafkatopic my-topic or oc get kafkatopics.
3. The shortname can be used in CLI commands. For example, oc get kt can be used as an abbreviation instead of oc get kafkatopic.
4. The information presented when using a get command on the custom resource.
5. The current status of the CRD as described in the schema reference for the resource.
6. openAPIV3Schema validation provides validation for the creation of topic custom resources. For example, a topic requires at least one partition and one replica.
You can identify the CRD YAML files supplied with the AMQ Streams installation files, because the file names contain an index number followed by ‘Crd’.
Here is a corresponding example of a KafkaTopic custom resource.
Kafka topic custom resource
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic 1
metadata:
  name: my-topic
  labels:
    strimzi.io/cluster: my-cluster 2
spec: 3
  partitions: 1
  replicas: 1
  config:
    retention.ms: 7200000
    segment.bytes: 1073741824
status:
  conditions: 4
  - lastTransitionTime: "2019-08-20T11:37:00.706Z"
    status: "True"
    type: Ready
  observedGeneration: 1
  # ...
1. The kind and apiVersion identify the CRD of which the custom resource is an instance.
2. A label, applicable only to KafkaTopic and KafkaUser resources, that defines the name of the Kafka cluster (which is the same as the name of the Kafka resource) to which a topic or user belongs.
3. The spec shows the number of partitions and replicas for the topic as well as the configuration parameters for the topic itself. In this example, the retention period for a message to remain in the topic and the segment file size for the log are specified.
4. Status conditions for the KafkaTopic resource. The type condition changed to Ready at the lastTransitionTime.
Custom resources can be applied to a cluster through the platform CLI. When the custom resource is created, it uses the same validation as the built-in resources of the Kubernetes API.
After a KafkaTopic custom resource is created, the Topic Operator is notified and corresponding Kafka topics are created in AMQ Streams.
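For example, you could save the KafkaTopic example above in a file (the file name kafka-topic.yaml used here is illustrative), apply it, and then inspect the resource with the platform CLI:
oc apply -f kafka-topic.yaml
oc get kafkatopic my-topic -o yaml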
1.2. AMQ Streams operators
AMQ Streams operators are purpose-built with specialist operational knowledge to effectively manage Kafka on OpenShift. Each operator performs a distinct function.
- Cluster Operator
- The Cluster Operator handles the deployment and management of Apache Kafka clusters on OpenShift. It automates the setup of Kafka brokers, and other Kafka components and resources.
- Topic Operator
- The Topic Operator manages the creation, configuration, and deletion of topics within Kafka clusters.
- User Operator
- The User Operator manages Kafka users that require access to Kafka brokers.
When you deploy AMQ Streams, you first deploy the Cluster Operator. The Cluster Operator is then ready to handle the deployment of Kafka. You can also deploy the Topic Operator and User Operator using the Cluster Operator (recommended) or as standalone operators. You would use a standalone operator with a Kafka cluster that is not managed by the Cluster Operator.
The Topic Operator and User Operator are part of the Entity Operator. The Cluster Operator can deploy one or both operators based on the Entity Operator configuration.
To deploy the standalone operators, you need to set environment variables to connect to a Kafka cluster. These environment variables do not need to be set if you are deploying the operators using the Cluster Operator as they will be set by the Cluster Operator.
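As an illustration only, a standalone Topic Operator deployment typically sets environment variables along these lines to identify the namespace to watch and the Kafka cluster to connect to; the exact variable list is defined in the install/topic-operator deployment file, and the bootstrap address and namespace shown are placeholders:
env:
  - name: STRIMZI_NAMESPACE
    value: my-namespace
  - name: STRIMZI_KAFKA_BOOTSTRAP_SERVERS
    value: my-cluster-kafka-bootstrap:9092
  - name: STRIMZI_RESOURCE_LABELS
    value: strimzi.io/cluster=my-cluster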
1.2.1. Watching AMQ Streams resources in OpenShift namespaces
Operators watch and manage AMQ Streams resources in OpenShift namespaces. The Cluster Operator can watch a single namespace, multiple namespaces, or all namespaces in an OpenShift cluster. The Topic Operator and User Operator can watch a single namespace.
- The Cluster Operator watches for Kafka resources
- The Topic Operator watches for KafkaTopic resources
- The User Operator watches for KafkaUser resources
The Topic Operator and the User Operator can only watch a single Kafka cluster in a namespace, and they can only be connected to a single Kafka cluster.
If multiple Topic Operators watch the same namespace, name collisions and topic deletion can occur. This is because each Kafka cluster uses Kafka topics that have the same name (such as __consumer_offsets). Make sure that only one Topic Operator watches a given namespace.
When using multiple User Operators with a single namespace, a user with a given username can exist in more than one Kafka cluster.
If you deploy the Topic Operator and User Operator using the Cluster Operator, they watch the Kafka cluster deployed by the Cluster Operator by default. You can also specify a namespace using watchedNamespace in the operator configuration.
For a standalone deployment of each operator, you specify a namespace and connection to the Kafka cluster to watch in the configuration.
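For example, a sketch of Entity Operator configuration in the Kafka resource where the Topic Operator and User Operator watch namespaces other than the one containing the Kafka cluster (the namespace names are illustrative):
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  # ...
  entityOperator:
    topicOperator:
      watchedNamespace: my-topic-namespace
    userOperator:
      watchedNamespace: my-user-namespace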
1.2.2. Managing RBAC resources
The Cluster Operator creates and manages role-based access control (RBAC) resources for AMQ Streams components that need access to OpenShift resources.
For the Cluster Operator to function, it needs permission within the OpenShift cluster to interact with Kafka resources, such as Kafka and KafkaConnect, as well as managed resources like ConfigMap, Pod, Deployment, and Service.
Permission is specified through the following OpenShift RBAC resources:
- ServiceAccount
- Role and ClusterRole
- RoleBinding and ClusterRoleBinding
1.2.2.1. Delegating privileges to AMQ Streams components
The Cluster Operator runs under a service account called strimzi-cluster-operator. It is assigned cluster roles that give it permission to create the RBAC resources for AMQ Streams components. Role bindings associate the cluster roles with the service account.
OpenShift prevents components operating under one ServiceAccount from granting another ServiceAccount privileges that the granting ServiceAccount does not have. Because the Cluster Operator creates the RoleBinding and ClusterRoleBinding RBAC resources needed by the resources it manages, it requires a role that gives it the same privileges.
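If you want to see how these privileges are wired up in a running cluster, one way (shown as a sketch, with an illustrative namespace name) is to list the bindings that reference the Cluster Operator service account:
oc get rolebindings,clusterrolebindings -o wide -n my-cluster-operator-namespace | grep strimzi-cluster-operator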
The following sections describe the RBAC resources required by the Cluster Operator.
1.2.2.2. ClusterRole resources
The Cluster Operator uses ClusterRole resources to provide the necessary access to resources. Depending on the OpenShift cluster setup, a cluster administrator might be needed to create the cluster roles.
Cluster administrator rights are only needed for the creation of ClusterRole resources. The Cluster Operator will not run under a cluster admin account.
The RBAC resources follow the principle of least privilege and contain only those privileges needed by the Cluster Operator to operate the cluster of the Kafka component.
All cluster roles are required by the Cluster Operator in order to delegate privileges.
Name | Description
---|---
strimzi-cluster-operator-namespaced | Access rights for namespace-scoped resources used by the Cluster Operator to deploy and manage the operands.
strimzi-cluster-operator-global | Access rights for cluster-scoped resources used by the Cluster Operator to deploy and manage the operands.
strimzi-cluster-operator-leader-election | Access rights used by the Cluster Operator for leader election.
strimzi-cluster-operator-watched | Access rights used by the Cluster Operator to watch and manage the AMQ Streams custom resources.
strimzi-kafka-broker | Access rights to allow Kafka brokers to get the topology labels from OpenShift worker nodes when rack-awareness is used.
strimzi-entity-operator | Access rights used by the Topic and User Operators to manage Kafka users and topics.
strimzi-kafka-client | Access rights to allow Kafka Connect, MirrorMaker (1 and 2), and Kafka Bridge to get the topology labels from OpenShift worker nodes when rack-awareness is used.
1.2.2.3. ClusterRoleBinding resources
The Cluster Operator uses ClusterRoleBinding and RoleBinding resources to associate its ClusterRole with its ServiceAccount. Cluster role bindings are required by cluster roles containing cluster-scoped resources.
ClusterRoleBinding resources
Name | Description
---|---
strimzi-cluster-operator | Grants the Cluster Operator the rights from the strimzi-cluster-operator-global cluster role.
strimzi-cluster-operator-kafka-broker-delegation | Grants the Cluster Operator the rights from the strimzi-kafka-broker cluster role.
strimzi-cluster-operator-kafka-client-delegation | Grants the Cluster Operator the rights from the strimzi-kafka-client cluster role.
RoleBinding resources
Name | Description
---|---
strimzi-cluster-operator | Grants the Cluster Operator the rights from the strimzi-cluster-operator-namespaced cluster role.
strimzi-cluster-operator-leader-election | Grants the Cluster Operator the rights from the strimzi-cluster-operator-leader-election cluster role.
strimzi-cluster-operator-watched | Grants the Cluster Operator the rights from the strimzi-cluster-operator-watched cluster role.
strimzi-cluster-operator-entity-operator-delegation | Grants the Cluster Operator the rights from the strimzi-entity-operator cluster role.
1.2.2.4. ServiceAccount resources
The Cluster Operator runs using the strimzi-cluster-operator ServiceAccount. This service account grants it the privileges it requires to manage the operands. The Cluster Operator creates additional ClusterRoleBinding and RoleBinding resources to delegate some of these RBAC rights to the operands.
Each of the operands uses its own service account created by the Cluster Operator. This allows the Cluster Operator to follow the principle of least privilege and give the operands only the access rights that they really need.
Name | Used by
---|---
<cluster_name>-zookeeper | ZooKeeper pods
<cluster_name>-kafka | Kafka broker pods
<cluster_name>-entity-operator | Entity Operator
<cluster_name>-cruise-control | Cruise Control pods
<cluster_name>-kafka-exporter | Kafka Exporter pods
<cluster_name>-connect | Kafka Connect pods
<cluster_name>-mirror-maker | MirrorMaker pods
<cluster_name>-mirrormaker2 | MirrorMaker 2 pods
<cluster_name>-bridge | Kafka Bridge pods
1.3. Using the Kafka Bridge to connect with a Kafka cluster
You can use the AMQ Streams Kafka Bridge API to create and manage consumers and send and receive records over HTTP rather than the native Kafka protocol.
When you set up the Kafka Bridge, you configure HTTP access to the Kafka cluster. You can then use the Kafka Bridge to produce and consume messages from the cluster, as well as perform other operations through its REST interface.
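For example, assuming the Kafka Bridge service is reachable at my-bridge-service:8080 (an illustrative address), a client could send records to a topic over HTTP with a request such as the following sketch:
curl -X POST http://my-bridge-service:8080/topics/my-topic \
  -H 'Content-Type: application/vnd.kafka.json.v2+json' \
  -d '{"records":[{"key":"key-1","value":"hello"}]}'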
Additional resources
- For information on installing and using the Kafka Bridge, see Using the AMQ Streams Kafka Bridge.
1.4. Seamless FIPS support
Federal Information Processing Standards (FIPS) are standards for computer security and interoperability. When running AMQ Streams on a FIPS-enabled OpenShift cluster, the OpenJDK used in AMQ Streams container images automatically switches to FIPS mode. From version 2.3, AMQ Streams can run on FIPS-enabled OpenShift clusters without any changes or special configuration. It uses only the FIPS-compliant security libraries from the OpenJDK.
Minimum password length
When running in FIPS mode, SCRAM-SHA-512 passwords need to be at least 32 characters long. From AMQ Streams 2.3, the default password length in the AMQ Streams User Operator is set to 32 characters as well. If you have a Kafka cluster with custom configuration that uses a password length that is less than 32 characters, you need to update your configuration. If you have any users with passwords shorter than 32 characters, you need to regenerate a password with the required length. You can do that, for example, by deleting the user secret and waiting for the User Operator to create a new password with the appropriate length.
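For example, to trigger regeneration of the password for a user named my-user (an illustrative name), you could delete its secret and let the User Operator recreate it:
oc delete secret my-user -n <kafka_namespace>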
If you are using FIPS-enabled OpenShift clusters, you may experience higher memory consumption compared to regular OpenShift clusters. To avoid any issues, we suggest increasing the memory request to at least 512Mi.
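As a sketch of one way to do this, you could raise the memory request for the Kafka brokers through the resources configuration in the Kafka custom resource; the values shown are illustrative:
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    resources:
      requests:
        memory: 512Mi
    # ...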
1.5. Document Conventions
User-replaced values
User-replaced values, also known as replaceables, are shown with angle brackets (< >). Underscores ( _ ) are used for multi-word values. If the value refers to code or commands, monospace is also used.
For example, the following code shows that <my_namespace> must be replaced by the correct namespace name:
sed -i 's/namespace: .*/namespace: <my_namespace>/' install/cluster-operator/*RoleBinding*.yaml
1.6. Additional resources
Chapter 2. AMQ Streams installation methods
You can install AMQ Streams on OpenShift 4.11 to 4.14 in two ways.
Installation method | Description |
---|---|
Installation artifacts (YAML files) | Download Red Hat AMQ Streams 2.6 OpenShift Installation and Example Files from the AMQ Streams software downloads page. Deploy the YAML installation artifacts to your OpenShift cluster using oc apply. You can also use the installation artifacts to deploy the Topic Operator and User Operator as standalone components.
OperatorHub | Use the AMQ Streams operator in the OperatorHub to deploy AMQ Streams to a single namespace or all namespaces.
For the greatest flexibility, choose the installation artifacts method. The OperatorHub method provides a standard configuration and allows you to take advantage of automatic updates.
Installation of AMQ Streams using Helm is not supported.
Chapter 3. What is deployed with AMQ Streams
Apache Kafka components are provided for deployment to OpenShift with the AMQ Streams distribution. The Kafka components are generally run as clusters for availability.
A typical deployment incorporating Kafka components might include:
- Kafka cluster of broker nodes
- ZooKeeper cluster of replicated ZooKeeper instances
- Kafka Connect cluster for external data connections
- Kafka MirrorMaker cluster to mirror the Kafka cluster in a secondary cluster
- Kafka Exporter to extract additional Kafka metrics data for monitoring
- Kafka Bridge to make HTTP-based requests to the Kafka cluster
- Cruise Control to rebalance topic partitions across broker nodes
Not all of these components are mandatory, though you need Kafka and ZooKeeper as a minimum. Some components can be deployed without Kafka, such as MirrorMaker or Kafka Connect.
3.1. Order of deployment
The required order of deployment to an OpenShift cluster is as follows:
- Deploy the Cluster Operator to manage your Kafka cluster
- Deploy the Kafka cluster with the ZooKeeper cluster, and include the Topic Operator and User Operator in the deployment
Optionally deploy:
- The Topic Operator and User Operator standalone if you did not deploy them with the Kafka cluster
- Kafka Connect
- Kafka MirrorMaker
- Kafka Bridge
- Components for the monitoring of metrics
The Cluster Operator creates OpenShift resources for the components, such as Deployment, Service, and Pod resources. The names of the OpenShift resources are appended with the name specified for a component when it’s deployed. For example, a Kafka cluster named my-kafka-cluster has a service named my-kafka-cluster-kafka.
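For example, after deploying a Kafka cluster named my-cluster (as in the examples later in this guide), you can list the services the Cluster Operator created for it to see this naming convention; the label selector shown assumes the standard strimzi.io/cluster label applied by the operator:
oc get services -l strimzi.io/cluster=my-cluster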
Chapter 4. Preparing for your AMQ Streams deployment
Prepare for a deployment of AMQ Streams by completing any necessary pre-deployment tasks. Take the necessary preparatory steps according to your specific requirements, such as the following:
- Ensuring you have the necessary prerequisites before deploying AMQ Streams
- Downloading the AMQ Streams release artifacts to facilitate your deployment
- Pushing the AMQ Streams container images into your own registry (if required)
- Setting up admin roles to enable configuration of custom resources used in the deployment
To run the commands in this guide, your cluster user must have the rights to manage role-based access control (RBAC) and CRDs.
4.1. Deployment prerequisites
To deploy AMQ Streams, you will need the following:
- An OpenShift 4.11 to 4.14 cluster.
  AMQ Streams is based on Strimzi 0.38.x.
- The oc command-line tool is installed and configured to connect to the running cluster.
4.2. Operator deployment best practices
Potential issues can arise from installing more than one AMQ Streams operator in the same OpenShift cluster, especially when using different versions. Each AMQ Streams operator manages a set of resources in an OpenShift cluster. When you install multiple AMQ Streams operators, they may attempt to manage the same resources concurrently. This can lead to conflicts and unpredictable behavior within your cluster. Conflicts can still occur even if you deploy AMQ Streams operators in different namespaces within the same OpenShift cluster. Although namespaces provide some degree of resource isolation, certain resources managed by the AMQ Streams operator, such as Custom Resource Definitions (CRDs) and roles, have a cluster-wide scope.
Additionally, installing multiple operators with different versions can result in compatibility issues between the operators and the Kafka clusters they manage. Different versions of AMQ Streams operators may introduce changes, bug fixes, or improvements that are not backward-compatible.
To avoid the issues associated with installing multiple AMQ Streams operators in an OpenShift cluster, the following guidelines are recommended:
- Install the AMQ Streams operator in a separate namespace from the Kafka cluster and other Kafka components it manages, to ensure clear separation of resources and configurations.
- Use a single AMQ Streams operator to manage all your Kafka instances within an OpenShift cluster.
- Update the AMQ Streams operator and the supported Kafka version as often as possible to reflect the latest features and enhancements.
By following these best practices and ensuring consistent updates for a single AMQ Streams operator, you can enhance the stability of managing Kafka instances in an OpenShift cluster. This approach also enables you to make the most of AMQ Streams’s latest features and capabilities.
As AMQ Streams is based on Strimzi, the same issues can also arise when combining AMQ Streams operators with Strimzi operators in an OpenShift cluster.
4.3. Downloading AMQ Streams release artifacts
To use deployment files to install AMQ Streams, download and extract the files from the AMQ Streams software downloads page.
AMQ Streams release artifacts include sample YAML files to help you deploy the components of AMQ Streams to OpenShift, perform common operations, and configure your Kafka cluster.
Use oc to deploy the Cluster Operator from the install/cluster-operator folder of the downloaded ZIP file. For more information about deploying and configuring the Cluster Operator, see Section 6.2, “Deploying the Cluster Operator”.
In addition, if you want to use standalone installations of the Topic and User Operators with a Kafka cluster that is not managed by the AMQ Streams Cluster Operator, you can deploy them from the install/topic-operator and install/user-operator folders.
AMQ Streams container images are also available through the Red Hat Ecosystem Catalog. However, we recommend that you use the YAML files provided to deploy AMQ Streams.
4.4. Pushing container images to your own registry
Container images for AMQ Streams are available in the Red Hat Ecosystem Catalog. The installation YAML files provided by AMQ Streams will pull the images directly from the Red Hat Ecosystem Catalog.
If you do not have access to the Red Hat Ecosystem Catalog or want to use your own container repository, do the following:
- Pull all container images listed here
- Push them into your own registry
- Update the image names in the installation YAML files (see the example following the table below)
Each Kafka version supported for the release has a separate image.
Container image | Namespace/Repository | Description
---|---|---
Kafka | registry.redhat.io/amq-streams/kafka-36-rhel8 | AMQ Streams image for running Kafka, including Kafka Connect, Kafka MirrorMaker, ZooKeeper, and Cruise Control
Operator | registry.redhat.io/amq-streams/strimzi-rhel8-operator | AMQ Streams image for running the operators (Cluster Operator, Topic Operator, User Operator)
Kafka Bridge | registry.redhat.io/amq-streams/bridge-rhel8 | AMQ Streams image for running the AMQ Streams Kafka Bridge
AMQ Streams Drain Cleaner | registry.redhat.io/amq-streams/drain-cleaner-rhel8 | AMQ Streams image for running the AMQ Streams Drain Cleaner
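For example, if you mirror the images to your own registry, one possible way to update the image references in the installation YAML files is a bulk substitution with sed; the registry host name my-registry.example.com is illustrative:
sed -i 's#registry.redhat.io/amq-streams#my-registry.example.com/amq-streams#g' install/cluster-operator/*.yaml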
4.5. Creating a pull secret for authentication to the container image registry
The installation YAML files provided by AMQ Streams pull container images directly from the Red Hat Ecosystem Catalog. If an AMQ Streams deployment requires authentication, configure authentication credentials in a secret and add it to the installation YAML.
Authentication is not usually required, but might be requested on certain platforms.
Prerequisites
- You need your Red Hat username and password or the login details from your Red Hat registry service account.
You can use your Red Hat subscription to create a registry service account from the Red Hat Customer Portal.
Procedure
Create a pull secret containing your login details and the container registry where the AMQ Streams image is pulled from:
oc create secret docker-registry <pull_secret_name> \
  --docker-server=registry.redhat.io \
  --docker-username=<user_name> \
  --docker-password=<password> \
  --docker-email=<email>
Add your user name and password. The email address is optional.
Edit the install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml deployment file to specify the pull secret using the STRIMZI_IMAGE_PULL_SECRETS environment variable:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: strimzi-cluster-operator
spec:
  # ...
  template:
    spec:
      serviceAccountName: strimzi-cluster-operator
      containers:
        # ...
        env:
          - name: STRIMZI_IMAGE_PULL_SECRETS
            value: "<pull_secret_name>"
  # ...
The secret applies to all pods created by the Cluster Operator.
4.6. Designating AMQ Streams administrators
AMQ Streams provides custom resources for configuration of your deployment. By default, permission to view, create, edit, and delete these resources is limited to OpenShift cluster administrators. AMQ Streams provides two cluster roles that you can use to assign these rights to other users:
- strimzi-view allows users to view and list AMQ Streams resources.
- strimzi-admin allows users to also create, edit or delete AMQ Streams resources.
When you install these roles, they will automatically aggregate (add) these rights to the default OpenShift cluster roles. strimzi-view aggregates to the view role, and strimzi-admin aggregates to the edit and admin roles. Because of the aggregation, you might not need to assign these roles to users who already have similar rights.
The following procedure shows how to assign a strimzi-admin role that allows non-cluster administrators to manage AMQ Streams resources.
A system administrator can designate AMQ Streams administrators after the Cluster Operator is deployed.
Prerequisites
- The AMQ Streams Custom Resource Definitions (CRDs) and role-based access control (RBAC) resources to manage the CRDs have been deployed with the Cluster Operator.
Procedure
Create the strimzi-view and strimzi-admin cluster roles in OpenShift.
oc create -f install/strimzi-admin
If needed, assign the roles that provide access rights to users that require them.
oc create clusterrolebinding strimzi-admin --clusterrole=strimzi-admin --user=user1 --user=user2
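You can then verify that a designated user has the expected rights, for example by checking whether the user can create KafkaTopic resources (the user name is illustrative):
oc auth can-i create kafkatopics.kafka.strimzi.io --as user1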
Chapter 5. Installing AMQ Streams from the OperatorHub using the web console
Install the AMQ Streams operator from the OperatorHub in the OpenShift Container Platform web console.
The procedures in this section show how to:
- Install the AMQ Streams operator from the OperatorHub
- Deploy Kafka components using the AMQ Streams operator
5.1. Installing the AMQ Streams operator from the OperatorHub
You can install and subscribe to the AMQ Streams operator using the OperatorHub in the OpenShift Container Platform web console.
This procedure describes how to create a project and install the AMQ Streams operator to that project. A project is a representation of a namespace. For manageability, it is a good practice to use namespaces to separate functions.
Make sure you use the appropriate update channel. If you are on a supported version of OpenShift, installing AMQ Streams from the default stable channel is generally safe. However, we do not recommend enabling automatic updates on the stable channel. An automatic upgrade will skip any necessary steps prior to upgrade. Use automatic upgrades only on version-specific channels.
Prerequisites
- Access to an OpenShift Container Platform web console using an account with cluster-admin or strimzi-admin permissions.
Procedure
Navigate in the OpenShift web console to the Home > Projects page and create a project (namespace) for the installation.
We use a project named amq-streams-kafka in this example.
- Navigate to the Operators > OperatorHub page.
Scroll or type a keyword into the Filter by keyword box to find the AMQ Streams operator.
The operator is located in the Streaming & Messaging category.
- Click AMQ Streams to display the operator information.
- Read the information about the operator and click Install.
On the Install Operator page, choose from the following installation and update options:
Update Channel: Choose the update channel for the operator.
- The (default) stable channel contains all the latest updates and releases, including major, minor, and micro releases, which are assumed to be well tested and stable.
- An amq-streams-X.x channel contains the minor and micro release updates for a major release, where X is the major release version number.
- An amq-streams-X.Y.x channel contains the micro release updates for a minor release, where X is the major release version number and Y is the minor release version number.
Installation Mode: Choose the project you created to install the operator on a specific namespace.
You can install the AMQ Streams operator to all namespaces in the cluster (the default option) or a specific namespace. We recommend that you dedicate a specific namespace to the Kafka cluster and other AMQ Streams components.
- Update approval: By default, the AMQ Streams operator is automatically upgraded to the latest AMQ Streams version by the Operator Lifecycle Manager (OLM). Optionally, select Manual if you want to manually approve future upgrades. For more information on operators, see the OpenShift documentation.
Click Install to install the operator to your selected namespace.
The AMQ Streams operator deploys the Cluster Operator, CRDs, and role-based access control (RBAC) resources to the selected namespace.
After the operator is ready for use, navigate to Operators > Installed Operators to verify that the operator has installed to the selected namespace.
The status will show as Succeeded.
You can now use the AMQ Streams operator to deploy Kafka components, starting with a Kafka cluster.
If you navigate to Workloads > Deployments, you can see the deployment details for the Cluster Operator and Entity Operator. The name of the Cluster Operator includes a version number: amq-streams-cluster-operator-<version>. The name is different when deploying the Cluster Operator using the AMQ Streams installation artifacts. In this case, the name is strimzi-cluster-operator.
5.2. Deploying Kafka components using the AMQ Streams operator
When installed on OpenShift, the AMQ Streams operator makes Kafka components available for installation from the user interface.
The following Kafka components are available for installation:
- Kafka
- Kafka Connect
- Kafka MirrorMaker
- Kafka MirrorMaker 2
- Kafka Topic
- Kafka User
- Kafka Bridge
- Kafka Connector
- Kafka Rebalance
You select the component and create an instance. As a minimum, you create a Kafka instance. This procedure describes how to create a Kafka instance using the default settings. You can configure the default installation specification before you perform the installation.
The process is the same for creating instances of other Kafka components.
Prerequisites
- The AMQ Streams operator is installed on the OpenShift cluster.
Procedure
Navigate in the web console to the Operators > Installed Operators page and click AMQ Streams to display the operator details.
From Provided APIs, you can create instances of Kafka components.
Click Create instance under Kafka to create a Kafka instance.
By default, you’ll create a Kafka cluster called my-cluster with three Kafka broker nodes and three ZooKeeper nodes. The cluster uses ephemeral storage.
Click Create to start the installation of Kafka.
Wait until the status changes to Ready.
Chapter 6. Deploying AMQ Streams using installation artifacts
Having prepared your environment for a deployment of AMQ Streams, you can deploy AMQ Streams to an OpenShift cluster. Use the installation files provided with the release artifacts.
AMQ Streams is based on Strimzi 0.38.x. You can deploy AMQ Streams 2.6 on OpenShift 4.11 to 4.14.
The steps to deploy AMQ Streams using the installation files are as follows:
- Deploy the Cluster Operator
Use the Cluster Operator to deploy the following:
  - The Kafka cluster
  - The Topic Operator
  - The User Operator
Optionally, deploy the following Kafka components according to your requirements:
  - Kafka Connect
  - Kafka MirrorMaker
  - Kafka Bridge
  - Components for the monitoring of metrics
To run the commands in this guide, an OpenShift user must have the rights to manage role-based access control (RBAC) and CRDs.
6.1. Basic deployment path
You can set up a deployment where AMQ Streams manages a single Kafka cluster in the same namespace. You might use this configuration for development or testing. Or you can use AMQ Streams in a production environment to manage a number of Kafka clusters in different namespaces.
The first step for any deployment of AMQ Streams is to install the Cluster Operator using the install/cluster-operator files.
A single command applies all the installation files in the cluster-operator folder: oc apply -f ./install/cluster-operator.
The command sets up everything you need to be able to create and manage a Kafka deployment, including the following:
- Cluster Operator (Deployment, ConfigMap)
- AMQ Streams CRDs (CustomResourceDefinition)
- RBAC resources (ClusterRole, ClusterRoleBinding, RoleBinding)
- Service account (ServiceAccount)
The basic deployment path is as follows:
- Download the release artifacts
- Create an OpenShift namespace in which to deploy the Cluster Operator
- Update the install/cluster-operator files to use the namespace created for the Cluster Operator
- Install the Cluster Operator to watch one, multiple, or all namespaces
- Create a Kafka cluster
After which, you can deploy other Kafka components and set up monitoring of your deployment.
6.2. Deploying the Cluster Operator
The Cluster Operator is responsible for deploying and managing Kafka clusters within an OpenShift cluster.
When the Cluster Operator is running, it starts to watch for updates of Kafka resources.
By default, a single replica of the Cluster Operator is deployed. You can add replicas with leader election so that additional Cluster Operators are on standby in case of disruption. For more information, see Section 8.5.3, “Running multiple Cluster Operator replicas with leader election”.
6.2.1. Specifying the namespaces the Cluster Operator watches
The Cluster Operator watches for updates in the namespaces where the Kafka resources are deployed. When you deploy the Cluster Operator, you specify which namespaces to watch in the OpenShift cluster. You can specify the following namespaces:
- A single selected namespace (the same namespace containing the Cluster Operator)
- Multiple selected namespaces
- All namespaces in the cluster
Watching multiple selected namespaces has the most impact on performance due to increased processing overhead. To optimize performance for namespace monitoring, it is generally recommended to either watch a single namespace or monitor the entire cluster. Watching a single namespace allows for focused monitoring of namespace-specific resources, while monitoring all namespaces provides a comprehensive view of the cluster’s resources across all namespaces.
The Cluster Operator watches for changes to the following resources:
- Kafka for the Kafka cluster.
- KafkaConnect for the Kafka Connect cluster.
- KafkaConnector for creating and managing connectors in a Kafka Connect cluster.
- KafkaMirrorMaker for the Kafka MirrorMaker instance.
- KafkaMirrorMaker2 for the Kafka MirrorMaker 2 instance.
- KafkaBridge for the Kafka Bridge instance.
- KafkaRebalance for the Cruise Control optimization requests.
When one of these resources is created in the OpenShift cluster, the operator gets the cluster description from the resource and starts creating a new cluster for the resource by creating the necessary OpenShift resources, such as Deployments, Pods, Services and ConfigMaps.
Each time a Kafka resource is updated, the operator performs corresponding updates on the OpenShift resources that make up the cluster for the resource.
Resources are either patched or deleted, and then recreated in order to make the cluster for the resource reflect the desired state of the cluster. This operation might cause a rolling update that might lead to service disruption.
When a resource is deleted, the operator undeploys the cluster and deletes all related OpenShift resources.
While the Cluster Operator can watch one, multiple, or all namespaces in an OpenShift cluster, the Topic Operator and User Operator watch for KafkaTopic and KafkaUser resources in a single namespace. For more information, see Section 1.2.1, “Watching AMQ Streams resources in OpenShift namespaces”.
6.2.2. Deploying the Cluster Operator to watch a single namespace
This procedure shows how to deploy the Cluster Operator to watch AMQ Streams resources in a single namespace in your OpenShift cluster.
Prerequisites
- You need an account with permission to create and manage CustomResourceDefinition and RBAC (ClusterRole and RoleBinding) resources.
Procedure
Edit the AMQ Streams installation files to use the namespace the Cluster Operator is going to be installed into.
For example, in this procedure the Cluster Operator is installed into the namespace my-cluster-operator-namespace.
On Linux, use:
sed -i 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml
On MacOS, use:
sed -i '' 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml
Deploy the Cluster Operator:
oc create -f install/cluster-operator -n my-cluster-operator-namespace
Check the status of the deployment:
oc get deployments -n my-cluster-operator-namespace
Output shows the deployment name and readiness
NAME                       READY  UP-TO-DATE  AVAILABLE
strimzi-cluster-operator   1/1    1           1
READY shows the number of replicas that are ready/expected. The deployment is successful when the AVAILABLE output shows 1.
6.2.3. Deploying the Cluster Operator to watch multiple namespaces
This procedure shows how to deploy the Cluster Operator to watch AMQ Streams resources across multiple namespaces in your OpenShift cluster.
Prerequisites
- You need an account with permission to create and manage CustomResourceDefinition and RBAC (ClusterRole and RoleBinding) resources.
Procedure
Edit the AMQ Streams installation files to use the namespace the Cluster Operator is going to be installed into.
For example, in this procedure the Cluster Operator is installed into the namespace my-cluster-operator-namespace.
On Linux, use:
sed -i 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml
On MacOS, use:
sed -i '' 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml
Edit the install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml file to add a list of all the namespaces the Cluster Operator will watch to the STRIMZI_NAMESPACE environment variable.
For example, in this procedure the Cluster Operator will watch the namespaces watched-namespace-1, watched-namespace-2, watched-namespace-3.
apiVersion: apps/v1
kind: Deployment
spec:
  # ...
  template:
    spec:
      serviceAccountName: strimzi-cluster-operator
      containers:
      - name: strimzi-cluster-operator
        image: registry.redhat.io/amq-streams/strimzi-rhel8-operator:2.6.0
        imagePullPolicy: IfNotPresent
        env:
        - name: STRIMZI_NAMESPACE
          value: watched-namespace-1,watched-namespace-2,watched-namespace-3
For each namespace listed, install the RoleBindings.
In this example, we replace watched-namespace in these commands with the namespaces listed in the previous step, repeating them for watched-namespace-1, watched-namespace-2, watched-namespace-3:
oc create -f install/cluster-operator/020-RoleBinding-strimzi-cluster-operator.yaml -n <watched_namespace>
oc create -f install/cluster-operator/023-RoleBinding-strimzi-cluster-operator.yaml -n <watched_namespace>
oc create -f install/cluster-operator/031-RoleBinding-strimzi-cluster-operator-entity-operator-delegation.yaml -n <watched_namespace>
Deploy the Cluster Operator:
oc create -f install/cluster-operator -n my-cluster-operator-namespace
Check the status of the deployment:
oc get deployments -n my-cluster-operator-namespace
Output shows the deployment name and readiness
NAME                       READY  UP-TO-DATE  AVAILABLE
strimzi-cluster-operator   1/1    1           1
READY shows the number of replicas that are ready/expected. The deployment is successful when the AVAILABLE output shows 1.
6.2.4. Deploying the Cluster Operator to watch all namespaces
This procedure shows how to deploy the Cluster Operator to watch AMQ Streams resources across all namespaces in your OpenShift cluster.
When running in this mode, the Cluster Operator automatically manages clusters in any new namespaces that are created.
Prerequisites
- You need an account with permission to create and manage CustomResourceDefinition and RBAC (ClusterRole and RoleBinding) resources.
Procedure
Edit the AMQ Streams installation files to use the namespace the Cluster Operator is going to be installed into.
For example, in this procedure the Cluster Operator is installed into the namespace my-cluster-operator-namespace.
On Linux, use:
sed -i 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml
On MacOS, use:
sed -i '' 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml
Edit the install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml file to set the value of the STRIMZI_NAMESPACE environment variable to *.
apiVersion: apps/v1
kind: Deployment
spec:
  # ...
  template:
    spec:
      # ...
      serviceAccountName: strimzi-cluster-operator
      containers:
      - name: strimzi-cluster-operator
        image: registry.redhat.io/amq-streams/strimzi-rhel8-operator:2.6.0
        imagePullPolicy: IfNotPresent
        env:
        - name: STRIMZI_NAMESPACE
          value: "*"
        # ...
Create ClusterRoleBindings that grant cluster-wide access for all namespaces to the Cluster Operator.
oc create clusterrolebinding strimzi-cluster-operator-namespaced --clusterrole=strimzi-cluster-operator-namespaced --serviceaccount my-cluster-operator-namespace:strimzi-cluster-operator
oc create clusterrolebinding strimzi-cluster-operator-watched --clusterrole=strimzi-cluster-operator-watched --serviceaccount my-cluster-operator-namespace:strimzi-cluster-operator
oc create clusterrolebinding strimzi-cluster-operator-entity-operator-delegation --clusterrole=strimzi-entity-operator --serviceaccount my-cluster-operator-namespace:strimzi-cluster-operator
Deploy the Cluster Operator to your OpenShift cluster.
oc create -f install/cluster-operator -n my-cluster-operator-namespace
Check the status of the deployment:
oc get deployments -n my-cluster-operator-namespace
Output shows the deployment name and readiness
NAME                       READY  UP-TO-DATE  AVAILABLE
strimzi-cluster-operator   1/1    1           1
READY shows the number of replicas that are ready/expected. The deployment is successful when the AVAILABLE output shows 1.
6.3. Deploying Kafka
To be able to manage a Kafka cluster with the Cluster Operator, you must deploy it as a Kafka resource. AMQ Streams provides example deployment files to do this. You can use these files to deploy the Topic Operator and User Operator at the same time.
After you have deployed the Cluster Operator, use a Kafka resource to deploy the following components:
- The Kafka cluster
- The Topic Operator
- The User Operator
When installing Kafka, AMQ Streams also installs a ZooKeeper cluster and adds the necessary configuration to connect Kafka with ZooKeeper.
If you are trying the preview of the node pools feature, you can deploy a Kafka cluster with one or more node pools. Node pools provide configuration for a set of Kafka nodes. By using node pools, nodes can have different configuration within the same Kafka cluster.
Node pools are not enabled by default, so you must enable the KafkaNodePools feature gate before using them.
If you haven’t deployed a Kafka cluster as a Kafka resource, you can’t use the Cluster Operator to manage it. This applies, for example, to a Kafka cluster running outside of OpenShift. However, you can use the Topic Operator and User Operator with a Kafka cluster that is not managed by AMQ Streams, by deploying them as standalone components. You can also deploy and use other Kafka components with a Kafka cluster not managed by AMQ Streams.
6.3.1. Deploying the Kafka cluster
This procedure shows how to deploy a Kafka cluster to your OpenShift cluster using the Cluster Operator.
The deployment uses a YAML file to provide the specification to create a Kafka resource.
AMQ Streams provides the following example files you can use to create a Kafka cluster:
kafka-persistent.yaml
- Deploys a persistent cluster with three ZooKeeper and three Kafka nodes.
kafka-jbod.yaml
- Deploys a persistent cluster with three ZooKeeper and three Kafka nodes (each using multiple persistent volumes).
kafka-persistent-single.yaml
- Deploys a persistent cluster with a single ZooKeeper node and a single Kafka node.
kafka-ephemeral.yaml
- Deploys an ephemeral cluster with three ZooKeeper and three Kafka nodes.
kafka-ephemeral-single.yaml
- Deploys an ephemeral cluster with three ZooKeeper nodes and a single Kafka node.
In this procedure, we use the examples for an ephemeral and persistent Kafka cluster deployment.
- Ephemeral cluster: In general, an ephemeral (or temporary) Kafka cluster is suitable for development and testing purposes, not for production. This deployment uses emptyDir volumes for storing broker information (for ZooKeeper) and topics or partitions (for Kafka). Using an emptyDir volume means that its content is strictly related to the pod life cycle and is deleted when the pod goes down.
- Persistent cluster: A persistent Kafka cluster uses persistent volumes to store ZooKeeper and Kafka data. A PersistentVolume is acquired using a PersistentVolumeClaim to make it independent of the actual type of the PersistentVolume. The PersistentVolumeClaim can use a StorageClass to trigger automatic volume provisioning. When no StorageClass is specified, OpenShift will try to use the default StorageClass.
The following examples show some common types of persistent volumes:
- If your OpenShift cluster runs on Amazon AWS, OpenShift can provision Amazon EBS volumes
- If your OpenShift cluster runs on Microsoft Azure, OpenShift can provision Azure Disk Storage volumes
- If your OpenShift cluster runs on Google Cloud, OpenShift can provision Persistent Disk volumes
- If your OpenShift cluster runs on bare metal, OpenShift can provision local persistent volumes
The example YAML files specify the latest supported Kafka version, and configuration for its supported log message format version and inter-broker protocol version. The inter.broker.protocol.version property for the Kafka config must be the version supported by the specified Kafka version (spec.kafka.version). The property represents the version of Kafka protocol used in a Kafka cluster.
From Kafka 3.0.0, when the inter.broker.protocol.version is set to 3.0 or higher, the log.message.format.version option is ignored and doesn’t need to be set.
An update to the inter.broker.protocol.version is required when upgrading Kafka.
The example clusters are named my-cluster by default. The cluster name is defined by the name of the resource and cannot be changed after the cluster has been deployed. To change the cluster name before you deploy the cluster, edit the Kafka.metadata.name property of the Kafka resource in the relevant YAML file.
Default cluster name and specified Kafka versions
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    version: 3.6.0
    #...
    config:
      #...
      log.message.format.version: "3.6"
      inter.broker.protocol.version: "3.6"
  # ...
Prerequisites
Procedure
Create and deploy an ephemeral or persistent cluster.
To create and deploy an ephemeral cluster:
oc apply -f examples/kafka/kafka-ephemeral.yaml
To create and deploy a persistent cluster:
oc apply -f examples/kafka/kafka-persistent.yaml
Check the status of the deployment:
oc get pods -n <my_cluster_operator_namespace>
Output shows the pod names and readiness
NAME                         READY  STATUS   RESTARTS
my-cluster-entity-operator   3/3    Running  0
my-cluster-kafka-0           1/1    Running  0
my-cluster-kafka-1           1/1    Running  0
my-cluster-kafka-2           1/1    Running  0
my-cluster-zookeeper-0       1/1    Running  0
my-cluster-zookeeper-1       1/1    Running  0
my-cluster-zookeeper-2       1/1    Running  0
my-cluster is the name of the Kafka cluster.
A sequential index number starting with 0 identifies each Kafka and ZooKeeper pod created.
With the default deployment, you create an Entity Operator cluster, 3 Kafka pods, and 3 ZooKeeper pods.
READY shows the number of replicas that are ready/expected. The deployment is successful when the STATUS displays as Running.
Additional resources
6.3.2. (Preview) Deploying Kafka node pools
This procedure shows how to deploy Kafka node pools to your OpenShift cluster using the Cluster Operator. Node pools represent a distinct group of Kafka nodes within a Kafka cluster that share the same configuration. For each Kafka node in the node pool, any configuration not defined in the node pool is inherited from the cluster configuration in the Kafka resource.
The node pools feature is available as a preview. Node pools are not enabled by default, so you must enable the KafkaNodePools feature gate before using them.
The deployment uses a YAML file to provide the specification to create a KafkaNodePool resource. You can use node pools with Kafka clusters that use KRaft (Kafka Raft metadata) mode or ZooKeeper for cluster management.
KRaft mode is not ready for production in Apache Kafka or in AMQ Streams.
AMQ Streams provides the following example files that you can use to create a Kafka node pool:
kafka.yaml
- Deploys ZooKeeper with 3 nodes, and 2 different pools of Kafka brokers. Each of the pools has 3 brokers. The pools in the example use different storage configuration.
kafka-with-dual-role-kraft-nodes.yaml
- Deploys a Kafka cluster with one pool of KRaft nodes that share the broker and controller roles.
kafka-with-kraft.yaml
- Deploys a Kafka cluster with one pool of controller nodes and one pool of broker nodes.
You don’t need to start using node pools right away. If you decide to use them, you can perform the steps outlined here to deploy a new Kafka cluster with KafkaNodePool resources or migrate your existing Kafka cluster.
Prerequisites
If you want to migrate an existing Kafka cluster to use node pools, see the steps to migrate existing Kafka clusters.
Procedure
Enable the KafkaNodePools feature gate from the command line:
oc set env deployment/strimzi-cluster-operator STRIMZI_FEATURE_GATES="+KafkaNodePools"
Or by editing the Cluster Operator Deployment and updating the STRIMZI_FEATURE_GATES environment variable:
env:
  - name: STRIMZI_FEATURE_GATES
    value: +KafkaNodePools
This updates the Cluster Operator.
If using KRaft mode, enable the UseKRaft feature gate as well.
Create a node pool.
To deploy a Kafka cluster and ZooKeeper cluster with two node pools of three brokers:
oc apply -f examples/kafka/nodepools/kafka.yaml
To deploy a Kafka cluster in KRaft mode with a single node pool that uses dual-role nodes:
oc apply -f examples/kafka/nodepools/kafka-with-dual-role-kraft-nodes.yaml
To deploy a Kafka cluster in KRaft mode with separate node pools for broker and controller nodes:
oc apply -f examples/kafka/nodepools/kafka-with-kraft.yaml
Check the status of the deployment:
oc get pods -n <my_cluster_operator_namespace>
Output shows the node pool names and readiness
NAME                         READY  STATUS   RESTARTS
my-cluster-entity-operator   3/3    Running  0
my-cluster-pool-a-0          1/1    Running  0
my-cluster-pool-a-1          1/1    Running  0
my-cluster-pool-a-4          1/1    Running  0
- my-cluster is the name of the Kafka cluster.
- pool-a is the name of the node pool.
A sequential index number starting with 0 identifies each Kafka pod created. If you are using ZooKeeper, you’ll also see the ZooKeeper pods.
READY shows the number of replicas that are ready/expected. The deployment is successful when the STATUS displays as Running.
Information on the deployment is also shown in the status of the KafkaNodePool resource, including a list of IDs for nodes in the pool.
Note: Node IDs are assigned sequentially starting at 0 (zero) across all node pools within a cluster. This means that node IDs might not run sequentially within a specific node pool. If there are gaps in the sequence of node IDs across the cluster, the next node to be added is assigned an ID that fills the gap. When scaling down, the node with the highest node ID within a pool is removed.
Additional resources
6.3.3. Deploying the Topic Operator using the Cluster Operator
This procedure describes how to deploy the Topic Operator using the Cluster Operator. The Topic Operator can be deployed for use in either bidirectional mode or unidirectional mode. To learn more about bidirectional and unidirectional topic management, see Section 9.1, “Topic management modes”.
Unidirectional topic management is available as a preview. Unidirectional topic management is not enabled by default, so you must enable the UnidirectionalTopicOperator feature gate to be able to use it.
You configure the entityOperator property of the Kafka resource to include the topicOperator. By default, the Topic Operator watches for KafkaTopic resources in the namespace of the Kafka cluster deployed by the Cluster Operator. You can also specify a namespace using watchedNamespace in the Topic Operator spec. A single Topic Operator can watch a single namespace. One namespace should be watched by only one Topic Operator.
If you use AMQ Streams to deploy multiple Kafka clusters into the same namespace, enable the Topic Operator for only one Kafka cluster or use the watchedNamespace property to configure the Topic Operators to watch other namespaces.
If you want to use the Topic Operator with a Kafka cluster that is not managed by AMQ Streams, you must deploy the Topic Operator as a standalone component.
For more information about configuring the entityOperator and topicOperator properties, see Configuring the Entity Operator.
Prerequisites
Procedure
Edit the entityOperator properties of the Kafka resource to include topicOperator:
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  #...
  entityOperator:
    topicOperator: {}
    userOperator: {}
Configure the Topic Operator spec using the properties described in the EntityTopicOperatorSpec schema reference.
Use an empty object ({}) if you want all properties to use their default values.
Create or update the resource:
oc apply -f <kafka_configuration_file>
Check the status of the deployment:
oc get pods -n <my_cluster_operator_namespace>
Output shows the pod name and readiness
NAME                         READY  STATUS   RESTARTS
my-cluster-entity-operator   3/3    Running  0
# ...
my-cluster is the name of the Kafka cluster.
READY shows the number of replicas that are ready/expected. The deployment is successful when the STATUS displays as Running.
6.3.4. Deploying the User Operator using the Cluster Operator
This procedure describes how to deploy the User Operator using the Cluster Operator.
You configure the entityOperator property of the Kafka resource to include the userOperator. By default, the User Operator watches for KafkaUser resources in the namespace of the Kafka cluster deployment. You can also specify a namespace using watchedNamespace in the User Operator spec. A single User Operator can watch a single namespace. One namespace should be watched by only one User Operator.
If you want to use the User Operator with a Kafka cluster that is not managed by AMQ Streams, you must deploy the User Operator as a standalone component.
For more information about configuring the entityOperator and userOperator properties, see Configuring the Entity Operator.
Prerequisites
Procedure
Edit the entityOperator properties of the Kafka resource to include userOperator:
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  #...
  entityOperator:
    topicOperator: {}
    userOperator: {}
Configure the User Operator spec using the properties described in the EntityUserOperatorSpec schema reference.
Use an empty object ({}) if you want all properties to use their default values.
Create or update the resource:
oc apply -f <kafka_configuration_file>
Check the status of the deployment:
oc get pods -n <my_cluster_operator_namespace>
Output shows the pod name and readiness
NAME                         READY  STATUS   RESTARTS
my-cluster-entity-operator   3/3    Running  0
# ...
my-cluster is the name of the Kafka cluster.
READY shows the number of replicas that are ready/expected. The deployment is successful when the STATUS displays as Running.
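With the User Operator running, you can create Kafka users as KafkaUser resources. The following is a minimal sketch of such a resource, assuming SCRAM-SHA-512 authentication and a cluster named my-cluster; the user name is illustrative:
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: my-user
  labels:
    strimzi.io/cluster: my-cluster
spec:
  authentication:
    type: scram-sha-512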
6.3.5. List of Kafka cluster resources
The following resources are created by the Cluster Operator in the OpenShift cluster.
Shared resources
<kafka_cluster_name>-cluster-ca
- Secret with the Cluster CA private key used to encrypt the cluster communication.
<kafka_cluster_name>-cluster-ca-cert
- Secret with the Cluster CA public key. This key can be used to verify the identity of the Kafka brokers.
<kafka_cluster_name>-clients-ca
- Secret with the Clients CA private key used to sign user certificates.
<kafka_cluster_name>-clients-ca-cert
- Secret with the Clients CA public key. This key can be used to verify the identity of the Kafka users.
<kafka_cluster_name>-cluster-operator-certs
- Secret with Cluster Operator keys for communication with Kafka and ZooKeeper.
ZooKeeper nodes
<kafka_cluster_name>-zookeeper
Name given to the following ZooKeeper resources:
- StrimziPodSet for managing the ZooKeeper node pods.
- Service account used by the ZooKeeper nodes.
- PodDisruptionBudget configured for the ZooKeeper nodes.
<kafka_cluster_name>-zookeeper-<pod_id>
- Pods created by the StrimziPodSet.
<kafka_cluster_name>-zookeeper-nodes
- Headless Service needed to have DNS resolve the ZooKeeper pods' IP addresses directly.
<kafka_cluster_name>-zookeeper-client
- Service used by Kafka brokers to connect to ZooKeeper nodes as clients.
<kafka_cluster_name>-zookeeper-config
- ConfigMap that contains the ZooKeeper ancillary configuration, and is mounted as a volume by the ZooKeeper node pods.
<kafka_cluster_name>-zookeeper-nodes
- Secret with ZooKeeper node keys.
<kafka_cluster_name>-network-policy-zookeeper
- Network policy managing access to the ZooKeeper services.
data-<kafka_cluster_name>-zookeeper-<pod_id>
- Persistent Volume Claim for the volume used for storing data for a specific ZooKeeper node. This resource will be created only if persistent storage is selected for provisioning persistent volumes to store data.
Kafka brokers
<kafka_cluster_name>-kafka
Name given to the following Kafka resources:
- StrimziPodSet for managing the Kafka broker pods.
- Service account used by the Kafka pods.
- PodDisruptionBudget configured for the Kafka brokers.
<kafka_cluster_name>-kafka-<pod_id>
Name given to the following Kafka resources:
- Pods created by the StrimziPodSet.
- ConfigMaps with Kafka broker configuration.
<kafka_cluster_name>-kafka-brokers
- Service needed to have DNS resolve the Kafka broker pods' IP addresses directly.
<kafka_cluster_name>-kafka-bootstrap
- Service that can be used as the bootstrap server for Kafka clients connecting from within the OpenShift cluster.
<kafka_cluster_name>-kafka-external-bootstrap
- Bootstrap service for clients connecting from outside the OpenShift cluster. This resource is created only when an external listener is enabled. The old service name will be used for backwards compatibility when the listener name is external and the port is 9094.
<kafka_cluster_name>-kafka-<pod_id>
- Service used to route traffic from outside the OpenShift cluster to individual pods. This resource is created only when an external listener is enabled. The old service name will be used for backwards compatibility when the listener name is external and the port is 9094.
<kafka_cluster_name>-kafka-external-bootstrap
- Bootstrap route for clients connecting from outside the OpenShift cluster. This resource is created only when an external listener is enabled and set to type route. The old route name will be used for backwards compatibility when the listener name is external and the port is 9094.
<kafka_cluster_name>-kafka-<pod_id>
- Route for traffic from outside the OpenShift cluster to individual pods. This resource is created only when an external listener is enabled and set to type route. The old route name will be used for backwards compatibility when the listener name is external and the port is 9094.
<kafka_cluster_name>-kafka-<listener_name>-bootstrap
- Bootstrap service for clients connecting from outside the OpenShift cluster. This resource is created only when an external listener is enabled. The new service name will be used for all other external listeners.
<kafka_cluster_name>-kafka-<listener_name>-<pod_id>
- Service used to route traffic from outside the OpenShift cluster to individual pods. This resource is created only when an external listener is enabled. The new service name will be used for all other external listeners.
<kafka_cluster_name>-kafka-<listener_name>-bootstrap
- Bootstrap route for clients connecting from outside the OpenShift cluster. This resource is created only when an external listener is enabled and set to type route. The new route name will be used for all other external listeners.
<kafka_cluster_name>-kafka-<listener_name>-<pod_id>
- Route for traffic from outside the OpenShift cluster to individual pods. This resource is created only when an external listener is enabled and set to type route. The new route name will be used for all other external listeners.
<kafka_cluster_name>-kafka-config
- ConfigMap containing the Kafka ancillary configuration, which is mounted as a volume by the broker pods when the UseStrimziPodSets feature gate is disabled.
<kafka_cluster_name>-kafka-brokers
- Secret with Kafka broker keys.
<kafka_cluster_name>-network-policy-kafka
- Network policy managing access to the Kafka services.
strimzi-<namespace_name>-<kafka_cluster_name>-kafka-init
- Cluster role binding used by the Kafka brokers.
<kafka_cluster_name>-jmx
- Secret with JMX username and password used to secure the Kafka broker port. This resource is created only when JMX is enabled in Kafka.
data-<kafka_cluster_name>-kafka-<pod_id>
- Persistent Volume Claim for the volume used for storing data for a specific Kafka broker. This resource is created only if persistent storage is selected for provisioning persistent volumes to store data.
data-<id>-<kafka_cluster_name>-kafka-<pod_id>
- Persistent Volume Claim for the volume id used for storing data for a specific Kafka broker. This resource is created only if persistent storage is selected for JBOD volumes when provisioning persistent volumes to store data.
(Preview) Kafka node pools
If you are using Kafka node pools, the resources created apply to the nodes managed in the node pools whether they are operating as brokers, controllers, or both. The naming convention includes the name of the Kafka cluster and the node pool: <kafka_cluster_name>-<pool_name>
.
<kafka_cluster_name>-<pool_name>
- Name given to the StrimziPodSet for managing the Kafka node pool.
<kafka_cluster_name>-<pool_name>-<pod_id>
Name given to the following Kafka node pool resources:
- Pods created by the StrimziPodSet.
- ConfigMaps with Kafka node configuration.
data-<kafka_cluster_name>-<pool_name>-<pod_id>
- Persistent Volume Claim for the volume used for storing data for a specific node. This resource is created only if persistent storage is selected for provisioning persistent volumes to store data.
data-<id>-<kafka_cluster_name>-<pool_name>-<pod_id>
- Persistent Volume Claim for the volume id used for storing data for a specific node. This resource is created only if persistent storage is selected for JBOD volumes when provisioning persistent volumes to store data.
Entity Operator
These resources are only created if the Entity Operator is deployed using the Cluster Operator.
<kafka_cluster_name>-entity-operator
Name given to the following Entity Operator resources:
- Deployment with Topic and User Operators.
- Service account used by the Entity Operator.
- Network policy managing access to the Entity Operator metrics.
<kafka_cluster_name>-entity-operator-<random_string>
- Pod created by the Entity Operator deployment.
<kafka_cluster_name>-entity-topic-operator-config
- ConfigMap with ancillary configuration for Topic Operators.
<kafka_cluster_name>-entity-user-operator-config
- ConfigMap with ancillary configuration for User Operators.
<kafka_cluster_name>-entity-topic-operator-certs
- Secret with Topic Operator keys for communication with Kafka and ZooKeeper.
<kafka_cluster_name>-entity-user-operator-certs
- Secret with User Operator keys for communication with Kafka and ZooKeeper.
strimzi-<kafka_cluster_name>-entity-topic-operator
- Role binding used by the Entity Topic Operator.
strimzi-<kafka_cluster_name>-entity-user-operator
- Role binding used by the Entity User Operator.
Kafka Exporter
These resources are only created if the Kafka Exporter is deployed using the Cluster Operator.
<kafka_cluster_name>-kafka-exporter
Name given to the following Kafka Exporter resources:
- Deployment with Kafka Exporter.
- Service used to collect consumer lag metrics.
- Service account used by the Kafka Exporter.
- Network policy managing access to the Kafka Exporter metrics.
<kafka_cluster_name>-kafka-exporter-<random_string>
- Pod created by the Kafka Exporter deployment.
Cruise Control
These resources are only created if Cruise Control was deployed using the Cluster Operator.
<kafka_cluster_name>-cruise-control
Name given to the following Cruise Control resources:
- Deployment with Cruise Control.
- Service used to communicate with Cruise Control.
- Service account used by Cruise Control.
<kafka_cluster_name>-cruise-control-<random_string>
- Pod created by the Cruise Control deployment.
<kafka_cluster_name>-cruise-control-config
- ConfigMap that contains the Cruise Control ancillary configuration, and is mounted as a volume by the Cruise Control pods.
<kafka_cluster_name>-cruise-control-certs
- Secret with Cruise Control keys for communication with Kafka and ZooKeeper.
<kafka_cluster_name>-network-policy-cruise-control
- Network policy managing access to the Cruise Control service.
6.4. Deploying Kafka Connect
Kafka Connect is an integration toolkit for streaming data between Kafka brokers and other systems using connector plugins. Kafka Connect provides a framework for integrating Kafka with an external data source or target, such as a database or messaging system, for import or export of data using connectors. Connectors are plugins that provide the connection configuration needed.
In AMQ Streams, Kafka Connect is deployed in distributed mode. Kafka Connect can also work in standalone mode, but this is not supported by AMQ Streams.
Using the concept of connectors, Kafka Connect provides a framework for moving large amounts of data into and out of your Kafka cluster while maintaining scalability and reliability.
The Cluster Operator manages Kafka Connect clusters deployed using the KafkaConnect
resource and connectors created using the KafkaConnector
resource.
To use Kafka Connect, you deploy a Kafka Connect cluster, add the connector plugins you need, and then create and manage connector instances, as described in the following sections.
The term connector is used interchangeably to mean a connector instance running within a Kafka Connect cluster, or a connector class. In this guide, the term connector is used when the meaning is clear from the context.
6.4.1. Deploying Kafka Connect to your OpenShift cluster
This procedure shows how to deploy a Kafka Connect cluster to your OpenShift cluster using the Cluster Operator.
A Kafka Connect cluster deployment is implemented with a configurable number of nodes (also called workers) that distribute the workload of connectors as tasks so that the message flow is highly scalable and reliable.
The deployment uses a YAML file to provide the specification to create a KafkaConnect
resource.
AMQ Streams provides example configuration files. In this procedure, we use the following example file:
-
examples/connect/kafka-connect.yaml
If deploying Kafka Connect clusters to run in parallel, each instance must use unique names for internal Kafka Connect topics. To do this, configure each Kafka Connect instance to replace the defaults.
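For example, a minimal sketch of replacing the defaults through the KafkaConnect configuration; the group ID and topic names shown here are illustrative, and each parallel instance needs its own values:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect-cluster-2
spec:
  # ...
  config:
    # Each parallel Kafka Connect cluster needs its own group ID and internal topics
    group.id: my-connect-cluster-2
    offset.storage.topic: my-connect-cluster-2-offsets
    config.storage.topic: my-connect-cluster-2-configs
    status.storage.topic: my-connect-cluster-2-status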
Prerequisites
Procedure
Deploy Kafka Connect to your OpenShift cluster. Use the
examples/connect/kafka-connect.yaml
file to deploy Kafka Connect.oc apply -f examples/connect/kafka-connect.yaml
Check the status of the deployment:
oc get pods -n <my_cluster_operator_namespace>
Output shows the deployment name and readiness
NAME                                 READY  STATUS   RESTARTS
my-connect-cluster-connect-<pod_id>  1/1    Running  0

my-connect-cluster is the name of the Kafka Connect cluster. A pod ID identifies each pod created. With the default deployment, you create a single Kafka Connect pod. READY shows the number of replicas that are ready/expected. The deployment is successful when the STATUS displays as Running.
Additional resources
6.4.2. List of Kafka Connect cluster resources
The following resources are created by the Cluster Operator in the OpenShift cluster:
- <connect_cluster_name>-connect
Name given to the following Kafka Connect resources:
- Deployment that creates the Kafka Connect worker node pods (when the StableConnectIdentities feature gate is disabled).
- StrimziPodSet that creates the Kafka Connect worker node pods (when the StableConnectIdentities feature gate is enabled).
- Headless service that provides stable DNS names to the Connect pods (when the StableConnectIdentities feature gate is enabled).
- Pod Disruption Budget configured for the Kafka Connect worker nodes.
- <connect_cluster_name>-connect-<pod_id>
- Pods created by the Kafka Connect StrimziPodSet (when the StableConnectIdentities feature gate is enabled).
- <connect_cluster_name>-connect-api
- Service which exposes the REST interface for managing the Kafka Connect cluster.
- <connect_cluster_name>-config
- ConfigMap which contains the Kafka Connect ancillary configuration and is mounted as a volume by the Kafka Connect pods.
6.5. Adding Kafka Connect connectors
Kafka Connect uses connectors to integrate with other systems to stream data. A connector is an instance of a Kafka Connector
class, which can be one of the following types:
- Source connector
- A source connector is a runtime entity that fetches data from an external system and feeds it to Kafka as messages.
- Sink connector
- A sink connector is a runtime entity that fetches messages from Kafka topics and feeds them to an external system.
Kafka Connect uses a plugin architecture to provide the implementation artifacts for connectors. Plugins allow connections to other systems and provide additional configuration to manipulate data. Plugins include connectors and other components, such as data converters and transforms. A connector operates with a specific type of external system. Each connector defines a schema for its configuration. You supply the configuration to Kafka Connect to create a connector instance within Kafka Connect. Connector instances then define a set of tasks for moving data between systems.
Add connector plugins to Kafka Connect in one of the following ways:
- Configure Kafka Connect to build a new container image with plugins automatically
- Create a Docker image from the base Kafka Connect image (manually or using continuous integration)
After plugins have been added to the container image, you can start, stop, and manage connector instances using KafkaConnector custom resources or the Kafka Connect REST API. You can also create new connector instances using these options.
6.5.1. Building a new container image with connector plugins automatically
Configure Kafka Connect so that AMQ Streams automatically builds a new container image with additional connectors. You define the connector plugins using the .spec.build.plugins
property of the KafkaConnect
custom resource. AMQ Streams will automatically download and add the connector plugins into a new container image. The container is pushed into the container repository specified in .spec.build.output
and automatically used in the Kafka Connect deployment.
Prerequisites
- The Cluster Operator must be deployed.
- A container registry.
You need to provide your own container registry where images can be pushed to, stored, and pulled from. AMQ Streams supports private container registries as well as public registries such as Quay or Docker Hub.
Procedure
Configure the KafkaConnect custom resource by specifying the container registry in .spec.build.output, and additional connectors in .spec.build.plugins:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect-cluster
spec: 1
  #...
  build:
    output: 2
      type: docker
      image: my-registry.io/my-org/my-connect-cluster:latest
      pushSecret: my-registry-credentials
    plugins: 3
      - name: debezium-postgres-connector
        artifacts:
          - type: tgz
            url: https://repo1.maven.org/maven2/io/debezium/debezium-connector-postgres/2.1.3.Final/debezium-connector-postgres-2.1.3.Final-plugin.tar.gz
            sha512sum: c4ddc97846de561755dc0b021a62aba656098829c70eb3ade3b817ce06d852ca12ae50c0281cc791a5a131cb7fc21fb15f4b8ee76c6cae5dd07f9c11cb7c6e79
      - name: camel-telegram
        artifacts:
          - type: tgz
            url: https://repo.maven.apache.org/maven2/org/apache/camel/kafkaconnector/camel-telegram-kafka-connector/0.11.5/camel-telegram-kafka-connector-0.11.5-package.tar.gz
            sha512sum: d6d9f45e0d1dbfcc9f6d1c7ca2046168c764389c78bc4b867dab32d24f710bb74ccf2a007d7d7a8af2dfca09d9a52ccbc2831fc715c195a3634cca055185bd91
  #...
Create or update the resource:
$ oc apply -f <kafka_connect_configuration_file>
- Wait for the new container image to build, and for the Kafka Connect cluster to be deployed.
-
Use the Kafka Connect REST API or
KafkaConnector
custom resources to use the connector plugins you added.
Additional resources
6.5.2. Building a new container image with connector plugins from the Kafka Connect base image
Create a custom Docker image with connector plugins from the Kafka Connect base image. Add the connector plugins to the /opt/kafka/plugins directory of the custom image.
You can use the Kafka container image on Red Hat Ecosystem Catalog as a base image for creating your own custom image with additional connector plugins.
At startup, the AMQ Streams version of Kafka Connect loads any third-party connector plugins contained in the /opt/kafka/plugins
directory.
Prerequisites
Procedure
Create a new Dockerfile using registry.redhat.io/amq-streams/kafka-36-rhel8:2.6.0 as the base image:

FROM registry.redhat.io/amq-streams/kafka-36-rhel8:2.6.0
USER root:root
COPY ./my-plugins/ /opt/kafka/plugins/
USER 1001
Example plugins file
$ tree ./my-plugins/
./my-plugins/
├── debezium-connector-mongodb
│   ├── bson-<version>.jar
│   ├── CHANGELOG.md
│   ├── CONTRIBUTE.md
│   ├── COPYRIGHT.txt
│   ├── debezium-connector-mongodb-<version>.jar
│   ├── debezium-core-<version>.jar
│   ├── LICENSE.txt
│   ├── mongodb-driver-core-<version>.jar
│   ├── README.md
│   └── # ...
├── debezium-connector-mysql
│   ├── CHANGELOG.md
│   ├── CONTRIBUTE.md
│   ├── COPYRIGHT.txt
│   ├── debezium-connector-mysql-<version>.jar
│   ├── debezium-core-<version>.jar
│   ├── LICENSE.txt
│   ├── mysql-binlog-connector-java-<version>.jar
│   ├── mysql-connector-java-<version>.jar
│   ├── README.md
│   └── # ...
└── debezium-connector-postgres
    ├── CHANGELOG.md
    ├── CONTRIBUTE.md
    ├── COPYRIGHT.txt
    ├── debezium-connector-postgres-<version>.jar
    ├── debezium-core-<version>.jar
    ├── LICENSE.txt
    ├── postgresql-<version>.jar
    ├── protobuf-java-<version>.jar
    ├── README.md
    └── # ...
The COPY command points to the plugin files to copy to the container image.
This example adds plugins for Debezium connectors (MongoDB, MySQL, and PostgreSQL), though not all files are listed for brevity. Debezium running in Kafka Connect looks the same as any other Kafka Connect task.
- Build the container image.
- Push your custom image to your container registry.
Point to the new container image.
You can point to the image in one of the following ways:
Edit the KafkaConnect.spec.image property of the KafkaConnect custom resource.
If set, this property overrides the STRIMZI_KAFKA_CONNECT_IMAGES environment variable in the Cluster Operator.

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect-cluster
spec: 1
  #...
  image: my-new-container-image 2
  config: 3
    #...
-
Edit the
STRIMZI_KAFKA_CONNECT_IMAGES
environment variable in theinstall/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml
file to point to the new container image, and then reinstall the Cluster Operator.
6.5.3. Deploying KafkaConnector resources
Deploy KafkaConnector
resources to manage connectors. The KafkaConnector
custom resource offers an OpenShift-native approach to management of connectors by the Cluster Operator. You don’t need to send HTTP requests to manage connectors, as with the Kafka Connect REST API. You manage a running connector instance by updating its corresponding KafkaConnector
resource, and then applying the updates. The Cluster Operator updates the configurations of the running connector instances. You remove a connector by deleting its corresponding KafkaConnector
.
KafkaConnector
resources must be deployed to the same namespace as the Kafka Connect cluster they link to.
In the configuration shown in this procedure, the autoRestart
feature is enabled (enabled: true
) for automatic restarts of failed connectors and tasks. You can also annotate the KafkaConnector
resource to restart a connector or restart a connector task manually.
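For example, a sketch of the manual restart annotations, assuming a connector named my-source-connector:

# Restart the connector
oc annotate kafkaconnector my-source-connector strimzi.io/restart="true"

# Restart task 0 of the connector
oc annotate kafkaconnector my-source-connector strimzi.io/restart-task="0"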
Example connectors
You can use your own connectors or try the examples provided by AMQ Streams. Up until Apache Kafka 3.1.0, example file connector plugins were included with Apache Kafka. Starting from the 3.1.1 and 3.2.0 releases of Apache Kafka, the examples need to be added to the plugin path as any other connector.
AMQ Streams provides an example KafkaConnector
configuration file (examples/connect/source-connector.yaml
) for the example file connector plugins, which creates the following connector instances as KafkaConnector
resources:
-
A
FileStreamSourceConnector
instance that reads each line from the Kafka license file (the source) and writes the data as messages to a single Kafka topic. -
A
FileStreamSinkConnector
instance that reads messages from the Kafka topic and writes the messages to a temporary file (the sink).
We use the example file to create connectors in this procedure.
The example connectors are not intended for use in a production environment.
Prerequisites
- A Kafka Connect deployment
- The Cluster Operator is running
Procedure
Add the FileStreamSourceConnector and FileStreamSinkConnector plugins to Kafka Connect in one of the following ways:
- Configure Kafka Connect to build a new container image with plugins automatically
- Create a Docker image from the base Kafka Connect image (manually or using continuous integration)
Set the strimzi.io/use-connector-resources annotation to true in the Kafka Connect configuration.

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect-cluster
  annotations:
    strimzi.io/use-connector-resources: "true"
spec:
  # ...
With the KafkaConnector resources enabled, the Cluster Operator watches for them.
Edit the examples/connect/source-connector.yaml file:

Example KafkaConnector source connector configuration
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: my-source-connector 1
  labels:
    strimzi.io/cluster: my-connect-cluster 2
spec:
  class: org.apache.kafka.connect.file.FileStreamSourceConnector 3
  tasksMax: 2 4
  autoRestart: 5
    enabled: true
  config: 6
    file: "/opt/kafka/LICENSE" 7
    topic: my-topic 8
  # ...
- 1
- Name of the
KafkaConnector
resource, which is used as the name of the connector. Use any name that is valid for an OpenShift resource. - 2
- Name of the Kafka Connect cluster to create the connector instance in. Connectors must be deployed to the same namespace as the Kafka Connect cluster they link to.
- 3
- Full name of the connector class. This should be present in the image being used by the Kafka Connect cluster.
- 4
- Maximum number of Kafka Connect tasks that the connector can create.
- 5
- Enables automatic restarts of failed connectors and tasks. By default, the number of restarts is indefinite, but you can set a maximum on the number of automatic restarts using the
maxRestarts
property. - 6
- Connector configuration as key-value pairs.
- 7
- Location of the external data file. In this example, we’re configuring the
FileStreamSourceConnector
to read from the/opt/kafka/LICENSE
file. - 8
- Kafka topic to publish the source data to.
Create the source KafkaConnector in your OpenShift cluster:

oc apply -f examples/connect/source-connector.yaml
Create an examples/connect/sink-connector.yaml file:

touch examples/connect/sink-connector.yaml
Paste the following YAML into the sink-connector.yaml file:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: my-sink-connector
  labels:
    strimzi.io/cluster: my-connect-cluster
spec:
  class: org.apache.kafka.connect.file.FileStreamSinkConnector 1
  tasksMax: 2
  config: 2
    file: "/tmp/my-file" 3
    topics: my-topic 4
- 1
- Full name or alias of the connector class. This should be present in the image being used by the Kafka Connect cluster.
- 2
- Connector configuration as key-value pairs.
- 3
- Temporary file to publish the source data to.
- 4
- Kafka topic to read the source data from.
Create the sink KafkaConnector in your OpenShift cluster:

oc apply -f examples/connect/sink-connector.yaml
Check that the connector resources were created:
oc get kctr --selector strimzi.io/cluster=<my_connect_cluster> -o name

my-source-connector
my-sink-connector
Replace <my_connect_cluster> with the name of your Kafka Connect cluster.
In the container, execute kafka-console-consumer.sh to read the messages that were written to the topic by the source connector:

oc exec <my_kafka_cluster>-kafka-0 -i -t -- bin/kafka-console-consumer.sh --bootstrap-server <my_kafka_cluster>-kafka-bootstrap.NAMESPACE.svc:9092 --topic my-topic --from-beginning
Replace <my_kafka_cluster> with the name of your Kafka cluster.
Source and sink connector configuration options
The connector configuration is defined in the spec.config
property of the KafkaConnector
resource.
The FileStreamSourceConnector
and FileStreamSinkConnector
classes support the same configuration options as the Kafka Connect REST API. Other connectors support different configuration options.
FileStreamSourceConnector configuration options

Name | Type | Default value | Description |
---|---|---|---|
file | String | Null | Source file to read messages from. If not specified, the standard input is used. |
topic | List | Null | The Kafka topic to publish data to. |

FileStreamSinkConnector configuration options

Name | Type | Default value | Description |
---|---|---|---|
file | String | Null | Destination file to write messages to. If not specified, the standard output is used. |
topics | List | Null | One or more Kafka topics to read data from. |
topics.regex | String | Null | A regular expression matching one or more Kafka topics to read data from. |
6.5.4. Exposing the Kafka Connect API
Use the Kafka Connect REST API as an alternative to using KafkaConnector
resources to manage connectors. The Kafka Connect REST API is available as a service running on <connect_cluster_name>-connect-api:8083
, where <connect_cluster_name> is the name of your Kafka Connect cluster. The service is created when you create a Kafka Connect instance.
The operations supported by the Kafka Connect REST API are described in the Apache Kafka Connect API documentation.
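For example, from a pod inside the OpenShift cluster you can list the connectors in a Kafka Connect cluster named my-connect-cluster with a request like the following:

curl http://my-connect-cluster-connect-api:8083/connectors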
The strimzi.io/use-connector-resources
annotation enables KafkaConnectors. If you applied the annotation to your KafkaConnect
resource configuration, you need to remove it to use the Kafka Connect API. Otherwise, manual changes made directly using the Kafka Connect REST API are reverted by the Cluster Operator.
You can add the connector configuration as a JSON object.
Example curl request to add connector configuration
curl -X POST \
  http://my-connect-cluster-connect-api:8083/connectors \
  -H 'Content-Type: application/json' \
  -d '{ "name": "my-source-connector",
    "config":
    {
      "connector.class":"org.apache.kafka.connect.file.FileStreamSourceConnector",
      "file": "/opt/kafka/LICENSE",
      "topic":"my-topic",
      "tasksMax": "4",
      "type": "source"
    }
  }'
The API is only accessible within the OpenShift cluster. If you want to make the Kafka Connect API accessible to applications running outside of the OpenShift cluster, you can expose it manually by creating one of the following features:
-
LoadBalancer
orNodePort
type services -
Ingress
resources (Kubernetes only) - OpenShift routes (OpenShift only)
The connection is insecure, so enable external access with caution.
If you decide to create services, use the labels from the selector
of the <connect_cluster_name>-connect-api
service to configure the pods to which the service will route the traffic:
Selector configuration for the service
# ...
selector:
  strimzi.io/cluster: my-connect-cluster 1
  strimzi.io/kind: KafkaConnect
  strimzi.io/name: my-connect-cluster-connect 2
#...
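For example, a minimal NodePort Service sketch that reuses these selector labels; the Service name is illustrative and assumes a Kafka Connect cluster named my-connect-cluster:

apiVersion: v1
kind: Service
metadata:
  name: my-connect-cluster-connect-api-external
spec:
  type: NodePort
  selector:
    strimzi.io/cluster: my-connect-cluster
    strimzi.io/kind: KafkaConnect
    strimzi.io/name: my-connect-cluster-connect
  ports:
    - name: rest-api
      port: 8083
      targetPort: 8083
      protocol: TCP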
You must also create a NetworkPolicy
that allows HTTP requests from external clients.
Example NetworkPolicy to allow requests to the Kafka Connect API
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: my-custom-connect-network-policy
spec:
ingress:
- from:
- podSelector: 1
matchLabels:
app: my-connector-manager
ports:
- port: 8083
protocol: TCP
podSelector:
matchLabels:
strimzi.io/cluster: my-connect-cluster
strimzi.io/kind: KafkaConnect
strimzi.io/name: my-connect-cluster-connect
policyTypes:
- Ingress
- 1
- The label of the pod that is allowed to connect to the API.
To add the connector configuration outside the cluster, use the URL of the resource that exposes the API in the curl command.
6.5.5. Limiting access to the Kafka Connect API
It is crucial to restrict access to the Kafka Connect API only to trusted users to prevent unauthorized actions and potential security issues. The Kafka Connect API provides extensive capabilities for altering connector configurations, which makes it all the more important to take security precautions. Someone with access to the Kafka Connect API could potentially obtain sensitive information that an administrator may assume is secure.
The Kafka Connect REST API can be accessed by anyone who has authenticated access to the OpenShift cluster and knows the endpoint URL, which includes the hostname/IP address and port number.
For example, suppose an organization uses a Kafka Connect cluster and connectors to stream sensitive data from a customer database to a central database. The administrator uses a configuration provider plugin to store sensitive information related to connecting to the customer database and the central database, such as database connection details and authentication credentials. The configuration provider protects this sensitive information from being exposed to unauthorized users. However, someone who has access to the Kafka Connect API can still obtain access to the customer database without the consent of the administrator. They can do this by setting up a fake database and configuring a connector to connect to it. They then modify the connector configuration to point to the customer database, but instead of sending the data to the central database, they send it to the fake database. By configuring the connector to connect to the fake database, the login details and credentials for connecting to the customer database are intercepted, even though they are stored securely in the configuration provider.
If you are using the KafkaConnector
custom resources, then by default the OpenShift RBAC rules permit only OpenShift cluster administrators to make changes to connectors. You can also designate non-cluster administrators to manage AMQ Streams resources. With KafkaConnector
resources enabled in your Kafka Connect configuration, changes made directly using the Kafka Connect REST API are reverted by the Cluster Operator. If you are not using the KafkaConnector
resource, the default RBAC rules do not limit access to the Kafka Connect API. If you want to limit direct access to the Kafka Connect REST API using OpenShift RBAC, you need to enable and use the KafkaConnector
resources.
For improved security, we recommend configuring the following properties for the Kafka Connect API:
org.apache.kafka.disallowed.login.modules
(Kafka 3.4 or later) Set the org.apache.kafka.disallowed.login.modules Java system property to prevent the use of insecure login modules. For example, specifying com.sun.security.auth.module.JndiLoginModule prevents the use of the Kafka JndiLoginModule.

Example configuration for disallowing login modules
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect-cluster
  annotations:
    strimzi.io/use-connector-resources: "true"
spec:
  # ...
  jvmOptions:
    javaSystemProperties:
      - name: org.apache.kafka.disallowed.login.modules
        value: com.sun.security.auth.module.JndiLoginModule, org.apache.kafka.common.security.kerberos.KerberosLoginModule
  # ...
Only allow trusted login modules and follow the latest advice from Kafka for the version you are using. As a best practice, you should explicitly disallow insecure login modules in your Kafka Connect configuration by using the
org.apache.kafka.disallowed.login.modules
system property.connector.client.config.override.policy
Set the connector.client.config.override.policy property to None to prevent connector configurations from overriding the Kafka Connect configuration and the consumers and producers it uses.

Example configuration to specify connector override policy
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect-cluster
  annotations:
    strimzi.io/use-connector-resources: "true"
spec:
  # ...
  config:
    connector.client.config.override.policy: None
  # ...
6.5.6. Switching from using the Kafka Connect API to using KafkaConnector custom resources
You can switch from using the Kafka Connect API to using KafkaConnector
custom resources to manage your connectors. To make the switch, do the following in the order shown:
- Deploy KafkaConnector resources with the configuration to create your connector instances.
- Enable KafkaConnector resources in your Kafka Connect configuration by setting the strimzi.io/use-connector-resources annotation to true.
If you enable KafkaConnector resources before creating them, the Cluster Operator deletes all existing connectors.
To switch from using KafkaConnector
resources to using the Kafka Connect API, first remove the annotation that enables the KafkaConnector
resources from your Kafka Connect configuration. Otherwise, manual changes made directly using the Kafka Connect REST API are reverted by the Cluster Operator.
When making the switch, check the status of the KafkaConnect
resource. The value of metadata.generation
(the current version of the deployment) must match status.observedGeneration
(the latest reconciliation of the resource). When the Kafka Connect cluster is Ready
, you can delete the KafkaConnector
resources.
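For example, you might compare the two values with a command like the following; the resource name is illustrative:

oc get kafkaconnect my-connect-cluster -o jsonpath='{.metadata.generation}{"\n"}{.status.observedGeneration}{"\n"}'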
6.6. Deploying Kafka MirrorMaker
Kafka MirrorMaker replicates data between two or more Kafka clusters, within or across data centers. This process is called mirroring to avoid confusion with the concept of Kafka partition replication. MirrorMaker consumes messages from a source cluster and republishes those messages to a target cluster.
Data replication across clusters supports scenarios that require the following:
- Recovery of data in the event of a system failure
- Consolidation of data from multiple source clusters for centralized analysis
- Restriction of data access to a specific cluster
- Provision of data at a specific location to improve latency
6.6.1. Deploying Kafka MirrorMaker to your OpenShift cluster
This procedure shows how to deploy a Kafka MirrorMaker cluster to your OpenShift cluster using the Cluster Operator.
The deployment uses a YAML file to provide the specification to create a KafkaMirrorMaker
or KafkaMirrorMaker2
resource depending on the version of MirrorMaker deployed. MirrorMaker 2 is based on Kafka Connect and uses its configuration properties.
Kafka MirrorMaker 1 (referred to as just MirrorMaker in the documentation) has been deprecated in Apache Kafka 3.0.0 and will be removed in Apache Kafka 4.0.0. As a result, the KafkaMirrorMaker
custom resource which is used to deploy Kafka MirrorMaker 1 has been deprecated in AMQ Streams as well. The KafkaMirrorMaker
resource will be removed from AMQ Streams when we adopt Apache Kafka 4.0.0. As a replacement, use the KafkaMirrorMaker2
custom resource with the IdentityReplicationPolicy
.
AMQ Streams provides example configuration files. In this procedure, we use the following example files:
-
examples/mirror-maker/kafka-mirror-maker.yaml
-
examples/mirror-maker/kafka-mirror-maker-2.yaml
If deploying MirrorMaker 2 clusters to run in parallel, using the same target Kafka cluster, each instance must use unique names for internal Kafka Connect topics. To do this, configure each MirrorMaker 2 instance to replace the defaults.
Prerequisites
Procedure
Deploy Kafka MirrorMaker to your OpenShift cluster:
For MirrorMaker:
oc apply -f examples/mirror-maker/kafka-mirror-maker.yaml
For MirrorMaker 2:
oc apply -f examples/mirror-maker/kafka-mirror-maker-2.yaml
Check the status of the deployment:
oc get pods -n <my_cluster_operator_namespace>
Output shows the deployment name and readiness
NAME                                   READY  STATUS   RESTARTS
my-mirror-maker-mirror-maker-<pod_id>  1/1    Running  1
my-mm2-cluster-mirrormaker2-<pod_id>   1/1    Running  1

my-mirror-maker is the name of the Kafka MirrorMaker cluster. my-mm2-cluster is the name of the Kafka MirrorMaker 2 cluster. A pod ID identifies each pod created. With the default deployment, you install a single MirrorMaker or MirrorMaker 2 pod. READY shows the number of replicas that are ready/expected. The deployment is successful when the STATUS displays as Running.
Additional resources
6.6.2. List of Kafka MirrorMaker 2 cluster resources
The following resources are created by the Cluster Operator in the OpenShift cluster:
- <mirrormaker2_cluster_name>-mirrormaker2
Name given to the following MirrorMaker 2 resources:
- Deployment which is responsible for creating the MirrorMaker 2 pods.
- Service account used by the MirrorMaker 2 nodes.
- Pod Disruption Budget configured for the MirrorMaker 2 worker nodes.
- <mirrormaker2_cluster_name>-mirrormaker2-config
- ConfigMap which contains ancillary configuration for MirrorMaker 2, and is mounted as a volume by the MirrorMaker 2 pods.
6.6.3. List of Kafka MirrorMaker cluster resources
The following resources are created by the Cluster Operator in the OpenShift cluster:
- <mirrormaker_cluster_name>-mirror-maker
Name given to the following MirrorMaker resources:
- Deployment which is responsible for creating the MirrorMaker pods.
- Service account used by the MirrorMaker nodes.
- Pod Disruption Budget configured for the MirrorMaker worker nodes.
- <mirrormaker_cluster_name>-mirror-maker-config
- ConfigMap which contains ancillary configuration for MirrorMaker, and is mounted as a volume by the MirrorMaker pods.
6.7. Deploying Kafka Bridge
Kafka Bridge provides an API for integrating HTTP-based clients with a Kafka cluster.
6.7.1. Deploying Kafka Bridge to your OpenShift cluster
This procedure shows how to deploy a Kafka Bridge cluster to your OpenShift cluster using the Cluster Operator.
The deployment uses a YAML file to provide the specification to create a KafkaBridge
resource.
AMQ Streams provides example configuration files. In this procedure, we use the following example file:
-
examples/bridge/kafka-bridge.yaml
Prerequisites
Procedure
Deploy Kafka Bridge to your OpenShift cluster:
oc apply -f examples/bridge/kafka-bridge.yaml
Check the status of the deployment:
oc get pods -n <my_cluster_operator_namespace>
Output shows the deployment name and readiness
NAME                       READY  STATUS   RESTARTS
my-bridge-bridge-<pod_id>  1/1    Running  0

my-bridge is the name of the Kafka Bridge cluster. A pod ID identifies each pod created. With the default deployment, you install a single Kafka Bridge pod. READY shows the number of replicas that are ready/expected. The deployment is successful when the STATUS displays as Running.
Additional resources
6.7.2. Exposing the Kafka Bridge service to your local machine
Use port forwarding to expose the AMQ Streams Kafka Bridge service to your local machine on http://localhost:8080.
Port forwarding is only suitable for development and testing purposes.
Procedure
List the names of the pods in your OpenShift cluster:
oc get pods -o name

pod/kafka-consumer
# ...
pod/my-bridge-bridge-<pod_id>
Connect to the Kafka Bridge pod on port 8080:

oc port-forward pod/my-bridge-bridge-<pod_id> 8080:8080 &

Note: If port 8080 on your local machine is already in use, use an alternative HTTP port, such as 8008.
API requests are now forwarded from port 8080 on your local machine to port 8080 in the Kafka Bridge pod.
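For example, you can check that the bridge is reachable by requesting the list of topics from the Kafka Bridge HTTP API (a sketch; the /topics endpoint returns the topics visible to the bridge):

curl -X GET http://localhost:8080/topics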
6.7.3. Accessing the Kafka Bridge outside of OpenShift
After deployment, the AMQ Streams Kafka Bridge can only be accessed by applications running in the same OpenShift cluster. These applications use the <kafka_bridge_name>-bridge-service
service to access the API.
If you want to make the Kafka Bridge accessible to applications running outside of the OpenShift cluster, you can expose it manually by creating one of the following features:
-
LoadBalancer
orNodePort
type services -
Ingress
resources (Kubernetes only) - OpenShift routes (OpenShift only)
If you decide to create services, use the labels from the selector
of the <kafka_bridge_name>-bridge-service
service to configure the pods to which the service will route the traffic:
# ...
selector:
strimzi.io/cluster: kafka-bridge-name 1
strimzi.io/kind: KafkaBridge
#...
- 1
- Name of the Kafka Bridge custom resource in your OpenShift cluster.
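For example, on OpenShift you might expose the bridge service as a route with a command like the following, assuming a Kafka Bridge named my-bridge:

oc expose service my-bridge-bridge-service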
6.7.4. List of Kafka Bridge cluster resources
The following resources are created by the Cluster Operator in the OpenShift cluster:
- <bridge_cluster_name>-bridge
- Deployment responsible for creating the Kafka Bridge worker node pods.
- <bridge_cluster_name>-bridge-service
- Service which exposes the REST interface of the Kafka Bridge cluster.
- <bridge_cluster_name>-bridge-config
- ConfigMap which contains the Kafka Bridge ancillary configuration and is mounted as a volume by the Kafka Bridge pods.
- <bridge_cluster_name>-bridge
- Pod Disruption Budget configured for the Kafka Bridge worker nodes.
6.8. Alternative standalone deployment options for AMQ Streams operators
You can perform a standalone deployment of the Topic Operator and User Operator. Consider a standalone deployment of these operators if you are using a Kafka cluster that is not managed by the Cluster Operator.
You deploy the operators to OpenShift. Kafka can be running outside of OpenShift. For example, you might be using Kafka as a managed service. You adjust the deployment configuration for the standalone operator to match the address of your Kafka cluster.
6.8.1. Deploying the standalone Topic Operator
This procedure shows how to deploy the Topic Operator as a standalone component for topic management. You can use a standalone Topic Operator with a Kafka cluster that is not managed by the Cluster Operator.
A standalone deployment can operate with any Kafka cluster.
Standalone deployment files are provided with AMQ Streams. Use the 05-Deployment-strimzi-topic-operator.yaml
deployment file to deploy the Topic Operator. Add or set the environment variables needed to make a connection to a Kafka cluster.
The Topic Operator watches for KafkaTopic
resources in a single namespace. You specify the namespace to watch, and the connection to the Kafka cluster, in the Topic Operator configuration. A single Topic Operator can watch a single namespace. One namespace should be watched by only one Topic Operator. If you want to use more than one Topic Operator, configure each of them to watch different namespaces. In this way, you can use Topic Operators with multiple Kafka clusters.
Prerequisites
You are running a Kafka cluster for the Topic Operator to connect to.
As long as the standalone Topic Operator is correctly configured for connection, the Kafka cluster can be running on a bare-metal environment, a virtual machine, or as a managed cloud application service.
Procedure
Edit the env properties in the install/topic-operator/05-Deployment-strimzi-topic-operator.yaml standalone deployment file.

Example standalone Topic Operator deployment configuration
apiVersion: apps/v1
kind: Deployment
metadata:
  name: strimzi-topic-operator
  labels:
    app: strimzi
spec:
  # ...
  template:
    # ...
    spec:
      # ...
      containers:
        - name: strimzi-topic-operator
          # ...
          env:
            - name: STRIMZI_NAMESPACE 1
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: STRIMZI_KAFKA_BOOTSTRAP_SERVERS 2
              value: my-kafka-bootstrap-address:9092
            - name: STRIMZI_RESOURCE_LABELS 3
              value: "strimzi.io/cluster=my-cluster"
            - name: STRIMZI_ZOOKEEPER_CONNECT 4
              value: my-cluster-zookeeper-client:2181
            - name: STRIMZI_ZOOKEEPER_SESSION_TIMEOUT_MS 5
              value: "18000"
            - name: STRIMZI_FULL_RECONCILIATION_INTERVAL_MS 6
              value: "120000"
            - name: STRIMZI_TOPIC_METADATA_MAX_ATTEMPTS 7
              value: "6"
            - name: STRIMZI_LOG_LEVEL 8
              value: INFO
            - name: STRIMZI_TLS_ENABLED 9
              value: "false"
            - name: STRIMZI_JAVA_OPTS 10
              value: "-Xmx=512M -Xms=256M"
            - name: STRIMZI_JAVA_SYSTEM_PROPERTIES 11
              value: "-Djavax.net.debug=verbose -DpropertyName=value"
            - name: STRIMZI_PUBLIC_CA 12
              value: "false"
            - name: STRIMZI_TLS_AUTH_ENABLED 13
              value: "false"
            - name: STRIMZI_SASL_ENABLED 14
              value: "false"
            - name: STRIMZI_SASL_USERNAME 15
              value: "admin"
            - name: STRIMZI_SASL_PASSWORD 16
              value: "password"
            - name: STRIMZI_SASL_MECHANISM 17
              value: "scram-sha-512"
            - name: STRIMZI_SECURITY_PROTOCOL 18
              value: "SSL"
- 1
- The OpenShift namespace for the Topic Operator to watch for
KafkaTopic
resources. Specify the namespace of the Kafka cluster. - 2
- The host and port pair of the bootstrap broker address to discover and connect to all brokers in the Kafka cluster. Use a comma-separated list to specify two or three broker addresses in case a server is down.
- 3
- The label to identify the
KafkaTopic
resources managed by the Topic Operator. This does not have to be the name of the Kafka cluster. It can be the label assigned to theKafkaTopic
resource. If you deploy more than one Topic Operator, the labels must be unique for each. That is, the operators cannot manage the same resources. - 4
- (ZooKeeper) The host and port pair of the address to connect to the ZooKeeper cluster. This must be the same ZooKeeper cluster that your Kafka cluster is using.
- 5
- (ZooKeeper) The ZooKeeper session timeout, in milliseconds. The default is
18000
(18 seconds). - 6
- The interval between periodic reconciliations, in milliseconds. The default is
120000
(2 minutes). - 7
- The number of attempts at getting topic metadata from Kafka. The time between each attempt is defined as an exponential backoff. Consider increasing this value when topic creation takes more time due to the number of partitions or replicas. The default is
6
attempts. - 8
- The level for printing logging messages. You can set the level to
ERROR
,WARNING
,INFO
,DEBUG
, orTRACE
. - 9
- Enables TLS support for encrypted communication with the Kafka brokers.
- 10
- (Optional) The Java options used by the JVM running the Topic Operator.
- 11
- (Optional) The debugging (
-D
) options set for the Topic Operator. - 12
- (Optional) Skips the generation of trust store certificates if TLS is enabled through
STRIMZI_TLS_ENABLED
. If this environment variable is enabled, the brokers must use a public trusted certificate authority for their TLS certificates. The default isfalse
. - 13
- (Optional) Generates key store certificates for mTLS authentication. Setting this to
false
disables client authentication with mTLS to the Kafka brokers. The default istrue
. - 14
- (Optional) Enables SASL support for client authentication when connecting to Kafka brokers. The default is
false
. - 15
- (Optional) The SASL username for client authentication. Mandatory only if SASL is enabled through
STRIMZI_SASL_ENABLED
. - 16
- (Optional) The SASL password for client authentication. Mandatory only if SASL is enabled through
STRIMZI_SASL_ENABLED
. - 17
- (Optional) The SASL mechanism for client authentication. Mandatory only if SASL is enabled through
STRIMZI_SASL_ENABLED
. You can set the value toplain
,scram-sha-256
, orscram-sha-512
. - 18
- (Optional) The security protocol used for communication with Kafka brokers. The default value is "PLAINTEXT". You can set the value to
PLAINTEXT
,SSL
,SASL_PLAINTEXT
, orSASL_SSL
.
-
If you want to connect to Kafka brokers that are using certificates from a public certificate authority, set STRIMZI_PUBLIC_CA to true. Set this property to true, for example, if you are using the Amazon MSK service.
If you enabled mTLS with the STRIMZI_TLS_ENABLED environment variable, specify the keystore and truststore used to authenticate connection to the Kafka cluster.

Example mTLS configuration
# ....
env:
  - name: STRIMZI_TRUSTSTORE_LOCATION 1
    value: "/path/to/truststore.p12"
  - name: STRIMZI_TRUSTSTORE_PASSWORD 2
    value: "TRUSTSTORE-PASSWORD"
  - name: STRIMZI_KEYSTORE_LOCATION 3
    value: "/path/to/keystore.p12"
  - name: STRIMZI_KEYSTORE_PASSWORD 4
    value: "KEYSTORE-PASSWORD"
# ...
Deploy the Topic Operator.
oc create -f install/topic-operator
Check the status of the deployment:
oc get deployments
Output shows the deployment name and readiness
NAME                    READY  UP-TO-DATE  AVAILABLE
strimzi-topic-operator  1/1    1           1

READY shows the number of replicas that are ready/expected. The deployment is successful when the AVAILABLE output shows 1.
6.8.1.1. (Preview) Deploying the standalone Topic Operator for unidirectional topic management
Unidirectional topic management maintains topics solely through KafkaTopic
resources. For more information on unidirectional topic management, see Section 9.1, “Topic management modes”.
If you want to try the preview of unidirectional topic management, follow these steps to deploy the standalone Topic Operator.
Procedure
Undeploy the current standalone Topic Operator.
Retain the
KafkaTopic
resources, which are picked up by the Topic Operator when it is deployed again.
Edit the Deployment configuration for the standalone Topic Operator to remove any ZooKeeper-related environment variables:
- STRIMZI_ZOOKEEPER_CONNECT
- STRIMZI_ZOOKEEPER_SESSION_TIMEOUT_MS
- TC_ZK_CONNECTION_TIMEOUT_MS
- STRIMZI_USE_ZOOKEEPER_TOPIC_STORE
It is the presence or absence of the ZooKeeper variables that defines whether the unidirectional Topic Operator is used. Unidirectional topic management does not use ZooKeeper. If ZooKeeper environment variables are not present, the unidirectional Topic Operator is used. Otherwise, the bidirectional Topic Operator is used.
Other unused environment variables that can be removed if present:
-
STRIMZI_REASSIGN_THROTTLE
-
STRIMZI_REASSIGN_VERIFY_INTERVAL_MS
-
STRIMZI_TOPIC_METADATA_MAX_ATTEMPTS
-
STRIMZI_TOPICS_PATH
-
STRIMZI_STORE_TOPIC
-
STRIMZI_STORE_NAME
-
STRIMZI_APPLICATION_ID
-
STRIMZI_STALE_RESULT_TIMEOUT_MS
-
(Optional) Set the STRIMZI_USE_FINALIZERS environment variable to false:

Additional configuration for unidirectional topic management
# ...
env:
  - name: STRIMZI_USE_FINALIZERS
    value: "false"
Set this environment variable to false if you do not want to use finalizers to control topic deletion.

Example standalone Topic Operator deployment configuration for unidirectional topic management
apiVersion: apps/v1
kind: Deployment
metadata:
  name: strimzi-topic-operator
  labels:
    app: strimzi
spec:
  # ...
  template:
    # ...
    spec:
      # ...
      containers:
        - name: strimzi-topic-operator
          # ...
          env:
            - name: STRIMZI_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: STRIMZI_KAFKA_BOOTSTRAP_SERVERS
              value: my-kafka-bootstrap-address:9092
            - name: STRIMZI_RESOURCE_LABELS
              value: "strimzi.io/cluster=my-cluster"
            - name: STRIMZI_FULL_RECONCILIATION_INTERVAL_MS
              value: "120000"
            - name: STRIMZI_LOG_LEVEL
              value: INFO
            - name: STRIMZI_TLS_ENABLED
              value: "false"
            - name: STRIMZI_JAVA_OPTS
              value: "-Xmx=512M -Xms=256M"
            - name: STRIMZI_JAVA_SYSTEM_PROPERTIES
              value: "-Djavax.net.debug=verbose -DpropertyName=value"
            - name: STRIMZI_PUBLIC_CA
              value: "false"
            - name: STRIMZI_TLS_AUTH_ENABLED
              value: "false"
            - name: STRIMZI_SASL_ENABLED
              value: "false"
            - name: STRIMZI_SASL_USERNAME
              value: "admin"
            - name: STRIMZI_SASL_PASSWORD
              value: "password"
            - name: STRIMZI_SASL_MECHANISM
              value: "scram-sha-512"
            - name: STRIMZI_SECURITY_PROTOCOL
              value: "SSL"
            - name: STRIMZI_USE_FINALIZERS
              value: "true"
- Deploy the standalone Topic Operator in the standard way.
6.8.2. Deploying the standalone User Operator
This procedure shows how to deploy the User Operator as a standalone component for user management. You can use a standalone User Operator with a Kafka cluster that is not managed by the Cluster Operator.
A standalone deployment can operate with any Kafka cluster.
Standalone deployment files are provided with AMQ Streams. Use the 05-Deployment-strimzi-user-operator.yaml
deployment file to deploy the User Operator. Add or set the environment variables needed to make a connection to a Kafka cluster.
The User Operator watches for KafkaUser
resources in a single namespace. You specify the namespace to watch, and the connection to the Kafka cluster, in the User Operator configuration. A single User Operator can watch a single namespace. One namespace should be watched by only one User Operator. If you want to use more than one User Operator, configure each of them to watch different namespaces. In this way, you can use the User Operator with multiple Kafka clusters.
Prerequisites
You are running a Kafka cluster for the User Operator to connect to.
As long as the standalone User Operator is correctly configured for connection, the Kafka cluster can be running on a bare-metal environment, a virtual machine, or as a managed cloud application service.
Procedure
Edit the following env properties in the install/user-operator/05-Deployment-strimzi-user-operator.yaml standalone deployment file.

Example standalone User Operator deployment configuration
apiVersion: apps/v1
kind: Deployment
metadata:
  name: strimzi-user-operator
  labels:
    app: strimzi
spec:
  # ...
  template:
    # ...
    spec:
      # ...
      containers:
        - name: strimzi-user-operator
          # ...
          env:
            - name: STRIMZI_NAMESPACE 1
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: STRIMZI_KAFKA_BOOTSTRAP_SERVERS 2
              value: my-kafka-bootstrap-address:9092
            - name: STRIMZI_CA_CERT_NAME 3
              value: my-cluster-clients-ca-cert
            - name: STRIMZI_CA_KEY_NAME 4
              value: my-cluster-clients-ca
            - name: STRIMZI_LABELS 5
              value: "strimzi.io/cluster=my-cluster"
            - name: STRIMZI_FULL_RECONCILIATION_INTERVAL_MS 6
              value: "120000"
            - name: STRIMZI_WORK_QUEUE_SIZE 7
              value: "10000"
            - name: STRIMZI_CONTROLLER_THREAD_POOL_SIZE 8
              value: "10"
            - name: STRIMZI_USER_OPERATIONS_THREAD_POOL_SIZE 9
              value: "4"
            - name: STRIMZI_LOG_LEVEL 10
              value: INFO
            - name: STRIMZI_GC_LOG_ENABLED 11
              value: "true"
            - name: STRIMZI_CA_VALIDITY 12
              value: "365"
            - name: STRIMZI_CA_RENEWAL 13
              value: "30"
            - name: STRIMZI_JAVA_OPTS 14
              value: "-Xmx=512M -Xms=256M"
            - name: STRIMZI_JAVA_SYSTEM_PROPERTIES 15
              value: "-Djavax.net.debug=verbose -DpropertyName=value"
            - name: STRIMZI_SECRET_PREFIX 16
              value: "kafka-"
            - name: STRIMZI_ACLS_ADMIN_API_SUPPORTED 17
              value: "true"
            - name: STRIMZI_MAINTENANCE_TIME_WINDOWS 18
              value: '* * 8-10 * * ?;* * 14-15 * * ?'
            - name: STRIMZI_KAFKA_ADMIN_CLIENT_CONFIGURATION 19
              value: |
                default.api.timeout.ms=120000
                request.timeout.ms=60000
- 1
- The OpenShift namespace for the User Operator to watch for
KafkaUser
resources. Only one namespace can be specified. - 2
- The host and port pair of the bootstrap broker address to discover and connect to all brokers in the Kafka cluster. Use a comma-separated list to specify two or three broker addresses in case a server is down.
- 3
- The OpenShift
Secret
that contains the public key (ca.crt
) value of the CA (certificate authority) that signs new user certificates for mTLS authentication. - 4
- The OpenShift
Secret
that contains the private key (ca.key
) value of the CA that signs new user certificates for mTLS authentication. - 5
- The label to identify the
KafkaUser
resources managed by the User Operator. This does not have to be the name of the Kafka cluster. It can be the label assigned to theKafkaUser
resource. If you deploy more than one User Operator, the labels must be unique for each. That is, the operators cannot manage the same resources. - 6
- The interval between periodic reconciliations, in milliseconds. The default is
120000
(2 minutes). - 7
- The size of the controller event queue. The size of the queue should be at least as big as the maximal amount of users you expect the User Operator to operate. The default is
1024
. - 8
- The size of the worker pool for reconciling the users. Bigger pool might require more resources, but it will also handle more
KafkaUser
resources The default is50
. - 9
- The size of the worker pool for Kafka Admin API and OpenShift operations. Bigger pool might require more resources, but it will also handle more
KafkaUser
resources The default is4
. - 10
- The level for printing logging messages. You can set the level to
ERROR
,WARNING
,INFO
,DEBUG
, orTRACE
. - 11
- Enables garbage collection (GC) logging. The default is
true
. - 12
- The validity period for the CA. The default is
365
days. - 13
- The renewal period for the CA. The renewal period is measured backwards from the expiry date of the current certificate. The default is
30
days to initiate certificate renewal before the old certificates expire. - 14
- (Optional) The Java options used by the JVM running the User Operator
- 15
- (Optional) The debugging (
-D
) options set for the User Operator - 16
- (Optional) Prefix for the names of OpenShift secrets created by the User Operator.
- 17
- (Optional) Indicates whether the Kafka cluster supports management of authorization ACL rules using the Kafka Admin API. When set to
false
, the User Operator will reject all resources with simple
authorization ACL rules. This helps to avoid unnecessary exceptions in the Kafka cluster logs. The default is true
. - 18
- (Optional) Semi-colon separated list of Cron Expressions defining the maintenance time windows during which the expiring user certificates will be renewed.
- 19
- (Optional) Configuration options for configuring the Kafka Admin client used by the User Operator in the properties format.
If you are using mTLS to connect to the Kafka cluster, specify the secrets used to authenticate connection. Otherwise, go to the next step.
Example mTLS configuration
# ...
env:
  - name: STRIMZI_CLUSTER_CA_CERT_SECRET_NAME 1
    value: my-cluster-cluster-ca-cert
  - name: STRIMZI_EO_KEY_SECRET_NAME 2
    value: my-cluster-entity-operator-certs
# ...
- 1
- The OpenShift
Secret
that contains the public key (ca.crt
) value of the CA that signs Kafka broker certificates. - 2
- The OpenShift
Secret
that contains the certificate public key (entity-operator.crt
) and private key (entity-operator.key
) that is used for mTLS authentication against the Kafka cluster.
Deploy the User Operator.
oc create -f install/user-operator
Check the status of the deployment:
oc get deployments
Output shows the deployment name and readiness
NAME                    READY  UP-TO-DATE  AVAILABLE
strimzi-user-operator   1/1    1           1
READY
shows the number of replicas that are ready/expected. The deployment is successful when the AVAILABLE
output shows 1
.
Chapter 7. Enabling AMQ Streams feature gates
AMQ Streams operators use feature gates to enable or disable specific features and functions. By enabling a feature gate, you alter the behavior of the corresponding operator, thereby introducing the feature to your AMQ Streams deployment.
A feature gate might be enabled or disabled by default, depending on its level of maturity.
To modify a feature gate’s default state, use the STRIMZI_FEATURE_GATES
environment variable in the operator’s configuration. You can modify multiple feature gates using this single environment variable. Specify a comma-separated list of feature gate names and prefixes. A +
prefix enables the feature gate and a -
prefix disables it.
Example feature gate configuration that enables FeatureGate1
and disables FeatureGate2
env:
  - name: STRIMZI_FEATURE_GATES
    value: +FeatureGate1,-FeatureGate2
7.1. ControlPlaneListener feature gate
The ControlPlaneListener
feature gate has moved to GA, which means it is now permanently enabled and cannot be disabled. With ControlPlaneListener
enabled, the connections between the Kafka controller and brokers use an internal control plane listener on port 9090. Replication of data between brokers, as well as internal connections from AMQ Streams operators, Cruise Control, or the Kafka Exporter use the replication listener on port 9091.
With the ControlPlaneListener
feature gate permanently enabled, it is no longer possible to upgrade or downgrade directly between AMQ Streams 1.7 and earlier and AMQ Streams 2.3 and newer. You have to first upgrade or downgrade through one of the AMQ Streams versions in-between, disable the ControlPlaneListener
feature gate, and then downgrade or upgrade (with the feature gate enabled) to the target version.
7.2. ServiceAccountPatching feature gate
The ServiceAccountPatching
feature gate has moved to GA, which means it is now permanently enabled and cannot be disabled. With ServiceAccountPatching
enabled, the Cluster Operator always reconciles service accounts and updates them when needed. For example, when you change service account labels or annotations using the template
property of a custom resource, the operator automatically updates them on the existing service account resources.
7.3. UseStrimziPodSets feature gate
The UseStrimziPodSets
feature gate has moved to GA, which means it is now permanently enabled and cannot be disabled. Support for StatefulSets
has been removed and AMQ Streams is now always using StrimziPodSets
to manage Kafka and ZooKeeper pods.
With the UseStrimziPodSets
feature gate permanently enabled, it is no longer possible to downgrade directly from AMQ Streams 2.5 and newer to AMQ Streams 2.0 or earlier. You have to first downgrade through one of the AMQ Streams versions in-between, disable the UseStrimziPodSets
feature gate, and then downgrade to AMQ Streams 2.0 or earlier.
7.4. (Preview) UseKRaft feature gate
The UseKRaft
feature gate has a default state of disabled.
The UseKRaft
feature gate deploys the Kafka cluster in the KRaft (Kafka Raft metadata) mode without ZooKeeper. ZooKeeper and KRaft are mechanisms used to manage metadata and coordinate operations in Kafka clusters. KRaft mode eliminates the need for an external coordination service like ZooKeeper. In KRaft mode, Kafka nodes take on the roles of brokers, controllers, or both. They collectively manage the metadata, which is replicated across partitions. Controllers are responsible for coordinating operations and maintaining the cluster’s state.
This feature gate is currently intended only for development and testing.
KRaft mode is not ready for production in Apache Kafka or in AMQ Streams.
Enabling the UseKRaft
feature gate requires the KafkaNodePools
feature gate to be enabled as well. To deploy a Kafka cluster in KRaft mode, you must use the KafkaNodePool
resources. For more details and examples, see Section 6.3.2, “(Preview) Deploying Kafka node pools”. The Kafka
custom resource using KRaft mode must also have the annotation strimzi.io/kraft: enabled
.
When the UseKRaft
feature gate is enabled and such annotation is set, the Kafka cluster is deployed without ZooKeeper. The .spec.zookeeper
properties in the Kafka
custom resource are ignored, but still need to be present. The UseKRaft
feature gate provides an API that configures Kafka cluster nodes and their roles. The API is still in development and is expected to change before the KRaft mode is production-ready.
Currently, the KRaft mode in AMQ Streams has the following major limitations:
- Moving from Kafka clusters with ZooKeeper to KRaft clusters or the other way around is not supported.
- Controller-only nodes cannot undergo rolling updates or be updated individually.
- Upgrades and downgrades of Apache Kafka versions or the AMQ Streams operator are not supported. Users might need to delete the cluster, upgrade the operator and deploy a new Kafka cluster.
-
Only the Unidirectional Topic Operator is supported in KRaft mode. You can enable it using the
UnidirectionalTopicOperator
feature gate. The Bidirectional Topic Operator is not supported and when the UnidirectionalTopicOperator
feature gate is not enabled, the spec.entityOperator.topicOperator
property must be removed from the Kafka
custom resource. -
JBOD storage is not supported. The
type: jbod
storage can be used, but the JBOD array can contain only one disk.
Enabling the UseKRaft feature gate
To enable the UseKRaft
feature gate, specify +UseKRaft,+KafkaNodePools
in the STRIMZI_FEATURE_GATES
environment variable in the Cluster Operator configuration. The Kafka
custom resource using KRaft mode must also have the annotation strimzi.io/kraft: enabled
. If such annotation is set to disabled
, missing or any other value, the operator will handle the Kafka
custom resource as using ZooKeeper mode.
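For reference, the two settings described above can be combined as shown in the following sketch. It is minimal and illustrative only: the env fragment belongs to the Cluster Operator configuration, the Kafka resource name my-cluster is an example, and the strimzi.io/node-pools: enabled annotation is the one described for the KafkaNodePools feature gate in Section 7.6.
Example sketch enabling KRaft mode (preview)
# Cluster Operator configuration
env:
  - name: STRIMZI_FEATURE_GATES
    value: +UseKRaft,+KafkaNodePools
# Kafka resource annotated for KRaft mode
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
  annotations:
    strimzi.io/kraft: enabled
    strimzi.io/node-pools: enabled
spec:
  # ...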
7.5. StableConnectIdentities feature gate
The StableConnectIdentities
feature gate has a default state of enabled.
The StableConnectIdentities
feature gate uses StrimziPodSet
resources to manage Kafka Connect and Kafka MirrorMaker 2 pods instead of using OpenShift Deployment
resources. StrimziPodSets
give the pods stable names and stable addresses, which do not change during rolling upgrades. This helps to minimize the number of rebalances of connector tasks.
Disabling the StableConnectIdentities
feature gate
To disable the StableConnectIdentities
feature gate, specify -StableConnectIdentities
in the STRIMZI_FEATURE_GATES
environment variable in the Cluster Operator configuration.
The StableConnectIdentities
feature gate must be disabled when downgrading to AMQ Streams 2.3 and earlier versions.
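For example, before such a downgrade the Cluster Operator configuration might be updated as follows. This is a minimal sketch; combine the value with any other feature gates you already set.
Example configuration that disables StableConnectIdentities
env:
  - name: STRIMZI_FEATURE_GATES
    value: -StableConnectIdentities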
7.6. (Preview) KafkaNodePools feature gate
The KafkaNodePools
feature gate has a default state of disabled.
The KafkaNodePools
feature gate introduces a new KafkaNodePool
custom resource that enables the configuration of different pools of Apache Kafka nodes.
A node pool refers to a distinct group of Kafka nodes within a Kafka cluster. Each pool has its own unique configuration, which includes mandatory settings such as the number of replicas, storage configuration, and a list of assigned roles. You can assign the controller role, broker role, or both roles to all nodes in the pool in the .spec.roles
field. When used with a ZooKeeper-based Apache Kafka cluster, it must be set to the broker
role. When used with the UseKRaft
feature gate, it can be set to broker
, controller
, or both.
In addition, a node pool can have its own configuration of resource requests and limits, Java JVM options, and resource templates. Configuration options not set in the KafkaNodePool
resource are inherited from the Kafka
custom resource.
The KafkaNodePool
resources use a strimzi.io/cluster
label to indicate to which Kafka cluster they belong. The label must be set to the name of the Kafka
custom resource.
Examples of the KafkaNodePool
resources can be found in the example configuration files provided by AMQ Streams.
Enabling the KafkaNodePools feature gate
To enable the KafkaNodePools
feature gate, specify +KafkaNodePools
in the STRIMZI_FEATURE_GATES
environment variable in the Cluster Operator configuration. The Kafka
custom resource using the node pools must also have the annotation strimzi.io/node-pools: enabled
.
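A minimal sketch of the Cluster Operator setting is shown below; the matching strimzi.io/node-pools: enabled annotation on the Kafka resource is shown in Section 8.3.7, “(Preview) Migrating existing Kafka clusters to use Kafka node pools”.
Example configuration that enables KafkaNodePools
env:
  - name: STRIMZI_FEATURE_GATES
    value: +KafkaNodePools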
7.7. (Preview) UnidirectionalTopicOperator feature gate
The UnidirectionalTopicOperator
feature gate has a default state of disabled.
The UnidirectionalTopicOperator
feature gate introduces a unidirectional topic management mode for creating Kafka topics using the KafkaTopic
resource. Unidirectional mode is compatible with using KRaft for cluster management. With unidirectional mode, you create Kafka topics using the KafkaTopic
resource, which are then managed by the Topic Operator. Any configuration changes to a topic outside the KafkaTopic
resource are reverted. For more information on topic management, see Section 9.1, “Topic management modes”.
Enabling the UnidirectionalTopicOperator feature gate
To enable the UnidirectionalTopicOperator
feature gate, specify +UnidirectionalTopicOperator
in the STRIMZI_FEATURE_GATES
environment variable in the Cluster Operator configuration. For the KafkaTopic
custom resource to use this feature, the strimzi.io/managed
annotation is set to true
by default.
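For illustration, a KafkaTopic managed in unidirectional mode looks the same as any other KafkaTopic; the sketch below simply makes the default strimzi.io/managed annotation explicit. The topic name, cluster name, and partition and replica counts are illustrative.
Example KafkaTopic with the managed annotation set explicitly
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-topic
  labels:
    strimzi.io/cluster: my-cluster
  annotations:
    strimzi.io/managed: "true"
spec:
  partitions: 3
  replicas: 3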
7.8. Feature gate releases
Feature gates have three stages of maturity:
- Alpha — typically disabled by default
- Beta — typically enabled by default
- General Availability (GA) — typically always enabled
Alpha stage features might be experimental or unstable, subject to change, or not sufficiently tested for production use. Beta stage features are well tested and their functionality is not likely to change. GA stage features are stable and should not change in the future. Alpha and beta stage features are removed if they do not prove to be useful.
-
The
ControlPlaneListener
feature gate moved to GA stage in AMQ Streams 2.3. It is now permanently enabled and cannot be disabled. -
The
ServiceAccountPatching
feature gate moved to GA stage in AMQ Streams 2.3. It is now permanently enabled and cannot be disabled. -
The
UseStrimziPodSets
feature gate moved to GA stage in AMQ Streams 2.5 and the support for StatefulSets is completely removed. It is now permanently enabled and cannot be disabled. -
The
StableConnectIdentities
feature gate is in beta stage and is enabled by default. -
The
UseKRaft
feature gate is available for development only and does not currently have a planned release for moving to the beta phase. -
The
KafkaNodePools
feature gate is in alpha stage and is disabled by default. -
The
UnidirectionalTopicOperator
feature gate is in alpha stage and is disabled by default.
Feature gates might be removed when they reach GA. This means that the feature was incorporated into the AMQ Streams core features and can no longer be disabled.
Feature gate | Alpha | Beta | GA |
---|---|---|---|
ControlPlaneListener | 1.8 | 2.0 | 2.3 |
ServiceAccountPatching | 1.8 | 2.0 | 2.3 |
UseStrimziPodSets | 2.1 | 2.3 | 2.5 |
UseKRaft | 2.2 | - | - |
StableConnectIdentities | 2.4 | 2.6 | - |
KafkaNodePools | 2.5 | - | - |
UnidirectionalTopicOperator | 2.5 | - | - |
If a feature gate is enabled, you may need to disable it before upgrading or downgrading from a specific AMQ Streams version (or first upgrade / downgrade to a version of AMQ Streams where it can be disabled). The following table shows which feature gates you need to disable when upgrading or downgrading AMQ Streams versions.
Feature gate to disable | Upgrading from AMQ Streams version | Downgrading to AMQ Streams version |
---|---|---|
ControlPlaneListener | 1.7 and earlier | 1.7 and earlier |
UseStrimziPodSets | - | 2.0 and earlier |
StableConnectIdentities | - | 2.3 and earlier |
Chapter 8. Configuring a deployment
Configure and manage an AMQ Streams deployment to your precise needs using AMQ Streams custom resources. AMQ Streams provides example custom resources with each release, allowing you to configure and create instances of supported Kafka components. Fine-tune your deployment by configuring custom resources to include additional features according to your specific requirements. For specific areas of configuration, namely metrics, logging, and external configuration for Kafka Connect connectors, you can also use ConfigMap
resources. By using a ConfigMap
resource to incorporate configuration, you centralize maintenance. You can also use configuration providers to load configuration from external sources, which we recommend for supplying the credentials for Kafka Connect connector configuration.
Use custom resources to configure and create instances of the following components:
- Kafka clusters
- Kafka Connect clusters
- Kafka MirrorMaker
- Kafka Bridge
- Cruise Control
You can also use custom resource configuration to manage your instances or modify your deployment to introduce additional features. This might include configuration that supports the following:
- (Preview) Specifying node pools
- Securing client access to Kafka brokers
- Accessing Kafka brokers from outside the cluster
- Creating topics
- Creating users (clients)
- Controlling feature gates
- Changing logging frequency
- Allocating resource limits and requests
- Introducing features, such as AMQ Streams Drain Cleaner, Cruise Control, or distributed tracing.
The AMQ Streams Custom Resource API Reference describes the properties you can use in your configuration.
Labels applied to a custom resource are also applied to the OpenShift resources making up its cluster. This provides a convenient mechanism for resources to be labeled as required.
Applying changes to a custom resource configuration file
You add configuration to a custom resource using spec
properties. After adding the configuration, you can use oc
to apply the changes to a custom resource configuration file:
oc apply -f <kafka_configuration_file>
8.1. Using example configuration files
Further enhance your deployment by incorporating additional supported configuration. Example configuration files are provided with the downloadable release artifacts from the AMQ Streams software downloads page.
The example files include only the essential properties and values for custom resources by default. You can download and apply the examples using the oc
command-line tool. The examples can serve as a starting point when building your own Kafka component configuration for deployment.
If you installed AMQ Streams using the Operator, you can still download the example files and use them to apply configuration to your deployment.
The release artifacts include an examples
directory that contains the configuration examples.
Example configuration files provided with AMQ Streams
examples
├── user 1
├── topic 2
├── security 3
│   ├── tls-auth
│   ├── scram-sha-512-auth
│   └── keycloak-authorization
├── mirror-maker 4
├── metrics 5
├── kafka 6
│   └── nodepools 7
├── cruise-control 8
├── connect 9
└── bridge 10
- 1
KafkaUser
custom resource configuration, which is managed by the User Operator. - 2
KafkaTopic
custom resource configuration, which is managed by the Topic Operator. - 3
- Authentication and authorization configuration for Kafka components. Includes example configuration for TLS and SCRAM-SHA-512 authentication. The Red Hat Single Sign-On example includes
Kafka
custom resource configuration and a Red Hat Single Sign-On realm specification. You can use the example to try Red Hat Single Sign-On authorization services. There is also an example with oauth
authentication and keycloak
authorization metrics enabled. - 4
Kafka
custom resource configuration for a deployment of Mirror Maker. Includes example configuration for replication policy and synchronization frequency.- 5
- Metrics configuration, including Prometheus installation and Grafana dashboard files.
- 6
Kafka
custom resource configuration for a deployment of Kafka. Includes example configuration for an ephemeral or persistent single or multi-node deployment.- 7
- (Preview)
KafkaNodePool
configuration for Kafka nodes in a Kafka cluster. Includes example configuration for nodes in clusters that use KRaft (Kafka Raft metadata) mode or ZooKeeper. - 8
Kafka
custom resource with a deployment configuration for Cruise Control. IncludesKafkaRebalance
custom resources to generate optimization proposals from Cruise Control, with example configurations to use the default or user optimization goals.- 9
KafkaConnect
andKafkaConnector
custom resource configuration for a deployment of Kafka Connect. Includes example configurations for a single or multi-node deployment.- 10
KafkaBridge
custom resource configuration for a deployment of Kafka Bridge.
8.2. Configuring Kafka
Update the spec
properties of the Kafka
custom resource to configure your Kafka deployment.
As well as configuring Kafka, you can add configuration for ZooKeeper and the AMQ Streams Operators. Common configuration properties, such as logging and healthchecks, are configured independently for each component.
Configuration options that are particularly important include the following:
- Resource requests (CPU / Memory)
- JVM options for maximum and minimum memory allocation
- Listeners for connecting clients to Kafka brokers (and authentication of clients)
- Authentication
- Storage
- Rack awareness
- Metrics
- Cruise Control for cluster rebalancing
For a deeper understanding of the Kafka cluster configuration options, refer to the AMQ Streams Custom Resource API Reference.
Kafka versions
The inter.broker.protocol.version
property for the Kafka config
must be the version supported by the specified Kafka version (spec.kafka.version
). The property represents the version of Kafka protocol used in a Kafka cluster.
From Kafka 3.0.0, when the inter.broker.protocol.version
is set to 3.0
or higher, the log.message.format.version
option is ignored and doesn’t need to be set.
An update to the inter.broker.protocol.version
is required when upgrading your Kafka version. For more information, see Upgrading Kafka.
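For instance, the two properties are typically pinned together; a minimal sketch using the values from the example configuration later in this section:
spec:
  kafka:
    version: 3.6.0
    config:
      inter.broker.protocol.version: "3.6"
      # ...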
Managing TLS certificates
When deploying Kafka, the Cluster Operator automatically sets up and renews TLS certificates to enable encryption and authentication within your cluster. If required, you can manually renew the cluster and clients CA certificates before their renewal period starts. You can also replace the keys used by the cluster and clients CA certificates. For more information, see Renewing CA certificates manually and Replacing private keys.
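For example, one commonly used mechanism for manual renewal is to annotate the CA certificate secrets so that the Cluster Operator renews them during its next reconciliation. The secret names below assume a cluster named my-cluster; verify the exact procedure against the renewal documentation referenced above for your version.
oc annotate secret my-cluster-cluster-ca-cert strimzi.io/force-renew=true
oc annotate secret my-cluster-clients-ca-cert strimzi.io/force-renew=true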
Example Kafka
custom resource configuration
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 3 1
    version: 3.6.0 2
    logging: 3
      type: inline
      loggers:
        kafka.root.logger.level: INFO
    resources: 4
      requests:
        memory: 64Gi
        cpu: "8"
      limits:
        memory: 64Gi
        cpu: "12"
    readinessProbe: 5
      initialDelaySeconds: 15
      timeoutSeconds: 5
    livenessProbe:
      initialDelaySeconds: 15
      timeoutSeconds: 5
    jvmOptions: 6
      -Xms: 8192m
      -Xmx: 8192m
    image: my-org/my-image:latest 7
    listeners: 8
      - name: plain 9
        port: 9092 10
        type: internal 11
        tls: false 12
        configuration:
          useServiceDnsDomain: true 13
      - name: tls
        port: 9093
        type: internal
        tls: true
        authentication: 14
          type: tls
      - name: external1 15
        port: 9094
        type: route
        tls: true
        configuration:
          brokerCertChainAndKey: 16
            secretName: my-secret
            certificate: my-certificate.crt
            key: my-key.key
    authorization: 17
      type: simple
    config: 18
      auto.create.topics.enable: "false"
      offsets.topic.replication.factor: 3
      transaction.state.log.replication.factor: 3
      transaction.state.log.min.isr: 2
      default.replication.factor: 3
      min.insync.replicas: 2
      inter.broker.protocol.version: "3.6"
    storage: 19
      type: persistent-claim 20
      size: 10000Gi
    rack: 21
      topologyKey: topology.kubernetes.io/zone
    metricsConfig: 22
      type: jmxPrometheusExporter
      valueFrom:
        configMapKeyRef: 23
          name: my-config-map
          key: my-key
    # ...
  zookeeper: 24
    replicas: 3 25
    logging: 26
      type: inline
      loggers:
        zookeeper.root.logger: INFO
    resources:
      requests:
        memory: 8Gi
        cpu: "2"
      limits:
        memory: 8Gi
        cpu: "2"
    jvmOptions:
      -Xms: 4096m
      -Xmx: 4096m
    storage:
      type: persistent-claim
      size: 1000Gi
    metricsConfig:
      # ...
  entityOperator: 27
    tlsSidecar: 28
      resources:
        requests:
          cpu: 200m
          memory: 64Mi
        limits:
          cpu: 500m
          memory: 128Mi
    topicOperator:
      watchedNamespace: my-topic-namespace
      reconciliationIntervalSeconds: 60
      logging: 29
        type: inline
        loggers:
          rootLogger.level: INFO
      resources:
        requests:
          memory: 512Mi
          cpu: "1"
        limits:
          memory: 512Mi
          cpu: "1"
    userOperator:
      watchedNamespace: my-topic-namespace
      reconciliationIntervalSeconds: 60
      logging: 30
        type: inline
        loggers:
          rootLogger.level: INFO
      resources:
        requests:
          memory: 512Mi
          cpu: "1"
        limits:
          memory: 512Mi
          cpu: "1"
  kafkaExporter: 31
    # ...
  cruiseControl: 32
    # ...
- 1
- The number of replica nodes.
- 2
- Kafka version, which can be changed to a supported version by following the upgrade procedure.
- 3
- Kafka loggers and log levels added directly (
inline
) or indirectly (external
) through a ConfigMap. A custom Log4j configuration must be placed under thelog4j.properties
key in the ConfigMap. For the Kafkakafka.root.logger.level
logger, you can set the log level to INFO, ERROR, WARN, TRACE, DEBUG, FATAL or OFF. - 4
- Requests for reservation of supported resources, currently
cpu
andmemory
, and limits to specify the maximum resources that can be consumed. - 5
- Healthchecks to know when to restart a container (liveness) and when a container can accept traffic (readiness).
- 6
- JVM configuration options to optimize performance for the Virtual Machine (VM) running Kafka.
- 7
- ADVANCED OPTION: Container image configuration, which is recommended only in special situations.
- 8
- Listeners configure how clients connect to the Kafka cluster via bootstrap addresses. Listeners are configured as internal or external listeners for connection from inside or outside the OpenShift cluster.
- 9
- Name to identify the listener. Must be unique within the Kafka cluster.
- 10
- Port number used by the listener inside Kafka. The port number has to be unique within a given Kafka cluster. Allowed port numbers are 9092 and higher with the exception of ports 9404 and 9999, which are already used for Prometheus and JMX. Depending on the listener type, the port number might not be the same as the port number that connects Kafka clients.
- 11
- Listener type specified as
internal
orcluster-ip
(to expose Kafka using per-brokerClusterIP
services), or for external listeners, asroute
(OpenShift only),loadbalancer
,nodeport
oringress
(Kubernetes only). - 12
- Enables TLS encryption for each listener. Default is
false
. TLS encryption has to be enabled, by setting it totrue
, forroute
andingress
type listeners. - 13
- Defines whether the fully-qualified DNS names including the cluster service suffix (usually
.cluster.local
) are assigned. - 14
- Listener authentication mechanism specified as mTLS, SCRAM-SHA-512, or token-based OAuth 2.0.
- 15
- External listener configuration specifies how the Kafka cluster is exposed outside OpenShift, such as through a
route
,loadbalancer
ornodeport
. - 16
- Optional configuration for a Kafka listener certificate managed by an external CA (certificate authority). The
brokerCertChainAndKey
specifies aSecret
that contains a server certificate and a private key. You can configure Kafka listener certificates on any listener with enabled TLS encryption. - 17
- Authorization enables simple, OAUTH 2.0, or OPA authorization on the Kafka broker. Simple authorization uses the
AclAuthorizer
andStandardAuthorizer
Kafka plugins. - 18
- Broker configuration. Standard Apache Kafka configuration may be provided, restricted to those properties not managed directly by AMQ Streams.
- 19
- Storage size for persistent volumes may be increased and additional volumes may be added to JBOD storage.
- 20
- Persistent storage has additional configuration options, such as a storage
id
andclass
for dynamic volume provisioning. - 21
- Rack awareness configuration to spread replicas across different racks, data centers, or availability zones. The
topologyKey
must match a node label containing the rack ID. The example used in this configuration specifies a zone using the standardtopology.kubernetes.io/zone
label. - 22
- Prometheus metrics enabled. In this example, metrics are configured for the Prometheus JMX Exporter (the default metrics exporter).
- 23
- Rules for exporting metrics in Prometheus format to a Grafana dashboard through the Prometheus JMX Exporter, which are enabled by referencing a ConfigMap containing configuration for the Prometheus JMX exporter. You can enable metrics without further configuration using a reference to a ConfigMap containing an empty file under
metricsConfig.valueFrom.configMapKeyRef.key
. - 24
- ZooKeeper-specific configuration, which contains properties similar to the Kafka configuration.
- 25
- The number of ZooKeeper nodes. ZooKeeper clusters or ensembles usually run with an odd number of nodes, typically three, five, or seven. The majority of nodes must be available in order to maintain an effective quorum. If the ZooKeeper cluster loses its quorum, it will stop responding to clients and the Kafka brokers will stop working. Having a stable and highly available ZooKeeper cluster is crucial for AMQ Streams.
- 26
- ZooKeeper loggers and log levels.
- 27
- Entity Operator configuration, which specifies the configuration for the Topic Operator and User Operator.
- 28
- Entity Operator TLS sidecar configuration. Entity Operator uses the TLS sidecar for secure communication with ZooKeeper.
- 29
- Specified Topic Operator loggers and log levels. This example uses
inline
logging. - 30
- Specified User Operator loggers and log levels.
- 31
- Kafka Exporter configuration. Kafka Exporter is an optional component for extracting metrics data from Kafka brokers, in particular consumer lag data. For Kafka Exporter to be able to work properly, consumer groups need to be in use.
- 32
- Optional configuration for Cruise Control, which is used to rebalance the Kafka cluster.
8.2.1. Setting limits on brokers using the Kafka Static Quota plugin
Use the Kafka Static Quota plugin to set throughput and storage limits on brokers in your Kafka cluster. You enable the plugin and set limits by configuring the Kafka
resource. You can set a byte-rate threshold and storage quotas to put limits on the clients interacting with your brokers.
You can set byte-rate thresholds for producer and consumer bandwidth. The total limit is distributed across all clients accessing the broker. For example, you can set a byte-rate threshold of 40 MBps for producers. If two producers are running, they are each limited to a throughput of 20 MBps.
Storage quotas throttle Kafka disk storage limits between a soft limit and hard limit. The limits apply to all available disk space. Producers are slowed gradually between the soft and hard limit. The limits prevent disks filling up too quickly and exceeding their capacity. Full disks can lead to issues that are hard to rectify. The hard limit is the maximum storage limit.
For JBOD storage, the limit applies across all disks. If a broker is using two 1 TB disks and the quota is 1.1 TB, one disk might fill and the other disk will be almost empty.
Prerequisites
- The Cluster Operator that manages the Kafka cluster is running.
Procedure
Add the plugin properties to the
config
of the Kafka
resource.
The plugin properties are shown in this example configuration.
Example Kafka Static Quota plugin configuration
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    config:
      client.quota.callback.class: io.strimzi.kafka.quotas.StaticQuotaCallback 1
      client.quota.callback.static.produce: 1000000 2
      client.quota.callback.static.fetch: 1000000 3
      client.quota.callback.static.storage.soft: 400000000000 4
      client.quota.callback.static.storage.hard: 500000000000 5
      client.quota.callback.static.storage.check-interval: 5 6
- 1
- Loads the Kafka Static Quota plugin.
- 2
- Sets the producer byte-rate threshold. 1 MBps in this example.
- 3
- Sets the consumer byte-rate threshold. 1 MBps in this example.
- 4
- Sets the lower soft limit for storage. 400 GB in this example.
- 5
- Sets the higher hard limit for storage. 500 GB in this example.
- 6
- Sets the interval in seconds between checks on storage. 5 seconds in this example. You can set this to 0 to disable the check.
Update the resource.
oc apply -f <kafka_configuration_file>
8.2.2. Default ZooKeeper configuration values
When deploying ZooKeeper with AMQ Streams, some of the default configuration set by AMQ Streams differs from the standard ZooKeeper defaults. This is because AMQ Streams sets a number of ZooKeeper properties with values that are optimized for running ZooKeeper within an OpenShift environment.
The default configuration for key ZooKeeper properties in AMQ Streams is as follows:
Property | Default value | Description |
---|---|---|
tickTime | 2000 | The length of a single tick in milliseconds, which determines the length of a session timeout. |
initLimit | 5 | The maximum number of ticks that a follower is allowed to fall behind the leader in a ZooKeeper cluster. |
syncLimit | 2 | The maximum number of ticks that a follower is allowed to be out of sync with the leader in a ZooKeeper cluster. |
autopurge.purgeInterval | 1 | Enables the autopurge feature and sets the time interval in hours for purging the server-side ZooKeeper update log. |
admin.enableServer | false | Flag to disable the ZooKeeper admin server. The admin server is not used by AMQ Streams. |
Modifying these default values as zookeeper.config
in the Kafka
custom resource may impact the behavior and performance of your ZooKeeper cluster.
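For example, a minimal sketch that overrides one of these defaults in the Kafka custom resource (the tickTime value of 3000 is illustrative only):
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  zookeeper:
    config:
      tickTime: 3000
    # ...
  # ...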
8.3. (Preview) Configuring node pools
Update the spec
properties of the KafkaNodePool
custom resource to configure a node pool deployment.
The node pools feature is available as a preview. Node pools are not enabled by default, so you must enable the KafkaNodePools
feature gate before using them.
A node pool refers to a distinct group of Kafka nodes within a Kafka cluster. Each pool has its own unique configuration, which includes mandatory settings for the number of replicas, roles, and storage allocation.
Optionally, you can also specify values for the following properties:
-
resources
to specify memory and cpu requests and limits -
template
to specify custom configuration for pods and other OpenShift resources -
jvmOptions
to specify custom JVM configuration for heap size, runtime and other options
The Kafka
resource represents the configuration for all nodes in the Kafka cluster. The KafkaNodePool
resource represents the configuration for nodes only in the node pool. If a configuration property is not specified in KafkaNodePool
, it is inherited from the Kafka
resource. Configuration specified in the KafkaNodePool
resource takes precedence if set in both resources. For example, if both the node pool and Kafka configuration includes jvmOptions
, the values specified in the node pool configuration are used. When -Xmx: 1024m
is set in KafkaNodePool.spec.jvmOptions
and -Xms: 512m
is set in Kafka.spec.kafka.jvmOptions
, the node uses the value from its node pool configuration.
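To illustrate the precedence described above, a minimal sketch showing only the relevant fragments of the two resources:
# Kafka resource (cluster-wide defaults)
spec:
  kafka:
    jvmOptions:
      -Xms: 512m
    # ...

# KafkaNodePool resource (takes precedence for nodes in this pool)
spec:
  jvmOptions:
    -Xmx: 1024m
  # ...
# Because jvmOptions is set in the node pool, nodes in this pool use only -Xmx: 1024m;
# the -Xms value from the Kafka resource is not merged in.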
Properties from Kafka
and KafkaNodePool
schemas are not combined. To clarify, if KafkaNodePool.spec.template
includes only podSet.metadata.labels
, and Kafka.spec.kafka.template
includes podSet.metadata.annotations
and pod.metadata.labels
, the template values from the Kafka configuration are ignored since there is a template value in the node pool configuration.
Node pools can be used with Kafka clusters that operate in KRaft mode (using Kafka Raft metadata) or use ZooKeeper for cluster management. If you are using KRaft mode, you can specify roles for all nodes in the node pool to operate as brokers, controllers, or both. If you are using ZooKeeper, nodes must be set as brokers only.
KRaft mode is not ready for production in Apache Kafka or in AMQ Streams.
For a deeper understanding of the node pool configuration options, refer to the AMQ Streams Custom Resource API Reference.
While the KafkaNodePools
feature gate that enables node pools is in alpha phase, replica and storage configuration properties in the KafkaNodePool
resource must also be present in the Kafka
resource. The configuration in the Kafka
resource is ignored when node pools are used. Similarly, ZooKeeper configuration properties must also be present in the Kafka
resource when using KRaft mode. These properties are also ignored.
Example configuration for a node pool in a cluster using ZooKeeper
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: pool-a 1
  labels:
    strimzi.io/cluster: my-cluster 2
spec:
  replicas: 3 3
  roles:
    - broker 4
  storage: 5
    type: jbod
    volumes:
      - id: 0
        type: persistent-claim
        size: 100Gi
        deleteClaim: false
  resources: 6
    requests:
      memory: 64Gi
      cpu: "8"
    limits:
      memory: 64Gi
      cpu: "12"
- 1
- Unique name for the node pool.
- 2
- The Kafka cluster the node pool belongs to. A node pool can only belong to a single cluster.
- 3
- Number of replicas for the nodes.
- 4
- Roles for the nodes in the node pool, which can only be
broker
when using Kafka with ZooKeeper. - 5
- Storage specification for the nodes.
- 6
- Requests for reservation of supported resources, currently
cpu
andmemory
, and limits to specify the maximum resources that can be consumed.
Example configuration for a node pool in a cluster using KRaft mode
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
name: kraft-dual-role
labels:
strimzi.io/cluster: my-cluster
spec:
replicas: 3
roles: 1
- controller
- broker
storage:
type: jbod
volumes:
- id: 0
type: persistent-claim
size: 20Gi
deleteClaim: false
resources:
requests:
memory: 64Gi
cpu: "8"
limits:
memory: 64Gi
cpu: "12"
- 1
- Roles for the nodes in the node pool. In this example, the nodes have dual roles as controllers and brokers.
The configuration for the Kafka
resource must be suitable for KRaft mode. Currently, KRaft mode has a number of limitations.
8.3.1. (Preview) Assigning IDs to node pools for scaling operations
This procedure describes how to use annotations for advanced node ID handling by the Cluster Operator when performing scaling operations on node pools. You specify the node IDs to use, rather than the Cluster Operator using the next ID in sequence. Management of node IDs in this way gives greater control.
To add a range of IDs, you assign the following annotations to the KafkaNodePool
resource:
-
strimzi.io/next-node-ids
to add a range of IDs that are used for new brokers -
strimzi.io/remove-node-ids
to add a range of IDs for removing existing brokers
You can specify an array of individual node IDs, ID ranges, or a combination of both. For example, you can specify the following range of IDs: [0, 1, 2, 10-20, 30]
for scaling up the Kafka node pool. This format allows you to specify a combination of individual node IDs (0
, 1
, 2
, 30
) as well as a range of IDs (10-20
).
In a typical scenario, you might specify a range of IDs for scaling up and a single node ID to remove a specific node when scaling down.
In this procedure, we add the scaling annotations to node pools as follows:
-
pool-a
is assigned a range of IDs for scaling up -
pool-b
is assigned a range of IDs for scaling down
During the scaling operation, IDs are used as follows:
- Scale up picks up the lowest available ID in the range for the new node.
- Scale down removes the node with the highest available ID in the range.
If there are gaps in the sequence of node IDs assigned in the node pool, the next node to be added is assigned an ID that fills the gap.
The annotations don’t need to be updated after every scaling operation. Any unused IDs are still valid for the next scaling event.
The Cluster Operator allows you to specify a range of IDs in either ascending or descending order, so you can define them in the order the nodes are scaled. For example, when scaling up, you can specify a range such as [1000-1999]
, and the new nodes are assigned the next lowest IDs: 1000
, 1001
, 1002
, 1003
, and so on. Conversely, when scaling down, you can specify a range like [1999-1000]
, ensuring that nodes with the next highest IDs are removed: 1003
, 1002
, 1001
, 1000
, and so on.
If you don’t specify an ID range using the annotations, the Cluster Operator follows its default behavior for handling IDs during scaling operations. Node IDs start at 0 (zero) and run sequentially across the Kafka cluster. The next lowest ID is assigned to a new node. Gaps to node IDs are filled across the cluster. This means that they might not run sequentially within a node pool. The default behavior for scaling up is to add the next lowest available node ID across the cluster; and for scaling down, it is to remove the node in the node pool with the highest available node ID. The default approach is also applied if the assigned range of IDs is misformatted, the scaling up range runs out of IDs, or the scaling down range does not apply to any in-use nodes.
Prerequisites
- The Cluster Operator must be deployed.
-
(Optional) Use the
reserved.broker.max.id
configuration property to extend the allowable range for node IDs within your node pools.
By default, Apache Kafka restricts node IDs to numbers ranging from 0 to 999. To use node ID values greater than 999, add the reserved.broker.max.id
configuration property to the Kafka
custom resource and specify the required maximum node ID value.
In this example, the maximum node ID is set at 10000. Node IDs can then be assigned up to that value.
Example configuration for the maximum node ID number
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    config:
      reserved.broker.max.id: 10000
    # ...
Procedure
Annotate the node pool with the IDs to use when scaling up or scaling down, as shown in the following examples.
IDs for scaling up are assigned to node pool
pool-a
:
Assigning IDs for scaling up
oc annotate kafkanodepool pool-a strimzi.io/next-node-ids="[0,1,2,10-20,30]"
The lowest available ID from this range is used when adding a node to
pool-a
.
IDs for scaling down are assigned to node pool
pool-b
:
Assigning IDs for scaling down
oc annotate kafkanodepool pool-b strimzi.io/remove-node-ids="[60-50,9,8,7]"
The highest available ID from this range is removed when scaling down
pool-b
.
Note: If you want to remove a specific node, you can assign a single node ID to the scaling down annotation:
oc annotate kafkanodepool pool-b strimzi.io/remove-node-ids="[3]"
.
You can now scale the node pool.
On reconciliation, a warning is given if the annotations are misformatted.
After you have performed the scaling operation, you can remove the annotation if it’s no longer needed.
Removing the annotation for scaling up
oc annotate kafkanodepool pool-a strimzi.io/next-node-ids-
Removing the annotation for scaling down
oc annotate kafkanodepool pool-b strimzi.io/remove-node-ids-
8.3.2. (Preview) Adding nodes to a node pool
This procedure describes how to scale up a node pool to add new nodes.
In this procedure, we start with three nodes for node pool pool-a
:
Kafka nodes in the node pool
NAME                 READY  STATUS   RESTARTS
my-cluster-pool-a-0  1/1    Running  0
my-cluster-pool-a-1  1/1    Running  0
my-cluster-pool-a-2  1/1    Running  0
Node IDs are appended to the name of the node on creation. We add node my-cluster-pool-a-3
, which has a node ID of 3
.
During this process, the ID of the node that holds the partition replicas changes. Consider any dependencies that reference the node ID.
Prerequisites
- The Cluster Operator must be deployed.
- Cruise Control is deployed with Kafka.
(Optional) For scale up operations, you can specify the node IDs to use in the operation.
If you have assigned a range of node IDs for the operation, the ID of the node being added is determined by the sequence of nodes given. If you have assigned a single node ID, a node is added with the specified ID. Otherwise, the lowest available node ID across the cluster is used.
Procedure
Create a new node in the node pool.
For example, node pool
pool-a
has three replicas. We add a node by increasing the number of replicas:
oc scale kafkanodepool pool-a --replicas=4
Check the status of the deployment and wait for the pods in the node pool to be created and have a status of
READY
.
oc get pods -n <my_cluster_operator_namespace>
Output shows four Kafka nodes in the node pool
NAME                 READY  STATUS   RESTARTS
my-cluster-pool-a-0  1/1    Running  0
my-cluster-pool-a-1  1/1    Running  0
my-cluster-pool-a-2  1/1    Running  0
my-cluster-pool-a-3  1/1    Running  0
Reassign the partitions after increasing the number of nodes in the node pool.
After scaling up a node pool, you can use the Cruise Control
add-brokers
mode to move partition replicas from existing brokers to the newly added brokers.
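For this reassignment step, a minimal KafkaRebalance sketch using the add-brokers mode might look like the following. The broker ID 3 matches the node added in this procedure, and the resource name is illustrative; generate and approve the resulting optimization proposal as described in the Cruise Control documentation.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaRebalance
metadata:
  name: my-add-brokers-rebalance
  labels:
    strimzi.io/cluster: my-cluster
spec:
  mode: add-brokers
  brokers: [3]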
8.3.3. (Preview) Removing nodes from a node pool
This procedure describes how to scale down a node pool to remove nodes.
In this procedure, we start with four nodes for node pool pool-a
:
Kafka nodes in the node pool
NAME                 READY  STATUS   RESTARTS
my-cluster-pool-a-0  1/1    Running  0
my-cluster-pool-a-1  1/1    Running  0
my-cluster-pool-a-2  1/1    Running  0
my-cluster-pool-a-3  1/1    Running  0
Node IDs are appended to the name of the node on creation. We remove node my-cluster-pool-a-3
, which has a node ID of 3
.
During this process, the ID of the node that holds the partition replicas changes. Consider any dependencies that reference the node ID.
Prerequisites
- The Cluster Operator must be deployed.
- Cruise Control is deployed with Kafka.
(Optional) For scale down operations, you can specify the node IDs to use in the operation.
If you have assigned a range of node IDs for the operation, the ID of the node being removed is determined by the sequence of nodes given. If you have assigned a single node ID, the node with the specified ID is removed. Otherwise, the node with the highest available ID in the node pool is removed.
Procedure
Reassign the partitions before decreasing the number of nodes in the node pool.
Before scaling down a node pool, you can use the Cruise Control
remove-brokers
mode to move partition replicas off the brokers that are going to be removed (see the KafkaRebalance sketch at the end of this section).
After the reassignment process is complete, and the node being removed has no live partitions, reduce the number of Kafka nodes in the node pool.
For example, node pool
pool-a
has four replicas. We remove a node by decreasing the number of replicas:
oc scale kafkanodepool pool-a --replicas=3
Output shows three Kafka nodes in the node pool
NAME                 READY  STATUS   RESTARTS
my-cluster-pool-a-0  1/1    Running  0
my-cluster-pool-a-1  1/1    Running  0
my-cluster-pool-a-2  1/1    Running  0
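For the reassignment step in this procedure, a minimal KafkaRebalance sketch using the remove-brokers mode might look like the following. The broker ID 3 matches the node removed in this procedure, and the resource name is illustrative; generate and approve the resulting optimization proposal as described in the Cruise Control documentation.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaRebalance
metadata:
  name: my-remove-brokers-rebalance
  labels:
    strimzi.io/cluster: my-cluster
spec:
  mode: remove-brokers
  brokers: [3]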
8.3.4. (Preview) Moving nodes between node pools
This procedure describes how to move nodes between source and target Kafka node pools without downtime. You create a new node on the target node pool and reassign partitions to move data from the old node on the source node pool. When the replicas on the new node are in-sync, you can delete the old node.
In this procedure, we start with two node pools:
-
pool-a
with three replicas is the target node pool -
pool-b
with four replicas is the source node pool
We scale up pool-a
, and reassign partitions and scale down pool-b
, which results in the following:
-
pool-a
with four replicas -
pool-b
with three replicas
During this process, the ID of the node that holds the partition replicas changes. Consider any dependencies that reference the node ID.
Prerequisites
- The Cluster Operator must be deployed.
- Cruise Control is deployed with Kafka.
(Optional) For scale up and scale down operations, you can specify the range of node IDs to use.
If you have assigned node IDs for the operation, the ID of the node being added or removed is determined by the sequence of nodes given. Otherwise, the lowest available node ID across the cluster is used when adding nodes; and the node with the highest available ID in the node pool is removed.
Procedure
Create a new node in the target node pool.
For example, node pool
pool-a
has three replicas. We add a node by increasing the number of replicas:
oc scale kafkanodepool pool-a --replicas=4
Check the status of the deployment and wait for the pods in the node pool to be created and have a status of
READY
.
oc get pods -n <my_cluster_operator_namespace>
Output shows four Kafka nodes in the target node pool
NAME                 READY  STATUS   RESTARTS
my-cluster-pool-a-0  1/1    Running  0
my-cluster-pool-a-1  1/1    Running  0
my-cluster-pool-a-4  1/1    Running  0
my-cluster-pool-a-5  1/1    Running  0
Node IDs are appended to the name of the node on creation. We add node
my-cluster-pool-a-5
, which has a node ID of 5.
Reassign the partitions from the old node to the new node.
Before scaling down the source node pool, you can use the Cruise Control
remove-brokers
mode to move partition replicas off the brokers that are going to be removed.
After the reassignment process is complete, reduce the number of Kafka nodes in the source node pool.
For example, node pool
pool-b
has four replicas. We remove a node by decreasing the number of replicas:
oc scale kafkanodepool pool-b --replicas=3
The node with the highest ID within a pool is removed.
Output shows three Kafka nodes in the source node pool
NAME                       READY  STATUS   RESTARTS
my-cluster-pool-b-kafka-2  1/1    Running  0
my-cluster-pool-b-kafka-3  1/1    Running  0
my-cluster-pool-b-kafka-6  1/1    Running  0
8.3.5. (Preview) Managing storage using node pools
Storage management in AMQ Streams is usually straightforward, and requires little change when set up, but there might be situations where you need to modify your storage configurations. Node pools simplify this process, because you can set up separate node pools that specify your new storage requirements.
In this procedure we create and manage storage for a node pool called pool-a
containing three nodes. We show how to change the storage class (volumes.class
) that defines the type of persistent storage it uses. You can use the same steps to change the storage size (volumes.size
).
We strongly recommend using block storage. AMQ Streams is only tested for use with block storage.
Prerequisites
- The Cluster Operator must be deployed.
- Cruise Control is deployed with Kafka.
- For storage that uses persistent volume claims for dynamic volume allocation, storage classes are defined and available in the OpenShift cluster that correspond to the storage solutions you need.
Procedure
Create the node pool with its own storage settings.
For example, node pool
pool-a
uses JBOD storage with persistent volumes:
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: pool-a
  labels:
    strimzi.io/cluster: my-cluster
spec:
  replicas: 3
  storage:
    type: jbod
    volumes:
      - id: 0
        type: persistent-claim
        size: 500Gi
        class: gp2-ebs
  # ...
Nodes in
pool-a
are configured to use Amazon EBS (Elastic Block Store) GP2 volumes.-
Apply the node pool configuration for
pool-a
. Check the status of the deployment and wait for the pods in
pool-a
to be created and have a status of READY.
oc get pods -n <my_cluster_operator_namespace>
Output shows three Kafka nodes in the node pool
NAME                 READY  STATUS   RESTARTS
my-cluster-pool-a-0  1/1    Running  0
my-cluster-pool-a-1  1/1    Running  0
my-cluster-pool-a-2  1/1    Running  0
To migrate to a new storage class, create a new node pool with the required storage configuration:
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: pool-b
  labels:
    strimzi.io/cluster: my-cluster
spec:
  roles:
    - broker
  replicas: 3
  storage:
    type: jbod
    volumes:
      - id: 0
        type: persistent-claim
        size: 1Ti
        class: gp3-ebs
  # ...
Nodes in
pool-b
are configured to use Amazon EBS (Elastic Block Store) GP3 volumes.-
Apply the node pool configuration for
pool-b
. -
Check the status of the deployment and wait for the pods in
pool-b
to be created and have a status of READY
. Reassign the partitions from
pool-a
to pool-b
.
When migrating to a new storage configuration, you can use the Cruise Control
remove-brokers
mode to move partition replicas off the brokers that are going to be removed.
After the reassignment process is complete, delete the old node pool:
oc delete kafkanodepool pool-a
8.3.6. (Preview) Managing storage affinity using node pools
In situations where storage resources, such as local persistent volumes, are constrained to specific worker nodes, or availability zones, configuring storage affinity helps to schedule pods to use the right nodes.
Node pools allow you to configure affinity independently. In this procedure, we create and manage storage affinity for two availability zones: zone-1
and zone-2
.
You can configure node pools for separate availability zones, but use the same storage class. We define an all-zones
persistent storage class representing the storage resources available in each zone.
We also use the .spec.template.pod
properties to configure the node affinity and schedule Kafka pods on zone-1
and zone-2
worker nodes.
The storage class and affinity is specified in node pools representing the nodes in each availability zone:
-
pool-zone-1
-
pool-zone-2
.
Prerequisites
- The Cluster Operator must be deployed.
- If you are not familiar with the concepts of affinity, see the Kubernetes node and pod affinity documentation.
Procedure
Define the storage class for use with each availability zone:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: all-zones
provisioner: kubernetes.io/my-storage
parameters:
  type: ssd
volumeBindingMode: WaitForFirstConsumer
Create node pools representing the two availability zones, specifying the
all-zones
storage class and the affinity for each zone:
Node pool configuration for zone-1
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: pool-zone-1
  labels:
    strimzi.io/cluster: my-cluster
spec:
  replicas: 3
  storage:
    type: jbod
    volumes:
      - id: 0
        type: persistent-claim
        size: 500Gi
        class: all-zones
  template:
    pod:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: topology.kubernetes.io/zone
                    operator: In
                    values:
                      - zone-1
  # ...
Node pool configuration for zone-2
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: pool-zone-2
  labels:
    strimzi.io/cluster: my-cluster
spec:
  replicas: 4
  storage:
    type: jbod
    volumes:
      - id: 0
        type: persistent-claim
        size: 500Gi
        class: all-zones
  template:
    pod:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: topology.kubernetes.io/zone
                    operator: In
                    values:
                      - zone-2
  # ...
- Apply the node pool configuration.
Check the status of the deployment and wait for the pods in the node pools to be created and have a status of
READY
.
oc get pods -n <my_cluster_operator_namespace>
Output shows 3 Kafka nodes in
pool-zone-1
and 4 Kafka nodes in pool-zone-2:
NAME                            READY  STATUS   RESTARTS
my-cluster-pool-zone-1-kafka-0  1/1    Running  0
my-cluster-pool-zone-1-kafka-1  1/1    Running  0
my-cluster-pool-zone-1-kafka-2  1/1    Running  0
my-cluster-pool-zone-2-kafka-3  1/1    Running  0
my-cluster-pool-zone-2-kafka-4  1/1    Running  0
my-cluster-pool-zone-2-kafka-5  1/1    Running  0
my-cluster-pool-zone-2-kafka-6  1/1    Running  0
8.3.7. (Preview) Migrating existing Kafka clusters to use Kafka node pools
This procedure describes how to migrate existing Kafka clusters to use Kafka node pools. After you have updated the Kafka cluster, you can use the node pools to manage the configuration of nodes within each pool.
While the KafkaNodePools
feature gate that enables node pools is in alpha phase, replica and storage configuration in the KafkaNodePool
resource must also be present in the Kafka
resource. The configuration is ignored when node pools are being used.
Prerequisites
Procedure
Create a new
KafkaNodePool
resource.-
Name the resource
kafka
. -
Point a
strimzi.io/cluster
label to your existing Kafka
resource. - Set the replica count and storage configuration to match your current Kafka cluster.
-
Set the roles to
broker
.
Example configuration for a node pool used in migrating a Kafka cluster
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: kafka
  labels:
    strimzi.io/cluster: my-cluster
spec:
  replicas: 3
  roles:
    - broker
  storage:
    type: jbod
    volumes:
      - id: 0
        type: persistent-claim
        size: 100Gi
        deleteClaim: false
Warning: To migrate a cluster while preserving its data along with the names of its nodes and resources, the node pool name must be
kafka
, and the strimzi.io/cluster
label must use the name of the Kafka resource. Otherwise, nodes and resources are created with new names, including the persistent volume storage used by the nodes. Consequently, your previous data may not be available.-
Name the resource
Apply the
KafkaNodePool
resource:
oc apply -f <node_pool_configuration_file>
By applying this resource, you switch Kafka to using node pools.
There is no change or rolling update and resources are identical to how they were before.
Update the
STRIMZI_FEATURE_GATES
environment variable in the Cluster Operator configuration to include +KafkaNodePools
.
env:
  - name: STRIMZI_FEATURE_GATES
    value: +KafkaNodePools
After restarting, the Cluster Operator logs a warning indicating that the Kafka node pool has been added but is not yet integrated with the Cluster Operator. This is an expected part of the process.
Enable the
KafkaNodePools
feature gate in the Kafka
resource using the strimzi.io/node-pools: enabled
annotation.
Example Kafka configuration with the node pools annotation
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
  annotations:
    strimzi.io/node-pools: enabled
spec:
  kafka:
    # ...
  zookeeper:
    # ...
Apply the
Kafka
resource:
oc apply -f <kafka_configuration_file>
There is no change or rolling update. The resources remain identical to how they were before.
8.4. Configuring the Entity Operator
Use the entityOperator
property in Kafka.spec
to configure the Entity Operator. The Entity Operator is responsible for managing Kafka-related entities in a running Kafka cluster. It comprises the following operators:
- Topic Operator to manage Kafka topics
- User Operator to manage Kafka users
By configuring the Kafka
resource, the Cluster Operator can deploy the Entity Operator, including one or both operators. Once deployed, the operators are automatically configured to handle the topics and users of the Kafka cluster.
Each operator can only monitor a single namespace. For more information, see Section 1.2.1, “Watching AMQ Streams resources in OpenShift namespaces”.
The entityOperator
property supports several sub-properties:
-
tlsSidecar
-
topicOperator
-
userOperator
-
template
The tlsSidecar property contains the configuration of the TLS sidecar container, which is used to communicate with ZooKeeper.
The template property contains the configuration of the Entity Operator pod, such as labels, annotations, affinity, and tolerations. For more information on configuring templates, see Section 8.16, “Customizing OpenShift resources”.
The topicOperator property contains the configuration of the Topic Operator. When this option is missing, the Entity Operator is deployed without the Topic Operator.
The userOperator property contains the configuration of the User Operator. When this option is missing, the Entity Operator is deployed without the User Operator.
For more information on the properties used to configure the Entity Operator, see the EntityOperatorSpec schema reference.
Example of basic configuration enabling both operators
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
  zookeeper:
    # ...
  entityOperator:
    topicOperator: {}
    userOperator: {}
If an empty object ({}) is used for the topicOperator and userOperator, all properties use their default values.
When both topicOperator and userOperator properties are missing, the Entity Operator is not deployed.
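For instance, a minimal sketch that deploys only the User Operator, reusing the my-cluster example above:
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
  zookeeper:
    # ...
  entityOperator:
    userOperator: {}   # topicOperator is omitted, so the Topic Operator is not deployed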
8.4.1. Configuring the Topic Operator
Use topicOperator properties in Kafka.spec.entityOperator to configure the Topic Operator.
If you are using the preview of unidirectional topic management, the following properties are not used and will be ignored: Kafka.spec.entityOperator.topicOperator.zookeeperSessionTimeoutSeconds and Kafka.spec.entityOperator.topicOperator.topicMetadataMaxAttempts. For more information on unidirectional topic management, refer to Section 9.1, “Topic management modes”.
The following properties are supported:
watchedNamespace
- The OpenShift namespace in which the Topic Operator watches for KafkaTopic resources. Default is the namespace where the Kafka cluster is deployed.
reconciliationIntervalSeconds
- The interval between periodic reconciliations in seconds. Default 120.
zookeeperSessionTimeoutSeconds
- The ZooKeeper session timeout in seconds. Default 18.
topicMetadataMaxAttempts
- The number of attempts at getting topic metadata from Kafka. The time between each attempt is defined as an exponential back-off. Consider increasing this value when topic creation might take more time due to the number of partitions or replicas. Default 6.
image
- The image property can be used to configure the container image which will be used. To learn more, refer to the information provided on configuring the image property.
resources
- The resources property configures the amount of resources allocated to the Topic Operator. You can specify requests and limits for memory and cpu resources. The requests should be enough to ensure a stable performance of the operator.
logging
- The logging property configures the logging of the Topic Operator. To learn more, refer to the information provided on Topic Operator logging.
Example Topic Operator configuration
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
  zookeeper:
    # ...
  entityOperator:
    # ...
    topicOperator:
      watchedNamespace: my-topic-namespace
      reconciliationIntervalSeconds: 60
      resources:
        requests:
          cpu: "1"
          memory: 500Mi
        limits:
          cpu: "1"
          memory: 500Mi
  # ...
8.4.2. Configuring the User Operator
Use userOperator properties in Kafka.spec.entityOperator to configure the User Operator. The following properties are supported:
watchedNamespace
- The OpenShift namespace in which the User Operator watches for KafkaUser resources. Default is the namespace where the Kafka cluster is deployed.
reconciliationIntervalSeconds
- The interval between periodic reconciliations in seconds. Default 120.
image
- The image property can be used to configure the container image which will be used. To learn more, refer to the information provided on configuring the image property.
resources
- The resources property configures the amount of resources allocated to the User Operator. You can specify requests and limits for memory and cpu resources. The requests should be enough to ensure a stable performance of the operator.
logging
- The logging property configures the logging of the User Operator. To learn more, refer to the information provided on User Operator logging.
secretPrefix
- The secretPrefix property adds a prefix to the name of all Secrets created from the KafkaUser resource. For example, secretPrefix: kafka- would prefix all Secret names with kafka-. So a KafkaUser named my-user would create a Secret named kafka-my-user.
Example User Operator configuration
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
  zookeeper:
    # ...
  entityOperator:
    # ...
    userOperator:
      watchedNamespace: my-user-namespace
      reconciliationIntervalSeconds: 60
      resources:
        requests:
          cpu: "1"
          memory: 500Mi
        limits:
          cpu: "1"
          memory: 500Mi
  # ...
8.5. Configuring the Cluster Operator
Use environment variables to configure the Cluster Operator. Specify the environment variables for the container image of the Cluster Operator in its Deployment configuration file.
The Deployment configuration file provided with the AMQ Streams release artifacts is install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml.
You can use the following environment variables to configure the Cluster Operator. If you are running Cluster Operator replicas in standby mode, there are additional environment variables for enabling leader election.
STRIMZI_NAMESPACE
- A comma-separated list of namespaces that the operator operates in. When not set, set to an empty string, or set to *, the Cluster Operator operates in all namespaces. The Cluster Operator deployment might use the downward API to set this automatically to the namespace the Cluster Operator is deployed in.
Example configuration for Cluster Operator namespaces
env:
  - name: STRIMZI_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace
STRIMZI_FULL_RECONCILIATION_INTERVAL_MS
- Optional, default is 120000 ms. The interval between periodic reconciliations, in milliseconds.
STRIMZI_OPERATION_TIMEOUT_MS
- Optional, default 300000 ms. The timeout for internal operations, in milliseconds. Increase this value when using AMQ Streams on clusters where regular OpenShift operations take longer than usual (because of slow downloading of Docker images, for example).
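For example, a sketch of raising the operation timeout in the Cluster Operator Deployment; the value shown is illustrative:
env:
  - name: STRIMZI_OPERATION_TIMEOUT_MS
    value: "600000"  # 10 minutes instead of the default 5 minutes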
STRIMZI_ZOOKEEPER_ADMIN_SESSION_TIMEOUT_MS
- Optional, default 10000 ms. The session timeout for the Cluster Operator’s ZooKeeper admin client, in milliseconds. Increase the value if ZooKeeper requests from the Cluster Operator are regularly failing due to timeout issues. There is a maximum allowed session time set on the ZooKeeper server side via the maxSessionTimeout config. By default, the maximum session timeout value is 20 times the default tickTime (whose default is 2000) at 40000 ms. If you require a higher timeout, change the maxSessionTimeout ZooKeeper server configuration value.
STRIMZI_OPERATIONS_THREAD_POOL_SIZE
- Optional, default 10. The worker thread pool size, which is used for various asynchronous and blocking operations that are run by the Cluster Operator.
STRIMZI_OPERATOR_NAME
- Optional, defaults to the pod’s hostname. The operator name identifies the AMQ Streams instance when emitting OpenShift events.
STRIMZI_OPERATOR_NAMESPACE
The name of the namespace where the Cluster Operator is running. Do not configure this variable manually. Use the downward API.
env:
  - name: STRIMZI_OPERATOR_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace
STRIMZI_OPERATOR_NAMESPACE_LABELS
Optional. The labels of the namespace where the AMQ Streams Cluster Operator is running. Use namespace labels to configure the namespace selector in network policies. Network policies allow the AMQ Streams Cluster Operator access only to the operands from the namespace with these labels. When not set, the namespace selector in network policies is configured to allow access to the Cluster Operator from any namespace in the OpenShift cluster.
env:
  - name: STRIMZI_OPERATOR_NAMESPACE_LABELS
    value: label1=value1,label2=value2
STRIMZI_LABELS_EXCLUSION_PATTERN
- Optional, default regex pattern is ^app.kubernetes.io/(?!part-of).*. The regex exclusion pattern used to filter labels propagation from the main custom resource to its subresources. The labels exclusion filter is not applied to labels in template sections such as spec.kafka.template.pod.metadata.labels.
env:
  - name: STRIMZI_LABELS_EXCLUSION_PATTERN
    value: "^key1.*"
STRIMZI_CUSTOM_{COMPONENT_NAME}_LABELS
- Optional. One or more custom labels to apply to all the pods created by the {COMPONENT_NAME} custom resource. The Cluster Operator labels the pods when the custom resource is created or is next reconciled. Labels can be applied to the following components:
  - KAFKA
  - KAFKA_CONNECT
  - KAFKA_CONNECT_BUILD
  - ZOOKEEPER
  - ENTITY_OPERATOR
  - KAFKA_MIRROR_MAKER2
  - KAFKA_MIRROR_MAKER
  - CRUISE_CONTROL
  - KAFKA_BRIDGE
  - KAFKA_EXPORTER
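For example, a sketch of applying custom labels to all pods created from Kafka custom resources; the label keys and values here are illustrative only:
env:
  - name: STRIMZI_CUSTOM_KAFKA_LABELS
    value: team=platform,tier=messaging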
STRIMZI_CUSTOM_RESOURCE_SELECTOR
- Optional. The label selector to filter the custom resources handled by the Cluster Operator. The operator will operate only on those custom resources that have the specified labels set. Resources without these labels will not be seen by the operator. The label selector applies to Kafka, KafkaConnect, KafkaBridge, KafkaMirrorMaker, and KafkaMirrorMaker2 resources. KafkaRebalance and KafkaConnector resources are operated only when their corresponding Kafka and Kafka Connect clusters have the matching labels.
env:
  - name: STRIMZI_CUSTOM_RESOURCE_SELECTOR
    value: label1=value1,label2=value2
STRIMZI_KAFKA_IMAGES
- Required. The mapping from the Kafka version to the corresponding Docker image containing a Kafka broker for that version. The required syntax is whitespace or comma-separated <version>=<image> pairs. For example 3.5.0=registry.redhat.io/amq-streams/kafka-35-rhel8:2.6.0, 3.6.0=registry.redhat.io/amq-streams/kafka-36-rhel8:2.6.0. This is used when a Kafka.spec.kafka.version property is specified but not the Kafka.spec.kafka.image in the Kafka resource.
STRIMZI_DEFAULT_KAFKA_INIT_IMAGE
- Optional, default registry.redhat.io/amq-streams/strimzi-rhel8-operator:2.6.0. The image name to use as default for the init container if no image is specified as the kafka-init-image in the Kafka resource. The init container is started before the broker for initial configuration work, such as rack support.
STRIMZI_KAFKA_CONNECT_IMAGES
- Required. The mapping from the Kafka version to the corresponding Docker image of Kafka Connect for that version. The required syntax is whitespace or comma-separated <version>=<image> pairs. For example 3.5.0=registry.redhat.io/amq-streams/kafka-35-rhel8:2.6.0, 3.6.0=registry.redhat.io/amq-streams/kafka-36-rhel8:2.6.0. This is used when a KafkaConnect.spec.version property is specified but not the KafkaConnect.spec.image.
STRIMZI_KAFKA_MIRROR_MAKER_IMAGES
- Required. The mapping from the Kafka version to the corresponding Docker image of MirrorMaker for that version. The required syntax is whitespace or comma-separated <version>=<image> pairs. For example 3.5.0=registry.redhat.io/amq-streams/kafka-35-rhel8:2.6.0, 3.6.0=registry.redhat.io/amq-streams/kafka-36-rhel8:2.6.0. This is used when a KafkaMirrorMaker.spec.version property is specified but not the KafkaMirrorMaker.spec.image.
STRIMZI_DEFAULT_TOPIC_OPERATOR_IMAGE
- Optional, default registry.redhat.io/amq-streams/strimzi-rhel8-operator:2.6.0. The image name to use as the default when deploying the Topic Operator if no image is specified as the Kafka.spec.entityOperator.topicOperator.image in the Kafka resource.
STRIMZI_DEFAULT_USER_OPERATOR_IMAGE
- Optional, default registry.redhat.io/amq-streams/strimzi-rhel8-operator:2.6.0. The image name to use as the default when deploying the User Operator if no image is specified as the Kafka.spec.entityOperator.userOperator.image in the Kafka resource.
STRIMZI_DEFAULT_TLS_SIDECAR_ENTITY_OPERATOR_IMAGE
- Optional, default registry.redhat.io/amq-streams/kafka-36-rhel8:2.6.0. The image name to use as the default when deploying the sidecar container for the Entity Operator if no image is specified as the Kafka.spec.entityOperator.tlsSidecar.image in the Kafka resource. The sidecar provides TLS support.
STRIMZI_IMAGE_PULL_POLICY
- Optional. The ImagePullPolicy that is applied to containers in all pods managed by the Cluster Operator. The valid values are Always, IfNotPresent, and Never. If not specified, the OpenShift defaults are used. Changing the policy will result in a rolling update of all your Kafka, Kafka Connect, and Kafka MirrorMaker clusters.
STRIMZI_IMAGE_PULL_SECRETS
- Optional. A comma-separated list of Secret names. The secrets referenced here contain the credentials to the container registries where the container images are pulled from. The secrets are specified in the imagePullSecrets property for all pods created by the Cluster Operator. Changing this list results in a rolling update of all your Kafka, Kafka Connect, and Kafka MirrorMaker clusters.
STRIMZI_KUBERNETES_VERSION
Optional. Overrides the OpenShift version information detected from the API server.
Example configuration for OpenShift version override
env:
  - name: STRIMZI_KUBERNETES_VERSION
    value: |
      major=1
      minor=16
      gitVersion=v1.16.2
      gitCommit=c97fe5036ef3df2967d086711e6c0c405941e14b
      gitTreeState=clean
      buildDate=2019-10-15T19:09:08Z
      goVersion=go1.12.10
      compiler=gc
      platform=linux/amd64
KUBERNETES_SERVICE_DNS_DOMAIN
- Optional. Overrides the default OpenShift DNS domain name suffix.
By default, services assigned in the OpenShift cluster have a DNS domain name that uses the default suffix cluster.local.
For example, for broker kafka-0:
<cluster-name>-kafka-0.<cluster-name>-kafka-brokers.<namespace>.svc.cluster.local
The DNS domain name is added to the Kafka broker certificates used for hostname verification.
If you are using a different DNS domain name suffix in your cluster, change the KUBERNETES_SERVICE_DNS_DOMAIN environment variable from the default to the one you are using in order to establish a connection with the Kafka brokers.
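As an illustrative sketch, assuming a cluster whose services use the DNS suffix mycluster.example (replace with your own suffix):
env:
  - name: KUBERNETES_SERVICE_DNS_DOMAIN
    value: mycluster.example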
STRIMZI_CONNECT_BUILD_TIMEOUT_MS
- Optional, default 300000 ms. The timeout for building new Kafka Connect images with additional connectors, in milliseconds. Consider increasing this value when using AMQ Streams to build container images containing many connectors or using a slow container registry.
STRIMZI_NETWORK_POLICY_GENERATION
- Optional, default true. Network policy for resources. Network policies allow connections between Kafka components. Set this environment variable to false to disable network policy generation. You might do this, for example, if you want to use custom network policies. Custom network policies allow more control over maintaining the connections between components.
STRIMZI_DNS_CACHE_TTL
- Optional, default 30. Number of seconds to cache successful name lookups in local DNS resolver. Any negative value means cache forever. Zero means do not cache, which can be useful for avoiding connection errors due to long caching policies being applied.
STRIMZI_POD_SET_RECONCILIATION_ONLY
- Optional, default false. When set to true, the Cluster Operator reconciles only the StrimziPodSet resources and any changes to the other custom resources (Kafka, KafkaConnect, and so on) are ignored. This mode is useful for ensuring that your pods are recreated if needed, but no other changes happen to the clusters.
STRIMZI_FEATURE_GATES
- Optional. Enables or disables the features and functionality controlled by feature gates.
STRIMZI_POD_SECURITY_PROVIDER_CLASS
- Optional. Configuration for the pluggable PodSecurityProvider class, which can be used to provide the security context configuration for Pods and containers.
8.5.1. Restricting access to the Cluster Operator using network policy
Use the STRIMZI_OPERATOR_NAMESPACE_LABELS environment variable to establish network policy for the Cluster Operator using namespace labels.
The Cluster Operator can run in the same namespace as the resources it manages, or in a separate namespace. By default, the STRIMZI_OPERATOR_NAMESPACE environment variable is configured to use the downward API to find the namespace the Cluster Operator is running in. If the Cluster Operator is running in the same namespace as the resources, only local access is required and allowed by AMQ Streams.
If the Cluster Operator is running in a separate namespace to the resources it manages, any namespace in the OpenShift cluster is allowed access to the Cluster Operator unless network policy is configured. By adding namespace labels, access to the Cluster Operator is restricted to the namespaces specified.
Network policy configured for the Cluster Operator deployment
#...
env:
  # ...
  - name: STRIMZI_OPERATOR_NAMESPACE_LABELS
    value: label1=value1,label2=value2
#...
8.5.2. Configuring periodic reconciliation by the Cluster Operator
Use the STRIMZI_FULL_RECONCILIATION_INTERVAL_MS variable to set the time interval for periodic reconciliations by the Cluster Operator. Replace its value with the required interval in milliseconds.
Reconciliation period configured for the Cluster Operator deployment
#...
env:
  # ...
  - name: STRIMZI_FULL_RECONCILIATION_INTERVAL_MS
    value: "120000"
#...
The Cluster Operator reacts to all notifications about applicable cluster resources received from the OpenShift cluster. If the operator is not running, or if a notification is not received for any reason, resources get out of sync with the state of the running OpenShift cluster. To handle failovers properly, the Cluster Operator runs a periodic reconciliation process that compares the state of the resources with the current cluster deployments, ensuring a consistent state across all of them.
Additional resources
8.5.3. Running multiple Cluster Operator replicas with leader election
The default Cluster Operator configuration enables leader election to run multiple parallel replicas of the Cluster Operator. One replica is elected as the active leader and operates the deployed resources. The other replicas run in standby mode. When the leader stops or fails, one of the standby replicas is elected as the new leader and starts operating the deployed resources.
By default, AMQ Streams runs with a single Cluster Operator replica that is always the leader replica. When a single Cluster Operator replica stops or fails, OpenShift starts a new replica.
Running the Cluster Operator with multiple replicas is not essential. But it’s useful to have replicas on standby in case of large-scale disruptions caused by major failure. For example, suppose multiple worker nodes or an entire availability zone fails. This failure might cause the Cluster Operator pod and many Kafka pods to go down at the same time. If subsequent pod scheduling causes congestion through lack of resources, this can delay operations when running a single Cluster Operator.
8.5.3.1. Enabling leader election for Cluster Operator replicas
Configure leader election environment variables when running additional Cluster Operator replicas. The following environment variables are supported:
STRIMZI_LEADER_ELECTION_ENABLED
- Optional, disabled (false) by default. Enables or disables leader election, which allows additional Cluster Operator replicas to run on standby.
Leader election is disabled by default. It is only enabled when applying this environment variable on installation.
STRIMZI_LEADER_ELECTION_LEASE_NAME
- Required when leader election is enabled. The name of the OpenShift Lease resource that is used for the leader election.
STRIMZI_LEADER_ELECTION_LEASE_NAMESPACE
- Required when leader election is enabled. The namespace where the OpenShift Lease resource used for leader election is created. You can use the downward API to configure it to the namespace where the Cluster Operator is deployed.
env:
  - name: STRIMZI_LEADER_ELECTION_LEASE_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace
STRIMZI_LEADER_ELECTION_IDENTITY
Required when leader election is enabled. Configures the identity of a given Cluster Operator instance used during the leader election. The identity must be unique for each operator instance. You can use the downward API to configure it to the name of the pod where the Cluster Operator is deployed.
env:
  - name: STRIMZI_LEADER_ELECTION_IDENTITY
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
STRIMZI_LEADER_ELECTION_LEASE_DURATION_MS
- Optional, default 15000 ms. Specifies the duration the acquired lease is valid.
STRIMZI_LEADER_ELECTION_RENEW_DEADLINE_MS
- Optional, default 10000 ms. Specifies the period the leader should try to maintain leadership.
STRIMZI_LEADER_ELECTION_RETRY_PERIOD_MS
- Optional, default 2000 ms. Specifies the frequency of updates to the lease lock by the leader.
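As an illustrative sketch, the timing variables can be set together in the Cluster Operator Deployment; the values shown simply make the documented defaults explicit:
env:
  - name: STRIMZI_LEADER_ELECTION_LEASE_DURATION_MS
    value: "15000"
  - name: STRIMZI_LEADER_ELECTION_RENEW_DEADLINE_MS
    value: "10000"
  - name: STRIMZI_LEADER_ELECTION_RETRY_PERIOD_MS
    value: "2000"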
8.5.3.2. Configuring Cluster Operator replicas
To run additional Cluster Operator replicas in standby mode, you will need to increase the number of replicas and enable leader election. To configure leader election, use the leader election environment variables.
To make the required changes, configure the following Cluster Operator installation files located in install/cluster-operator/
:
- 060-Deployment-strimzi-cluster-operator.yaml
- 022-ClusterRole-strimzi-cluster-operator-role.yaml
- 022-RoleBinding-strimzi-cluster-operator.yaml
Leader election has its own ClusterRole and RoleBinding RBAC resources that target the namespace where the Cluster Operator is running, rather than the namespace it is watching.
The default deployment configuration creates a Lease resource called strimzi-cluster-operator in the same namespace as the Cluster Operator. The Cluster Operator uses leases to manage leader election. The RBAC resources provide the permissions to use the Lease resource. If you use a different Lease name or namespace, update the ClusterRole and RoleBinding files accordingly.
Prerequisites
- You need an account with permission to create and manage CustomResourceDefinition and RBAC (ClusterRole and RoleBinding) resources.
Procedure
Edit the Deployment resource that is used to deploy the Cluster Operator, which is defined in the 060-Deployment-strimzi-cluster-operator.yaml file.
Change the replicas property from the default (1) to a value that matches the required number of replicas.
Increasing the number of Cluster Operator replicas
apiVersion: apps/v1
kind: Deployment
metadata:
  name: strimzi-cluster-operator
  labels:
    app: strimzi
spec:
  replicas: 3
Check that the leader election env properties are set. If they are not set, configure them.
To enable leader election, STRIMZI_LEADER_ELECTION_ENABLED must be set to true (default).
In this example, the name of the lease is changed to my-strimzi-cluster-operator.
Configuring leader election environment variables for the Cluster Operator
# ...
spec:
  containers:
    - name: strimzi-cluster-operator
      # ...
      env:
        - name: STRIMZI_LEADER_ELECTION_ENABLED
          value: "true"
        - name: STRIMZI_LEADER_ELECTION_LEASE_NAME
          value: "my-strimzi-cluster-operator"
        - name: STRIMZI_LEADER_ELECTION_LEASE_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: STRIMZI_LEADER_ELECTION_IDENTITY
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
For a description of the available environment variables, see Section 8.5.3.1, “Enabling leader election for Cluster Operator replicas”.
If you specified a different name or namespace for the Lease resource used in leader election, update the RBAC resources.
(optional) Edit the ClusterRole resource in the 022-ClusterRole-strimzi-cluster-operator-role.yaml file. Update resourceNames with the name of the Lease resource.
Updating the ClusterRole references to the lease
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: strimzi-cluster-operator-leader-election
  labels:
    app: strimzi
rules:
  - apiGroups:
      - coordination.k8s.io
    resourceNames:
      - my-strimzi-cluster-operator
# ...
(optional) Edit the RoleBinding resource in the 022-RoleBinding-strimzi-cluster-operator.yaml file. Update subjects.name and subjects.namespace with the name of the Lease resource and the namespace where it was created.
Updating the RoleBinding references to the lease
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: strimzi-cluster-operator-leader-election
  labels:
    app: strimzi
subjects:
  - kind: ServiceAccount
    name: my-strimzi-cluster-operator
    namespace: myproject
# ...
Deploy the Cluster Operator:
oc create -f install/cluster-operator -n myproject
Check the status of the deployment:
oc get deployments -n myproject
Output shows the deployment name and readiness
NAME                       READY   UP-TO-DATE   AVAILABLE
strimzi-cluster-operator   3/3     3            3
READY shows the number of replicas that are ready/expected. The deployment is successful when the AVAILABLE output shows the correct number of replicas.
8.5.4. Configuring Cluster Operator HTTP proxy settings
If you are running a Kafka cluster behind an HTTP proxy, you can still pass data in and out of the cluster. For example, you can run Kafka Connect with connectors that push and pull data from outside the proxy. Or you can use a proxy to connect with an authorization server.
Configure the Cluster Operator deployment to specify the proxy environment variables. The Cluster Operator accepts standard proxy configuration (HTTP_PROXY, HTTPS_PROXY, and NO_PROXY) as environment variables. The proxy settings are applied to all AMQ Streams containers.
The format for a proxy address is http://<ip_address>:<port_number>. To set up a proxy with a name and password, the format is http://<username>:<password>@<ip_address>:<port_number>.
Prerequisites
- You need an account with permission to create and manage CustomResourceDefinition and RBAC (ClusterRole and RoleBinding) resources.
Procedure
To add proxy environment variables to the Cluster Operator, update its Deployment configuration (install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml).
Example proxy configuration for the Cluster Operator
apiVersion: apps/v1
kind: Deployment
spec:
  # ...
  template:
    spec:
      serviceAccountName: strimzi-cluster-operator
      containers:
        # ...
        env:
          # ...
          - name: "HTTP_PROXY"
            value: "http://proxy.com" 1
          - name: "HTTPS_PROXY"
            value: "https://proxy.com" 2
          - name: "NO_PROXY"
            value: "internal.com, other.domain.com" 3
  # ...
Alternatively, edit the Deployment directly:
oc edit deployment strimzi-cluster-operator
If you updated the YAML file instead of editing the Deployment directly, apply the changes:
oc apply -f install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml
Additional resources
8.5.5. Disabling FIPS mode using Cluster Operator configuration
AMQ Streams automatically switches to FIPS mode when running on a FIPS-enabled OpenShift cluster. Disable FIPS mode by setting the FIPS_MODE environment variable to disabled in the deployment configuration for the Cluster Operator. With FIPS mode disabled, AMQ Streams automatically disables FIPS in the OpenJDK for all components, and AMQ Streams is not FIPS compliant. The AMQ Streams operators, as well as all operands, run in the same way as if they were running on an OpenShift cluster without FIPS enabled.
Procedure
To disable the FIPS mode in the Cluster Operator, update its Deployment configuration (install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml) and add the FIPS_MODE environment variable.
Example FIPS configuration for the Cluster Operator
apiVersion: apps/v1
kind: Deployment
spec:
  # ...
  template:
    spec:
      serviceAccountName: strimzi-cluster-operator
      containers:
        # ...
        env:
          # ...
          - name: "FIPS_MODE"
            value: "disabled" 1
  # ...
- 1
- Disables the FIPS mode.
Alternatively, edit the Deployment directly:
oc edit deployment strimzi-cluster-operator
If you updated the YAML file instead of editing the Deployment directly, apply the changes:
oc apply -f install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml
8.6. Configuring Kafka Connect
Update the spec properties of the KafkaConnect custom resource to configure your Kafka Connect deployment. Use Kafka Connect to set up external data connections to your Kafka cluster.
For a deeper understanding of the Kafka Connect cluster configuration options, refer to the AMQ Streams Custom Resource API Reference.
KafkaConnector configuration
KafkaConnector resources allow you to create and manage connector instances for Kafka Connect in an OpenShift-native way.
In your Kafka Connect configuration, you enable KafkaConnectors for a Kafka Connect cluster by adding the strimzi.io/use-connector-resources annotation. You can also add a build configuration so that AMQ Streams automatically builds a container image with the connector plugins you require for your data connections. External configuration for Kafka Connect connectors is specified through the externalConfiguration property.
To manage connectors, you can use KafkaConnector custom resources or the Kafka Connect REST API. KafkaConnector resources must be deployed to the same namespace as the Kafka Connect cluster they link to. For more information on using these methods to create, reconfigure, or delete connectors, see Adding connectors.
Connector configuration is passed to Kafka Connect as part of an HTTP request and stored within Kafka itself. ConfigMaps and Secrets are standard OpenShift resources used for storing configurations and confidential data. You can use ConfigMaps and Secrets to configure certain elements of a connector. You can then reference the configuration values in HTTP REST commands, which keeps the configuration separate and more secure, if needed. This method applies especially to confidential data, such as usernames, passwords, or certificates.
Handling high volumes of messages
You can tune the configuration to handle high volumes of messages. For more information, see Handling high volumes of messages.
Example KafkaConnect custom resource configuration
apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect 1 metadata: name: my-connect-cluster annotations: strimzi.io/use-connector-resources: "true" 2 spec: replicas: 3 3 authentication: 4 type: tls certificateAndKey: certificate: source.crt key: source.key secretName: my-user-source bootstrapServers: my-cluster-kafka-bootstrap:9092 5 tls: 6 trustedCertificates: - secretName: my-cluster-cluster-cert certificate: ca.crt - secretName: my-cluster-cluster-cert certificate: ca2.crt config: 7 group.id: my-connect-cluster offset.storage.topic: my-connect-cluster-offsets config.storage.topic: my-connect-cluster-configs status.storage.topic: my-connect-cluster-status key.converter: org.apache.kafka.connect.json.JsonConverter value.converter: org.apache.kafka.connect.json.JsonConverter key.converter.schemas.enable: true value.converter.schemas.enable: true config.storage.replication.factor: 3 offset.storage.replication.factor: 3 status.storage.replication.factor: 3 build: 8 output: 9 type: docker image: my-registry.io/my-org/my-connect-cluster:latest pushSecret: my-registry-credentials plugins: 10 - name: debezium-postgres-connector artifacts: - type: tgz url: https://repo1.maven.org/maven2/io/debezium/debezium-connector-postgres/2.1.3.Final/debezium-connector-postgres-2.1.3.Final-plugin.tar.gz sha512sum: c4ddc97846de561755dc0b021a62aba656098829c70eb3ade3b817ce06d852ca12ae50c0281cc791a5a131cb7fc21fb15f4b8ee76c6cae5dd07f9c11cb7c6e79 - name: camel-telegram artifacts: - type: tgz url: https://repo.maven.apache.org/maven2/org/apache/camel/kafkaconnector/camel-telegram-kafka-connector/0.11.5/camel-telegram-kafka-connector-0.11.5-package.tar.gz sha512sum: d6d9f45e0d1dbfcc9f6d1c7ca2046168c764389c78bc4b867dab32d24f710bb74ccf2a007d7d7a8af2dfca09d9a52ccbc2831fc715c195a3634cca055185bd91 externalConfiguration: 11 env: - name: AWS_ACCESS_KEY_ID valueFrom: secretKeyRef: name: aws-creds key: awsAccessKey - name: AWS_SECRET_ACCESS_KEY valueFrom: secretKeyRef: name: aws-creds key: awsSecretAccessKey resources: 12 requests: cpu: "1" memory: 2Gi limits: cpu: "2" memory: 2Gi logging: 13 type: inline loggers: log4j.rootLogger: INFO readinessProbe: 14 initialDelaySeconds: 15 timeoutSeconds: 5 livenessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 metricsConfig: 15 type: jmxPrometheusExporter valueFrom: configMapKeyRef: name: my-config-map key: my-key jvmOptions: 16 "-Xmx": "1g" "-Xms": "1g" image: my-org/my-image:latest 17 rack: topologyKey: topology.kubernetes.io/zone 18 template: 19 pod: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: application operator: In values: - postgresql - mongodb topologyKey: "kubernetes.io/hostname" connectContainer: 20 env: - name: OTEL_SERVICE_NAME value: my-otel-service - name: OTEL_EXPORTER_OTLP_ENDPOINT value: "http://otlp-host:4317" tracing: type: opentelemetry 21
- 1
- Use KafkaConnect.
- Enables KafkaConnectors for the Kafka Connect cluster.
- 3
- The number of replica nodes for the workers that run tasks.
- 4
- Authentication for the Kafka Connect cluster, specified as mTLS, token-based OAuth, SASL-based SCRAM-SHA-256/SCRAM-SHA-512, or PLAIN. By default, Kafka Connect connects to Kafka brokers using a plain text connection.
- 5
- Bootstrap server for connection to the Kafka cluster.
- 6
- TLS encryption with key names under which TLS certificates are stored in X.509 format for the cluster. If certificates are stored in the same secret, it can be listed multiple times.
- 7
- Kafka Connect configuration of workers (not connectors). Standard Apache Kafka configuration may be provided, restricted to those properties not managed directly by AMQ Streams.
- 8
- Build configuration properties for building a container image with connector plugins automatically.
- 9
- (Required) Configuration of the container registry where new images are pushed.
- 10
- (Required) List of connector plugins and their artifacts to add to the new container image. Each plugin must be configured with at least one artifact.
- 11
- External configuration for connectors using environment variables, as shown here, or volumes. You can also use configuration provider plugins to load configuration values from external sources.
- 12
- Requests for reservation of supported resources, currently cpu and memory, and limits to specify the maximum resources that can be consumed.
- 13
- Specified Kafka Connect loggers and log levels added directly (inline) or indirectly (external) through a ConfigMap. A custom Log4j configuration must be placed under the log4j.properties or log4j2.properties key in the ConfigMap. For the Kafka Connect log4j.rootLogger logger, you can set the log level to INFO, ERROR, WARN, TRACE, DEBUG, FATAL, or OFF.
- 14
- Healthchecks to know when to restart a container (liveness) and when a container can accept traffic (readiness).
- 15
- Prometheus metrics, which are enabled by referencing a ConfigMap containing configuration for the Prometheus JMX exporter in this example. You can enable metrics without further configuration using a reference to a ConfigMap containing an empty file under metricsConfig.valueFrom.configMapKeyRef.key.
- JVM configuration options to optimize performance for the Virtual Machine (VM) running Kafka Connect.
- 17
- ADVANCED OPTION: Container image configuration, which is recommended only in special situations.
- 18
- SPECIALIZED OPTION: Rack awareness configuration for the deployment. This is a specialized option intended for a deployment within the same location, not across regions. Use this option if you want connectors to consume from the closest replica rather than the leader replica. In certain cases, consuming from the closest replica can improve network utilization or reduce costs. The topologyKey must match a node label containing the rack ID. The example used in this configuration specifies a zone using the standard topology.kubernetes.io/zone label. To consume from the closest replica, enable the RackAwareReplicaSelector in the Kafka broker configuration.
- Template customization. Here a pod is scheduled with anti-affinity, so the pod is not scheduled on nodes with the same hostname.
- 20
- Environment variables are set for distributed tracing.
- 21
- Distributed tracing is enabled by using OpenTelemetry.
8.6.1. Configuring Kafka Connect for multiple instances
By default, AMQ Streams configures the group ID and names of the internal topics used by Kafka Connect. When running multiple instances of Kafka Connect, you must change these default settings using the following config properties:
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect
spec:
  config:
    group.id: my-connect-cluster 1
    offset.storage.topic: my-connect-cluster-offsets 2
    config.storage.topic: my-connect-cluster-configs 3
    status.storage.topic: my-connect-cluster-status 4
    # ...
  # ...
Values for the three topics must be the same for all instances with the same group.id.
Unless you modify these default settings, each instance connecting to the same Kafka cluster is deployed with the same values. In practice, this means all instances form a cluster and use the same internal topics.
Multiple instances attempting to use the same internal topics will cause unexpected errors, so you must change the values of these properties for each instance.
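For example, a sketch of a second Kafka Connect instance connecting to the same Kafka cluster, with its own (illustrative) group ID and internal topic names:
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect-2
spec:
  config:
    group.id: my-connect-cluster-2                       # must differ from the first instance
    offset.storage.topic: my-connect-cluster-2-offsets
    config.storage.topic: my-connect-cluster-2-configs
    status.storage.topic: my-connect-cluster-2-status
  # ...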
8.6.2. Configuring Kafka Connect user authorization
When using authorization in Kafka, a Kafka Connect user requires read/write access to the cluster group and internal topics of Kafka Connect. This procedure outlines how access is granted using simple authorization and ACLs.
Properties for the Kafka Connect cluster group ID and internal topics are configured by AMQ Streams by default. Alternatively, you can define them explicitly in the spec of the KafkaConnect resource. This is useful when configuring Kafka Connect for multiple instances, as the values for the group ID and topics must differ when running multiple Kafka Connect instances.
Simple authorization uses ACL rules managed by the Kafka AclAuthorizer and StandardAuthorizer plugins to ensure appropriate access levels. For more information on configuring a KafkaUser resource to use simple authorization, see the AclRule schema reference.
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
Procedure
Edit the authorization property in the KafkaUser resource to provide access rights to the user.
Access rights are configured for the Kafka Connect topics and cluster group using literal name values. The following table shows the default names configured for the topics and cluster group ID.
Table 8.2. Names for the access rights configuration
Property                Name
offset.storage.topic    connect-cluster-offsets
status.storage.topic    connect-cluster-status
config.storage.topic    connect-cluster-configs
group                   connect-cluster
In this example configuration, the default names are used to specify access rights. If you are using different names for a Kafka Connect instance, use those names in the ACLs configuration.
Example configuration for simple authorization
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: my-user
  labels:
    strimzi.io/cluster: my-cluster
spec:
  # ...
  authorization:
    type: simple
    acls:
      # access to offset.storage.topic
      - resource:
          type: topic
          name: connect-cluster-offsets
          patternType: literal
        operations:
          - Create
          - Describe
          - Read
          - Write
        host: "*"
      # access to status.storage.topic
      - resource:
          type: topic
          name: connect-cluster-status
          patternType: literal
        operations:
          - Create
          - Describe
          - Read
          - Write
        host: "*"
      # access to config.storage.topic
      - resource:
          type: topic
          name: connect-cluster-configs
          patternType: literal
        operations:
          - Create
          - Describe
          - Read
          - Write
        host: "*"
      # cluster group
      - resource:
          type: group
          name: connect-cluster
          patternType: literal
        operations:
          - Read
        host: "*"
Create or update the resource.
oc apply -f KAFKA-USER-CONFIG-FILE
8.6.3. Manually stopping or pausing Kafka Connect connectors
If you are using KafkaConnector resources to configure connectors, use the state configuration to either stop or pause a connector. In contrast to the paused state, where the connector and tasks remain instantiated, stopping a connector retains only the configuration, with no active processes. Stopping a connector from running may be more suitable for longer durations than just pausing. While a paused connector is quicker to resume, a stopped connector has the advantages of freeing up memory and resources.
The state configuration replaces the (deprecated) pause configuration in the KafkaConnectorSpec schema, which allows pauses on connectors. If you were previously using the pause configuration to pause connectors, we encourage you to transition to using the state configuration only to avoid conflicts.
Prerequisites
- The Cluster Operator is running.
Procedure
Find the name of the KafkaConnector custom resource that controls the connector you want to pause or stop:
oc get KafkaConnector
Edit the KafkaConnector resource to stop or pause the connector.
Example configuration for stopping a Kafka Connect connector
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: my-source-connector
  labels:
    strimzi.io/cluster: my-connect-cluster
spec:
  class: org.apache.kafka.connect.file.FileStreamSourceConnector
  tasksMax: 2
  config:
    file: "/opt/kafka/LICENSE"
    topic: my-topic
  state: stopped
  # ...
Change the state configuration to stopped or paused. The default state for the connector when this property is not set is running.
Apply the changes to the KafkaConnector configuration.
You can resume the connector by changing state to running or removing the configuration.
Alternatively, you can expose the Kafka Connect API and use the stop and pause endpoints to stop a connector from running. For example, PUT /connectors/<connector_name>/stop. You can then use the resume endpoint to restart it.
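As a minimal sketch, assuming the Kafka Connect REST API has been made reachable locally on port 8083 (for example, via oc port-forward; the service name below is an assumption) and a connector named my-source-connector:
# Forward the Kafka Connect REST API to localhost (service name is an example)
oc port-forward service/my-connect-cluster-connect-api 8083:8083 &

# Stop the connector; only its configuration is retained
curl -X PUT http://localhost:8083/connectors/my-source-connector/stop

# Resume the connector later
curl -X PUT http://localhost:8083/connectors/my-source-connector/resume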
8.6.4. Manually restarting Kafka Connect connectors
If you are using KafkaConnector resources to manage connectors, use the strimzi.io/restart annotation to manually trigger a restart of a connector.
Prerequisites
- The Cluster Operator is running.
Procedure
Find the name of the KafkaConnector custom resource that controls the Kafka connector you want to restart:
oc get KafkaConnector
Restart the connector by annotating the KafkaConnector resource in OpenShift:
oc annotate KafkaConnector <kafka_connector_name> strimzi.io/restart=true
The restart annotation is set to true.
Wait for the next reconciliation to occur (every two minutes by default).
The Kafka connector is restarted, as long as the annotation was detected by the reconciliation process. When Kafka Connect accepts the restart request, the annotation is removed from the KafkaConnector custom resource.
8.6.5. Manually restarting Kafka Connect connector tasks
If you are using KafkaConnector resources to manage connectors, use the strimzi.io/restart-task annotation to manually trigger a restart of a connector task.
Prerequisites
- The Cluster Operator is running.
Procedure
Find the name of the KafkaConnector custom resource that controls the Kafka connector task you want to restart:
oc get KafkaConnector
Find the ID of the task to be restarted from the KafkaConnector custom resource:
oc describe KafkaConnector <kafka_connector_name>
Task IDs are non-negative integers, starting from 0.
Use the ID to restart the connector task by annotating the KafkaConnector resource in OpenShift:
oc annotate KafkaConnector <kafka_connector_name> strimzi.io/restart-task=0
In this example, task 0 is restarted.
Wait for the next reconciliation to occur (every two minutes by default).
The Kafka connector task is restarted, as long as the annotation was detected by the reconciliation process. When Kafka Connect accepts the restart request, the annotation is removed from the KafkaConnector custom resource.
8.7. Configuring Kafka MirrorMaker 2
Update the spec properties of the KafkaMirrorMaker2 custom resource to configure your MirrorMaker 2 deployment. MirrorMaker 2 uses source cluster configuration for data consumption and target cluster configuration for data output.
MirrorMaker 2 is based on the Kafka Connect framework, with connectors managing the transfer of data between clusters.
You configure MirrorMaker 2 to define the Kafka Connect deployment, including the connection details of the source and target clusters, and then run a set of MirrorMaker 2 connectors to make the connection.
MirrorMaker 2 supports topic configuration synchronization between the source and target clusters. You specify source topics in the MirrorMaker 2 configuration. MirrorMaker 2 monitors the source topics. MirrorMaker 2 detects and propagates changes to the source topics to the remote topics. Changes might include automatically creating missing topics and partitions.
In most cases you write to local topics and read from remote topics. Though write operations are not prevented on remote topics, they should be avoided.
The configuration must specify:
- Each Kafka cluster
- Connection information for each cluster, including authentication
- The replication flow and direction
  - Cluster to cluster
  - Topic to topic
For a deeper understanding of the Kafka MirrorMaker 2 cluster configuration options, refer to the AMQ Streams Custom Resource API Reference.
MirrorMaker 2 resource configuration differs from the previous version of MirrorMaker, which is now deprecated. There is currently no legacy support, so any resources must be manually converted into the new format.
Default configuration
MirrorMaker 2 provides default configuration values for properties such as replication factors. A minimal configuration, with defaults left unchanged, would be something like this example:
Minimal configuration for MirrorMaker 2
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaMirrorMaker2
metadata:
  name: my-mirror-maker2
spec:
  version: 3.6.0
  connectCluster: "my-cluster-target"
  clusters:
    - alias: "my-cluster-source"
      bootstrapServers: my-cluster-source-kafka-bootstrap:9092
    - alias: "my-cluster-target"
      bootstrapServers: my-cluster-target-kafka-bootstrap:9092
  mirrors:
    - sourceCluster: "my-cluster-source"
      targetCluster: "my-cluster-target"
      sourceConnector: {}
You can configure access control for source and target clusters using mTLS or SASL authentication. This procedure shows a configuration that uses TLS encryption and mTLS authentication for the source and target cluster.
You can specify the topics and consumer groups you wish to replicate from a source cluster in the KafkaMirrorMaker2 resource. You use the topicsPattern and groupsPattern properties to do this. You can provide a list of names or use a regular expression. By default, all topics and consumer groups are replicated if you do not set the topicsPattern and groupsPattern properties. You can also replicate all topics and consumer groups by using ".*" as a regular expression. However, try to specify only the topics and consumer groups you need to avoid causing any unnecessary extra load on the cluster.
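For example, a sketch of restricting replication to a subset of topics and consumer groups, reusing the cluster aliases from the minimal configuration above; the patterns shown are illustrative:
mirrors:
  - sourceCluster: "my-cluster-source"
    targetCluster: "my-cluster-target"
    sourceConnector: {}
    topicsPattern: "orders.*"            # only topics matching this pattern are replicated
    groupsPattern: "orders-consumers.*"  # only matching consumer groups are replicated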
Handling high volumes of messages
You can tune the configuration to handle high volumes of messages. For more information, see Handling high volumes of messages.
Example KafkaMirrorMaker2 custom resource configuration
apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mirror-maker2 spec: version: 3.6.0 1 replicas: 3 2 connectCluster: "my-cluster-target" 3 clusters: 4 - alias: "my-cluster-source" 5 authentication: 6 certificateAndKey: certificate: source.crt key: source.key secretName: my-user-source type: tls bootstrapServers: my-cluster-source-kafka-bootstrap:9092 7 tls: 8 trustedCertificates: - certificate: ca.crt secretName: my-cluster-source-cluster-ca-cert - alias: "my-cluster-target" 9 authentication: 10 certificateAndKey: certificate: target.crt key: target.key secretName: my-user-target type: tls bootstrapServers: my-cluster-target-kafka-bootstrap:9092 11 config: 12 config.storage.replication.factor: 1 offset.storage.replication.factor: 1 status.storage.replication.factor: 1 tls: 13 trustedCertificates: - certificate: ca.crt secretName: my-cluster-target-cluster-ca-cert mirrors: 14 - sourceCluster: "my-cluster-source" 15 targetCluster: "my-cluster-target" 16 sourceConnector: 17 tasksMax: 10 18 autoRestart: 19 enabled: true config replication.factor: 1 20 offset-syncs.topic.replication.factor: 1 21 sync.topic.acls.enabled: "false" 22 refresh.topics.interval.seconds: 60 23 replication.policy.class: "org.apache.kafka.connect.mirror.IdentityReplicationPolicy" 24 heartbeatConnector: 25 autoRestart: enabled: true config: heartbeats.topic.replication.factor: 1 26 replication.policy.class: "org.apache.kafka.connect.mirror.IdentityReplicationPolicy" checkpointConnector: 27 autoRestart: enabled: true config: checkpoints.topic.replication.factor: 1 28 refresh.groups.interval.seconds: 600 29 sync.group.offsets.enabled: true 30 sync.group.offsets.interval.seconds: 60 31 emit.checkpoints.interval.seconds: 60 32 replication.policy.class: "org.apache.kafka.connect.mirror.IdentityReplicationPolicy" topicsPattern: "topic1|topic2|topic3" 33 groupsPattern: "group1|group2|group3" 34 resources: 35 requests: cpu: "1" memory: 2Gi limits: cpu: "2" memory: 2Gi logging: 36 type: inline loggers: connect.root.logger.level: INFO readinessProbe: 37 initialDelaySeconds: 15 timeoutSeconds: 5 livenessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 jvmOptions: 38 "-Xmx": "1g" "-Xms": "1g" image: my-org/my-image:latest 39 rack: topologyKey: topology.kubernetes.io/zone 40 template: 41 pod: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: application operator: In values: - postgresql - mongodb topologyKey: "kubernetes.io/hostname" connectContainer: 42 env: - name: OTEL_SERVICE_NAME value: my-otel-service - name: OTEL_EXPORTER_OTLP_ENDPOINT value: "http://otlp-host:4317" tracing: type: opentelemetry 43 externalConfiguration: 44 env: - name: AWS_ACCESS_KEY_ID valueFrom: secretKeyRef: name: aws-creds key: awsAccessKey - name: AWS_SECRET_ACCESS_KEY valueFrom: secretKeyRef: name: aws-creds key: awsSecretAccessKey
- 1
- The Kafka Connect and MirrorMaker 2 version, which will always be the same.
- 2
- The number of replica nodes for the workers that run tasks.
- 3
- Kafka cluster alias for Kafka Connect, which must specify the target Kafka cluster. The Kafka cluster is used by Kafka Connect for its internal topics.
- 4
- Specification for the Kafka clusters being synchronized.
- 5
- Cluster alias for the source Kafka cluster.
- 6
- Authentication for the source cluster, specified as mTLS, token-based OAuth, SASL-based SCRAM-SHA-256/SCRAM-SHA-512, or PLAIN.
- 7
- Bootstrap server for connection to the source Kafka cluster.
- 8
- TLS encryption with key names under which TLS certificates are stored in X.509 format for the source Kafka cluster. If certificates are stored in the same secret, it can be listed multiple times.
- 9
- Cluster alias for the target Kafka cluster.
- 10
- Authentication for the target Kafka cluster is configured in the same way as for the source Kafka cluster.
- 11
- Bootstrap server for connection to the target Kafka cluster.
- 12
- Kafka Connect configuration. Standard Apache Kafka configuration may be provided, restricted to those properties not managed directly by AMQ Streams.
- 13
- TLS encryption for the target Kafka cluster is configured in the same way as for the source Kafka cluster.
- 14
- MirrorMaker 2 connectors.
- 15
- Cluster alias for the source cluster used by the MirrorMaker 2 connectors.
- 16
- Cluster alias for the target cluster used by the MirrorMaker 2 connectors.
- 17
- Configuration for the MirrorSourceConnector that creates remote topics. The config overrides the default configuration options.
- 18
- The maximum number of tasks that the connector may create. Tasks handle the data replication and run in parallel. If the infrastructure supports the processing overhead, increasing this value can improve throughput. Kafka Connect distributes the tasks between members of the cluster. If there are more tasks than workers, workers are assigned multiple tasks. For sink connectors, aim to have one task for each topic partition consumed. For source connectors, the number of tasks that can run in parallel may also depend on the external system. The connector creates fewer than the maximum number of tasks if it cannot achieve the parallelism.
- 19
- Enables automatic restarts of failed connectors and tasks. By default, the number of restarts is indefinite, but you can set a maximum on the number of automatic restarts using the maxRestarts property.
- 20
- Replication factor for mirrored topics created at the target cluster.
- 21
- Replication factor for the MirrorSourceConnector offset-syncs internal topic that maps the offsets of the source and target clusters.
- 22
- When ACL rules synchronization is enabled, ACLs are applied to synchronized topics. The default is true. This feature is not compatible with the User Operator. If you are using the User Operator, set this property to false.
- 23
- Optional setting to change the frequency of checks for new topics. The default is for a check every 10 minutes.
- 24
- Adds a policy that overrides the automatic renaming of remote topics. Instead of prepending the name with the name of the source cluster, the topic retains its original name. This optional setting is useful for active/passive backups and data migration. The property must be specified for all connectors. For bidirectional (active/active) replication, use the DefaultReplicationPolicy class to automatically rename remote topics and specify the replication.policy.separator property for all connectors to add a custom separator.
- 25
- Configuration for the MirrorHeartbeatConnector that performs connectivity checks. The config overrides the default configuration options.
- 26
- Replication factor for the heartbeat topic created at the target cluster.
- 27
- Configuration for the MirrorCheckpointConnector that tracks offsets. The config overrides the default configuration options.
- 28
- Replication factor for the checkpoints topic created at the target cluster.
- 29
- Optional setting to change the frequency of checks for new consumer groups. The default is for a check every 10 minutes.
- 30
- Optional setting to synchronize consumer group offsets, which is useful for recovery in an active/passive configuration. Synchronization is not enabled by default.
- 31
- If the synchronization of consumer group offsets is enabled, you can adjust the frequency of the synchronization.
- 32
- Adjusts the frequency of checks for offset tracking. If you change the frequency of offset synchronization, you might also need to adjust the frequency of these checks.
- 33
- Topic replication from the source cluster defined as a comma-separated list or regular expression pattern. The source connector replicates the specified topics. The checkpoint connector tracks offsets for the specified topics. Here we request three topics by name.
- 34
- Consumer group replication from the source cluster defined as a comma-separated list or regular expression pattern. The checkpoint connector replicates the specified consumer groups. Here we request three consumer groups by name.
- 35
- Requests for reservation of supported resources, currently cpu and memory, and limits to specify the maximum resources that can be consumed.
- 36
- Specified Kafka Connect loggers and log levels added directly (inline) or indirectly (external) through a ConfigMap. A custom Log4j configuration must be placed under the log4j.properties or log4j2.properties key in the ConfigMap. For the Kafka Connect log4j.rootLogger logger, you can set the log level to INFO, ERROR, WARN, TRACE, DEBUG, FATAL, or OFF.
- 37
- Healthchecks to know when to restart a container (liveness) and when a container can accept traffic (readiness).
- 38
- JVM configuration options to optimize performance for the Virtual Machine (VM) running Kafka MirrorMaker.
- 39
- ADVANCED OPTION: Container image configuration, which is recommended only in special situations.
- 40
- SPECIALIZED OPTION: Rack awareness configuration for the deployment. This is a specialized option intended for a deployment within the same location, not across regions. Use this option if you want connectors to consume from the closest replica rather than the leader replica. In certain cases, consuming from the closest replica can improve network utilization or reduce costs. The topologyKey must match a node label containing the rack ID. The example used in this configuration specifies a zone using the standard topology.kubernetes.io/zone label. To consume from the closest replica, enable the RackAwareReplicaSelector in the Kafka broker configuration.
- 41
- Template customization. Here a pod is scheduled with anti-affinity, so the pod is not scheduled on nodes with the same hostname.
- 42
- Environment variables are set for distributed tracing.
- 43
- Distributed tracing is enabled by using OpenTelemetry.
- 44
- External configuration for an OpenShift Secret mounted to Kafka MirrorMaker as an environment variable. You can also use configuration provider plugins to load configuration values from external sources.
8.7.1. Configuring active/active or active/passive modes
You can use MirrorMaker 2 in active/passive or active/active cluster configurations.
- active/active cluster configuration
- An active/active configuration has two active clusters replicating data bidirectionally. Applications can use either cluster. Each cluster can provide the same data. In this way, you can make the same data available in different geographical locations. As consumer groups are active in both clusters, consumer offsets for replicated topics are not synchronized back to the source cluster.
- active/passive cluster configuration
- An active/passive configuration has an active cluster replicating data to a passive cluster. The passive cluster remains on standby. You might use the passive cluster for data recovery in the event of system failure.
The expectation is that producers and consumers connect to active clusters only. A MirrorMaker 2 cluster is required at each target destination.
8.7.1.1. Bidirectional replication (active/active)
The MirrorMaker 2 architecture supports bidirectional replication in an active/active cluster configuration.
Each cluster replicates the data of the other cluster using the concept of source and remote topics. As the same topics are stored in each cluster, remote topics are automatically renamed by MirrorMaker 2 to represent the source cluster. The name of the originating cluster is prepended to the name of the topic.
Figure 8.1. Topic renaming

By flagging the originating cluster, topics are not replicated back to that cluster.
The concept of replication through remote topics is useful when configuring an architecture that requires data aggregation. Consumers can subscribe to source and remote topics within the same cluster, without the need for a separate aggregation cluster.
8.7.1.2. Unidirectional replication (active/passive)
The MirrorMaker 2 architecture supports unidirectional replication in an active/passive cluster configuration.
You can use an active/passive cluster configuration to make backups or migrate data to another cluster. In this situation, you might not want automatic renaming of remote topics.
You can override automatic renaming by adding IdentityReplicationPolicy
to the source connector configuration. With this configuration applied, topics retain their original names.
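As a sketch, the policy is set in the source connector configuration, and also in the checkpoint connector so that checkpoints map to the correct topics. The class name shown assumes the IdentityReplicationPolicy implementation shipped with Kafka; the cluster aliases are illustrative only.
Example configuration to retain original topic names
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaMirrorMaker2
metadata:
  name: my-mirror-maker2
spec:
  # ...
  mirrors:
    - sourceCluster: "my-cluster-source"
      targetCluster: "my-cluster-target"
      sourceConnector:
        config:
          # Keep original topic names on the target cluster (no source-cluster prefix)
          replication.policy.class: org.apache.kafka.connect.mirror.IdentityReplicationPolicy
      checkpointConnector:
        config:
          # Use the same policy so checkpoints are applied to the correct topics
          replication.policy.class: org.apache.kafka.connect.mirror.IdentityReplicationPolicy
  # ...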
8.7.2. Configuring MirrorMaker 2 for multiple instances
By default, AMQ Streams configures the group ID and names of the internal topics used by the Kafka Connect framework that MirrorMaker 2 runs on. When running multiple instances of MirrorMaker 2, and they share the same connectCluster
value, you must change these default settings using the following config
properties:
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaMirrorMaker2
metadata:
  name: my-mirror-maker2
spec:
  connectCluster: "my-cluster-target"
  clusters:
    - alias: "my-cluster-target"
      config:
        group.id: my-connect-cluster 1
        offset.storage.topic: my-connect-cluster-offsets 2
        config.storage.topic: my-connect-cluster-configs 3
        status.storage.topic: my-connect-cluster-status 4
        # ...
  # ...
Values for the three topics must be the same for all instances with the same group.id
.
The connectCluster
setting specifies the alias of the target Kafka cluster used by Kafka Connect for its internal topics. As a result, modifications to the connectCluster
, group ID, and internal topic naming configuration are specific to the target Kafka cluster. You don’t need to make changes if two MirrorMaker 2 instances are using the same source Kafka cluster or in an active-active mode where each MirrorMaker 2 instance has a different connectCluster
setting and target cluster.
However, if multiple MirrorMaker 2 instances share the same connectCluster
, each instance connecting to the same target Kafka cluster is deployed with the same values. In practice, this means all instances form a cluster and use the same internal topics.
Multiple instances attempting to use the same internal topics will cause unexpected errors, so you must change the values of these properties for each instance.
8.7.3. Configuring MirrorMaker 2 connectors
Use MirrorMaker 2 connector configuration for the internal connectors that orchestrate the synchronization of data between Kafka clusters.
MirrorMaker 2 consists of the following connectors:
- MirrorSourceConnector
- The source connector replicates topics from a source cluster to a target cluster. It also replicates ACLs and is necessary for the MirrorCheckpointConnector to run.
- MirrorCheckpointConnector
- The checkpoint connector periodically tracks offsets. If enabled, it also synchronizes consumer group offsets between the source and target cluster.
- MirrorHeartbeatConnector
- The heartbeat connector periodically checks connectivity between the source and target cluster.
The following table describes connector properties and the connectors you configure to use them.
Property | sourceConnector | checkpointConnector | heartbeatConnector |
---|---|---|---|
| ✓ | ✓ | ✓ |
| ✓ | ✓ | ✓ |
| ✓ | ✓ | ✓ |
| ✓ | ✓ | |
| ✓ | ✓ | |
| ✓ | ✓ | |
| ✓ | ||
| ✓ | ||
| ✓ | ||
| ✓ | ||
| ✓ | ||
| ✓ | ||
| ✓ | ||
| ✓ | ||
| ✓ | ||
| ✓ | ||
| ✓ | ||
| ✓ | ||
| ✓ | ||
| ✓ | ||
| ✓ | ||
| ✓ | ||
| ✓ | ||
| ✓ | ||
| ✓ | ||
| ✓ | ||
| ✓ | ||
| ✓ |
8.7.3.1. Changing the location of the consumer group offsets topic
MirrorMaker 2 tracks offsets for consumer groups using internal topics.
- offset-syncs topic
- The offset-syncs topic maps the source and target offsets for replicated topic partitions from record metadata.
- checkpoints topic
- The checkpoints topic maps the last committed offset in the source and target cluster for replicated topic partitions in each consumer group.
As they are used internally by MirrorMaker 2, you do not interact directly with these topics.
MirrorCheckpointConnector
emits checkpoints for offset tracking. Offsets for the checkpoints
topic are tracked at predetermined intervals through configuration. Both topics enable replication to be fully restored from the correct offset position on failover.
The location of the offset-syncs
topic is the source
cluster by default. You can use the offset-syncs.topic.location
connector configuration to change this to the target
cluster. You need read/write access to the cluster that contains the topic. Using the target cluster as the location of the offset-syncs
topic allows you to use MirrorMaker 2 even if you have only read access to the source cluster.
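A minimal sketch of moving the topic to the target cluster follows; the cluster aliases are illustrative, and the same setting should be used in every connector that reads the topic, as described later in this section.
Example configuration to store the offset-syncs topic on the target cluster
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaMirrorMaker2
metadata:
  name: my-mirror-maker2
spec:
  # ...
  mirrors:
    - sourceCluster: "my-cluster-source"
      targetCluster: "my-cluster-target"
      sourceConnector:
        config:
          # Store the offset-syncs topic on the target cluster instead of the source
          offset-syncs.topic.location: target
      checkpointConnector:
        config:
          # Keep the setting aligned across the connectors that use it
          offset-syncs.topic.location: target
  # ...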
8.7.3.2. Synchronizing consumer group offsets
The __consumer_offsets
topic stores information on committed offsets for each consumer group. Offset synchronization periodically transfers the consumer offsets for the consumer groups of a source cluster into the consumer offsets topic of a target cluster.
Offset synchronization is particularly useful in an active/passive configuration. If the active cluster goes down, consumer applications can switch to the passive (standby) cluster and pick up from the last transferred offset position.
To use topic offset synchronization, enable the synchronization by adding sync.group.offsets.enabled
to the checkpoint connector configuration, and setting the property to true
. Synchronization is disabled by default.
When using the IdentityReplicationPolicy
in the source connector, it also has to be configured in the checkpoint connector configuration. This ensures that the mirrored consumer offsets will be applied for the correct topics.
Consumer offsets are only synchronized for consumer groups that are not active in the target cluster. If the consumer groups are in the target cluster, the synchronization cannot be performed and an UNKNOWN_MEMBER_ID
error is returned.
If enabled, the synchronization of offsets from the source cluster is made periodically. You can change the frequency by adding sync.group.offsets.interval.seconds
and emit.checkpoints.interval.seconds
to the checkpoint connector configuration. The properties specify the frequency in seconds that the consumer group offsets are synchronized, and the frequency of checkpoints emitted for offset tracking. The default for both properties is 60 seconds. You can also change the frequency of checks for new consumer groups using the refresh.groups.interval.seconds
property, which is performed every 10 minutes by default.
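The following sketch shows a checkpoint connector configuration that enables offset synchronization and sets the intervals discussed above; the values shown are the documented defaults and are illustrative rather than tuning guidance.
Example checkpoint connector configuration for offset synchronization
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaMirrorMaker2
metadata:
  name: my-mirror-maker2
spec:
  # ...
  mirrors:
    - sourceCluster: "my-cluster-source"
      targetCluster: "my-cluster-target"
      checkpointConnector:
        config:
          # Enable consumer group offset synchronization (disabled by default)
          sync.group.offsets.enabled: "true"
          # Synchronize consumer group offsets every 60 seconds (the default)
          sync.group.offsets.interval.seconds: 60
          # Emit checkpoints for offset tracking every 60 seconds (the default)
          emit.checkpoints.interval.seconds: 60
          # Check for new consumer groups every 10 minutes (the default)
          refresh.groups.interval.seconds: 600
  # ...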
Because the synchronization is time-based, any switchover by consumers to a passive cluster will likely result in some duplication of messages.
If you have an application written in Java, you can use the RemoteClusterUtils.java
utility to synchronize offsets through the application. The utility fetches remote offsets for a consumer group from the checkpoints
topic.
8.7.3.3. Deciding when to use the heartbeat connector
The heartbeat connector emits heartbeats to check connectivity between source and target Kafka clusters. An internal heartbeat
topic is replicated from the source cluster, which means that the heartbeat connector must be connected to the source cluster. The heartbeat
topic is located on the target cluster, which allows it to do the following:
- Identify all source clusters it is mirroring data from
- Verify the liveness and latency of the mirroring process
This helps to make sure that the process is not stuck and has not stopped for any reason. While the heartbeat connector can be a valuable tool for monitoring the mirroring processes between Kafka clusters, it's not always necessary to use it. For example, if your deployment has low network latency or a small number of topics, you might prefer to monitor the mirroring process using log messages or other monitoring tools. If you decide not to use the heartbeat connector, simply omit it from your MirrorMaker 2 configuration.
8.7.3.4. Aligning the configuration of MirrorMaker 2 connectors
To ensure that MirrorMaker 2 connectors work properly, make sure to align certain configuration settings across connectors. Specifically, ensure that the following properties have the same value across all applicable connectors:
- replication.policy.class
- replication.policy.separator
- offset-syncs.topic.location
- topic.filter.class
For example, the value for replication.policy.class
must be the same for the source, checkpoint, and heartbeat connectors. Mismatched or missing settings cause issues with data replication or offset syncing, so it’s essential to keep all relevant connectors configured with the same settings.
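As a sketch, the following configuration sets replication.policy.class consistently across the three connectors; the class name assumes the Kafka-provided IdentityReplicationPolicy and is shown only to illustrate the alignment.
Example of aligned connector configuration
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaMirrorMaker2
metadata:
  name: my-mirror-maker2
spec:
  # ...
  mirrors:
    - sourceCluster: "my-cluster-source"
      targetCluster: "my-cluster-target"
      sourceConnector:
        config:
          replication.policy.class: org.apache.kafka.connect.mirror.IdentityReplicationPolicy
      checkpointConnector:
        config:
          replication.policy.class: org.apache.kafka.connect.mirror.IdentityReplicationPolicy
      heartbeatConnector:
        config:
          replication.policy.class: org.apache.kafka.connect.mirror.IdentityReplicationPolicy
  # ...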
8.7.4. Configuring MirrorMaker 2 connector producers and consumers
MirrorMaker 2 connectors use internal producers and consumers. If needed, you can configure these producers and consumers to override the default settings.
For example, you can increase the batch.size
for the source producer that sends topics to the target Kafka cluster to better accommodate large volumes of messages.
Producer and consumer configuration options depend on the MirrorMaker 2 implementation, and may be subject to change.
The following tables describe the producers and consumers for each of the connectors and where you can add configuration.
Type | Description | Configuration |
---|---|---|
Producer | Sends topic messages to the target Kafka cluster. Consider tuning the configuration of this producer when it is handling large volumes of data. |
|
Producer |
Writes to the |
|
Consumer | Retrieves topic messages from the source Kafka cluster. |
|
Type | Description | Configuration |
---|---|---|
Producer | Emits consumer offset checkpoints. |
|
Consumer |
Loads the |
|
You can set offset-syncs.topic.location
to target
to use the target Kafka cluster as the location of the offset-syncs
topic.
Type | Description | Configuration |
---|---|---|
Producer | Emits heartbeats. |
|
The following example shows how you configure the producers and consumers.
Example configuration for connector producers and consumers
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaMirrorMaker2
metadata:
  name: my-mirror-maker2
spec:
  version: 3.6.0
  # ...
  mirrors:
    - sourceCluster: "my-cluster-source"
      targetCluster: "my-cluster-target"
      sourceConnector:
        tasksMax: 5
        config:
          producer.override.batch.size: 327680
          producer.override.linger.ms: 100
          producer.request.timeout.ms: 30000
          consumer.fetch.max.bytes: 52428800
          # ...
      checkpointConnector:
        config:
          producer.override.request.timeout.ms: 30000
          consumer.max.poll.interval.ms: 300000
          # ...
      heartbeatConnector:
        config:
          producer.override.request.timeout.ms: 30000
          # ...
8.7.5. Specifying a maximum number of data replication tasks
Connectors create the tasks that are responsible for moving data in and out of Kafka. Each connector comprises one or more tasks that are distributed across a group of worker pods that run the tasks. Increasing the number of tasks can help with performance issues when replicating a large number of partitions or synchronizing the offsets of a large number of consumer groups.
Tasks run in parallel. Workers are assigned one or more tasks. A single task is handled by one worker pod, so you don’t need more worker pods than tasks. If there are more tasks than workers, workers handle multiple tasks.
You can specify the maximum number of connector tasks in your MirrorMaker configuration using the tasksMax
property. Without specifying a maximum number of tasks, the default setting is a single task.
The heartbeat connector always uses a single task.
The number of tasks that are started for the source and checkpoint connectors is the lower value between the maximum number of possible tasks and the value for tasksMax
. For the source connector, the maximum number of tasks possible is one for each partition being replicated from the source cluster. For the checkpoint connector, the maximum number of tasks possible is one for each consumer group being replicated from the source cluster. When setting a maximum number of tasks, consider the number of partitions and the hardware resources that support the process.
If the infrastructure supports the processing overhead, increasing the number of tasks can improve throughput and latency. For example, adding more tasks reduces the time taken to poll the source cluster when there is a high number of partitions or consumer groups.
Increasing the number of tasks for the source connector is useful when you have a large number of partitions.
Increasing the number of tasks for the source connector
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaMirrorMaker2
metadata:
  name: my-mirror-maker2
spec:
  # ...
  mirrors:
    - sourceCluster: "my-cluster-source"
      targetCluster: "my-cluster-target"
      sourceConnector:
        tasksMax: 10
  # ...
Increasing the number of tasks for the checkpoint connector is useful when you have a large number of consumer groups.
Increasing the number of tasks for the checkpoint connector
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaMirrorMaker2
metadata:
  name: my-mirror-maker2
spec:
  # ...
  mirrors:
    - sourceCluster: "my-cluster-source"
      targetCluster: "my-cluster-target"
      checkpointConnector:
        tasksMax: 10
  # ...
By default, MirrorMaker 2 checks for new consumer groups every 10 minutes. You can adjust the refresh.groups.interval.seconds
configuration to change the frequency. Take care when reducing the interval, because more frequent checks can have a negative impact on performance.
8.7.5.1. Checking connector task operations
If you are using Prometheus and Grafana to monitor your deployment, you can check MirrorMaker 2 performance. The example MirrorMaker 2 Grafana dashboard provided with AMQ Streams shows the following metrics related to tasks and latency.
- The number of tasks
- Replication latency
- Offset synchronization latency
Additional resources
8.7.6. Synchronizing ACL rules for remote topics
When using MirrorMaker 2 with AMQ Streams, it is possible to synchronize ACL rules for remote topics. However, this feature is only available if you are not using the User Operator.
If you are using type: simple
authorization without the User Operator, the ACL rules that manage access to brokers also apply to remote topics. This means that users who have read access to a source topic can also read its remote equivalent.
OAuth 2.0 authorization does not support access to remote topics in this way.
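If you manage users and ACLs through the User Operator, you would typically leave ACL synchronization disabled. A minimal sketch of the source connector setting, which also appears in the secured deployment example later in this chapter:
Example configuration to disable ACL synchronization
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaMirrorMaker2
metadata:
  name: my-mirror-maker2
spec:
  # ...
  mirrors:
    - sourceCluster: "my-cluster-source"
      targetCluster: "my-cluster-target"
      sourceConnector:
        config:
          # Do not copy ACL rules for mirrored topics to the target cluster
          sync.topic.acls.enabled: "false"
  # ...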
8.7.7. Securing a Kafka MirrorMaker 2 deployment
This procedure describes in outline the configuration required to secure a MirrorMaker 2 deployment.
You need separate configuration for the source Kafka cluster and the target Kafka cluster. You also need separate user configuration to provide the credentials required for MirrorMaker to connect to the source and target Kafka clusters.
For the Kafka clusters, you specify internal listeners for secure connections within an OpenShift cluster and external listeners for connections outside the OpenShift cluster.
You can configure authentication and authorization mechanisms. The security options implemented for the source and target Kafka clusters must be compatible with the security options implemented for MirrorMaker 2.
After you have created the cluster and user authentication credentials, you specify them in your MirrorMaker configuration for secure connections.
In this procedure, the certificates generated by the Cluster Operator are used, but you can replace them by installing your own certificates. You can also configure your listener to use a Kafka listener certificate managed by an external CA (certificate authority).
Before you start
Before starting this procedure, take a look at the example configuration files provided by AMQ Streams. They include examples for securing a deployment of MirrorMaker 2 using mTLS or SCRAM-SHA-512 authentication. The examples specify internal listeners for connecting within an OpenShift cluster.
The examples also provide the configuration for full authorization, including the ACLs that allow user operations on the source and target Kafka clusters.
When configuring user access to source and target Kafka clusters, ACLs must grant access rights to internal MirrorMaker 2 connectors and read/write access to the cluster group and internal topics used by the underlying Kafka Connect framework in the target cluster. If you’ve renamed the cluster group or internal topics, such as when configuring MirrorMaker 2 for multiple instances, use those names in the ACLs configuration.
Simple authorization uses ACL rules managed by the Kafka AclAuthorizer
and StandardAuthorizer
plugins to ensure appropriate access levels. For more information on configuring a KafkaUser
resource to use simple authorization, see the AclRule
schema reference.
Prerequisites
- AMQ Streams is running
- Separate namespaces for source and target clusters
The procedure assumes that the source and target Kafka clusters are installed to separate namespaces. If you want to use the Topic Operator, you’ll need to do this. The Topic Operator only watches a single cluster in a specified namespace.
By separating the clusters into namespaces, you will need to copy the cluster secrets so they can be accessed outside the namespace. You need to reference the secrets in the MirrorMaker configuration.
Procedure
Configure two Kafka resources, one to secure the source Kafka cluster and one to secure the target Kafka cluster.
You can add listener configuration for authentication and enable authorization.
In this example, an internal listener is configured for a Kafka cluster with TLS encryption and mTLS authentication. Kafka simple authorization is enabled.
Example source Kafka cluster configuration with TLS encryption and mTLS authentication
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-source-cluster
spec:
  kafka:
    version: 3.6.0
    replicas: 1
    listeners:
      - name: tls
        port: 9093
        type: internal
        tls: true
        authentication:
          type: tls
    authorization:
      type: simple
    config:
      offsets.topic.replication.factor: 1
      transaction.state.log.replication.factor: 1
      transaction.state.log.min.isr: 1
      default.replication.factor: 1
      min.insync.replicas: 1
      inter.broker.protocol.version: "3.6"
    storage:
      type: jbod
      volumes:
        - id: 0
          type: persistent-claim
          size: 100Gi
          deleteClaim: false
  zookeeper:
    replicas: 1
    storage:
      type: persistent-claim
      size: 100Gi
      deleteClaim: false
  entityOperator:
    topicOperator: {}
    userOperator: {}
Example target Kafka cluster configuration with TLS encryption and mTLS authentication
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-target-cluster
spec:
  kafka:
    version: 3.6.0
    replicas: 1
    listeners:
      - name: tls
        port: 9093
        type: internal
        tls: true
        authentication:
          type: tls
    authorization:
      type: simple
    config:
      offsets.topic.replication.factor: 1
      transaction.state.log.replication.factor: 1
      transaction.state.log.min.isr: 1
      default.replication.factor: 1
      min.insync.replicas: 1
      inter.broker.protocol.version: "3.6"
    storage:
      type: jbod
      volumes:
        - id: 0
          type: persistent-claim
          size: 100Gi
          deleteClaim: false
  zookeeper:
    replicas: 1
    storage:
      type: persistent-claim
      size: 100Gi
      deleteClaim: false
  entityOperator:
    topicOperator: {}
    userOperator: {}
Create or update the Kafka resources in separate namespaces.
oc apply -f <kafka_configuration_file> -n <namespace>
The Cluster Operator creates the listeners and sets up the cluster and client certificate authority (CA) certificates to enable authentication within the Kafka cluster.
The certificates are created in the secret <cluster_name>-cluster-ca-cert.
Configure two KafkaUser resources, one for a user of the source Kafka cluster and one for a user of the target Kafka cluster.
- Configure the same authentication and authorization types as the corresponding source and target Kafka cluster. For example, if you used tls authentication and the simple authorization type in the Kafka configuration for the source Kafka cluster, use the same in the KafkaUser configuration.
- Configure the ACLs needed by MirrorMaker 2 to allow operations on the source and target Kafka clusters.
Example source user configuration for mTLS authentication
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: my-source-user
  labels:
    strimzi.io/cluster: my-source-cluster
spec:
  authentication:
    type: tls
  authorization:
    type: simple
    acls:
      # MirrorSourceConnector
      - resource: # Not needed if offset-syncs.topic.location=target
          type: topic
          name: mm2-offset-syncs.my-target-cluster.internal
        operations:
          - Create
          - DescribeConfigs
          - Read
          - Write
      - resource: # Needed for every topic which is mirrored
          type: topic
          name: "*"
        operations:
          - DescribeConfigs
          - Read
      # MirrorCheckpointConnector
      - resource:
          type: cluster
        operations:
          - Describe
      - resource: # Needed for every group for which offsets are synced
          type: group
          name: "*"
        operations:
          - Describe
      - resource: # Not needed if offset-syncs.topic.location=target
          type: topic
          name: mm2-offset-syncs.my-target-cluster.internal
        operations:
          - Read
Example target user configuration for mTLS authentication
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: my-target-user
  labels:
    strimzi.io/cluster: my-target-cluster
spec:
  authentication:
    type: tls
  authorization:
    type: simple
    acls:
      # cluster group
      - resource:
          type: group
          name: mirrormaker2-cluster
        operations:
          - Read
      # access to config.storage.topic
      - resource:
          type: topic
          name: mirrormaker2-cluster-configs
        operations:
          - Create
          - Describe
          - DescribeConfigs
          - Read
          - Write
      # access to status.storage.topic
      - resource:
          type: topic
          name: mirrormaker2-cluster-status
        operations:
          - Create
          - Describe
          - DescribeConfigs
          - Read
          - Write
      # access to offset.storage.topic
      - resource:
          type: topic
          name: mirrormaker2-cluster-offsets
        operations:
          - Create
          - Describe
          - DescribeConfigs
          - Read
          - Write
      # MirrorSourceConnector
      - resource: # Needed for every topic which is mirrored
          type: topic
          name: "*"
        operations:
          - Create
          - Alter
          - AlterConfigs
          - Write
      # MirrorCheckpointConnector
      - resource:
          type: cluster
        operations:
          - Describe
      - resource:
          type: topic
          name: my-source-cluster.checkpoints.internal
        operations:
          - Create
          - Describe
          - Read
          - Write
      - resource: # Needed for every group for which the offset is synced
          type: group
          name: "*"
        operations:
          - Read
          - Describe
      # MirrorHeartbeatConnector
      - resource:
          type: topic
          name: heartbeats
        operations:
          - Create
          - Describe
          - Write
Note
You can use a certificate issued outside the User Operator by setting type to tls-external. For more information, see the KafkaUserSpec schema reference.
Create or update a KafkaUser resource in each of the namespaces you created for the source and target Kafka clusters.
oc apply -f <kafka_user_configuration_file> -n <namespace>
The User Operator creates the users representing the client (MirrorMaker), and the security credentials used for client authentication, based on the chosen authentication type.
The User Operator creates a new secret with the same name as the KafkaUser resource. The secret contains a private and public key for mTLS authentication. The public key is contained in a user certificate, which is signed by the clients CA.
Configure a KafkaMirrorMaker2 resource with the authentication details to connect to the source and target Kafka clusters.
Example MirrorMaker 2 configuration with TLS encryption and mTLS authentication
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaMirrorMaker2
metadata:
  name: my-mirror-maker-2
spec:
  version: 3.6.0
  replicas: 1
  connectCluster: "my-target-cluster"
  clusters:
    - alias: "my-source-cluster"
      bootstrapServers: my-source-cluster-kafka-bootstrap:9093
      tls: 1
        trustedCertificates:
          - secretName: my-source-cluster-cluster-ca-cert
            certificate: ca.crt
      authentication: 2
        type: tls
        certificateAndKey:
          secretName: my-source-user
          certificate: user.crt
          key: user.key
    - alias: "my-target-cluster"
      bootstrapServers: my-target-cluster-kafka-bootstrap:9093
      tls: 3
        trustedCertificates:
          - secretName: my-target-cluster-cluster-ca-cert
            certificate: ca.crt
      authentication: 4
        type: tls
        certificateAndKey:
          secretName: my-target-user
          certificate: user.crt
          key: user.key
      config:
        # -1 means it will use the default replication factor configured in the broker
        config.storage.replication.factor: -1
        offset.storage.replication.factor: -1
        status.storage.replication.factor: -1
  mirrors:
    - sourceCluster: "my-source-cluster"
      targetCluster: "my-target-cluster"
      sourceConnector:
        config:
          replication.factor: 1
          offset-syncs.topic.replication.factor: 1
          sync.topic.acls.enabled: "false"
      heartbeatConnector:
        config:
          heartbeats.topic.replication.factor: 1
      checkpointConnector:
        config:
          checkpoints.topic.replication.factor: 1
          sync.group.offsets.enabled: "true"
      topicsPattern: "topic1|topic2|topic3"
      groupsPattern: "group1|group2|group3"
- 1
- The TLS certificates for the source Kafka cluster. If they are in a separate namespace, copy the cluster secrets from the namespace of the Kafka cluster.
- 2
- The user authentication for accessing the source Kafka cluster using the TLS mechanism.
- 3
- The TLS certificates for the target Kafka cluster.
- 4
- The user authentication for accessing the target Kafka cluster.
Create or update the KafkaMirrorMaker2 resource in the same namespace as the target Kafka cluster.
oc apply -f <mirrormaker2_configuration_file> -n <namespace_of_target_cluster>
8.7.8. Manually stopping or pausing MirrorMaker 2 connectors
If you are using KafkaMirrorMaker2
resources to configure internal MirrorMaker connectors, use the state
configuration to either stop or pause a connector. In contrast to the paused state, where the connector and tasks remain instantiated, stopping a connector retains only the configuration, with no active processes. Stopping a connector is often more suitable than pausing it for longer periods of inactivity. While a paused connector is quicker to resume, a stopped connector has the advantage of freeing up memory and resources.
The state
configuration replaces the (deprecated) pause
configuration in the KafkaMirrorMaker2ConnectorSpec
schema, which allows pauses on connectors. If you were previously using the pause
configuration to pause connectors, we encourage you to transition to using the state
configuration only to avoid conflicts.
Prerequisites
- The Cluster Operator is running.
Procedure
Find the name of the KafkaMirrorMaker2 custom resource that controls the MirrorMaker 2 connector you want to pause or stop:
oc get KafkaMirrorMaker2
Edit the KafkaMirrorMaker2 resource to stop or pause the connector.
Example configuration for stopping a MirrorMaker 2 connector
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaMirrorMaker2
metadata:
  name: my-mirror-maker2
spec:
  version: 3.6.0
  replicas: 3
  connectCluster: "my-cluster-target"
  clusters:
    # ...
  mirrors:
    - sourceCluster: "my-cluster-source"
      targetCluster: "my-cluster-target"
      sourceConnector:
        tasksMax: 10
        autoRestart:
          enabled: true
        state: stopped
  # ...
Change the state configuration to stopped or paused. The default state for the connector when this property is not set is running.
Apply the changes to the KafkaMirrorMaker2 configuration.
You can resume the connector by changing state to running or removing the configuration.
Alternatively, you can expose the Kafka Connect API and use the stop
and pause
endpoints to stop a connector from running. For example, PUT /connectors/<connector_name>/stop
. You can then use the resume
endpoint to restart it.
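For example, assuming the Kafka Connect REST API of the MirrorMaker 2 cluster is reachable on port 8083 through a hypothetical my-mirror-maker2-connect-api service, the calls might look like the following sketch:
# Stop the connector (configuration retained, no active processes)
curl -X PUT http://my-mirror-maker2-connect-api:8083/connectors/<connector_name>/stop

# Pause the connector (connector and tasks remain instantiated)
curl -X PUT http://my-mirror-maker2-connect-api:8083/connectors/<connector_name>/pause

# Resume a stopped or paused connector
curl -X PUT http://my-mirror-maker2-connect-api:8083/connectors/<connector_name>/resume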
8.7.9. Manually restarting MirrorMaker 2 connectors
Use the strimzi.io/restart-connector
annotation to manually trigger a restart of a MirrorMaker 2 connector.
Prerequisites
- The Cluster Operator is running.
Procedure
Find the name of the KafkaMirrorMaker2 custom resource that controls the Kafka MirrorMaker 2 connector you want to restart:
oc get KafkaMirrorMaker2
Find the name of the Kafka MirrorMaker 2 connector to be restarted from the KafkaMirrorMaker2 custom resource:
oc describe KafkaMirrorMaker2 <mirrormaker_cluster_name>
Use the name of the connector to restart the connector by annotating the KafkaMirrorMaker2 resource in OpenShift:
oc annotate KafkaMirrorMaker2 <mirrormaker_cluster_name> "strimzi.io/restart-connector=<mirrormaker_connector_name>"
In this example, connector my-connector in the my-mirror-maker-2 cluster is restarted:
oc annotate KafkaMirrorMaker2 my-mirror-maker-2 "strimzi.io/restart-connector=my-connector"
Wait for the next reconciliation to occur (every two minutes by default).
The MirrorMaker 2 connector is restarted, as long as the annotation was detected by the reconciliation process. When MirrorMaker 2 accepts the request, the annotation is removed from the
KafkaMirrorMaker2
custom resource.
8.7.10. Manually restarting MirrorMaker 2 connector tasks
Use the strimzi.io/restart-connector-task
annotation to manually trigger a restart of a MirrorMaker 2 connector task.
Prerequisites
- The Cluster Operator is running.
Procedure
Find the name of the KafkaMirrorMaker2 custom resource that controls the MirrorMaker 2 connector task you want to restart:
oc get KafkaMirrorMaker2
Find the name of the connector and the ID of the task to be restarted from the KafkaMirrorMaker2 custom resource:
oc describe KafkaMirrorMaker2 <mirrormaker_cluster_name>
Task IDs are non-negative integers, starting from 0.
Use the name and ID to restart the connector task by annotating the KafkaMirrorMaker2 resource in OpenShift:
oc annotate KafkaMirrorMaker2 <mirrormaker_cluster_name> "strimzi.io/restart-connector-task=<mirrormaker_connector_name>:<task_id>"
In this example, task 0 for connector my-connector in the my-mirror-maker-2 cluster is restarted:
oc annotate KafkaMirrorMaker2 my-mirror-maker-2 "strimzi.io/restart-connector-task=my-connector:0"
Wait for the next reconciliation to occur (every two minutes by default).
The MirrorMaker 2 connector task is restarted, as long as the annotation was detected by the reconciliation process. When MirrorMaker 2 accepts the request, the annotation is removed from the
KafkaMirrorMaker2
custom resource.
8.8. Configuring Kafka MirrorMaker (deprecated)
Update the spec
properties of the KafkaMirrorMaker
custom resource to configure your Kafka MirrorMaker deployment.
You can configure access control for producers and consumers using TLS or SASL authentication. This procedure shows a configuration that uses TLS encryption and mTLS authentication on the consumer and producer side.
For a deeper understanding of the Kafka MirrorMaker cluster configuration options, refer to the AMQ Streams Custom Resource API Reference.
Kafka MirrorMaker 1 (referred to as just MirrorMaker in the documentation) has been deprecated in Apache Kafka 3.0.0 and will be removed in Apache Kafka 4.0.0. As a result, the KafkaMirrorMaker
custom resource which is used to deploy Kafka MirrorMaker 1 has been deprecated in AMQ Streams as well. The KafkaMirrorMaker
resource will be removed from AMQ Streams when we adopt Apache Kafka 4.0.0. As a replacement, use the KafkaMirrorMaker2
custom resource with the IdentityReplicationPolicy
.
Example KafkaMirrorMaker
custom resource configuration
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaMirrorMaker
metadata:
  name: my-mirror-maker
spec:
  replicas: 3 1
  consumer:
    bootstrapServers: my-source-cluster-kafka-bootstrap:9092 2
    groupId: "my-group" 3
    numStreams: 2 4
    offsetCommitInterval: 120000 5
    tls: 6
      trustedCertificates:
        - secretName: my-source-cluster-ca-cert
          certificate: ca.crt
    authentication: 7
      type: tls
      certificateAndKey:
        secretName: my-source-secret
        certificate: public.crt
        key: private.key
    config: 8
      max.poll.records: 100
      receive.buffer.bytes: 32768
  producer:
    bootstrapServers: my-target-cluster-kafka-bootstrap:9092
    abortOnSendFailure: false 9
    tls:
      trustedCertificates:
        - secretName: my-target-cluster-ca-cert
          certificate: ca.crt
    authentication:
      type: tls
      certificateAndKey:
        secretName: my-target-secret
        certificate: public.crt
        key: private.key
    config:
      compression.type: gzip
      batch.size: 8192
  include: "my-topic|other-topic" 10
  resources: 11
    requests:
      cpu: "1"
      memory: 2Gi
    limits:
      cpu: "2"
      memory: 2Gi
  logging: 12
    type: inline
    loggers:
      mirrormaker.root.logger: INFO
  readinessProbe: 13
    initialDelaySeconds: 15
    timeoutSeconds: 5
  livenessProbe:
    initialDelaySeconds: 15
    timeoutSeconds: 5
  metricsConfig: 14
    type: jmxPrometheusExporter
    valueFrom:
      configMapKeyRef:
        name: my-config-map
        key: my-key
  jvmOptions: 15
    "-Xmx": "1g"
    "-Xms": "1g"
  image: my-org/my-image:latest 16
  template: 17
    pod:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: application
                    operator: In
                    values:
                      - postgresql
                      - mongodb
              topologyKey: "kubernetes.io/hostname"
    mirrorMakerContainer: 18
      env:
        - name: OTEL_SERVICE_NAME
          value: my-otel-service
        - name: OTEL_EXPORTER_OTLP_ENDPOINT
          value: "http://otlp-host:4317"
  tracing: 19
    type: opentelemetry
- 1
- The number of replica nodes.
- 2
- Bootstrap servers for consumer and producer.
- 3
- Group ID for the consumer.
- 4
- The number of consumer streams.
- 5
- The offset auto-commit interval in milliseconds.
- 6
- TLS encryption with key names under which TLS certificates are stored in X.509 format for consumer or producer. If certificates are stored in the same secret, it can be listed multiple times.
- 7
- Authentication for consumer or producer, specified as mTLS, token-based OAuth, SASL-based SCRAM-SHA-256/SCRAM-SHA-512, or PLAIN.
- 8
- Kafka configuration options for consumer and producer.
- 9
- If the abortOnSendFailure property is set to true, Kafka MirrorMaker will exit and the container will restart following a send failure for a message.
- 10
- A list of included topics mirrored from source to target Kafka cluster.
- 11
- Requests for reservation of supported resources, currently cpu and memory, and limits to specify the maximum resources that can be consumed.
- 12
- Specified loggers and log levels added directly (inline) or indirectly (external) through a ConfigMap. A custom Log4j configuration must be placed under the log4j.properties or log4j2.properties key in the ConfigMap. MirrorMaker has a single logger called mirrormaker.root.logger. You can set the log level to INFO, ERROR, WARN, TRACE, DEBUG, FATAL or OFF.
- 13
- Healthchecks to know when to restart a container (liveness) and when a container can accept traffic (readiness).
- 14
- Prometheus metrics, which are enabled by referencing a ConfigMap containing configuration for the Prometheus JMX exporter in this example. You can enable metrics without further configuration using a reference to a ConfigMap containing an empty file under metricsConfig.valueFrom.configMapKeyRef.key.
- 15
- JVM configuration options to optimize performance for the Virtual Machine (VM) running Kafka MirrorMaker.
- 16
- ADVANCED OPTION: Container image configuration, which is recommended only in special situations.
- 17
- Template customization. Here a pod is scheduled with anti-affinity, so the pod is not scheduled on nodes with the same hostname.
- 18
- Environment variables are set for distributed tracing.
- 19
- Distributed tracing is enabled by using OpenTelemetry.
Warning
With the abortOnSendFailure property set to false, the producer attempts to send the next message in a topic. The original message might be lost, as there is no attempt to resend a failed message.
8.9. Configuring the Kafka Bridge
Update the spec
properties of the KafkaBridge
custom resource to configure your Kafka Bridge deployment.
To prevent issues arising when client consumer requests are processed by different Kafka Bridge instances, address-based routing must be employed to ensure that requests are routed to the right Kafka Bridge instance. Additionally, each independent Kafka Bridge instance must have a replica, because a Kafka Bridge instance has its own state, which is not shared with other instances.
For a deeper understanding of the Kafka Bridge cluster configuration options, refer to the AMQ Streams Custom Resource API Reference.
Example KafkaBridge
custom resource configuration
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaBridge
metadata:
  name: my-bridge
spec:
  replicas: 3 1
  bootstrapServers: <cluster_name>-cluster-kafka-bootstrap:9092 2
  tls: 3
    trustedCertificates:
      - secretName: my-cluster-cluster-cert
        certificate: ca.crt
      - secretName: my-cluster-cluster-cert
        certificate: ca2.crt
  authentication: 4
    type: tls
    certificateAndKey:
      secretName: my-secret
      certificate: public.crt
      key: private.key
  http: 5
    port: 8080
    cors: 6
      allowedOrigins: "https://strimzi.io"
      allowedMethods: "GET,POST,PUT,DELETE,OPTIONS,PATCH"
  consumer: 7
    config:
      auto.offset.reset: earliest
  producer: 8
    config:
      delivery.timeout.ms: 300000
  resources: 9
    requests:
      cpu: "1"
      memory: 2Gi
    limits:
      cpu: "2"
      memory: 2Gi
  logging: 10
    type: inline
    loggers:
      logger.bridge.level: INFO
      # enabling DEBUG just for send operation
      logger.send.name: "http.openapi.operation.send"
      logger.send.level: DEBUG
  jvmOptions: 11
    "-Xmx": "1g"
    "-Xms": "1g"
  readinessProbe: 12
    initialDelaySeconds: 15
    timeoutSeconds: 5
  livenessProbe:
    initialDelaySeconds: 15
    timeoutSeconds: 5
  image: my-org/my-image:latest 13
  template: 14
    pod:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: application
                    operator: In
                    values:
                      - postgresql
                      - mongodb
              topologyKey: "kubernetes.io/hostname"
    bridgeContainer: 15
      env:
        - name: OTEL_SERVICE_NAME
          value: my-otel-service
        - name: OTEL_EXPORTER_OTLP_ENDPOINT
          value: "http://otlp-host:4317"
  tracing:
    type: opentelemetry 16
- 1
- The number of replica nodes.
- 2
- Bootstrap server for connection to the target Kafka cluster. Use the name of the Kafka cluster as the <cluster_name>.
- 3
- TLS encryption with key names under which TLS certificates are stored in X.509 format for the source Kafka cluster. If certificates are stored in the same secret, it can be listed multiple times.
- 4
- Authentication for the Kafka Bridge cluster, specified as mTLS, token-based OAuth, SASL-based SCRAM-SHA-256/SCRAM-SHA-512, or PLAIN. By default, the Kafka Bridge connects to Kafka brokers without authentication.
- 5
- HTTP access to Kafka brokers.
- 6
- CORS access specifying selected resources and access methods. Additional HTTP headers in requests describe the origins that are permitted access to the Kafka cluster.
- 7
- Consumer configuration options.
- 8
- Producer configuration options.
- 9
- Requests for reservation of supported resources, currently cpu and memory, and limits to specify the maximum resources that can be consumed.
- 10
- Specified Kafka Bridge loggers and log levels added directly (inline) or indirectly (external) through a ConfigMap. A custom Log4j configuration must be placed under the log4j.properties or log4j2.properties key in the ConfigMap. For the Kafka Bridge loggers, you can set the log level to INFO, ERROR, WARN, TRACE, DEBUG, FATAL or OFF.
- 11
- JVM configuration options to optimize performance for the Virtual Machine (VM) running the Kafka Bridge.
- 12
- Healthchecks to know when to restart a container (liveness) and when a container can accept traffic (readiness).
- 13
- Optional: Container image configuration, which is recommended only in special situations.
- 14
- Template customization. Here a pod is scheduled with anti-affinity, so the pod is not scheduled on nodes with the same hostname.
- 15
- Environment variables are set for distributed tracing.
- 16
- Distributed tracing is enabled by using OpenTelemetry.
Additional resources
8.10. Configuring Kafka and ZooKeeper storage
AMQ Streams provides flexibility in configuring the data storage options of Kafka and ZooKeeper.
The supported storage types are:
- Ephemeral (Recommended for development only)
- Persistent
- JBOD (Kafka only; not available for ZooKeeper)
To configure storage, you specify storage
properties in the custom resource of the component. The storage type is set using the storage.type
property.
You can also use the preview of the node pools feature for advanced storage management of the Kafka cluster. You can specify storage configuration unique to each node pool used in the cluster. The same storage properties available to the Kafka
resource are also available to the KafkaNodePool
pool resource.
The storage-related schema references provide more information on the storage configuration properties:
The storage type cannot be changed after a Kafka cluster is deployed.
8.10.1. Data storage considerations
For AMQ Streams to work well, an efficient data storage infrastructure is essential. We strongly recommend using block storage. AMQ Streams is only tested for use with block storage. File storage, such as NFS, is not tested and there is no guarantee it will work.
Choose one of the following options for your block storage:
- A cloud-based block storage solution, such as Amazon Elastic Block Store (EBS)
- Persistent storage using local persistent volumes
- Storage Area Network (SAN) volumes accessed by a protocol such as Fibre Channel or iSCSI
AMQ Streams does not require OpenShift raw block volumes.
8.10.1.1. File systems
Kafka uses a file system for storing messages. AMQ Streams is compatible with the XFS and ext4 file systems, which are commonly used with Kafka. Consider the underlying architecture and requirements of your deployment when choosing and setting up your file system.
For more information, refer to Filesystem Selection in the Kafka documentation.
8.10.1.2. Disk usage
Use separate disks for Apache Kafka and ZooKeeper.
Solid-state drives (SSDs), though not essential, can improve the performance of Kafka in large clusters where data is sent to and received from multiple topics asynchronously. SSDs are particularly effective with ZooKeeper, which requires fast, low latency data access.
You do not need to provision replicated storage because Kafka and ZooKeeper both have built-in data replication.
8.10.2. Ephemeral storage
Ephemeral data storage is transient. All pods on a node share a local ephemeral storage space. Data is retained for as long as the pod that uses it is running, and is lost when the pod is deleted, although a pod can recover data in a highly available environment.
Because of its transient nature, ephemeral storage is only recommended for development and testing.
Ephemeral storage uses emptyDir volumes to store data. An emptyDir volume is created when a pod is assigned to a node. You can set the total amount of storage for the emptyDir using the sizeLimit property.
Ephemeral storage is not suitable for single-node ZooKeeper clusters or Kafka topics with a replication factor of 1.
To use ephemeral storage, you set the storage type configuration in the Kafka
or ZooKeeper
resource to ephemeral
. If you are using the preview of the node pools feature, you can also specify ephemeral
in the storage configuration of individual node pools.
Example ephemeral storage configuration
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    storage:
      type: ephemeral
    # ...
  zookeeper:
    storage:
      type: ephemeral
    # ...
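If you want to cap the space available to the emptyDir volumes, you can add the sizeLimit property. The following is a minimal sketch; the limit values are illustrative only, not sizing guidance.
Example ephemeral storage configuration with a size limit
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    storage:
      type: ephemeral
      # Maximum amount of local storage the emptyDir volume can use (illustrative value)
      sizeLimit: 300Gi
    # ...
  zookeeper:
    storage:
      type: ephemeral
      sizeLimit: 10Gi
    # ...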
8.10.2.1. Mount path of Kafka log directories
The ephemeral volume is used by Kafka brokers as log directories mounted into the following path:
/var/lib/kafka/data/kafka-logIDX
Where IDX
is the Kafka broker pod index. For example /var/lib/kafka/data/kafka-log0
.
8.10.3. Persistent storage
Persistent data storage retains data in the event of system disruption. For pods that use persistent data storage, data is persisted across pod failures and restarts. Because of its permanent nature, persistent storage is recommended for production environments.
To use persistent storage in AMQ Streams, you specify persistent-claim
in the storage configuration of the Kafka
or ZooKeeper
resources. If you are using the preview of the node pools feature, you can also specify persistent-claim
in the storage configuration of individual node pools.
You configure the resource so that pods use Persistent Volume Claims (PVCs) to make storage requests on persistent volumes (PVs). PVs represent storage volumes that are created on demand and are independent of the pods that use them. The PVC requests the amount of storage required when a pod is being created. The underlying storage infrastructure of the PV does not need to be understood. If a PV matches the storage criteria, the PVC is bound to the PV.
You have two options for specifying the storage type:
- storage.type: persistent-claim
- If you choose persistent-claim as the storage type, a single persistent storage volume is defined.
- storage.type: jbod
- When you select jbod as the storage type, you have the flexibility to define an array of persistent storage volumes using unique IDs.
In a production environment, it is recommended to configure the following:
- For Kafka or node pools, set storage.type to jbod with one or more persistent volumes.
- For ZooKeeper, set storage.type as persistent-claim for a single persistent volume.
Persistent storage also has the following configuration options:
- id (optional)
- A storage identification number. This option is mandatory for storage volumes defined in a JBOD storage declaration. Default is 0.
- size (required)
- The size of the persistent volume claim, for example, "1000Gi".
- class (optional)
- PVCs can request different types of persistent storage by specifying a StorageClass. Storage classes define storage profiles and dynamically provision PVs based on that profile. If a storage class is not specified, the storage class marked as default in the OpenShift cluster is used. Persistent storage options might include SAN storage types or local persistent volumes.
- selector (optional)
- Configuration to specify a specific PV. Provides key:value pairs representing the labels of the volume selected.
- deleteClaim (optional)
- Boolean value to specify whether the PVC is deleted when the cluster is uninstalled. Default is false.
Increasing the size of persistent volumes in an existing AMQ Streams cluster is only supported in OpenShift versions that support persistent volume resizing. The persistent volume to be resized must use a storage class that supports volume expansion. For other versions of OpenShift and storage classes that do not support volume expansion, you must decide the necessary storage size before deploying the cluster. Decreasing the size of existing persistent volumes is not possible.
Example persistent storage configuration for Kafka and ZooKeeper
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    storage:
      type: jbod
      volumes:
        - id: 0
          type: persistent-claim
          size: 100Gi
          deleteClaim: false
        - id: 1
          type: persistent-claim
          size: 100Gi
          deleteClaim: false
        - id: 2
          type: persistent-claim
          size: 100Gi
          deleteClaim: false
    # ...
  zookeeper:
    storage:
      type: persistent-claim
      size: 1000Gi
  # ...
Example persistent storage configuration with specific storage class
# ...
storage:
  type: persistent-claim
  size: 500Gi
  class: my-storage-class
# ...
Use a selector
to specify a labeled persistent volume that provides certain features, such as an SSD.
Example persistent storage configuration with selector
# ...
storage:
  type: persistent-claim
  size: 1Gi
  selector:
    hdd-type: ssd
  deleteClaim: true
# ...
8.10.3.1. Storage class overrides
Instead of using the default storage class, you can specify a different storage class for one or more Kafka or ZooKeeper nodes. This is useful, for example, when storage classes are restricted to different availability zones or data centers. You can use the overrides
field for this purpose.
In this example, the default storage class is named my-storage-class
:
Example storage configuration with class overrides
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  labels:
    app: my-cluster
  name: my-cluster
  namespace: myproject
spec:
  # ...
  kafka:
    replicas: 3
    storage:
      type: jbod
      volumes:
        - id: 0
          type: persistent-claim
          size: 100Gi
          deleteClaim: false
          class: my-storage-class
          overrides:
            - broker: 0
              class: my-storage-class-zone-1a
            - broker: 1
              class: my-storage-class-zone-1b
            - broker: 2
              class: my-storage-class-zone-1c
      # ...
    # ...
  zookeeper:
    replicas: 3
    storage:
      deleteClaim: true
      size: 100Gi
      type: persistent-claim
      class: my-storage-class
      overrides:
        - broker: 0
          class: my-storage-class-zone-1a
        - broker: 1
          class: my-storage-class-zone-1b
        - broker: 2
          class: my-storage-class-zone-1c
  # ...
As a result of the configured overrides
property, the volumes use the following storage classes:
- The persistent volumes of ZooKeeper node 0 use my-storage-class-zone-1a.
- The persistent volumes of ZooKeeper node 1 use my-storage-class-zone-1b.
- The persistent volumes of ZooKeeper node 2 use my-storage-class-zone-1c.
- The persistent volumes of Kafka broker 0 use my-storage-class-zone-1a.
- The persistent volumes of Kafka broker 1 use my-storage-class-zone-1b.
- The persistent volumes of Kafka broker 2 use my-storage-class-zone-1c.
The overrides property is currently used only to override the storage class. Overrides for other storage configuration properties are not currently supported.
8.10.3.2. PVC resources for persistent storage
When persistent storage is used, it creates PVCs with the following names:
- data-cluster-name-kafka-idx
- PVC for the volume used for storing data for the Kafka broker pod idx.
- data-cluster-name-zookeeper-idx
- PVC for the volume used for storing data for the ZooKeeper node pod idx.
8.10.3.3. Mount path of Kafka log directories
The persistent volume is used by the Kafka brokers as log directories mounted into the following path:
/var/lib/kafka/data/kafka-logIDX
Where IDX
is the Kafka broker pod index. For example /var/lib/kafka/data/kafka-log0
.
8.10.4. Resizing persistent volumes
Persistent volumes used by a cluster can be resized without any risk of data loss, as long as the storage infrastructure supports it. Following a configuration update to change the size of the storage, AMQ Streams instructs the storage infrastructure to make the change. Storage expansion is supported in AMQ Streams clusters that use persistent-claim volumes.
Storage reduction is only possible when using multiple disks per broker. You can remove a disk after moving all partitions on the disk to other volumes within the same broker (intra-broker) or to other brokers within the same cluster (intra-cluster).
You cannot decrease the size of persistent volumes because it is not currently supported in OpenShift.
Prerequisites
- An OpenShift cluster with support for volume resizing.
- The Cluster Operator is running.
- A Kafka cluster using persistent volumes created using a storage class that supports volume expansion.
Procedure
Edit the Kafka resource for your cluster.
Change the size property to increase the size of the persistent volume allocated to a Kafka cluster, a ZooKeeper cluster, or both.
- For Kafka clusters, update the size property under spec.kafka.storage.
- For ZooKeeper clusters, update the size property under spec.zookeeper.storage.
Kafka configuration to increase the volume size to 2000Gi
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    storage:
      type: persistent-claim
      size: 2000Gi
      class: my-storage-class
    # ...
  zookeeper:
    # ...
Create or update the resource:
oc apply -f <kafka_configuration_file>
OpenShift increases the capacity of the selected persistent volumes in response to a request from the Cluster Operator. When the resizing is complete, the Cluster Operator restarts all pods that use the resized persistent volumes. This happens automatically.
Verify that the storage capacity has increased for the relevant pods on the cluster:
oc get pv
Kafka broker pods with increased storage
NAME             CAPACITY   CLAIM
pvc-0ca459ce-... 2000Gi     my-project/data-my-cluster-kafka-2
pvc-6e1810be-... 2000Gi     my-project/data-my-cluster-kafka-0
pvc-82dc78c9-... 2000Gi     my-project/data-my-cluster-kafka-1
The output shows the names of each PVC associated with a broker pod.
Additional resources
- For more information about resizing persistent volumes in OpenShift, see Resizing Persistent Volumes using Kubernetes.
8.10.5. JBOD storage
JBOD storage allows you to configure your Kafka cluster to use multiple disks or volumes. This approach provides increased data storage capacity for Kafka brokers, and can lead to performance improvements. A JBOD configuration is defined by one or more volumes, each of which can be either ephemeral or persistent. The rules and constraints for JBOD volume declarations are the same as those for ephemeral and persistent storage. For example, you cannot decrease the size of a persistent storage volume after it has been provisioned, nor can you change the value of sizeLimit
when the type is ephemeral
.
JBOD storage is supported for Kafka only, not for ZooKeeper.
To use JBOD storage, you set the storage type configuration in the Kafka
resource to jbod
. If you are using the preview of the node pools feature, you can also specify jbod
in the storage configuration of individual node pools.
The volumes
property allows you to describe the disks that make up your JBOD storage array or configuration.
Example JBOD storage configuration
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    storage:
      type: jbod
      volumes:
        - id: 0
          type: persistent-claim
          size: 100Gi
          deleteClaim: false
        - id: 1
          type: persistent-claim
          size: 100Gi
          deleteClaim: false
  # ...
The IDs cannot be changed once the JBOD volumes are created. You can add or remove volumes from the JBOD configuration.
8.10.5.1. PVC resource for JBOD storage
When persistent storage is used to declare JBOD volumes, it creates a PVC with the following name:
- data-id-cluster-name-kafka-idx
- PVC for the volume used for storing data for the Kafka broker pod idx. The id is the ID of the volume used for storing data for Kafka broker pod.
8.10.5.2. Mount path of Kafka log directories
The JBOD volumes are used by Kafka brokers as log directories mounted into the following path:
/var/lib/kafka/data-id/kafka-logidx
Where id
is the ID of the volume used for storing data for Kafka broker pod idx
. For example /var/lib/kafka/data-0/kafka-log0
.
8.10.6. Adding volumes to JBOD storage
This procedure describes how to add volumes to a Kafka cluster configured to use JBOD storage. It cannot be applied to Kafka clusters configured to use any other storage type.
When adding a new volume under an id
which was already used in the past and removed, you have to make sure that the previously used PersistentVolumeClaims
have been deleted.
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
- A Kafka cluster with JBOD storage
Procedure
Edit the spec.kafka.storage.volumes property in the Kafka resource. Add the new volumes to the volumes array. For example, add the new volume with id 2:
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    storage:
      type: jbod
      volumes:
        - id: 0
          type: persistent-claim
          size: 100Gi
          deleteClaim: false
        - id: 1
          type: persistent-claim
          size: 100Gi
          deleteClaim: false
        - id: 2
          type: persistent-claim
          size: 100Gi
          deleteClaim: false
    # ...
  zookeeper:
    # ...
Create or update the resource:
oc apply -f <kafka_configuration_file>
Create new topics or reassign existing partitions to the new disks.
Tip
Cruise Control is an effective tool for reassigning partitions. To perform an intra-broker disk balance, you set rebalanceDisk to true under the KafkaRebalance.spec.
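A minimal sketch of such a KafkaRebalance resource follows, assuming Cruise Control is already deployed for the cluster; the resource and cluster names are illustrative.
Example KafkaRebalance configuration for an intra-broker disk balance
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaRebalance
metadata:
  name: my-rebalance
  labels:
    strimzi.io/cluster: my-cluster
spec:
  # Move partitions between the JBOD disks of each broker rather than between brokers
  rebalanceDisk: true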
8.10.7. Removing volumes from JBOD storage
This procedure describes how to remove volumes from Kafka cluster configured to use JBOD storage. It cannot be applied to Kafka clusters configured to use any other storage type. The JBOD storage always has to contain at least one volume.
To avoid data loss, you have to move all partitions before removing the volumes.
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
- A Kafka cluster with JBOD storage with two or more volumes
Procedure
Reassign all partitions from the disks that you are going to remove. Any data in partitions still assigned to those disks might be lost.
TipYou can use the
kafka-reassign-partitions.sh
tool to reassign the partitions.Edit the
spec.kafka.storage.volumes
property in theKafka
resource. Remove one or more volumes from thevolumes
array. For example, remove the volumes with ids1
and2
:
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    storage:
      type: jbod
      volumes:
        - id: 0
          type: persistent-claim
          size: 100Gi
          deleteClaim: false
    # ...
  zookeeper:
    # ...
Create or update the resource:
oc apply -f <kafka_configuration_file>
8.11. Configuring CPU and memory resource limits and requests
By default, the AMQ Streams Cluster Operator does not specify CPU and memory resource requests and limits for its deployed operands. Ensuring an adequate allocation of resources is crucial for maintaining stability and achieving optimal performance in Kafka. The ideal resource allocation depends on your specific requirements and use cases.
It is recommended to configure CPU and memory resources for each container by setting appropriate requests and limits.
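As a starting point, a hedged sketch of such a configuration in the Kafka resource is shown below. The values are illustrative and should be sized for your own workload.
Example CPU and memory resource configuration (sketch)
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    resources:
      requests:
        memory: 64Gi # illustrative value
        cpu: "8" # illustrative value
      limits:
        memory: 64Gi
        cpu: "12"
  zookeeper:
    # ...
    resources:
      requests:
        memory: 8Gi
        cpu: "2"
      limits:
        memory: 8Gi
        cpu: "2"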
8.12. Configuring pod scheduling
To avoid performance degradation caused by resource conflicts between applications scheduled on the same OpenShift node, you can schedule Kafka pods separately from critical workloads. This can be achieved by either selecting specific nodes or dedicating a set of nodes exclusively for Kafka.
8.12.1. Specifying affinity, tolerations, and topology spread constraints
Use affinity, tolerations, and topology spread constraints to schedule the pods of Kafka resources onto nodes. Affinity, tolerations, and topology spread constraints are configured using the affinity
, tolerations
, and topologySpreadConstraints
properties in the following resources:
-
Kafka.spec.kafka.template.pod
-
Kafka.spec.zookeeper.template.pod
-
Kafka.spec.entityOperator.template.pod
-
KafkaConnect.spec.template.pod
-
KafkaBridge.spec.template.pod
-
KafkaMirrorMaker.spec.template.pod
-
KafkaMirrorMaker2.spec.template.pod
The format of the affinity
, tolerations
, and topologySpreadConstraints
properties follows the OpenShift specification. The affinity configuration can include different types of affinity:
- Pod affinity and anti-affinity
- Node affinity
8.12.1.1. Use pod anti-affinity to avoid critical applications sharing nodes
Use pod anti-affinity to ensure that critical applications are never scheduled on the same node. When running a Kafka cluster, it is recommended to use pod anti-affinity to ensure that the Kafka brokers do not share nodes with other workloads, such as databases.
8.12.1.2. Use node affinity to schedule workloads onto specific nodes
The OpenShift cluster usually consists of many different types of worker nodes. Some are optimized for CPU-heavy workloads, some for memory, while others might be optimized for storage (fast local SSDs) or network. Using different nodes helps to optimize both costs and performance. To achieve the best possible performance, it is important to allow scheduling of AMQ Streams components to use the right nodes.
OpenShift uses node affinity to schedule workloads onto specific nodes. Node affinity allows you to create a scheduling constraint for the node on which the pod will be scheduled. The constraint is specified as a label selector. You can specify the label using either the built-in node label like beta.kubernetes.io/instance-type
or custom labels to select the right node.
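For illustration only, a node affinity rule that uses the built-in instance type label might look like the following sketch; the instance type value is an assumption.
Example node affinity using a built-in node label (sketch)
# ...
template:
  pod:
    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
            - matchExpressions:
                - key: beta.kubernetes.io/instance-type
                  operator: In
                  values:
                    - m5.4xlarge # assumed instance type
# ...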
8.12.1.3. Use node affinity and tolerations for dedicated nodes
Use taints to create dedicated nodes, then schedule Kafka pods on the dedicated nodes by configuring node affinity and tolerations.
Cluster administrators can mark selected OpenShift nodes as tainted. Nodes with taints are excluded from regular scheduling and normal pods will not be scheduled to run on them. Only services which can tolerate the taint set on the node can be scheduled on it. The only other services running on such nodes will be system services such as log collectors or software defined networks.
Running Kafka and its components on dedicated nodes can have many advantages. There will be no other applications running on the same nodes which could cause disturbance or consume the resources needed for Kafka. That can lead to improved performance and stability.
8.12.2. Configuring pod anti-affinity to schedule each Kafka broker on a different worker node
Many Kafka brokers or ZooKeeper nodes can run on the same OpenShift worker node. If the worker node fails, they will all become unavailable at the same time. To improve reliability, you can use podAntiAffinity
configuration to schedule each Kafka broker or ZooKeeper node on a different OpenShift worker node.
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
Procedure
Edit the
affinity
property in the resource specifying the cluster deployment. To make sure that no worker nodes are shared by Kafka brokers or ZooKeeper nodes, use thestrimzi.io/name
label. Set thetopologyKey
tokubernetes.io/hostname
to specify that the selected pods are not scheduled on nodes with the same hostname. This will still allow the same worker node to be shared by a single Kafka broker and a single ZooKeeper node. For example:
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
spec:
  kafka:
    # ...
    template:
      pod:
        affinity:
          podAntiAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
              - labelSelector:
                  matchExpressions:
                    - key: strimzi.io/name
                      operator: In
                      values:
                        - CLUSTER-NAME-kafka
                topologyKey: "kubernetes.io/hostname"
    # ...
  zookeeper:
    # ...
    template:
      pod:
        affinity:
          podAntiAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
              - labelSelector:
                  matchExpressions:
                    - key: strimzi.io/name
                      operator: In
                      values:
                        - CLUSTER-NAME-zookeeper
                topologyKey: "kubernetes.io/hostname"
    # ...
Where
CLUSTER-NAME
is the name of your Kafka custom resource. If you want to make sure that Kafka brokers and ZooKeeper nodes also do not share the same worker node, use the
strimzi.io/cluster
label. For example:
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
spec:
  kafka:
    # ...
    template:
      pod:
        affinity:
          podAntiAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
              - labelSelector:
                  matchExpressions:
                    - key: strimzi.io/cluster
                      operator: In
                      values:
                        - CLUSTER-NAME
                topologyKey: "kubernetes.io/hostname"
    # ...
  zookeeper:
    # ...
    template:
      pod:
        affinity:
          podAntiAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
              - labelSelector:
                  matchExpressions:
                    - key: strimzi.io/cluster
                      operator: In
                      values:
                        - CLUSTER-NAME
                topologyKey: "kubernetes.io/hostname"
    # ...
Where
CLUSTER-NAME
is the name of your Kafka custom resource. Create or update the resource:
oc apply -f <kafka_configuration_file>
8.12.3. Configuring pod anti-affinity in Kafka components
Pod anti-affinity configuration helps with the stability and performance of Kafka brokers. By using podAntiAffinity
, OpenShift will not schedule Kafka brokers on the same nodes as other workloads. Typically, you want to avoid Kafka running on the same worker node as other network-intensive or storage-intensive applications, such as databases, storage systems, or other messaging platforms.
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
Procedure
Edit the
affinity
property in the resource specifying the cluster deployment. Use labels to specify the pods which should not be scheduled on the same nodes. ThetopologyKey
should be set tokubernetes.io/hostname
to specify that the selected pods should not be scheduled on nodes with the same hostname. For example:
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
spec:
  kafka:
    # ...
    template:
      pod:
        affinity:
          podAntiAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
              - labelSelector:
                  matchExpressions:
                    - key: application
                      operator: In
                      values:
                        - postgresql
                        - mongodb
                topologyKey: "kubernetes.io/hostname"
    # ...
  zookeeper:
    # ...
Create or update the resource.
This can be done using
oc apply
:oc apply -f <kafka_configuration_file>
8.12.4. Configuring node affinity in Kafka components
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
Procedure
Label the nodes where AMQ Streams components should be scheduled.
This can be done using
oc label
:oc label node NAME-OF-NODE node-type=fast-network
Alternatively, some of the existing labels might be reused.
Edit the
affinity
property in the resource specifying the cluster deployment. For example:
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
spec:
  kafka:
    # ...
    template:
      pod:
        affinity:
          nodeAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
              nodeSelectorTerms:
                - matchExpressions:
                    - key: node-type
                      operator: In
                      values:
                        - fast-network
    # ...
  zookeeper:
    # ...
Create or update the resource.
This can be done using
oc apply
:oc apply -f <kafka_configuration_file>
8.12.5. Setting up dedicated nodes and scheduling pods on them
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
Procedure
- Select the nodes which should be used as dedicated.
- Make sure there are no workloads scheduled on these nodes.
Set the taints on the selected nodes:
This can be done using
oc adm taint
:oc adm taint node NAME-OF-NODE dedicated=Kafka:NoSchedule
Additionally, add a label to the selected nodes.
This can be done using
oc label
:oc label node NAME-OF-NODE dedicated=Kafka
Edit the
affinity
andtolerations
properties in the resource specifying the cluster deployment.For example:
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
spec:
  kafka:
    # ...
    template:
      pod:
        tolerations:
          - key: "dedicated"
            operator: "Equal"
            value: "Kafka"
            effect: "NoSchedule"
        affinity:
          nodeAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
              nodeSelectorTerms:
                - matchExpressions:
                    - key: dedicated
                      operator: In
                      values:
                        - Kafka
    # ...
  zookeeper:
    # ...
Create or update the resource.
This can be done using
oc apply
:oc apply -f <kafka_configuration_file>
8.13. Configuring logging levels
Configure logging levels in the custom resources of Kafka components and AMQ Streams operators. You can specify the logging levels directly in the spec.logging
property of the custom resource. Or you can define the logging properties in a ConfigMap that’s referenced in the custom resource using the configMapKeyRef
property.
The advantages of using a ConfigMap are that the logging properties are maintained in one place and are accessible to more than one resource. You can also reuse the ConfigMap for more than one resource. If you are using a ConfigMap to specify loggers for AMQ Streams Operators, you can also append the logging specification to add filters.
You specify a logging type
in your logging specification:
-
inline
when specifying logging levels directly -
external
when referencing a ConfigMap
Example inline
logging configuration
spec:
  # ...
  logging:
    type: inline
    loggers:
      kafka.root.logger.level: INFO
Example external
logging configuration
spec:
  # ...
  logging:
    type: external
    valueFrom:
      configMapKeyRef:
        name: my-config-map
        key: my-config-map-key
Values for the name
and key
of the ConfigMap are mandatory. Default logging is used if the name
or key
is not set.
8.13.1. Logging options for Kafka components and operators
For more information on configuring logging for specific Kafka components or operators, see the following sections.
Kafka component logging
Operator logging
8.13.2. Creating a ConfigMap for logging
To use a ConfigMap to define logging properties, you create the ConfigMap and then reference it as part of the logging definition in the spec
of a resource.
The ConfigMap must contain the appropriate logging configuration.
-
log4j.properties
for Kafka components, ZooKeeper, and the Kafka Bridge -
log4j2.properties
for the Topic Operator and User Operator
The logging configuration must be placed under these keys in the ConfigMap.
In this procedure a ConfigMap defines a root logger for a Kafka resource.
Procedure
Create the ConfigMap.
You can create the ConfigMap as a YAML file or from a properties file.
ConfigMap example with a root logger definition for Kafka:
kind: ConfigMap
apiVersion: v1
metadata:
  name: logging-configmap
data:
  log4j.properties:
    kafka.root.logger.level="INFO"
If you are using a properties file, specify the file at the command line:
oc create configmap logging-configmap --from-file=log4j.properties
The properties file defines the logging configuration:
# Define the logger
kafka.root.logger.level="INFO"
# ...
Define external logging in the
spec
of the resource, setting thelogging.valueFrom.configMapKeyRef.name
to the name of the ConfigMap andlogging.valueFrom.configMapKeyRef.key
to the key in this ConfigMap.
spec:
  # ...
  logging:
    type: external
    valueFrom:
      configMapKeyRef:
        name: logging-configmap
        key: log4j.properties
Create or update the resource.
oc apply -f <kafka_configuration_file>
8.13.3. Configuring Cluster Operator logging
Cluster Operator logging is configured through a ConfigMap
named strimzi-cluster-operator
. A ConfigMap
containing logging configuration is created when installing the Cluster Operator. This ConfigMap
is described in the file install/cluster-operator/050-ConfigMap-strimzi-cluster-operator.yaml
. You configure Cluster Operator logging by changing the data.log4j2.properties
values in this ConfigMap
.
To update the logging configuration, you can edit the 050-ConfigMap-strimzi-cluster-operator.yaml
file and then run the following command:
oc create -f install/cluster-operator/050-ConfigMap-strimzi-cluster-operator.yaml
Alternatively, edit the ConfigMap
directly:
oc edit configmap strimzi-cluster-operator
With this ConfigMap, you can control various aspects of logging, including the root logger level, log output format, and log levels for different components. The monitorInterval
setting determines how often the logging configuration is reloaded. You can also control the logging levels for the Kafka AdminClient
, ZooKeeper ZKTrustManager
, Netty, and the OkHttp client. Netty is a framework used in AMQ Streams for network communication, and OkHttp is a library used for making HTTP requests.
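For orientation, the data.log4j2.properties section of the ConfigMap contains standard log4j2 properties along the lines of the following sketch. The exact loggers and values in your installation may differ.
Example Cluster Operator logging ConfigMap (sketch)
kind: ConfigMap
apiVersion: v1
metadata:
  name: strimzi-cluster-operator
data:
  log4j2.properties:
    # Interval at which the logging configuration is reloaded (do not remove)
    monitorInterval = 30
    # Root logger level for the Cluster Operator
    rootLogger.level = INFO
    # ...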
If the ConfigMap
is missing when the Cluster Operator is deployed, the default logging values are used.
If the ConfigMap
is accidentally deleted after the Cluster Operator is deployed, the most recently loaded logging configuration is used. Create a new ConfigMap
to load a new logging configuration.
Do not remove the monitorInterval
option from the ConfigMap
.
8.13.4. Adding logging filters to AMQ Streams operators
If you are using a ConfigMap to configure the (log4j2) logging levels for AMQ Streams operators, you can also define logging filters to limit what’s returned in the log.
Logging filters are useful when you have a large number of logging messages. Suppose you set the log level for the logger as DEBUG (rootLogger.level="DEBUG"
). Logging filters reduce the number of logs returned for the logger at that level, so you can focus on a specific resource. When the filter is set, only log messages matching the filter are logged.
Filters use markers to specify what to include in the log. You specify a kind, namespace and name for the marker. For example, if a Kafka cluster is failing, you can isolate the logs by specifying the kind as Kafka
, and use the namespace and name of the failing cluster.
This example shows a marker filter for a Kafka cluster named my-kafka-cluster
.
Basic logging filter configuration
rootLogger.level="INFO"
appender.console.filter.filter1.type=MarkerFilter
appender.console.filter.filter1.onMatch=ACCEPT
appender.console.filter.filter1.onMismatch=DENY
appender.console.filter.filter1.marker=Kafka(my-namespace/my-kafka-cluster)
You can create one or more filters. Here, the log is filtered for two Kafka clusters.
Multiple logging filter configuration
appender.console.filter.filter1.type=MarkerFilter
appender.console.filter.filter1.onMatch=ACCEPT
appender.console.filter.filter1.onMismatch=DENY
appender.console.filter.filter1.marker=Kafka(my-namespace/my-kafka-cluster-1)
appender.console.filter.filter2.type=MarkerFilter
appender.console.filter.filter2.onMatch=ACCEPT
appender.console.filter.filter2.onMismatch=DENY
appender.console.filter.filter2.marker=Kafka(my-namespace/my-kafka-cluster-2)
Adding filters to the Cluster Operator
To add filters to the Cluster Operator, update its logging ConfigMap YAML file (install/cluster-operator/050-ConfigMap-strimzi-cluster-operator.yaml
).
Procedure
Update the
050-ConfigMap-strimzi-cluster-operator.yaml
file to add the filter properties to the ConfigMap.In this example, the filter properties return logs only for the
my-kafka-cluster
Kafka cluster:
kind: ConfigMap
apiVersion: v1
metadata:
  name: strimzi-cluster-operator
data:
  log4j2.properties:
    #...
    appender.console.filter.filter1.type=MarkerFilter
    appender.console.filter.filter1.onMatch=ACCEPT
    appender.console.filter.filter1.onMismatch=DENY
    appender.console.filter.filter1.marker=Kafka(my-namespace/my-kafka-cluster)
Alternatively, edit the
ConfigMap
directly:oc edit configmap strimzi-cluster-operator
If you updated the YAML file instead of editing the
ConfigMap
directly, apply the changes by deploying the ConfigMap:oc create -f install/cluster-operator/050-ConfigMap-strimzi-cluster-operator.yaml
Adding filters to the Topic Operator or User Operator
To add filters to the Topic Operator or User Operator, create or edit a logging ConfigMap.
In this procedure a logging ConfigMap is created with filters for the Topic Operator. The same approach is used for the User Operator.
Procedure
Create the ConfigMap.
You can create the ConfigMap as a YAML file or from a properties file.
In this example, the filter properties return logs only for the
my-topic
topic:
kind: ConfigMap
apiVersion: v1
metadata:
  name: logging-configmap
data:
  log4j2.properties:
    rootLogger.level="INFO"
    appender.console.filter.filter1.type=MarkerFilter
    appender.console.filter.filter1.onMatch=ACCEPT
    appender.console.filter.filter1.onMismatch=DENY
    appender.console.filter.filter1.marker=KafkaTopic(my-namespace/my-topic)
If you are using a properties file, specify the file at the command line:
oc create configmap logging-configmap --from-file=log4j2.properties
The properties file defines the logging configuration:
# Define the logger
rootLogger.level="INFO"
# Set the filters
appender.console.filter.filter1.type=MarkerFilter
appender.console.filter.filter1.onMatch=ACCEPT
appender.console.filter.filter1.onMismatch=DENY
appender.console.filter.filter1.marker=KafkaTopic(my-namespace/my-topic)
# ...
Define external logging in the
spec
of the resource, setting thelogging.valueFrom.configMapKeyRef.name
to the name of the ConfigMap andlogging.valueFrom.configMapKeyRef.key
to the key in this ConfigMap.For the Topic Operator, logging is specified in the
topicOperator
configuration of theKafka
resource.
spec:
  # ...
  entityOperator:
    topicOperator:
      logging:
        type: external
        valueFrom:
          configMapKeyRef:
            name: logging-configmap
            key: log4j2.properties
- Apply the changes by deploying the Cluster Operator:
oc create -f install/cluster-operator -n my-cluster-operator-namespace
8.14. Using ConfigMaps to add configuration
Add specific configuration to your AMQ Streams deployment using ConfigMap
resources. ConfigMaps use key-value pairs to store non-confidential data. Configuration data added to ConfigMaps is maintained in one place and can be reused amongst components.
ConfigMaps can only store the following types of configuration data:
- Logging configuration
- Metrics configuration
- External configuration for Kafka Connect connectors
You can’t use ConfigMaps for other areas of configuration.
When you configure a component, you can add a reference to a ConfigMap using the configMapKeyRef
property.
For example, you can use configMapKeyRef
to reference a ConfigMap that provides configuration for logging. You might use a ConfigMap to pass a Log4j configuration file. You add the reference to the logging
configuration.
Example ConfigMap for logging
spec:
  # ...
  logging:
    type: external
    valueFrom:
      configMapKeyRef:
        name: my-config-map
        key: my-config-map-key
To use a ConfigMap for metrics configuration, you add a reference to the metricsConfig
configuration of the component in the same way.
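For example, a metrics configuration reference might look like the following sketch; the ConfigMap name and key are assumptions.
Example ConfigMap reference for metrics (sketch)
spec:
  # ...
  metricsConfig:
    type: jmxPrometheusExporter
    valueFrom:
      configMapKeyRef:
        name: my-metrics-config-map # assumed ConfigMap name
        key: my-metrics-config.yml # assumed key containing the Prometheus JMX Exporter rules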
ExternalConfiguration
properties make data from a ConfigMap (or Secret) mounted to a pod available as environment variables or volumes. You can use external configuration data for the connectors used by Kafka Connect. The data might be related to an external data source, providing the values needed for the connector to communicate with that data source.
For example, you can use the configMapKeyRef
property to pass configuration data from a ConfigMap as an environment variable.
Example ConfigMap providing environment variable values
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect
spec:
  # ...
  externalConfiguration:
    env:
      - name: MY_ENVIRONMENT_VARIABLE
        valueFrom:
          configMapKeyRef:
            name: my-config-map
            key: my-key
If you are using ConfigMaps that are managed externally, use configuration providers to load the data in the ConfigMaps.
8.14.1. Naming custom ConfigMaps
AMQ Streams creates its own ConfigMaps and other resources when it is deployed to OpenShift. The ConfigMaps contain data necessary for running components. The ConfigMaps created by AMQ Streams must not be edited.
Make sure that any custom ConfigMaps you create do not have the same name as these default ConfigMaps. If they have the same name, they will be overwritten. For example, if your ConfigMap has the same name as the ConfigMap for the Kafka cluster, it will be overwritten when there is an update to the Kafka cluster.
8.15. Loading configuration values from external sources
Use configuration providers to load configuration data from external sources. The providers operate independently of AMQ Streams. You can use them to load configuration data for all Kafka components, including producers and consumers. You reference the external source in the configuration of the component and provide access rights. The provider loads data without needing to restart the Kafka component or extract files, even when referencing a new external source. For example, use providers to supply the credentials for the Kafka Connect connector configuration. The configuration must include any access rights to the external source.
8.15.1. Enabling configuration providers
You can enable one or more configuration providers using the config.providers
properties in the spec
configuration of a component.
Example configuration to enable a configuration provider
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect
  annotations:
    strimzi.io/use-connector-resources: "true"
spec:
  # ...
  config:
    # ...
    config.providers: env
    config.providers.env.class: org.apache.kafka.common.config.provider.EnvVarConfigProvider
  # ...
The following configuration providers are supported:
- KubernetesSecretConfigProvider
- Loads configuration data from OpenShift secrets. You specify the name of the secret and the key within the secret where the configuration data is stored. This provider is useful for storing sensitive configuration data like passwords or other user credentials.
- KubernetesConfigMapConfigProvider
- Loads configuration data from OpenShift config maps. You specify the name of the config map and the key within the config map where the configuration data is stored. This provider is useful for storing non-sensitive configuration data.
- EnvVarConfigProvider
- Loads configuration data from environment variables. You specify the name of the environment variable where the configuration data is stored. This provider is useful for configuring applications running in containers, for example, to load certificates or JAAS configuration from environment variables mapped from secrets.
- FileConfigProvider
- Loads configuration data from a file. You specify the path to the file where the configuration data is stored. This provider is useful for loading configuration data from files that are mounted into containers.
- DirectoryConfigProvider
- Loads configuration data from files within a directory. You specify the path to the directory where the configuration files are stored. This provider is useful for loading multiple configuration files and for organizing configuration data into separate files.
To use KubernetesSecretConfigProvider
and KubernetesConfigMapConfigProvider
, which are part of the OpenShift Configuration Provider plugin, you must set up access rights to the namespace that contains the configuration file.
You can use the other providers without setting up access rights. You can supply connector configuration for Kafka Connect or MirrorMaker 2 in this way by doing the following:
- Mount config maps or secrets into the Kafka Connect pod as environment variables or volumes
-
Enable
EnvVarConfigProvider
,FileConfigProvider
, orDirectoryConfigProvider
in the Kafka Connect or MirrorMaker 2 configuration -
Pass connector configuration using the
externalConfiguration
property in thespec
of theKafkaConnect
orKafkaMirrorMaker2
resource
Using providers helps prevent passing restricted information through the Kafka Connect REST interface. You can use this approach in the following scenarios:
- Mounting environment variables with the values a connector uses to connect and communicate with a data source
- Mounting a properties file with values that are used to configure Kafka Connect connectors
- Mounting files in a directory that contains values for the TLS truststore and keystore used by a connector
A restart is required when using a new Secret
or ConfigMap
for a connector, which can disrupt other connectors.
8.15.2. Loading configuration values from secrets or config maps
Use the KubernetesSecretConfigProvider
to provide configuration properties from a secret or the KubernetesConfigMapConfigProvider
to provide configuration properties from a config map.
In this procedure, a config map provides configuration properties for a connector. The properties are specified as key values of the config map. The config map is mounted into the Kafka Connect pod as a volume.
Prerequisites
- A Kafka cluster is running.
- The Cluster Operator is running.
- You have a config map containing the connector configuration.
Example config map with connector properties
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-connector-configuration
data:
  option1: value1
  option2: value2
Procedure
Configure the
KafkaConnect
resource.-
Enable the
KubernetesConfigMapConfigProvider
The specification shown here can support loading values from config maps and secrets.
Example Kafka Connect configuration to use config maps and secrets
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect
  annotations:
    strimzi.io/use-connector-resources: "true"
spec:
  # ...
  config:
    # ...
    config.providers: secrets,configmaps 1
    config.providers.configmaps.class: io.strimzi.kafka.KubernetesConfigMapConfigProvider 2
    config.providers.secrets.class: io.strimzi.kafka.KubernetesSecretConfigProvider 3
  # ...
- 1
- The alias for the configuration provider is used to define other configuration parameters. The provider parameters use the alias from
config.providers
, taking the formconfig.providers.${alias}.class
. - 2
KubernetesConfigMapConfigProvider
provides values from config maps.- 3
KubernetesSecretConfigProvider
provides values from secrets.
Create or update the resource to enable the provider.
oc apply -f <kafka_connect_configuration_file>
Create a role that permits access to the values in the external config map.
Example role to access values from a config map
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: connector-configuration-role
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["my-connector-configuration"]
  verbs: ["get"]
# ...
The rule gives the role permission to access the
my-connector-configuration
config map.Create a role binding to permit access to the namespace that contains the config map.
Example role binding to access the namespace that contains the config map
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: connector-configuration-role-binding
subjects:
- kind: ServiceAccount
  name: my-connect-connect
  namespace: my-project
roleRef:
  kind: Role
  name: connector-configuration-role
  apiGroup: rbac.authorization.k8s.io
# ...
The role binding gives the role permission to access the
my-project
namespace.The service account must be the same one used by the Kafka Connect deployment. The service account name format is
<cluster_name>-connect
, where<cluster_name>
is the name of theKafkaConnect
custom resource.Reference the config map in the connector configuration.
Example connector configuration referencing the config map
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: my-connector
  labels:
    strimzi.io/cluster: my-connect
spec:
  # ...
  config:
    option: ${configmaps:my-project/my-connector-configuration:option1}
    # ...
  # ...
The placeholder structure is
configmaps:<namespace>/<config_map_name>:<property>
.KubernetesConfigMapConfigProvider
reads and extracts theoption1
property value from the external config map.
8.15.3. Loading configuration values from environment variables
Use the EnvVarConfigProvider
to provide configuration properties as environment variables. Environment variables can contain values from config maps or secrets.
In this procedure, environment variables provide configuration properties for a connector to communicate with Amazon AWS. The connector must be able to read the AWS_ACCESS_KEY_ID
and AWS_SECRET_ACCESS_KEY
. The values of the environment variables are derived from a secret mounted into the Kafka Connect pod.
The names of user-defined environment variables cannot start with KAFKA_
or STRIMZI_
.
Prerequisites
- A Kafka cluster is running.
- The Cluster Operator is running.
- You have a secret containing the connector configuration.
Example secret with values for environment variables
apiVersion: v1
kind: Secret
metadata:
  name: aws-creds
type: Opaque
data:
  awsAccessKey: QUtJQVhYWFhYWFhYWFhYWFg=
  awsSecretAccessKey: Ylhsd1lYTnpkMjl5WkE=
Procedure
Configure the
KafkaConnect
resource.-
Enable the
EnvVarConfigProvider
-
Specify the environment variables using the
externalConfiguration
property.
Example Kafka Connect configuration to use external environment variables
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect
  annotations:
    strimzi.io/use-connector-resources: "true"
spec:
  # ...
  config:
    # ...
    config.providers: env 1
    config.providers.env.class: org.apache.kafka.common.config.provider.EnvVarConfigProvider 2
  # ...
  externalConfiguration:
    env:
      - name: AWS_ACCESS_KEY_ID 3
        valueFrom:
          secretKeyRef:
            name: aws-creds 4
            key: awsAccessKey 5
      - name: AWS_SECRET_ACCESS_KEY
        valueFrom:
          secretKeyRef:
            name: aws-creds
            key: awsSecretAccessKey
  # ...
- 1
- The alias for the configuration provider is used to define other configuration parameters. The provider parameters use the alias from
config.providers
, taking the formconfig.providers.${alias}.class
. - 2
EnvVarConfigProvider
provides values from environment variables.- 3
- The environment variable takes a value from the secret.
- 4
- The name of the secret containing the environment variable.
- 5
- The name of the key stored in the secret.
NoteThe
secretKeyRef
property references keys in a secret. If you are using a config map instead of a secret, use theconfigMapKeyRef
property.-
Enable the
Create or update the resource to enable the provider.
oc apply -f <kafka_connect_configuration_file>
Reference the environment variable in the connector configuration.
Example connector configuration referencing the environment variable
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: my-connector
  labels:
    strimzi.io/cluster: my-connect
spec:
  # ...
  config:
    option: ${env:AWS_ACCESS_KEY_ID}
    option: ${env:AWS_SECRET_ACCESS_KEY}
    # ...
  # ...
The placeholder structure is
env:<environment_variable_name>
.EnvVarConfigProvider
reads and extracts the environment variable values from the mounted secret.
8.15.4. Loading configuration values from a file within a directory
Use the FileConfigProvider
to provide configuration properties from a file within a directory. Files can be config maps or secrets.
In this procedure, a file provides configuration properties for a connector. A database name and password are specified as properties of a secret. The secret is mounted to the Kafka Connect pod as a volume. Volumes are mounted on the path /opt/kafka/external-configuration/<volume-name>
.
Prerequisites
- A Kafka cluster is running.
- The Cluster Operator is running.
- You have a secret containing the connector configuration.
Example secret with database properties
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
stringData:
  connector.properties: |-
    dbUsername: my-username
    dbPassword: my-password
Procedure
Configure the
KafkaConnect
resource.-
Enable the
FileConfigProvider
-
Specify the file using the
externalConfiguration
property.
Example Kafka Connect configuration to use an external property file
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect
spec:
  # ...
  config:
    config.providers: file 1
    config.providers.file.class: org.apache.kafka.common.config.provider.FileConfigProvider 2
    #...
  externalConfiguration:
    volumes:
      - name: connector-config 3
        secret:
          secretName: mysecret 4
- 1
- The alias for the configuration provider is used to define other configuration parameters.
- 2
FileConfigProvider
provides values from properties files. The parameter uses the alias fromconfig.providers
, taking the formconfig.providers.${alias}.class
.- 3
- The name of the volume containing the secret.
- 4
- The name of the secret.
Create or update the resource to enable the provider.
oc apply -f <kafka_connect_configuration_file>
Reference the file properties in the connector configuration as placeholders.
Example connector configuration referencing the file
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: my-source-connector
  labels:
    strimzi.io/cluster: my-connect-cluster
spec:
  class: io.debezium.connector.mysql.MySqlConnector
  tasksMax: 2
  config:
    database.hostname: 192.168.99.1
    database.port: "3306"
    database.user: "${file:/opt/kafka/external-configuration/connector-config/mysecret:dbUsername}"
    database.password: "${file:/opt/kafka/external-configuration/connector-config/mysecret:dbPassword}"
    database.server.id: "184054"
    #...
The placeholder structure is
file:<path_and_file_name>:<property>
.FileConfigProvider
reads and extracts the database username and password property values from the mounted secret.
8.15.5. Loading configuration values from multiple files within a directory
Use the DirectoryConfigProvider
to provide configuration properties from multiple files within a directory. Files can be config maps or secrets.
In this procedure, a secret provides the TLS keystore and truststore user credentials for a connector. The credentials are in separate files. The secrets are mounted into the Kafka Connect pod as volumes. Volumes are mounted on the path /opt/kafka/external-configuration/<volume-name>
.
Prerequisites
- A Kafka cluster is running.
- The Cluster Operator is running.
- You have a secret containing the user credentials.
Example secret with user credentials
apiVersion: v1
kind: Secret
metadata:
  name: my-user
  labels:
    strimzi.io/kind: KafkaUser
    strimzi.io/cluster: my-cluster
type: Opaque
data:
  ca.crt: <public_key> # Public key of the clients CA
  user.crt: <user_certificate> # Public key of the user
  user.key: <user_private_key> # Private key of the user
  user.p12: <store> # PKCS #12 store for user certificates and keys
  user.password: <password_for_store> # Protects the PKCS #12 store
The my-user
secret provides the keystore credentials (user.crt
and user.key
) for the connector.
The <cluster_name>-cluster-ca-cert
secret generated when deploying the Kafka cluster provides the cluster CA certificate as truststore credentials (ca.crt
).
Procedure
Configure the
KafkaConnect
resource.-
Enable the
DirectoryConfigProvider
-
Specify the files using the
externalConfiguration
property.
Example Kafka Connect configuration to use external property files
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect
spec:
  # ...
  config:
    config.providers: directory 1
    config.providers.directory.class: org.apache.kafka.common.config.provider.DirectoryConfigProvider 2
    #...
  externalConfiguration:
    volumes: 3
      - name: cluster-ca
        secret:
          secretName: my-cluster-cluster-ca-cert 4
      - name: my-user
        secret:
          secretName: my-user 5
- 1
- The alias for the configuration provider is used to define other configuration parameters.
- 2
DirectoryConfigProvider
provides values from files in a directory. The parameter uses the alias fromconfig.providers
, taking the formconfig.providers.${alias}.class
.- 3
- The names of the volumes containing the secrets.
- 4
- The name of the secret for the cluster CA certificate to supply truststore configuration.
- 5
- The name of the secret for the user to supply keystore configuration.
Create or update the resource to enable the provider.
oc apply -f <kafka_connect_configuration_file>
Reference the file properties in the connector configuration as placeholders.
Example connector configuration referencing the files