Chapter 6. Deploying AMQ Streams using installation artifacts
Having prepared your environment for a deployment of AMQ Streams, you can deploy AMQ Streams to an OpenShift cluster. Use the installation files provided with the release artifacts.
AMQ Streams is based on Strimzi 0.34.x. You can deploy AMQ Streams 2.4 on OpenShift 4.10 to 4.13.
The steps to deploy AMQ Streams using the installation files are as follows:
- Deploy the Cluster Operator
- Use the Cluster Operator to deploy the Kafka cluster, including the Topic Operator and User Operator
- Optionally, deploy other Kafka components (such as Kafka Connect, Kafka MirrorMaker, or Kafka Bridge) according to your requirements

To run the commands in this guide, an OpenShift user must have the rights to manage role-based access control (RBAC) and CRDs.
6.1. Basic deployment path
You can set up a deployment where AMQ Streams manages a single Kafka cluster in the same namespace. You might use this configuration for development or testing. Or you can use AMQ Streams in a production environment to manage a number of Kafka clusters in different namespaces.
The first step for any deployment of AMQ Streams is to install the Cluster Operator using the install/cluster-operator files.

A single command applies all the installation files in the cluster-operator folder: oc apply -f ./install/cluster-operator.
The command sets up everything you need to be able to create and manage a Kafka deployment, including the following:
- Cluster Operator (Deployment, ConfigMap)
- AMQ Streams CRDs (CustomResourceDefinition)
- RBAC resources (ClusterRole, ClusterRoleBinding, RoleBinding)
- Service account (ServiceAccount)
The basic deployment path is as follows:
- Download the release artifacts
- Create an OpenShift namespace in which to deploy the Cluster Operator
- Update the install/cluster-operator files to use the namespace created for the Cluster Operator
- Install the Cluster Operator to watch one, multiple, or all namespaces
- Create a Kafka cluster

After that, you can deploy other Kafka components and set up monitoring of your deployment.
6.2. Deploying the Cluster Operator
The Cluster Operator is responsible for deploying and managing Kafka clusters within an OpenShift cluster.
When the Cluster Operator is running, it starts to watch for updates of Kafka resources.
By default, a single replica of the Cluster Operator is deployed. You can add replicas with leader election so that additional Cluster Operators are on standby in case of disruption. For more information, see Section 13.2.5, “Running multiple Cluster Operator replicas with leader election”.
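For orientation, the following is a minimal sketch of how standby replicas might look in the Cluster Operator Deployment. The exact leader election environment variables are described in the section referenced above, so treat the variable name here as an assumption to verify against that section:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: strimzi-cluster-operator
spec:
  replicas: 3   # one active leader, two standby replicas
  template:
    spec:
      containers:
        - name: strimzi-cluster-operator
          env:
            # Assumed variable name; enables leader election so that
            # only one replica acts on resources at a time
            - name: STRIMZI_LEADER_ELECTION_ENABLED
              value: "true"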
6.2.1. Specifying the namespaces the Cluster Operator watches
The Cluster Operator watches for updates in the namespaces where the Kafka resources are deployed. When you deploy the Cluster Operator, you specify which namespaces to watch. You can specify the following namespaces:
- A single namespace (the same namespace containing the Cluster Operator)
- Multiple namespaces
- All namespaces
The Cluster Operator can watch one, multiple, or all namespaces in an OpenShift cluster. The Topic Operator and User Operator watch for KafkaTopic and KafkaUser resources in a single namespace. For more information, see Section 13.1, “Watching namespaces with AMQ Streams operators”.
The Cluster Operator watches for changes to the following resources:
- Kafka for the Kafka cluster.
- KafkaConnect for the Kafka Connect cluster.
- KafkaConnector for creating and managing connectors in a Kafka Connect cluster.
- KafkaMirrorMaker for the Kafka MirrorMaker instance.
- KafkaMirrorMaker2 for the Kafka MirrorMaker 2 instance.
- KafkaBridge for the Kafka Bridge instance.
- KafkaRebalance for the Cruise Control optimization requests.
When one of these resources is created in the OpenShift cluster, the operator gets the cluster description from the resource and starts creating a new cluster for the resource by creating the necessary OpenShift resources, such as StatefulSets, Services and ConfigMaps.
Each time a Kafka resource is updated, the operator performs corresponding updates on the OpenShift resources that make up the cluster for the resource.
Resources are either patched or deleted, and then recreated in order to make the cluster for the resource reflect the desired state of the cluster. This operation might cause a rolling update that might lead to service disruption.
When a resource is deleted, the operator undeploys the cluster and deletes all related OpenShift resources.
6.2.2. Deploying the Cluster Operator to watch a single namespace
This procedure shows how to deploy the Cluster Operator to watch AMQ Streams resources in a single namespace in your OpenShift cluster.
Prerequisites
- You need an account with permission to create and manage CustomResourceDefinition and RBAC (ClusterRole and RoleBinding) resources.
Procedure
Edit the AMQ Streams installation files to use the namespace the Cluster Operator is going to be installed into. For example, in this procedure the Cluster Operator is installed into the namespace my-cluster-operator-namespace.

On Linux, use:
sed -i 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml
On MacOS, use:
sed -i '' 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml
Deploy the Cluster Operator:
oc create -f install/cluster-operator -n my-cluster-operator-namespace
Check the status of the deployment:
oc get deployments -n my-cluster-operator-namespace
Output shows the deployment name and readiness:

NAME                       READY  UP-TO-DATE  AVAILABLE
strimzi-cluster-operator   1/1    1           1

READY shows the number of replicas that are ready/expected. The deployment is successful when the AVAILABLE output shows 1.
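If the deployment does not report as available, you can inspect the Cluster Operator logs for errors. This is a standard oc command targeting the deployment shown above:

oc logs deployment/strimzi-cluster-operator -n my-cluster-operator-namespace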
6.2.3. Deploying the Cluster Operator to watch multiple namespaces
This procedure shows how to deploy the Cluster Operator to watch AMQ Streams resources across multiple namespaces in your OpenShift cluster.
Prerequisites
- You need an account with permission to create and manage CustomResourceDefinition and RBAC (ClusterRole and RoleBinding) resources.
Procedure
Edit the AMQ Streams installation files to use the namespace the Cluster Operator is going to be installed into. For example, in this procedure the Cluster Operator is installed into the namespace my-cluster-operator-namespace.

On Linux, use:
sed -i 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml
On MacOS, use:
sed -i '' 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml
Edit the install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml file to add a list of all the namespaces the Cluster Operator will watch to the STRIMZI_NAMESPACE environment variable. For example, in this procedure the Cluster Operator will watch the namespaces watched-namespace-1, watched-namespace-2, and watched-namespace-3.

apiVersion: apps/v1
kind: Deployment
spec:
  # ...
  template:
    spec:
      serviceAccountName: strimzi-cluster-operator
      containers:
        - name: strimzi-cluster-operator
          image: registry.redhat.io/amq-streams/strimzi-rhel8-operator:2.4.0
          imagePullPolicy: IfNotPresent
          env:
            - name: STRIMZI_NAMESPACE
              value: watched-namespace-1,watched-namespace-2,watched-namespace-3
For each namespace listed, install the RoleBindings. In this example, we replace watched-namespace in these commands with the namespaces listed in the previous step, repeating the commands for watched-namespace-1, watched-namespace-2, and watched-namespace-3:

oc create -f install/cluster-operator/020-RoleBinding-strimzi-cluster-operator.yaml -n <watched_namespace>
oc create -f install/cluster-operator/023-RoleBinding-strimzi-cluster-operator.yaml -n <watched_namespace>
oc create -f install/cluster-operator/031-RoleBinding-strimzi-cluster-operator-entity-operator-delegation.yaml -n <watched_namespace>
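Repeating the three commands for every namespace quickly becomes tedious. As a convenience, here is a minimal shell sketch that loops over the watched namespaces; the namespace names are the example values used in this procedure:

for ns in watched-namespace-1 watched-namespace-2 watched-namespace-3; do
  # Apply the three RoleBindings the Cluster Operator needs in each watched namespace
  oc create -f install/cluster-operator/020-RoleBinding-strimzi-cluster-operator.yaml -n "$ns"
  oc create -f install/cluster-operator/023-RoleBinding-strimzi-cluster-operator.yaml -n "$ns"
  oc create -f install/cluster-operator/031-RoleBinding-strimzi-cluster-operator-entity-operator-delegation.yaml -n "$ns"
done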
Deploy the Cluster Operator:
oc create -f install/cluster-operator -n my-cluster-operator-namespace
Check the status of the deployment:
oc get deployments -n my-cluster-operator-namespace
Output shows the deployment name and readiness:

NAME                       READY  UP-TO-DATE  AVAILABLE
strimzi-cluster-operator   1/1    1           1

READY shows the number of replicas that are ready/expected. The deployment is successful when the AVAILABLE output shows 1.
6.2.4. Deploying the Cluster Operator to watch all namespaces
This procedure shows how to deploy the Cluster Operator to watch AMQ Streams resources across all namespaces in your OpenShift cluster.
When running in this mode, the Cluster Operator automatically manages clusters in any new namespaces that are created.
Prerequisites
- You need an account with permission to create and manage CustomResourceDefinition and RBAC (ClusterRole and RoleBinding) resources.
Procedure
Edit the AMQ Streams installation files to use the namespace the Cluster Operator is going to be installed into. For example, in this procedure the Cluster Operator is installed into the namespace my-cluster-operator-namespace.

On Linux, use:
sed -i 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml
On MacOS, use:
sed -i '' 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml
Edit the install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml file to set the value of the STRIMZI_NAMESPACE environment variable to *.

apiVersion: apps/v1
kind: Deployment
spec:
  # ...
  template:
    spec:
      # ...
      serviceAccountName: strimzi-cluster-operator
      containers:
        - name: strimzi-cluster-operator
          image: registry.redhat.io/amq-streams/strimzi-rhel8-operator:2.4.0
          imagePullPolicy: IfNotPresent
          env:
            - name: STRIMZI_NAMESPACE
              value: "*"
          # ...
Create ClusterRoleBindings that grant cluster-wide access for all namespaces to the Cluster Operator:

oc create clusterrolebinding strimzi-cluster-operator-namespaced --clusterrole=strimzi-cluster-operator-namespaced --serviceaccount my-cluster-operator-namespace:strimzi-cluster-operator
oc create clusterrolebinding strimzi-cluster-operator-watched --clusterrole=strimzi-cluster-operator-watched --serviceaccount my-cluster-operator-namespace:strimzi-cluster-operator
oc create clusterrolebinding strimzi-cluster-operator-entity-operator-delegation --clusterrole=strimzi-entity-operator --serviceaccount my-cluster-operator-namespace:strimzi-cluster-operator
Deploy the Cluster Operator to your OpenShift cluster.
oc create -f install/cluster-operator -n my-cluster-operator-namespace
Check the status of the deployment:
oc get deployments -n my-cluster-operator-namespace
Output shows the deployment name and readiness:

NAME                       READY  UP-TO-DATE  AVAILABLE
strimzi-cluster-operator   1/1    1           1

READY shows the number of replicas that are ready/expected. The deployment is successful when the AVAILABLE output shows 1.
6.3. Deploying Kafka
To be able to manage a Kafka cluster with the Cluster Operator, you must deploy it as a Kafka resource. AMQ Streams provides example deployment files to do this. You can use these files to deploy the Topic Operator and User Operator at the same time.

After you have deployed the Cluster Operator, use a Kafka resource to deploy the following components:

- Kafka cluster
- Topic Operator
- User Operator
When installing Kafka, AMQ Streams also installs a ZooKeeper cluster and adds the necessary configuration to connect Kafka with ZooKeeper.
If you haven’t deployed a Kafka cluster as a Kafka resource, you can’t use the Cluster Operator to manage it. This applies, for example, to a Kafka cluster running outside of OpenShift. However, you can use the Topic Operator and User Operator with a Kafka cluster that is not managed by AMQ Streams, by deploying them as standalone components. You can also deploy and use other Kafka components with a Kafka cluster not managed by AMQ Streams.
6.3.1. Deploying the Kafka cluster
This procedure shows how to deploy a Kafka cluster to your OpenShift cluster using the Cluster Operator.
The deployment uses a YAML file to provide the specification to create a Kafka resource.
AMQ Streams provides the following example files you can use to create a Kafka cluster:
- kafka-persistent.yaml: Deploys a persistent cluster with three ZooKeeper and three Kafka nodes.
- kafka-jbod.yaml: Deploys a persistent cluster with three ZooKeeper and three Kafka nodes (each using multiple persistent volumes).
- kafka-persistent-single.yaml: Deploys a persistent cluster with a single ZooKeeper node and a single Kafka node.
- kafka-ephemeral.yaml: Deploys an ephemeral cluster with three ZooKeeper and three Kafka nodes.
- kafka-ephemeral-single.yaml: Deploys an ephemeral cluster with three ZooKeeper nodes and a single Kafka node.
In this procedure, we use the examples for an ephemeral and persistent Kafka cluster deployment.
- Ephemeral cluster: In general, an ephemeral (or temporary) Kafka cluster is suitable for development and testing purposes, not for production. This deployment uses emptyDir volumes for storing broker information (for ZooKeeper) and topics or partitions (for Kafka). Using an emptyDir volume means that its content is strictly related to the pod life cycle and is deleted when the pod goes down.
- Persistent cluster: A persistent Kafka cluster uses persistent volumes to store ZooKeeper and Kafka data. A PersistentVolume is acquired using a PersistentVolumeClaim to make it independent of the actual type of the PersistentVolume. The PersistentVolumeClaim can use a StorageClass to trigger automatic volume provisioning. When no StorageClass is specified, OpenShift tries to use the default StorageClass.

The following examples show some common types of persistent volumes:
- If your OpenShift cluster runs on Amazon AWS, OpenShift can provision Amazon EBS volumes
- If your OpenShift cluster runs on Microsoft Azure, OpenShift can provision Azure Disk Storage volumes
- If your OpenShift cluster runs on Google Cloud, OpenShift can provision Persistent Disk volumes
- If your OpenShift cluster runs on bare metal, OpenShift can provision local persistent volumes
The example YAML files specify the latest supported Kafka version, and configuration for its supported log message format version and inter-broker protocol version. The inter.broker.protocol.version property for the Kafka config must be the version supported by the specified Kafka version (spec.kafka.version). The property represents the version of the Kafka protocol used in a Kafka cluster.

From Kafka 3.0.0, when the inter.broker.protocol.version is set to 3.0 or higher, the log.message.format.version option is ignored and doesn’t need to be set.

An update to the inter.broker.protocol.version is required when upgrading Kafka.
The example clusters are named my-cluster by default. The cluster name is defined by the name of the resource and cannot be changed after the cluster has been deployed. To change the cluster name before you deploy the cluster, edit the Kafka.metadata.name property of the Kafka resource in the relevant YAML file.
Default cluster name and specified Kafka versions
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    version: 3.4.0
    # ...
    config:
      # ...
      log.message.format.version: "3.4"
      inter.broker.protocol.version: "3.4"
      # ...
Prerequisites

- The Cluster Operator must be deployed.
Procedure
Create and deploy an ephemeral or persistent cluster.
To create and deploy an ephemeral cluster:
oc apply -f examples/kafka/kafka-ephemeral.yaml
To create and deploy a persistent cluster:
oc apply -f examples/kafka/kafka-persistent.yaml
Check the status of the deployment:
oc get pods -n <my_cluster_operator_namespace>
Output shows the pod names and readiness:

NAME                         READY  STATUS   RESTARTS
my-cluster-entity-operator   3/3    Running  0
my-cluster-kafka-0           1/1    Running  0
my-cluster-kafka-1           1/1    Running  0
my-cluster-kafka-2           1/1    Running  0
my-cluster-zookeeper-0       1/1    Running  0
my-cluster-zookeeper-1       1/1    Running  0
my-cluster-zookeeper-2       1/1    Running  0

my-cluster is the name of the Kafka cluster. A sequential index number starting with 0 identifies each Kafka and ZooKeeper pod created. With the default deployment, you create an Entity Operator cluster, 3 Kafka pods, and 3 ZooKeeper pods.

READY shows the number of replicas that are ready/expected. The deployment is successful when the STATUS shows as Running.
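As an alternative to polling the pods, you can block until the Kafka custom resource itself reports readiness. This uses the standard oc wait command against the resource's Ready status condition; the timeout value is an arbitrary example:

oc wait kafka/my-cluster --for=condition=Ready --timeout=300s -n <my_cluster_operator_namespace>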
6.3.2. Deploying the Topic Operator using the Cluster Operator
This procedure describes how to deploy the Topic Operator using the Cluster Operator.
You configure the entityOperator property of the Kafka resource to include the topicOperator. By default, the Topic Operator watches for KafkaTopic resources in the namespace of the Kafka cluster deployed by the Cluster Operator. You can also specify a namespace using watchedNamespace in the Topic Operator spec. A single Topic Operator can watch a single namespace. One namespace should be watched by only one Topic Operator.
If you use AMQ Streams to deploy multiple Kafka clusters into the same namespace, enable the Topic Operator for only one Kafka cluster or use the watchedNamespace property to configure the Topic Operators to watch other namespaces.
If you want to use the Topic Operator with a Kafka cluster that is not managed by AMQ Streams, you must deploy the Topic Operator as a standalone component.
For more information about configuring the entityOperator and topicOperator properties, see Configuring the Entity Operator.
Prerequisites

- The Cluster Operator must be deployed.
Procedure
Edit the entityOperator properties of the Kafka resource to include topicOperator:

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  #...
  entityOperator:
    topicOperator: {}
    userOperator: {}
Configure the Topic Operator spec using the properties described in the EntityTopicOperatorSpec schema reference. Use an empty object ({}) if you want all properties to use their default values.

Create or update the resource:

oc apply -f <kafka_configuration_file>
Check the status of the deployment:
oc get pods -n <my_cluster_operator_namespace>
Output shows the pod name and readiness:

NAME                         READY  STATUS   RESTARTS
my-cluster-entity-operator   3/3    Running  0
# ...

my-cluster is the name of the Kafka cluster. READY shows the number of replicas that are ready/expected. The deployment is successful when the STATUS shows as Running.
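Once the Topic Operator is running, a quick way to verify it is to create a topic declaratively. The following is a minimal sketch of a KafkaTopic resource; the topic name and sizing values are illustrative assumptions:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-topic          # example topic name
  labels:
    strimzi.io/cluster: my-cluster   # must match the Kafka cluster name
spec:
  partitions: 3
  replicas: 3

Apply it with oc apply -f <topic_configuration_file>; the Topic Operator then creates the topic in Kafka.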
6.3.3. Deploying the User Operator using the Cluster Operator
This procedure describes how to deploy the User Operator using the Cluster Operator.
You configure the entityOperator property of the Kafka resource to include the userOperator. By default, the User Operator watches for KafkaUser resources in the namespace of the Kafka cluster deployment. You can also specify a namespace using watchedNamespace in the User Operator spec. A single User Operator can watch a single namespace. One namespace should be watched by only one User Operator.
If you want to use the User Operator with a Kafka cluster that is not managed by AMQ Streams, you must deploy the User Operator as a standalone component.
For more information about configuring the entityOperator and userOperator properties, see Configuring the Entity Operator.
Prerequisites

- The Cluster Operator must be deployed.
Procedure
Edit the entityOperator properties of the Kafka resource to include userOperator:

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  #...
  entityOperator:
    topicOperator: {}
    userOperator: {}
Configure the User Operator spec using the properties described in the EntityUserOperatorSpec schema reference. Use an empty object ({}) if you want all properties to use their default values.

Create or update the resource:

oc apply -f <kafka_configuration_file>
Check the status of the deployment:
oc get pods -n <my_cluster_operator_namespace>
Output shows the pod name and readiness:

NAME                         READY  STATUS   RESTARTS
my-cluster-entity-operator   3/3    Running  0
# ...

my-cluster is the name of the Kafka cluster. READY shows the number of replicas that are ready/expected. The deployment is successful when the STATUS shows as Running.
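As with topics, you can verify the User Operator by declaring a user. This is a minimal sketch of a KafkaUser resource with TLS client authentication; the user name is an illustrative assumption:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: my-user            # example user name
  labels:
    strimzi.io/cluster: my-cluster   # must match the Kafka cluster name
spec:
  authentication:
    type: tls

After you apply the resource, the User Operator creates the user and a Secret of the same name containing its client credentials.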
6.4. Deploying Kafka Connect
Kafka Connect is a tool for streaming data between Apache Kafka and other systems. For example, Kafka Connect might integrate Kafka with external databases or storage and messaging systems.
In AMQ Streams, Kafka Connect is deployed in distributed mode. Kafka Connect can also work in standalone mode, but this is not supported by AMQ Streams.
Using the concept of connectors, Kafka Connect provides a framework for moving large amounts of data into and out of your Kafka cluster while maintaining scalability and reliability.
The Cluster Operator manages Kafka Connect clusters deployed using the KafkaConnect resource and connectors created using the KafkaConnector resource.
In order to use Kafka Connect, you need to do the following: deploy a Kafka Connect cluster to your OpenShift cluster, add the connector plugins you need, and then create and manage connectors.
The term connector is used interchangeably to mean a connector instance running within a Kafka Connect cluster, or a connector class. In this guide, the term connector is used when the meaning is clear from the context.
6.4.1. Deploying Kafka Connect to your OpenShift cluster
This procedure shows how to deploy a Kafka Connect cluster to your OpenShift cluster using the Cluster Operator.
A Kafka Connect cluster deployment is implemented with a configurable number of nodes (also called workers) that distribute the workload of connectors as tasks so that the message flow is highly scalable and reliable.
The deployment uses a YAML file to provide the specification to create a KafkaConnect resource.
AMQ Streams provides example configuration files. In this procedure, we use the following example file:

- examples/connect/kafka-connect.yaml
Prerequisites

- The Cluster Operator must be deployed.
Procedure
Deploy Kafka Connect to your OpenShift cluster. Use the examples/connect/kafka-connect.yaml file to deploy Kafka Connect:

oc apply -f examples/connect/kafka-connect.yaml
Check the status of the deployment:
oc get pods -n <my_cluster_operator_namespace>
Output shows the deployment name and readiness:

NAME                                  READY  STATUS   RESTARTS
my-connect-cluster-connect-<pod_id>   1/1    Running  0

my-connect-cluster is the name of the Kafka Connect cluster. A pod ID identifies each pod created. With the default deployment, you create a single Kafka Connect pod.

READY shows the number of replicas that are ready/expected. The deployment is successful when the STATUS shows as Running.
6.4.2. Configuring Kafka Connect for multiple instances
If you are running multiple instances of Kafka Connect, you have to change the default configuration of the following config properties:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect
spec:
  # ...
  config:
    group.id: connect-cluster
    offset.storage.topic: connect-cluster-offsets
    config.storage.topic: connect-cluster-configs
    status.storage.topic: connect-cluster-status
  # ...
Values for the three topics must be the same for all Kafka Connect instances with the same group.id.
Unless you change the default settings, each Kafka Connect instance connecting to the same Kafka cluster is deployed with the same values. What happens, in effect, is all instances are coupled to run in a cluster and use the same topics.
If multiple Kafka Connect clusters try to use the same topics, Kafka Connect will not work as expected and generate errors.
If you want to run multiple Kafka Connect instances, change the values of these properties for each instance.
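For example, a second Kafka Connect cluster attached to the same Kafka cluster might be configured as follows. This is a sketch; the my-connect-2 name and the topic names are illustrative assumptions:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect-2
spec:
  # ...
  config:
    # A distinct group.id and distinct internal topics keep this instance
    # from being coupled into the first Kafka Connect cluster
    group.id: connect-cluster-2
    offset.storage.topic: connect-cluster-2-offsets
    config.storage.topic: connect-cluster-2-configs
    status.storage.topic: connect-cluster-2-status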
6.4.3. Adding connectors
Kafka Connect uses connectors to integrate with other systems to stream data. A connector is an instance of a Kafka Connector class, which can be one of the following types:

- Source connector: A source connector is a runtime entity that fetches data from an external system and feeds it to Kafka as messages.
- Sink connector: A sink connector is a runtime entity that fetches messages from Kafka topics and feeds them to an external system.
Kafka Connect uses a plugin architecture to provide the implementation artifacts for connectors. Plugins allow connections to other systems and provide additional configuration to manipulate data. Plugins include connectors and other components, such as data converters and transforms. A connector operates with a specific type of external system. Each connector defines a schema for its configuration. You supply the configuration to Kafka Connect to create a connector instance within Kafka Connect. Connector instances then define a set of tasks for moving data between systems.
Add connector plugins to Kafka Connect in one of the following ways:
- Configure Kafka Connect to build a new container image with plugins automatically
- Create a Docker image from the base Kafka Connect image (manually or using continuous integration)
After plugins have been added to the container image, you can start, stop, and manage connector instances using KafkaConnector custom resources or the Kafka Connect REST API. You can also create new connector instances using these options.
6.4.3.1. Building a new container image with connector plugins automatically
Configure Kafka Connect so that AMQ Streams automatically builds a new container image with additional connectors. You define the connector plugins using the .spec.build.plugins property of the KafkaConnect custom resource. AMQ Streams automatically downloads and adds the connector plugins into a new container image. The container is pushed into the container repository specified in .spec.build.output and automatically used in the Kafka Connect deployment.
Prerequisites
- The Cluster Operator must be deployed.
- A container registry.
You need to provide your own container registry where images can be pushed to, stored, and pulled from. AMQ Streams supports private container registries as well as public registries such as Quay or Docker Hub.
Procedure
Configure the KafkaConnect custom resource by specifying the container registry in .spec.build.output, and additional connectors in .spec.build.plugins:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect-cluster
spec:
  #...
  build:
    output:
      type: docker
      image: my-registry.io/my-org/my-connect-cluster:latest
      pushSecret: my-registry-credentials
    plugins:
      - name: debezium-postgres-connector
        artifacts:
          - type: tgz
            url: https://repo1.maven.org/maven2/io/debezium/debezium-connector-postgres/2.1.3.Final/debezium-connector-postgres-2.1.3.Final-plugin.tar.gz
            sha512sum: c4ddc97846de561755dc0b021a62aba656098829c70eb3ade3b817ce06d852ca12ae50c0281cc791a5a131cb7fc21fb15f4b8ee76c6cae5dd07f9c11cb7c6e79
      - name: camel-telegram
        artifacts:
          - type: tgz
            url: https://repo.maven.apache.org/maven2/org/apache/camel/kafkaconnector/camel-telegram-kafka-connector/0.11.5/camel-telegram-kafka-connector-0.11.5-package.tar.gz
            sha512sum: d6d9f45e0d1dbfcc9f6d1c7ca2046168c764389c78bc4b867dab32d24f710bb74ccf2a007d7d7a8af2dfca09d9a52ccbc2831fc715c195a3634cca055185bd91
  #...
Create or update the resource:

oc apply -f <kafka_connect_configuration_file>

- Wait for the new container image to build, and for the Kafka Connect cluster to be deployed.
- Use the Kafka Connect REST API or KafkaConnector custom resources to use the connector plugins you added.
6.4.3.2. Building a new container image with connector plugins from the Kafka Connect base image
Create a custom Docker image with connector plugins from the Kafka Connect base image, adding the plugins to the /opt/kafka/plugins directory of the custom image.
You can use the Kafka container image on Red Hat Ecosystem Catalog as a base image for creating your own custom image with additional connector plugins.
At startup, the AMQ Streams version of Kafka Connect loads any third-party connector plugins contained in the /opt/kafka/plugins directory.
Prerequisites
Procedure
Create a new Dockerfile using registry.redhat.io/amq-streams/kafka-34-rhel8:2.4.0 as the base image:

FROM registry.redhat.io/amq-streams/kafka-34-rhel8:2.4.0
USER root:root
COPY ./my-plugins/ /opt/kafka/plugins/
USER 1001
Example plugins file
$ tree ./my-plugins/
./my-plugins/
├── debezium-connector-mongodb
│   ├── bson-<version>.jar
│   ├── CHANGELOG.md
│   ├── CONTRIBUTE.md
│   ├── COPYRIGHT.txt
│   ├── debezium-connector-mongodb-<version>.jar
│   ├── debezium-core-<version>.jar
│   ├── LICENSE.txt
│   ├── mongodb-driver-core-<version>.jar
│   ├── README.md
│   └── # ...
├── debezium-connector-mysql
│   ├── CHANGELOG.md
│   ├── CONTRIBUTE.md
│   ├── COPYRIGHT.txt
│   ├── debezium-connector-mysql-<version>.jar
│   ├── debezium-core-<version>.jar
│   ├── LICENSE.txt
│   ├── mysql-binlog-connector-java-<version>.jar
│   ├── mysql-connector-java-<version>.jar
│   ├── README.md
│   └── # ...
└── debezium-connector-postgres
    ├── CHANGELOG.md
    ├── CONTRIBUTE.md
    ├── COPYRIGHT.txt
    ├── debezium-connector-postgres-<version>.jar
    ├── debezium-core-<version>.jar
    ├── LICENSE.txt
    ├── postgresql-<version>.jar
    ├── protobuf-java-<version>.jar
    ├── README.md
    └── # ...
The COPY command points to the plugin files to copy to the container image.
This example adds plugins for Debezium connectors (MongoDB, MySQL, and PostgreSQL), though not all files are listed for brevity. Debezium running in Kafka Connect looks the same as any other Kafka Connect task.
- Build the container image.
- Push your custom image to your container registry.
Point to the new container image. You can point to the image in one of the following ways:

Edit the KafkaConnect.spec.image property of the KafkaConnect custom resource. If set, this property overrides the STRIMZI_KAFKA_CONNECT_IMAGES environment variable in the Cluster Operator.

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect-cluster
spec:
  #...
  image: my-new-container-image
  config:
    #...
- Edit the STRIMZI_KAFKA_CONNECT_IMAGES environment variable in the install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml file to point to the new container image, and then reinstall the Cluster Operator.
6.4.3.3. Deploying KafkaConnector resources
Deploy KafkaConnector resources to manage connectors. The KafkaConnector custom resource offers an OpenShift-native approach to management of connectors by the Cluster Operator. You don’t need to send HTTP requests to manage connectors, as with the Kafka Connect REST API. You manage a running connector instance by updating its corresponding KafkaConnector resource, and then applying the updates. The Cluster Operator updates the configurations of the running connector instances. You remove a connector by deleting its corresponding KafkaConnector.
KafkaConnector resources must be deployed to the same namespace as the Kafka Connect cluster they link to.
In the configuration shown in this procedure, the autoRestart property is set to true. This enables automatic restarts of failed connectors and tasks. Up to seven restart attempts are made, after which restarts must be made manually. You annotate the KafkaConnector resource to restart a connector or a connector task manually.
Example connectors
You can use your own connectors or try the examples provided by AMQ Streams. Up until Apache Kafka 3.1.0, example file connector plugins were included with Apache Kafka. Starting from the 3.1.1 and 3.2.0 releases of Apache Kafka, the examples need to be added to the plugin path as any other connector.
AMQ Streams provides an example KafkaConnector configuration file (examples/connect/source-connector.yaml) for the example file connector plugins, which creates the following connector instances as KafkaConnector resources:
- A FileStreamSourceConnector instance that reads each line from the Kafka license file (the source) and writes the data as messages to a single Kafka topic.
- A FileStreamSinkConnector instance that reads messages from the Kafka topic and writes the messages to a temporary file (the sink).
We use the example file to create connectors in this procedure.
The example connectors are not intended for use in a production environment.
Prerequisites
- A Kafka Connect deployment
- The Cluster Operator is running
Procedure
Add the FileStreamSourceConnector and FileStreamSinkConnector plugins to Kafka Connect in one of the following ways:

- Configure Kafka Connect to build a new container image with plugins automatically
- Create a Docker image from the base Kafka Connect image (manually or using continuous integration)
Set the strimzi.io/use-connector-resources annotation to true in the Kafka Connect configuration:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect-cluster
  annotations:
    strimzi.io/use-connector-resources: "true"
spec:
  # ...
With the KafkaConnector resources enabled, the Cluster Operator watches for them.

Edit the examples/connect/source-connector.yaml file:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: my-source-connector 1
  labels:
    strimzi.io/cluster: my-connect-cluster 2
spec:
  class: org.apache.kafka.connect.file.FileStreamSourceConnector 3
  tasksMax: 2 4
  autoRestart: 5
    enabled: true
  config: 6
    file: "/opt/kafka/LICENSE" 7
    topic: my-topic 8
  # ...
1. Name of the KafkaConnector resource, which is used as the name of the connector. Use any name that is valid for an OpenShift resource.
2. Name of the Kafka Connect cluster to create the connector instance in. Connectors must be deployed to the same namespace as the Kafka Connect cluster they link to.
3. Full name or alias of the connector class. This should be present in the image being used by the Kafka Connect cluster.
4. Maximum number of Kafka Connect tasks that the connector can create.
5. Enables automatic restarts of failed connectors and tasks.
6. Connector configuration as key-value pairs.
7. This example source connector configuration reads data from the /opt/kafka/LICENSE file.
8. Kafka topic to publish the source data to.
Create the source KafkaConnector in your OpenShift cluster:

oc apply -f examples/connect/source-connector.yaml
Create an examples/connect/sink-connector.yaml file:

touch examples/connect/sink-connector.yaml
Paste the following YAML into the sink-connector.yaml file:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: my-sink-connector
  labels:
    strimzi.io/cluster: my-connect
spec:
  class: org.apache.kafka.connect.file.FileStreamSinkConnector 1
  tasksMax: 2
  config: 2
    file: "/tmp/my-file" 3
    topics: my-topic 4
1. Full name or alias of the connector class. This should be present in the image being used by the Kafka Connect cluster.
2. Connector configuration as key-value pairs.
3. Temporary file that the messages are written to (the sink).
4. Kafka topic to read the source data from.
Create the sink KafkaConnector in your OpenShift cluster:

oc apply -f examples/connect/sink-connector.yaml
Check that the connector resources were created:

oc get kctr --selector strimzi.io/cluster=<my_connect_cluster> -o name

my-source-connector
my-sink-connector

Replace <my_connect_cluster> with the name of your Kafka Connect cluster.
In the container, execute kafka-console-consumer.sh to read the messages that were written to the topic by the source connector:

oc exec <my_kafka_cluster>-kafka-0 -i -t -- bin/kafka-console-consumer.sh --bootstrap-server <my_kafka_cluster>-kafka-bootstrap.NAMESPACE.svc:9092 --topic my-topic --from-beginning
Replace <my_kafka_cluster> with the name of your Kafka cluster.
Source and sink connector configuration options
The connector configuration is defined in the spec.config property of the KafkaConnector resource.

The FileStreamSourceConnector and FileStreamSinkConnector classes support the same configuration options as the Kafka Connect REST API. Other connectors support different configuration options.
FileStreamSourceConnector options:

| Name | Type | Default value | Description |
|---|---|---|---|
| file | String | Null | Source file to read messages from. If not specified, the standard input is used. |
| topic | List | Null | The Kafka topic to publish data to. |

FileStreamSinkConnector options:

| Name | Type | Default value | Description |
|---|---|---|---|
| file | String | Null | Destination file to write messages to. If not specified, the standard output is used. |
| topics | List | Null | One or more Kafka topics to read data from. |
| topics.regex | String | Null | A regular expression matching one or more Kafka topics to read data from. |
6.4.3.4. Manually restarting connectors
If you are using KafkaConnector resources to manage connectors, use the restart annotation to manually trigger a restart of a connector.
Prerequisites
- The Cluster Operator is running.
Procedure
Find the name of the KafkaConnector custom resource that controls the Kafka connector you want to restart:

oc get KafkaConnector

Restart the connector by annotating the KafkaConnector resource in OpenShift:

oc annotate KafkaConnector <kafka_connector_name> strimzi.io/restart=true
The restart annotation is set to true.

Wait for the next reconciliation to occur (every two minutes by default). The Kafka connector is restarted, as long as the annotation was detected by the reconciliation process. When Kafka Connect accepts the restart request, the annotation is removed from the KafkaConnector custom resource.
6.4.3.5. Manually restarting Kafka connector tasks
If you are using KafkaConnector resources to manage connectors, use the restart-task annotation to manually trigger a restart of a connector task.
Prerequisites
- The Cluster Operator is running.
Procedure
Find the name of the KafkaConnector custom resource that controls the Kafka connector task you want to restart:

oc get KafkaConnector

Find the ID of the task to be restarted from the KafkaConnector custom resource. Task IDs are non-negative integers, starting from 0:

oc describe KafkaConnector <kafka_connector_name>

Use the ID to restart the connector task by annotating the KafkaConnector resource in OpenShift:

oc annotate KafkaConnector <kafka_connector_name> strimzi.io/restart-task=0
In this example, task 0 is restarted.

Wait for the next reconciliation to occur (every two minutes by default). The Kafka connector task is restarted, as long as the annotation was detected by the reconciliation process. When Kafka Connect accepts the restart request, the annotation is removed from the KafkaConnector custom resource.
6.4.3.6. Exposing the Kafka Connect API
Use the Kafka Connect REST API as an alternative to using KafkaConnector resources to manage connectors. The Kafka Connect REST API is available as a service running on <connect_cluster_name>-connect-api:8083, where <connect_cluster_name> is the name of your Kafka Connect cluster. The service is created when you create a Kafka Connect instance.
The operations supported by the Kafka Connect REST API are described in the Apache Kafka Connect API documentation.
The strimzi.io/use-connector-resources annotation enables KafkaConnector resources. If you applied the annotation to your KafkaConnect resource configuration, you need to remove it to use the Kafka Connect API. Otherwise, manual changes made directly using the Kafka Connect REST API are reverted by the Cluster Operator.
You can add the connector configuration as a JSON object.
Example curl request to add connector configuration:

curl -X POST \
  http://my-connect-cluster-connect-api:8083/connectors \
  -H 'Content-Type: application/json' \
  -d '{
    "name": "my-source-connector",
    "config": {
      "connector.class": "org.apache.kafka.connect.file.FileStreamSourceConnector",
      "file": "/opt/kafka/LICENSE",
      "topic": "my-topic",
      "tasksMax": "4",
      "type": "source"
    }
  }'
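Other standard Kafka Connect REST API operations work against the same service. For example, assuming the same cluster name, you can list the deployed connectors and inspect one connector's status:

curl http://my-connect-cluster-connect-api:8083/connectors
curl http://my-connect-cluster-connect-api:8083/connectors/my-source-connector/status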
The API is only accessible within the OpenShift cluster. If you want to make the Kafka Connect API accessible to applications running outside of the OpenShift cluster, you can expose it manually by creating one of the following features:
- LoadBalancer or NodePort type services
- Ingress resources (Kubernetes only)
- OpenShift routes (OpenShift only)
The connection is insecure, so allow external access with caution.
If you decide to create services, use the labels from the selector of the <connect_cluster_name>-connect-api service to configure the pods to which the service will route the traffic:

Selector configuration for the service:

# ...
selector:
  strimzi.io/cluster: my-connect-cluster
  strimzi.io/kind: KafkaConnect
  strimzi.io/name: my-connect-cluster-connect
#...
You must also create a NetworkPolicy that allows HTTP requests from external clients.
Example NetworkPolicy to allow requests to the Kafka Connect API
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: my-custom-connect-network-policy
spec:
ingress:
- from:
- podSelector: 1
matchLabels:
app: my-connector-manager
ports:
- port: 8083
protocol: TCP
podSelector:
matchLabels:
strimzi.io/cluster: my-connect-cluster
strimzi.io/kind: KafkaConnect
strimzi.io/name: my-connect-cluster-connect
policyTypes:
- Ingress
1. The label of the pod that is allowed to connect to the API.
To add the connector configuration outside the cluster, use the URL of the resource that exposes the API in the curl command.
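For example, if you exposed the API through an OpenShift route, the request matches the earlier in-cluster example but targets the route host. The hostname below is a placeholder assumption; substitute the address reported by oc get routes:

curl -X POST \
  http://my-connect-api-route-host/connectors \
  -H 'Content-Type: application/json' \
  -d '{
    "name": "my-source-connector",
    "config": {
      "connector.class": "org.apache.kafka.connect.file.FileStreamSourceConnector",
      "file": "/opt/kafka/LICENSE",
      "topic": "my-topic",
      "tasksMax": "4",
      "type": "source"
    }
  }'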
6.4.3.7. Limiting access to the Kafka Connect API
It is crucial to restrict access to the Kafka Connect API only to trusted users to prevent unauthorized actions and potential security issues. The Kafka Connect API provides extensive capabilities for altering connector configurations, which makes it all the more important to take security precautions. Someone with access to the Kafka Connect API could potentially obtain sensitive information that an administrator may assume is secure.
The Kafka Connect REST API can be accessed by anyone who has authenticated access to the OpenShift cluster and knows the endpoint URL, which includes the hostname/IP address and port number.
For example, suppose an organization uses a Kafka Connect cluster and connectors to stream sensitive data from a customer database to a central database. The administrator uses a configuration provider plugin to store sensitive information related to connecting to the customer database and the central database, such as database connection details and authentication credentials. The configuration provider protects this sensitive information from being exposed to unauthorized users. However, someone who has access to the Kafka Connect API can still obtain access to the customer database without the consent of the administrator. They can do this by setting up a fake database and configuring a connector to connect to it. They then modify the connector configuration to point to the customer database, but instead of sending the data to the central database, they send it to the fake database. By configuring the connector to connect to the fake database, the login details and credentials for connecting to the customer database are intercepted, even though they are stored securely in the configuration provider.
If you are using the KafkaConnector custom resources, then by default the OpenShift RBAC rules permit only OpenShift cluster administrators to make changes to connectors. You can also designate non-cluster administrators to manage AMQ Streams resources. With KafkaConnector resources enabled in your Kafka Connect configuration, changes made directly using the Kafka Connect REST API are reverted by the Cluster Operator. If you are not using the KafkaConnector resource, the default RBAC rules do not limit access to the Kafka Connect API. If you want to limit direct access to the Kafka Connect REST API using OpenShift RBAC, you need to enable and use the KafkaConnector resources.
For improved security, we recommend configuring the following properties for the Kafka Connect API:
org.apache.kafka.disallowed.login.modules (Kafka 3.4 or later)

Set the org.apache.kafka.disallowed.login.modules Java system property to prevent the use of insecure login modules. For example, specifying com.sun.security.auth.module.JndiLoginModule prevents the use of the Kafka JndiLoginModule.

Example configuration for disallowing login modules:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect-cluster
  annotations:
    strimzi.io/use-connector-resources: "true"
spec:
  # ...
  jvmOptions:
    javaSystemProperties:
      - name: org.apache.kafka.disallowed.login.modules
        value: com.sun.security.auth.module.JndiLoginModule, org.apache.kafka.common.security.kerberos.KerberosLoginModule
  # ...
Only allow trusted login modules and follow the latest advice from Kafka for the version you are using. As a best practice, you should explicitly disallow insecure login modules in your Kafka Connect configuration by using the org.apache.kafka.disallowed.login.modules system property.

connector.client.config.override.policy

Set the connector.client.config.override.policy property to None to prevent connector configurations from overriding the Kafka Connect configuration and the consumers and producers it uses.

Example configuration to specify connector override policy:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect-cluster
  annotations:
    strimzi.io/use-connector-resources: "true"
spec:
  # ...
  config:
    connector.client.config.override.policy: None
  # ...
6.4.3.8. Switching from using the Kafka Connect API to using KafkaConnector custom resources
You can switch from using the Kafka Connect API to using KafkaConnector custom resources to manage your connectors. To make the switch, do the following in the order shown:

1. Deploy KafkaConnector resources with the configuration to create your connector instances.
2. Enable KafkaConnector resources in your Kafka Connect configuration by setting the strimzi.io/use-connector-resources annotation to true.

If you enable KafkaConnector resources before creating them, you delete all connectors.
To switch from using KafkaConnector resources to using the Kafka Connect API, first remove the annotation that enables the KafkaConnector resources from your Kafka Connect configuration. Otherwise, manual changes made directly using the Kafka Connect REST API are reverted by the Cluster Operator.
When making the switch, check the status of the KafkaConnect resource. The value of metadata.generation (the current version of the deployment) must match status.observedGeneration (the latest reconciliation of the resource). When the Kafka Connect cluster is Ready, you can delete the KafkaConnector resources. A quick way to compare the two values is shown below.
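A minimal sketch of that check, using a standard oc jsonpath query (the cluster name my-connect-cluster is an example):

oc get kafkaconnect my-connect-cluster \
  -o jsonpath='generation={.metadata.generation} observedGeneration={.status.observedGeneration}{"\n"}'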
6.5. Deploying Kafka MirrorMaker
The Cluster Operator deploys one or more Kafka MirrorMaker replicas to replicate data between Kafka clusters. This process is called mirroring to avoid confusion with the partition replication concept within a Kafka cluster. MirrorMaker consumes messages from the source cluster and republishes those messages to the target cluster.
6.5.1. Deploying Kafka MirrorMaker to your OpenShift cluster
This procedure shows how to deploy a Kafka MirrorMaker cluster to your OpenShift cluster using the Cluster Operator.
The deployment uses a YAML file to provide the specification to create a KafkaMirrorMaker or KafkaMirrorMaker2 resource, depending on the version of MirrorMaker deployed.
Kafka MirrorMaker 1 (referred to as just MirrorMaker in the documentation) has been deprecated in Apache Kafka 3.0.0 and will be removed in Apache Kafka 4.0.0. As a result, the KafkaMirrorMaker custom resource which is used to deploy Kafka MirrorMaker 1 has been deprecated in AMQ Streams as well. The KafkaMirrorMaker resource will be removed from AMQ Streams when we adopt Apache Kafka 4.0.0. As a replacement, use the KafkaMirrorMaker2 custom resource with the IdentityReplicationPolicy.
AMQ Streams provides example configuration files. In this procedure, we use the following example files:

- examples/mirror-maker/kafka-mirror-maker.yaml
- examples/mirror-maker/kafka-mirror-maker-2.yaml
Prerequisites

- The Cluster Operator must be deployed.
Procedure
Deploy Kafka MirrorMaker to your OpenShift cluster:
For MirrorMaker:
oc apply -f examples/mirror-maker/kafka-mirror-maker.yaml
For MirrorMaker 2:
oc apply -f examples/mirror-maker/kafka-mirror-maker-2.yaml
Check the status of the deployment:
oc get pods -n <my_cluster_operator_namespace>
Output shows the deployment name and readiness:

NAME                                    READY  STATUS   RESTARTS
my-mirror-maker-mirror-maker-<pod_id>   1/1    Running  1
my-mm2-cluster-mirrormaker2-<pod_id>    1/1    Running  1

my-mirror-maker is the name of the Kafka MirrorMaker cluster. my-mm2-cluster is the name of the Kafka MirrorMaker 2 cluster. A pod ID identifies each pod created. With the default deployment, you install a single MirrorMaker or MirrorMaker 2 pod.

READY shows the number of replicas that are ready/expected. The deployment is successful when the STATUS shows as Running.
6.6. Deploying Kafka Bridge
The Cluster Operator deploys one or more Kafka Bridge replicas to send data between Kafka clusters and clients via an HTTP API.
6.6.1. Deploying Kafka Bridge to your OpenShift cluster
This procedure shows how to deploy a Kafka Bridge cluster to your OpenShift cluster using the Cluster Operator.
The deployment uses a YAML file to provide the specification to create a KafkaBridge resource.
AMQ Streams provides example configuration files. In this procedure, we use the following example file:

- examples/bridge/kafka-bridge.yaml
Prerequisites

- The Cluster Operator must be deployed.
Procedure
Deploy Kafka Bridge to your OpenShift cluster:
oc apply -f examples/bridge/kafka-bridge.yaml
Check the status of the deployment:
oc get pods -n <my_cluster_operator_namespace>
Output shows the deployment name and readiness:

NAME                       READY  STATUS   RESTARTS
my-bridge-bridge-<pod_id>  1/1    Running  0

my-bridge is the name of the Kafka Bridge cluster. A pod ID identifies each pod created. With the default deployment, you install a single Kafka Bridge pod.

READY shows the number of replicas that are ready/expected. The deployment is successful when the STATUS shows as Running.
6.6.2. Exposing the Kafka Bridge service to your local machine
Use port forwarding to expose the AMQ Streams Kafka Bridge service to your local machine on http://localhost:8080.
Port forwarding is only suitable for development and testing purposes.
Procedure
List the names of the pods in your OpenShift cluster:

oc get pods -o name

pod/kafka-consumer
# ...
pod/my-bridge-bridge-<pod_id>

Connect to the Kafka Bridge pod on port 8080:

oc port-forward pod/my-bridge-bridge-<pod_id> 8080:8080 &

Note: If port 8080 on your local machine is already in use, use an alternative HTTP port, such as 8008.
API requests are now forwarded from port 8080 on your local machine to port 8080 in the Kafka Bridge pod.
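To confirm the forwarded connection works, you can call the Kafka Bridge HTTP API from your local machine. The /topics endpoint is part of the standard Kafka Bridge API and lists the topics in the cluster:

curl -X GET http://localhost:8080/topics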
6.6.3. Accessing the Kafka Bridge outside of OpenShift
After deployment, the AMQ Streams Kafka Bridge can only be accessed by applications running in the same OpenShift cluster. These applications use the <kafka_bridge_name>-bridge-service service to access the API.
If you want to make the Kafka Bridge accessible to applications running outside of the OpenShift cluster, you can expose it manually by creating one of the following features:
- LoadBalancer or NodePort type services
- Ingress resources (Kubernetes only)
- OpenShift routes (OpenShift only)
If you decide to create services, use the labels from the selector of the <kafka_bridge_name>-bridge-service service to configure the pods to which the service will route the traffic:
# ...
selector:
strimzi.io/cluster: kafka-bridge-name 1
strimzi.io/kind: KafkaBridge
#...
1. Name of the Kafka Bridge custom resource in your OpenShift cluster.
6.7. Alternative standalone deployment options for AMQ Streams operators
You can perform a standalone deployment of the Topic Operator and User Operator. Consider a standalone deployment of these operators if you are using a Kafka cluster that is not managed by the Cluster Operator.
You deploy the operators to OpenShift. Kafka can be running outside of OpenShift. For example, you might be using Kafka as a managed service. You adjust the deployment configuration for the standalone operator to match the address of your Kafka cluster.
6.7.1. Deploying the standalone Topic Operator
This procedure shows how to deploy the Topic Operator as a standalone component for topic management. You can use a standalone Topic Operator with a Kafka cluster that is not managed by the Cluster Operator.
A standalone deployment can operate with any Kafka cluster.
Standalone deployment files are provided with AMQ Streams. Use the 05-Deployment-strimzi-topic-operator.yaml deployment file to deploy the Topic Operator. Add or set the environment variables needed to make a connection to a Kafka cluster.
The Topic Operator watches for KafkaTopic resources in a single namespace. You specify the namespace to watch, and the connection to the Kafka cluster, in the Topic Operator configuration. A single Topic Operator can watch a single namespace. One namespace should be watched by only one Topic Operator. If you want to use more than one Topic Operator, configure each of them to watch different namespaces. In this way, you can use Topic Operators with multiple Kafka clusters.
Prerequisites
You are running a Kafka cluster for the Topic Operator to connect to.
As long as the standalone Topic Operator is correctly configured for connection, the Kafka cluster can be running on a bare-metal environment, a virtual machine, or as a managed cloud application service.
Procedure
Edit the env properties in the install/topic-operator/05-Deployment-strimzi-topic-operator.yaml standalone deployment file.

Example standalone Topic Operator deployment configuration:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: strimzi-topic-operator
  labels:
    app: strimzi
spec:
  # ...
  template:
    # ...
    spec:
      # ...
      containers:
        - name: strimzi-topic-operator
          # ...
          env:
            - name: STRIMZI_NAMESPACE 1
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: STRIMZI_KAFKA_BOOTSTRAP_SERVERS 2
              value: my-kafka-bootstrap-address:9092
            - name: STRIMZI_RESOURCE_LABELS 3
              value: "strimzi.io/cluster=my-cluster"
            - name: STRIMZI_ZOOKEEPER_CONNECT 4
              value: my-cluster-zookeeper-client:2181
            - name: STRIMZI_ZOOKEEPER_SESSION_TIMEOUT_MS 5
              value: "18000"
            - name: STRIMZI_FULL_RECONCILIATION_INTERVAL_MS 6
              value: "120000"
            - name: STRIMZI_TOPIC_METADATA_MAX_ATTEMPTS 7
              value: "6"
            - name: STRIMZI_LOG_LEVEL 8
              value: INFO
            - name: STRIMZI_TLS_ENABLED 9
              value: "false"
            - name: STRIMZI_JAVA_OPTS 10
              value: "-Xmx512M -Xms256M"
            - name: STRIMZI_JAVA_SYSTEM_PROPERTIES 11
              value: "-Djavax.net.debug=verbose -DpropertyName=value"
            - name: STRIMZI_PUBLIC_CA 12
              value: "false"
            - name: STRIMZI_TLS_AUTH_ENABLED 13
              value: "false"
            - name: STRIMZI_SASL_ENABLED 14
              value: "false"
            - name: STRIMZI_SASL_USERNAME 15
              value: "admin"
            - name: STRIMZI_SASL_PASSWORD 16
              value: "password"
            - name: STRIMZI_SASL_MECHANISM 17
              value: "scram-sha-512"
            - name: STRIMZI_SECURITY_PROTOCOL 18
              value: "SSL"
1. The OpenShift namespace for the Topic Operator to watch for KafkaTopic resources. Specify the namespace of the Kafka cluster.
2. The host and port pair of the bootstrap broker address used to discover and connect to all brokers in the Kafka cluster. Use a comma-separated list to specify two or three broker addresses in case a server is down.
3. The label to identify the KafkaTopic resources managed by the Topic Operator. This does not have to be the name of the Kafka cluster. It can be the label assigned to the KafkaTopic resource. If you deploy more than one Topic Operator, the labels must be unique for each. That is, the operators cannot manage the same resources.
4. The host and port pair of the address to connect to the ZooKeeper cluster. This must be the same ZooKeeper cluster that your Kafka cluster is using.
5. The ZooKeeper session timeout, in milliseconds. The default is 18000 (18 seconds).
6. The interval between periodic reconciliations, in milliseconds. The default is 120000 (2 minutes).
7. The number of attempts at getting topic metadata from Kafka. The time between each attempt is defined by an exponential backoff. Consider increasing this value when topic creation takes more time due to the number of partitions or replicas. The default is 6 attempts.
8. The level for printing logging messages. You can set the level to ERROR, WARNING, INFO, DEBUG, or TRACE.
9. Enables TLS support for encrypted communication with the Kafka brokers.
10. (Optional) The Java options used by the JVM running the Topic Operator.
11. (Optional) The debugging (-D) options set for the Topic Operator.
12. (Optional) Skips the generation of truststore certificates if TLS is enabled through STRIMZI_TLS_ENABLED. If this environment variable is enabled, the brokers must use a public trusted certificate authority for their TLS certificates. The default is false.
13. (Optional) Generates keystore certificates for mTLS authentication. Setting this to false disables client authentication with mTLS to the Kafka brokers. The default is true.
14. (Optional) Enables SASL support for client authentication when connecting to Kafka brokers. The default is false.
15. (Optional) The SASL username for client authentication. Mandatory only if SASL is enabled through STRIMZI_SASL_ENABLED.
16. (Optional) The SASL password for client authentication. Mandatory only if SASL is enabled through STRIMZI_SASL_ENABLED. Rather than setting a literal value, you can source the credentials from a Secret, as shown in the sketch after this list.
17. (Optional) The SASL mechanism for client authentication. Mandatory only if SASL is enabled through STRIMZI_SASL_ENABLED. You can set the value to plain, scram-sha-256, or scram-sha-512.
18. (Optional) The security protocol used for communication with Kafka brokers. The default value is PLAINTEXT. You can set the value to PLAINTEXT, SSL, SASL_PLAINTEXT, or SASL_SSL.
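Literal SASL credentials, as shown above, are fine for a quick test, but in practice you would normally source them from an OpenShift Secret. The following is a minimal sketch, assuming a hypothetical Secret named topic-operator-sasl with username and password keys; the Secret name and keys are illustrative, not part of the shipped deployment files:

# Assumes a hypothetical Secret, created for example with:
#   oc create secret generic topic-operator-sasl \
#     --from-literal=username=admin --from-literal=password=password
env:
  # ...
  - name: STRIMZI_SASL_USERNAME
    valueFrom:
      secretKeyRef:
        name: topic-operator-sasl
        key: username
  - name: STRIMZI_SASL_PASSWORD
    valueFrom:
      secretKeyRef:
        name: topic-operator-sasl
        key: password

Keeping credentials in a Secret also means the deployment file can be stored in version control without exposing the password.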
- If you want to connect to Kafka brokers that are using certificates from a public certificate authority, set STRIMZI_PUBLIC_CA to true. Set this property to true, for example, if you are using the Amazon MSK service.

  If you enabled mTLS with the STRIMZI_TLS_ENABLED environment variable, specify the keystore and truststore used to authenticate the connection to the Kafka cluster.

  Example mTLS configuration
# ...
env:
  - name: STRIMZI_TRUSTSTORE_LOCATION
    value: "/path/to/truststore.p12"
  - name: STRIMZI_TRUSTSTORE_PASSWORD
    value: "TRUSTSTORE-PASSWORD"
  - name: STRIMZI_KEYSTORE_LOCATION
    value: "/path/to/keystore.p12"
  - name: STRIMZI_KEYSTORE_PASSWORD
    value: "KEYSTORE-PASSWORD"
# ...
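The truststore and keystore paths must exist inside the Topic Operator container, so the PKCS12 files have to be mounted into the pod. The following is a minimal sketch, assuming a hypothetical Secret named topic-operator-certs that holds truststore.p12 and keystore.p12; the fragment belongs in the pod template of the deployment file:

# Assumes a hypothetical Secret "topic-operator-certs" containing
# truststore.p12 and keystore.p12.
spec:
  containers:
    - name: strimzi-topic-operator
      # ...
      volumeMounts:
        - name: tls-stores
          mountPath: /path/to   # parent directory of the configured store locations
          readOnly: true
  volumes:
    - name: tls-stores
      secret:
        secretName: topic-operator-certs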
Deploy the Topic Operator.
oc create -f install/topic-operator
Check the status of the deployment:
oc get deployments
Output shows the deployment name and readiness
NAME                     READY  UP-TO-DATE  AVAILABLE
strimzi-topic-operator   1/1    1           1
READY shows the number of replicas that are ready/expected. The deployment is successful when the AVAILABLE output shows 1.
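To confirm that the Topic Operator is reconciling, you can create a KafkaTopic resource in the watched namespace. The following is a minimal sketch, assuming the strimzi.io/cluster=my-cluster label from the example configuration above; the topic name is illustrative:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-test-topic
  labels:
    strimzi.io/cluster: my-cluster   # must match STRIMZI_RESOURCE_LABELS
spec:
  partitions: 1
  replicas: 1

If the operator is working, the topic appears in the Kafka cluster shortly after the resource is created.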
6.7.2. Deploying the standalone User Operator
This procedure shows how to deploy the User Operator as a standalone component for user management. You can use a standalone User Operator with a Kafka cluster that is not managed by the Cluster Operator.
A standalone deployment can operate with any Kafka cluster.
Standalone deployment files are provided with AMQ Streams. Use the 05-Deployment-strimzi-user-operator.yaml deployment file to deploy the User Operator. Add or set the environment variables needed to make a connection to a Kafka cluster.
The User Operator watches for KafkaUser resources in a single namespace. You specify the namespace to watch, and the connection to the Kafka cluster, in the User Operator configuration. A single User Operator watches a single namespace, and each namespace should be watched by only one User Operator. If you want to use more than one User Operator, configure each of them to watch different namespaces. In this way, you can use the User Operator with multiple Kafka clusters.
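The example configuration in the procedure below uses the downward API to set STRIMZI_NAMESPACE to the operator's own namespace. If the KafkaUser resources live in a different namespace, you can set the variable to an explicit value instead, provided the operator's RBAC grants access to that namespace. A minimal sketch, with my-kafka-project as a hypothetical namespace:

env:
  - name: STRIMZI_NAMESPACE
    value: my-kafka-project   # hypothetical namespace to watch instead of the operator's own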
Prerequisites
You are running a Kafka cluster for the User Operator to connect to.
As long as the standalone User Operator is correctly configured for connection, the Kafka cluster can be running on a bare-metal environment, a virtual machine, or as a managed cloud application service.
Procedure
Edit the following env properties in the install/user-operator/05-Deployment-strimzi-user-operator.yaml standalone deployment file.

Example standalone User Operator deployment configuration
apiVersion: apps/v1
kind: Deployment
metadata:
  name: strimzi-user-operator
  labels:
    app: strimzi
spec:
  # ...
  template:
    # ...
    spec:
      # ...
      containers:
        - name: strimzi-user-operator
          # ...
          env:
            - name: STRIMZI_NAMESPACE # 1
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: STRIMZI_KAFKA_BOOTSTRAP_SERVERS # 2
              value: my-kafka-bootstrap-address:9092
            - name: STRIMZI_CA_CERT_NAME # 3
              value: my-cluster-clients-ca-cert
            - name: STRIMZI_CA_KEY_NAME # 4
              value: my-cluster-clients-ca
            - name: STRIMZI_LABELS # 5
              value: "strimzi.io/cluster=my-cluster"
            - name: STRIMZI_FULL_RECONCILIATION_INTERVAL_MS # 6
              value: "120000"
            - name: STRIMZI_WORK_QUEUE_SIZE # 7
              value: "10000"
            - name: STRIMZI_CONTROLLER_THREAD_POOL_SIZE # 8
              value: "10"
            - name: STRIMZI_USER_OPERATIONS_THREAD_POOL_SIZE # 9
              value: "4"
            - name: STRIMZI_LOG_LEVEL # 10
              value: INFO
            - name: STRIMZI_GC_LOG_ENABLED # 11
              value: "true"
            - name: STRIMZI_CA_VALIDITY # 12
              value: "365"
            - name: STRIMZI_CA_RENEWAL # 13
              value: "30"
            - name: STRIMZI_JAVA_OPTS # 14
              value: "-Xmx512M -Xms256M"
            - name: STRIMZI_JAVA_SYSTEM_PROPERTIES # 15
              value: "-Djavax.net.debug=verbose -DpropertyName=value"
            - name: STRIMZI_SECRET_PREFIX # 16
              value: "kafka-"
            - name: STRIMZI_ACLS_ADMIN_API_SUPPORTED # 17
              value: "true"
            - name: STRIMZI_MAINTENANCE_TIME_WINDOWS # 18
              value: '* * 8-10 * * ?;* * 14-15 * * ?'
            - name: STRIMZI_KAFKA_ADMIN_CLIENT_CONFIGURATION # 19
              value: |
                default.api.timeout.ms=120000
                request.timeout.ms=60000
            - name: STRIMZI_KRAFT_ENABLED # 20
              value: "false"
1. The OpenShift namespace for the User Operator to watch for KafkaUser resources. Only one namespace can be specified.
2. The host and port pair of the bootstrap broker address used to discover and connect to all brokers in the Kafka cluster. Use a comma-separated list to specify two or three broker addresses in case a server is down.
3. The OpenShift Secret that contains the public key (ca.crt) value of the CA (certificate authority) that signs new user certificates for mTLS authentication.
4. The OpenShift Secret that contains the private key (ca.key) value of the CA that signs new user certificates for mTLS authentication.
5. The label to identify the KafkaUser resources managed by the User Operator. This does not have to be the name of the Kafka cluster. It can be the label assigned to the KafkaUser resource. If you deploy more than one User Operator, the labels must be unique for each. That is, the operators cannot manage the same resources.
6. The interval between periodic reconciliations, in milliseconds. The default is 120000 (2 minutes).
7. The size of the controller event queue. The queue should be at least as big as the maximum number of users you expect the User Operator to manage. The default is 1024.
8. The size of the worker pool for reconciling users. A bigger pool might require more resources, but it will also handle more KafkaUser resources. The default is 50.
9. The size of the worker pool for Kafka Admin API and OpenShift operations. A bigger pool might require more resources, but it will also handle more KafkaUser resources. The default is 4.
10. The level for printing logging messages. You can set the level to ERROR, WARNING, INFO, DEBUG, or TRACE.
11. Enables garbage collection (GC) logging. The default is true.
12. The validity period for the CA. The default is 365 days.
13. The renewal period for the CA. The renewal period is measured backwards from the expiry date of the current certificate. The default is 30 days, so that certificate renewal is initiated before the old certificates expire.
14. (Optional) The Java options used by the JVM running the User Operator.
15. (Optional) The debugging (-D) options set for the User Operator.
16. (Optional) The prefix for the names of OpenShift secrets created by the User Operator.
17. (Optional) Indicates whether the Kafka cluster supports management of authorization ACL rules using the Kafka Admin API. When set to false, the User Operator rejects all resources with simple authorization ACL rules. This helps to avoid unnecessary exceptions in the Kafka cluster logs. The default is true.
18. (Optional) A semicolon-separated list of cron expressions defining the maintenance time windows during which expiring user certificates are renewed.
19. (Optional) Configuration options for the Kafka Admin client used by the User Operator, in properties format. These are standard Kafka client properties; see the sketch after this list.
20. (Optional) Indicates whether the Kafka cluster that the User Operator connects to uses KRaft instead of ZooKeeper. Set this variable to true if the Kafka cluster uses KRaft. The default is false. Note that some features are not available when running against KRaft clusters. For example, management of SCRAM-SHA-512 users is disabled because Apache Kafka does not currently support it.
If you are using mTLS to connect to the Kafka cluster, specify the secrets used to authenticate the connection. Otherwise, go to the next step.

Example mTLS configuration
# ...
env:
  - name: STRIMZI_CLUSTER_CA_CERT_SECRET_NAME # 1
    value: my-cluster-cluster-ca-cert
  - name: STRIMZI_EO_KEY_SECRET_NAME # 2
    value: my-cluster-entity-operator-certs
# ...
1. The OpenShift Secret that contains the public key (ca.crt) value of the CA that signs Kafka broker certificates.
2. The OpenShift Secret that contains the certificate public key (entity-operator.crt) and private key (entity-operator.key) used for mTLS authentication against the Kafka cluster.
Deploy the User Operator.
oc create -f install/user-operator
Check the status of the deployment:
oc get deployments
Output shows the deployment name and readiness
NAME                    READY  UP-TO-DATE  AVAILABLE
strimzi-user-operator   1/1    1           1
READY shows the number of replicas that are ready/expected. The deployment is successful when the AVAILABLE output shows 1.
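As with the Topic Operator, you can verify reconciliation by creating a KafkaUser resource in the watched namespace. The following is a minimal sketch, assuming the strimzi.io/cluster=my-cluster label and kafka- secret prefix from the example configuration above; the user name is illustrative:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: my-user
  labels:
    strimzi.io/cluster: my-cluster   # must match STRIMZI_LABELS
spec:
  authentication:
    type: tls

If reconciliation succeeds, the User Operator creates a Secret for the user's client credentials, named with the STRIMZI_SECRET_PREFIX value followed by the user name (kafka-my-user in this sketch).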