Chapter 7. Deploying Streams for Apache Kafka using installation artifacts
Having prepared your environment for a deployment of Streams for Apache Kafka, you can deploy Streams for Apache Kafka to an OpenShift cluster. Use the installation files provided with the release artifacts.
Streams for Apache Kafka is based on Strimzi 0.43.x. You can deploy Streams for Apache Kafka 2.8 on OpenShift 4.12 and 4.14 to 4.17.
The steps to deploy Streams for Apache Kafka using the installation files are as follows:
- Deploy the Cluster Operator
- Use the Cluster Operator to deploy the Kafka cluster, including the Topic Operator and User Operator
- Optionally, deploy other Kafka components, such as Kafka Connect, according to your requirements
To run the commands in this guide, an OpenShift user must have the rights to manage role-based access control (RBAC) and CRDs.
7.1. Basic deployment path
You can set up a deployment where Streams for Apache Kafka manages a single Kafka cluster in the same namespace. You might use this configuration for development or testing. Alternatively, you can use Streams for Apache Kafka in a production environment to manage a number of Kafka clusters in different namespaces.
The basic deployment path is as follows:
- Download the release artifacts
- Create an OpenShift namespace in which to deploy the Cluster Operator
- Update the install/cluster-operator files to use the namespace created for the Cluster Operator
- Install the Cluster Operator to watch one, multiple, or all namespaces
- Create a Kafka cluster
You can then deploy other Kafka components and set up monitoring of your deployment.
7.2. Deploying the Cluster Operator
The first step for any deployment of Streams for Apache Kafka is to install the Cluster Operator, which is responsible for deploying and managing Kafka clusters within an OpenShift cluster. A single command applies all the installation files in the install/cluster-operator folder: oc apply -f ./install/cluster-operator.
The command sets up everything you need to be able to create and manage a Kafka deployment, including the following resources:
- Cluster Operator (Deployment, ConfigMap)
- Streams for Apache Kafka CRDs (CustomResourceDefinition)
- RBAC resources (ClusterRole, ClusterRoleBinding, RoleBinding)
- Service account (ServiceAccount)
Cluster-scoped resources like CustomResourceDefinition, ClusterRole, and ClusterRoleBinding require administrator privileges for installation. Prior to installation, it’s advisable to review the ClusterRole specifications to ensure they do not grant unnecessary privileges.
After installation, the Cluster Operator runs as a regular Deployment to watch for updates of Kafka resources. Any standard (non-admin) OpenShift user with privileges to access the Deployment can configure it. A cluster administrator can also grant standard users the privileges necessary to manage Streams for Apache Kafka custom resources.
By default, a single replica of the Cluster Operator is deployed. You can add replicas with leader election so that additional Cluster Operators are on standby in case of disruption. For more information, see Section 9.6.4, “Running multiple Cluster Operator replicas with leader election”.
7.2.1. Specifying the namespaces the Cluster Operator watches
The Cluster Operator watches for updates in the namespaces where the Kafka resources are deployed. When you deploy the Cluster Operator, you specify which namespaces to watch in the OpenShift cluster. You can specify the following namespaces:
- A single selected namespace (the same namespace containing the Cluster Operator)
- Multiple selected namespaces
- All namespaces in the cluster
Watching multiple selected namespaces has the most impact on performance due to increased processing overhead. To optimize performance, it is generally recommended to watch a single namespace or to monitor the entire cluster. Watching a single namespace allows focused monitoring of namespace-specific resources, while watching all namespaces provides a comprehensive view of resources across the cluster.
The Cluster Operator watches for changes to the following resources:
- Kafka for the Kafka cluster.
- KafkaConnect for the Kafka Connect cluster.
- KafkaConnector for creating and managing connectors in a Kafka Connect cluster.
- KafkaMirrorMaker for the Kafka MirrorMaker instance.
- KafkaMirrorMaker2 for the Kafka MirrorMaker 2 instance.
- KafkaBridge for the Kafka Bridge instance.
- KafkaRebalance for Cruise Control optimization requests.
When one of these resources is created in the OpenShift cluster, the operator gets the cluster description from the resource and starts creating a new cluster for the resource by creating the necessary OpenShift resources, such as Deployments, Pods, Services, and ConfigMaps.
Each time a Kafka resource is updated, the operator performs corresponding updates on the OpenShift resources that make up the cluster for the resource.
Resources are either patched or deleted, and then recreated in order to make the cluster for the resource reflect the desired state of the cluster. This operation might cause a rolling update that might lead to service disruption.
When a resource is deleted, the operator undeploys the cluster and deletes all related OpenShift resources.
While the Cluster Operator can watch one, multiple, or all namespaces in an OpenShift cluster, the Topic Operator and User Operator watch for KafkaTopic and KafkaUser resources in a single namespace. For more information, see Section 1.2.1, “Watching Streams for Apache Kafka resources in OpenShift namespaces”.
7.2.2. Deploying the Cluster Operator to watch a single namespace
This procedure shows how to deploy the Cluster Operator to watch Streams for Apache Kafka resources in a single namespace in your OpenShift cluster.
Prerequisites
- You need an account with permission to create and manage CustomResourceDefinition and RBAC (ClusterRole and RoleBinding) resources.
Procedure
1. Edit the Streams for Apache Kafka installation files to use the namespace the Cluster Operator is going to be installed into. For example, in this procedure the Cluster Operator is installed into the namespace my-cluster-operator-namespace.

   On Linux, use:

   sed -i 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml

   On MacOS, use:

   sed -i '' 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml

2. Deploy the Cluster Operator:

   oc create -f install/cluster-operator -n my-cluster-operator-namespace

3. Check the status of the deployment:

   oc get deployments -n my-cluster-operator-namespace

   Output shows the deployment name and readiness:

   NAME                      READY  UP-TO-DATE  AVAILABLE
   strimzi-cluster-operator  1/1    1           1

   READY shows the number of replicas that are ready/expected. The deployment is successful when the AVAILABLE output shows 1.
7.2.3. Deploying the Cluster Operator to watch multiple namespaces
This procedure shows how to deploy the Cluster Operator to watch Streams for Apache Kafka resources across multiple namespaces in your OpenShift cluster.
Prerequisites
- You need an account with permission to create and manage CustomResourceDefinition and RBAC (ClusterRole and RoleBinding) resources.
Procedure
1. Edit the Streams for Apache Kafka installation files to use the namespace the Cluster Operator is going to be installed into. For example, in this procedure the Cluster Operator is installed into the namespace my-cluster-operator-namespace.

   On Linux, use:

   sed -i 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml

   On MacOS, use:

   sed -i '' 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Edit the
install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml
file to add a list of all the namespaces the Cluster Operator will watch to theSTRIMZI_NAMESPACE
environment variable.For example, in this procedure the Cluster Operator will watch the namespaces
watched-namespace-1
,watched-namespace-2
,watched-namespace-3
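   A minimal sketch of the relevant part of the Deployment, assuming the three example namespaces (the rest of the container spec is unchanged):

   # install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml (excerpt)
   apiVersion: apps/v1
   kind: Deployment
   spec:
     template:
       spec:
         serviceAccountName: strimzi-cluster-operator
         containers:
           - name: strimzi-cluster-operator
             env:
               # Comma-separated list of namespaces the Cluster Operator watches
               - name: STRIMZI_NAMESPACE
                 value: watched-namespace-1,watched-namespace-2,watched-namespace-3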
3. For each namespace listed, install the RoleBindings. In this example, replace watched-namespace in these commands with the namespaces listed in the previous step, repeating them for watched-namespace-1, watched-namespace-2, and watched-namespace-3:

   oc create -f install/cluster-operator/020-RoleBinding-strimzi-cluster-operator.yaml -n <watched_namespace>
   oc create -f install/cluster-operator/023-RoleBinding-strimzi-cluster-operator.yaml -n <watched_namespace>
   oc create -f install/cluster-operator/031-RoleBinding-strimzi-cluster-operator-entity-operator-delegation.yaml -n <watched_namespace>
4. Deploy the Cluster Operator:

   oc create -f install/cluster-operator -n my-cluster-operator-namespace

5. Check the status of the deployment:

   oc get deployments -n my-cluster-operator-namespace

   Output shows the deployment name and readiness:

   NAME                      READY  UP-TO-DATE  AVAILABLE
   strimzi-cluster-operator  1/1    1           1

   READY shows the number of replicas that are ready/expected. The deployment is successful when the AVAILABLE output shows 1.
7.2.4. Deploying the Cluster Operator to watch all namespaces
This procedure shows how to deploy the Cluster Operator to watch Streams for Apache Kafka resources across all namespaces in your OpenShift cluster.
When running in this mode, the Cluster Operator automatically manages clusters in any new namespaces that are created.
Prerequisites
- You need an account with permission to create and manage CustomResourceDefinition and RBAC (ClusterRole and RoleBinding) resources.
Procedure
1. Edit the Streams for Apache Kafka installation files to use the namespace the Cluster Operator is going to be installed into. For example, in this procedure the Cluster Operator is installed into the namespace my-cluster-operator-namespace.

   On Linux, use:

   sed -i 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml

   On MacOS, use:

   sed -i '' 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Edit the
install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml
file to set the value of theSTRIMZI_NAMESPACE
environment variable to*
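   A minimal sketch of the relevant environment variable in the Deployment:

   # install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml (excerpt)
   env:
     # The value * makes the Cluster Operator watch all namespaces
     - name: STRIMZI_NAMESPACE
       value: "*"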
3. Create ClusterRoleBindings that grant cluster-wide access for all namespaces to the Cluster Operator:

   oc create clusterrolebinding strimzi-cluster-operator-namespaced --clusterrole=strimzi-cluster-operator-namespaced --serviceaccount my-cluster-operator-namespace:strimzi-cluster-operator
   oc create clusterrolebinding strimzi-cluster-operator-watched --clusterrole=strimzi-cluster-operator-watched --serviceaccount my-cluster-operator-namespace:strimzi-cluster-operator
   oc create clusterrolebinding strimzi-cluster-operator-entity-operator-delegation --clusterrole=strimzi-entity-operator --serviceaccount my-cluster-operator-namespace:strimzi-cluster-operator
4. Deploy the Cluster Operator to your OpenShift cluster:

   oc create -f install/cluster-operator -n my-cluster-operator-namespace

5. Check the status of the deployment:

   oc get deployments -n my-cluster-operator-namespace

   Output shows the deployment name and readiness:

   NAME                      READY  UP-TO-DATE  AVAILABLE
   strimzi-cluster-operator  1/1    1           1

   READY shows the number of replicas that are ready/expected. The deployment is successful when the AVAILABLE output shows 1.
7.3. Deploying Kafka
To be able to manage a Kafka cluster with the Cluster Operator, you must deploy it as a Kafka resource. Streams for Apache Kafka provides example deployment files to do this. You can use these files to deploy the Topic Operator and User Operator at the same time.

After you have deployed the Cluster Operator, use a Kafka resource to deploy the following components:

- A Kafka cluster that uses KRaft or ZooKeeper
- Topic Operator
- User Operator
Node pools are used in the deployment of a Kafka cluster in KRaft (Kafka Raft metadata) mode, and may be used for the deployment of a Kafka cluster with ZooKeeper. Node pools represent a distinct group of Kafka nodes within the Kafka cluster that share the same configuration. For each Kafka node in the node pool, any configuration not defined in the node pool is inherited from the cluster configuration in the Kafka resource.
If you haven’t deployed a Kafka cluster as a Kafka resource, you can’t use the Cluster Operator to manage it. This applies, for example, to a Kafka cluster running outside of OpenShift. However, you can use the Topic Operator and User Operator with a Kafka cluster that is not managed by Streams for Apache Kafka, by deploying them as standalone components. You can also deploy and use other Kafka components with a Kafka cluster not managed by Streams for Apache Kafka.
7.3.1. Deploying a Kafka cluster in KRaft mode
This procedure shows how to deploy a Kafka cluster in KRaft mode and associated node pools using the Cluster Operator.
The deployment uses a YAML file to provide the specification to create a Kafka resource and KafkaNodePool resources.
Streams for Apache Kafka provides the following example deployment files that you can use to create a Kafka cluster that uses node pools:
kafka/kraft/kafka-with-dual-role-nodes.yaml
- Deploys a Kafka cluster with one pool of nodes that share the broker and controller roles.
kafka/kraft/kafka.yaml
- Deploys a persistent Kafka cluster with one pool of controller nodes and one pool of broker nodes.
kafka/kraft/kafka-ephemeral.yaml
- Deploys an ephemeral Kafka cluster with one pool of controller nodes and one pool of broker nodes.
kafka/kraft/kafka-single-node.yaml
- Deploys a Kafka cluster with a single node.
kafka/kraft/kafka-jbod.yaml
- Deploys a Kafka cluster with multiple volumes in each broker node.
In this procedure, we use the example deployment file that deploys a Kafka cluster with one pool of nodes that share the broker and controller roles.
The Kafka resource configuration for each example includes the strimzi.io/node-pools: enabled annotation, which is required when using node pools. Kafka resources using KRaft mode must also have the annotation strimzi.io/kraft: enabled.
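For illustration, a minimal sketch of how the annotations and an accompanying node pool fit together, assuming the default my-cluster name, a pool named pool-a, and a placeholder storage size:

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
  annotations:
    strimzi.io/node-pools: enabled  # required when using node pools
    strimzi.io/kraft: enabled       # required for KRaft mode
spec:
  # ...
---
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: pool-a
  labels:
    strimzi.io/cluster: my-cluster  # links the pool to the Kafka cluster
spec:
  replicas: 3
  roles:
    - controller
    - broker
  storage:
    type: persistent-claim
    size: 100Gi
    deleteClaim: false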
The example YAML files specify the latest supported Kafka version and KRaft metadata version used by the Kafka cluster.
You can perform the steps outlined here to deploy a new Kafka cluster with KafkaNodePool resources or migrate your existing Kafka cluster.
Prerequisites
Before you begin
By default, the example deployment files specify my-cluster as the Kafka cluster name. The name cannot be changed after the cluster has been deployed. To change the cluster name before you deploy the cluster, edit the Kafka.metadata.name property of the Kafka resource in the relevant YAML file.
Procedure
1. Deploy a KRaft-based Kafka cluster. To deploy a Kafka cluster with a single node pool that uses dual-role nodes:

   oc apply -f examples/kafka/kraft/kafka-with-dual-role-nodes.yaml

2. Check the status of the deployment:

   oc get pods -n <my_cluster_operator_namespace>

   Output shows the node pool names and readiness:

   NAME                        READY  STATUS   RESTARTS
   my-cluster-entity-operator  3/3    Running  0
   my-cluster-pool-a-0         1/1    Running  0
   my-cluster-pool-a-1         1/1    Running  0
   my-cluster-pool-a-4         1/1    Running  0

   my-cluster is the name of the Kafka cluster, and pool-a is the name of the node pool. A sequential index number starting with 0 identifies each Kafka pod created. If you are using ZooKeeper, you’ll also see the ZooKeeper pods. READY shows the number of replicas that are ready/expected. The deployment is successful when the STATUS displays as Running.

   Information on the deployment is also shown in the status of the KafkaNodePool resource, including a list of IDs for nodes in the pool.

   Note: Node IDs are assigned sequentially starting at 0 (zero) across all node pools within a cluster. This means that node IDs might not run sequentially within a specific node pool. If there are gaps in the sequence of node IDs across the cluster, the next node to be added is assigned an ID that fills the gap. When scaling down, the node with the highest node ID within a pool is removed.
7.3.2. Deploying a ZooKeeper-based Kafka cluster
This procedure shows how to deploy a ZooKeeper-based Kafka cluster to your OpenShift cluster using the Cluster Operator.
The deployment uses a YAML file to provide the specification to create a Kafka resource.
Streams for Apache Kafka provides the following example deployment files to create a Kafka cluster that uses ZooKeeper for cluster management:
kafka-persistent.yaml
- Deploys a persistent cluster with three ZooKeeper and three Kafka nodes.
kafka-jbod.yaml
- Deploys a persistent cluster with three ZooKeeper and three Kafka nodes (each using multiple persistent volumes).
kafka-persistent-single.yaml
- Deploys a persistent cluster with a single ZooKeeper node and a single Kafka node.
kafka-ephemeral.yaml
- Deploys an ephemeral cluster with three ZooKeeper and three Kafka nodes.
kafka-ephemeral-single.yaml
- Deploys an ephemeral cluster with three ZooKeeper nodes and a single Kafka node.
To deploy a Kafka cluster that uses node pools, the following example YAML file provides the specification to create a Kafka resource and KafkaNodePool resources:
kafka/kafka-with-node-pools.yaml
- Deploys ZooKeeper with 3 nodes, and 2 different pools of Kafka brokers. Each of the pools has 3 brokers. The pools in the example use different storage configurations.
In this procedure, we use the examples for an ephemeral and persistent Kafka cluster deployment.
The example YAML files specify the latest supported Kafka version and inter-broker protocol version. From Kafka 3.0.0, when the inter.broker.protocol.version is set to 3.0 or higher, the log.message.format.version option is ignored and doesn’t need to be set.
Prerequisites
Before you begin
By default, the example deployment files specify my-cluster as the Kafka cluster name. The name cannot be changed after the cluster has been deployed. To change the cluster name before you deploy the cluster, edit the Kafka.metadata.name property of the Kafka resource in the relevant YAML file.
Procedure
1. Deploy a ZooKeeper-based Kafka cluster.

   To deploy an ephemeral cluster:

   oc apply -f examples/kafka/kafka-ephemeral.yaml

   To deploy a persistent cluster:

   oc apply -f examples/kafka/kafka-persistent.yaml
2. Check the status of the deployment:

   oc get pods -n <my_cluster_operator_namespace>

   Output shows the pod names and readiness.
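   A sketch of the expected output, assuming the default my-cluster name and the default deployment:

   NAME                        READY  STATUS   RESTARTS
   my-cluster-entity-operator  3/3    Running  0
   my-cluster-kafka-0          1/1    Running  0
   my-cluster-kafka-1          1/1    Running  0
   my-cluster-kafka-2          1/1    Running  0
   my-cluster-zookeeper-0      1/1    Running  0
   my-cluster-zookeeper-1      1/1    Running  0
   my-cluster-zookeeper-2      1/1    Running  0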
   my-cluster is the name of the Kafka cluster. A sequential index number starting with 0 identifies each Kafka and ZooKeeper pod created. With the default deployment, you create an Entity Operator pod, 3 Kafka pods, and 3 ZooKeeper pods.

   READY shows the number of replicas that are ready/expected. The deployment is successful when the STATUS displays as Running.
7.3.3. Deploying the Topic Operator using the Cluster Operator
This procedure describes how to deploy the Topic Operator using the Cluster Operator.
You configure the entityOperator property of the Kafka resource to include the topicOperator. By default, the Topic Operator watches for KafkaTopic resources in the namespace of the Kafka cluster deployed by the Cluster Operator. You can also specify a namespace using watchedNamespace in the Topic Operator spec. A single Topic Operator can watch a single namespace. One namespace should be watched by only one Topic Operator.
If you use Streams for Apache Kafka to deploy multiple Kafka clusters into the same namespace, enable the Topic Operator for only one Kafka cluster, or use the watchedNamespace property to configure the Topic Operators to watch other namespaces.
If you want to use the Topic Operator with a Kafka cluster that is not managed by Streams for Apache Kafka, you must deploy the Topic Operator as a standalone component.
For more information about configuring the entityOperator and topicOperator properties, see Configuring the Entity Operator.
Prerequisites
Procedure
1. Edit the entityOperator properties of the Kafka resource to include topicOperator, as in the sketch below.
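   A minimal sketch, assuming the default my-cluster name:

   apiVersion: kafka.strimzi.io/v1beta2
   kind: Kafka
   metadata:
     name: my-cluster
   spec:
     # ...
     entityOperator:
       # An empty object deploys the Topic Operator with default settings
       topicOperator: {}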
2. Configure the Topic Operator spec using the properties described in the EntityTopicOperatorSpec schema reference. Use an empty object ({}) if you want all properties to use their default values.

3. Create or update the resource:
   oc apply -f <kafka_configuration_file>

4. Check the status of the deployment:

   oc get pods -n <my_cluster_operator_namespace>

   Output shows the pod name and readiness:

   NAME                        READY  STATUS   RESTARTS
   my-cluster-entity-operator  3/3    Running  0
   # ...

   my-cluster is the name of the Kafka cluster. READY shows the number of replicas that are ready/expected. The deployment is successful when the STATUS displays as Running.
7.3.4. Deploying the User Operator using the Cluster Operator
This procedure describes how to deploy the User Operator using the Cluster Operator.
You configure the entityOperator property of the Kafka resource to include the userOperator. By default, the User Operator watches for KafkaUser resources in the namespace of the Kafka cluster deployment. You can also specify a namespace using watchedNamespace in the User Operator spec. A single User Operator can watch a single namespace. One namespace should be watched by only one User Operator.
If you want to use the User Operator with a Kafka cluster that is not managed by Streams for Apache Kafka, you must deploy the User Operator as a standalone component.
For more information about configuring the entityOperator and userOperator properties, see Configuring the Entity Operator.
Prerequisites
Procedure
1. Edit the entityOperator properties of the Kafka resource to include userOperator, as in the sketch below.
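   A minimal sketch, assuming the default my-cluster name:

   apiVersion: kafka.strimzi.io/v1beta2
   kind: Kafka
   metadata:
     name: my-cluster
   spec:
     # ...
     entityOperator:
       # An empty object deploys the User Operator with default settings
       userOperator: {}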
2. Configure the User Operator spec using the properties described in the EntityUserOperatorSpec schema reference. Use an empty object ({}) if you want all properties to use their default values.

3. Create or update the resource:
   oc apply -f <kafka_configuration_file>

4. Check the status of the deployment:

   oc get pods -n <my_cluster_operator_namespace>

   Output shows the pod name and readiness:

   NAME                        READY  STATUS   RESTARTS
   my-cluster-entity-operator  3/3    Running  0
   # ...

   my-cluster is the name of the Kafka cluster. READY shows the number of replicas that are ready/expected. The deployment is successful when the STATUS displays as Running.
7.3.5. Connecting to ZooKeeper from a terminal
ZooKeeper services are secured with encryption and authentication and are not intended to be used by external applications that are not part of Streams for Apache Kafka.
However, if you want to use CLI tools that require a connection to ZooKeeper, you can use a terminal inside a ZooKeeper pod and connect to localhost:12181 as the ZooKeeper address.
Prerequisites
- An OpenShift cluster is available.
- A Kafka cluster is running.
- The Cluster Operator is running.
Procedure
Open the terminal using the OpenShift console or run the exec command from your CLI.

For example:

oc exec -ti my-cluster-zookeeper-0 -- bin/zookeeper-shell.sh localhost:12181 ls /

Be sure to use localhost:12181.
7.3.6. List of Kafka cluster resources
The following resources are created by the Cluster Operator in the OpenShift cluster.
Shared resources
<kafka_cluster_name>-cluster-ca
- Secret with the Cluster CA private key used to encrypt the cluster communication.
<kafka_cluster_name>-cluster-ca-cert
- Secret with the Cluster CA public key. This key can be used to verify the identity of the Kafka brokers.
<kafka_cluster_name>-clients-ca
- Secret with the Clients CA private key used to sign user certificates.
<kafka_cluster_name>-clients-ca-cert
- Secret with the Clients CA public key. This key can be used to verify the identity of the Kafka users.
<kafka_cluster_name>-cluster-operator-certs
- Secret with Cluster Operator keys for communication with Kafka and ZooKeeper.
ZooKeeper nodes
<kafka_cluster_name>-zookeeper
Name given to the following ZooKeeper resources:
- StrimziPodSet for managing the ZooKeeper node pods.
- Service account used by the ZooKeeper nodes.
- PodDisruptionBudget configured for the ZooKeeper nodes.
<kafka_cluster_name>-zookeeper-<pod_id>
- Pods created by the StrimziPodSet.
<kafka_cluster_name>-zookeeper-nodes
- Headless Service needed to have DNS resolve the ZooKeeper pods' IP addresses directly.
<kafka_cluster_name>-zookeeper-client
- Service used by Kafka brokers to connect to ZooKeeper nodes as clients.
<kafka_cluster_name>-zookeeper-config
- ConfigMap that contains the ZooKeeper ancillary configuration, and is mounted as a volume by the ZooKeeper node pods.
<kafka_cluster_name>-zookeeper-nodes
- Secret with ZooKeeper node keys.
<kafka_cluster_name>-network-policy-zookeeper
- Network policy managing access to the ZooKeeper services.
data-<kafka_cluster_name>-zookeeper-<pod_id>
- Persistent Volume Claim for the volume used for storing data for a specific ZooKeeper node. This resource will be created only if persistent storage is selected for provisioning persistent volumes to store data.
Kafka brokers
<kafka_cluster_name>-kafka
Name given to the following Kafka resources:
- StrimziPodSet for managing the Kafka broker pods.
- Service account used by the Kafka pods.
- PodDisruptionBudget configured for the Kafka brokers.
<kafka_cluster_name>-kafka-<pod_id>
Name given to the following Kafka resources:
- Pods created by the StrimziPodSet.
- ConfigMaps with Kafka broker configuration.
<kafka_cluster_name>-kafka-brokers
- Service needed to have DNS resolve the Kafka broker pods' IP addresses directly.
<kafka_cluster_name>-kafka-bootstrap
- Service can be used as bootstrap servers for Kafka clients connecting from within the OpenShift cluster.
<kafka_cluster_name>-kafka-external-bootstrap
- Bootstrap service for clients connecting from outside the OpenShift cluster. This resource is created only when an external listener is enabled. The old service name will be used for backwards compatibility when the listener name is external and the port is 9094.
<kafka_cluster_name>-kafka-<pod_id>
- Service used to route traffic from outside the OpenShift cluster to individual pods. This resource is created only when an external listener is enabled. The old service name will be used for backwards compatibility when the listener name is external and the port is 9094.
<kafka_cluster_name>-kafka-external-bootstrap
- Bootstrap route for clients connecting from outside the OpenShift cluster. This resource is created only when an external listener is enabled and set to type route. The old route name will be used for backwards compatibility when the listener name is external and the port is 9094.
<kafka_cluster_name>-kafka-<pod_id>
- Route for traffic from outside the OpenShift cluster to individual pods. This resource is created only when an external listener is enabled and set to type route. The old route name will be used for backwards compatibility when the listener name is external and the port is 9094.
<kafka_cluster_name>-kafka-<listener_name>-bootstrap
- Bootstrap service for clients connecting from outside the OpenShift cluster. This resource is created only when an external listener is enabled. The new service name will be used for all other external listeners.
<kafka_cluster_name>-kafka-<listener_name>-<pod_id>
- Service used to route traffic from outside the OpenShift cluster to individual pods. This resource is created only when an external listener is enabled. The new service name will be used for all other external listeners.
<kafka_cluster_name>-kafka-<listener_name>-bootstrap
- Bootstrap route for clients connecting from outside the OpenShift cluster. This resource is created only when an external listener is enabled and set to type route. The new route name will be used for all other external listeners.
<kafka_cluster_name>-kafka-<listener_name>-<pod_id>
- Route for traffic from outside the OpenShift cluster to individual pods. This resource is created only when an external listener is enabled and set to type route. The new route name will be used for all other external listeners.
<kafka_cluster_name>-kafka-config
- ConfigMap containing the Kafka ancillary configuration, which is mounted as a volume by the broker pods when the UseStrimziPodSets feature gate is disabled.
<kafka_cluster_name>-kafka-brokers
- Secret with Kafka broker keys.
<kafka_cluster_name>-network-policy-kafka
- Network policy managing access to the Kafka services.
strimzi-<namespace-name>-<kafka_cluster_name>-kafka-init
- Cluster role binding used by the Kafka brokers.
<kafka_cluster_name>-jmx
- Secret with JMX username and password used to secure the Kafka broker port. This resource is created only when JMX is enabled in Kafka.
data-<kafka_cluster_name>-kafka-<pod_id>
- Persistent Volume Claim for the volume used for storing data for a specific Kafka broker. This resource is created only if persistent storage is selected for provisioning persistent volumes to store data.
data-<id>-<kafka_cluster_name>-kafka-<pod_id>
- Persistent Volume Claim for the volume id used for storing data for a specific Kafka broker. This resource is created only if persistent storage is selected for JBOD volumes when provisioning persistent volumes to store data.
Kafka node pools
If you are using Kafka node pools, the resources created apply to the nodes managed in the node pools whether they are operating as brokers, controllers, or both. The naming convention includes the name of the Kafka cluster and the node pool: <kafka_cluster_name>-<pool_name>.
<kafka_cluster_name>-<pool_name>
- Name given to the StrimziPodSet for managing the Kafka node pool.
<kafka_cluster_name>-<pool_name>-<pod_id>
Name given to the following Kafka node pool resources:
- Pods created by the StrimziPodSet.
- ConfigMaps with Kafka node configuration.
data-<kafka_cluster_name>-<pool_name>-<pod_id>
- Persistent Volume Claim for the volume used for storing data for a specific node. This resource is created only if persistent storage is selected for provisioning persistent volumes to store data.
data-<id>-<kafka_cluster_name>-<pool_name>-<pod_id>
- Persistent Volume Claim for the volume id used for storing data for a specific node. This resource is created only if persistent storage is selected for JBOD volumes when provisioning persistent volumes to store data.
Entity Operator
These resources are only created if the Entity Operator is deployed using the Cluster Operator.
<kafka_cluster_name>-entity-operator
Name given to the following Entity Operator resources:
- Deployment with Topic and User Operators.
- Service account used by the Entity Operator.
- Network policy managing access to the Entity Operator metrics.
<kafka_cluster_name>-entity-operator-<random_string>
- Pod created by the Entity Operator deployment.
<kafka_cluster_name>-entity-topic-operator-config
- ConfigMap with ancillary configuration for Topic Operators.
<kafka_cluster_name>-entity-user-operator-config
- ConfigMap with ancillary configuration for User Operators.
<kafka_cluster_name>-entity-topic-operator-certs
- Secret with Topic Operator keys for communication with Kafka and ZooKeeper.
<kafka_cluster_name>-entity-user-operator-certs
- Secret with User Operator keys for communication with Kafka and ZooKeeper.
strimzi-<kafka_cluster_name>-entity-topic-operator
- Role binding used by the Entity Topic Operator.
strimzi-<kafka_cluster_name>-entity-user-operator
- Role binding used by the Entity User Operator.
Kafka Exporter
These resources are only created if the Kafka Exporter is deployed using the Cluster Operator.
<kafka_cluster_name>-kafka-exporter
Name given to the following Kafka Exporter resources:
- Deployment with Kafka Exporter.
- Service used to collect consumer lag metrics.
- Service account used by the Kafka Exporter.
- Network policy managing access to the Kafka Exporter metrics.
<kafka_cluster_name>-kafka-exporter-<random_string>
- Pod created by the Kafka Exporter deployment.
Cruise Control
These resources are only created if Cruise Control was deployed using the Cluster Operator.
<kafka_cluster_name>-cruise-control
Name given to the following Cruise Control resources:
- Deployment with Cruise Control.
- Service used to communicate with Cruise Control.
- Service account used by Cruise Control.
<kafka_cluster_name>-cruise-control-<random_string>
- Pod created by the Cruise Control deployment.
<kafka_cluster_name>-cruise-control-config
- ConfigMap that contains the Cruise Control ancillary configuration, and is mounted as a volume by the Cruise Control pods.
<kafka_cluster_name>-cruise-control-certs
- Secret with Cruise Control keys for communication with Kafka and ZooKeeper.
<kafka_cluster_name>-network-policy-cruise-control
- Network policy managing access to the Cruise Control service.
7.4. Deploying Kafka Connect
Kafka Connect is an integration toolkit for streaming data between Kafka brokers and other systems using connector plugins. Kafka Connect provides a framework for integrating Kafka with an external data source or target, such as a database or messaging system, for import or export of data using connectors. Connectors are plugins that provide the connection configuration needed.
In Streams for Apache Kafka, Kafka Connect is deployed in distributed mode. Kafka Connect can also work in standalone mode, but this is not supported by Streams for Apache Kafka.
Using the concept of connectors, Kafka Connect provides a framework for moving large amounts of data into and out of your Kafka cluster while maintaining scalability and reliability.
The Cluster Operator manages Kafka Connect clusters deployed using the KafkaConnect resource and connectors created using the KafkaConnector resource.
In order to use Kafka Connect, you need to deploy a Kafka Connect cluster and then add connectors.
The term connector is used interchangeably to mean a connector instance running within a Kafka Connect cluster, or a connector class. In this guide, the term connector is used when the meaning is clear from the context.
7.4.1. Deploying Kafka Connect to your OpenShift cluster
This procedure shows how to deploy a Kafka Connect cluster to your OpenShift cluster using the Cluster Operator.
A Kafka Connect cluster deployment is implemented with a configurable number of nodes (also called workers) that distribute the workload of connectors as tasks so that the message flow is highly scalable and reliable.
The deployment uses a YAML file to provide the specification to create a KafkaConnect resource.

Streams for Apache Kafka provides example configuration files. In this procedure, we use the following example file:

- examples/connect/kafka-connect.yaml

If deploying Kafka Connect clusters to run in parallel, each instance must use unique names for internal Kafka Connect topics. To do this, configure each Kafka Connect instance to replace the defaults, as in the sketch below.
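A minimal sketch of such an override, assuming an instance named my-connect-cluster (the property names are the standard Kafka Connect worker options):

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect-cluster
spec:
  # ...
  config:
    # Give each parallel instance its own group ID and internal topics
    group.id: my-connect-cluster
    offset.storage.topic: my-connect-cluster-offsets
    config.storage.topic: my-connect-cluster-configs
    status.storage.topic: my-connect-cluster-status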
Prerequisites
Procedure
1. Deploy Kafka Connect to your OpenShift cluster. Use the examples/connect/kafka-connect.yaml file to deploy Kafka Connect:

   oc apply -f examples/connect/kafka-connect.yaml

2. Check the status of the deployment:

   oc get pods -n <my_cluster_operator_namespace>

   Output shows the deployment name and readiness:

   NAME                                 READY  STATUS   RESTARTS
   my-connect-cluster-connect-<pod_id>  1/1    Running  0

   my-connect-cluster is the name of the Kafka Connect cluster. A pod ID identifies each pod created. With the default deployment, you create a single Kafka Connect pod.

   READY shows the number of replicas that are ready/expected. The deployment is successful when the STATUS displays as Running.
7.4.2. List of Kafka Connect cluster resources
The following resources are created by the Cluster Operator in the OpenShift cluster:
- <connect_cluster_name>-connect
Name given to the following Kafka Connect resources:
- StrimziPodSet that creates the Kafka Connect worker node pods.
- Headless service that provides stable DNS names to the Kafka Connect pods.
- Service account used by the Kafka Connect pods.
- Pod disruption budget configured for the Kafka Connect worker nodes.
- Network policy managing access to the Kafka Connect REST API.
- <connect_cluster_name>-connect-<pod_id>
- Pods created by the Kafka Connect StrimziPodSet.
- <connect_cluster_name>-connect-api
- Service which exposes the REST interface for managing the Kafka Connect cluster.
- <connect_cluster_name>-connect-config
- ConfigMap which contains the Kafka Connect ancillary configuration and is mounted as a volume by the Kafka Connect pods.
- strimzi-<namespace-name>-<connect_cluster_name>-connect-init
- Cluster role binding used by the Kafka Connect cluster.
- <connect_cluster_name>-connect-build
- Pod used to build a new container image with additional connector plugins (only when the Kafka Connect build feature is used).
- <connect_cluster_name>-connect-dockerfile
- ConfigMap with the Dockerfile generated to build the new container image with additional connector plugins (only when the Kafka Connect build feature is used).
7.5. Adding Kafka Connect connectors
Kafka Connect uses connectors to integrate with other systems to stream data. A connector is an instance of a Kafka Connector class, which can be one of the following types:
- Source connector
- A source connector is a runtime entity that fetches data from an external system and feeds it to Kafka as messages.
- Sink connector
- A sink connector is a runtime entity that fetches messages from Kafka topics and feeds them to an external system.
Kafka Connect uses a plugin architecture to provide the implementation artifacts for connectors. Plugins allow connections to other systems and provide additional configuration to manipulate data. Plugins include connectors and other components, such as data converters and transforms. A connector operates with a specific type of external system. Each connector defines a schema for its configuration. You supply the configuration to Kafka Connect to create a connector instance within Kafka Connect. Connector instances then define a set of tasks for moving data between systems.
Plugins provide a set of one or more artifacts that define a connector and task implementation for connecting to a given kind of data source. The configuration describes the source input data and target output data to feed into and out of Kafka Connect. The plugins might also contain the libraries and files needed to transform the data.
A Kafka Connect deployment can have one or more plugins, but only one version of each plugin. Plugins for many external systems are available for use with Kafka Connect. You can also create your own plugins.
Add connector plugins to Kafka Connect in one of the following ways:
- Configure Kafka Connect to build a new container image with plugins automatically
- Create a Docker image from the base Kafka Connect image (manually or using continuous integration)
After plugins have been added to the container image, you can start, stop, and manage connector instances using KafkaConnector custom resources or the Kafka Connect REST API. You can also create new connector instances using these options.
7.5.1. Building new container images with connector plugins automatically
Configure Kafka Connect so that Streams for Apache Kafka automatically builds a new container image with additional connectors. You define the connector plugins using the .spec.build.plugins property of the KafkaConnect custom resource.

Streams for Apache Kafka automatically downloads and adds the connector plugins into a new container image. The container is pushed into the container repository specified in .spec.build.output and automatically used in the Kafka Connect deployment.
Prerequisites
- The Cluster Operator must be deployed.
- A container registry. You need to provide your own container registry where images can be pushed to, stored, and pulled from. Streams for Apache Kafka supports private container registries as well as public registries such as Quay or Docker Hub.
Procedure
1. Configure the KafkaConnect custom resource by specifying the container registry in .spec.build.output, and additional connectors in .spec.build.plugins, as in the sketch below.
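   A minimal sketch, in which the registry address, image name, push secret, plugin name, and artifact URL are placeholder assumptions:

   apiVersion: kafka.strimzi.io/v1beta2
   kind: KafkaConnect
   metadata:
     name: my-connect-cluster
   spec:
     # ...
     build:
       output:
         # Registry and image the newly built container image is pushed to
         type: docker
         image: my-registry.example/my-org/my-connect-cluster:latest
         pushSecret: my-registry-credentials
       plugins:
         # Each plugin lists one or more downloadable artifacts
         - name: my-connector
           artifacts:
             - type: tgz
               url: https://my-domain.example/my-connector-plugin.tar.gz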
2. Create or update the resource:

   oc apply -f <kafka_connect_configuration_file>

3. Wait for the new container image to build, and for the Kafka Connect cluster to be deployed.
4. Use the Kafka Connect REST API or KafkaConnector custom resources to use the connector plugins you added.
Rebuilding the container image with new artifacts
A new container image is built automatically when you change the base image (.spec.image) or change the connector plugin artifacts configuration (.spec.build.plugins).

To pull an upgraded base image or to download the latest connector plugin artifacts without changing the KafkaConnect resource, you can trigger a rebuild of the container image associated with the Kafka Connect cluster by applying the annotation strimzi.io/force-rebuild=true to the Kafka Connect StrimziPodSet resource.

The annotation triggers the rebuilding process, fetching any new artifacts for plugins specified in the KafkaConnect custom resource and incorporating them into the container image. The rebuild also downloads fresh copies of plugin artifacts that do not specify versions.
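For example, a sketch of applying the annotation with oc, assuming a Kafka Connect cluster named my-connect-cluster (the StrimziPodSet is named after the cluster):

oc annotate strimzipodset my-connect-cluster-connect strimzi.io/force-rebuild=true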
7.5.2. Building new container images with connector plugins from the Kafka Connect base image
Create a custom Docker image with connector plugins from the Kafka Connect base image, adding the plugins to the /opt/kafka/plugins directory.

You can use the Kafka container image on Red Hat Ecosystem Catalog as a base image for creating your own custom image with additional connector plugins.

At startup, the Streams for Apache Kafka version of Kafka Connect loads any third-party connector plugins contained in the /opt/kafka/plugins directory.
Prerequisites
Procedure
1. Create a new Dockerfile using registry.redhat.io/amq-streams/kafka-38-rhel9:2.8.0 as the base image:

   FROM registry.redhat.io/amq-streams/kafka-38-rhel9:2.8.0
   USER root:root
   COPY ./my-plugins/ /opt/kafka/plugins/
   USER 1001

   Example plugins file:
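   A sketch of the expected layout of the my-plugins directory, with one subdirectory per connector plugin (individual JAR file names elided):

   ./my-plugins/
   ├── debezium-connector-mongodb
   │   └── (connector JARs and dependencies)
   ├── debezium-connector-mysql
   │   └── (connector JARs and dependencies)
   └── debezium-connector-postgres
       └── (connector JARs and dependencies)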
   The COPY command points to the plugin files to copy to the container image.
This example adds plugins for Debezium connectors (MongoDB, MySQL, and PostgreSQL), though not all files are listed for brevity. Debezium running in Kafka Connect looks the same as any other Kafka Connect task.
2. Build the container image.

3. Push your custom image to your container registry.

4. Point to the new container image. You can point to the image in one of the following ways:
   - Edit the KafkaConnect.spec.image property of the KafkaConnect custom resource. If set, this property overrides the STRIMZI_KAFKA_CONNECT_IMAGES environment variable in the Cluster Operator. See the sketch after this list.
   - Edit the STRIMZI_KAFKA_CONNECT_IMAGES environment variable in the install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml file to point to the new container image, and then reinstall the Cluster Operator.
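A minimal sketch of the first option, in which the cluster and image names are placeholder assumptions:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect-cluster
spec:
  # ...
  # Custom image built from the Kafka Connect base image
  image: my-registry.example/my-org/my-connect-cluster:latest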
7.5.3. Deploying KafkaConnector resources
Deploy KafkaConnector resources to manage connectors. The KafkaConnector custom resource offers an OpenShift-native approach to management of connectors by the Cluster Operator. You don’t need to send HTTP requests to manage connectors, as with the Kafka Connect REST API. You manage a running connector instance by updating its corresponding KafkaConnector resource, and then applying the updates. The Cluster Operator updates the configurations of the running connector instances. You remove a connector by deleting its corresponding KafkaConnector.

KafkaConnector resources must be deployed to the same namespace as the Kafka Connect cluster they link to.
In the configuration shown in this procedure, the autoRestart feature is enabled (enabled: true) for automatic restarts of failed connectors and tasks. You can also annotate the KafkaConnector resource to restart a connector or restart a connector task manually.
Example connectors
You can use your own connectors or try the examples provided by Streams for Apache Kafka. Up until Apache Kafka 3.1.0, example file connector plugins were included with Apache Kafka. Starting from the 3.1.1 and 3.2.0 releases of Apache Kafka, the examples need to be added to the plugin path as any other connector.
Streams for Apache Kafka provides an example KafkaConnector configuration file (examples/connect/source-connector.yaml) for the example file connector plugins, which creates the following connector instances as KafkaConnector resources:
- A FileStreamSourceConnector instance that reads each line from the Kafka license file (the source) and writes the data as messages to a single Kafka topic.
- A FileStreamSinkConnector instance that reads messages from the Kafka topic and writes the messages to a temporary file (the sink).
We use the example file to create connectors in this procedure.
The example connectors are not intended for use in a production environment.
Prerequisites
- A Kafka Connect deployment
- The Cluster Operator is running
Procedure
1. Add the FileStreamSourceConnector and FileStreamSinkConnector plugins to Kafka Connect in one of the following ways:

   - Configure Kafka Connect to build a new container image with plugins automatically
   - Create a Docker image from the base Kafka Connect image (manually or using continuous integration)
2. Set the strimzi.io/use-connector-resources annotation to true in the Kafka Connect configuration, as in the sketch below.
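   A minimal sketch, assuming a Kafka Connect cluster named my-connect-cluster:

   apiVersion: kafka.strimzi.io/v1beta2
   kind: KafkaConnect
   metadata:
     name: my-connect-cluster
     annotations:
       # Enables management of connectors through KafkaConnector resources
       strimzi.io/use-connector-resources: "true"
   spec:
     # ...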
   With the KafkaConnector resources enabled, the Cluster Operator watches for them.

3. Edit the examples/connect/source-connector.yaml file.

   Example source connector configuration:
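   A sketch consistent with the callouts below (the numbered comments correspond to the callouts; the resource and cluster names are assumptions):

   apiVersion: kafka.strimzi.io/v1beta2
   kind: KafkaConnector
   metadata:
     name: my-source-connector                                        # (1)
     labels:
       strimzi.io/cluster: my-connect-cluster                         # (2)
   spec:
     class: org.apache.kafka.connect.file.FileStreamSourceConnector   # (3)
     tasksMax: 2                                                      # (4)
     autoRestart:                                                     # (5)
       enabled: true
     config:                                                          # (6)
       file: "/opt/kafka/LICENSE"                                     # (7)
       topic: my-topic                                                # (8)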
   1. Name of the KafkaConnector resource, which is used as the name of the connector. Use any name that is valid for an OpenShift resource.
   2. Name of the Kafka Connect cluster to create the connector instance in. Connectors must be deployed to the same namespace as the Kafka Connect cluster they link to.
   3. Full name of the connector class. This should be present in the image being used by the Kafka Connect cluster.
   4. Maximum number of Kafka Connect tasks that the connector can create.
   5. Enables automatic restarts of failed connectors and tasks. By default, the number of restarts is indefinite, but you can set a maximum on the number of automatic restarts using the maxRestarts property.
   6. Connector configuration as key-value pairs.
   7. Location of the external data file. In this example, we’re configuring the FileStreamSourceConnector to read from the /opt/kafka/LICENSE file.
   8. Kafka topic to publish the source data to.
4. Create the source KafkaConnector in your OpenShift cluster:

   oc apply -f examples/connect/source-connector.yaml
5. Create an examples/connect/sink-connector.yaml file:

   touch examples/connect/sink-connector.yaml
6. Paste the following YAML into the sink-connector.yaml file.
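   A sketch consistent with the callouts below (the numbered comments correspond to the callouts; the file path and names are assumptions):

   apiVersion: kafka.strimzi.io/v1beta2
   kind: KafkaConnector
   metadata:
     name: my-sink-connector
     labels:
       strimzi.io/cluster: my-connect-cluster
   spec:
     class: org.apache.kafka.connect.file.FileStreamSinkConnector  # (1)
     tasksMax: 2
     config:                                                       # (2)
       file: "/tmp/my-file"                                        # (3)
       topics: my-topic                                            # (4)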
   1. Full name or alias of the connector class. This should be present in the image being used by the Kafka Connect cluster.
   2. Connector configuration as key-value pairs.
   3. Temporary file to publish the source data to.
   4. Kafka topic to read the source data from.
7. Create the sink KafkaConnector in your OpenShift cluster:

   oc apply -f examples/connect/sink-connector.yaml

8. Check that the connector resources were created:
   oc get kctr --selector strimzi.io/cluster=<my_connect_cluster> -o name

   my-source-connector
   my-sink-connector

   Replace <my_connect_cluster> with the name of your Kafka Connect cluster.
9. In the container, execute kafka-console-consumer.sh to read the messages that were written to the topic by the source connector:

   oc exec <my_kafka_cluster>-kafka-0 -i -t -- bin/kafka-console-consumer.sh --bootstrap-server <my_kafka_cluster>-kafka-bootstrap.NAMESPACE.svc:9092 --topic my-topic --from-beginning

   Replace <my_kafka_cluster> with the name of your Kafka cluster.
Source and sink connector configuration options
The connector configuration is defined in the spec.config property of the KafkaConnector resource.

The FileStreamSourceConnector and FileStreamSinkConnector classes support the same configuration options as the Kafka Connect REST API. Other connectors support different configuration options.
Configuration options for the FileStreamSourceConnector class:

Name  | Type   | Default value | Description
------|--------|---------------|------------
file  | String | Null          | Source file to read messages from. If not specified, the standard input is used.
topic | List   | Null          | The Kafka topic to publish data to.

Configuration options for the FileStreamSinkConnector class:

Name         | Type   | Default value | Description
-------------|--------|---------------|------------
file         | String | Null          | Destination file to write messages to. If not specified, the standard output is used.
topics       | List   | Null          | One or more Kafka topics to read data from.
topics.regex | String | Null          | A regular expression matching one or more Kafka topics to read data from.
7.5.4. Exposing the Kafka Connect API
Use the Kafka Connect REST API as an alternative to using KafkaConnector resources to manage connectors. The Kafka Connect REST API is available as a service running on <connect_cluster_name>-connect-api:8083, where <connect_cluster_name> is the name of your Kafka Connect cluster. The service is created when you create a Kafka Connect instance.
The operations supported by the Kafka Connect REST API are described in the Apache Kafka Connect API documentation.
The strimzi.io/use-connector-resources annotation enables KafkaConnectors. If you applied the annotation to your KafkaConnect resource configuration, you need to remove it to use the Kafka Connect API. Otherwise, manual changes made directly using the Kafka Connect REST API are reverted by the Cluster Operator.
You can add the connector configuration as a JSON object.
Example curl request to add connector configuration
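A sketch of the request, assuming a Kafka Connect cluster named my-connect-cluster and the example file source connector:

curl -X POST \
  http://my-connect-cluster-connect-api:8083/connectors \
  -H 'Content-Type: application/json' \
  -d '{
    "name": "my-source-connector",
    "config": {
      "connector.class": "org.apache.kafka.connect.file.FileStreamSourceConnector",
      "tasks.max": "2",
      "file": "/opt/kafka/LICENSE",
      "topic": "my-topic"
    }
  }'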
The API is only accessible within the OpenShift cluster. If you want to make the Kafka Connect API accessible to applications running outside of the OpenShift cluster, you can expose it manually by creating one of the following resources:

- LoadBalancer or NodePort type services
- Ingress resources (Kubernetes only)
- OpenShift routes (OpenShift only)

The connection is insecure, so allow external access only with caution.
If you decide to create services, use the labels from the selector of the <connect_cluster_name>-connect-api service to configure the pods to which the service will route the traffic:

Selector configuration for the service
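A minimal sketch of the selector, assuming a Kafka Connect cluster named my-connect-cluster and mirroring the label style of the Kafka Bridge selector example later in this chapter:

# ...
selector:
  strimzi.io/cluster: my-connect-cluster
  strimzi.io/kind: KafkaConnect
# ...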
You must also create a NetworkPolicy that allows HTTP requests from external clients.

Example NetworkPolicy to allow requests to the Kafka Connect API
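A minimal sketch, assuming the external client pods carry the label app: my-connector-manager and the Kafka Connect cluster is named my-connect-cluster; adjust both selectors to match your deployment:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: my-connect-api-network-policy
spec:
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: my-connector-manager # 1
      ports:
        - port: 8083
          protocol: TCP
  podSelector:
    matchLabels:
      strimzi.io/cluster: my-connect-cluster
      strimzi.io/kind: KafkaConnect
  policyTypes:
    - Ingress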
- 1 The label of the pod that is allowed to connect to the API.
To add the connector configuration outside the cluster, use the URL of the resource that exposes the API in the curl command.
7.5.5. Limiting access to the Kafka Connect API
It is crucial to restrict access to the Kafka Connect API only to trusted users to prevent unauthorized actions and potential security issues. The Kafka Connect API provides extensive capabilities for altering connector configurations, which makes it all the more important to take security precautions. Someone with access to the Kafka Connect API could potentially obtain sensitive information that an administrator may assume is secure.
The Kafka Connect REST API can be accessed by anyone who has authenticated access to the OpenShift cluster and knows the endpoint URL, which includes the hostname/IP address and port number.
For example, suppose an organization uses a Kafka Connect cluster and connectors to stream sensitive data from a customer database to a central database. The administrator uses a configuration provider plugin to store sensitive information related to connecting to the customer database and the central database, such as database connection details and authentication credentials. The configuration provider protects this sensitive information from being exposed to unauthorized users. However, someone who has access to the Kafka Connect API can still obtain access to the customer database without the consent of the administrator. They can do this by setting up a fake database and configuring a connector to connect to it. They then modify the connector configuration to point to the customer database, but instead of sending the data to the central database, they send it to the fake database. By configuring the connector to connect to the fake database, the login details and credentials for connecting to the customer database are intercepted, even though they are stored securely in the configuration provider.
If you are using the KafkaConnector custom resources, then by default the OpenShift RBAC rules permit only OpenShift cluster administrators to make changes to connectors. You can also designate non-cluster administrators to manage Streams for Apache Kafka resources. With KafkaConnector resources enabled in your Kafka Connect configuration, changes made directly using the Kafka Connect REST API are reverted by the Cluster Operator. If you are not using the KafkaConnector resource, the default RBAC rules do not limit access to the Kafka Connect API. If you want to limit direct access to the Kafka Connect REST API using OpenShift RBAC, you need to enable and use the KafkaConnector resources.
For improved security, we recommend configuring the following properties for the Kafka Connect API:

org.apache.kafka.disallowed.login.modules

(Kafka 3.4 or later) Set the org.apache.kafka.disallowed.login.modules Java system property to prevent the use of insecure login modules. For example, specifying com.sun.security.auth.module.JndiLoginModule prevents the use of the Kafka JndiLoginModule.

Example configuration for disallowing login modules
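A minimal sketch, assuming a KafkaConnect resource named my-connect-cluster; the property is set through the jvmOptions.javaSystemProperties configuration:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect-cluster
  annotations:
    strimzi.io/use-connector-resources: "true"
spec:
  # ...
  jvmOptions:
    javaSystemProperties:
      - name: org.apache.kafka.disallowed.login.modules
        value: com.sun.security.auth.module.JndiLoginModule, org.apache.kafka.common.security.kerberos.KerberosLoginModule
  # ...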
Only allow trusted login modules and follow the latest advice from Kafka for the version you are using. As a best practice, explicitly disallow insecure login modules in your Kafka Connect configuration by using the org.apache.kafka.disallowed.login.modules system property.

connector.client.config.override.policy

Set the connector.client.config.override.policy property to None to prevent connector configurations from overriding the Kafka Connect configuration and the consumers and producers it uses.

Example configuration to specify connector override policy
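A minimal sketch, again assuming a KafkaConnect resource named my-connect-cluster:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect-cluster
  annotations:
    strimzi.io/use-connector-resources: "true"
spec:
  # ...
  config:
    connector.client.config.override.policy: None
  # ...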
7.5.6. Switching to using KafkaConnector custom resources
You can switch from using the Kafka Connect API to using KafkaConnector custom resources to manage your connectors. To make the switch, do the following in the order shown:

- Deploy KafkaConnector resources with the configuration to create your connector instances.
- Enable KafkaConnector resources in your Kafka Connect configuration by setting the strimzi.io/use-connector-resources annotation to true.

If you enable KafkaConnector resources before creating them, you delete all connectors.
To switch from using KafkaConnector resources to using the Kafka Connect API, first remove the annotation that enables the KafkaConnector resources from your Kafka Connect configuration. Otherwise, manual changes made directly using the Kafka Connect REST API are reverted by the Cluster Operator.

When making the switch, check the status of the KafkaConnect resource. The value of metadata.generation (the current version of the deployment) must match status.observedGeneration (the latest reconciliation of the resource). When the Kafka Connect cluster is Ready, you can delete the KafkaConnector resources.
7.6. Deploying Kafka MirrorMaker
Kafka MirrorMaker replicates data between two or more Kafka clusters, within or across data centers. This process is called mirroring to avoid confusion with the concept of Kafka partition replication. MirrorMaker consumes messages from a source cluster and republishes those messages to a target cluster.
Data replication across clusters supports scenarios that require the following:
- Recovery of data in the event of a system failure
- Consolidation of data from multiple source clusters for centralized analysis
- Restriction of data access to a specific cluster
- Provision of data at a specific location to improve latency
7.6.1. Deploying Kafka MirrorMaker to your OpenShift cluster
This procedure shows how to deploy a Kafka MirrorMaker cluster to your OpenShift cluster using the Cluster Operator.

The deployment uses a YAML file to provide the specification to create a KafkaMirrorMaker or KafkaMirrorMaker2 resource, depending on the version of MirrorMaker deployed. MirrorMaker 2 is based on Kafka Connect and uses its configuration properties.

Kafka MirrorMaker 1 (referred to as just MirrorMaker in the documentation) has been deprecated in Apache Kafka 3.0.0 and will be removed in Apache Kafka 4.0.0. As a result, the KafkaMirrorMaker custom resource, which is used to deploy Kafka MirrorMaker 1, has been deprecated in Streams for Apache Kafka as well. The KafkaMirrorMaker resource will be removed from Streams for Apache Kafka when we adopt Apache Kafka 4.0.0. As a replacement, use the KafkaMirrorMaker2 custom resource with the IdentityReplicationPolicy.
Streams for Apache Kafka provides example configuration files. In this procedure, we use the following example files:

- examples/mirror-maker/kafka-mirror-maker.yaml
- examples/mirror-maker/kafka-mirror-maker-2.yaml

If deploying MirrorMaker 2 clusters to run in parallel, using the same target Kafka cluster, each instance must use unique names for internal Kafka Connect topics. To do this, configure each MirrorMaker 2 instance to replace the defaults.
Prerequisites

- The Cluster Operator must be deployed.
Procedure
Deploy Kafka MirrorMaker to your OpenShift cluster:

For MirrorMaker:

oc apply -f examples/mirror-maker/kafka-mirror-maker.yaml

For MirrorMaker 2:

oc apply -f examples/mirror-maker/kafka-mirror-maker-2.yaml

Check the status of the deployment:
oc get pods -n <my_cluster_operator_namespace>

Output shows the deployment name and readiness:

NAME                                   READY  STATUS   RESTARTS
my-mirror-maker-mirror-maker-<pod_id>  1/1    Running  1
my-mm2-cluster-mirrormaker2-<pod_id>   1/1    Running  1

my-mirror-maker is the name of the Kafka MirrorMaker cluster. my-mm2-cluster is the name of the Kafka MirrorMaker 2 cluster. A pod ID identifies each pod created.

With the default deployment, you install a single MirrorMaker or MirrorMaker 2 pod.

READY shows the number of replicas that are ready/expected. The deployment is successful when the STATUS displays as Running.
7.6.2. List of Kafka MirrorMaker 2 cluster resources
The following resources are created by the Cluster Operator in the OpenShift cluster:
- <mirrormaker2_cluster_name>-mirrormaker2
Name given to the following MirrorMaker 2 resources:
- StrimziPodSet that creates the MirrorMaker 2 worker node pods.
- Headless service that provides stable DNS names to the MirrorMaker 2 pods.
- Service account used by the MirrorMaker 2 pods.
- Pod disruption budget configured for the MirrorMaker 2 worker nodes.
- Network Policy managing access to the MirrorMaker 2 REST API.
- <mirrormaker2_cluster_name>-mirrormaker2-<pod_id>
- Pods created by the MirrorMaker 2 StrimziPodSet.
- <mirrormaker2_cluster_name>-mirrormaker2-api
- Service which exposes the REST interface for managing the MirrorMaker 2 cluster.
- <mirrormaker2_cluster_name>-mirrormaker2-config
- ConfigMap which contains the MirrorMaker 2 ancillary configuration and is mounted as a volume by the MirrorMaker 2 pods.
- strimzi-<namespace-name>-<mirrormaker2_cluster_name>-mirrormaker2-init
- Cluster role binding used by the MirrorMaker 2 cluster.
7.6.3. List of Kafka MirrorMaker cluster resources
The following resources are created by the Cluster Operator in the OpenShift cluster:
- <mirrormaker_cluster_name>-mirror-maker
Name given to the following MirrorMaker resources:
- Deployment which is responsible for creating the MirrorMaker pods.
- Service account used by the MirrorMaker nodes.
- Pod Disruption Budget configured for the MirrorMaker worker nodes.
- <mirrormaker_cluster_name>-mirror-maker-config
- ConfigMap which contains ancillary configuration for MirrorMaker, and is mounted as a volume by the MirrorMaker pods.
7.7. Deploying Kafka Bridge
Kafka Bridge provides an API for integrating HTTP-based clients with a Kafka cluster.
7.7.1. Deploying Kafka Bridge to your OpenShift cluster
This procedure shows how to deploy a Kafka Bridge cluster to your OpenShift cluster using the Cluster Operator.

The deployment uses a YAML file to provide the specification to create a KafkaBridge resource.

Streams for Apache Kafka provides example configuration files. In this procedure, we use the following example file:

- examples/bridge/kafka-bridge.yaml
Prerequisites

- The Cluster Operator must be deployed.
Procedure
Deploy Kafka Bridge to your OpenShift cluster:

oc apply -f examples/bridge/kafka-bridge.yaml

Check the status of the deployment:

oc get pods -n <my_cluster_operator_namespace>

Output shows the deployment name and readiness:

NAME                       READY  STATUS   RESTARTS
my-bridge-bridge-<pod_id>  1/1    Running  0

my-bridge is the name of the Kafka Bridge cluster. A pod ID identifies each pod created.

With the default deployment, you install a single Kafka Bridge pod.

READY shows the number of replicas that are ready/expected. The deployment is successful when the STATUS displays as Running.
7.7.2. Exposing the Kafka Bridge service to your local machine
Use port forwarding to expose the Kafka Bridge service to your local machine on http://localhost:8080.
Port forwarding is only suitable for development and testing purposes.
Procedure
List the names of the pods in your OpenShift cluster:

oc get pods -o name

pod/kafka-consumer
# ...
pod/my-bridge-bridge-<pod_id>

Connect to the Kafka Bridge pod on port 8080:

oc port-forward pod/my-bridge-bridge-<pod_id> 8080:8080 &

Note: If port 8080 on your local machine is already in use, use an alternative HTTP port, such as 8008.
API requests are now forwarded from port 8080 on your local machine to port 8080 in the Kafka Bridge pod.
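For example, you can verify the forwarded connection by listing the topics available through the Kafka Bridge REST API:

curl -X GET http://localhost:8080/topics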
7.7.3. Accessing the Kafka Bridge outside of OpenShift
After deployment, the Kafka Bridge can only be accessed by applications running in the same OpenShift cluster. These applications use the <kafka_bridge_name>-bridge-service service to access the API.

If you want to make the Kafka Bridge accessible to applications running outside of the OpenShift cluster, you can expose it manually by creating one of the following resources:

- LoadBalancer or NodePort type services
- Ingress resources (Kubernetes only)
- OpenShift routes (OpenShift only)

If you decide to create services, use the labels from the selector of the <kafka_bridge_name>-bridge-service service to configure the pods to which the service will route the traffic:
# ...
selector:
  strimzi.io/cluster: kafka-bridge-name # 1
  strimzi.io/kind: KafkaBridge
# ...

- 1 Name of the Kafka Bridge custom resource in your OpenShift cluster.
7.7.4. List of Kafka Bridge cluster resources
The following resources are created by the Cluster Operator in the OpenShift cluster:
- <bridge_cluster_name>-bridge
- Deployment which is responsible for creating the Kafka Bridge worker node pods.
- <bridge_cluster_name>-bridge-service
- Service which exposes the REST interface of the Kafka Bridge cluster.
- <bridge_cluster_name>-bridge-config
- ConfigMap which contains the Kafka Bridge ancillary configuration and is mounted as a volume by the Kafka Bridge pods.
- <bridge_cluster_name>-bridge
- Pod Disruption Budget configured for the Kafka Bridge worker nodes.
7.8. Alternative standalone deployment options for Streams for Apache Kafka operators
You can perform a standalone deployment of the Topic Operator and User Operator. Consider a standalone deployment of these operators if you are using a Kafka cluster that is not managed by the Cluster Operator.
You deploy the operators to OpenShift. Kafka can be running outside of OpenShift. For example, you might be using Kafka as a managed service. You adjust the deployment configuration for the standalone operator to match the address of your Kafka cluster.
7.8.1. Deploying the standalone Topic Operator
This procedure shows how to deploy the Topic Operator as a standalone component for topic management. You can use a standalone Topic Operator with a Kafka cluster that is not managed by the Cluster Operator.
Standalone deployment files are provided with Streams for Apache Kafka. Use the 05-Deployment-strimzi-topic-operator.yaml deployment file to deploy the Topic Operator. Add or set the environment variables needed to make a connection to a Kafka cluster.
The Topic Operator watches for KafkaTopic
resources in a single namespace. You specify the namespace to watch, and the connection to the Kafka cluster, in the Topic Operator configuration. A single Topic Operator can watch a single namespace. One namespace should be watched by only one Topic Operator. If you want to use more than one Topic Operator, configure each of them to watch different namespaces. In this way, you can use Topic Operators with multiple Kafka clusters.
Prerequisites
You are running a Kafka cluster for the Topic Operator to connect to.
As long as the standalone Topic Operator is correctly configured for connection, the Kafka cluster can be running on a bare-metal environment, a virtual machine, or as a managed cloud application service.
Procedure
Edit the env properties in the install/topic-operator/05-Deployment-strimzi-topic-operator.yaml standalone deployment file.

Example standalone Topic Operator deployment configuration
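A minimal sketch of the deployment, showing env properties that correspond to the callouts below; the bootstrap address, labels, and credential values are placeholders:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: strimzi-topic-operator
  labels:
    app: strimzi
spec:
  # ...
  template:
    # ...
    spec:
      # ...
      containers:
        - name: strimzi-topic-operator
          # ...
          env:
            - name: STRIMZI_NAMESPACE # 1
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: STRIMZI_KAFKA_BOOTSTRAP_SERVERS # 2
              value: my-kafka-bootstrap-address:9092
            - name: STRIMZI_RESOURCE_LABELS # 3
              value: "strimzi.io/cluster=my-cluster"
            - name: STRIMZI_FULL_RECONCILIATION_INTERVAL_MS # 4
              value: "120000"
            - name: STRIMZI_LOG_LEVEL # 5
              value: INFO
            - name: STRIMZI_TLS_ENABLED # 6
              value: "false"
            - name: STRIMZI_JAVA_OPTS # 7
              value: "-Xmx512M -Xms256M"
            - name: STRIMZI_JAVA_SYSTEM_PROPERTIES # 8
              value: "-Djavax.net.debug=verbose"
            - name: STRIMZI_PUBLIC_CA # 9 (see also 16)
              value: "false"
            - name: STRIMZI_TLS_AUTH_ENABLED # 10
              value: "false"
            - name: STRIMZI_SASL_ENABLED # 11
              value: "false"
            - name: STRIMZI_SASL_USERNAME # 12
              value: "admin"
            - name: STRIMZI_SASL_PASSWORD # 13
              value: "password"
            - name: STRIMZI_SASL_MECHANISM # 14
              value: "scram-sha-512"
            - name: STRIMZI_SECURITY_PROTOCOL # 15
              value: "SSL"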
- 1 The OpenShift namespace for the Topic Operator to watch for KafkaTopic resources. Specify the namespace of the Kafka cluster.
- 2 The host and port pair of the bootstrap broker address to discover and connect to all brokers in the Kafka cluster. Use a comma-separated list to specify two or three broker addresses in case a server is down.
- 3 The label to identify the KafkaTopic resources managed by the Topic Operator. This does not have to be the name of the Kafka cluster. It can be the label assigned to the KafkaTopic resource. If you deploy more than one Topic Operator, the labels must be unique for each. That is, the operators cannot manage the same resources.
- 4 The interval between periodic reconciliations, in milliseconds. The default is 120000 (2 minutes).
- 5 The level for printing logging messages. You can set the level to ERROR, WARNING, INFO, DEBUG, or TRACE.
- 6 Enables TLS support for encrypted communication with the Kafka brokers.
- 7 (Optional) The Java options used by the JVM running the Topic Operator.
- 8 (Optional) The debugging (-D) options set for the Topic Operator.
- 9 (Optional) Skips the generation of trust store certificates if TLS is enabled through STRIMZI_TLS_ENABLED. If this environment variable is enabled, the brokers must use a public trusted certificate authority for their TLS certificates. The default is false.
- 10 (Optional) Generates key store certificates for mTLS authentication. Setting this to false disables client authentication with mTLS to the Kafka brokers. The default is true.
- 11 (Optional) Enables SASL support for client authentication when connecting to Kafka brokers. The default is false.
- 12 (Optional) The SASL username for client authentication. Mandatory only if SASL is enabled through STRIMZI_SASL_ENABLED.
- 13 (Optional) The SASL password for client authentication. Mandatory only if SASL is enabled through STRIMZI_SASL_ENABLED.
- 14 (Optional) The SASL mechanism for client authentication. Mandatory only if SASL is enabled through STRIMZI_SASL_ENABLED. You can set the value to plain, scram-sha-256, or scram-sha-512.
- 15 (Optional) The security protocol used for communication with Kafka brokers. The default value is PLAINTEXT. You can set the value to PLAINTEXT, SSL, SASL_PLAINTEXT, or SASL_SSL.
- 16 If you want to connect to Kafka brokers that are using certificates from a public certificate authority, set STRIMZI_PUBLIC_CA to true. Set this property to true, for example, if you are using the Amazon MSK service on AWS.

If you enabled mTLS with the STRIMZI_TLS_ENABLED environment variable, specify the keystore and truststore used to authenticate the connection to the Kafka cluster.

Example mTLS configuration
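A minimal sketch, assuming the keystore and truststore files are mounted into the container at the paths shown; the password values are placeholders:

# ...
env:
  - name: STRIMZI_TRUSTSTORE_LOCATION
    value: /path/to/truststore.p12
  - name: STRIMZI_TRUSTSTORE_PASSWORD
    value: <truststore_password>
  - name: STRIMZI_KEYSTORE_LOCATION
    value: /path/to/keystore.p12
  - name: STRIMZI_KEYSTORE_PASSWORD
    value: <keystore_password>
# ...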
If you need to configure custom SASL authentication, you can define the necessary authentication properties using the STRIMZI_SASL_CUSTOM_CONFIG_JSON environment variable for the standalone operator. For example, this configuration may be used for accessing a Kafka cluster in a cloud provider with a custom login module like the Amazon MSK Library for AWS Identity and Access Management (aws-msk-iam-auth).

The property STRIMZI_ALTERABLE_TOPIC_CONFIG defaults to ALL, allowing all .spec.config properties to be set in the KafkaTopic resource. If this setting is not suitable for a managed Kafka service, do as follows:

- If only a subset of properties is configurable, list them as comma-separated values.
- If no properties are to be configured, use NONE, which is equivalent to an empty property list.

Note: Only Kafka configuration properties starting with sasl. can be set with the STRIMZI_SASL_CUSTOM_CONFIG_JSON environment variable.

Example custom SASL configuration
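A minimal sketch, assuming the aws-msk-iam-auth login module; the environment variable name shown for callout 1 is an assumption based on the callout description, and the SASL values are placeholders:

# ...
env:
  - name: STRIMZI_SKIP_CLUSTER_CONFIG_REVIEW # 1 (name assumed)
    value: "true"
  - name: STRIMZI_ALTERABLE_TOPIC_CONFIG # 2
    value: "retention.ms, cleanup.policy"
  - name: STRIMZI_SASL_CUSTOM_CONFIG_JSON # 3
    value: |
      {
        "sasl.mechanism": "AWS_MSK_IAM",
        "sasl.jaas.config": "software.amazon.msk.auth.iam.IAMLoginModule required;",
        "sasl.client.callback.handler.class": "software.amazon.msk.auth.iam.IAMClientCallbackHandler"
      }
# ...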
- 1 Disables cluster configuration lookup for managed Kafka services that don't allow topic configuration changes.
- 2 Defines the topic configuration properties that can be updated based on the limitations set by managed Kafka services.
- 3 Specifies the SASL properties to be set in JSON format. Only properties starting with sasl. are allowed.
Example Dockerfile with external jars
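A minimal sketch; the base image placeholder stands for the operator image of your Streams for Apache Kafka release, and external-jars is an illustrative directory containing the additional JAR files, such as the aws-msk-iam-auth library:

# Replace <streams_operator_image> with the operator image for your release
FROM <streams_operator_image>
USER root:root
# Copy the external JAR files into the operator's library directory
COPY ./external-jars/ /opt/strimzi/lib/
USER 1001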
- Apply the changes to the Deployment configuration to deploy the Topic Operator.

Check the status of the deployment:
oc get deployments

Output shows the deployment name and readiness:

NAME                    READY  UP-TO-DATE  AVAILABLE
strimzi-topic-operator  1/1    1           1

READY shows the number of replicas that are ready/expected. The deployment is successful when the AVAILABLE output shows 1.
7.8.2. Deploying the standalone User Operator
This procedure shows how to deploy the User Operator as a standalone component for user management. You can use a standalone User Operator with a Kafka cluster that is not managed by the Cluster Operator.
A standalone deployment can operate with any Kafka cluster.
Standalone deployment files are provided with Streams for Apache Kafka. Use the 05-Deployment-strimzi-user-operator.yaml deployment file to deploy the User Operator. Add or set the environment variables needed to make a connection to a Kafka cluster.
The User Operator watches for KafkaUser
resources in a single namespace. You specify the namespace to watch, and the connection to the Kafka cluster, in the User Operator configuration. A single User Operator can watch a single namespace. One namespace should be watched by only one User Operator. If you want to use more than one User Operator, configure each of them to watch different namespaces. In this way, you can use the User Operator with multiple Kafka clusters.
Prerequisites
You are running a Kafka cluster for the User Operator to connect to.
As long as the standalone User Operator is correctly configured for connection, the Kafka cluster can be running on a bare-metal environment, a virtual machine, or as a managed cloud application service.
Procedure
Edit the following env properties in the install/user-operator/05-Deployment-strimzi-user-operator.yaml standalone deployment file.

Example standalone User Operator deployment configuration
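A minimal sketch of the deployment, showing env properties that correspond to the callouts below; the secret names, labels, and values are placeholders:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: strimzi-user-operator
  labels:
    app: strimzi
spec:
  # ...
  template:
    # ...
    spec:
      # ...
      containers:
        - name: strimzi-user-operator
          # ...
          env:
            - name: STRIMZI_NAMESPACE # 1
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: STRIMZI_KAFKA_BOOTSTRAP_SERVERS # 2
              value: my-kafka-bootstrap-address:9092
            - name: STRIMZI_CA_CERT_NAME # 3
              value: my-cluster-clients-ca-cert
            - name: STRIMZI_CA_KEY_NAME # 4
              value: my-cluster-clients-ca
            - name: STRIMZI_LABELS # 5
              value: "strimzi.io/cluster=my-cluster"
            - name: STRIMZI_FULL_RECONCILIATION_INTERVAL_MS # 6
              value: "120000"
            - name: STRIMZI_WORK_QUEUE_SIZE # 7
              value: "1024"
            - name: STRIMZI_CONTROLLER_THREAD_POOL_SIZE # 8
              value: "50"
            - name: STRIMZI_USER_OPERATIONS_THREAD_POOL_SIZE # 9
              value: "4"
            - name: STRIMZI_LOG_LEVEL # 10
              value: INFO
            - name: STRIMZI_GC_LOG_ENABLED # 11
              value: "true"
            - name: STRIMZI_CA_VALIDITY # 12
              value: "365"
            - name: STRIMZI_CA_RENEWAL # 13
              value: "30"
            - name: STRIMZI_JAVA_OPTS # 14
              value: "-Xmx512M -Xms256M"
            - name: STRIMZI_JAVA_SYSTEM_PROPERTIES # 15
              value: "-Djavax.net.debug=verbose"
            - name: STRIMZI_SECRET_PREFIX # 16
              value: kafka-
            - name: STRIMZI_ACLS_ADMIN_API_SUPPORTED # 17
              value: "true"
            - name: STRIMZI_MAINTENANCE_TIME_WINDOWS # 18
              value: "* * 8-10 * * ?;* * 14-15 * * ?"
            - name: STRIMZI_KAFKA_ADMIN_CLIENT_CONFIGURATION # 19
              value: |
                default.api.timeout.ms=120000
                request.timeout.ms=60000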
- 1 The OpenShift namespace for the User Operator to watch for KafkaUser resources. Only one namespace can be specified.
- 2 The host and port pair of the bootstrap broker address to discover and connect to all brokers in the Kafka cluster. Use a comma-separated list to specify two or three broker addresses in case a server is down.
- 3 The OpenShift Secret that contains the public key (ca.crt) value of the CA (certificate authority) that signs new user certificates for mTLS authentication.
- 4 The OpenShift Secret that contains the private key (ca.key) value of the CA that signs new user certificates for mTLS authentication.
- 5 The label to identify the KafkaUser resources managed by the User Operator. This does not have to be the name of the Kafka cluster. It can be the label assigned to the KafkaUser resource. If you deploy more than one User Operator, the labels must be unique for each. That is, the operators cannot manage the same resources.
- 6 The interval between periodic reconciliations, in milliseconds. The default is 120000 (2 minutes).
- 7 The size of the controller event queue. The size of the queue should be at least as big as the maximum number of users you expect the User Operator to operate on. The default is 1024.
- 8 The size of the worker pool for reconciling the users. A bigger pool might require more resources, but it will also handle more KafkaUser resources. The default is 50.
- 9 The size of the worker pool for Kafka Admin API and OpenShift operations. A bigger pool might require more resources, but it will also handle more KafkaUser resources. The default is 4.
- 10 The level for printing logging messages. You can set the level to ERROR, WARNING, INFO, DEBUG, or TRACE.
- 11 Enables garbage collection (GC) logging. The default is true.
- 12 The validity period for the CA. The default is 365 days.
- 13 The renewal period for the CA. The renewal period is measured backwards from the expiry date of the current certificate. The default is 30 days to initiate certificate renewal before the old certificates expire.
- 14 (Optional) The Java options used by the JVM running the User Operator.
- 15 (Optional) The debugging (-D) options set for the User Operator.
- 16 (Optional) Prefix for the names of OpenShift secrets created by the User Operator.
- 17 (Optional) Indicates whether the Kafka cluster supports management of authorization ACL rules using the Kafka Admin API. When set to false, the User Operator will reject all resources with simple authorization ACL rules. This helps to avoid unnecessary exceptions in the Kafka cluster logs. The default is true.
- 18 (Optional) Semicolon-separated list of cron expressions defining the maintenance time windows during which expiring user certificates will be renewed.
- 19 (Optional) Configuration options for configuring the Kafka Admin client used by the User Operator, in properties format.
If you are using mTLS to connect to the Kafka cluster, specify the secrets used to authenticate the connection. Otherwise, go to the next step.
Example mTLS configuration
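A minimal sketch, assuming a Kafka cluster named my-cluster whose secrets were created by Streams for Apache Kafka; the secret names are placeholders:

# ...
env:
  - name: STRIMZI_CLUSTER_CA_CERT_SECRET_NAME # 1
    value: my-cluster-cluster-ca-cert
  - name: STRIMZI_EO_KEY_SECRET_NAME # 2
    value: my-cluster-entity-operator-certs
# ...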
- 1 The OpenShift Secret that contains the public key (ca.crt) value of the CA that signs Kafka broker certificates.
- 2 The OpenShift Secret that contains the certificate public key (entity-operator.crt) and private key (entity-operator.key) that is used for mTLS authentication against the Kafka cluster.
Deploy the User Operator:

oc create -f install/user-operator

Check the status of the deployment:

oc get deployments

Output shows the deployment name and readiness:

NAME                   READY  UP-TO-DATE  AVAILABLE
strimzi-user-operator  1/1    1           1

READY shows the number of replicas that are ready/expected. The deployment is successful when the AVAILABLE output shows 1.