Chapter 3. Deployment configuration
This chapter describes how to configure different aspects of the supported deployments:
- Kafka clusters
- Kafka Connect clusters
- Kafka Connect clusters with Source2Image support
- Kafka Mirror Maker
3.1. Kafka cluster configuration
The full schema of the `Kafka` resource is described in Section C.1, "Kafka schema reference". All labels that are applied to the desired `Kafka` resource are also applied to the OpenShift resources making up the Kafka cluster. This provides a convenient mechanism for resources to be labeled as required.
3.1.1. Data storage considerations
An efficient data storage infrastructure is essential to the optimal performance of AMQ Streams.
AMQ Streams requires block storage and is designed to work optimally with cloud-based block storage solutions, including Amazon Elastic Block Store (EBS). The use of file storage (for example, NFS) is not recommended.
Choose local storage (local persistent volumes) when possible. If local storage is not available, you can use a Storage Area Network (SAN) accessed by a protocol such as Fibre Channel or iSCSI.
3.1.1.1. Apache Kafka and Zookeeper storage
Use separate disks for Apache Kafka and Zookeeper.
Three types of data storage are supported:
- Ephemeral (Recommended for development only)
- Persistent
- JBOD (Just a Bunch of Disks, suitable for Kafka only)
For more information, see Kafka and Zookeeper storage.
Solid-state drives (SSDs), though not essential, can improve the performance of Kafka in large clusters where data is sent to and received from multiple topics asynchronously. SSDs are particularly effective with Zookeeper, which requires fast, low latency data access.
You do not need to provision replicated storage because Kafka and Zookeeper both have built-in data replication.
3.1.1.2. File systems
It is recommended that you configure your storage system to use the XFS file system. AMQ Streams is also compatible with the ext4 file system, but this might require additional configuration for best results.
3.1.2. Kafka and Zookeeper storage types
As stateful applications, Kafka and Zookeeper need to store data on disk. AMQ Streams supports three storage types for this data:
- Ephemeral
- Persistent
- JBOD storage
JBOD storage is supported only for Kafka, not for Zookeeper.
When configuring a `Kafka` resource, you can specify the type of storage used by the Kafka broker and its corresponding Zookeeper node. You configure the storage type using the `storage` property in the following resources:

- `Kafka.spec.kafka`
- `Kafka.spec.zookeeper`

The storage type is configured in the `type` field.

The storage type cannot be changed after a Kafka cluster is deployed.
3.1.2.1. Ephemeral storage
Ephemeral storage uses `emptyDir` volumes to store data. To use ephemeral storage, the `type` field must be set to `ephemeral`.

`emptyDir` volumes are not persistent, and the data stored in them is lost when the pod is restarted. After the new pod is started, it must recover all data from the other nodes of the cluster. Ephemeral storage is not suitable for single-node Zookeeper clusters or for Kafka topics with a replication factor of 1, because it will lead to data loss.

An example of ephemeral storage

```yaml
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    storage:
      type: ephemeral
    # ...
  zookeeper:
    # ...
    storage:
      type: ephemeral
    # ...
```
3.1.2.1.1. Log directories
The ephemeral volume is used by the Kafka brokers as log directories mounted into the following path: `/var/lib/kafka/data/kafka-log<idx>`

Where `<idx>` is the Kafka broker pod index. For example, `/var/lib/kafka/data/kafka-log0`.
3.1.2.2. Persistent storage
Persistent storage uses Persistent Volume Claims to provision persistent volumes for storing data. Persistent Volume Claims can be used to provision volumes of many different types, depending on the Storage Class which will provision the volume. The data types which can be used with persistent volume claims include many types of SAN storage as well as local persistent volumes.

To use persistent storage, the `type` has to be set to `persistent-claim`. Persistent storage supports additional configuration options:

- `id` (optional): Storage identification number. This option is mandatory for storage volumes defined in a JBOD storage declaration. Default is `0`.
- `size` (required): Defines the size of the persistent volume claim, for example, "1000Gi".
- `class` (optional): The OpenShift Storage Class to use for dynamic volume provisioning.
- `selector` (optional): Allows selecting a specific persistent volume to use. It contains key:value pairs representing labels for selecting such a volume.
- `deleteClaim` (optional): Boolean value which specifies if the Persistent Volume Claim has to be deleted when the cluster is undeployed. Default is `false`.
Increasing the size of persistent volumes in an existing AMQ Streams cluster is only supported in OpenShift versions that support persistent volume resizing. The persistent volume to be resized must use a storage class that supports volume expansion. For other versions of OpenShift and storage classes which do not support volume expansion, you must decide the necessary storage size before deploying the cluster. Decreasing the size of existing persistent volumes is not possible.
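As a point of reference, a storage class supports volume expansion when its definition sets `allowVolumeExpansion: true`. The following is a minimal sketch; the class name and the AWS EBS provisioner are illustrative, so substitute the provisioner used in your environment:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: my-storage-class
provisioner: kubernetes.io/aws-ebs  # illustrative provisioner
allowVolumeExpansion: true          # required for resizing persistent volumes
```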
Example fragment of persistent storage configuration with 1000Gi size
```yaml
# ...
storage:
  type: persistent-claim
  size: 1000Gi
# ...
```
The following example demonstrates the use of a storage class.
Example fragment of persistent storage configuration with specific Storage Class
```yaml
# ...
storage:
  type: persistent-claim
  size: 1Gi
  class: my-storage-class
# ...
```
Finally, a `selector` can be used to select a specific labeled persistent volume that provides needed features such as an SSD.
Example fragment of persistent storage configuration with selector
```yaml
# ...
storage:
  type: persistent-claim
  size: 1Gi
  selector:
    hdd-type: ssd
  deleteClaim: true
# ...
```
3.1.2.2.1. Storage class overrides
You can specify a different storage class for one or more Kafka brokers, instead of using the default storage class. This is useful if, for example, storage classes are restricted to different availability zones or data centers. You can use the `overrides` field for this purpose.

In this example, the default storage class is named `my-storage-class`:
Example AMQ Streams cluster using storage class overrides
```yaml
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  labels:
    app: my-cluster
  name: my-cluster
  namespace: myproject
spec:
  # ...
  kafka:
    replicas: 3
    storage:
      deleteClaim: true
      size: 100Gi
      type: persistent-claim
      class: my-storage-class
      overrides:
        - broker: 0
          class: my-storage-class-zone-1a
        - broker: 1
          class: my-storage-class-zone-1b
        - broker: 2
          class: my-storage-class-zone-1c
  # ...
```
As a result of the configured `overrides` property, the broker volumes use the following storage classes:

- The persistent volumes of broker 0 use `my-storage-class-zone-1a`.
- The persistent volumes of broker 1 use `my-storage-class-zone-1b`.
- The persistent volumes of broker 2 use `my-storage-class-zone-1c`.
The `overrides` property is currently used only to override storage class configurations. Overriding other storage configuration fields is not currently supported.
3.1.2.2.2. Persistent Volume Claim naming
When persistent storage is used, it creates Persistent Volume Claims with the following names:

- `data-<cluster-name>-kafka-<idx>`: Persistent Volume Claim for the volume used for storing data for the Kafka broker pod `<idx>`.
- `data-<cluster-name>-zookeeper-<idx>`: Persistent Volume Claim for the volume used for storing data for the Zookeeper node pod `<idx>`.
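For example, in a cluster named `my-cluster`, the claim for the first Kafka broker is `data-my-cluster-kafka-0`. One way to inspect the generated claims is to list them by the cluster label; the `strimzi.io/cluster` label shown here is the one AMQ Streams applies to the resources it manages, so verify the labels in your environment:

```shell
oc get pvc -l strimzi.io/cluster=my-cluster
```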
3.1.2.2.3. Log directories
The persistent volume is used by the Kafka brokers as log directories mounted into the following path: `/var/lib/kafka/data/kafka-log<idx>`

Where `<idx>` is the Kafka broker pod index. For example, `/var/lib/kafka/data/kafka-log0`.
3.1.2.3. Resizing persistent volumes
You can provision increased storage capacity by increasing the size of the persistent volumes used by an existing AMQ Streams cluster. Resizing persistent volumes is supported in clusters that use either a single persistent volume or multiple persistent volumes in a JBOD storage configuration.
You can increase but not decrease the size of persistent volumes. Decreasing the size of persistent volumes is not currently supported in OpenShift.
Prerequisites
- An OpenShift cluster with support for volume resizing.
- The Cluster Operator is running.
- A Kafka cluster using persistent volumes created using a storage class that supports volume expansion.
Procedure
1. In a `Kafka` resource, increase the size of the persistent volume allocated to the Kafka cluster, the Zookeeper cluster, or both.

   - To increase the volume size allocated to the Kafka cluster, edit the `spec.kafka.storage` property.
   - To increase the volume size allocated to the Zookeeper cluster, edit the `spec.zookeeper.storage` property.

   For example, to increase the volume size from `1000Gi` to `2000Gi`:

   ```yaml
   apiVersion: kafka.strimzi.io/v1beta1
   kind: Kafka
   metadata:
     name: my-cluster
   spec:
     kafka:
       # ...
       storage:
         type: persistent-claim
         size: 2000Gi
         class: my-storage-class
       # ...
     zookeeper:
       # ...
   ```

2. Create or update the resource.

   On OpenShift, use `oc apply`:

   ```shell
   oc apply -f your-file
   ```
OpenShift increases the capacity of the selected persistent volumes in response to a request from the Cluster Operator. When the resizing is complete, the Cluster Operator restarts all pods that use the resized persistent volumes. This happens automatically.
Additional resources
For more information about resizing persistent volumes in OpenShift, see Resizing Persistent Volumes using Kubernetes.
3.1.2.4. JBOD storage overview
You can configure AMQ Streams to use JBOD, a data storage configuration of multiple disks or volumes. JBOD is one approach to providing increased data storage for Kafka brokers. It can also improve performance.
A JBOD configuration is described by one or more volumes, each of which can be either ephemeral or persistent. The rules and constraints for JBOD volume declarations are the same as those for ephemeral and persistent storage. For example, you cannot change the size of a persistent storage volume after it has been provisioned.
3.1.2.4.1. JBOD configuration
To use JBOD with AMQ Streams, the storage `type` must be set to `jbod`. The `volumes` property allows you to describe the disks that make up your JBOD storage array or configuration. The following fragment shows an example JBOD configuration:
```yaml
# ...
storage:
  type: jbod
  volumes:
  - id: 0
    type: persistent-claim
    size: 100Gi
    deleteClaim: false
  - id: 1
    type: persistent-claim
    size: 100Gi
    deleteClaim: false
# ...
```
The IDs cannot be changed once the JBOD volumes are created, but users can add volumes to or remove volumes from the JBOD configuration.
3.1.2.4.2. JBOD and Persistent Volume Claims
When persistent storage is used to declare JBOD volumes, the naming scheme of the resulting Persistent Volume Claims is as follows:
- `data-<id>-<cluster-name>-kafka-<idx>`: Persistent Volume Claim for the volume with ID `<id>` used for storing data for the Kafka broker pod `<idx>`. For example, `data-0-my-cluster-kafka-0`.
3.1.2.4.3. Log directories
The JBOD volumes are used by the Kafka brokers as log directories mounted into the following path: `/var/lib/kafka/data-<id>/kafka-log<idx>`

Where `<id>` is the ID of the volume used for storing data for Kafka broker pod `<idx>`. For example, `/var/lib/kafka/data-0/kafka-log0`.
3.1.2.5. Adding volumes to JBOD storage
This procedure describes how to add volumes to a Kafka cluster configured to use JBOD storage. It cannot be applied to Kafka clusters configured to use any other storage type.
When adding a new volume under an `id` which was already used in the past and removed, you have to make sure that the previously used `PersistentVolumeClaims` have been deleted.
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
- A Kafka cluster with JBOD storage
Procedure
1. Edit the `spec.kafka.storage.volumes` property in the `Kafka` resource. Add the new volumes to the `volumes` array. For example, add the new volume with id `2`:

   ```yaml
   apiVersion: kafka.strimzi.io/v1beta1
   kind: Kafka
   metadata:
     name: my-cluster
   spec:
     kafka:
       # ...
       storage:
         type: jbod
         volumes:
         - id: 0
           type: persistent-claim
           size: 100Gi
           deleteClaim: false
         - id: 1
           type: persistent-claim
           size: 100Gi
           deleteClaim: false
         - id: 2
           type: persistent-claim
           size: 100Gi
           deleteClaim: false
       # ...
     zookeeper:
       # ...
   ```

2. Create or update the resource.

   On OpenShift this can be done using `oc apply`:

   ```shell
   oc apply -f your-file
   ```

3. Create new topics or reassign existing partitions to the new disks.
Additional resources
For more information about reassigning topics, see Section 3.1.22.2, “Partition reassignment”.
3.1.2.6. Removing volumes from JBOD storage
This procedure describes how to remove volumes from a Kafka cluster configured to use JBOD storage. It cannot be applied to Kafka clusters configured to use any other storage type. The JBOD storage always has to contain at least one volume.
To avoid data loss, you have to move all partitions before removing the volumes.
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
- A Kafka cluster with JBOD storage with two or more volumes
Procedure
1. Reassign all partitions from the disks which you are going to remove. Any data in partitions still assigned to the disks which are going to be removed might be lost.

2. Edit the `spec.kafka.storage.volumes` property in the `Kafka` resource. Remove one or more volumes from the `volumes` array. For example, remove the volumes with ids `1` and `2`:

   ```yaml
   apiVersion: kafka.strimzi.io/v1beta1
   kind: Kafka
   metadata:
     name: my-cluster
   spec:
     kafka:
       # ...
       storage:
         type: jbod
         volumes:
         - id: 0
           type: persistent-claim
           size: 100Gi
           deleteClaim: false
       # ...
     zookeeper:
       # ...
   ```

3. Create or update the resource.

   On OpenShift this can be done using `oc apply`:

   ```shell
   oc apply -f your-file
   ```
Additional resources
For more information about reassigning topics, see Section 3.1.22.2, “Partition reassignment”.
Additional resources
- For more information about ephemeral storage, see ephemeral storage schema reference.
- For more information about persistent storage, see persistent storage schema reference.
- For more information about JBOD storage, see JBOD schema reference.
- For more information about the schema for `Kafka`, see `Kafka` schema reference.
3.1.3. Kafka broker replicas
A Kafka cluster can run with many brokers. You can configure the number of brokers used for the Kafka cluster in `Kafka.spec.kafka.replicas`. The best number of brokers for your cluster has to be determined based on your specific use case.
3.1.3.1. Configuring the number of broker nodes
This procedure describes how to configure the number of Kafka broker nodes in a new cluster. It only applies to new clusters with no partitions. If your cluster already has topics defined, see Section 3.1.22, “Scaling clusters”.
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
- A Kafka cluster with no topics defined yet
Procedure
1. Edit the `replicas` property in the `Kafka` resource. For example:

   ```yaml
   apiVersion: kafka.strimzi.io/v1beta1
   kind: Kafka
   metadata:
     name: my-cluster
   spec:
     kafka:
       # ...
       replicas: 3
       # ...
     zookeeper:
       # ...
   ```

2. Create or update the resource.

   On OpenShift this can be done using `oc apply`:

   ```shell
   oc apply -f your-file
   ```
Additional resources
If your cluster already has topics defined, see Section 3.1.22, “Scaling clusters”.
3.1.4. Kafka broker configuration
AMQ Streams allows you to customize the configuration of the Kafka brokers in your Kafka cluster. You can specify and configure most of the options listed in the "Broker Configs" section of the Apache Kafka documentation. You cannot configure options that are related to the following areas:
- Security (Encryption, Authentication, and Authorization)
- Listener configuration
- Broker ID configuration
- Configuration of log data directories
- Inter-broker communication
- Zookeeper connectivity
These options are automatically configured by AMQ Streams.
3.1.4.1. Kafka broker configuration
A Kafka broker can be configured using the `config` property in `Kafka.spec.kafka`.

This property should contain the Kafka broker configuration options as keys with values in one of the following JSON types:

- String
- Number
- Boolean

You can specify and configure all of the options in the "Broker Configs" section of the Apache Kafka documentation apart from those managed directly by AMQ Streams. Specifically, you are prevented from modifying all configuration options with keys equal to or starting with one of the following strings:

- `listeners`
- `advertised.`
- `broker.`
- `listener.`
- `host.name`
- `port`
- `inter.broker.listener.name`
- `sasl.`
- `ssl.`
- `security.`
- `password.`
- `principal.builder.class`
- `log.dir`
- `zookeeper.connect`
- `zookeeper.set.acl`
- `authorizer.`
- `super.user`
If the `config` property specifies a restricted option, it is ignored and a warning message is printed to the Cluster Operator log file. All other supported options are passed to Kafka.

The Cluster Operator does not validate keys or values in the provided `config` object. If invalid configuration is provided, the Kafka cluster might not start or might become unstable. In such cases, you must fix the configuration in the `Kafka.spec.kafka.config` object and the Cluster Operator will roll out the new configuration to all Kafka brokers.
An example Kafka broker configuration
```yaml
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    config:
      num.partitions: 1
      num.recovery.threads.per.data.dir: 1
      default.replication.factor: 3
      offsets.topic.replication.factor: 3
      transaction.state.log.replication.factor: 3
      transaction.state.log.min.isr: 1
      log.retention.hours: 168
      log.segment.bytes: 1073741824
      log.retention.check.interval.ms: 300000
      num.network.threads: 3
      num.io.threads: 8
      socket.send.buffer.bytes: 102400
      socket.receive.buffer.bytes: 102400
      socket.request.max.bytes: 104857600
      group.initial.rebalance.delay.ms: 0
    # ...
```
3.1.4.2. Configuring Kafka brokers
You can configure an existing Kafka broker, or create a new Kafka broker with a specified configuration.
Prerequisites
- An OpenShift cluster is available.
- The Cluster Operator is running.
Procedure
1. Open the YAML configuration file that contains the `Kafka` resource specifying the cluster deployment.

2. In the `spec.kafka.config` property in the `Kafka` resource, enter one or more Kafka configuration settings. For example:

   ```yaml
   apiVersion: kafka.strimzi.io/v1beta1
   kind: Kafka
   spec:
     kafka:
       # ...
       config:
         default.replication.factor: 3
         offsets.topic.replication.factor: 3
         transaction.state.log.replication.factor: 3
         transaction.state.log.min.isr: 1
       # ...
     zookeeper:
       # ...
   ```

3. Apply the new configuration to create or update the resource.

   On OpenShift, use `oc apply`:

   ```shell
   oc apply -f kafka.yaml
   ```

   where `kafka.yaml` is the YAML configuration file for the resource that you want to configure; for example, `kafka-persistent.yaml`.
3.1.5. Kafka broker listeners
AMQ Streams allows users to configure the listeners which will be enabled in Kafka brokers. Three types of listener are supported:
- Plain listener on port 9092 (without encryption)
- TLS listener on port 9093 (with encryption)
- External listener on port 9094 for access from outside of OpenShift
3.1.5.1. Mutual TLS authentication for clients
3.1.5.1.1. Mutual TLS authentication
Mutual TLS authentication is always used for the communication between Kafka brokers and Zookeeper pods. Mutual authentication, or two-way authentication, is when both the server and the client present certificates. AMQ Streams can configure Kafka to use TLS (Transport Layer Security) to provide encrypted communication between Kafka brokers and clients either with or without mutual authentication. When you configure mutual authentication, the broker authenticates the client and the client authenticates the broker.
TLS authentication is more commonly one-way, with one party authenticating the identity of another. For example, when HTTPS is used between a web browser and a web server, the server obtains proof of the identity of the browser.
3.1.5.1.2. When to use mutual TLS authentication for clients
Mutual TLS authentication is recommended for authenticating Kafka clients when:
- The client supports authentication using mutual TLS authentication
- It is necessary to use TLS certificates rather than passwords
- You can reconfigure and restart client applications periodically so that they do not use expired certificates
3.1.5.2. SCRAM-SHA authentication
SCRAM (Salted Challenge Response Authentication Mechanism) is an authentication protocol that can establish mutual authentication using passwords. AMQ Streams can configure Kafka to use SASL (Simple Authentication and Security Layer) SCRAM-SHA-512 to provide authentication on both unencrypted and TLS-encrypted client connections. TLS authentication is always used internally between Kafka brokers and Zookeeper nodes. When used with a TLS client connection, the TLS protocol provides encryption, but is not used for authentication.
The following properties of SCRAM make it safe to use SCRAM-SHA even on unencrypted connections:
- The passwords are not sent in the clear over the communication channel. Instead the client and the server are each challenged by the other to offer proof that they know the password of the authenticating user.
- The server and client each generate a new challenge for each authentication exchange. This means that the exchange is resilient against replay attacks.
3.1.5.2.1. Supported SCRAM credentials
AMQ Streams supports SCRAM-SHA-512 only. When a `KafkaUser.spec.authentication.type` is configured with `scram-sha-512`, the User Operator generates a random 12-character password consisting of upper and lowercase ASCII letters and numbers.
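For illustration, a minimal sketch of a `KafkaUser` resource requesting SCRAM-SHA-512 credentials might look like the following; the user and cluster names are placeholders:

```yaml
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaUser
metadata:
  name: my-user
  labels:
    strimzi.io/cluster: my-cluster  # ties the user to its Kafka cluster
spec:
  authentication:
    type: scram-sha-512             # the User Operator generates the password
```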
3.1.5.2.2. When to use SCRAM-SHA authentication for clients
SCRAM-SHA is recommended for authenticating Kafka clients when:
- The client supports authentication using SCRAM-SHA-512
- It is necessary to use passwords rather than TLS certificates
- Authentication for unencrypted communication is required
3.1.5.3. Kafka listeners
You can configure Kafka broker listeners using the `listeners` property in the `Kafka.spec.kafka` resource. The `listeners` property contains three sub-properties:

- `plain`
- `tls`
- `external`

When one of these properties is not defined, the corresponding listener is disabled.
An example of the `listeners` property with all listeners enabled

```yaml
# ...
listeners:
  plain: {}
  tls: {}
  external:
    type: loadbalancer
# ...
```
An example of the `listeners` property with only the plain listener enabled

```yaml
# ...
listeners:
  plain: {}
# ...
```
3.1.5.3.1. External listener
The external listener is used to connect to a Kafka cluster from outside of an OpenShift environment. AMQ Streams supports three types of external listeners:

- `route`
- `loadbalancer`
- `nodeport`
3.1.5.3.1.1. Exposing Kafka using OpenShift Routes
An external listener of type `route` exposes Kafka by using OpenShift `Routes` and the HAProxy router. A dedicated `Route` is created for every Kafka broker pod. An additional `Route` is created to serve as a Kafka bootstrap address. Kafka clients can use these `Routes` to connect to Kafka on port 443.
When exposing Kafka using OpenShift `Routes`, TLS encryption is always used.
By default, the route hosts are automatically assigned by OpenShift. However, you can override the assigned route hosts by specifying the requested hosts in the `overrides` property. AMQ Streams will not perform any validation that the requested hosts are available; you must ensure that they are free and can be used.
Example of an external listener of type `route` configured with overrides for OpenShift route hosts

```yaml
# ...
listeners:
  external:
    type: route
    authentication:
      type: tls
    overrides:
      bootstrap:
        host: bootstrap.myrouter.com
      brokers:
      - broker: 0
        host: broker-0.myrouter.com
      - broker: 1
        host: broker-1.myrouter.com
      - broker: 2
        host: broker-2.myrouter.com
# ...
```
For more information on using `Routes` to access Kafka, see Section 3.1.5.5, "Accessing Kafka using OpenShift routes".
3.1.5.3.1.2. Exposing Kafka using loadbalancers
External listeners of type `loadbalancer` expose Kafka by using `Loadbalancer` type `Services`. A new loadbalancer service is created for every Kafka broker pod. An additional loadbalancer is created to serve as a Kafka bootstrap address. Loadbalancers listen to connections on port 9094.

By default, TLS encryption is enabled. To disable it, set the `tls` field to `false`.
For more information on using loadbalancers to access Kafka, see Section 3.1.5.6, “Accessing Kafka using loadbalancers”.
3.1.5.3.1.3. Exposing Kafka using node ports
External listeners of type `nodeport` expose Kafka by using `NodePort` type `Services`. When exposing Kafka in this way, Kafka clients connect directly to the nodes of OpenShift. You must enable access to the ports on the OpenShift nodes for each client (for example, in firewalls or security groups). Each Kafka broker pod is then accessible on a separate port. An additional `NodePort` type `Service` is created to serve as a Kafka bootstrap address.
When configuring the advertised addresses for the Kafka broker pods, AMQ Streams uses the address of the node on which the given pod is running. When selecting the node address, the different address types are used with the following priority:
- ExternalDNS
- ExternalIP
- Hostname
- InternalDNS
- InternalIP
By default, TLS encryption is enabled. To disable it, set the `tls` field to `false`.
TLS hostname verification is not currently supported when exposing Kafka clusters using node ports.
By default, the port numbers used for the bootstrap and broker services are automatically assigned by OpenShift. However, you can override the assigned node ports by specifying the requested port numbers in the `overrides` property. AMQ Streams does not perform any validation on the requested ports; you must ensure that they are free and available for use.
Example of an external listener configured with overrides for node ports
```yaml
# ...
listeners:
  external:
    type: nodeport
    tls: true
    authentication:
      type: tls
    overrides:
      bootstrap:
        nodePort: 32100
      brokers:
      - broker: 0
        nodePort: 32000
      - broker: 1
        nodePort: 32001
      - broker: 2
        nodePort: 32002
# ...
```
For more information on using node ports to access Kafka, see Section 3.1.5.7, “Accessing Kafka using node ports”.
3.1.5.3.1.4. Customizing advertised addresses on external listeners
By default, AMQ Streams tries to automatically determine the hostnames and ports that your Kafka cluster advertises to its clients. This is not sufficient in all situations, because the infrastructure on which AMQ Streams is running might not provide the right hostname or port through which Kafka can be accessed. You can customize the advertised hostname and port in the `overrides` property of the external listener. AMQ Streams will then automatically configure the advertised address in the Kafka brokers and add it to the broker certificates so it can be used for TLS hostname verification. Overriding the advertised host and ports is available for all types of external listeners.
Example of an external listener configured with overrides for advertised addresses
```yaml
# ...
listeners:
  external:
    type: route
    authentication:
      type: tls
    overrides:
      brokers:
      - broker: 0
        advertisedHost: example.hostname.0
        advertisedPort: 12340
      - broker: 1
        advertisedHost: example.hostname.1
        advertisedPort: 12341
      - broker: 2
        advertisedHost: example.hostname.2
        advertisedPort: 12342
# ...
```
Additionally, you can specify the name of the bootstrap service. This name will be added to the broker certificates and can be used for TLS hostname verification. Adding the additional bootstrap address is available for all types of external listeners.
Example of an external listener configured with an additional bootstrap address
```yaml
# ...
listeners:
  external:
    type: route
    authentication:
      type: tls
    overrides:
      bootstrap:
        address: example.hostname
# ...
```
3.1.5.3.1.5. Customizing DNS names of external listeners
On `loadbalancer` listeners, you can use the `dnsAnnotations` property to add additional annotations to the load balancer services. You can use these annotations to instrument DNS tooling such as External DNS, which automatically assigns DNS names to the services.

Example of an external listener of type `loadbalancer` using External DNS annotations
```yaml
# ...
listeners:
  external:
    type: loadbalancer
    authentication:
      type: tls
    overrides:
      bootstrap:
        dnsAnnotations:
          external-dns.alpha.kubernetes.io/hostname: kafka-bootstrap.mydomain.com.
          external-dns.alpha.kubernetes.io/ttl: "60"
      brokers:
      - broker: 0
        dnsAnnotations:
          external-dns.alpha.kubernetes.io/hostname: kafka-broker-0.mydomain.com.
          external-dns.alpha.kubernetes.io/ttl: "60"
      - broker: 1
        dnsAnnotations:
          external-dns.alpha.kubernetes.io/hostname: kafka-broker-1.mydomain.com.
          external-dns.alpha.kubernetes.io/ttl: "60"
      - broker: 2
        dnsAnnotations:
          external-dns.alpha.kubernetes.io/hostname: kafka-broker-2.mydomain.com.
          external-dns.alpha.kubernetes.io/ttl: "60"
# ...
```
3.1.5.3.2. Listener authentication
The listener sub-properties can also contain additional configuration. Each listener supports the `authentication` property, which is used to specify an authentication mechanism specific to that listener:

- mutual TLS authentication (only on the listeners with TLS encryption)
- SCRAM-SHA authentication

If no `authentication` property is specified, the listener does not authenticate clients which connect through that listener.
An example where the plain listener is configured for SCRAM-SHA authentication and the `tls` listener for mutual TLS authentication

```yaml
# ...
listeners:
  plain:
    authentication:
      type: scram-sha-512
  tls:
    authentication:
      type: tls
  external:
    type: loadbalancer
    tls: true
    authentication:
      type: tls
# ...
```
Authentication must be configured when using the User Operator to manage `KafkaUsers`.
3.1.5.3.3. Network policies
AMQ Streams automatically creates a `NetworkPolicy` resource for every listener that is enabled on a Kafka broker. By default, a `NetworkPolicy` grants access to a listener to all applications and namespaces. If you want to restrict access to a listener to only selected applications or namespaces, use the `networkPolicyPeers` field. Each listener can have a different `networkPolicyPeers` configuration.

The following example shows a `networkPolicyPeers` configuration for a `plain` and a `tls` listener:
```yaml
# ...
listeners:
  plain:
    authentication:
      type: scram-sha-512
    networkPolicyPeers:
      - podSelector:
          matchLabels:
            app: kafka-sasl-consumer
      - podSelector:
          matchLabels:
            app: kafka-sasl-producer
  tls:
    authentication:
      type: tls
    networkPolicyPeers:
      - namespaceSelector:
          matchLabels:
            project: myproject
      - namespaceSelector:
          matchLabels:
            project: myproject2
# ...
```
In the above example:

- Only application pods matching the labels `app: kafka-sasl-consumer` and `app: kafka-sasl-producer` can connect to the `plain` listener. The application pods must be running in the same namespace as the Kafka broker.
- Only application pods running in namespaces matching the labels `project: myproject` and `project: myproject2` can connect to the `tls` listener.
The syntax of the `networkPolicyPeers` field is the same as the `from` field in the `NetworkPolicy` resource in Kubernetes. For more information about the schema, see NetworkPolicyPeer API reference and the `KafkaListeners` schema reference.
Your configuration of OpenShift must support Ingress NetworkPolicies in order to use network policies in AMQ Streams.
3.1.5.4. Configuring Kafka listeners
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
Procedure
1. Edit the `listeners` property in the `Kafka.spec.kafka` resource.

   An example configuration of the plain (unencrypted) listener without authentication:

   ```yaml
   apiVersion: kafka.strimzi.io/v1beta1
   kind: Kafka
   spec:
     kafka:
       # ...
       listeners:
         plain: {}
       # ...
     zookeeper:
       # ...
   ```

2. Create or update the resource.

   On OpenShift this can be done using `oc apply`:

   ```shell
   oc apply -f your-file
   ```
Additional resources
- For more information about the schema, see `KafkaListeners` schema reference.
3.1.5.5. Accessing Kafka using OpenShift routes
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
Procedure
1. Deploy the Kafka cluster with an external listener enabled and configured to the type `route`.

   An example configuration with an external listener configured to use `Routes`:

   ```yaml
   apiVersion: kafka.strimzi.io/v1beta1
   kind: Kafka
   spec:
     kafka:
       # ...
       listeners:
         external:
           type: route
           # ...
       # ...
     zookeeper:
       # ...
   ```

2. Create or update the resource.

   ```shell
   oc apply -f your-file
   ```

3. Find the address of the bootstrap `Route`.

   ```shell
   oc get routes cluster-name-kafka-bootstrap -o=jsonpath='{.status.ingress[0].host}{"\n"}'
   ```

   Use the address together with port 443 in your Kafka client as the bootstrap address.

4. Extract the public certificate of the broker certification authority.

   ```shell
   oc extract secret/cluster-name-cluster-ca-cert --keys=ca.crt --to=- > ca.crt
   ```

5. Use the extracted certificate in your Kafka client to configure the TLS connection, as shown in the sketch below. If you enabled any authentication, you will also need to configure SASL or TLS authentication.
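For example, a minimal sketch for a Java-based Kafka client: import `ca.crt` into a truststore and reference it from the client configuration. The truststore path, password, topic name, and route address are placeholders:

```shell
# Import the extracted CA certificate into a new truststore
keytool -importcert -trustcacerts -alias strimzi-ca -file ca.crt \
  -keystore truststore.jks -storepass changeit -noprompt

# Consume over TLS using the bootstrap Route address on port 443
bin/kafka-console-consumer.sh \
  --bootstrap-server <route-address>:443 \
  --topic my-topic \
  --consumer-property security.protocol=SSL \
  --consumer-property ssl.truststore.location=./truststore.jks \
  --consumer-property ssl.truststore.password=changeit
```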
Additional resources
- For more information about the schema, see `KafkaListeners` schema reference.
3.1.5.6. Accessing Kafka using loadbalancers
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
Procedure
1. Deploy the Kafka cluster with an external listener enabled and configured to the type `loadbalancer`.

   An example configuration with an external listener configured to use loadbalancers:

   ```yaml
   apiVersion: kafka.strimzi.io/v1beta1
   kind: Kafka
   spec:
     kafka:
       # ...
       listeners:
         external:
           type: loadbalancer
           tls: true
           # ...
       # ...
     zookeeper:
       # ...
   ```

2. Create or update the resource.

   On OpenShift this can be done using `oc apply`:

   ```shell
   oc apply -f your-file
   ```

3. Find the hostname of the bootstrap loadbalancer.

   On OpenShift this can be done using `oc get`:

   ```shell
   oc get service cluster-name-kafka-external-bootstrap -o=jsonpath='{.status.loadBalancer.ingress[0].hostname}{"\n"}'
   ```

   If no hostname was found (nothing was returned by the command), use the loadbalancer IP address.

   On OpenShift this can be done using `oc get`:

   ```shell
   oc get service cluster-name-kafka-external-bootstrap -o=jsonpath='{.status.loadBalancer.ingress[0].ip}{"\n"}'
   ```

   Use the hostname or IP address together with port 9094 in your Kafka client as the bootstrap address.

4. Unless TLS encryption was disabled, extract the public certificate of the broker certification authority.

   On OpenShift this can be done using `oc extract`:

   ```shell
   oc extract secret/cluster-name-cluster-ca-cert --keys=ca.crt --to=- > ca.crt
   ```

5. Use the extracted certificate in your Kafka client to configure the TLS connection. If you enabled any authentication, you will also need to configure SASL or TLS authentication.
Additional resources
- For more information about the schema, see `KafkaListeners` schema reference.
3.1.5.7. Accessing Kafka using node ports
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
Procedure
1. Deploy the Kafka cluster with an external listener enabled and configured to the type `nodeport`.

   An example configuration with an external listener configured to use node ports:

   ```yaml
   apiVersion: kafka.strimzi.io/v1beta1
   kind: Kafka
   spec:
     kafka:
       # ...
       listeners:
         external:
           type: nodeport
           tls: true
           # ...
       # ...
     zookeeper:
       # ...
   ```

2. Create or update the resource.

   On OpenShift this can be done using `oc apply`:

   ```shell
   oc apply -f your-file
   ```

3. Find the port number of the bootstrap service.

   On OpenShift this can be done using `oc get`:

   ```shell
   oc get service cluster-name-kafka-external-bootstrap -o=jsonpath='{.spec.ports[0].nodePort}{"\n"}'
   ```

   The port should be used in the Kafka bootstrap address.

4. Find the address of the OpenShift node.

   On OpenShift this can be done using `oc get`:

   ```shell
   oc get node node-name -o=jsonpath='{range .status.addresses[*]}{.type}{"\t"}{.address}{"\n"}'
   ```

   If several different addresses are returned, select the address type you want based on the following order:

   - ExternalDNS
   - ExternalIP
   - Hostname
   - InternalDNS
   - InternalIP

   Use the address with the port found in the previous step in the Kafka bootstrap address.

5. Unless TLS encryption was disabled, extract the public certificate of the broker certification authority.

   On OpenShift this can be done using `oc extract`:

   ```shell
   oc extract secret/cluster-name-cluster-ca-cert --keys=ca.crt --to=- > ca.crt
   ```

6. Use the extracted certificate in your Kafka client to configure the TLS connection. If you enabled any authentication, you will also need to configure SASL or TLS authentication.
Additional resources
- For more information about the schema, see `KafkaListeners` schema reference.
3.1.5.8. Restricting access to Kafka listeners using networkPolicyPeers
You can restrict access to a listener to only selected applications by using the `networkPolicyPeers` field.
Prerequisites
- An OpenShift cluster with support for Ingress NetworkPolicies.
- The Cluster Operator is running.
Procedure
1. Open the `Kafka` resource.

2. In the `networkPolicyPeers` field, define the application pods or namespaces that will be allowed to access the Kafka cluster.

   For example, to configure a `tls` listener to allow connections only from application pods with the label `app` set to `kafka-client`:

   ```yaml
   apiVersion: kafka.strimzi.io/v1beta1
   kind: Kafka
   spec:
     kafka:
       # ...
       listeners:
         tls:
           networkPolicyPeers:
             - podSelector:
                 matchLabels:
                   app: kafka-client
       # ...
     zookeeper:
       # ...
   ```

3. Create or update the resource.

   On OpenShift use `oc apply`:

   ```shell
   oc apply -f your-file
   ```
Additional resources
- For more information about the schema, see NetworkPolicyPeer API reference and the `KafkaListeners` schema reference.
3.1.6. Authentication and Authorization
AMQ Streams supports authentication and authorization. Authentication can be configured independently for each listener. Authorization is always configured for the whole Kafka cluster.
3.1.6.1. Authentication
Authentication is configured as part of the listener configuration in the `authentication` property. The authentication mechanism is defined by the `type` field.

When the `authentication` property is missing, no authentication is enabled on a given listener. The listener will accept all connections without authentication.
Supported authentication mechanisms:
- TLS client authentication
- SASL SCRAM-SHA-512
3.1.6.1.1. TLS client authentication
TLS client authentication is enabled by specifying the `type` as `tls`. TLS client authentication is supported only on the `tls` listener.

An example of `authentication` with type `tls`

```yaml
# ...
authentication:
  type: tls
# ...
```
3.1.6.2. Configuring authentication in Kafka brokers
Prerequisites
- An OpenShift cluster is available.
- The Cluster Operator is running.
Procedure
1. Open the YAML configuration file that contains the `Kafka` resource specifying the cluster deployment.

2. In the `spec.kafka.listeners` property in the `Kafka` resource, add the `authentication` field to the listeners for which you want to enable authentication. For example:

   ```yaml
   apiVersion: kafka.strimzi.io/v1beta1
   kind: Kafka
   spec:
     kafka:
       # ...
       listeners:
         tls:
           authentication:
             type: tls
       # ...
     zookeeper:
       # ...
   ```

3. Apply the new configuration to create or update the resource.

   On OpenShift, use `oc apply`:

   ```shell
   oc apply -f kafka.yaml
   ```

   where `kafka.yaml` is the YAML configuration file for the resource that you want to configure; for example, `kafka-persistent.yaml`.
Additional resources
- For more information about the supported authentication mechanisms, see authentication reference.
- For more information about the schema for `Kafka`, see `Kafka` schema reference.
3.1.6.3. Authorization
Authorization can be configured using the `authorization` property in the `Kafka.spec.kafka` resource. When the `authorization` property is missing, no authorization is enabled. When authorization is enabled, it is applied to all enabled listeners. The authorization method is defined by the `type` field.

Currently, the only supported authorization method is simple authorization.
3.1.6.3.1. Simple authorization
Simple authorization uses the `SimpleAclAuthorizer` plugin. `SimpleAclAuthorizer` is the default authorization plugin which is part of Apache Kafka. To enable simple authorization, the `type` field must be set to `simple`.

An example of simple authorization

```yaml
# ...
authorization:
  type: simple
# ...
```
3.1.6.4. Configuring authorization in Kafka brokers
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
Procedure
1. Add or edit the `authorization` property in the `Kafka.spec.kafka` resource. For example:

   ```yaml
   apiVersion: kafka.strimzi.io/v1beta1
   kind: Kafka
   spec:
     kafka:
       # ...
       authorization:
         type: simple
       # ...
     zookeeper:
       # ...
   ```

2. Create or update the resource.

   On OpenShift this can be done using `oc apply`:

   ```shell
   oc apply -f your-file
   ```
Additional resources
- For more information about the supported authorization methods, see authorization reference.
- For more information about the schema for `Kafka`, see `Kafka` schema reference.
3.1.7. Zookeeper replicas
Zookeeper clusters or ensembles usually run with an odd number of nodes, typically three, five, or seven.
The majority of nodes must be available in order to maintain an effective quorum. If the Zookeeper cluster loses its quorum, it will stop responding to clients and the Kafka brokers will stop working. Having a stable and highly available Zookeeper cluster is crucial for AMQ Streams.
- Three-node cluster
- A three-node Zookeeper cluster requires at least two nodes to be up and running in order to maintain the quorum. It can tolerate only one node being unavailable.
- Five-node cluster
- A five-node Zookeeper cluster requires at least three nodes to be up and running in order to maintain the quorum. It can tolerate two nodes being unavailable.
- Seven-node cluster
- A seven-node Zookeeper cluster requires at least four nodes to be up and running in order to maintain the quorum. It can tolerate three nodes being unavailable.
For development purposes, it is also possible to run Zookeeper with a single node.
Having more nodes does not necessarily mean better performance, because the costs to maintain the quorum rise with the number of nodes in the cluster. Depending on your availability requirements, you can decide on the number of nodes to use.
3.1.7.1. Number of Zookeeper nodes
The number of Zookeeper nodes can be configured using the `replicas` property in `Kafka.spec.zookeeper`.

An example showing replicas configuration

```yaml
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
  zookeeper:
    # ...
    replicas: 3
    # ...
```
3.1.7.2. Changing the number of Zookeeper replicas
Prerequisites
- An OpenShift cluster is available.
- The Cluster Operator is running.
Procedure
1. Open the YAML configuration file that contains the `Kafka` resource specifying the cluster deployment.

2. In the `spec.zookeeper.replicas` property in the `Kafka` resource, enter the number of replicated Zookeeper servers. For example:

   ```yaml
   apiVersion: kafka.strimzi.io/v1beta1
   kind: Kafka
   metadata:
     name: my-cluster
   spec:
     kafka:
       # ...
     zookeeper:
       # ...
       replicas: 3
       # ...
   ```

3. Apply the new configuration to create or update the resource.

   On OpenShift, use `oc apply`:

   ```shell
   oc apply -f kafka.yaml
   ```

   where `kafka.yaml` is the YAML configuration file for the resource that you want to configure; for example, `kafka-persistent.yaml`.
3.1.8. Zookeeper configuration
AMQ Streams allows you to customize the configuration of Apache Zookeeper nodes. You can specify and configure most of the options listed in the Zookeeper documentation.
Options which cannot be configured are those related to the following areas:
- Security (Encryption, Authentication, and Authorization)
- Listener configuration
- Configuration of data directories
- Zookeeper cluster composition
These options are automatically configured by AMQ Streams.
3.1.8.1. Zookeeper configuration
Zookeeper nodes are configured using the `config` property in `Kafka.spec.zookeeper`. This property contains the Zookeeper configuration options as keys. The values can be described using one of the following JSON types:

- String
- Number
- Boolean

Users can specify and configure the options listed in the Zookeeper documentation with the exception of those options which are managed directly by AMQ Streams. Specifically, all configuration options with keys equal to or starting with one of the following strings are forbidden:

- `server.`
- `dataDir`
- `dataLogDir`
- `clientPort`
- `authProvider`
- `quorum.auth`
- `requireClientAuthScheme`
When one of the forbidden options is present in the `config` property, it is ignored and a warning message is printed to the Cluster Operator log file. All other options are passed to Zookeeper.

The Cluster Operator does not validate keys or values in the provided `config` object. When invalid configuration is provided, the Zookeeper cluster might not start or might become unstable. In such cases, the configuration in the `Kafka.spec.zookeeper.config` object should be fixed and the Cluster Operator will roll out the new configuration to all Zookeeper nodes.
Selected options have default values:

- `timeTick` with default value `2000`
- `initLimit` with default value `5`
- `syncLimit` with default value `2`
- `autopurge.purgeInterval` with default value `1`

These options are automatically configured when they are not present in the `Kafka.spec.zookeeper.config` property.

An example showing Zookeeper configuration

```yaml
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
spec:
  kafka:
    # ...
  zookeeper:
    # ...
    config:
      autopurge.snapRetainCount: 3
      autopurge.purgeInterval: 1
    # ...
```
3.1.8.2. Configuring Zookeeper
Prerequisites
- An OpenShift cluster is available.
- The Cluster Operator is running.
Procedure
1. Open the YAML configuration file that contains the `Kafka` resource specifying the cluster deployment.

2. In the `spec.zookeeper.config` property in the `Kafka` resource, enter one or more Zookeeper configuration settings. For example:

   ```yaml
   apiVersion: kafka.strimzi.io/v1beta1
   kind: Kafka
   spec:
     kafka:
       # ...
     zookeeper:
       # ...
       config:
         autopurge.snapRetainCount: 3
         autopurge.purgeInterval: 1
       # ...
   ```

3. Apply the new configuration to create or update the resource.

   On OpenShift, use `oc apply`:

   ```shell
   oc apply -f kafka.yaml
   ```

   where `kafka.yaml` is the YAML configuration file for the resource that you want to configure; for example, `kafka-persistent.yaml`.
3.1.9. Zookeeper connection
Zookeeper services are secured with encryption and authentication and are not intended to be used by external applications that are not part of AMQ Streams.
However, if you want to use Kafka CLI tools that require a connection to Zookeeper, such as the `kafka-topics` tool, you can use a terminal inside a Kafka container and connect to the local end of the TLS tunnel to Zookeeper by using `localhost:2181` as the Zookeeper address.
3.1.9.1. Connecting to Zookeeper from a terminal
Open a terminal inside a Kafka container to use Kafka CLI tools that require a Zookeeper connection.
Prerequisites
- An OpenShift cluster is available.
- A Kafka cluster is running.
- The Cluster Operator is running.
Procedure
1. Open the terminal using the OpenShift console or run the `exec` command from your CLI. For example:

   ```shell
   oc exec -ti my-cluster-kafka-0 -- bin/kafka-topics.sh --list --zookeeper localhost:2181
   ```

   Be sure to use `localhost:2181`.

2. You can now run Kafka commands against Zookeeper.
3.1.10. Entity Operator
The Entity Operator is responsible for managing different entities in a running Kafka cluster. The currently supported entities are:
- Kafka topics: managed by the Topic Operator
- Kafka users: managed by the User Operator
Both the Topic and User Operators can be deployed on their own, but the easiest way to deploy them is together with the Kafka cluster as part of the Entity Operator. The Entity Operator can include either one or both of them, depending on the configuration. They are automatically configured to manage the topics and users of the Kafka cluster with which they are deployed.
For more information about Topic Operator, see Section 4.2, “Topic Operator”. For more information about how to use Topic Operator to create or delete topics, see Chapter 5, Using the Topic Operator.
3.1.10.1. Configuration
The Entity Operator can be configured using the `entityOperator` property in `Kafka.spec`. The `entityOperator` property supports several sub-properties:

- `tlsSidecar`
- `topicOperator`
- `userOperator`
- `template`
The `tlsSidecar` property can be used to configure the TLS sidecar container which is used to communicate with Zookeeper. For more details about configuring the TLS sidecar, see Section 3.1.18, "TLS sidecar".

The `template` property can be used to configure details of the Entity Operator pod, such as labels, annotations, affinity, tolerations, and so on.

The `topicOperator` property contains the configuration of the Topic Operator. When this option is missing, the Entity Operator is deployed without the Topic Operator.

The `userOperator` property contains the configuration of the User Operator. When this option is missing, the Entity Operator is deployed without the User Operator.
Example of basic configuration enabling both operators
```yaml
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
  zookeeper:
    # ...
  entityOperator:
    topicOperator: {}
    userOperator: {}
```
When both `topicOperator` and `userOperator` properties are missing, the Entity Operator is not deployed.
3.1.10.1.1. Topic Operator
Topic Operator deployment can be configured using additional options inside the `topicOperator` object. The following options are supported:

- `watchedNamespace`: The OpenShift namespace in which the Topic Operator watches for `KafkaTopics`. Default is the namespace where the Kafka cluster is deployed.
- `reconciliationIntervalSeconds`: The interval between periodic reconciliations in seconds. Default `90`.
- `zookeeperSessionTimeoutSeconds`: The Zookeeper session timeout in seconds. Default `20`.
- `topicMetadataMaxAttempts`: The number of attempts at getting topic metadata from Kafka. The time between each attempt is defined as an exponential back-off. Consider increasing this value when topic creation could take more time due to the number of partitions or replicas. Default `6`.
- `image`: The `image` property can be used to configure the container image which will be used. For more details about configuring custom container images, see Section 3.1.17, "Container images".
- `resources`: The `resources` property configures the amount of resources allocated to the Topic Operator. For more details about resource request and limit configuration, see Section 3.1.11, "CPU and memory resources".
- `logging`: The `logging` property configures the logging of the Topic Operator. The Topic Operator has its own configurable logger: `rootLogger.level`.
Example of Topic Operator configuration
```yaml
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
  zookeeper:
    # ...
  entityOperator:
    # ...
    topicOperator:
      watchedNamespace: my-topic-namespace
      reconciliationIntervalSeconds: 60
    # ...
```
3.1.10.1.2. User Operator
User Operator deployment can be configured using additional options inside the `userOperator` object. The following options are supported:

- `watchedNamespace`: The OpenShift namespace in which the User Operator watches for `KafkaUsers`. Default is the namespace where the Kafka cluster is deployed.
- `reconciliationIntervalSeconds`: The interval between periodic reconciliations in seconds. Default `120`.
- `zookeeperSessionTimeoutSeconds`: The Zookeeper session timeout in seconds. Default `6`.
- `image`: The `image` property can be used to configure the container image which will be used. For more details about configuring custom container images, see Section 3.1.17, "Container images".
- `resources`: The `resources` property configures the amount of resources allocated to the User Operator. For more details about resource request and limit configuration, see Section 3.1.11, "CPU and memory resources".
- `logging`: The `logging` property configures the logging of the User Operator. The User Operator has its own configurable logger: `rootLogger.level`.

Example of User Operator configuration
```yaml
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
  zookeeper:
    # ...
  entityOperator:
    # ...
    userOperator:
      watchedNamespace: my-user-namespace
      reconciliationIntervalSeconds: 60
    # ...
```
3.1.10.2. Configuring Entity Operator
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
Procedure
1. Edit the `entityOperator` property in the `Kafka` resource. For example:

   ```yaml
   apiVersion: kafka.strimzi.io/v1beta1
   kind: Kafka
   metadata:
     name: my-cluster
   spec:
     kafka:
       # ...
     zookeeper:
       # ...
     entityOperator:
       topicOperator:
         watchedNamespace: my-topic-namespace
         reconciliationIntervalSeconds: 60
       userOperator:
         watchedNamespace: my-user-namespace
         reconciliationIntervalSeconds: 60
   ```

2. Create or update the resource.

   On OpenShift this can be done using `oc apply`:

   ```shell
   oc apply -f your-file
   ```
3.1.11. CPU and memory resources
For every deployed container, AMQ Streams allows you to request specific resources and define the maximum consumption of those resources.
AMQ Streams supports two types of resources:
- CPU
- Memory
AMQ Streams uses the OpenShift syntax for specifying CPU and memory resources.
3.1.11.1. Resource limits and requests
Resource limits and requests are configured using the `resources` property in the following resources:

- `Kafka.spec.kafka`
- `Kafka.spec.kafka.tlsSidecar`
- `Kafka.spec.zookeeper`
- `Kafka.spec.zookeeper.tlsSidecar`
- `Kafka.spec.entityOperator.topicOperator`
- `Kafka.spec.entityOperator.userOperator`
- `Kafka.spec.entityOperator.tlsSidecar`
- `KafkaConnect.spec`
- `KafkaConnectS2I.spec`
- `KafkaBridge.spec`
Additional resources
- For more information about managing computing resources on OpenShift, see Managing Compute Resources for Containers.
3.1.11.1.1. Resource requests
Requests specify the resources to reserve for a given container. Reserving the resources ensures that they are always available.
If the resource request is for more than the available free resources in the OpenShift cluster, the pod is not scheduled.
Resource requests are specified in the `requests` property. Resource requests currently supported by AMQ Streams:

- `cpu`
- `memory`

A request may be configured for one or more supported resources.

Example resource request configuration with all resources

```yaml
# ...
resources:
  requests:
    cpu: 12
    memory: 64Gi
# ...
```
3.1.11.1.2. Resource limits
Limits specify the maximum resources that can be consumed by a given container. The limit is not reserved and might not always be available. A container can use the resources up to the limit only when they are available. Resource limits should always be higher than the resource requests.
Resource limits are specified in the `limits` property. Resource limits currently supported by AMQ Streams:

- `cpu`
- `memory`

A limit may be configured for one or more supported resources.

Example resource limits configuration

```yaml
# ...
resources:
  limits:
    cpu: 12
    memory: 64Gi
# ...
```
3.1.11.1.3. Supported CPU formats
CPU requests and limits are supported in the following formats:
- Number of CPU cores as integer (`5` CPU cores) or decimal (`2.5` CPU cores).
- Number of millicpus / millicores (`100m`), where 1000 millicores is the same as `1` CPU core.

Example CPU units

```yaml
# ...
resources:
  requests:
    cpu: 500m
  limits:
    cpu: 2.5
# ...
```
The computing power of 1 CPU core may differ depending on the platform where OpenShift is deployed.
Additional resources
- For more information on CPU specification, see the Meaning of CPU.
3.1.11.1.4. Supported memory formats
Memory requests and limits are specified in megabytes, gigabytes, mebibytes, and gibibytes.
- To specify memory in megabytes, use the `M` suffix. For example, `1000M`.
- To specify memory in gigabytes, use the `G` suffix. For example, `1G`.
- To specify memory in mebibytes, use the `Mi` suffix. For example, `1000Mi`.
- To specify memory in gibibytes, use the `Gi` suffix. For example, `1Gi`.

An example of using different memory units

```yaml
# ...
resources:
  requests:
    memory: 512Mi
  limits:
    memory: 2Gi
# ...
```
Additional resources
- For more details about memory specification and additional supported units, see Meaning of memory.
3.1.11.2. Configuring resource requests and limits
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
Procedure
1. Edit the `resources` property in the resource specifying the cluster deployment. For example:

   ```yaml
   apiVersion: kafka.strimzi.io/v1beta1
   kind: Kafka
   spec:
     kafka:
       # ...
       resources:
         requests:
           cpu: "8"
           memory: 64Gi
         limits:
           cpu: "12"
           memory: 128Gi
       # ...
     zookeeper:
       # ...
   ```

2. Create or update the resource.

   On OpenShift this can be done using `oc apply`:

   ```shell
   oc apply -f your-file
   ```
Additional resources
- For more information about the schema, see `Resources` schema reference.
3.1.12. Logging
This section provides information on loggers and how to configure log levels.
You can set the log levels by specifying the loggers and their levels directly (inline) or use a custom (external) config map.
3.1.12.1. Kafka loggers
Kafka has its own configurable loggers:
- kafka.root.logger.level
- log4j.logger.org.I0Itec.zkclient.ZkClient
- log4j.logger.org.apache.zookeeper
- log4j.logger.kafka
- log4j.logger.org.apache.kafka
- log4j.logger.kafka.request.logger
- log4j.logger.kafka.network.Processor
- log4j.logger.kafka.server.KafkaApis
- log4j.logger.kafka.network.RequestChannel$
- log4j.logger.kafka.controller
- log4j.logger.kafka.log.LogCleaner
- log4j.logger.state.change.logger
- log4j.logger.kafka.authorizer.logger
Zookeeper has its own configurable logger:
- zookeeper.root.logger
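For example, using the inline logging mechanism described in the next section, any of the loggers listed above can be set to a more verbose level. The following is a minimal sketch only; the logger and level shown are illustrative:
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    logging:
      type: inline
      loggers:
        # root logger stays at INFO; the controller logger is raised for troubleshooting
        kafka.root.logger.level: "INFO"
        log4j.logger.kafka.controller: "DEBUG"
    # ...
  zookeeper:
    # ...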
3.1.12.2. Specifying inline logging
Procedure
Edit the YAML file to specify the loggers and logging level for the required components.
For example, the logging level here is set to INFO:
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
spec:
  kafka:
    # ...
    logging:
      type: inline
      loggers:
        logger.name: "INFO"
    # ...
  zookeeper:
    # ...
    logging:
      type: inline
      loggers:
        logger.name: "INFO"
    # ...
  entityOperator:
    # ...
    topicOperator:
      # ...
      logging:
        type: inline
        loggers:
          logger.name: "INFO"
      # ...
    # ...
    userOperator:
      # ...
      logging:
        type: inline
        loggers:
          logger.name: "INFO"
      # ...
You can set the log level to INFO, ERROR, WARN, TRACE, DEBUG, FATAL or OFF.
For more information about the log levels, see the log4j manual.
Create or update the Kafka resource in OpenShift.
On OpenShift this can be done using oc apply:
oc apply -f your-file
3.1.12.3. Specifying an external ConfigMap for logging
Procedure
Edit the YAML file to specify the name of the ConfigMap to use for the required components. For example:
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
spec:
  kafka:
    # ...
    logging:
      type: external
      name: customConfigMap
    # ...
Remember to place your custom ConfigMap under the log4j.properties or log4j2.properties key.
Create or update the Kafka resource in OpenShift.
On OpenShift this can be done using oc apply:
oc apply -f your-file
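The referenced ConfigMap must hold the Log4j configuration under the log4j.properties (or log4j2.properties) key. A minimal sketch of such a ConfigMap follows; the Log4j content shown is illustrative, not a complete Kafka logging configuration:
apiVersion: v1
kind: ConfigMap
metadata:
  name: customConfigMap
data:
  log4j.properties: |
    # illustrative Log4j configuration
    log4j.rootLogger=INFO, CONSOLE
    log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender
    log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout
    log4j.appender.CONSOLE.layout.ConversionPattern=%d{ISO8601} %p %m (%c) [%t]%n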
Garbage collector (GC) logging can also be enabled (or disabled). For more information on GC logging, see Section 3.1.16.1, “JVM configuration”.
3.1.13. Kafka rack awareness
The rack awareness feature in AMQ Streams helps to spread the Kafka broker pods and Kafka topic replicas across different racks. Enabling rack awareness helps to improve availability of Kafka brokers and the topics they are hosting.
"Rack" might represent an availability zone, data center, or an actual rack in your data center.
3.1.13.1. Configuring rack awareness in Kafka brokers
Kafka rack awareness can be configured in the rack property of Kafka.spec.kafka. The rack object has one mandatory field named topologyKey. This key needs to match one of the labels assigned to the OpenShift cluster nodes. The label is used by OpenShift when scheduling the Kafka broker pods to nodes. If the OpenShift cluster is running on a cloud provider platform, that label should represent the availability zone where the node is running. Usually, the nodes are labeled with failure-domain.beta.kubernetes.io/zone, which can be easily used as the topologyKey value. This has the effect of spreading the broker pods across zones, and also setting the brokers' broker.rack configuration parameter inside the Kafka brokers.
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
Procedure
- Consult your OpenShift administrator regarding the node label that represents the zone / rack into which the node is deployed.
Edit the rack property in the Kafka resource using the label as the topology key.
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    rack:
      topologyKey: failure-domain.beta.kubernetes.io/zone
    # ...
Create or update the resource.
On OpenShift this can be done using oc apply:
oc apply -f your-file
Additional resources
- For information about configuring the init container image used for Kafka rack awareness, see Section 3.1.17, “Container images”.
3.1.14. Healthchecks
Healthchecks are periodic tests which verify the health of an application. When a Healthcheck probe fails, OpenShift assumes that the application is not healthy and attempts to fix it.
OpenShift supports two types of Healthcheck probes:
- Liveness probes
- Readiness probes
For more details about the probes, see Configure Liveness and Readiness Probes. Both types of probes are used in AMQ Streams components.
Users can configure selected options for liveness and readiness probes.
3.1.14.1. Healthcheck configurations
Liveness and readiness probes can be configured using the livenessProbe and readinessProbe properties in the following resources:
- Kafka.spec.kafka
- Kafka.spec.kafka.tlsSidecar
- Kafka.spec.zookeeper
- Kafka.spec.zookeeper.tlsSidecar
- Kafka.spec.entityOperator.tlsSidecar
- Kafka.spec.entityOperator.topicOperator
- Kafka.spec.entityOperator.userOperator
- KafkaConnect.spec
- KafkaConnectS2I.spec
- KafkaBridge.spec
Both livenessProbe and readinessProbe support two additional options:
- initialDelaySeconds
- timeoutSeconds
The initialDelaySeconds property defines the initial delay before the probe is tried for the first time. The default is 15 seconds.
The timeoutSeconds property defines the timeout of the probe. The default is 5 seconds.
An example of liveness and readiness probe configuration
# ...
readinessProbe:
  initialDelaySeconds: 15
  timeoutSeconds: 5
livenessProbe:
  initialDelaySeconds: 15
  timeoutSeconds: 5
# ...
3.1.14.2. Configuring healthchecks
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
Procedure
Edit the livenessProbe or readinessProbe property in the Kafka, KafkaConnect or KafkaConnectS2I resource. For example:
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    readinessProbe:
      initialDelaySeconds: 15
      timeoutSeconds: 5
    livenessProbe:
      initialDelaySeconds: 15
      timeoutSeconds: 5
    # ...
  zookeeper:
    # ...
Create or update the resource.
On OpenShift this can be done using oc apply:
oc apply -f your-file
3.1.15. Prometheus metrics
AMQ Streams supports Prometheus metrics using Prometheus JMX exporter to convert the JMX metrics supported by Apache Kafka and Zookeeper to Prometheus metrics. When metrics are enabled, they are exposed on port 9404.
3.1.15.1. Metrics configuration
Prometheus metrics are enabled by configuring the metrics property in the following resources:
- Kafka.spec.kafka
- Kafka.spec.zookeeper
- KafkaConnect.spec
- KafkaConnectS2I.spec
When the metrics property is not defined in the resource, the Prometheus metrics are disabled. To enable Prometheus metrics export without any further configuration, you can set it to an empty object ({}).
Example of enabling metrics without any further configuration
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    metrics: {}
    # ...
  zookeeper:
    # ...
The metrics property might contain additional configuration for the Prometheus JMX exporter.
Example of enabling metrics with additional Prometheus JMX Exporter configuration
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    metrics:
      lowercaseOutputName: true
      rules:
        - pattern: "kafka.server<type=(.+), name=(.+)PerSec\\w*><>Count"
          name: "kafka_server_$1_$2_total"
        - pattern: "kafka.server<type=(.+), name=(.+)PerSec\\w*, topic=(.+)><>Count"
          name: "kafka_server_$1_$2_total"
          labels:
            topic: "$3"
    # ...
  zookeeper:
    # ...
3.1.15.2. Configuring Prometheus metrics
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
Procedure
Edit the metrics property in the Kafka, KafkaConnect or KafkaConnectS2I resource. For example:
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
  zookeeper:
    # ...
    metrics:
      lowercaseOutputName: true
    # ...
Create or update the resource.
On OpenShift this can be done using oc apply:
oc apply -f your-file
3.1.16. JVM Options
Apache Kafka and Apache Zookeeper run inside a Java Virtual Machine (JVM). JVM configuration options optimize the performance for different platforms and architectures. AMQ Streams allows you to configure some of these options.
3.1.16.1. JVM configuration
JVM options can be configured using the jvmOptions property in the following resources:
- Kafka.spec.kafka
- Kafka.spec.zookeeper
- KafkaConnect.spec
- KafkaConnectS2I.spec
Only a selected subset of available JVM options can be configured. The following options are supported:
-Xms and -Xmx
-Xms configures the minimum initial allocation heap size when the JVM starts. -Xmx configures the maximum heap size.
The units accepted by JVM settings such as -Xmx and -Xms are those accepted by the JDK java binary in the corresponding image. Accordingly, 1g or 1G means 1,073,741,824 bytes, and Gi is not a valid unit suffix. This is in contrast to the units used for memory requests and limits, which follow the OpenShift convention where 1G means 1,000,000,000 bytes, and 1Gi means 1,073,741,824 bytes.
The default values used for -Xms and -Xmx depend on whether there is a memory limit configured for the container:
- If there is a memory limit, then the JVM’s minimum and maximum memory will be set to a value corresponding to the limit (see the sketch after this list).
- If there is no memory limit, then the JVM’s minimum memory will be set to 128M and the JVM’s maximum memory will not be defined. This allows for the JVM’s memory to grow as needed, which is ideal for single node environments in test and development.
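For example, the following sketch relies on the memory limit instead of setting -Xmx explicitly; the 8Gi value is illustrative:
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    resources:
      requests:
        memory: 8Gi
      limits:
        memory: 8Gi
    # no jvmOptions heap settings: -Xms and -Xmx are derived from the memory limit
    # ...
  zookeeper:
    # ...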
Setting -Xmx explicitly requires some care:
- The JVM’s overall memory usage will be approximately 4 × the maximum heap, as configured by -Xmx.
- If -Xmx is set without also setting an appropriate OpenShift memory limit, it is possible that the container will be killed should the OpenShift node experience memory pressure (from other Pods running on it).
- If -Xmx is set without also setting an appropriate OpenShift memory request, it is possible that the container will be scheduled to a node with insufficient memory. In this case, the container will not start but crash (immediately if -Xms is set to -Xmx, or some later time if not).
When setting -Xmx explicitly, it is recommended to:
- set the memory request and the memory limit to the same value,
- use a memory request that is at least 4.5 × the -Xmx,
- consider setting -Xms to the same value as -Xmx.
Containers doing lots of disk I/O (such as Kafka broker containers) will need to leave some memory available for use as operating system page cache. On such containers, the requested memory should be significantly higher than the memory used by the JVM.
Example fragment configuring -Xmx and -Xms
# ...
jvmOptions:
  "-Xmx": "2g"
  "-Xms": "2g"
# ...
In the above example, the JVM will use 2 GiB (=2,147,483,648 bytes) for its heap. Its total memory usage will be approximately 8 GiB.
Setting the same value for initial (-Xms) and maximum (-Xmx) heap sizes avoids the JVM having to allocate memory after startup, at the cost of possibly allocating more heap than is really needed. For Kafka and Zookeeper pods such allocation could cause unwanted latency. For Kafka Connect, avoiding over-allocation may be the most important concern, especially in distributed mode where the effects of over-allocation will be multiplied by the number of consumers.
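Putting these recommendations together, a configuration following the guidance above might look like the following sketch, where the 16Gi request and limit are illustrative values chosen to be more than 4.5 × the 2g heap:
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    resources:
      requests:
        memory: 16Gi   # request equals the limit
      limits:
        memory: 16Gi   # at least 4.5 × the -Xmx value
    jvmOptions:
      "-Xms": "2g"     # same as -Xmx to avoid allocation after startup
      "-Xmx": "2g"
    # ...
  zookeeper:
    # ...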
-server
-server enables the server JVM. This option can be set to true or false.
Example fragment configuring -server
# ...
jvmOptions:
  "-server": true
# ...
-XX
The -XX object can be used for configuring advanced runtime options of a JVM. The -server and -XX options are used to configure the KAFKA_JVM_PERFORMANCE_OPTS option of Apache Kafka.
Example showing the use of the -XX object
jvmOptions:
  "-XX":
    "UseG1GC": true
    "MaxGCPauseMillis": 20
    "InitiatingHeapOccupancyPercent": 35
    "ExplicitGCInvokesConcurrent": true
    "UseParNewGC": false
The example configuration above will result in the following JVM options:
-XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:+ExplicitGCInvokesConcurrent -XX:-UseParNewGC
When neither of the two options (-server and -XX) is specified, the default Apache Kafka configuration of KAFKA_JVM_PERFORMANCE_OPTS will be used.
3.1.16.1.1. Garbage collector logging
The jvmOptions section also allows you to enable and disable garbage collector (GC) logging. GC logging is enabled by default. To disable it, set the gcLoggingEnabled property as follows:
Example of disabling GC logging
# ...
jvmOptions:
  gcLoggingEnabled: false
# ...
3.1.16.2. Configuring JVM options
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
Procedure
Edit the jvmOptions property in the Kafka, KafkaConnect or KafkaConnectS2I resource. For example:
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    jvmOptions:
      "-Xmx": "8g"
      "-Xms": "8g"
    # ...
  zookeeper:
    # ...
Create or update the resource.
On OpenShift this can be done using oc apply:
oc apply -f your-file
3.1.17. Container images
AMQ Streams allows you to configure the container images which will be used for its components. Overriding container images is recommended only in special situations where you need to use a different container registry, for example, because your network does not allow access to the container repository used by AMQ Streams. In such a case, you should either copy the AMQ Streams images or build them from source. If the configured image is not compatible with AMQ Streams images, it might not work properly.
3.1.17.1. Container image configurations
The container image which should be used for a given component can be specified using the image property in:
- Kafka.spec.kafka
- Kafka.spec.kafka.tlsSidecar
- Kafka.spec.zookeeper
- Kafka.spec.zookeeper.tlsSidecar
- Kafka.spec.entityOperator.topicOperator
- Kafka.spec.entityOperator.userOperator
- Kafka.spec.entityOperator.tlsSidecar
- KafkaConnect.spec
- KafkaConnectS2I.spec
- KafkaBridge.spec
3.1.17.1.1. Configuring the Kafka.spec.kafka.image property
The Kafka.spec.kafka.image property functions differently from the others, because AMQ Streams supports multiple versions of Kafka, each requiring its own image. The STRIMZI_KAFKA_IMAGES environment variable of the Cluster Operator configuration is used to provide a mapping between Kafka versions and the corresponding images. This is used in combination with the Kafka.spec.kafka.image and Kafka.spec.kafka.version properties as follows:
- If neither Kafka.spec.kafka.image nor Kafka.spec.kafka.version are given in the custom resource, then the version will default to the Cluster Operator’s default Kafka version, and the image will be the one corresponding to this version in the STRIMZI_KAFKA_IMAGES.
- If Kafka.spec.kafka.image is given but Kafka.spec.kafka.version is not, then the given image will be used and the version will be assumed to be the Cluster Operator’s default Kafka version.
- If Kafka.spec.kafka.version is given but Kafka.spec.kafka.image is not, then the image will be the one corresponding to this version in the STRIMZI_KAFKA_IMAGES.
- If both Kafka.spec.kafka.version and Kafka.spec.kafka.image are given, the given image will be used, and it will be assumed to contain a Kafka broker with the given version.
It is best to provide just Kafka.spec.kafka.version and leave the Kafka.spec.kafka.image property unspecified. This reduces the chances of making a mistake in configuring the Kafka resource. If you need to change the images used for different versions of Kafka, it is better to configure the Cluster Operator’s STRIMZI_KAFKA_IMAGES environment variable.
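For example, a sketch that sets only the version (the version number shown is illustrative; use a Kafka version supported by your AMQ Streams release):
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    version: 2.2.1   # no image property: the Cluster Operator selects the matching image
    # ...
  zookeeper:
    # ...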
3.1.17.1.2. Configuring the image property in other resources
For the image property in the other custom resources, the given value will be used during deployment. If the image property is missing, the image specified in the Cluster Operator configuration will be used. If the image name is not defined in the Cluster Operator configuration, then the default value will be used.
For Kafka broker TLS sidecar:
- Container image specified in the STRIMZI_DEFAULT_TLS_SIDECAR_KAFKA_IMAGE environment variable from the Cluster Operator configuration.
- registry.redhat.io/amq7/amqstreams-kafka-22 container image.
For Zookeeper nodes:
- Container image specified in the STRIMZI_DEFAULT_ZOOKEEPER_IMAGE environment variable from the Cluster Operator configuration.
- registry.redhat.io/amq7/amqstreams-kafka-22 container image.
For Zookeeper node TLS sidecar:
- Container image specified in the STRIMZI_DEFAULT_TLS_SIDECAR_ZOOKEEPER_IMAGE environment variable from the Cluster Operator configuration.
- registry.redhat.io/amq7/amqstreams-kafka-22 container image.
For Topic Operator:
- Container image specified in the STRIMZI_DEFAULT_TOPIC_OPERATOR_IMAGE environment variable from the Cluster Operator configuration.
- registry.redhat.io/amq7/amq-streams-operator:1.2.0 container image.
For User Operator:
- Container image specified in the STRIMZI_DEFAULT_USER_OPERATOR_IMAGE environment variable from the Cluster Operator configuration.
- registry.redhat.io/amq7/amq-streams-operator:1.2.0 container image.
For Entity Operator TLS sidecar:
- Container image specified in the STRIMZI_DEFAULT_TLS_SIDECAR_ENTITY_OPERATOR_IMAGE environment variable from the Cluster Operator configuration.
- registry.redhat.io/amq7/amqstreams-kafka-22 container image.
For Kafka Connect:
- Container image specified in the STRIMZI_DEFAULT_KAFKA_CONNECT_IMAGE environment variable from the Cluster Operator configuration.
- registry.redhat.io/amq7/amqstreams-kafka-22 container image.
For Kafka Connect with Source2Image support:
- Container image specified in the STRIMZI_DEFAULT_KAFKA_CONNECT_S2I_IMAGE environment variable from the Cluster Operator configuration.
- registry.redhat.io/amq7/amqstreams-kafka-22 container image.
Overriding container images is recommended only in special situations where you need to use a different container registry, for example, because your network does not allow access to the container repository used by AMQ Streams. In such a case, you should either copy the AMQ Streams images or build them from source. If the configured image is not compatible with AMQ Streams images, it might not work properly.
Example of container image configuration
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    image: my-org/my-image:latest
    # ...
  zookeeper:
    # ...
3.1.17.2. Configuring container images
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
Procedure
Edit the image property in the Kafka, KafkaConnect or KafkaConnectS2I resource. For example:
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    image: my-org/my-image:latest
    # ...
  zookeeper:
    # ...
Create or update the resource.
On OpenShift this can be done using oc apply:
oc apply -f your-file
3.1.18. TLS sidecar
A sidecar is a container that runs in a pod but serves a supporting purpose. In AMQ Streams, the TLS sidecar uses TLS to encrypt and decrypt all communication between the various components and Zookeeper. Zookeeper does not have native TLS support.
The TLS sidecar is used in:
- Kafka brokers
- Zookeeper nodes
- Entity Operator
3.1.18.1. TLS sidecar configuration
The TLS sidecar can be configured using the tlsSidecar property in:
- Kafka.spec.kafka
- Kafka.spec.zookeeper
- Kafka.spec.entityOperator
The TLS sidecar supports the following additional options:
- image
- resources
- logLevel
- readinessProbe
- livenessProbe
The resources property can be used to specify the memory and CPU resources allocated to the TLS sidecar.
The image property can be used to configure the container image which will be used. For more details about configuring custom container images, see Section 3.1.17, “Container images”.
The logLevel property is used to specify the logging level. The following logging levels are supported:
- emerg
- alert
- crit
- err
- warning
- notice
- info
- debug
The default value is notice.
For more information about configuring the readinessProbe and livenessProbe properties for the healthchecks, see Section 3.1.14.1, “Healthcheck configurations”.
Example of TLS sidecar configuration
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    tlsSidecar:
      image: my-org/my-image:latest
      resources:
        requests:
          cpu: 200m
          memory: 64Mi
        limits:
          cpu: 500m
          memory: 128Mi
      logLevel: debug
      readinessProbe:
        initialDelaySeconds: 15
        timeoutSeconds: 5
      livenessProbe:
        initialDelaySeconds: 15
        timeoutSeconds: 5
    # ...
  zookeeper:
    # ...
3.1.18.2. Configuring TLS sidecar
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
Procedure
Edit the tlsSidecar property in the Kafka resource. For example:
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    tlsSidecar:
      resources:
        requests:
          cpu: 200m
          memory: 64Mi
        limits:
          cpu: 500m
          memory: 128Mi
    # ...
  zookeeper:
    # ...
Create or update the resource.
On OpenShift this can be done using oc apply:
oc apply -f your-file
3.1.19. Configuring pod scheduling
When two applications are scheduled to the same OpenShift node, both applications might use the same resources, such as disk I/O, which can lead to performance degradation. Scheduling Kafka pods in a way that avoids sharing nodes with other critical workloads, using the right nodes, or dedicating a set of nodes only to Kafka are the best ways to avoid such problems.
3.1.19.1. Scheduling pods based on other applications
3.1.19.1.1. Avoiding critical applications sharing nodes
Pod anti-affinity can be used to ensure that critical applications are never scheduled on the same node. When running a Kafka cluster, it is recommended to use pod anti-affinity to ensure that the Kafka brokers do not share nodes with other workloads, such as databases.
3.1.19.1.2. Affinity
Affinity can be configured using the affinity property in the following resources:
- Kafka.spec.kafka.template.pod
- Kafka.spec.zookeeper.template.pod
- Kafka.spec.entityOperator.template.pod
- KafkaConnect.spec.template.pod
- KafkaConnectS2I.spec.template.pod
- KafkaBridge.spec.template.pod
The affinity configuration can include different types of affinity:
- Pod affinity and anti-affinity
- Node affinity
The format of the affinity property follows the OpenShift specification. For more details, see the Kubernetes node and pod affinity documentation.
3.1.19.1.3. Configuring pod anti-affinity in Kafka components
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
Procedure
Edit the affinity property in the resource specifying the cluster deployment. Use labels to specify the pods which should not be scheduled on the same nodes. The topologyKey should be set to kubernetes.io/hostname to specify that the selected pods should not be scheduled on nodes with the same hostname. For example:
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
spec:
  kafka:
    # ...
    template:
      pod:
        affinity:
          podAntiAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
              - labelSelector:
                  matchExpressions:
                    - key: application
                      operator: In
                      values:
                        - postgresql
                        - mongodb
                topologyKey: "kubernetes.io/hostname"
    # ...
  zookeeper:
    # ...
Create or update the resource.
On OpenShift this can be done using oc apply:
oc apply -f your-file
3.1.19.2. Scheduling pods to specific nodes
3.1.19.2.1. Node scheduling
The OpenShift cluster usually consists of many different types of worker nodes. Some are optimized for CPU heavy workloads, some for memory, while others might be optimized for storage (fast local SSDs) or network. Using different nodes helps to optimize both costs and performance. To achieve the best possible performance, it is important to allow scheduling of AMQ Streams components to use the right nodes.
OpenShift uses node affinity to schedule workloads onto specific nodes. Node affinity allows you to create a scheduling constraint for the node on which the pod will be scheduled. The constraint is specified as a label selector. You can specify the label using either a built-in node label like beta.kubernetes.io/instance-type or custom labels to select the right node.
3.1.19.2.2. Affinity
Affinity can be configured using the affinity property in the following resources:
- Kafka.spec.kafka.template.pod
- Kafka.spec.zookeeper.template.pod
- Kafka.spec.entityOperator.template.pod
- KafkaConnect.spec.template.pod
- KafkaConnectS2I.spec.template.pod
- KafkaBridge.spec.template.pod
The affinity configuration can include different types of affinity:
- Pod affinity and anti-affinity
- Node affinity
The format of the affinity property follows the OpenShift specification. For more details, see the Kubernetes node and pod affinity documentation.
3.1.19.2.3. Configuring node affinity in Kafka components
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
Procedure
Label the nodes where AMQ Streams components should be scheduled.
On OpenShift this can be done using oc label:
oc label node your-node node-type=fast-network
Alternatively, some of the existing labels might be reused.
Edit the affinity property in the resource specifying the cluster deployment. For example:
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
spec:
  kafka:
    # ...
    template:
      pod:
        affinity:
          nodeAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
              nodeSelectorTerms:
                - matchExpressions:
                    - key: node-type
                      operator: In
                      values:
                        - fast-network
    # ...
  zookeeper:
    # ...
Create or update the resource.
On OpenShift this can be done using oc apply:
oc apply -f your-file
3.1.19.3. Using dedicated nodes
3.1.19.3.1. Dedicated nodes
Cluster administrators can mark selected OpenShift nodes as tainted. Nodes with taints are excluded from regular scheduling and normal pods will not be scheduled to run on them. Only services which can tolerate the taint set on the node can be scheduled on it. The only other services running on such nodes will be system services such as log collectors or software defined networks.
Taints can be used to create dedicated nodes. Running Kafka and its components on dedicated nodes can have many advantages. There will be no other applications running on the same nodes which could cause disturbance or consume the resources needed for Kafka. That can lead to improved performance and stability.
To schedule Kafka pods on the dedicated nodes, configure node affinity and tolerations.
3.1.19.3.2. Affinity
Affinity can be configured using the affinity property in the following resources:
- Kafka.spec.kafka.template.pod
- Kafka.spec.zookeeper.template.pod
- Kafka.spec.entityOperator.template.pod
- KafkaConnect.spec.template.pod
- KafkaConnectS2I.spec.template.pod
- KafkaBridge.spec.template.pod
The affinity configuration can include different types of affinity:
- Pod affinity and anti-affinity
- Node affinity
The format of the affinity property follows the OpenShift specification. For more details, see the Kubernetes node and pod affinity documentation.
3.1.19.3.3. Tolerations
Tolerations can be configured using the tolerations property in the following resources:
- Kafka.spec.kafka.template.pod
- Kafka.spec.zookeeper.template.pod
- Kafka.spec.entityOperator.template.pod
- KafkaConnect.spec.template.pod
- KafkaConnectS2I.spec.template.pod
- KafkaBridge.spec.template.pod
The format of the tolerations property follows the OpenShift specification. For more details, see the Kubernetes taints and tolerations documentation.
3.1.19.3.4. Setting up dedicated nodes and scheduling pods on them
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
Procedure
- Select the nodes which should be used as dedicated.
- Make sure there are no workloads scheduled on these nodes.
Set the taints on the selected nodes:
On OpenShift this can be done using oc adm taint:
oc adm taint node your-node dedicated=Kafka:NoSchedule
Additionally, add a label to the selected nodes as well.
On OpenShift this can be done using oc label:
oc label node your-node dedicated=Kafka
Edit the affinity and tolerations properties in the resource specifying the cluster deployment. For example:
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
spec:
  kafka:
    # ...
    template:
      pod:
        tolerations:
          - key: "dedicated"
            operator: "Equal"
            value: "Kafka"
            effect: "NoSchedule"
        affinity:
          nodeAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
              nodeSelectorTerms:
                - matchExpressions:
                    - key: dedicated
                      operator: In
                      values:
                        - Kafka
    # ...
  zookeeper:
    # ...
Create or update the resource.
On OpenShift this can be done using oc apply:
oc apply -f your-file
3.1.20. Performing a rolling update of a Kafka cluster
This procedure describes how to manually trigger a rolling update of an existing Kafka cluster by using an OpenShift annotation.
Prerequisites
- A running Kafka cluster.
- A running Cluster Operator.
Procedure
Find the name of the StatefulSet that controls the Kafka pods you want to manually update.
For example, if your Kafka cluster is named my-cluster, the corresponding StatefulSet is named my-cluster-kafka.
Annotate the StatefulSet resource in OpenShift.
On OpenShift, use oc annotate:
oc annotate statefulset cluster-name-kafka strimzi.io/manual-rolling-update=true
- Wait for the next reconciliation to occur (every two minutes by default). A rolling update of all pods within the annotated StatefulSet is triggered, as long as the annotation was detected by the reconciliation process. When the rolling update of all the pods is complete, the annotation is removed from the StatefulSet.
Additional resources
- For more information about deploying the Cluster Operator, see Section 2.3, “Cluster Operator”.
- For more information about deploying the Kafka cluster on OpenShift, see Section 2.4.1, “Deploying the Kafka cluster to OpenShift”.
3.1.21. Performing a rolling update of a Zookeeper cluster
This procedure describes how to manually trigger a rolling update of an existing Zookeeper cluster by using an OpenShift annotation.
Prerequisites
- A running Zookeeper cluster.
- A running Cluster Operator.
Procedure
Find the name of the StatefulSet that controls the Zookeeper pods you want to manually update.
For example, if your Kafka cluster is named my-cluster, the corresponding StatefulSet is named my-cluster-zookeeper.
Annotate the StatefulSet resource in OpenShift.
On OpenShift, use oc annotate:
oc annotate statefulset cluster-name-zookeeper strimzi.io/manual-rolling-update=true
- Wait for the next reconciliation to occur (every two minutes by default). A rolling update of all pods within the annotated StatefulSet is triggered, as long as the annotation was detected by the reconciliation process. When the rolling update of all the pods is complete, the annotation is removed from the StatefulSet.
Additional resources
- For more information about deploying the Cluster Operator, see Section 2.3, “Cluster Operator”.
- For more information about deploying the Zookeeper cluster, see Section 2.4.1, “Deploying the Kafka cluster to OpenShift”.
3.1.22. Scaling clusters
3.1.22.1. Scaling Kafka clusters
3.1.22.1.1. Adding brokers to a cluster
The primary way of increasing throughput for a topic is to increase the number of partitions for that topic. That works because the extra partitions allow the load of the topic to be shared between the different brokers in the cluster. However, in situations where every broker is constrained by a particular resource (typically I/O), using more partitions will not result in increased throughput. Instead, you need to add brokers to the cluster.
When you add an extra broker to the cluster, Kafka does not assign any partitions to it automatically. You must decide which partitions to move from the existing brokers to the new broker.
Once the partitions have been redistributed between all the brokers, the resource utilization of each broker should be reduced.
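For example, brokers are added by increasing the Kafka.spec.kafka.replicas configuration option, as described in Section 3.1.22.6, “Scaling up a Kafka cluster”. A minimal sketch, going from 3 to 4 brokers (the counts are illustrative):
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 4   # previously 3; the new broker starts with no partitions assigned
    # ...
  zookeeper:
    # ...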
3.1.22.1.2. Removing brokers from a cluster
Because AMQ Streams uses StatefulSets to manage broker pods, you cannot remove arbitrary pods from the cluster. You can only remove one or more of the highest numbered pods from the cluster. For example, in a cluster of 12 brokers the pods are named cluster-name-kafka-0 up to cluster-name-kafka-11. If you decide to scale down by one broker, the cluster-name-kafka-11 pod will be removed.
Before you remove a broker from a cluster, ensure that it is not assigned to any partitions. You should also decide which of the remaining brokers will be responsible for each of the partitions on the broker being decommissioned. Once the broker has no assigned partitions, you can scale the cluster down safely.
3.1.22.2. Partition reassignment
The Topic Operator does not currently support reassigning replicas to different brokers, so it is necessary to connect directly to broker pods to reassign replicas to brokers.
Within a broker pod, the kafka-reassign-partitions.sh utility allows you to reassign partitions to different brokers.
It has three different modes:
--generate
- Takes a set of topics and brokers and generates a reassignment JSON file which will result in the partitions of those topics being assigned to those brokers. Because this operates on whole topics, it cannot be used when you just need to reassign some of the partitions of some topics.
--execute
- Takes a reassignment JSON file and applies it to the partitions and brokers in the cluster. Brokers that gain partitions as a result become followers of the partition leader. For a given partition, once the new broker has caught up and joined the ISR (in-sync replicas), the old broker will stop being a follower and will delete its replica.
--verify
- Using the same reassignment JSON file as the --execute step, --verify checks whether all of the partitions in the file have been moved to their intended brokers. If the reassignment is complete, --verify also removes any throttles that are in effect. Unless removed, throttles will continue to affect the cluster even after the reassignment has finished.
It is only possible to have one reassignment running in a cluster at any given time, and it is not possible to cancel a running reassignment. If you need to cancel a reassignment, wait for it to complete and then perform another reassignment to revert the effects of the first reassignment. The kafka-reassign-partitions.sh tool will print the reassignment JSON for this reversion as part of its output. Very large reassignments should be broken down into a number of smaller reassignments in case there is a need to stop an in-progress reassignment.
3.1.22.2.1. Reassignment JSON file
The reassignment JSON file has a specific structure:
{
"version": 1,
"partitions": [
<PartitionObjects>
]
}
Where <PartitionObjects> is a comma-separated list of objects like:
{ "topic": <TopicName>, "partition": <Partition>, "replicas": [ <AssignedBrokerIds> ] }
Although Kafka also supports a "log_dirs" property, this should not be used in Red Hat AMQ Streams.
The following is an example reassignment JSON file that assigns topic topic-a, partition 4 to brokers 2, 4 and 7, and topic topic-b partition 2 to brokers 1, 5 and 7:
{
  "version": 1,
  "partitions": [
    { "topic": "topic-a", "partition": 4, "replicas": [2,4,7] },
    { "topic": "topic-b", "partition": 2, "replicas": [1,5,7] }
  ]
}
Partitions not included in the JSON are not changed.
3.1.22.2.2. Reassigning partitions between JBOD volumes
When using JBOD storage in your Kafka cluster, you can choose to reassign the partitions between specific volumes and their log directories (each volume has a single log directory). To reassign a partition to a specific volume, add the log_dirs option to <PartitionObjects> in the reassignment JSON file.
{
  "topic": <TopicName>,
  "partition": <Partition>,
  "replicas": [ <AssignedBrokerIds> ],
  "log_dirs": [ <AssignedLogDirs> ]
}
The log_dirs object should contain the same number of log directories as the number of replicas specified in the replicas object. The value should be either an absolute path to the log directory, or the any keyword.
For example:
{ "topic": "topic-a", "partition": 4, "replicas": [2,4,7]. "log_dirs": [ "/var/lib/kafka/data-0/kafka-log2", "/var/lib/kafka/data-0/kafka-log4", "/var/lib/kafka/data-0/kafka-log7" ] }
3.1.22.3. Generating reassignment JSON files
This procedure describes how to generate a reassignment JSON file that reassigns all the partitions for a given set of topics using the kafka-reassign-partitions.sh tool.
Prerequisites
- A running Cluster Operator
- A Kafka resource
- A set of topics to reassign the partitions of
Procedure
Prepare a JSON file named topics.json that lists the topics to move. It must have the following structure:
{
  "version": 1,
  "topics": [ <TopicObjects> ]
}
where <TopicObjects> is a comma-separated list of objects like:
{ "topic": <TopicName> }
For example, if you want to reassign all the partitions of topic-a and topic-b, you would need to prepare a topics.json file like this:
{
  "version": 1,
  "topics": [
    { "topic": "topic-a" },
    { "topic": "topic-b" }
  ]
}
Copy the topics.json file to one of the broker pods:
On OpenShift:
cat topics.json | oc rsh -c kafka <BrokerPod> /bin/bash -c \
  'cat > /tmp/topics.json'
Use the kafka-reassign-partitions.sh command to generate the reassignment JSON.
On OpenShift:
oc rsh -c kafka <BrokerPod> \
  bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
  --topics-to-move-json-file /tmp/topics.json \
  --broker-list <BrokerList> \
  --generate
For example, to move all the partitions of topic-a and topic-b to brokers 4 and 7:
oc rsh -c kafka <BrokerPod> \
  bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
  --topics-to-move-json-file /tmp/topics.json \
  --broker-list 4,7 \
  --generate
3.1.22.4. Creating reassignment JSON files manually
You can manually create the reassignment JSON file if you want to move specific partitions.
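For example, a manually created file that moves only partition 1 of topic-a to brokers 0, 1, and 2 might look like the following sketch (topic name, partition, and broker IDs are illustrative); partitions not listed remain unchanged:
{
  "version": 1,
  "partitions": [
    { "topic": "topic-a", "partition": 1, "replicas": [0,1,2] }
  ]
}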
3.1.22.5. Reassignment throttles
Partition reassignment can be a slow process because it involves transferring large amounts of data between brokers. To avoid a detrimental impact on clients, you can throttle the reassignment process. This might cause the reassignment to take longer to complete.
- If the throttle is too low then the newly assigned brokers will not be able to keep up with records being published and the reassignment will never complete.
- If the throttle is too high then clients will be impacted.
For example, for producers, this could manifest as higher than normal latency waiting for acknowledgement. For consumers, this could manifest as a drop in throughput caused by higher latency between polls.
3.1.22.6. Scaling up a Kafka cluster
This procedure describes how to increase the number of brokers in a Kafka cluster.
Prerequisites
- An existing Kafka cluster.
- A reassignment JSON file named reassignment.json that describes how partitions should be reassigned to brokers in the enlarged cluster.
Procedure
- Add as many new brokers as you need by increasing the Kafka.spec.kafka.replicas configuration option.
- Verify that the new broker pods have started.
Copy the reassignment.json file to the broker pod on which you will later execute the commands:
On OpenShift:
cat reassignment.json | \
  oc rsh -c kafka broker-pod /bin/bash -c \
  'cat > /tmp/reassignment.json'
For example:
cat reassignment.json | \
  oc rsh -c kafka my-cluster-kafka-0 /bin/bash -c \
  'cat > /tmp/reassignment.json'
Execute the partition reassignment using the kafka-reassign-partitions.sh command line tool from the same broker pod.
On OpenShift:
oc rsh -c kafka broker-pod \
  bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
  --reassignment-json-file /tmp/reassignment.json \
  --execute
If you are going to throttle replication, you can also pass the --throttle option with an inter-broker throttled rate in bytes per second. For example:
On OpenShift:
oc rsh -c kafka my-cluster-kafka-0 \
  bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
  --reassignment-json-file /tmp/reassignment.json \
  --throttle 5000000 \
  --execute
This command will print out two reassignment JSON objects. The first records the current assignment for the partitions being moved. You should save this to a local file (not a file in the pod) in case you need to revert the reassignment later on. The second JSON object is the target reassignment you have passed in your reassignment JSON file.
If you need to change the throttle during reassignment you can use the same command line with a different throttled rate. For example:
On OpenShift:
oc rsh -c kafka my-cluster-kafka-0 \
  bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
  --reassignment-json-file /tmp/reassignment.json \
  --throttle 10000000 \
  --execute
Periodically verify whether the reassignment has completed using the kafka-reassign-partitions.sh command line tool from any of the broker pods. This is the same command as the previous step, but with the --verify option instead of the --execute option.
On OpenShift:
oc rsh -c kafka broker-pod \
  bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
  --reassignment-json-file /tmp/reassignment.json \
  --verify
For example, on OpenShift:
oc rsh -c kafka my-cluster-kafka-0 \
  bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
  --reassignment-json-file /tmp/reassignment.json \
  --verify
- The reassignment has finished when the --verify command reports each of the partitions being moved as completed successfully. This final --verify will also have the effect of removing any reassignment throttles. You can now delete the revert file if you saved the JSON for reverting the assignment to their original brokers.
3.1.22.7. Scaling down a Kafka cluster
This procedure describes how to decrease the number of brokers in a Kafka cluster.
Prerequisites
- An existing Kafka cluster.
- A reassignment JSON file named reassignment.json describing how partitions should be reassigned to brokers in the cluster once the broker(s) in the highest-numbered Pod(s) have been removed, as sketched below.
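For example, when scaling a three-broker cluster down to two brokers, a reassignment.json that moves all replicas off broker 2 might look like the following sketch (topic names, partitions, and broker IDs are illustrative):
{
  "version": 1,
  "partitions": [
    { "topic": "topic-a", "partition": 0, "replicas": [0,1] },
    { "topic": "topic-a", "partition": 1, "replicas": [1,0] }
  ]
}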
Procedure
Copy the reassignment.json file to the broker pod on which you will later execute the commands:
On OpenShift:
cat reassignment.json | \
  oc rsh -c kafka broker-pod /bin/bash -c \
  'cat > /tmp/reassignment.json'
For example:
cat reassignment.json | \
  oc rsh -c kafka my-cluster-kafka-0 /bin/bash -c \
  'cat > /tmp/reassignment.json'
Execute the partition reassignment using the kafka-reassign-partitions.sh command line tool from the same broker pod.
On OpenShift:
oc rsh -c kafka broker-pod \
  bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
  --reassignment-json-file /tmp/reassignment.json \
  --execute
If you are going to throttle replication, you can also pass the --throttle option with an inter-broker throttled rate in bytes per second. For example:
On OpenShift:
oc rsh -c kafka my-cluster-kafka-0 \
  bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
  --reassignment-json-file /tmp/reassignment.json \
  --throttle 5000000 \
  --execute
This command will print out two reassignment JSON objects. The first records the current assignment for the partitions being moved. You should save this to a local file (not a file in the pod) in case you need to revert the reassignment later on. The second JSON object is the target reassignment you have passed in your reassignment JSON file.
If you need to change the throttle during reassignment you can use the same command line with a different throttled rate. For example:
On OpenShift:
oc rsh -c kafka my-cluster-kafka-0 \
  bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
  --reassignment-json-file /tmp/reassignment.json \
  --throttle 10000000 \
  --execute
Periodically verify whether the reassignment has completed using the kafka-reassign-partitions.sh command line tool from any of the broker pods. This is the same command as the previous step, but with the --verify option instead of the --execute option.
On OpenShift:
oc rsh -c kafka broker-pod \
  bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
  --reassignment-json-file /tmp/reassignment.json \
  --verify
For example, on OpenShift:
oc rsh -c kafka my-cluster-kafka-0 \
  bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
  --reassignment-json-file /tmp/reassignment.json \
  --verify
- The reassignment has finished when the --verify command reports each of the partitions being moved as completed successfully. This final --verify will also have the effect of removing any reassignment throttles. You can now delete the revert file if you saved the JSON for reverting the assignment to their original brokers.
- Once all the partition reassignments have finished, the broker(s) being removed should not have responsibility for any of the partitions in the cluster. You can verify this by checking that the broker’s data log directory does not contain any live partition logs. If the log directory on the broker contains a directory that does not match the extended regular expression [a-zA-Z0-9.-]+\.[a-z0-9]+-delete$, then the broker still has live partitions and it should not be stopped.
You can check this by executing the command:
oc rsh <BrokerN> -c kafka /bin/bash -c \
  "ls -l /var/lib/kafka/kafka-log<N> | grep -E '^d' | grep -vE '[a-zA-Z0-9.-]+\.[a-z0-9]+-delete$'"
where N is the number of the Pod(s) being deleted.
If the above command prints any output, then the broker still has live partitions. In this case, either the reassignment has not finished or the reassignment JSON file was incorrect.
- Once you have confirmed that the broker has no live partitions, you can edit the Kafka.spec.kafka.replicas of your Kafka resource, which will scale down the StatefulSet, deleting the highest-numbered broker Pod(s).
3.1.23. Deleting Kafka nodes manually
This procedure describes how to delete an existing Kafka node by using an OpenShift annotation. Deleting a Kafka node consists of deleting both the Pod on which the Kafka broker is running and the related PersistentVolumeClaim (if the cluster was deployed with persistent storage). After deletion, the Pod and its related PersistentVolumeClaim are recreated automatically.
Deleting a PersistentVolumeClaim can cause permanent data loss. The following procedure should only be performed if you have encountered storage issues.
Prerequisites
- A running Kafka cluster.
- A running Cluster Operator.
Procedure
Find the name of the Pod that you want to delete.
For example, if the cluster is named cluster-name, the pods are named cluster-name-kafka-index, where index starts at zero and ends at the total number of replicas minus one.
Annotate the Pod resource in OpenShift.
On OpenShift, use oc annotate:
oc annotate pod cluster-name-kafka-index strimzi.io/delete-pod-and-pvc=true
- Wait for the next reconciliation, when the annotated pod and its underlying persistent volume claim will be deleted and then recreated.
Additional resources
- For more information about deploying the Cluster Operator, see Section 2.3, “Cluster Operator”.
- For more information about deploying the Kafka cluster on OpenShift, see Section 2.4.1, “Deploying the Kafka cluster to OpenShift”.
3.1.24. Deleting Zookeeper nodes manually
This procedure describes how to delete an existing Zookeeper node by using an OpenShift annotation. Deleting a Zookeeper node consists of deleting both the Pod on which Zookeeper is running and the related PersistentVolumeClaim (if the cluster was deployed with persistent storage). After deletion, the Pod and its related PersistentVolumeClaim are recreated automatically.
Deleting a PersistentVolumeClaim can cause permanent data loss. The following procedure should only be performed if you have encountered storage issues.
Prerequisites
- A running Zookeeper cluster.
- A running Cluster Operator.
Procedure
Find the name of the Pod that you want to delete.
For example, if the cluster is named cluster-name, the pods are named cluster-name-zookeeper-index, where index starts at zero and ends at the total number of replicas minus one.
Annotate the Pod resource in OpenShift.
On OpenShift, use oc annotate:
oc annotate pod cluster-name-zookeeper-index strimzi.io/delete-pod-and-pvc=true
- Wait for the next reconciliation, when the annotated pod and its underlying persistent volume claim will be deleted and then recreated.
Additional resources
- For more information about deploying the Cluster Operator, see Section 2.3, “Cluster Operator”.
- For more information about deploying the Zookeeper cluster on OpenShift, see Section 2.4.1, “Deploying the Kafka cluster to OpenShift”.
3.1.25. Maintenance time windows for rolling updates
Maintenance time windows allow you to schedule certain rolling updates of your Kafka and Zookeeper clusters to start at a convenient time.
3.1.25.1. Maintenance time windows overview
In most cases, the Cluster Operator only updates your Kafka or Zookeeper clusters in response to changes to the corresponding Kafka resource. This enables you to plan when to apply changes to a Kafka resource to minimize the impact on Kafka client applications.
However, some updates to your Kafka and Zookeeper clusters can happen without any corresponding change to the Kafka resource. For example, the Cluster Operator will need to perform a rolling restart if a CA (Certificate Authority) certificate that it manages is close to expiry.
While a rolling restart of the pods should not affect availability of the service (assuming correct broker and topic configurations), it could affect performance of the Kafka client applications. Maintenance time windows allow you to schedule such spontaneous rolling updates of your Kafka and Zookeeper clusters to start at a convenient time. If maintenance time windows are not configured for a cluster then it is possible that such spontaneous rolling updates will happen at an inconvenient time, such as during a predictable period of high load.
3.1.25.2. Maintenance time window definition
You configure maintenance time windows by entering an array of strings in the Kafka.spec.maintenanceTimeWindows property. Each string is a cron expression interpreted as being in UTC (Coordinated Universal Time, which for practical purposes is the same as Greenwich Mean Time).
The following example configures a single maintenance time window that starts at midnight and ends at 01:59am (UTC), on Sundays, Mondays, Tuesdays, Wednesdays, and Thursdays:
# ...
maintenanceTimeWindows:
  - "* * 0-1 ? * SUN,MON,TUE,WED,THU *"
# ...
In practice, maintenance windows should be set in conjunction with the Kafka.spec.clusterCa.renewalDays and Kafka.spec.clientsCa.renewalDays properties of the Kafka resource, to ensure that the necessary CA certificate renewal can be completed in the configured maintenance time windows.
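A sketch combining the two follows; the renewal period and window shown are illustrative:
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  clusterCa:
    renewalDays: 30   # begin renewing the cluster CA 30 days before expiry
  clientsCa:
    renewalDays: 30
  maintenanceTimeWindows:
    # renewal-triggered rolling restarts start only between 00:00 and 01:59 UTC, Sun-Thu
    - "* * 0-1 ? * SUN,MON,TUE,WED,THU *"
  kafka:
    # ...
  zookeeper:
    # ...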
AMQ Streams does not schedule maintenance operations exactly according to the given windows. Instead, for each reconciliation, it checks whether a maintenance window is currently "open". This means that the start of maintenance operations within a given time window can be delayed by up to the Cluster Operator reconciliation interval. Maintenance time windows must therefore be at least this long.
Additional resources
- For more information about the Cluster Operator configuration, see Section 4.1.6, “Cluster Operator Configuration”.
3.1.25.3. Configuring a maintenance time window
You can configure a maintenance time window for rolling updates triggered by supported processes.
Prerequisites
- An OpenShift cluster.
- The Cluster Operator is running.
Procedure
Add or edit the maintenanceTimeWindows property in the Kafka resource. For example, to allow maintenance between 0800 and 1059 and between 1400 and 1559, you would set the maintenanceTimeWindows as shown below:
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
  zookeeper:
    # ...
  maintenanceTimeWindows:
    - "* * 8-10 * * ?"
    - "* * 14-15 * * ?"
Create or update the resource.
On OpenShift, use oc apply:
oc apply -f your-file
Additional resources
- Performing a rolling update of a Kafka cluster, see Section 3.1.20, “Performing a rolling update of a Kafka cluster”
- Performing a rolling update of a Zookeeper cluster, see Section 3.1.21, “Performing a rolling update of a Zookeeper cluster”
3.1.26. List of resources created as part of Kafka cluster
The following resources will be created by the Cluster Operator in the OpenShift cluster:
cluster-name-kafka
- StatefulSet which is in charge of managing the Kafka broker pods.
cluster-name-kafka-brokers
- Service needed to have DNS resolve the Kafka broker pods’ IP addresses directly.
cluster-name-kafka-bootstrap
- Service which can be used as the bootstrap server for Kafka clients.
cluster-name-kafka-external-bootstrap
- Bootstrap service for clients connecting from outside of the OpenShift cluster. This resource will be created only when the external listener is enabled.
cluster-name-kafka-pod-id
- Service used to route traffic from outside of the OpenShift cluster to individual pods. This resource will be created only when the external listener is enabled.
cluster-name-kafka-external-bootstrap
- Bootstrap route for clients connecting from outside of the OpenShift cluster. This resource will be created only when the external listener is enabled and set to type route.
cluster-name-kafka-pod-id
- Route for traffic from outside of the OpenShift cluster to individual pods. This resource will be created only when the external listener is enabled and set to type route.
cluster-name-kafka-config
- ConfigMap which contains the Kafka ancillary configuration and is mounted as a volume by the Kafka broker pods.
cluster-name-kafka-brokers
- Secret with Kafka broker keys.
cluster-name-kafka
- Service account used by the Kafka brokers.
cluster-name-kafka
- Pod Disruption Budget configured for the Kafka brokers.
strimzi-namespace-name-cluster-name-kafka-init
- Cluster role binding used by the Kafka brokers.
cluster-name-zookeeper
- StatefulSet which is in charge of managing the Zookeeper node pods.
cluster-name-zookeeper-nodes
- Service needed to have DNS resolve the Zookeeper pods’ IP addresses directly.
cluster-name-zookeeper-client
- Service used by Kafka brokers to connect to Zookeeper nodes as clients.
cluster-name-zookeeper-config
- ConfigMap which contains the Zookeeper ancillary configuration and is mounted as a volume by the Zookeeper node pods.
cluster-name-zookeeper-nodes
- Secret with Zookeeper node keys.
cluster-name-zookeeper
- Pod Disruption Budget configured for the Zookeeper nodes.
cluster-name-entity-operator
- Deployment with Topic and User Operators. This resource will be created only if the Cluster Operator deployed the Entity Operator.
cluster-name-entity-topic-operator-config
- ConfigMap with ancillary configuration for the Topic Operator. This resource will be created only if the Cluster Operator deployed the Entity Operator.
cluster-name-entity-user-operator-config
- ConfigMap with ancillary configuration for the User Operator. This resource will be created only if the Cluster Operator deployed the Entity Operator.
cluster-name-entity-operator-certs
- Secret with the Entity Operator keys for communication with Kafka and Zookeeper. This resource will be created only if the Cluster Operator deployed the Entity Operator.
cluster-name-entity-operator
- Service account used by the Entity Operator.
strimzi-cluster-name-topic-operator
- Role binding used by the Entity Operator.
strimzi-cluster-name-user-operator
- Role binding used by the Entity Operator.
cluster-name-cluster-ca
- Secret with the Cluster CA used to encrypt the cluster communication.
cluster-name-cluster-ca-cert
- Secret with the Cluster CA public key. This key can be used to verify the identity of the Kafka brokers.
cluster-name-clients-ca
- Secret with the Clients CA used to encrypt the communication between Kafka brokers and Kafka clients.
cluster-name-clients-ca-cert
- Secret with the Clients CA public key. This key can be used to verify the identity of Kafka users.
cluster-name-cluster-operator-certs
- Secret with the Cluster Operator keys for communication with Kafka and Zookeeper.
data-cluster-name-kafka-idx
- Persistent Volume Claim for the volume used for storing data for the Kafka broker pod idx. This resource will be created only if persistent storage is selected for provisioning persistent volumes to store data.
data-id-cluster-name-kafka-idx
- Persistent Volume Claim for the volume id used for storing data for the Kafka broker pod idx. This resource is created only if persistent storage is selected for JBOD volumes when provisioning persistent volumes to store data.
data-cluster-name-zookeeper-idx
- Persistent Volume Claim for the volume used for storing data for the Zookeeper node pod idx. This resource will be created only if persistent storage is selected for provisioning persistent volumes to store data.
3.2. Kafka Connect cluster configuration
The full schema of the KafkaConnect resource is described in the Section C.55, “KafkaConnect schema reference”. All labels that are applied to the desired KafkaConnect resource will also be applied to the OpenShift resources making up the Kafka Connect cluster. This provides a convenient mechanism for resources to be labeled as required.
3.2.1. Replicas
Kafka Connect clusters can run as multiple nodes. The number of nodes is defined in the KafkaConnect and KafkaConnectS2I resources. Running a Kafka Connect cluster with multiple nodes can provide better availability and scalability. However, when running Kafka Connect on OpenShift it is not absolutely necessary to run multiple nodes of Kafka Connect for high availability. If the node where Kafka Connect is deployed crashes, OpenShift will automatically reschedule the Kafka Connect pod to a different node. However, running Kafka Connect with multiple nodes can provide faster failover times, because the other nodes will already be up and running.
3.2.1.1. Configuring the number of nodes
The number of Kafka Connect nodes is configured using the replicas
property in KafkaConnect.spec
and KafkaConnectS2I.spec
.
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
Procedure
Edit the
replicas
property in theKafkaConnect
orKafkaConnectS2I
resource. For example:apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaConnectS2I metadata: name: my-cluster spec: # ... replicas: 3 # ...
Create or update the resource.
On OpenShift this can be done using
oc apply
:oc apply -f your-file
3.2.2. Bootstrap servers
A Kafka Connect cluster always works in combination with a Kafka cluster. A Kafka cluster is specified as a list of bootstrap servers. On OpenShift, the list should ideally contain the Kafka cluster bootstrap service named cluster-name-kafka-bootstrap
, with port 9092 for plain traffic or 9093 for encrypted traffic.
The list of bootstrap servers is configured in the bootstrapServers
property in KafkaConnect.spec
and KafkaConnectS2I.spec
. The servers must be defined as a comma-separated list specifying one or more Kafka brokers, or a service pointing to Kafka brokers, specified as hostname:port
pairs.
When using Kafka Connect with a Kafka cluster not managed by AMQ Streams, you can specify the bootstrap servers list according to the configuration of the cluster.
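For example, the following sketch points Kafka Connect at two externally managed brokers directly; the broker addresses are hypothetical placeholders:
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaConnect
metadata:
  name: my-cluster
spec:
  # ...
  bootstrapServers: kafka-1.example.com:9093,kafka-2.example.com:9093
  # ...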
3.2.2.1. Configuring bootstrap servers
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
Procedure
Edit the
bootstrapServers
property in theKafkaConnect
orKafkaConnectS2I
resource. For example:apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaConnect metadata: name: my-cluster spec: # ... bootstrapServers: my-cluster-kafka-bootstrap:9092 # ...
Create or update the resource.
On OpenShift this can be done using
oc apply
:oc apply -f your-file
3.2.3. Connecting to Kafka brokers using TLS
By default, Kafka Connect tries to connect to Kafka brokers using a plain text connection. If you prefer to use TLS, additional configuration is required.
3.2.3.1. TLS support in Kafka Connect
TLS support is configured in the tls
property in KafkaConnect.spec
and KafkaConnectS2I.spec
. The tls
property contains a list of secrets with key names under which the certificates are stored. The certificates must be stored in X509 format.
An example showing TLS configuration with multiple certificates
apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaConnect metadata: name: my-cluster spec: # ... tls: trustedCertificates: - secretName: my-secret certificate: ca.crt - secretName: my-other-secret certificate: certificate.crt # ...
When multiple certificates are stored in the same secret, the secret can be listed multiple times.
An example showing TLS configuration with multiple certificates from the same secret
apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaConnectS2I metadata: name: my-cluster spec: # ... tls: trustedCertificates: - secretName: my-secret certificate: ca.crt - secretName: my-secret certificate: ca2.crt # ...
3.2.3.2. Configuring TLS in Kafka Connect
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
-
If they exist, the name of the
Secret
for the certificate used for TLS Server Authentication, and the key under which the certificate is stored in theSecret
Procedure
(Optional) If they do not already exist, prepare the TLS certificate used in authentication in a file and create a
Secret
. Note: The secrets created by the Cluster Operator for the Kafka cluster may be used directly.
On OpenShift this can be done using
oc create
:oc create secret generic my-secret --from-file=my-file.crt
Edit the
tls
property in theKafkaConnect
orKafkaConnectS2I
resource. For example:apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaConnect metadata: name: my-connect spec: # ... tls: trustedCertificates: - secretName: my-cluster-cluster-cert certificate: ca.crt # ...
Create or update the resource.
On OpenShift this can be done using
oc apply
:oc apply -f your-file
3.2.4. Connecting to Kafka brokers with Authentication
By default, Kafka Connect will try to connect to Kafka brokers without authentication. Authentication is enabled through the KafkaConnect
and KafkaConnectS2I
resources.
3.2.4.1. Authentication support in Kafka Connect
Authentication is configured through the authentication
property in KafkaConnect.spec
and KafkaConnectS2I.spec
. The authentication
property specifies the type of authentication mechanism which should be used and additional configuration details depending on the mechanism. The currently supported authentication types are:
- TLS client authentication
- SASL-based authentication using the SCRAM-SHA-512 mechanism
- SASL-based authentication using the PLAIN mechanism
3.2.4.1.1. TLS Client Authentication
To use TLS client authentication, set the type
property to the value tls
. TLS client authentication uses a TLS certificate to authenticate. The certificate is specified in the certificateAndKey
property and is always loaded from an OpenShift secret. In the secret, the certificate and the private key must be stored under two separate keys, with the certificate in X509 format.
TLS client authentication can be used only with TLS connections. For more details about TLS configuration in Kafka Connect see Section 3.2.3, “Connecting to Kafka brokers using TLS”.
An example TLS client authentication configuration
apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaConnect metadata: name: my-cluster spec: # ... authentication: type: tls certificateAndKey: secretName: my-secret certificate: public.crt key: private.key # ...
3.2.4.1.2. SASL based SCRAM-SHA-512 authentication
To configure Kafka Connect to use SASL-based SCRAM-SHA-512 authentication, set the type
property to scram-sha-512
. This authentication mechanism requires a username and password.
-
Specify the username in the
username
property. -
In the
passwordSecret
property, specify a link to aSecret
containing the password. ThesecretName
property contains the name of theSecret
and thepassword
property contains the name of the key under which the password is stored inside theSecret
.
Do not specify the actual password in the password
field.
An example SASL based SCRAM-SHA-512 client authentication configuration
apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaConnect metadata: name: my-cluster spec: # ... authentication: type: scram-sha-512 username: my-connect-user passwordSecret: secretName: my-connect-user password: my-connect-password-key # ...
3.2.4.1.3. SASL based PLAIN authentication
To configure Kafka Connect to use SASL-based PLAIN authentication, set the type
property to plain
. This authentication mechanism requires a username and password.
The SASL PLAIN mechanism will transfer the username and password across the network in cleartext. Only use SASL PLAIN authentication if TLS encryption is enabled.
-
Specify the username in the
username
property. -
In the
passwordSecret
property, specify a link to aSecret
containing the password. ThesecretName
property contains the name of such aSecret
and thepassword
property contains the name of the key under which the password is stored inside theSecret
.
Do not specify the actual password in the password
field.
An example showing SASL based PLAIN client authentication configuration
apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaConnect metadata: name: my-cluster spec: # ... authentication: type: plain username: my-connect-user passwordSecret: secretName: my-connect-user password: my-connect-password-key # ...
3.2.4.2. Configuring TLS client authentication in Kafka Connect
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
-
If they exist, the name of the
Secret
with the public and private keys used for TLS Client Authentication, and the keys under which they are stored in theSecret
Procedure
(Optional) If they do not already exist, prepare the keys used for authentication in a file and create the
Secret
. Note: Secrets created by the User Operator may be used.
On OpenShift this can be done using
oc create
:oc create secret generic my-secret --from-file=my-public.crt --from-file=my-private.key
Edit the
authentication
property in theKafkaConnect
orKafkaConnectS2I
resource. For example:apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaConnect metadata: name: my-connect spec: # ... authentication: type: tls certificateAndKey: secretName: my-secret certificate: my-public.crt key: my-private.key # ...
Create or update the resource.
On OpenShift this can be done using
oc apply
:oc apply -f your-file
3.2.4.3. Configuring SCRAM-SHA-512 authentication in Kafka Connect
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
- Username of the user which should be used for authentication
-
If they exist, the name of the
Secret
with the password used for authentication and the key under which the password is stored in theSecret
Procedure
(Optional) If they do not already exist, prepare a file with the password used in authentication and create the
Secret
. Note: Secrets created by the User Operator may be used.
On OpenShift this can be done using
oc create
:
echo -n '1f2d1e2e67df' > <my-password.txt>
oc create secret generic <my-secret> --from-file=<my-password.txt>
Edit the
authentication
property in theKafkaConnect
orKafkaConnectS2I
resource. For example:apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaConnect metadata: name: my-connect spec: # ... authentication: type: scram-sha-512 username: <my-username> passwordSecret: secretName: <my-secret> password: <my-password.txt> # ...
Create or update the resource.
On OpenShift this can be done using
oc apply
:oc apply -f your-file
3.2.5. Kafka Connect configuration
AMQ Streams allows you to customize the configuration of Apache Kafka Connect nodes by editing certain options listed in Apache Kafka documentation.
Configuration options that cannot be configured relate to:
- Kafka cluster bootstrap address
- Security (Encryption, Authentication, and Authorization)
- Listener / REST interface configuration
- Plugin path configuration
These options are automatically configured by AMQ Streams.
3.2.5.1. Kafka Connect configuration
Kafka Connect is configured using the config
property in KafkaConnect.spec
and KafkaConnectS2I.spec
. This property contains the Kafka Connect configuration options as keys. The values can be one of the following JSON types:
- String
- Number
- Boolean
You can specify and configure the options listed in the Apache Kafka documentation with the exception of those options that are managed directly by AMQ Streams. Specifically, configuration options with keys equal to or starting with one of the following strings are forbidden:
-
ssl.
-
sasl.
-
security.
-
listeners
-
plugin.path
-
rest.
-
bootstrap.servers
When a forbidden option is present in the config
property, it is ignored and a warning message is printed to the Cluster Operator log file. All other options are passed to Kafka Connect.
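For example, in the following sketch (the values are illustrative), bootstrap.servers is a forbidden key and ssl.protocol starts with the forbidden prefix ssl., so the Cluster Operator would ignore both and log a warning, while group.id would be passed to Kafka Connect:
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaConnect
metadata:
  name: my-connect
spec:
  # ...
  config:
    group.id: my-connect-cluster    # passed to Kafka Connect
    bootstrap.servers: ignored:9092 # forbidden key, ignored with a warning
    ssl.protocol: TLSv1.2           # forbidden prefix ssl., ignored with a warning
  # ...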
The Cluster Operator does not validate keys or values in the config
object provided. When an invalid configuration is provided, the Kafka Connect cluster might not start or might become unstable. In this circumstance, fix the configuration in the KafkaConnect.spec.config
or KafkaConnectS2I.spec.config
object, then the Cluster Operator can roll out the new configuration to all Kafka Connect nodes.
Certain options have default values:
-
group.id
with default valueconnect-cluster
-
offset.storage.topic
with default valueconnect-cluster-offsets
-
config.storage.topic
with default valueconnect-cluster-configs
-
status.storage.topic
with default valueconnect-cluster-status
-
key.converter
with default valueorg.apache.kafka.connect.json.JsonConverter
-
value.converter
with default valueorg.apache.kafka.connect.json.JsonConverter
These options are automatically configured if they are not present in the KafkaConnect.spec.config
or KafkaConnectS2I.spec.config
properties.
Example Kafka Connect configuration
apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaConnect metadata: name: my-connect spec: # ... config: group.id: my-connect-cluster offset.storage.topic: my-connect-cluster-offsets config.storage.topic: my-connect-cluster-configs status.storage.topic: my-connect-cluster-status key.converter: org.apache.kafka.connect.json.JsonConverter value.converter: org.apache.kafka.connect.json.JsonConverter key.converter.schemas.enable: true value.converter.schemas.enable: true config.storage.replication.factor: 3 offset.storage.replication.factor: 3 status.storage.replication.factor: 3 # ...
3.2.5.2. Configuring Kafka Connect
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
Procedure
Edit the
config
property in theKafkaConnect
orKafkaConnectS2I
resource. For example:apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaConnect metadata: name: my-connect spec: # ... config: group.id: my-connect-cluster offset.storage.topic: my-connect-cluster-offsets config.storage.topic: my-connect-cluster-configs status.storage.topic: my-connect-cluster-status key.converter: org.apache.kafka.connect.json.JsonConverter value.converter: org.apache.kafka.connect.json.JsonConverter key.converter.schemas.enable: true value.converter.schemas.enable: true config.storage.replication.factor: 3 offset.storage.replication.factor: 3 status.storage.replication.factor: 3 # ...
Create or update the resource.
On OpenShift this can be done using
oc apply
:oc apply -f your-file
3.2.6. CPU and memory resources
For every deployed container, AMQ Streams allows you to request specific resources and define the maximum consumption of those resources.
AMQ Streams supports two types of resources:
- CPU
- Memory
AMQ Streams uses the OpenShift syntax for specifying CPU and memory resources.
3.2.6.1. Resource limits and requests
Resource limits and requests are configured using the resources
property in the following resources:
-
Kafka.spec.kafka
-
Kafka.spec.kafka.tlsSidecar
-
Kafka.spec.zookeeper
-
Kafka.spec.zookeeper.tlsSidecar
-
Kafka.spec.entityOperator.topicOperator
-
Kafka.spec.entityOperator.userOperator
-
Kafka.spec.entityOperator.tlsSidecar
-
KafkaConnect.spec
-
KafkaConnectS2I.spec
-
KafkaBridge.spec
Additional resources
- For more information about managing computing resources on OpenShift, see Managing Compute Resources for Containers.
3.2.6.1.1. Resource requests
Requests specify the resources to reserve for a given container. Reserving the resources ensures that they are always available.
If the resource request is for more than the available free resources in the OpenShift cluster, the pod is not scheduled.
Resource requests are specified in the requests
property. Resource requests currently supported by AMQ Streams:
-
cpu
-
memory
A request may be configured for one or more supported resources.
Example resource request configuration with all resources
# ... resources: requests: cpu: 12 memory: 64Gi # ...
3.2.6.1.2. Resource limits
Limits specify the maximum resources that can be consumed by a given container. The limit is not reserved and might not always be available. A container can use the resources up to the limit only when they are available. Resource limits should always be higher than the resource requests.
Resource limits are specified in the limits
property. Resource limits currently supported by AMQ Streams:
-
cpu
-
memory
A limit may be configured for one or more supported resources.
Example resource limits configuration
# ... resources: limits: cpu: 12 memory: 64Gi # ...
3.2.6.1.3. Supported CPU formats
CPU requests and limits are supported in the following formats:
- Number of CPU cores as an integer (5 CPU cores) or decimal (2.5 CPU cores).
- Number of millicpus / millicores (100m), where 1000 millicores is the same as 1 CPU core.
Example CPU units
# ... resources: requests: cpu: 500m limits: cpu: 2.5 # ...
The computing power of 1 CPU core may differ depending on the platform where OpenShift is deployed.
Additional resources
- For more information on CPU specification, see the Meaning of CPU.
3.2.6.1.4. Supported memory formats
Memory requests and limits are specified in megabytes, gigabytes, mebibytes, and gibibytes.
- To specify memory in megabytes, use the M suffix. For example 1000M.
- To specify memory in gigabytes, use the G suffix. For example 1G.
- To specify memory in mebibytes, use the Mi suffix. For example 1000Mi.
- To specify memory in gibibytes, use the Gi suffix. For example 1Gi.
An example of using different memory units
# ... resources: requests: memory: 512Mi limits: memory: 2Gi # ...
Additional resources
- For more details about memory specification and additional supported units, see Meaning of memory.
3.2.6.2. Configuring resource requests and limits
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
Procedure
Edit the
resources
property in the resource specifying the cluster deployment. For example:apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka spec: kafka: # ... resources: requests: cpu: "8" memory: 64Gi limits: cpu: "12" memory: 128Gi # ... zookeeper: # ...
Create or update the resource.
On OpenShift this can be done using
oc apply
:oc apply -f your-file
Additional resources
-
For more information about the schema, see
Resources
schema reference.
3.2.7. Logging
This section provides information on loggers and how to configure log levels.
You can set the log levels by specifying the loggers and their levels directly (inline) or use a custom (external) config map.
3.2.7.1. Kafka Connect loggers
Kafka Connect has its own configurable loggers:
-
connect.root.logger.level
-
log4j.logger.org.apache.zookeeper
-
log4j.logger.org.I0Itec.zkclient
-
log4j.logger.org.reflections
3.2.7.2. Specifying inline logging
Procedure
Edit the YAML file to specify the loggers and logging level for the required components.
For example, the logging level here is set to INFO:
apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaConnect spec: # ... logging: type: inline loggers: logger.name: "INFO" # ...
You can set the log level to INFO, ERROR, WARN, TRACE, DEBUG, FATAL or OFF.
For more information about the log levels, see the log4j manual.
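For example, a sketch using the Kafka Connect loggers listed in Section 3.2.7.1 (the levels shown are illustrative):
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaConnect
spec:
  # ...
  logging:
    type: inline
    loggers:
      connect.root.logger.level: "INFO"
      log4j.logger.org.reflections: "WARN"
  # ...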
Create or update the Kafka resource in OpenShift.
On OpenShift this can be done using
oc apply
:oc apply -f your-file
3.2.7.3. Specifying an external ConfigMap for logging
Procedure
Edit the YAML file to specify the name of the
ConfigMap
to use for the required components. For example:apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaConnect spec: # ... logging: type: external name: customConfigMap # ...
Remember to place your custom ConfigMap under the
log4j.properties
or log4j2.properties
key.
Create or update the Kafka resource in OpenShift.
On OpenShift this can be done using
oc apply
:oc apply -f your-file
Garbage collector (GC) logging can also be enabled (or disabled). For more information on GC logging, see Section 3.2.10.1, “JVM configuration”.
3.2.8. Healthchecks
Healthchecks are periodic tests which verify the health of an application. When a Healthcheck probe fails, OpenShift assumes that the application is not healthy and attempts to fix it.
OpenShift supports two types of Healthcheck probes:
- Liveness probes
- Readiness probes
For more details about the probes, see Configure Liveness and Readiness Probes. Both types of probes are used in AMQ Streams components.
Users can configure selected options for liveness and readiness probes.
3.2.8.1. Healthcheck configurations
Liveness and readiness probes can be configured using the livenessProbe
and readinessProbe
properties in the following resources:
-
Kafka.spec.kafka
-
Kafka.spec.kafka.tlsSidecar
-
Kafka.spec.zookeeper
-
Kafka.spec.zookeeper.tlsSidecar
-
Kafka.spec.entityOperator.tlsSidecar
-
Kafka.spec.entityOperator.topicOperator
-
Kafka.spec.entityOperator.userOperator
-
KafkaConnect.spec
-
KafkaConnectS2I.spec
-
KafkaBridge.spec
Both livenessProbe
and readinessProbe
support two additional options:
-
initialDelaySeconds
-
timeoutSeconds
The initialDelaySeconds
property defines the initial delay before the probe is tried for the first time. The default is 15 seconds.
The timeoutSeconds
property defines the timeout of the probe. The default is 5 seconds.
An example of liveness and readiness probe configuration
# ... readinessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 livenessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 # ...
3.2.8.2. Configuring healthchecks
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
Procedure
Edit the
livenessProbe
orreadinessProbe
property in theKafka
,KafkaConnect
orKafkaConnectS2I
resource. For example:apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-cluster spec: kafka: # ... readinessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 livenessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 # ... zookeeper: # ...
Create or update the resource.
On OpenShift this can be done using
oc apply
:oc apply -f your-file
3.2.9. Prometheus metrics
AMQ Streams supports Prometheus metrics using the Prometheus JMX exporter to convert the JMX metrics supported by Apache Kafka and Zookeeper to Prometheus metrics. When metrics are enabled, they are exposed on port 9404.
3.2.9.1. Metrics configuration
Prometheus metrics are enabled by configuring the metrics
property in the following resources:
-
Kafka.spec.kafka
-
Kafka.spec.zookeeper
-
KafkaConnect.spec
-
KafkaConnectS2I.spec
When the metrics
property is not defined in the resource, the Prometheus metrics will be disabled. To enable Prometheus metrics export without any further configuration, you can set it to an empty object ({}
).
Example of enabling metrics without any further configuration
apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-cluster spec: kafka: # ... metrics: {} # ... zookeeper: # ...
The metrics
property might contain additional configuration for the Prometheus JMX exporter.
Example of enabling metrics with additional Prometheus JMX Exporter configuration
apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-cluster spec: kafka: # ... metrics: lowercaseOutputName: true rules: - pattern: "kafka.server<type=(.+), name=(.+)PerSec\\w*><>Count" name: "kafka_server_$1_$2_total" - pattern: "kafka.server<type=(.+), name=(.+)PerSec\\w*, topic=(.+)><>Count" name: "kafka_server_$1_$2_total" labels: topic: "$3" # ... zookeeper: # ...
3.2.9.2. Configuring Prometheus metrics
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
Procedure
Edit the
metrics
property in theKafka
,KafkaConnect
orKafkaConnectS2I
resource. For example:apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-cluster spec: kafka: # ... zookeeper: # ... metrics: lowercaseOutputName: true # ...
Create or update the resource.
On OpenShift this can be done using
oc apply
:oc apply -f your-file
3.2.10. JVM Options
Apache Kafka and Apache Zookeeper run inside a Java Virtual Machine (JVM). JVM configuration options optimize the performance for different platforms and architectures. AMQ Streams allows you to configure some of these options.
3.2.10.1. JVM configuration
JVM options can be configured using the jvmOptions
property in the following resources:
-
Kafka.spec.kafka
-
Kafka.spec.zookeeper
-
KafkaConnect.spec
-
KafkaConnectS2I.spec
Only a selected subset of available JVM options can be configured. The following options are supported:
-Xms and -Xmx
-Xms
configures the minimum initial allocation heap size when the JVM starts. -Xmx
configures the maximum heap size.
The units accepted by JVM settings such as -Xmx
and -Xms
are those accepted by the JDK java
binary in the corresponding image. Accordingly, 1g
or 1G
means 1,073,741,824 bytes, and Gi
is not a valid unit suffix. This is in contrast to the units used for memory requests and limits, which follow the OpenShift convention where 1G
means 1,000,000,000 bytes, and 1Gi
means 1,073,741,824 bytes.
The default values used for -Xms
and -Xmx
depend on whether there is a memory limit configured for the container:
- If there is a memory limit then the JVM’s minimum and maximum memory will be set to a value corresponding to the limit.
-
If there is no memory limit then the JVM’s minimum memory will be set to
128M
and the JVM’s maximum memory will not be defined. This allows the JVM’s memory to grow as needed, which is ideal for single node environments in test and development.
Setting -Xmx
explicitly requires some care:
-
The JVM’s overall memory usage will be approximately 4 × the maximum heap, as configured by
-Xmx
. -
If
-Xmx
is set without also setting an appropriate OpenShift memory limit, it is possible that the container will be killed should the OpenShift node experience memory pressure (from other Pods running on it). -
If
-Xmx
is set without also setting an appropriate OpenShift memory request, it is possible that the container will be scheduled to a node with insufficient memory. In this case, the container will not start but crash (immediately if -Xms
is set to -Xmx
, or some later time if not).
When setting -Xmx
explicitly, it is recommended to:
- set the memory request and the memory limit to the same value,
-
use a memory request that is at least 4.5 × the
-Xmx
, -
consider setting
-Xms
to the same value as -Xmx
.
Containers doing lots of disk I/O (such as Kafka broker containers) will need to leave some memory available for use as operating system page cache. On such containers, the requested memory should be significantly higher than the memory used by the JVM.
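Putting these recommendations together, the following sketch sizes a Kafka broker container for a 2 GiB heap; the figures are illustrative, applying the 4.5 × guideline above (4.5 × 2g = 9 GiB, rounded up to 10Gi to leave additional room for the page cache):
# ...
jvmOptions:
  "-Xms": "2g"
  "-Xmx": "2g"
resources:
  requests:
    memory: 10Gi
  limits:
    memory: 10Gi
# ...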
Example fragment configuring -Xmx
and -Xms
# ... jvmOptions: "-Xmx": "2g" "-Xms": "2g" # ...
In the above example, the JVM will use 2 GiB (=2,147,483,648 bytes) for its heap. Its total memory usage will be approximately 8 GiB.
Setting the same value for initial (-Xms
) and maximum (-Xmx
) heap sizes avoids the JVM having to allocate memory after startup, at the cost of possibly allocating more heap than is really needed. For Kafka and Zookeeper pods, such allocation could cause unwanted latency. For Kafka Connect, avoiding over-allocation may be the most important concern, especially in distributed mode, where the effects of over-allocation will be multiplied by the number of consumers.
-server
-server
enables the server JVM. This option can be set to true or false.
Example fragment configuring -server
# ... jvmOptions: "-server": true # ...
When neither of the two options (-server
and -XX
) is specified, the default Apache Kafka configuration of KAFKA_JVM_PERFORMANCE_OPTS
will be used.
-XX
-XX
object can be used for configuring advanced runtime options of a JVM. The -server
and -XX
options are used to configure the KAFKA_JVM_PERFORMANCE_OPTS
option of Apache Kafka.
Example showing the use of the -XX
object
jvmOptions: "-XX": "UseG1GC": true, "MaxGCPauseMillis": 20, "InitiatingHeapOccupancyPercent": 35, "ExplicitGCInvokesConcurrent": true, "UseParNewGC": false
The example configuration above will result in the following JVM options:
-XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:+ExplicitGCInvokesConcurrent -XX:-UseParNewGC
When neither of the two options (-server
and -XX
) is specified, the default Apache Kafka configuration of KAFKA_JVM_PERFORMANCE_OPTS
will be used.
3.2.10.1.1. Garbage collector logging
The jvmOptions
section also allows you to enable and disable garbage collector (GC) logging. GC logging is enabled by default. To disable it, set the gcLoggingEnabled
property as follows:
Example of disabling GC logging
# ... jvmOptions: gcLoggingEnabled: false # ...
3.2.10.2. Configuring JVM options
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
Procedure
Edit the
jvmOptions
property in theKafka
,KafkaConnect
orKafkaConnectS2I
resource. For example:apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-cluster spec: kafka: # ... jvmOptions: "-Xmx": "8g" "-Xms": "8g" # ... zookeeper: # ...
Create or update the resource.
On OpenShift this can be done using
oc apply
:oc apply -f your-file
3.2.11. Container images
AMQ Streams allows you to configure container images which will be used for its components. Overriding container images is recommended only in special situations, where you need to use a different container registry, for example, because your network does not allow access to the container repository used by AMQ Streams. In such a case, you should either copy the AMQ Streams images or build them from the source. If the configured image is not compatible with AMQ Streams images, it might not work properly.
3.2.11.1. Container image configurations
The container image which should be used for a given component can be specified using the image
property in:
-
Kafka.spec.kafka
-
Kafka.spec.kafka.tlsSidecar
-
Kafka.spec.zookeeper
-
Kafka.spec.zookeeper.tlsSidecar
-
Kafka.spec.entityOperator.topicOperator
-
Kafka.spec.entityOperator.userOperator
-
Kafka.spec.entityOperator.tlsSidecar
-
KafkaConnect.spec
-
KafkaConnectS2I.spec
-
KafkaBridge.spec
3.2.11.1.1. Configuring the Kafka.spec.kafka.image
property
The Kafka.spec.kafka.image
property functions differently from the others, because AMQ Streams supports multiple versions of Kafka, each requiring its own image. The STRIMZI_KAFKA_IMAGES
environment variable of the Cluster Operator configuration is used to provide a mapping between Kafka versions and the corresponding images. This is used in combination with the Kafka.spec.kafka.image
and Kafka.spec.kafka.version
properties as follows:
-
If neither
Kafka.spec.kafka.image
nor Kafka.spec.kafka.version
are given in the custom resource, then the version
will default to the Cluster Operator’s default Kafka version, and the image will be the one corresponding to this version in the STRIMZI_KAFKA_IMAGES
. -
If
Kafka.spec.kafka.image
is given but Kafka.spec.kafka.version
is not, then the given image will be used and the version
will be assumed to be the Cluster Operator’s default Kafka version. -
If
Kafka.spec.kafka.version
is given but Kafka.spec.kafka.image
is not, then the image will be the one corresponding to this version in the STRIMZI_KAFKA_IMAGES
. -
If both
Kafka.spec.kafka.version
and Kafka.spec.kafka.image
are given, the given image will be used, and it will be assumed to contain a Kafka broker with the given version.
It is best to provide just Kafka.spec.kafka.version
and leave the Kafka.spec.kafka.image
property unspecified. This reduces the chances of making a mistake in configuring the Kafka
resource. If you need to change the images used for different versions of Kafka, it is better to configure the Cluster Operator’s STRIMZI_KAFKA_IMAGES
environment variable.
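For example, a minimal sketch that specifies only the version and leaves the image unspecified (the version string is illustrative; use a Kafka version supported by your Cluster Operator):
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    version: 2.2.1
    # ...
  zookeeper:
    # ...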
3.2.11.1.2. Configuring the image
property in other resources
For the image
property in the other custom resources, the given value will be used during deployment. If the image
property is missing, the image
specified in the Cluster Operator configuration will be used. If the image
name is not defined in the Cluster Operator configuration, then the default value will be used.
For Kafka broker TLS sidecar:
- Container image specified in the STRIMZI_DEFAULT_TLS_SIDECAR_KAFKA_IMAGE environment variable from the Cluster Operator configuration.
- registry.redhat.io/amq7/amqstreams-kafka-22 container image.
For Zookeeper nodes:
- Container image specified in the STRIMZI_DEFAULT_ZOOKEEPER_IMAGE environment variable from the Cluster Operator configuration.
- registry.redhat.io/amq7/amqstreams-kafka-22 container image.
For Zookeeper node TLS sidecar:
- Container image specified in the STRIMZI_DEFAULT_TLS_SIDECAR_ZOOKEEPER_IMAGE environment variable from the Cluster Operator configuration.
- registry.redhat.io/amq7/amqstreams-kafka-22 container image.
For Topic Operator:
- Container image specified in the STRIMZI_DEFAULT_TOPIC_OPERATOR_IMAGE environment variable from the Cluster Operator configuration.
- registry.redhat.io/amq7/amq-streams-operator:1.2.0 container image.
For User Operator:
- Container image specified in the STRIMZI_DEFAULT_USER_OPERATOR_IMAGE environment variable from the Cluster Operator configuration.
- registry.redhat.io/amq7/amq-streams-operator:1.2.0 container image.
For Entity Operator TLS sidecar:
- Container image specified in the STRIMZI_DEFAULT_TLS_SIDECAR_ENTITY_OPERATOR_IMAGE environment variable from the Cluster Operator configuration.
- registry.redhat.io/amq7/amqstreams-kafka-22 container image.
For Kafka Connect:
- Container image specified in the STRIMZI_DEFAULT_KAFKA_CONNECT_IMAGE environment variable from the Cluster Operator configuration.
- registry.redhat.io/amq7/amqstreams-kafka-22 container image.
For Kafka Connect with Source2Image support:
- Container image specified in the STRIMZI_DEFAULT_KAFKA_CONNECT_S2I_IMAGE environment variable from the Cluster Operator configuration.
- registry.redhat.io/amq7/amqstreams-kafka-22 container image.
Overriding container images is recommended only in special situations, where you need to use a different container registry, for example, because your network does not allow access to the container repository used by AMQ Streams. In such a case, you should either copy the AMQ Streams images or build them from source. If the configured image is not compatible with AMQ Streams images, it might not work properly.
Example of container image configuration
apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-cluster spec: kafka: # ... image: my-org/my-image:latest # ... zookeeper: # ...
3.2.11.2. Configuring container images
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
Procedure
Edit the
image
property in theKafka
,KafkaConnect
orKafkaConnectS2I
resource. For example:apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-cluster spec: kafka: # ... image: my-org/my-image:latest # ... zookeeper: # ...
Create or update the resource.
On OpenShift this can be done using
oc apply
:oc apply -f your-file
3.2.12. Configuring pod scheduling
When two applications are scheduled to the same OpenShift node, both applications might use the same resources, such as disk I/O, which can lead to performance degradation. The best ways to avoid such problems are to schedule Kafka pods so that they do not share nodes with other critical workloads, to use the right nodes, or to dedicate a set of nodes only to Kafka.
3.2.12.1. Scheduling pods based on other applications
3.2.12.1.1. Avoid sharing nodes with critical applications
Pod anti-affinity can be used to ensure that critical applications are never scheduled on the same node. When running a Kafka cluster, it is recommended to use pod anti-affinity to ensure that the Kafka brokers do not share nodes with other workloads, such as databases.
3.2.12.1.2. Affinity
Affinity can be configured using the affinity
property in the following resources:
-
Kafka.spec.kafka.template.pod
-
Kafka.spec.zookeeper.template.pod
-
Kafka.spec.entityOperator.template.pod
-
KafkaConnect.spec.template.pod
-
KafkaConnectS2I.spec.template.pod
-
KafkaBridge.spec.template.pod
The affinity configuration can include different types of affinity:
- Pod affinity and anti-affinity
- Node affinity
The format of the affinity
property follows the OpenShift specification. For more details, see the Kubernetes node and pod affinity documentation.
3.2.12.1.3. Configuring pod anti-affinity in Kafka components
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
Procedure
Edit the
affinity
property in the resource specifying the cluster deployment. Use labels to specify the pods which should not be scheduled on the same nodes. ThetopologyKey
should be set tokubernetes.io/hostname
to specify that the selected pods should not be scheduled on nodes with the same hostname. For example:apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka spec: kafka: # ... template: pod: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: application operator: In values: - postgresql - mongodb topologyKey: "kubernetes.io/hostname" # ... zookeeper: # ...
Create or update the resource.
On OpenShift this can be done using
oc apply
:oc apply -f your-file
3.2.12.2. Scheduling pods to specific nodes
3.2.12.2.1. Node scheduling
The OpenShift cluster usually consists of many different types of worker nodes. Some are optimized for CPU heavy workloads, some for memory, while others might be optimized for storage (fast local SSDs) or network. Using different nodes helps to optimize both costs and performance. To achieve the best possible performance, it is important to allow scheduling of AMQ Streams components to use the right nodes.
OpenShift uses node affinity to schedule workloads onto specific nodes. Node affinity allows you to create a scheduling constraint for the node on which the pod will be scheduled. The constraint is specified as a label selector. You can specify the label using either the built-in node label like beta.kubernetes.io/instance-type
or custom labels to select the right node.
3.2.12.2.2. Affinity
Affinity can be configured using the affinity
property in the following resources:
-
Kafka.spec.kafka.template.pod
-
Kafka.spec.zookeeper.template.pod
-
Kafka.spec.entityOperator.template.pod
-
KafkaConnect.spec.template.pod
-
KafkaConnectS2I.spec.template.pod
-
KafkaBridge.spec.template.pod
The affinity configuration can include different types of affinity:
- Pod affinity and anti-affinity
- Node affinity
The format of the affinity
property follows the OpenShift specification. For more details, see the Kubernetes node and pod affinity documentation.
3.2.12.2.3. Configuring node affinity in Kafka components
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
Procedure
Label the nodes where AMQ Streams components should be scheduled.
On OpenShift this can be done using
oc label
:oc label node your-node node-type=fast-network
Alternatively, some of the existing labels might be reused.
Edit the
affinity
property in the resource specifying the cluster deployment. For example:apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka spec: kafka: # ... template: pod: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: node-type operator: In values: - fast-network # ... zookeeper: # ...
Create or update the resource.
On OpenShift this can be done using
oc apply
:oc apply -f your-file
3.2.12.3. Using dedicated nodes
3.2.12.3.1. Dedicated nodes
Cluster administrators can mark selected OpenShift nodes as tainted. Nodes with taints are excluded from regular scheduling and normal pods will not be scheduled to run on them. Only services which can tolerate the taint set on the node can be scheduled on it. The only other services running on such nodes will be system services such as log collectors or software defined networks.
Taints can be used to create dedicated nodes. Running Kafka and its components on dedicated nodes can have many advantages. There will be no other applications running on the same nodes which could cause disturbance or consume the resources needed for Kafka. That can lead to improved performance and stability.
To schedule Kafka pods on the dedicated nodes, configure node affinity and tolerations.
3.2.12.3.2. Affinity
Affinity can be configured using the affinity
property in the following resources:
-
Kafka.spec.kafka.template.pod
-
Kafka.spec.zookeeper.template.pod
-
Kafka.spec.entityOperator.template.pod
-
KafkaConnect.spec.template.pod
-
KafkaConnectS2I.spec.template.pod
-
KafkaBridge.spec.template.pod
The affinity configuration can include different types of affinity:
- Pod affinity and anti-affinity
- Node affinity
The format of the affinity
property follows the OpenShift specification. For more details, see the Kubernetes node and pod affinity documentation.
3.2.12.3.3. Tolerations
Tolerations can be configured using the tolerations
property in the following resources:
-
Kafka.spec.kafka.template.pod
-
Kafka.spec.zookeeper.template.pod
-
Kafka.spec.entityOperator.template.pod
-
KafkaConnect.spec.template.pod
-
KafkaConnectS2I.spec.template.pod
-
KafkaBridge.spec.template.pod
The format of the tolerations
property follows the OpenShift specification. For more details, see the Kubernetes taints and tolerations.
3.2.12.3.4. Setting up dedicated nodes and scheduling pods on them
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
Procedure
- Select the nodes which should be used as dedicated nodes.
- Make sure there are no workloads scheduled on these nodes.
Set the taints on the selected nodes:
On OpenShift this can be done using
oc adm taint
:oc adm taint node your-node dedicated=Kafka:NoSchedule
Additionally, add a label to the selected nodes.
On OpenShift this can be done using
oc label
:oc label node your-node dedicated=Kafka
Edit the
affinity
andtolerations
properties in the resource specifying the cluster deployment. For example:apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka spec: kafka: # ... template: pod: tolerations: - key: "dedicated" operator: "Equal" value: "Kafka" effect: "NoSchedule" affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: dedicated operator: In values: - Kafka # ... zookeeper: # ...
Create or update the resource.
On OpenShift this can be done using
oc apply
:oc apply -f your-file
3.2.13. Using external configuration and secrets
Kafka Connect connectors are configured using an HTTP REST interface. The connector configuration is passed to Kafka Connect as part of an HTTP request and stored within Kafka itself.
Some parts of the configuration of a Kafka Connect connector can be externalized using ConfigMaps or Secrets. You can then reference the configuration values in HTTP REST commands (this keeps the configuration separate and more secure, if needed). This method applies especially to confidential data, such as usernames, passwords, or certificates.
ConfigMaps and Secrets are standard OpenShift resources used for storing configurations and confidential data.
3.2.13.1. Storing connector configurations externally
You can mount ConfigMaps or Secrets into a Kafka Connect pod as volumes or environment variables. Volumes and environment variables are configured in the externalConfiguration
property in KafkaConnect.spec
and KafkaConnectS2I.spec
.
3.2.13.1.1. External configuration as environment variables
The env
property is used to specify one or more environment variables. These variables can contain a value from either a ConfigMap or a Secret.
The names of user-defined environment variables cannot start with KAFKA_
or STRIMZI_
.
To mount a value from a Secret to an environment variable, use the valueFrom
property and the secretKeyRef
as shown in the following example.
Example of an environment variable set to a value from a Secret
apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaConnect metadata: name: my-connect spec: # ... externalConfiguration: env: - name: MY_ENVIRONMENT_VARIABLE valueFrom: secretKeyRef: name: my-secret key: my-key
A common use case for mounting Secrets to environment variables is when your connector needs to communicate with Amazon AWS and needs to read the AWS_ACCESS_KEY_ID
and AWS_SECRET_ACCESS_KEY
environment variables with credentials.
To mount a value from a ConfigMap to an environment variable, use configMapKeyRef
in the valueFrom
property as shown in the following example.
Example of an environment variable set to a value from a ConfigMap
apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaConnect metadata: name: my-connect spec: # ... externalConfiguration: env: - name: MY_ENVIRONMENT_VARIABLE valueFrom: configMapKeyRef: name: my-config-map key: my-key
3.2.13.1.2. External configuration as volumes
You can also mount ConfigMaps or Secrets to a Kafka Connect pod as volumes. Using volumes instead of environment variables is useful in the following scenarios:
- Mounting truststores or keystores with TLS certificates
- Mounting a properties file that is used to configure Kafka Connect connectors
In the volumes
property of the externalConfiguration
resource, list the ConfigMaps or Secrets that will be mounted as volumes. Each volume must specify a name in the name
property and a reference to a ConfigMap or Secret.
Example of volumes with external configuration
apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaConnect metadata: name: my-connect spec: # ... externalConfiguration: volumes: - name: connector1 configMap: name: connector1-configuration - name: connector1-certificates secret: secretName: connector1-certificates
The volumes will be mounted inside the Kafka Connect containers in the path /opt/kafka/external-configuration/<volume-name>
. For example, the files from a volume named connector1
would appear in the directory /opt/kafka/external-configuration/connector1
.
The FileConfigProvider
has to be used to read the values from the mounted properties files in connector configurations.
3.2.13.2. Mounting Secrets as environment variables
You can create an OpenShift Secret and mount it to Kafka Connect as an environment variable.
Prerequisites
- A running Cluster Operator.
Procedure
Create a secret containing the information that will be mounted as an environment variable. For example:
apiVersion: v1 kind: Secret metadata: name: aws-creds type: Opaque data: awsAccessKey: QUtJQVhYWFhYWFhYWFhYWFg= awsSecretAccessKey: Ylhsd1lYTnpkMjl5WkE=
Create or edit the Kafka Connect resource. Configure the
externalConfiguration
section of theKafkaConnect
orKafkaConnectS2I
custom resource to reference the secret. For example:apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaConnect metadata: name: my-connect spec: # ... externalConfiguration: env: - name: AWS_ACCESS_KEY_ID valueFrom: secretKeyRef: name: aws-creds key: awsAccessKey - name: AWS_SECRET_ACCESS_KEY valueFrom: secretKeyRef: name: aws-creds key: awsSecretAccessKey
Apply the changes to your Kafka Connect deployment.
On OpenShift use
oc apply
:oc apply -f your-file
The environment variables are now available for use when developing your connectors.
Additional resources
-
For more information about external configuration in Kafka Connect, see Section C.63, “
ExternalConfiguration
schema reference”.
3.2.13.3. Mounting Secrets as volumes
You can create an OpenShift Secret, mount it as a volume to Kafka Connect, and then use it to configure a Kafka Connect connector.
Prerequisites
- A running Cluster Operator.
Procedure
Create a secret containing a properties file that defines the configuration options for your connector configuration. For example:
apiVersion: v1 kind: Secret metadata: name: mysecret type: Opaque stringData: connector.properties: |- dbUsername: my-user dbPassword: my-password
Create or edit the Kafka Connect resource. Configure the
FileConfigProvider
in theconfig
section and theexternalConfiguration
section of theKafkaConnect
orKafkaConnectS2I
custom resource to reference the secret. For example:apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaConnect metadata: name: my-connect spec: # ... config: config.providers: file config.providers.file.class: org.apache.kafka.common.config.provider.FileConfigProvider #... externalConfiguration: volumes: - name: connector-config secret: secretName: mysecret
Apply the changes to your Kafka Connect deployment.
On OpenShift use
oc apply
:oc apply -f your-file
Use the values from the mounted properties file in your JSON payload with connector configuration. For example:
{ "name":"my-connector", "config":{ "connector.class":"MyDbConnector", "tasks.max":"3", "database": "my-postgresql:5432" "username":"${file:/opt/kafka/external-configuration/connector-config/connector.properties:dbUsername}", "password":"${file:/opt/kafka/external-configuration/connector-config/connector.properties:dbPassword}", # ... } }
Additional resources
-
For more information about external configuration in Kafka Connect, see Section C.63, “
ExternalConfiguration
schema reference”.
3.2.14. List of resources created as part of Kafka Connect cluster
The following resources will be created by the Cluster Operator in the OpenShift cluster:
- connect-cluster-name-connect
- Deployment which is in charge of creating the Kafka Connect worker node pods.
- connect-cluster-name-connect-api
- Service which exposes the REST interface for managing the Kafka Connect cluster.
- connect-cluster-name-config
- ConfigMap which contains the Kafka Connect ancillary configuration and is mounted as a volume by the Kafka Connect pods.
- connect-cluster-name-connect
- Pod Disruption Budget configured for the Kafka Connect worker nodes.
3.3. Kafka Connect cluster with Source2Image support
The full schema of the KafkaConnectS2I
resource is described in the Section C.69, “KafkaConnectS2I
schema reference”. All labels that are applied to the desired KafkaConnectS2I
resource will also be applied to the OpenShift resources making up the Kafka Connect cluster with Source2Image support. This provides a convenient mechanism for resources to be labeled as required.
3.3.1. Replicas
Kafka Connect clusters can run multiple nodes. The number of nodes is defined in the KafkaConnect
and KafkaConnectS2I
resources. Running a Kafka Connect cluster with multiple nodes can provide better availability and scalability. However, when running Kafka Connect on OpenShift it is not absolutely necessary to run multiple nodes for high availability. If the node where Kafka Connect is deployed crashes, OpenShift will automatically reschedule the Kafka Connect pod to a different node. However, running Kafka Connect with multiple nodes can provide faster failover times, because the other nodes will already be up and running.
3.3.1.1. Configuring the number of nodes
The number of Kafka Connect nodes is configured using the replicas
property in KafkaConnect.spec
and KafkaConnectS2I.spec
.
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
Procedure
Edit the
replicas
property in theKafkaConnect
orKafkaConnectS2I
resource. For example:apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaConnectS2I metadata: name: my-cluster spec: # ... replicas: 3 # ...
Create or update the resource.
On OpenShift this can be done using
oc apply
:oc apply -f your-file
3.3.2. Bootstrap servers
A Kafka Connect cluster always works in combination with a Kafka cluster. A Kafka cluster is specified as a list of bootstrap servers. On OpenShift, the list should ideally contain the Kafka cluster bootstrap service named cluster-name-kafka-bootstrap
, with port 9092 for plain traffic or 9093 for encrypted traffic.
The list of bootstrap servers is configured in the bootstrapServers
property in KafkaConnect.spec
and KafkaConnectS2I.spec
. The servers must be defined as a comma-separated list specifying one or more Kafka brokers, or a service pointing to Kafka brokers, specified as hostname:port
pairs.
When using Kafka Connect with a Kafka cluster not managed by AMQ Streams, you can specify the bootstrap servers list according to the configuration of the cluster.
3.3.2.1. Configuring bootstrap servers
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
Procedure
Edit the
bootstrapServers
property in theKafkaConnect
orKafkaConnectS2I
resource. For example:apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaConnect metadata: name: my-cluster spec: # ... bootstrapServers: my-cluster-kafka-bootstrap:9092 # ...
Create or update the resource.
On OpenShift this can be done using
oc apply
:oc apply -f your-file
3.3.3. Connecting to Kafka brokers using TLS
By default, Kafka Connect tries to connect to Kafka brokers using a plain text connection. If you prefer to use TLS, additional configuration is required.
3.3.3.1. TLS support in Kafka Connect
TLS support is configured in the tls
property in KafkaConnect.spec
and KafkaConnectS2I.spec
. The tls
property contains a list of secrets with key names under which the certificates are stored. The certificates must be stored in X509 format.
An example showing TLS configuration with multiple certificates
apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaConnect metadata: name: my-cluster spec: # ... tls: trustedCertificates: - secretName: my-secret certificate: ca.crt - secretName: my-other-secret certificate: certificate.crt # ...
When multiple certificates are stored in the same secret, the secret can be listed multiple times.
An example showing TLS configuration with multiple certificates from the same secret
apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaConnectS2I metadata: name: my-cluster spec: # ... tls: trustedCertificates: - secretName: my-secret certificate: ca.crt - secretName: my-secret certificate: ca2.crt # ...
3.3.3.2. Configuring TLS in Kafka Connect
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
-
If they exist, the name of the
Secret
for the certificate used for TLS Server Authentication, and the key under which the certificate is stored in theSecret
Procedure
(Optional) If they do not already exist, prepare the TLS certificate used in authentication in a file and create a
Secret
. Note: The secrets created by the Cluster Operator for the Kafka cluster may be used directly.
On OpenShift this can be done using
oc create
:oc create secret generic my-secret --from-file=my-file.crt
Edit the
tls
property in theKafkaConnect
orKafkaConnectS2I
resource. For example:apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaConnect metadata: name: my-connect spec: # ... tls: trustedCertificates: - secretName: my-cluster-cluster-cert certificate: ca.crt # ...
Create or update the resource.
On OpenShift this can be done using
oc apply
:oc apply -f your-file
3.3.4. Connecting to Kafka brokers with Authentication
By default, Kafka Connect will try to connect to Kafka brokers without authentication. Authentication is enabled through the KafkaConnect
and KafkaConnectS2I
resources.
3.3.4.1. Authentication support in Kafka Connect
Authentication is configured through the authentication
property in KafkaConnect.spec
and KafkaConnectS2I.spec
. The authentication
property specifies the type of authentication mechanism which should be used and additional configuration details depending on the mechanism. The currently supported authentication types are:
- TLS client authentication
- SASL-based authentication using the SCRAM-SHA-512 mechanism
- SASL-based authentication using the PLAIN mechanism
3.3.4.1.1. TLS Client Authentication
To use TLS client authentication, set the type
property to the value tls
. TLS client authentication uses a TLS certificate to authenticate. The certificate is specified in the certificateAndKey
property and is always loaded from an OpenShift secret. In the secret, the certificate and the private key must be stored under two separate keys, with the certificate in X509 format.
TLS client authentication can be used only with TLS connections. For more details about TLS configuration in Kafka Connect see Section 3.3.3, “Connecting to Kafka brokers using TLS”.
An example TLS client authentication configuration
apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaConnect metadata: name: my-cluster spec: # ... authentication: type: tls certificateAndKey: secretName: my-secret certificate: public.crt key: private.key # ...
3.3.4.1.2. SASL based SCRAM-SHA-512 authentication
To configure Kafka Connect to use SASL-based SCRAM-SHA-512 authentication, set the type
property to scram-sha-512
. This authentication mechanism requires a username and password.
-
Specify the username in the
username
property. -
In the
passwordSecret
property, specify a link to aSecret
containing the password. ThesecretName
property contains the name of theSecret
and thepassword
property contains the name of the key under which the password is stored inside theSecret
.
Do not specify the actual password in the password
field.
An example SASL based SCRAM-SHA-512 client authentication configuration
apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaConnect metadata: name: my-cluster spec: # ... authentication: type: scram-sha-512 username: my-connect-user passwordSecret: secretName: my-connect-user password: my-connect-password-key # ...
3.3.4.1.3. SASL based PLAIN authentication
To configure Kafka Connect to use SASL-based PLAIN authentication, set the type
property to plain
. This authentication mechanism requires a username and password.
The SASL PLAIN mechanism will transfer the username and password across the network in cleartext. Only use SASL PLAIN authentication if TLS encryption is enabled.
- Specify the username in the username property.
- In the passwordSecret property, specify a link to a Secret containing the password. The secretName property contains the name of such a Secret and the password property contains the name of the key under which the password is stored inside the Secret.
Do not specify the actual password in the password
field.
An example SASL-based PLAIN client authentication configuration
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaConnect
metadata:
  name: my-cluster
spec:
  # ...
  authentication:
    type: plain
    username: my-connect-user
    passwordSecret:
      secretName: my-connect-user
      password: my-connect-password-key
  # ...
3.3.4.2. Configuring TLS client authentication in Kafka Connect
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
- If they exist, the name of the Secret with the public and private keys used for TLS Client Authentication, and the keys under which they are stored in the Secret
Procedure
(Optional) If they do not already exist, prepare the keys used for authentication in a file and create the Secret.
Note: Secrets created by the User Operator may be used.
On OpenShift this can be done using oc create:
oc create secret generic my-secret --from-file=my-public.crt --from-file=my-private.key
Edit the authentication property in the KafkaConnect or KafkaConnectS2I resource. For example:
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaConnect
metadata:
  name: my-connect
spec:
  # ...
  authentication:
    type: tls
    certificateAndKey:
      secretName: my-secret
      certificate: my-public.crt
      key: my-private.key
  # ...
Create or update the resource.
On OpenShift this can be done using oc apply:
oc apply -f your-file
3.3.4.3. Configuring SCRAM-SHA-512 authentication in Kafka Connect
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
- Username of the user which should be used for authentication
- If they exist, the name of the Secret with the password used for authentication and the key under which the password is stored in the Secret
Procedure
(Optional) If it does not already exist, prepare a file with the password used in authentication and create the Secret.
Note: Secrets created by the User Operator may be used.
On OpenShift this can be done using oc create:
echo -n '1f2d1e2e67df' > <my-password.txt>
oc create secret generic <my-secret> --from-file=<my-password.txt>
Edit the authentication property in the KafkaConnect or KafkaConnectS2I resource. For example:
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaConnect
metadata:
  name: my-connect
spec:
  # ...
  authentication:
    type: scram-sha-512
    username: <my-username>
    passwordSecret:
      secretName: <my-secret>
      password: <my-password.txt>
  # ...
Create or update the resource.
On OpenShift this can be done using oc apply:
oc apply -f your-file
3.3.5. Kafka Connect configuration
AMQ Streams allows you to customize the configuration of Apache Kafka Connect nodes by editing certain options listed in the Apache Kafka documentation.
Configuration options that cannot be configured relate to:
- Kafka cluster bootstrap address
- Security (Encryption, Authentication, and Authorization)
- Listener / REST interface configuration
- Plugin path configuration
These options are automatically configured by AMQ Streams.
3.3.5.1. Kafka Connect configuration
Kafka Connect is configured using the config
property in KafkaConnect.spec
and KafkaConnectS2I.spec
. This property contains the Kafka Connect configuration options as keys. The values can be one of the following JSON types:
- String
- Number
- Boolean
You can specify and configure the options listed in the Apache Kafka documentation with the exception of those options that are managed directly by AMQ Streams. Specifically, configuration options with keys equal to or starting with one of the following strings are forbidden:
- ssl.
- sasl.
- security.
- listeners
- plugin.path
- rest.
- bootstrap.servers
When a forbidden option is present in the config property, it is ignored and a warning message is printed to the Cluster Operator log file. All other options are passed to Kafka Connect.
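For illustration, in a hypothetical config block such as the following, the key with the forbidden ssl. prefix would be ignored with a warning, while the other key is passed through:
# ...
config:
  group.id: my-connect-cluster                    # passed to Kafka Connect
  ssl.endpoint.identification.algorithm: HTTPS    # starts with "ssl." and is therefore ignored
# ...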
The Cluster Operator does not validate keys or values in the config
object provided. When an invalid configuration is provided, the Kafka Connect cluster might not start or might become unstable. In this circumstance, fix the configuration in the KafkaConnect.spec.config
or KafkaConnectS2I.spec.config
object, then the Cluster Operator can roll out the new configuration to all Kafka Connect nodes.
Certain options have default values:
- group.id with default value connect-cluster
- offset.storage.topic with default value connect-cluster-offsets
- config.storage.topic with default value connect-cluster-configs
- status.storage.topic with default value connect-cluster-status
- key.converter with default value org.apache.kafka.connect.json.JsonConverter
- value.converter with default value org.apache.kafka.connect.json.JsonConverter
These options are automatically configured if they are not present in the KafkaConnect.spec.config or KafkaConnectS2I.spec.config properties.
Example Kafka Connect configuration
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaConnect
metadata:
  name: my-connect
spec:
  # ...
  config:
    group.id: my-connect-cluster
    offset.storage.topic: my-connect-cluster-offsets
    config.storage.topic: my-connect-cluster-configs
    status.storage.topic: my-connect-cluster-status
    key.converter: org.apache.kafka.connect.json.JsonConverter
    value.converter: org.apache.kafka.connect.json.JsonConverter
    key.converter.schemas.enable: true
    value.converter.schemas.enable: true
    config.storage.replication.factor: 3
    offset.storage.replication.factor: 3
    status.storage.replication.factor: 3
  # ...
3.3.5.2. Configuring Kafka Connect
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
Procedure
Edit the config property in the KafkaConnect or KafkaConnectS2I resource. For example:
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaConnect
metadata:
  name: my-connect
spec:
  # ...
  config:
    group.id: my-connect-cluster
    offset.storage.topic: my-connect-cluster-offsets
    config.storage.topic: my-connect-cluster-configs
    status.storage.topic: my-connect-cluster-status
    key.converter: org.apache.kafka.connect.json.JsonConverter
    value.converter: org.apache.kafka.connect.json.JsonConverter
    key.converter.schemas.enable: true
    value.converter.schemas.enable: true
    config.storage.replication.factor: 3
    offset.storage.replication.factor: 3
    status.storage.replication.factor: 3
  # ...
Create or update the resource.
On OpenShift this can be done using oc apply:
oc apply -f your-file
3.3.6. CPU and memory resources
For every deployed container, AMQ Streams allows you to request specific resources and define the maximum consumption of those resources.
AMQ Streams supports two types of resources:
- CPU
- Memory
AMQ Streams uses the OpenShift syntax for specifying CPU and memory resources.
3.3.6.1. Resource limits and requests
Resource limits and requests are configured using the resources property in the following resources:
- Kafka.spec.kafka
- Kafka.spec.kafka.tlsSidecar
- Kafka.spec.zookeeper
- Kafka.spec.zookeeper.tlsSidecar
- Kafka.spec.entityOperator.topicOperator
- Kafka.spec.entityOperator.userOperator
- Kafka.spec.entityOperator.tlsSidecar
- KafkaConnect.spec
- KafkaConnectS2I.spec
- KafkaBridge.spec
Additional resources
- For more information about managing computing resources on OpenShift, see Managing Compute Resources for Containers.
3.3.6.1.1. Resource requests
Requests specify the resources to reserve for a given container. Reserving the resources ensures that they are always available.
If the resource request is for more than the available free resources in the OpenShift cluster, the pod is not scheduled.
Resource requests are specified in the requests property. Resource requests currently supported by AMQ Streams:
- cpu
- memory
A request may be configured for one or more supported resources.
Example resource request configuration with all resources
# ...
resources:
  requests:
    cpu: 12
    memory: 64Gi
# ...
3.3.6.1.2. Resource limits
Limits specify the maximum resources that can be consumed by a given container. The limit is not reserved and might not always be available. A container can use the resources up to the limit only when they are available. Resource limits should always be higher than the resource requests.
Resource limits are specified in the limits property. Resource limits currently supported by AMQ Streams:
- cpu
- memory
A limit may be configured for one or more supported resources.
Example resource limits configuration
# ...
resources:
  limits:
    cpu: 12
    memory: 64Gi
# ...
3.3.6.1.3. Supported CPU formats
CPU requests and limits are supported in the following formats:
- Number of CPU cores as an integer (5 CPU cores) or decimal (2.5 CPU cores).
- Number of millicpus / millicores (100m), where 1000 millicores is the same as 1 CPU core.
Example CPU units
# ...
resources:
  requests:
    cpu: 500m
  limits:
    cpu: 2.5
# ...
The computing power of 1 CPU core may differ depending on the platform where OpenShift is deployed.
Additional resources
- For more information on CPU specification, see the Meaning of CPU.
3.3.6.1.4. Supported memory formats
Memory requests and limits are specified in megabytes, gigabytes, mebibytes, and gibibytes.
- To specify memory in megabytes, use the M suffix. For example, 1000M.
- To specify memory in gigabytes, use the G suffix. For example, 1G.
- To specify memory in mebibytes, use the Mi suffix. For example, 1000Mi.
- To specify memory in gibibytes, use the Gi suffix. For example, 1Gi.
An example of using different memory units
# ...
resources:
  requests:
    memory: 512Mi
  limits:
    memory: 2Gi
# ...
Additional resources
- For more details about memory specification and additional supported units, see Meaning of memory.
3.3.6.2. Configuring resource requests and limits
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
Procedure
Edit the resources property in the resource specifying the cluster deployment. For example:
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
spec:
  kafka:
    # ...
    resources:
      requests:
        cpu: "8"
        memory: 64Gi
      limits:
        cpu: "12"
        memory: 128Gi
    # ...
  zookeeper:
    # ...
Create or update the resource.
On OpenShift this can be done using oc apply:
oc apply -f your-file
Additional resources
- For more information about the schema, see the Resources schema reference.
3.3.7. Logging
This section provides information on loggers and how to configure log levels.
You can set the log levels by specifying the loggers and their levels directly (inline) or use a custom (external) config map.
3.3.7.1. Kafka Connect with Source2Image loggers
Kafka Connect with Source2Image support has its own configurable loggers:
- connect.root.logger.level
- log4j.logger.org.apache.zookeeper
- log4j.logger.org.I0Itec.zkclient
- log4j.logger.org.reflections
3.3.7.2. Specifying inline logging
Procedure
Edit the YAML file to specify the loggers and logging level for the required components.
For example, the logging level here is set to INFO:
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaConnectS2I
spec:
  # ...
  logging:
    type: inline
    loggers:
      logger.name: "INFO"
  # ...
You can set the log level to INFO, ERROR, WARN, TRACE, DEBUG, FATAL or OFF.
For more information about the log levels, see the log4j manual.
Create or update the Kafka resource in OpenShift.
On OpenShift this can be done using oc apply:
oc apply -f your-file
3.3.7.3. Specifying an external ConfigMap for logging
Procedure
Edit the YAML file to specify the name of the ConfigMap to use for the required components. For example:
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaConnectS2I
spec:
  # ...
  logging:
    type: external
    name: customConfigMap
  # ...
Remember to place your custom ConfigMap under the log4j.properties or log4j2.properties key; a sketch of such a ConfigMap follows this procedure.
Create or update the Kafka resource in OpenShift.
On OpenShift this can be done using oc apply:
oc apply -f your-file
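For reference, a minimal sketch of such a ConfigMap is shown below; the name customConfigMap matches the example above, and the Log4j settings are illustrative placeholders rather than defaults shipped with AMQ Streams:
apiVersion: v1
kind: ConfigMap
metadata:
  name: customConfigMap
data:
  log4j.properties: |
    # Illustrative Log4j configuration; adjust loggers and levels as needed.
    log4j.rootLogger=INFO, CONSOLE
    log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender
    log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout
    log4j.appender.CONSOLE.layout.ConversionPattern=%d{ISO8601} %p %m (%c) [%t]%n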
Garbage collector (GC) logging can also be enabled (or disabled). For more information on GC, see Section 3.3.10.1, “JVM configuration”.
3.3.8. Healthchecks
Healthchecks are periodic tests which verify the health of an application. When a Healthcheck probe fails, OpenShift assumes that the application is not healthy and attempts to fix it.
OpenShift supports two types of Healthcheck probes:
- Liveness probes
- Readiness probes
For more details about the probes, see Configure Liveness and Readiness Probes. Both types of probes are used in AMQ Streams components.
Users can configure selected options for liveness and readiness probes.
3.3.8.1. Healthcheck configurations
Liveness and readiness probes can be configured using the livenessProbe and readinessProbe properties in the following resources:
- Kafka.spec.kafka
- Kafka.spec.kafka.tlsSidecar
- Kafka.spec.zookeeper
- Kafka.spec.zookeeper.tlsSidecar
- Kafka.spec.entityOperator.tlsSidecar
- Kafka.spec.entityOperator.topicOperator
- Kafka.spec.entityOperator.userOperator
- KafkaConnect.spec
- KafkaConnectS2I.spec
- KafkaBridge.spec
Both livenessProbe and readinessProbe support two additional options:
- initialDelaySeconds
- timeoutSeconds
The initialDelaySeconds
property defines the initial delay before the probe is tried for the first time. Default is 15 seconds.
The timeoutSeconds property defines the timeout of the probe. Default is 5 seconds.
An example of liveness and readiness probe configuration
# ...
readinessProbe:
  initialDelaySeconds: 15
  timeoutSeconds: 5
livenessProbe:
  initialDelaySeconds: 15
  timeoutSeconds: 5
# ...
3.3.8.2. Configuring healthchecks
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
Procedure
Edit the livenessProbe or readinessProbe property in the Kafka, KafkaConnect or KafkaConnectS2I resource. For example:
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    readinessProbe:
      initialDelaySeconds: 15
      timeoutSeconds: 5
    livenessProbe:
      initialDelaySeconds: 15
      timeoutSeconds: 5
    # ...
  zookeeper:
    # ...
Create or update the resource.
On OpenShift this can be done using oc apply:
oc apply -f your-file
3.3.9. Prometheus metrics
AMQ Streams supports Prometheus metrics using Prometheus JMX exporter to convert the JMX metrics supported by Apache Kafka and Zookeeper to Prometheus metrics. When metrics are enabled, they are exposed on port 9404.
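As an illustration of how these metrics might be scraped, the following Prometheus configuration fragment is a minimal sketch, assuming pod discovery through the Kubernetes API; the job name and relabeling rule are examples, not part of AMQ Streams:
scrape_configs:
  - job_name: kafka-metrics
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # Keep only containers that expose the Prometheus JMX exporter port (9404).
      - source_labels: [__meta_kubernetes_pod_container_port_number]
        regex: "9404"
        action: keep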
3.3.9.1. Metrics configuration
Prometheus metrics are enabled by configuring the metrics property in the following resources:
- Kafka.spec.kafka
- Kafka.spec.zookeeper
- KafkaConnect.spec
- KafkaConnectS2I.spec
When the metrics property is not defined in the resource, the Prometheus metrics are disabled. To enable Prometheus metrics export without any further configuration, you can set it to an empty object ({}).
Example of enabling metrics without any further configuration
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    metrics: {}
    # ...
  zookeeper:
    # ...
The metrics
property might contain additional configuration for the Prometheus JMX exporter.
Example of enabling metrics with additional Prometheus JMX Exporter configuration
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    metrics:
      lowercaseOutputName: true
      rules:
        - pattern: "kafka.server<type=(.+), name=(.+)PerSec\\w*><>Count"
          name: "kafka_server_$1_$2_total"
        - pattern: "kafka.server<type=(.+), name=(.+)PerSec\\w*, topic=(.+)><>Count"
          name: "kafka_server_$1_$2_total"
          labels:
            topic: "$3"
    # ...
  zookeeper:
    # ...
3.3.9.2. Configuring Prometheus metrics
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
Procedure
Edit the metrics property in the Kafka, KafkaConnect or KafkaConnectS2I resource. For example:
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    metrics:
      lowercaseOutputName: true
    # ...
  zookeeper:
    # ...
Create or update the resource.
On OpenShift this can be done using oc apply:
oc apply -f your-file
3.3.10. JVM Options
Apache Kafka and Apache Zookeeper run inside a Java Virtual Machine (JVM). JVM configuration options optimize the performance for different platforms and architectures. AMQ Streams allows you to configure some of these options.
3.3.10.1. JVM configuration
JVM options can be configured using the jvmOptions property in the following resources:
- Kafka.spec.kafka
- Kafka.spec.zookeeper
- KafkaConnect.spec
- KafkaConnectS2I.spec
Only a selected subset of available JVM options can be configured. The following options are supported:
-Xms and -Xmx
-Xms
configures the minimum initial allocation heap size when the JVM starts. -Xmx
configures the maximum heap size.
The units accepted by JVM settings such as -Xmx and -Xms are those accepted by the JDK java binary in the corresponding image. Accordingly, 1g or 1G means 1,073,741,824 bytes, and Gi is not a valid unit suffix. This is in contrast to the units used for memory requests and limits, which follow the OpenShift convention where 1G means 1,000,000,000 bytes, and 1Gi means 1,073,741,824 bytes.
The default values used for -Xms and -Xmx depend on whether there is a memory limit configured for the container:
- If there is a memory limit then the JVM’s minimum and maximum memory will be set to a value corresponding to the limit.
- If there is no memory limit then the JVM's minimum memory will be set to 128M and the JVM's maximum memory will not be defined. This allows the JVM's memory to grow as needed, which is ideal for single node environments in test and development.
Setting -Xmx explicitly requires some care:
- The JVM's overall memory usage will be approximately 4 × the maximum heap, as configured by -Xmx.
- If -Xmx is set without also setting an appropriate OpenShift memory limit, it is possible that the container will be killed should the OpenShift node experience memory pressure (from other Pods running on it).
- If -Xmx is set without also setting an appropriate OpenShift memory request, it is possible that the container will be scheduled to a node with insufficient memory. In this case, the container will not start but crash (immediately if -Xms is set to -Xmx, or at some later time if not).
When setting -Xmx explicitly, it is recommended to:
- set the memory request and the memory limit to the same value,
- use a memory request that is at least 4.5 × the -Xmx,
- consider setting -Xms to the same value as -Xmx.
Containers doing lots of disk I/O (such as Kafka broker containers) will need to leave some memory available for use as operating system page cache. On such containers, the requested memory should be significantly higher than the memory used by the JVM.
Example fragment configuring -Xmx and -Xms
# ...
jvmOptions:
  "-Xmx": "2g"
  "-Xms": "2g"
# ...
In the above example, the JVM will use 2 GiB (2,147,483,648 bytes) for its heap. Its total memory usage will be approximately 8 GiB.
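Putting the recommendations above together, a fragment along the following lines pairs an explicit heap size with matching OpenShift memory settings; the values are illustrative only:
# ...
resources:
  requests:
    memory: 10Gi   # at least 4.5 × the 2g heap configured below
  limits:
    memory: 10Gi   # same value as the request
jvmOptions:
  "-Xmx": "2g"
  "-Xms": "2g"
# ...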
Setting the same value for initial (-Xms) and maximum (-Xmx) heap sizes avoids the JVM having to allocate memory after startup, at the cost of possibly allocating more heap than is really needed. For Kafka and Zookeeper pods such allocation could cause unwanted latency. For Kafka Connect, avoiding over-allocation may be the most important concern, especially in distributed mode where the effects of over-allocation will be multiplied by the number of consumers.
-server
-server
enables the server JVM. This option can be set to true or false.
Example fragment configuring -server
# ...
jvmOptions:
  "-server": true
# ...
When neither of the two options (-server
and -XX
) is specified, the default Apache Kafka configuration of KAFKA_JVM_PERFORMANCE_OPTS
will be used.
-XX
The -XX object can be used for configuring advanced runtime options of a JVM. The -server and -XX options are used to configure the KAFKA_JVM_PERFORMANCE_OPTS option of Apache Kafka.
Example showing the use of the -XX object
jvmOptions:
  "-XX":
    "UseG1GC": true
    "MaxGCPauseMillis": 20
    "InitiatingHeapOccupancyPercent": 35
    "ExplicitGCInvokesConcurrent": true
    "UseParNewGC": false
The example configuration above will result in the following JVM options:
-XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:+ExplicitGCInvokesConcurrent -XX:-UseParNewGC
When neither of the two options (-server
and -XX
) is specified, the default Apache Kafka configuration of KAFKA_JVM_PERFORMANCE_OPTS
will be used.
3.3.10.1.1. Garbage collector logging
The jvmOptions
section also allows you to enable and disable garbage collector (GC) logging. GC logging is enabled by default. To disable it, set the gcLoggingEnabled
property as follows:
Example of disabling GC logging
# ...
jvmOptions:
  gcLoggingEnabled: false
# ...
3.3.10.2. Configuring JVM options
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
Procedure
Edit the jvmOptions property in the Kafka, KafkaConnect or KafkaConnectS2I resource. For example:
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    jvmOptions:
      "-Xmx": "8g"
      "-Xms": "8g"
    # ...
  zookeeper:
    # ...
Create or update the resource.
On OpenShift this can be done using oc apply:
oc apply -f your-file
3.3.11. Container images
AMQ Streams allows you to configure the container images which will be used for its components. Overriding container images is recommended only in special situations where you need to use a different container registry, for example, because your network does not allow access to the container repository used by AMQ Streams. In such a case, you should either copy the AMQ Streams images or build them from the source. If the configured image is not compatible with AMQ Streams images, it might not work properly.
3.3.11.1. Container image configurations
The container image which should be used for a given component can be specified using the image property in:
- Kafka.spec.kafka
- Kafka.spec.kafka.tlsSidecar
- Kafka.spec.zookeeper
- Kafka.spec.zookeeper.tlsSidecar
- Kafka.spec.entityOperator.topicOperator
- Kafka.spec.entityOperator.userOperator
- Kafka.spec.entityOperator.tlsSidecar
- KafkaConnect.spec
- KafkaConnectS2I.spec
- KafkaBridge.spec
3.3.11.1.1. Configuring the Kafka.spec.kafka.image property
The Kafka.spec.kafka.image property functions differently from the others, because AMQ Streams supports multiple versions of Kafka, each requiring its own image. The STRIMZI_KAFKA_IMAGES environment variable of the Cluster Operator configuration is used to provide a mapping between Kafka versions and the corresponding images. This is used in combination with the Kafka.spec.kafka.image and Kafka.spec.kafka.version properties as follows:
- If neither Kafka.spec.kafka.image nor Kafka.spec.kafka.version is given in the custom resource, the version will default to the Cluster Operator's default Kafka version, and the image will be the one corresponding to this version in STRIMZI_KAFKA_IMAGES.
- If Kafka.spec.kafka.image is given but Kafka.spec.kafka.version is not, the given image will be used and the version will be assumed to be the Cluster Operator's default Kafka version.
- If Kafka.spec.kafka.version is given but Kafka.spec.kafka.image is not, the image will be the one corresponding to this version in STRIMZI_KAFKA_IMAGES.
- If both Kafka.spec.kafka.version and Kafka.spec.kafka.image are given, the given image will be used, and it will be assumed to contain a Kafka broker with the given version.
It is best to provide just Kafka.spec.kafka.version
and leave the Kafka.spec.kafka.image
property unspecified. This reduces the chances of making a mistake in configuring the Kafka
resource. If you need to change the images used for different versions of Kafka, it is better to configure the Cluster Operator’s STRIMZI_KAFKA_IMAGES
environment variable.
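For orientation, the mapping provided by STRIMZI_KAFKA_IMAGES in the Cluster Operator deployment takes the form of version=image pairs; the version numbers and image tags in this sketch are illustrative placeholders:
env:
  - name: STRIMZI_KAFKA_IMAGES
    value: |
      2.1.1=registry.redhat.io/amq7/amqstreams-kafka-21:latest  # placeholder mapping
      2.2.1=registry.redhat.io/amq7/amqstreams-kafka-22:latest  # placeholder mapping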
3.3.11.1.2. Configuring the image property in other resources
For the image
property in the other custom resources, the given value will be used during deployment. If the image
property is missing, the image
specified in the Cluster Operator configuration will be used. If the image
name is not defined in the Cluster Operator configuration, then the default value will be used.
For Kafka broker TLS sidecar:
- Container image specified in the STRIMZI_DEFAULT_TLS_SIDECAR_KAFKA_IMAGE environment variable from the Cluster Operator configuration.
- registry.redhat.io/amq7/amqstreams-kafka-22 container image.
For Zookeeper nodes:
- Container image specified in the STRIMZI_DEFAULT_ZOOKEEPER_IMAGE environment variable from the Cluster Operator configuration.
- registry.redhat.io/amq7/amqstreams-kafka-22 container image.
For Zookeeper node TLS sidecar:
- Container image specified in the STRIMZI_DEFAULT_TLS_SIDECAR_ZOOKEEPER_IMAGE environment variable from the Cluster Operator configuration.
- registry.redhat.io/amq7/amqstreams-kafka-22 container image.
For Topic Operator:
- Container image specified in the STRIMZI_DEFAULT_TOPIC_OPERATOR_IMAGE environment variable from the Cluster Operator configuration.
- registry.redhat.io/amq7/amq-streams-operator:1.2.0 container image.
For User Operator:
- Container image specified in the STRIMZI_DEFAULT_USER_OPERATOR_IMAGE environment variable from the Cluster Operator configuration.
- registry.redhat.io/amq7/amq-streams-operator:1.2.0 container image.
For Entity Operator TLS sidecar:
- Container image specified in the STRIMZI_DEFAULT_TLS_SIDECAR_ENTITY_OPERATOR_IMAGE environment variable from the Cluster Operator configuration.
- registry.redhat.io/amq7/amqstreams-kafka-22 container image.
For Kafka Connect:
- Container image specified in the STRIMZI_DEFAULT_KAFKA_CONNECT_IMAGE environment variable from the Cluster Operator configuration.
- registry.redhat.io/amq7/amqstreams-kafka-22 container image.
For Kafka Connect with Source2Image support:
- Container image specified in the STRIMZI_DEFAULT_KAFKA_CONNECT_S2I_IMAGE environment variable from the Cluster Operator configuration.
- registry.redhat.io/amq7/amqstreams-kafka-22 container image.
Overriding container images is recommended only in special situations where you need to use a different container registry, for example, because your network does not allow access to the container repository used by AMQ Streams. In such a case, you should either copy the AMQ Streams images or build them from source. If the configured image is not compatible with AMQ Streams images, it might not work properly.
Example of container image configuration
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    image: my-org/my-image:latest
    # ...
  zookeeper:
    # ...
3.3.11.2. Configuring container images
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
Procedure
Edit the image property in the Kafka, KafkaConnect or KafkaConnectS2I resource. For example:
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    image: my-org/my-image:latest
    # ...
  zookeeper:
    # ...
Create or update the resource.
On OpenShift this can be done using oc apply:
oc apply -f your-file
3.3.12. Configuring pod scheduling
When two applications are scheduled to the same OpenShift node, both applications might use the same resources, such as disk I/O, and impact each other's performance. This can lead to performance degradation. The best ways to avoid such problems are to schedule Kafka pods so that they do not share nodes with other critical workloads, to use the right nodes, or to dedicate a set of nodes only to Kafka.
3.3.12.1. Scheduling pods based on other applications
3.3.12.1.1. Avoiding critical applications sharing nodes
Pod anti-affinity can be used to ensure that critical applications are never scheduled on the same node. When running a Kafka cluster, it is recommended to use pod anti-affinity to ensure that the Kafka brokers do not share nodes with other workloads, such as databases.
3.3.12.1.2. Affinity
Affinity can be configured using the affinity property in the following resources:
- Kafka.spec.kafka.template.pod
- Kafka.spec.zookeeper.template.pod
- Kafka.spec.entityOperator.template.pod
- KafkaConnect.spec.template.pod
- KafkaConnectS2I.spec.template.pod
- KafkaBridge.spec.template.pod
The affinity configuration can include different types of affinity:
- Pod affinity and anti-affinity
- Node affinity
The format of the affinity
property follows the OpenShift specification. For more details, see the Kubernetes node and pod affinity documentation.
3.3.12.1.3. Configuring pod anti-affinity in Kafka components
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
Procedure
Edit the affinity property in the resource specifying the cluster deployment. Use labels to specify the pods which should not be scheduled on the same nodes. The topologyKey should be set to kubernetes.io/hostname to specify that the selected pods should not be scheduled on nodes with the same hostname. For example:
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
spec:
  kafka:
    # ...
    template:
      pod:
        affinity:
          podAntiAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
              - labelSelector:
                  matchExpressions:
                    - key: application
                      operator: In
                      values:
                        - postgresql
                        - mongodb
                topologyKey: "kubernetes.io/hostname"
    # ...
  zookeeper:
    # ...
Create or update the resource.
On OpenShift this can be done using oc apply:
oc apply -f your-file
3.3.12.2. Scheduling pods to specific nodes
3.3.12.2.1. Node scheduling
The OpenShift cluster usually consists of many different types of worker nodes. Some are optimized for CPU heavy workloads, some for memory, while others might be optimized for storage (fast local SSDs) or network. Using different nodes helps to optimize both costs and performance. To achieve the best possible performance, it is important to allow scheduling of AMQ Streams components to use the right nodes.
OpenShift uses node affinity to schedule workloads onto specific nodes. Node affinity allows you to create a scheduling constraint for the node on which the pod will be scheduled. The constraint is specified as a label selector. You can specify the label using either the built-in node label like beta.kubernetes.io/instance-type
or custom labels to select the right node.
3.3.12.2.2. Affinity
Affinity can be configured using the affinity property in the following resources:
- Kafka.spec.kafka.template.pod
- Kafka.spec.zookeeper.template.pod
- Kafka.spec.entityOperator.template.pod
- KafkaConnect.spec.template.pod
- KafkaConnectS2I.spec.template.pod
- KafkaBridge.spec.template.pod
The affinity configuration can include different types of affinity:
- Pod affinity and anti-affinity
- Node affinity
The format of the affinity
property follows the OpenShift specification. For more details, see the Kubernetes node and pod affinity documentation.
3.3.12.2.3. Configuring node affinity in Kafka components
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
Procedure
Label the nodes where AMQ Streams components should be scheduled.
On OpenShift this can be done using oc label:
oc label node your-node node-type=fast-network
Alternatively, some of the existing labels might be reused.
Edit the affinity property in the resource specifying the cluster deployment. For example:
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
spec:
  kafka:
    # ...
    template:
      pod:
        affinity:
          nodeAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
              nodeSelectorTerms:
                - matchExpressions:
                    - key: node-type
                      operator: In
                      values:
                        - fast-network
    # ...
  zookeeper:
    # ...
Create or update the resource.
On OpenShift this can be done using oc apply:
oc apply -f your-file
3.3.12.3. Using dedicated nodes
3.3.12.3.1. Dedicated nodes
Cluster administrators can mark selected OpenShift nodes as tainted. Nodes with taints are excluded from regular scheduling and normal pods will not be scheduled to run on them. Only services which can tolerate the taint set on the node can be scheduled on it. The only other services running on such nodes will be system services such as log collectors or software-defined networks.
Taints can be used to create dedicated nodes. Running Kafka and its components on dedicated nodes can have many advantages. There will be no other applications running on the same nodes which could cause disturbance or consume the resources needed for Kafka. That can lead to improved performance and stability.
To schedule Kafka pods on the dedicated nodes, configure node affinity and tolerations.
3.3.12.3.2. Affinity
Affinity can be configured using the affinity property in the following resources:
- Kafka.spec.kafka.template.pod
- Kafka.spec.zookeeper.template.pod
- Kafka.spec.entityOperator.template.pod
- KafkaConnect.spec.template.pod
- KafkaConnectS2I.spec.template.pod
- KafkaBridge.spec.template.pod
The affinity configuration can include different types of affinity:
- Pod affinity and anti-affinity
- Node affinity
The format of the affinity
property follows the OpenShift specification. For more details, see the Kubernetes node and pod affinity documentation.
3.3.12.3.3. Tolerations
Tolerations can be configured using the tolerations property in the following resources:
- Kafka.spec.kafka.template.pod
- Kafka.spec.zookeeper.template.pod
- Kafka.spec.entityOperator.template.pod
- KafkaConnect.spec.template.pod
- KafkaConnectS2I.spec.template.pod
- KafkaBridge.spec.template.pod
The format of the tolerations property follows the OpenShift specification. For more details, see the Kubernetes taints and tolerations documentation.
3.3.12.3.4. Setting up dedicated nodes and scheduling pods on them
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
Procedure
- Select the nodes which should be used as dedicated.
- Make sure there are no workloads scheduled on these nodes.
Set the taints on the selected nodes.
On OpenShift this can be done using oc adm taint:
oc adm taint node your-node dedicated=Kafka:NoSchedule
Additionally, add a label to the selected nodes as well.
On OpenShift this can be done using oc label:
oc label node your-node dedicated=Kafka
Edit the affinity and tolerations properties in the resource specifying the cluster deployment. For example:
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
spec:
  kafka:
    # ...
    template:
      pod:
        tolerations:
          - key: "dedicated"
            operator: "Equal"
            value: "Kafka"
            effect: "NoSchedule"
        affinity:
          nodeAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
              nodeSelectorTerms:
                - matchExpressions:
                    - key: dedicated
                      operator: In
                      values:
                        - Kafka
    # ...
  zookeeper:
    # ...
Create or update the resource.
On OpenShift this can be done using oc apply:
oc apply -f your-file
3.3.13. Using external configuration and secrets
Kafka Connect connectors are configured using an HTTP REST interface. The connector configuration is passed to Kafka Connect as part of an HTTP request and stored within Kafka itself.
Some parts of the configuration of a Kafka Connect connector can be externalized using ConfigMaps or Secrets. You can then reference the configuration values in HTTP REST commands (this keeps the configuration separate and more secure, if needed). This method applies especially to confidential data, such as usernames, passwords, or certificates.
ConfigMaps and Secrets are standard OpenShift resources used for storing configurations and confidential data.
3.3.13.1. Storing connector configurations externally
You can mount ConfigMaps or Secrets into a Kafka Connect pod as volumes or environment variables. Volumes and environment variables are configured in the externalConfiguration
property in KafkaConnect.spec
and KafkaConnectS2I.spec
.
3.3.13.1.1. External configuration as environment variables
The env
property is used to specify one or more environment variables. These variables can contain a value from either a ConfigMap or a Secret.
The names of user-defined environment variables cannot start with KAFKA_
or STRIMZI_
.
To mount a value from a Secret to an environment variable, use the valueFrom
property and the secretKeyRef
as shown in the following example.
Example of an environment variable set to a value from a Secret
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaConnect
metadata:
  name: my-connect
spec:
  # ...
  externalConfiguration:
    env:
      - name: MY_ENVIRONMENT_VARIABLE
        valueFrom:
          secretKeyRef:
            name: my-secret
            key: my-key
A common use case for mounting Secrets to environment variables is when your connector needs to communicate with Amazon AWS and needs to read the AWS_ACCESS_KEY_ID
and AWS_SECRET_ACCESS_KEY
environment variables with credentials.
To mount a value from a ConfigMap to an environment variable, use configMapKeyRef
in the valueFrom
property as shown in the following example.
Example of an environment variable set to a value from a ConfigMap
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaConnect
metadata:
  name: my-connect
spec:
  # ...
  externalConfiguration:
    env:
      - name: MY_ENVIRONMENT_VARIABLE
        valueFrom:
          configMapKeyRef:
            name: my-config-map
            key: my-key
3.3.13.1.2. External configuration as volumes
You can also mount ConfigMaps or Secrets to a Kafka Connect pod as volumes. Using volumes instead of environment variables is useful in the following scenarios:
- Mounting truststores or keystores with TLS certificates
- Mounting a properties file that is used to configure Kafka Connect connectors
In the volumes property of the externalConfiguration resource, list the ConfigMaps or Secrets that will be mounted as volumes. Each volume must specify a name in the name property and a reference to a ConfigMap or Secret.
Example of volumes with external configuration
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaConnect
metadata:
  name: my-connect
spec:
  # ...
  externalConfiguration:
    volumes:
      - name: connector1
        configMap:
          name: connector1-configuration
      - name: connector1-certificates
        secret:
          secretName: connector1-certificates
The volumes will be mounted inside the Kafka Connect containers in the path /opt/kafka/external-configuration/<volume-name>. For example, the files from a volume named connector1 would appear in the directory /opt/kafka/external-configuration/connector1.
The FileConfigProvider has to be used to read the values from the mounted properties files in connector configurations.
3.3.13.2. Mounting Secrets as environment variables
You can create an OpenShift Secret and mount it to Kafka Connect as an environment variable.
Prerequisites
- A running Cluster Operator.
Procedure
Create a secret containing the information that will be mounted as an environment variable. For example:
apiVersion: v1
kind: Secret
metadata:
  name: aws-creds
type: Opaque
data:
  awsAccessKey: QUtJQVhYWFhYWFhYWFhYWFg=
  awsSecretAccessKey: Ylhsd1lYTnpkMjl5WkE=
Create or edit the Kafka Connect resource. Configure the externalConfiguration section of the KafkaConnect or KafkaConnectS2I custom resource to reference the secret. For example:
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaConnect
metadata:
  name: my-connect
spec:
  # ...
  externalConfiguration:
    env:
      - name: AWS_ACCESS_KEY_ID
        valueFrom:
          secretKeyRef:
            name: aws-creds
            key: awsAccessKey
      - name: AWS_SECRET_ACCESS_KEY
        valueFrom:
          secretKeyRef:
            name: aws-creds
            key: awsSecretAccessKey
Apply the changes to your Kafka Connect deployment.
On OpenShift use oc apply:
oc apply -f your-file
The environment variables are now available for use when developing your connectors.
Additional resources
- For more information about external configuration in Kafka Connect, see Section C.63, “ExternalConfiguration schema reference”.
3.3.13.3. Mounting Secrets as volumes
You can create an OpenShift Secret, mount it as a volume to Kafka Connect, and then use it to configure a Kafka Connect connector.
Prerequisites
- A running Cluster Operator.
Procedure
Create a secret containing a properties file that defines the configuration options for your connector configuration. For example:
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
stringData:
  connector.properties: |-
    dbUsername: my-user
    dbPassword: my-password
Create or edit the Kafka Connect resource. Configure the FileConfigProvider in the config section and the externalConfiguration section of the KafkaConnect or KafkaConnectS2I custom resource to reference the secret. For example:
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaConnect
metadata:
  name: my-connect
spec:
  # ...
  config:
    config.providers: file
    config.providers.file.class: org.apache.kafka.common.config.provider.FileConfigProvider
  # ...
  externalConfiguration:
    volumes:
      - name: connector-config
        secret:
          secretName: mysecret
Apply the changes to your Kafka Connect deployment.
On OpenShift use oc apply:
oc apply -f your-file
Use the values from the mounted properties file in your JSON payload with connector configuration. For example:
{
  "name":"my-connector",
  "config":{
    "connector.class":"MyDbConnector",
    "tasks.max":"3",
    "database": "my-postgresql:5432",
    "username":"${file:/opt/kafka/external-configuration/connector-config/connector.properties:dbUsername}",
    "password":"${file:/opt/kafka/external-configuration/connector-config/connector.properties:dbPassword}",
    # ...
  }
}
Additional resources
- For more information about external configuration in Kafka Connect, see Section C.63, “ExternalConfiguration schema reference”.
3.3.14. List of resources created as part of Kafka Connect cluster with Source2Image support
The following resources will be created by the Cluster Operator in the OpenShift cluster:
- connect-cluster-name-connect-source
- ImageStream which is used as the base image for the newly-built Docker images.
- connect-cluster-name-connect
- BuildConfig which is responsible for building the new Kafka Connect Docker images.
- connect-cluster-name-connect
- ImageStream where the newly built Docker images will be pushed.
- connect-cluster-name-connect
- DeploymentConfig which is in charge of creating the Kafka Connect worker node pods.
- connect-cluster-name-connect-api
- Service which exposes the REST interface for managing the Kafka Connect cluster.
- connect-cluster-name-config
- ConfigMap which contains the Kafka Connect ancillary configuration and is mounted as a volume by the Kafka Connect pods.
- connect-cluster-name-connect
- Pod Disruption Budget configured for the Kafka Connect worker nodes.
3.3.15. Creating a container image using OpenShift builds and Source-to-Image
You can use OpenShift builds and the Source-to-Image (S2I) framework to create new container images. An OpenShift build takes a builder image with S2I support, together with source code and binaries provided by the user, and uses them to build a new container image. Once built, container images are stored in OpenShift’s local container image repository and are available for use in deployments.
A Kafka Connect builder image with S2I support is provided on the Red Hat Container Catalog as part of the registry.redhat.io/amq7/amqstreams-kafka-22
image. This S2I image takes your binaries (with plug-ins and connectors) and stores them in the /tmp/kafka-plugins/s2i
directory. It creates a new Kafka Connect image from this directory, which can then be used with the Kafka Connect deployment. When started using the enhanced image, Kafka Connect loads any third-party plug-ins from the /tmp/kafka-plugins/s2i
directory.
Procedure
On the command line, use the oc apply command to create and deploy a Kafka Connect S2I cluster:
oc apply -f examples/kafka-connect/kafka-connect-s2i.yaml
Create a directory with Kafka Connect plug-ins:
$ tree ./my-plugins/
./my-plugins/
├── debezium-connector-mongodb
│   ├── bson-3.4.2.jar
│   ├── CHANGELOG.md
│   ├── CONTRIBUTE.md
│   ├── COPYRIGHT.txt
│   ├── debezium-connector-mongodb-0.7.1.jar
│   ├── debezium-core-0.7.1.jar
│   ├── LICENSE.txt
│   ├── mongodb-driver-3.4.2.jar
│   ├── mongodb-driver-core-3.4.2.jar
│   └── README.md
├── debezium-connector-mysql
│   ├── CHANGELOG.md
│   ├── CONTRIBUTE.md
│   ├── COPYRIGHT.txt
│   ├── debezium-connector-mysql-0.7.1.jar
│   ├── debezium-core-0.7.1.jar
│   ├── LICENSE.txt
│   ├── mysql-binlog-connector-java-0.13.0.jar
│   ├── mysql-connector-java-5.1.40.jar
│   ├── README.md
│   └── wkb-1.0.2.jar
└── debezium-connector-postgres
    ├── CHANGELOG.md
    ├── CONTRIBUTE.md
    ├── COPYRIGHT.txt
    ├── debezium-connector-postgres-0.7.1.jar
    ├── debezium-core-0.7.1.jar
    ├── LICENSE.txt
    ├── postgresql-42.0.0.jar
    ├── protobuf-java-2.6.1.jar
    └── README.md
Use the oc start-build command to start a new build of the image using the prepared directory:
oc start-build my-connect-cluster-connect --from-dir ./my-plugins/
Note: The name of the build is the same as the name of the deployed Kafka Connect cluster.
- Once the build has finished, the new image is used automatically by the Kafka Connect deployment.
3.4. Kafka Mirror Maker configuration
The full schema of the KafkaMirrorMaker
resource is described in the Section C.83, “KafkaMirrorMaker
schema reference”. All labels that apply to the desired KafkaMirrorMaker
resource will also be applied to the OpenShift resources making up Mirror Maker. This provides a convenient mechanism for resources to be labeled as required.
3.4.1. Replicas
It is possible to run multiple Mirror Maker replicas. The number of replicas is defined in the KafkaMirrorMaker
resource. You can run multiple Mirror Maker replicas to provide better availability and scalability. However, when running Kafka Mirror Maker on OpenShift it is not absolutely necessary to run multiple replicas of the Kafka Mirror Maker for high availability. When the node where the Kafka Mirror Maker has deployed crashes, OpenShift will automatically reschedule the Kafka Mirror Maker pod to a different node. However, running Kafka Mirror Maker with multiple replicas can provide faster failover times as the other nodes will be up and running.
3.4.1.1. Configuring the number of replicas
The number of Kafka Mirror Maker replicas can be configured using the replicas
property in KafkaMirrorMaker.spec
.
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
Procedure
Edit the replicas property in the KafkaMirrorMaker resource. For example:
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaMirrorMaker
metadata:
  name: my-mirror-maker
spec:
  # ...
  replicas: 3
  # ...
Create or update the resource.
On OpenShift this can be done using oc apply:
oc apply -f <your-file>
3.4.2. Bootstrap servers
Kafka Mirror Maker always works together with two Kafka clusters (source and target). The source and the target Kafka clusters are specified as two comma-separated lists of <hostname>:<port> pairs. The bootstrap server lists can refer to Kafka clusters which do not need to be deployed in the same OpenShift cluster. They can even refer to a Kafka cluster not deployed by AMQ Streams, or one deployed by AMQ Streams but on a different OpenShift cluster and accessible from outside.
If on the same OpenShift cluster, each list should ideally contain the Kafka cluster bootstrap service, which is named <cluster-name>-kafka-bootstrap, and a port of 9092 for plain traffic or 9093 for encrypted traffic. If deployed by AMQ Streams but on different OpenShift clusters, the list content depends on the way used for exposing the clusters (routes, nodeports or loadbalancers).
The list of bootstrap servers can be configured in the KafkaMirrorMaker.spec.consumer.bootstrapServers and KafkaMirrorMaker.spec.producer.bootstrapServers properties. The servers should be a comma-separated list containing one or more Kafka brokers, or a Service pointing to Kafka brokers, specified as <hostname>:<port> pairs.
When using Kafka Mirror Maker with a Kafka cluster not managed by AMQ Streams, you can specify the bootstrap servers list according to the configuration of the given cluster.
3.4.2.1. Configuring bootstrap servers
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
Procedure
Edit the KafkaMirrorMaker.spec.consumer.bootstrapServers and KafkaMirrorMaker.spec.producer.bootstrapServers properties. For example:
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaMirrorMaker
metadata:
  name: my-mirror-maker
spec:
  # ...
  consumer:
    bootstrapServers: my-source-cluster-kafka-bootstrap:9092
  # ...
  producer:
    bootstrapServers: my-target-cluster-kafka-bootstrap:9092
Create or update the resource.
On OpenShift this can be done using oc apply:
oc apply -f <your-file>
3.4.3. Whitelist
You specify the list of topics that Kafka Mirror Maker has to mirror from the source to the target Kafka cluster in the KafkaMirrorMaker resource using the whitelist option. It allows any regular expression, from the simplest case with a single topic name to complex patterns. For example, you can mirror topics A and B using "A|B", or all topics using "*". You can also pass multiple regular expressions separated by commas to Kafka Mirror Maker.
3.4.3.1. Configuring the topics whitelist
Specify the list of topics that have to be mirrored by Kafka Mirror Maker from the source to the target Kafka cluster using the whitelist property in KafkaMirrorMaker.spec.
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
Procedure
Edit the whitelist property in the KafkaMirrorMaker resource. For example:
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaMirrorMaker
metadata:
  name: my-mirror-maker
spec:
  # ...
  whitelist: "my-topic|other-topic"
  # ...
Create or update the resource.
On OpenShift this can be done using oc apply:
oc apply -f <your-file>
3.4.4. Consumer group identifier
Kafka Mirror Maker uses a Kafka consumer to consume messages, and it behaves like any other Kafka consumer client. It is responsible for consuming the messages from the source Kafka cluster that will be mirrored to the target Kafka cluster. The consumer needs to be part of a consumer group to be assigned partitions.
3.4.4.1. Configuring the consumer group identifier
The consumer group identifier can be configured in the KafkaMirrorMaker.spec.consumer.groupId
property.
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
Procedure
Edit the KafkaMirrorMaker.spec.consumer.groupId property. For example:
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaMirrorMaker
metadata:
  name: my-mirror-maker
spec:
  # ...
  consumer:
    groupId: "my-group"
  # ...
Create or update the resource.
On OpenShift this can be done using oc apply:
oc apply -f <your-file>
3.4.5. Number of consumer streams
You can increase the throughput in mirroring topics by increasing the number of consumer threads. More consumer threads will belong to the same configured consumer group. The topic partitions will be assigned across these consumer threads, which will consume messages in parallel.
3.4.5.1. Configuring the number of consumer streams
The number of consumer streams can be configured using the KafkaMirrorMaker.spec.consumer.numStreams
property.
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
Procedure
Edit the KafkaMirrorMaker.spec.consumer.numStreams property. For example:
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaMirrorMaker
metadata:
  name: my-mirror-maker
spec:
  # ...
  consumer:
    numStreams: 2
  # ...
Create or update the resource.
On OpenShift this can be done using oc apply:
oc apply -f <your-file>
3.4.6. Connecting to Kafka brokers using TLS
By default, Kafka Mirror Maker will try to connect to Kafka brokers, in the source and target clusters, using a plain text connection. You must make additional configurations to use TLS.
3.4.6.1. TLS support in Kafka Mirror Maker
TLS support is configured in the tls
sub-property of consumer
and producer
properties in KafkaMirrorMaker.spec
. The tls
property contains a list of secrets with key names under which the certificates are stored. The certificates should be stored in X.509 format.
An example showing TLS configuration with multiple certificates
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaMirrorMaker
metadata:
  name: my-mirror-maker
spec:
  # ...
  consumer:
    tls:
      trustedCertificates:
        - secretName: my-source-secret
          certificate: ca.crt
        - secretName: my-other-source-secret
          certificate: certificate.crt
  # ...
  producer:
    tls:
      trustedCertificates:
        - secretName: my-target-secret
          certificate: ca.crt
        - secretName: my-other-target-secret
          certificate: certificate.crt
  # ...
When multiple certificates are stored in the same secret, the secret can be listed multiple times.
An example showing TLS configuration with multiple certificates from the same secret
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaMirrorMaker
metadata:
  name: my-mirror-maker
spec:
  # ...
  consumer:
    tls:
      trustedCertificates:
        - secretName: my-source-secret
          certificate: ca.crt
        - secretName: my-source-secret
          certificate: ca2.crt
  # ...
  producer:
    tls:
      trustedCertificates:
        - secretName: my-target-secret
          certificate: ca.crt
        - secretName: my-target-secret
          certificate: ca2.crt
  # ...
3.4.6.2. Configuring TLS encryption in Kafka Mirror Maker
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
- If they exist, the name of the Secret for the certificate used for TLS Server Authentication and the key under which the certificate is stored in the Secret
Procedure
As the Kafka Mirror Maker connects to two Kafka clusters (source and target), you can choose to configure TLS for one or both the clusters. The following steps describe how to configure TLS on the consumer side for connecting to the source Kafka cluster:
(Optional) If it does not already exist, prepare the TLS certificate used for authentication in a file and create a Secret.
Note: The secrets created by the Cluster Operator for the Kafka cluster may be used directly.
On OpenShift this can be done using oc create:
oc create secret generic <my-secret> --from-file=<my-file.crt>
Edit the KafkaMirrorMaker.spec.consumer.tls property. For example:
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaMirrorMaker
metadata:
  name: my-mirror-maker
spec:
  # ...
  consumer:
    tls:
      trustedCertificates:
        - secretName: my-cluster-cluster-cert
          certificate: ca.crt
  # ...
Create or update the resource.
On OpenShift this can be done using oc apply:
oc apply -f <your-file>
Repeat the above steps for configuring TLS on the target Kafka cluster. In this case, the secret containing the certificate has to be configured in the KafkaMirrorMaker.spec.producer.tls
property.
3.4.7. Connecting to Kafka brokers with Authentication
By default, Kafka Mirror Maker will try to connect to Kafka brokers without any authentication. Authentication is enabled through the KafkaMirrorMaker
resource.
3.4.7.1. Authentication support in Kafka Mirror Maker
Authentication can be configured in the KafkaMirrorMaker.spec.consumer.authentication and KafkaMirrorMaker.spec.producer.authentication properties. The authentication property specifies the type of authentication method to use and additional configuration details depending on the mechanism. The currently supported authentication types are:
- TLS client authentication
- SASL-based authentication using the SCRAM-SHA-512 mechanism
- SASL-based authentication using the PLAIN mechanism
You can use different authentication mechanisms for the Kafka Mirror Maker producer and consumer.
3.4.7.1.1. TLS Client Authentication
To use TLS client authentication, set the type
property to the value tls
. TLS client authentication uses a TLS certificate to authenticate. The certificate is specified in the certificateAndKey
property and is always loaded from an OpenShift secret. In the secret, the certificate must be stored in X509 format under two different keys: public and private.
TLS client authentication can be used only with TLS connections. For more details about TLS configuration in Kafka Mirror Maker see Section 3.4.6, “Connecting to Kafka brokers using TLS”.
An example TLS client authentication configuration
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaMirrorMaker
metadata:
  name: my-mirror-maker
spec:
  # ...
  consumer:
    authentication:
      type: tls
      certificateAndKey:
        secretName: my-source-secret
        certificate: public.crt
        key: private.key
  # ...
  producer:
    authentication:
      type: tls
      certificateAndKey:
        secretName: my-target-secret
        certificate: public.crt
        key: private.key
  # ...
3.4.7.1.2. SCRAM-SHA-512 authentication
To configure Kafka Mirror Maker to use SCRAM-SHA-512 authentication, set the type
property to scram-sha-512
. The broker listener to which clients will connect must also be configured to use SCRAM-SHA-512 SASL authentication. This authentication mechanism requires a username and password.
- Specify the username in the username property.
- In the passwordSecret property, specify a link to a Secret containing the password. The secretName property contains the name of the Secret and the password property contains the name of the key under which the password is stored inside the Secret.
Do not specify the actual password in the password
field.
An example SCRAM-SHA-512 client authentication configuration
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaMirrorMaker
metadata:
  name: my-mirror-maker
spec:
  # ...
  consumer:
    authentication:
      type: scram-sha-512
      username: my-source-user
      passwordSecret:
        secretName: my-source-user
        password: my-source-password-key
  # ...
  producer:
    authentication:
      type: scram-sha-512
      username: my-producer-user
      passwordSecret:
        secretName: my-producer-user
        password: my-producer-password-key
  # ...
3.4.7.1.3. PLAIN authentication
To configure Kafka Mirror Maker to use PLAIN authentication, set the type
property to plain
. The broker listener to which clients will connect must also be configured to use SASL PLAIN authentication. This authentication mechanism requires a username and password.
The SASL PLAIN mechanism will transfer the username and password across the network in cleartext. Only use SASL PLAIN authentication if TLS encryption is enabled.
-
Specify the username in the
username
property. -
In the
passwordSecret
property, specify a link to a Secret containing the password. The secretName property contains the name of the Secret and the password property contains the name of the key under which the password is stored inside the Secret.
Do not specify the actual password in the password
field.
An example PLAIN client authentication configuration
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaMirrorMaker
metadata:
  name: my-mirror-maker
spec:
  # ...
  consumer:
    authentication:
      type: plain
      username: my-source-user
      passwordSecret:
        secretName: my-source-user
        password: my-source-password-key
  # ...
  producer:
    authentication:
      type: plain
      username: my-producer-user
      passwordSecret:
        secretName: my-producer-user
        password: my-producer-password-key
  # ...
3.4.7.2. Configuring TLS client authentication in Kafka Mirror Maker
Prerequisites
- An OpenShift cluster
-
A running Cluster Operator with a
tls
listener with tls
authentication enabled -
If they exist, the name of the
Secret
with the public and private keys used for TLS Client Authentication, and the keys under which they are stored in the Secret
Procedure
As the Kafka Mirror Maker connects to two Kafka clusters (source and target), you can choose to configure TLS client authentication for one or both of the clusters. The following steps describe how to configure TLS client authentication on the consumer side for connecting to the source Kafka cluster:
(Optional) If they do not already exist, prepare the keys used for authentication in a file and create the
Secret
.
Note: Secrets created by the User Operator may be used.
On OpenShift this can be done using
oc create
:oc create secret generic <my-secret> --from-file=<my-public.crt> --from-file=<my-private.key>
Edit the
KafkaMirrorMaker.spec.consumer.authentication
property. For example:
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaMirrorMaker
metadata:
  name: my-mirror-maker
spec:
  # ...
  consumer:
    authentication:
      type: tls
      certificateAndKey:
        secretName: my-secret
        certificate: my-public.crt
        key: my-private.key
  # ...
Create or update the resource.
On OpenShift this can be done using
oc apply
:oc apply -f <your-file>
Repeat the above steps for configuring TLS client authentication on the target Kafka cluster. In this case, the secret containing the certificate has to be configured in the KafkaMirrorMaker.spec.producer.authentication
property.
3.4.7.3. Configuring SCRAM-SHA-512 authentication in Kafka Mirror Maker
Prerequisites
- An OpenShift cluster
-
A running Cluster Operator with a
listener
configured for SCRAM-SHA-512 authentication
- Username to be used for authentication
-
If they exist, the name of the
Secret
with the password used for authentication, and the key under which it is stored in the Secret
Procedure
As the Kafka Mirror Maker connects to two Kafka clusters (source and target), you can choose to configure SCRAM-SHA-512 authentication for one or both of the clusters. The following steps describe how to configure SCRAM-SHA-512 authentication on the consumer side for connecting to the source Kafka cluster:
(Optional) If they do not already exist, prepare a file with the password used for authentication and create the
Secret
.
Note: Secrets created by the User Operator may be used.
On OpenShift this can be done using
oc create
:
echo -n '1f2d1e2e67df' > <my-password.txt>
oc create secret generic <my-secret> --from-file=<my-password.txt>
Edit the
KafkaMirrorMaker.spec.consumer.authentication
property. For example:
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaMirrorMaker
metadata:
  name: my-mirror-maker
spec:
  # ...
  consumer:
    authentication:
      type: scram-sha-512
      username: <my-username>
      passwordSecret:
        secretName: <my-secret>
        password: <my-password.txt>
  # ...
Create or update the resource.
On OpenShift this can be done using
oc apply
:oc apply -f <your-file>
Repeat the above steps for configuring SCRAM-SHA-512 authentication on the target Kafka cluster. In this case, the secret containing the password has to be configured in the KafkaMirrorMaker.spec.producer.authentication
property.
3.4.8. Kafka Mirror Maker configuration
AMQ Streams allows you to customize the configuration of Kafka Mirror Maker by editing most of the options for the related consumer and producer. Producer options and consumer options are both listed in the Apache Kafka documentation.
The only options which cannot be configured are those related to the following areas:
- Kafka cluster bootstrap address
- Security (Encryption, Authentication, and Authorization)
- Consumer group identifier
These options are automatically configured by AMQ Streams.
3.4.8.1. Kafka Mirror Maker configuration
Kafka Mirror Maker can be configured using the config
sub-property in KafkaMirrorMaker.spec.consumer
and KafkaMirrorMaker.spec.producer
. This property should contain the Kafka Mirror Maker consumer and producer configuration options as keys. The values could be in one of the following JSON types:
- String
- Number
- Boolean
Users can specify and configure the producer and consumer options listed in the Apache Kafka documentation, with the exception of those options which are managed directly by AMQ Streams. Specifically, all configuration options with keys equal to or starting with one of the following strings are forbidden:
-
ssl.
-
sasl.
-
security.
-
bootstrap.servers
-
group.id
When one of the forbidden options is present in the config
property, it will be ignored and a warning message will be printed to the Cluster Operator log file. All other options will be passed to Kafka Mirror Maker.
The Cluster Operator does not validate keys or values in the provided config
object. When an invalid configuration is provided, the Kafka Mirror Maker might not start or might become unstable. In such cases, the configuration in the KafkaMirrorMaker.spec.consumer.config
or KafkaMirrorMaker.spec.producer.config
object should be fixed and the Cluster Operator will roll out the new configuration for Kafka Mirror Maker.
An example showing Kafka Mirror Maker configuration
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaMirrorMaker
metadata:
  name: my-mirror-maker
spec:
  # ...
  consumer:
    config:
      max.poll.records: 100
      receive.buffer.bytes: 32768
  producer:
    config:
      compression.type: gzip
      batch.size: 8192
  # ...
3.4.8.2. Configuring Kafka Mirror Maker
Prerequisites
- Two running Kafka clusters (source and target)
- A running Cluster Operator
Procedure
Edit the
KafkaMirrorMaker.spec.consumer.config
and KafkaMirrorMaker.spec.producer.config
properties. For example:
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaMirrorMaker
metadata:
  name: my-mirror-maker
spec:
  # ...
  consumer:
    config:
      max.poll.records: 100
      receive.buffer.bytes: 32768
  producer:
    config:
      compression.type: gzip
      batch.size: 8192
  # ...
Create or update the resource.
On OpenShift this can be done using
oc apply
:oc apply -f <your-file>
3.4.9. CPU and memory resources
For every deployed container, AMQ Streams allows you to request specific resources and define the maximum consumption of those resources.
AMQ Streams supports two types of resources:
- CPU
- Memory
AMQ Streams uses the OpenShift syntax for specifying CPU and memory resources.
3.4.9.1. Resource limits and requests
Resource limits and requests are configured using the resources
property in the following resources:
-
Kafka.spec.kafka
-
Kafka.spec.kafka.tlsSidecar
-
Kafka.spec.zookeeper
-
Kafka.spec.zookeeper.tlsSidecar
-
Kafka.spec.entityOperator.topicOperator
-
Kafka.spec.entityOperator.userOperator
-
Kafka.spec.entityOperator.tlsSidecar
-
KafkaConnect.spec
-
KafkaConnectS2I.spec
-
KafkaBridge.spec
Additional resources
- For more information about managing computing resources on OpenShift, see Managing Compute Resources for Containers.
3.4.9.1.1. Resource requests
Requests specify the resources to reserve for a given container. Reserving the resources ensures that they are always available.
If the resource request is for more than the available free resources in the OpenShift cluster, the pod is not scheduled.
Resource requests are specified in the requests
property. Resource requests currently supported by AMQ Streams:
-
cpu
-
memory
A request may be configured for one or more supported resources.
Example resource request configuration with all resources
# ...
resources:
  requests:
    cpu: 12
    memory: 64Gi
# ...
3.4.9.1.2. Resource limits
Limits specify the maximum resources that can be consumed by a given container. The limit is not reserved and might not always be available. A container can use the resources up to the limit only when they are available. Resource limits should always be higher than the resource requests.
Resource limits are specified in the limits
property. Resource limits currently supported by AMQ Streams:
-
cpu
-
memory
A limit may be configured for one or more supported resources.
Example resource limits configuration
# ... resources: limits: cpu: 12 memory: 64Gi # ...
3.4.9.1.3. Supported CPU formats
CPU requests and limits are supported in the following formats:
- Number of CPU cores as integer (5 CPU core) or decimal (2.5 CPU core).
- Number of millicpus/millicores (100m) where 1000 millicores is the same as 1 CPU core.
Example CPU units
# ...
resources:
  requests:
    cpu: 500m
  limits:
    cpu: 2.5
# ...
The computing power of 1 CPU core may differ depending on the platform where OpenShift is deployed.
Additional resources
- For more information on CPU specification, see the Meaning of CPU.
3.4.9.1.4. Supported memory formats
Memory requests and limits are specified in megabytes, gigabytes, mebibytes, and gibibytes.
- To specify memory in megabytes, use the M suffix. For example, 1000M.
- To specify memory in gigabytes, use the G suffix. For example, 1G.
- To specify memory in mebibytes, use the Mi suffix. For example, 1000Mi.
- To specify memory in gibibytes, use the Gi suffix. For example, 1Gi.
An example of using different memory units
# ...
resources:
  requests:
    memory: 512Mi
  limits:
    memory: 2Gi
# ...
Additional resources
- For more details about memory specification and additional supported units, see Meaning of memory.
3.4.9.2. Configuring resource requests and limits
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
Procedure
Edit the
resources
property in the resource specifying the cluster deployment. For example:
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
spec:
  kafka:
    # ...
    resources:
      requests:
        cpu: "8"
        memory: 64Gi
      limits:
        cpu: "12"
        memory: 128Gi
    # ...
  zookeeper:
    # ...
Create or update the resource.
On OpenShift this can be done using
oc apply
:oc apply -f your-file
Additional resources
-
For more information about the schema, see
Resources
schema reference.
3.4.10. Logging
This section provides information on loggers and how to configure log levels.
You can set the log levels by specifying the loggers and their levels directly (inline) or use a custom (external) config map.
3.4.10.1. Kafka Mirror Maker loggers
Kafka Mirror Maker has its own configurable logger:
-
mirrormaker.root.logger
3.4.10.2. Specifying inline logging
Procedure
Edit the YAML file to specify the loggers and logging level for the required components.
For example, the logging level here is set to INFO:
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaMirrorMaker
spec:
  # ...
  logging:
    type: inline
    loggers:
      mirrormaker.root.logger: "INFO"
  # ...
You can set the log level to INFO, ERROR, WARN, TRACE, DEBUG, FATAL or OFF.
For more information about the log levels, see the log4j manual.
Create or update the resource in OpenShift.
On OpenShift this can be done using
oc apply
:oc apply -f your-file
3.4.10.3. Specifying an external ConfigMap for logging
Procedure
Edit the YAML file to specify the name of the
ConfigMap
to use for the required components. For example:
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaMirrorMaker
spec:
  # ...
  logging:
    type: external
    name: customConfigMap
  # ...
Remember to place your custom ConfigMap under the
log4j.properties
or log4j2.properties
key.
Create or update the resource in OpenShift.
On OpenShift this can be done using
oc apply
:oc apply -f your-file
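For reference, a minimal sketch of the referenced ConfigMap, assuming the name customConfigMap from the example above; the Log4j settings shown are illustrative:
apiVersion: v1
kind: ConfigMap
metadata:
  name: customConfigMap
data:
  log4j.properties: |
    # Illustrative Log4j configuration placed under the log4j.properties key
    mirrormaker.root.logger=INFO
    log4j.rootLogger=${mirrormaker.root.logger}, CONSOLE
    log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender
    log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout
    log4j.appender.CONSOLE.layout.ConversionPattern=%d{ISO8601} %p %m (%c) [%t]%n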
Garbage collector (GC) logging can also be enabled (or disabled). For more information on GC logging, see Section 3.4.12.1, “JVM configuration”.
3.4.11. Prometheus metrics
AMQ Streams supports Prometheus metrics using Prometheus JMX exporter to convert the JMX metrics supported by Apache Kafka and Zookeeper to Prometheus metrics. When metrics are enabled, they are exposed on port 9404.
3.4.11.1. Metrics configuration
Prometheus metrics are enabled by configuring the metrics
property in the following resources:
-
Kafka.spec.kafka
-
Kafka.spec.zookeeper
-
KafkaConnect.spec
-
KafkaConnectS2I.spec
When the metrics
property is not defined in the resource, the Prometheus metrics will be disabled. To enable Prometheus metrics export without any further configuration, you can set it to an empty object ({}
).
Example of enabling metrics without any further configuration
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    metrics: {}
    # ...
  zookeeper:
    # ...
The metrics
property might contain additional configuration for the Prometheus JMX exporter.
Example of enabling metrics with additional Prometheus JMX Exporter configuration
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    metrics:
      lowercaseOutputName: true
      rules:
        - pattern: "kafka.server<type=(.+), name=(.+)PerSec\\w*><>Count"
          name: "kafka_server_$1_$2_total"
        - pattern: "kafka.server<type=(.+), name=(.+)PerSec\\w*, topic=(.+)><>Count"
          name: "kafka_server_$1_$2_total"
          labels:
            topic: "$3"
    # ...
  zookeeper:
    # ...
3.4.11.2. Configuring Prometheus metrics
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
Procedure
Edit the
metrics
property in the Kafka, KafkaConnect or KafkaConnectS2I
resource. For example:
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
  zookeeper:
    # ...
    metrics:
      lowercaseOutputName: true
    # ...
Create or update the resource.
On OpenShift this can be done using
oc apply
:oc apply -f your-file
3.4.12. JVM Options
Apache Kafka and Apache Zookeeper run inside a Java Virtual Machine (JVM). JVM configuration options optimize the performance for different platforms and architectures. AMQ Streams allows you to configure some of these options.
3.4.12.1. JVM configuration
JVM options can be configured using the jvmOptions
property in the following resources:
-
Kafka.spec.kafka
-
Kafka.spec.zookeeper
-
KafkaConnect.spec
-
KafkaConnectS2I.spec
Only a selected subset of available JVM options can be configured. The following options are supported:
-Xms and -Xmx
-Xms
configures the minimum initial allocation heap size when the JVM starts. -Xmx
configures the maximum heap size.
The units accepted by JVM settings such as -Xmx
and -Xms
are those accepted by the JDK java
binary in the corresponding image. Accordingly, 1g
or 1G
means 1,073,741,824 bytes, and Gi
is not a valid unit suffix. This is in contrast to the units used for memory requests and limits, which follow the OpenShift convention where 1G
means 1,000,000,000 bytes, and 1Gi
means 1,073,741,824 bytes.
The default values used for -Xms
and -Xmx
depend on whether there is a memory limit configured for the container:
- If there is a memory limit then the JVM’s minimum and maximum memory will be set to a value corresponding to the limit.
-
If there is no memory limit then the JVM’s minimum memory will be set to
128M
and the JVM’s maximum memory will not be defined. This allows for the JVM’s memory to grow as-needed, which is ideal for single node environments in test and development.
Setting -Xmx
explicitly requires some care:
-
The JVM’s overall memory usage will be approximately 4 × the maximum heap, as configured by
-Xmx
. -
If
-Xmx
is set without also setting an appropriate OpenShift memory limit, it is possible that the container will be killed should the OpenShift node experience memory pressure (from other Pods running on it). -
If
-Xmx
is set without also setting an appropriate OpenShift memory request, it is possible that the container will be scheduled to a node with insufficient memory. In this case, the container will not start but crash (immediately if-Xms
is set to-Xmx
, or some later time if not).
When setting -Xmx
explicitly, it is recommended to:
- set the memory request and the memory limit to the same value,
- use a memory request that is at least 4.5 × the -Xmx,
- consider setting -Xms to the same value as -Xmx.
Containers doing lots of disk I/O (such as Kafka broker containers) will need to leave some memory available for use as operating system page cache. On such containers, the requested memory should be significantly higher than the memory used by the JVM.
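Putting these recommendations together, the following sketch pairs an explicit heap size with a matching memory request and limit; the values are illustrative only:
# ...
resources:
  requests:
    memory: 10Gi     # at least 4.5 x the -Xmx value, leaving room for page cache
  limits:
    memory: 10Gi     # same value as the request
jvmOptions:
  "-Xms": "2g"
  "-Xmx": "2g"
# ...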
Example fragment configuring -Xmx
and -Xms
# ...
jvmOptions:
  "-Xmx": "2g"
  "-Xms": "2g"
# ...
In the above example, the JVM will use 2 GiB (=2,147,483,648 bytes) for its heap. Its total memory usage will be approximately 8 GiB.
Setting the same value for initial (-Xms
) and maximum (-Xmx
) heap sizes avoids the JVM having to allocate memory after startup, at the cost of possibly allocating more heap than is really needed. For Kafka and Zookeeper pods, such allocation could cause unwanted latency. For Kafka Connect, avoiding over-allocation may be the most important concern, especially in distributed mode where the effects of over-allocation are multiplied by the number of consumers.
-server
-server
enables the server JVM. This option can be set to true or false.
Example fragment configuring -server
# ...
jvmOptions:
  "-server": true
# ...
When neither of the two options (-server
and -XX
) is specified, the default Apache Kafka configuration of KAFKA_JVM_PERFORMANCE_OPTS
will be used.
-XX
-XX
object can be used for configuring advanced runtime options of a JVM. The -server
and -XX
options are used to configure the KAFKA_JVM_PERFORMANCE_OPTS
option of Apache Kafka.
Example showing the use of the -XX
object
jvmOptions:
  "-XX":
    "UseG1GC": true
    "MaxGCPauseMillis": 20
    "InitiatingHeapOccupancyPercent": 35
    "ExplicitGCInvokesConcurrent": true
    "UseParNewGC": false
The example configuration above will result in the following JVM options:
-XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:+ExplicitGCInvokesConcurrent -XX:-UseParNewGC
When neither of the two options (-server
and -XX
) is specified, the default Apache Kafka configuration of KAFKA_JVM_PERFORMANCE_OPTS
will be used.
3.4.12.1.1. Garbage collector logging
The jvmOptions
section also allows you to enable and disable garbage collector (GC) logging. GC logging is enabled by default. To disable it, set the gcLoggingEnabled
property as follows:
Example of disabling GC logging
# ...
jvmOptions:
  gcLoggingEnabled: false
# ...
3.4.12.2. Configuring JVM options
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
Procedure
Edit the
jvmOptions
property in the Kafka, KafkaConnect or KafkaConnectS2I
resource. For example:
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    jvmOptions:
      "-Xmx": "8g"
      "-Xms": "8g"
    # ...
  zookeeper:
    # ...
Create or update the resource.
On OpenShift this can be done using
oc apply
:oc apply -f your-file
3.4.13. Container images
AMQ Streams allows you to configure container images which will be used for its components. Overriding container images is recommended only in special situations where you need to use a different container registry, for example, because your network does not allow access to the container repository used by AMQ Streams. In such a case, you should either copy the AMQ Streams images or build them from source. If the configured image is not compatible with AMQ Streams images, it might not work properly.
3.4.13.1. Container image configurations
The container image which should be used for a given component can be specified using the image
property in:
-
Kafka.spec.kafka
-
Kafka.spec.kafka.tlsSidecar
-
Kafka.spec.zookeeper
-
Kafka.spec.zookeeper.tlsSidecar
-
Kafka.spec.entityOperator.topicOperator
-
Kafka.spec.entityOperator.userOperator
-
Kafka.spec.entityOperator.tlsSidecar
-
KafkaConnect.spec
-
KafkaConnectS2I.spec
-
KafkaBridge.spec
3.4.13.1.1. Configuring the Kafka.spec.kafka.image
property
The Kafka.spec.kafka.image
property functions differently from the others, because AMQ Streams supports multiple versions of Kafka, each requiring its own image. The STRIMZI_KAFKA_IMAGES
environment variable of the Cluster Operator configuration is used to provide a mapping between Kafka versions and the corresponding images. This is used in combination with the Kafka.spec.kafka.image
and Kafka.spec.kafka.version
properties as follows:
-
If neither
Kafka.spec.kafka.image
nor Kafka.spec.kafka.version
are given in the custom resource then theversion
will default to the Cluster Operator’s default Kafka version, and the image will be the one corresponding to this version in theSTRIMZI_KAFKA_IMAGES
. -
If
Kafka.spec.kafka.image
is given but Kafka.spec.kafka.version
is not, then the given image will be used and the
will be assumed to be the Cluster Operator’s default Kafka version. -
If
Kafka.spec.kafka.version
is given but Kafka.spec.kafka.image
is not, then the image will be the one corresponding to this version in the
. -
If both
Kafka.spec.kafka.version
and Kafka.spec.kafka.image
are given, the given image will be used, and it will be assumed to contain a Kafka broker with the given version.
It is best to provide just Kafka.spec.kafka.version
and leave the Kafka.spec.kafka.image
property unspecified. This reduces the chances of making a mistake in configuring the Kafka
resource. If you need to change the images used for different versions of Kafka, it is better to configure the Cluster Operator’s STRIMZI_KAFKA_IMAGES
environment variable.
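Following that recommendation, a minimal sketch that sets only the version and leaves the image unspecified; the version number shown is illustrative:
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    version: 2.2.1   # illustrative; with no image set, the Cluster Operator selects the matching image
    # ...
  zookeeper:
    # ...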
3.4.13.1.2. Configuring the image
property in other resources
For the image
property in the other custom resources, the given value will be used during deployment. If the image
property is missing, the image
specified in the Cluster Operator configuration will be used. If the image
name is not defined in the Cluster Operator configuration, then the default value will be used.
For Kafka broker TLS sidecar:
- Container image specified in the STRIMZI_DEFAULT_TLS_SIDECAR_KAFKA_IMAGE environment variable from the Cluster Operator configuration.
- registry.redhat.io/amq7/amqstreams-kafka-22 container image.
For Zookeeper nodes:
- Container image specified in the STRIMZI_DEFAULT_ZOOKEEPER_IMAGE environment variable from the Cluster Operator configuration.
- registry.redhat.io/amq7/amqstreams-kafka-22 container image.
For Zookeeper node TLS sidecar:
- Container image specified in the STRIMZI_DEFAULT_TLS_SIDECAR_ZOOKEEPER_IMAGE environment variable from the Cluster Operator configuration.
- registry.redhat.io/amq7/amqstreams-kafka-22 container image.
For Topic Operator:
- Container image specified in the STRIMZI_DEFAULT_TOPIC_OPERATOR_IMAGE environment variable from the Cluster Operator configuration.
- registry.redhat.io/amq7/amq-streams-operator:1.2.0 container image.
For User Operator:
- Container image specified in the STRIMZI_DEFAULT_USER_OPERATOR_IMAGE environment variable from the Cluster Operator configuration.
- registry.redhat.io/amq7/amq-streams-operator:1.2.0 container image.
For Entity Operator TLS sidecar:
- Container image specified in the STRIMZI_DEFAULT_TLS_SIDECAR_ENTITY_OPERATOR_IMAGE environment variable from the Cluster Operator configuration.
- registry.redhat.io/amq7/amqstreams-kafka-22 container image.
For Kafka Connect:
- Container image specified in the STRIMZI_DEFAULT_KAFKA_CONNECT_IMAGE environment variable from the Cluster Operator configuration.
- registry.redhat.io/amq7/amqstreams-kafka-22 container image.
For Kafka Connect with Source2image support:
- Container image specified in the STRIMZI_DEFAULT_KAFKA_CONNECT_S2I_IMAGE environment variable from the Cluster Operator configuration.
- registry.redhat.io/amq7/amqstreams-kafka-22 container image.
Overriding container images is recommended only in special situations where you need to use a different container registry, for example, because your network does not allow access to the container repository used by AMQ Streams. In such a case, you should either copy the AMQ Streams images or build them from source. If the configured image is not compatible with AMQ Streams images, it might not work properly.
Example of container image configuration
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    image: my-org/my-image:latest
    # ...
  zookeeper:
    # ...
3.4.13.2. Configuring container images
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
Procedure
Edit the
image
property in the Kafka, KafkaConnect or KafkaConnectS2I
resource. For example:
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    image: my-org/my-image:latest
    # ...
  zookeeper:
    # ...
Create or update the resource.
On OpenShift this can be done using
oc apply
:oc apply -f your-file
3.4.14. Configuring pod scheduling
When two applications are scheduled to the same OpenShift node, both applications might use the same resources, such as disk I/O, which can lead to performance degradation. The best ways to avoid such problems are to schedule Kafka pods so that they do not share nodes with other critical workloads, to use the right nodes, or to dedicate a set of nodes only to Kafka.
3.4.14.1. Scheduling pods based on other applications
3.4.14.1.1. Avoiding critical applications sharing nodes
Pod anti-affinity can be used to ensure that critical applications are never scheduled on the same node. When running a Kafka cluster, it is recommended to use pod anti-affinity to ensure that the Kafka brokers do not share nodes with other workloads, such as databases.
3.4.14.1.2. Affinity
Affinity can be configured using the affinity
property in the following resources:
-
Kafka.spec.kafka.template.pod
-
Kafka.spec.zookeeper.template.pod
-
Kafka.spec.entityOperator.template.pod
-
KafkaConnect.spec.template.pod
-
KafkaConnectS2I.spec.template.pod
-
KafkaBridge.spec.template.pod
The affinity configuration can include different types of affinity:
- Pod affinity and anti-affinity
- Node affinity
The format of the affinity
property follows the OpenShift specification. For more details, see the Kubernetes node and pod affinity documentation.
3.4.14.1.3. Configuring pod anti-affinity in Kafka components
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
Procedure
Edit the
affinity
property in the resource specifying the cluster deployment. Use labels to specify the pods which should not be scheduled on the same nodes. The topologyKey
should be set to kubernetes.io/hostname
to specify that the selected pods should not be scheduled on nodes with the same hostname. For example:
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
spec:
  kafka:
    # ...
    template:
      pod:
        affinity:
          podAntiAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
              - labelSelector:
                  matchExpressions:
                    - key: application
                      operator: In
                      values:
                        - postgresql
                        - mongodb
                topologyKey: "kubernetes.io/hostname"
    # ...
  zookeeper:
    # ...
Create or update the resource.
On OpenShift this can be done using
oc apply
:oc apply -f your-file
3.4.14.2. Scheduling pods to specific nodes
3.4.14.2.1. Node scheduling
The OpenShift cluster usually consists of many different types of worker nodes. Some are optimized for CPU heavy workloads, some for memory, while others might be optimized for storage (fast local SSDs) or network. Using different nodes helps to optimize both costs and performance. To achieve the best possible performance, it is important to allow scheduling of AMQ Streams components to use the right nodes.
OpenShift uses node affinity to schedule workloads onto specific nodes. Node affinity allows you to create a scheduling constraint for the node on which the pod will be scheduled. The constraint is specified as a label selector. You can specify the label using either the built-in node label like beta.kubernetes.io/instance-type
or custom labels to select the right node.
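As a sketch, node affinity using the built-in instance type label might look as follows; the instance type value is illustrative:
# ...
template:
  pod:
    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
            - matchExpressions:
                - key: beta.kubernetes.io/instance-type
                  operator: In
                  values:
                    - m5.4xlarge   # illustrative instance type
# ...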
3.4.14.2.2. Affinity
Affinity can be configured using the affinity
property in the following resources:
-
Kafka.spec.kafka.template.pod
-
Kafka.spec.zookeeper.template.pod
-
Kafka.spec.entityOperator.template.pod
-
KafkaConnect.spec.template.pod
-
KafkaConnectS2I.spec.template.pod
-
KafkaBridge.spec.template.pod
The affinity configuration can include different types of affinity:
- Pod affinity and anti-affinity
- Node affinity
The format of the affinity
property follows the OpenShift specification. For more details, see the Kubernetes node and pod affinity documentation.
3.4.14.2.3. Configuring node affinity in Kafka components
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
Procedure
Label the nodes where AMQ Streams components should be scheduled.
On OpenShift this can be done using
oc label
:oc label node your-node node-type=fast-network
Alternatively, some of the existing labels might be reused.
Edit the
affinity
property in the resource specifying the cluster deployment. For example:
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
spec:
  kafka:
    # ...
    template:
      pod:
        affinity:
          nodeAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
              nodeSelectorTerms:
                - matchExpressions:
                    - key: node-type
                      operator: In
                      values:
                        - fast-network
    # ...
  zookeeper:
    # ...
Create or update the resource.
On OpenShift this can be done using
oc apply
:oc apply -f your-file
3.4.14.3. Using dedicated nodes
3.4.14.3.1. Dedicated nodes
Cluster administrators can mark selected OpenShift nodes as tainted. Nodes with taints are excluded from regular scheduling and normal pods will not be scheduled to run on them. Only services which can tolerate the taint set on the node can be scheduled on it. The only other services running on such nodes will be system services such as log collectors or software defined networks.
Taints can be used to create dedicated nodes. Running Kafka and its components on dedicated nodes can have many advantages. There will be no other applications running on the same nodes which could cause disturbance or consume the resources needed for Kafka. That can lead to improved performance and stability.
To schedule Kafka pods on the dedicated nodes, configure node affinity and tolerations.
3.4.14.3.2. Affinity
Affinity can be configured using the affinity
property in the following resources:
-
Kafka.spec.kafka.template.pod
-
Kafka.spec.zookeeper.template.pod
-
Kafka.spec.entityOperator.template.pod
-
KafkaConnect.spec.template.pod
-
KafkaConnectS2I.spec.template.pod
-
KafkaBridge.spec.template.pod
The affinity configuration can include different types of affinity:
- Pod affinity and anti-affinity
- Node affinity
The format of the affinity
property follows the OpenShift specification. For more details, see the Kubernetes node and pod affinity documentation.
3.4.14.3.3. Tolerations
Tolerations can be configured using the tolerations
property in the following resources:
-
Kafka.spec.kafka.template.pod
-
Kafka.spec.zookeeper.template.pod
-
Kafka.spec.entityOperator.template.pod
-
KafkaConnect.spec.template.pod
-
KafkaConnectS2I.spec.template.pod
-
KafkaBridge.spec.template.pod
The format of the tolerations
property follows the OpenShift specification. For more details, see the Kubernetes taints and tolerations.
3.4.14.3.4. Setting up dedicated nodes and scheduling pods on them
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
Procedure
- Select the nodes which should be used as dedicated.
- Make sure there are no workloads scheduled on these nodes.
Set the taints on the selected nodes:
On OpenShift this can be done using
oc adm taint
:oc adm taint node your-node dedicated=Kafka:NoSchedule
Additionally, add a label to the selected nodes.
On OpenShift this can be done using
oc label
:oc label node your-node dedicated=Kafka
Edit the
affinity
and tolerations
properties in the resource specifying the cluster deployment. For example:
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
spec:
  kafka:
    # ...
    template:
      pod:
        tolerations:
          - key: "dedicated"
            operator: "Equal"
            value: "Kafka"
            effect: "NoSchedule"
        affinity:
          nodeAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
              nodeSelectorTerms:
                - matchExpressions:
                    - key: dedicated
                      operator: In
                      values:
                        - Kafka
    # ...
  zookeeper:
    # ...
Create or update the resource.
On OpenShift this can be done using
oc apply
:oc apply -f your-file
3.4.15. List of resources created as part of Kafka Mirror Maker
The following resources will be created by the Cluster Operator in the OpenShift cluster:
- <mirror-maker-name>-mirror-maker
- Deployment which is in charge to create the Kafka Mirror Maker pods.
- <mirror-maker-name>-config
ConfigMap which contains the Kafka Mirror Maker ancillary configuration and is mounted as a volume by the Kafka Mirror Maker pods.
- <mirror-maker-name>-mirror-maker
- Pod Disruption Budget configured for the Kafka Mirror Maker worker nodes.
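To verify these resources, you can list them by name; the commands below assume a KafkaMirrorMaker named my-mirror-maker:
oc get deployment my-mirror-maker-mirror-maker
oc get configmap my-mirror-maker-config
oc get poddisruptionbudget my-mirror-maker-mirror-maker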
3.5. Kafka Bridge cluster configuration
The full schema of the KafkaBridge
resource is described in the Section C.92, “KafkaBridge
schema reference”. All labels that are applied to the desired KafkaBridge
resource will also be applied to the OpenShift resources making up the Kafka Bridge cluster. This provides a convenient mechanism for resources to be labeled as required.
3.5.1. Replicas
Kafka Bridge can run multiple nodes. The number of nodes is defined in the KafkaBridge
resource. Running a Kafka Bridge with multiple nodes can provide better availability and scalability. However, when running Kafka Bridge on OpenShift it is not absolutely necessary to run multiple nodes of Kafka Bridge for high availability.
If the node where Kafka Bridge is deployed crashes, OpenShift will automatically reschedule the Kafka Bridge pod to a different node. In order to prevent issues arising when client consumer requests are processed by different Kafka Bridge instances, address-based routing must be employed to ensure that requests are routed to the right Kafka Bridge instance. Additionally, each independent Kafka Bridge instance must have a replica. A Kafka Bridge instance has its own state which is not shared with other instances.
3.5.1.1. Configuring the number of nodes
The number of Kafka Bridge nodes is configured using the replicas
property in KafkaBridge.spec
.
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
Procedure
Edit the
replicas
property in the KafkaBridge
resource. For example:
apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaBridge
metadata:
  name: my-bridge
spec:
  # ...
  replicas: 3
  # ...
Create or update the resource.
On OpenShift use:
oc apply -f your-file
3.5.2. Bootstrap servers
A Kafka Bridge always works in combination with a Kafka cluster. A Kafka cluster is specified as a list of bootstrap servers. On OpenShift, the list should ideally contain the Kafka cluster bootstrap service named cluster-name-kafka-bootstrap
, and a port of 9092 for plain traffic or 9093 for encrypted traffic.
The list of bootstrap servers is configured in the bootstrapServers
property in KafkaBridge.spec
. The servers must be defined as a comma-separated list specifying one or more Kafka brokers, or a service pointing to Kafka brokers specified as hostname:port
pairs.
When using Kafka Bridge with a Kafka cluster not managed by AMQ Streams, you can specify the bootstrap servers list according to the configuration of the cluster.
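For example, a connection to a cluster outside OpenShift might look like the following sketch; the host names are illustrative:
apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaBridge
metadata:
  name: my-bridge
spec:
  # ...
  bootstrapServers: kafka-broker-1.example.com:9092,kafka-broker-2.example.com:9092
  # ...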
3.5.2.1. Configuring bootstrap servers
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
Procedure
Edit the
bootstrapServers
property in the KafkaBridge
resource. For example:
apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaBridge
metadata:
  name: my-bridge
spec:
  # ...
  bootstrapServers: my-cluster-kafka-bootstrap:9092
  # ...
Create or update the resource.
On OpenShift use:
oc apply -f your-file
3.5.3. Connecting to Kafka brokers using TLS
By default, Kafka Bridge tries to connect to Kafka brokers using a plain text connection. If you prefer to use TLS, additional configuration is required.
3.5.3.1. TLS support for Kafka connection to the Kafka Bridge
TLS support for Kafka connection is configured in the tls
property in KafkaBridge.spec
. The tls
property contains a list of secrets with key names under which the certificates are stored. The certificates must be stored in X509 format.
An example showing TLS configuration with multiple certificates
apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaBridge
metadata:
  name: my-bridge
spec:
  # ...
  tls:
    trustedCertificates:
      - secretName: my-secret
        certificate: ca.crt
      - secretName: my-other-secret
        certificate: certificate.crt
  # ...
When multiple certificates are stored in the same secret, the secret can be listed multiple times.
An example showing TLS configuration with multiple certificates from the same secret
apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaBridge
metadata:
  name: my-bridge
spec:
  # ...
  tls:
    trustedCertificates:
      - secretName: my-secret
        certificate: ca.crt
      - secretName: my-secret
        certificate: ca2.crt
  # ...
3.5.3.2. Configuring TLS in Kafka Bridge
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
-
If they exist, the name of the
Secret
for the certificate used for TLS Server Authentication, and the key under which the certificate is stored in the Secret
Procedure
(Optional) If they do not already exist, prepare the TLS certificate used in authentication in a file and create a
Secret
.
Note: The secrets created by the Cluster Operator for the Kafka cluster may be used directly.
On OpenShift use:
oc create secret generic my-secret --from-file=my-file.crt
Edit the
tls
property in the KafkaBridge
resource. For example:
apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaBridge
metadata:
  name: my-bridge
spec:
  # ...
  tls:
    trustedCertificates:
      - secretName: my-cluster-cluster-cert
        certificate: ca.crt
  # ...
Create or update the resource.
On OpenShift use:
oc apply -f your-file
3.5.4. Connecting to Kafka brokers with Authentication
By default, Kafka Bridge will try to connect to Kafka brokers without authentication. Authentication is enabled through the KafkaBridge
resource.
3.5.4.1. Authentication support in Kafka Bridge
Authentication is configured through the authentication
property in KafkaBridge.spec
. The authentication
property specifies the type of authentication mechanism to be used and additional configuration details depending on the mechanism. The currently supported authentication types are:
- TLS client authentication
- SASL-based authentication using the SCRAM-SHA-512 mechanism
- SASL-based authentication using the PLAIN mechanism
3.5.4.1.1. TLS Client Authentication
To use TLS client authentication, set the type
property to the value tls
. TLS client authentication uses a TLS certificate to authenticate. The certificate is specified in the certificateAndKey
property and is always loaded from an OpenShift secret. In the secret, the certificate must be stored in X509 format under two different keys: public and private.
TLS client authentication can be used only with TLS connections. For more details about TLS configuration in Kafka Bridge see Section 3.5.3, “Connecting to Kafka brokers using TLS”.
An example TLS client authentication configuration
apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaBridge
metadata:
  name: my-bridge
spec:
  # ...
  authentication:
    type: tls
    certificateAndKey:
      secretName: my-secret
      certificate: public.crt
      key: private.key
  # ...
3.5.4.1.2. SCRAM-SHA-512 authentication
To configure Kafka Bridge to use SASL-based SCRAM-SHA-512 authentication, set the type
property to scram-sha-512
. This authentication mechanism requires a username and password.
-
Specify the username in the
username
property. -
In the
passwordSecret
property, specify a link to a Secret containing the password. The secretName property contains the name of the Secret and the password property contains the name of the key under which the password is stored inside the Secret.
Do not specify the actual password in the password
field.
An example SASL based SCRAM-SHA-512 client authentication configuration
apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaBridge
metadata:
  name: my-bridge
spec:
  # ...
  authentication:
    type: scram-sha-512
    username: my-bridge-user
    passwordSecret:
      secretName: my-bridge-user
      password: my-bridge-password-key
  # ...
3.5.4.1.3. SASL-based PLAIN authentication
To configure Kafka Bridge to use SASL-based PLAIN authentication, set the type
property to plain
. This authentication mechanism requires a username and password.
The SASL PLAIN mechanism will transfer the username and password across the network in cleartext. Only use SASL PLAIN authentication if TLS encryption is enabled.
-
Specify the username in the
username
property. -
In the
passwordSecret
property, specify a link to a Secret containing the password. The secretName property contains the name of the Secret and the password property contains the name of the key under which the password is stored inside the Secret.
Do not specify the actual password in the password
field.
An example showing SASL based PLAIN client authentication configuration
apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaBridge
metadata:
  name: my-bridge
spec:
  # ...
  authentication:
    type: plain
    username: my-bridge-user
    passwordSecret:
      secretName: my-bridge-user
      password: my-bridge-password-key
  # ...
3.5.4.2. Configuring TLS client authentication in Kafka Bridge
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
-
If they exist, the name of the
Secret
with the public and private keys used for TLS Client Authentication, and the keys under which they are stored in the Secret
Procedure
(Optional) If they do not already exist, prepare the keys used for authentication in a file and create the
Secret
.
Note: Secrets created by the User Operator may be used.
On OpenShift use:
oc create secret generic my-secret --from-file=my-public.crt --from-file=my-private.key
Edit the
authentication
property in the KafkaBridge
resource. For example:
apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaBridge
metadata:
  name: my-bridge
spec:
  # ...
  authentication:
    type: tls
    certificateAndKey:
      secretName: my-secret
      certificate: my-public.crt
      key: my-private.key
  # ...
Create or update the resource.
On OpenShift use:
oc apply -f your-file
3.5.4.3. Configuring SCRAM-SHA-512 authentication in Kafka Bridge
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
- Username of the user which should be used for authentication
-
If they exist, the name of the
Secret
with the password used for authentication, and the key under which the password is stored in the Secret
Procedure
(Optional) If they do not already exist, prepare a file with the password used in authentication and create the
Secret
.
Note: Secrets created by the User Operator may be used.
On OpenShift use:
echo -n '1f2d1e2e67df' > <my-password.txt>
oc create secret generic <my-secret> --from-file=<my-password.txt>
Edit the
authentication
property in the KafkaBridge
resource. For example:
apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaBridge
metadata:
  name: my-bridge
spec:
  # ...
  authentication:
    type: scram-sha-512
    username: <my-username>
    passwordSecret:
      secretName: <my-secret>
      password: <my-password.txt>
  # ...
Create or update the resource.
On OpenShift use:
oc apply -f your-file
3.5.5. Kafka Bridge configuration
AMQ Streams allows you to customize the configuration of Apache Kafka Bridge nodes by editing certain options listed in the Apache Kafka documentation for producers and consumers.
Configuration options that can be configured relate to:
- Kafka cluster bootstrap address
- Security (Encryption, Authentication, and Authorization)
- Consumer configuration
- Producer configuration
- HTTP configuration
3.5.5.1. Kafka Bridge Consumer configuration
Kafka Bridge consumer is configured using the properties in KafkaBridge.spec.consumer
. This property contains the Kafka Bridge consumer configuration options as keys. The values can be one of the following JSON types:
- String
- Number
- Boolean
Users can specify and configure the options listed in the Apache Kafka documentation with the exception of those options which are managed directly by AMQ Streams. Specifically, all configuration options with keys equal to or starting with one of the following strings are forbidden:
-
ssl.
-
sasl.
-
security.
-
bootstrap.servers
-
group.id
When one of the forbidden options is present in the config
property, it will be ignored and a warning message will be printed to the Cluster Operator log file. All other options will be passed to Kafka Bridge.
The Cluster Operator does not validate keys or values in the config
object provided. When an invalid configuration is provided, the Kafka Bridge cluster might not start or might become unstable. In this circumstance, fix the configuration in the KafkaBridge.spec.consumer.config
object, then the Cluster Operator can roll out the new configuration to all Kafka Bridge nodes.
Example Kafka Bridge consumer configuration
apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaBridge
metadata:
  name: my-bridge
spec:
  # ...
  consumer:
    config:
      auto.offset.reset: earliest
      enable.auto.commit: true
  # ...
3.5.5.2. Kafka Bridge Producer configuration
Kafka Bridge producer is configured using the properties in KafkaBridge.spec.producer
. This property contains the Kafka Bridge producer configuration options as keys. The values can be one of the following JSON types:
- String
- Number
- Boolean
Users can specify and configure the options listed in the Apache Kafka documentation with the exception of those options which are managed directly by AMQ Streams. Specifically, all configuration options with keys equal to or starting with one of the following strings are forbidden:
-
ssl.
-
sasl.
-
security.
-
bootstrap.servers
The Cluster Operator does not validate keys or values in the config
object provided. When an invalid configuration is provided, the Kafka Bridge cluster might not start or might become unstable. In this circumstance, fix the configuration in the KafkaBridge.spec.producer.config
object, then the Cluster Operator can roll out the new configuration to all Kafka Bridge nodes.
Example Kafka Bridge producer configuration
apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaBridge
metadata:
  name: my-bridge
spec:
  # ...
  producer:
    config:
      acks: 1
      delivery.timeout.ms: 300000
  # ...
3.5.5.3. Kafka Bridge HTTP configuration
Kafka Bridge HTTP configuration is set using the properties in KafkaBridge.spec.http
. This property contains the Kafka Bridge HTTP configuration options.
-
port
When configuring the port
property, avoid the value 8081. This port is used for the healthchecks.
Example Kafka Bridge HTTP configuration
apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaBridge
metadata:
  name: my-bridge
spec:
  # ...
  http:
    port: 8080
  # ...
The port must not be set to 8081 as that will cause a conflict with the healthcheck settings.
3.5.5.4. Configuring Kafka Bridge
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
Procedure
Edit the
kafka
,http
,consumer
orproducer
property in the KafkaBridge
resource. For example:
apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaBridge
metadata:
  name: my-bridge
spec:
  # ...
  bootstrapServers: my-cluster-kafka:9092
  http:
    port: 8080
  consumer:
    config:
      auto.offset.reset: earliest
  producer:
    config:
      delivery.timeout.ms: 300000
  # ...
Create or update the resource.
On OpenShift use:
oc apply -f your-file
3.5.6. Healthchecks
Healthchecks are periodic tests which verify the health of an application. When a Healthcheck probe fails, OpenShift assumes that the application is not healthy and attempts to fix it.
OpenShift supports two types of Healthcheck probes:
- Liveness probes
- Readiness probes
For more details about the probes, see Configure Liveness and Readiness Probes. Both types of probes are used in AMQ Streams components.
Users can configure selected options for liveness and readiness probes.
3.5.6.1. Healthcheck configurations
Liveness and readiness probes can be configured using the livenessProbe
and readinessProbe
properties in the following resources:
-
Kafka.spec.kafka
-
Kafka.spec.kafka.tlsSidecar
-
Kafka.spec.zookeeper
-
Kafka.spec.zookeeper.tlsSidecar
-
Kafka.spec.entityOperator.tlsSidecar
-
Kafka.spec.entityOperator.topicOperator
-
Kafka.spec.entityOperator.userOperator
-
KafkaConnect.spec
-
KafkaConnectS2I.spec
-
KafkaBridge.spec
Both livenessProbe
and readinessProbe
support two additional options:
-
initialDelaySeconds
-
timeoutSeconds
The initialDelaySeconds
property defines the initial delay before the probe is tried for the first time. The default is 15 seconds.
The timeoutSeconds
property defines the timeout of the probe. The default is 5 seconds.
An example of liveness and readiness probe configuration
# ...
readinessProbe:
  initialDelaySeconds: 15
  timeoutSeconds: 5
livenessProbe:
  initialDelaySeconds: 15
  timeoutSeconds: 5
# ...
3.5.6.2. Configuring healthchecks
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
Procedure
Edit the
livenessProbe
or readinessProbe
property in the Kafka, KafkaConnect or KafkaConnectS2I
resource. For example:
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    readinessProbe:
      initialDelaySeconds: 15
      timeoutSeconds: 5
    livenessProbe:
      initialDelaySeconds: 15
      timeoutSeconds: 5
    # ...
  zookeeper:
    # ...
Create or update the resource.
On OpenShift this can be done using
oc apply
:oc apply -f your-file
3.5.7. Container images
AMQ Streams allows you to configure container images which will be used for its components. Overriding container images is recommended only in special situations where you need to use a different container registry, for example, because your network does not allow access to the container repository used by AMQ Streams. In such a case, you should either copy the AMQ Streams images or build them from source. If the configured image is not compatible with AMQ Streams images, it might not work properly.
3.5.7.1. Container image configurations
The container image which should be used for a given component can be specified using the image
property in:
-
Kafka.spec.kafka
-
Kafka.spec.kafka.tlsSidecar
-
Kafka.spec.zookeeper
-
Kafka.spec.zookeeper.tlsSidecar
-
Kafka.spec.entityOperator.topicOperator
-
Kafka.spec.entityOperator.userOperator
-
Kafka.spec.entityOperator.tlsSidecar
-
KafkaConnect.spec
-
KafkaConnectS2I.spec
-
KafkaBridge.spec
3.5.7.1.1. Configuring the Kafka.spec.kafka.image
property
The Kafka.spec.kafka.image
property functions differently from the others, because AMQ Streams supports multiple versions of Kafka, each requiring the own image. The STRIMZI_KAFKA_IMAGES
environment variable of the Cluster Operator configuration is used to provide a mapping between Kafka versions and the corresponding images. This is used in combination with the Kafka.spec.kafka.image
and Kafka.spec.kafka.version
properties as follows:
-
If neither
Kafka.spec.kafka.image
nor Kafka.spec.kafka.version
are given in the custom resource then theversion
will default to the Cluster Operator’s default Kafka version, and the image will be the one corresponding to this version in theSTRIMZI_KAFKA_IMAGES
. -
If
Kafka.spec.kafka.image
is given but Kafka.spec.kafka.version
is not, then the given image will be used and the
will be assumed to be the Cluster Operator’s default Kafka version. -
If
Kafka.spec.kafka.version
is given but Kafka.spec.kafka.image
is not, then the image will be the one corresponding to this version in the
. -
If both
Kafka.spec.kafka.version
and Kafka.spec.kafka.image
are given, the given image will be used, and it will be assumed to contain a Kafka broker with the given version.
It is best to provide just Kafka.spec.kafka.version
and leave the Kafka.spec.kafka.image
property unspecified. This reduces the chances of making a mistake in configuring the Kafka
resource. If you need to change the images used for different versions of Kafka, it is better to configure the Cluster Operator’s STRIMZI_KAFKA_IMAGES
environment variable.
3.5.7.1.2. Configuring the image
property in other resources
For the image
property in the other custom resources, the given value will be used during deployment. If the image
property is missing, the image
specified in the Cluster Operator configuration will be used. If the image
name is not defined in the Cluster Operator configuration, then the default value will be used.
For Kafka broker TLS sidecar:
- Container image specified in the STRIMZI_DEFAULT_TLS_SIDECAR_KAFKA_IMAGE environment variable from the Cluster Operator configuration.
- registry.redhat.io/amq7/amqstreams-kafka-22 container image.
For Zookeeper nodes:
- Container image specified in the STRIMZI_DEFAULT_ZOOKEEPER_IMAGE environment variable from the Cluster Operator configuration.
- registry.redhat.io/amq7/amqstreams-kafka-22 container image.
For Zookeeper node TLS sidecar:
- Container image specified in the STRIMZI_DEFAULT_TLS_SIDECAR_ZOOKEEPER_IMAGE environment variable from the Cluster Operator configuration.
- registry.redhat.io/amq7/amqstreams-kafka-22 container image.
For Topic Operator:
- Container image specified in the STRIMZI_DEFAULT_TOPIC_OPERATOR_IMAGE environment variable from the Cluster Operator configuration.
- registry.redhat.io/amq7/amq-streams-operator:1.2.0 container image.
For User Operator:
- Container image specified in the STRIMZI_DEFAULT_USER_OPERATOR_IMAGE environment variable from the Cluster Operator configuration.
- registry.redhat.io/amq7/amq-streams-operator:1.2.0 container image.
For Entity Operator TLS sidecar:
- Container image specified in the STRIMZI_DEFAULT_TLS_SIDECAR_ENTITY_OPERATOR_IMAGE environment variable from the Cluster Operator configuration.
- registry.redhat.io/amq7/amqstreams-kafka-22 container image.
For Kafka Connect:
- Container image specified in the STRIMZI_DEFAULT_KAFKA_CONNECT_IMAGE environment variable from the Cluster Operator configuration.
- registry.redhat.io/amq7/amqstreams-kafka-22 container image.
For Kafka Connect with Source2image support:
- Container image specified in the STRIMZI_DEFAULT_KAFKA_CONNECT_S2I_IMAGE environment variable from the Cluster Operator configuration.
- registry.redhat.io/amq7/amqstreams-kafka-22 container image.
Overriding container images is recommended only in special situations where you need to use a different container registry, for example, because your network does not allow access to the container repository used by AMQ Streams. In such a case, you should either copy the AMQ Streams images or build them from source. If the configured image is not compatible with AMQ Streams images, it might not work properly.
Example of container image configuration
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    image: my-org/my-image:latest
    # ...
  zookeeper:
    # ...
3.5.7.2. Configuring container images
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
Procedure
Edit the
image
property in the Kafka, KafkaConnect or KafkaConnectS2I
resource. For example:
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    image: my-org/my-image:latest
    # ...
  zookeeper:
    # ...
Create or update the resource. On OpenShift this can be done using oc apply:

oc apply -f your-file
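To confirm which image a pod is actually running, you can read it back from the pod specification. A hypothetical check, assuming a broker pod named my-cluster-kafka-0:

# Prints the image of the first container in the pod
oc get pod my-cluster-kafka-0 -o jsonpath='{.spec.containers[0].image}'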
3.5.8. Configuring pod scheduling
When two applications are scheduled to the same OpenShift node, both might compete for the same resources, such as disk I/O, which can degrade performance. The best ways to avoid such problems are to schedule Kafka pods so that they do not share nodes with other critical workloads, to use the right nodes, and to dedicate a set of nodes only to Kafka.
3.5.8.1. Scheduling pods based on other applications
3.5.8.1.1. Avoiding critical applications sharing nodes
Pod anti-affinity can be used to ensure that critical applications are never scheduled on the same node. When running a Kafka cluster, it is recommended to use pod anti-affinity to ensure that the Kafka brokers do not share nodes with other workloads, such as databases.
3.5.8.1.2. Affinity
Affinity can be configured using the affinity property in the following resources:
- Kafka.spec.kafka.template.pod
- Kafka.spec.zookeeper.template.pod
- Kafka.spec.entityOperator.template.pod
- KafkaConnect.spec.template.pod
- KafkaConnectS2I.spec.template.pod
- KafkaBridge.spec.template.pod
The affinity configuration can include different types of affinity:
- Pod affinity and anti-affinity
- Node affinity
The format of the affinity property follows the OpenShift specification. For more details, see the Kubernetes node and pod affinity documentation.
3.5.8.1.3. Configuring pod anti-affinity in Kafka components
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
Procedure
Edit the affinity property in the resource specifying the cluster deployment. Use labels to specify the pods which should not be scheduled on the same nodes. The topologyKey should be set to kubernetes.io/hostname to specify that the selected pods should not be scheduled on nodes with the same hostname. For example:

apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
spec:
  kafka:
    # ...
    template:
      pod:
        affinity:
          podAntiAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
              - labelSelector:
                  matchExpressions:
                    - key: application
                      operator: In
                      values:
                        - postgresql
                        - mongodb
                topologyKey: "kubernetes.io/hostname"
    # ...
  zookeeper:
    # ...
Create or update the resource. On OpenShift this can be done using oc apply:

oc apply -f your-file
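To verify where the pods were actually scheduled, you can list them together with their nodes. This is a generic OpenShift check, not specific to AMQ Streams:

# The NODE column shows the node each pod landed on; Kafka pods should not
# share a node with pods labeled application=postgresql or application=mongodb
oc get pods -o wide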
3.5.8.2. Scheduling pods to specific nodes
3.5.8.2.1. Node scheduling
The OpenShift cluster usually consists of many different types of worker nodes. Some are optimized for CPU heavy workloads, some for memory, while others might be optimized for storage (fast local SSDs) or network. Using different nodes helps to optimize both costs and performance. To achieve the best possible performance, it is important to allow AMQ Streams components to be scheduled on the right nodes.
OpenShift uses node affinity to schedule workloads onto specific nodes. Node affinity allows you to create a scheduling constraint for the node on which the pod will be scheduled. The constraint is specified as a label selector. You can specify the label using either a built-in node label, such as beta.kubernetes.io/instance-type, or custom labels to select the right node.
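For example, a node affinity rule that selects nodes by the built-in instance type label might look like the following sketch; the value m5.2xlarge is illustrative:

# ...
template:
  pod:
    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
            - matchExpressions:
                # Match only nodes of one (illustrative) instance type
                - key: beta.kubernetes.io/instance-type
                  operator: In
                  values:
                    - m5.2xlarge
# ...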
3.5.8.2.2. Affinity
Affinity can be configured using the affinity property in the following resources:
- Kafka.spec.kafka.template.pod
- Kafka.spec.zookeeper.template.pod
- Kafka.spec.entityOperator.template.pod
- KafkaConnect.spec.template.pod
- KafkaConnectS2I.spec.template.pod
- KafkaBridge.spec.template.pod
The affinity configuration can include different types of affinity:
- Pod affinity and anti-affinity
- Node affinity
The format of the affinity property follows the OpenShift specification. For more details, see the Kubernetes node and pod affinity documentation.
3.5.8.2.3. Configuring node affinity in Kafka components
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
Procedure
Label the nodes where AMQ Streams components should be scheduled. On OpenShift this can be done using oc label:

oc label node your-node node-type=fast-network
Alternatively, some of the existing labels might be reused.
Edit the affinity property in the resource specifying the cluster deployment. For example:

apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
spec:
  kafka:
    # ...
    template:
      pod:
        affinity:
          nodeAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
              nodeSelectorTerms:
                - matchExpressions:
                    - key: node-type
                      operator: In
                      values:
                        - fast-network
    # ...
  zookeeper:
    # ...
Create or update the resource. On OpenShift this can be done using oc apply:

oc apply -f your-file
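To check which nodes carry the label used in the affinity rule, you can list the nodes with the label value displayed as a column:

# -L adds a column showing each node's value for the node-type label
oc get nodes -L node-type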
3.5.8.3. Using dedicated nodes
3.5.8.3.1. Dedicated nodes
Cluster administrators can mark selected OpenShift nodes as tainted. Nodes with taints are excluded from regular scheduling and normal pods will not be scheduled to run on them. Only services which can tolerate the taint set on the node can be scheduled on it. The only other services running on such nodes will be system services such as log collectors or software defined networks.
Taints can be used to create dedicated nodes. Running Kafka and its components on dedicated nodes can have many advantages. No other applications run on the same nodes to disturb Kafka or consume the resources it needs, which can lead to improved performance and stability.
To schedule Kafka pods on the dedicated nodes, configure node affinity and tolerations.
3.5.8.3.2. Affinity
Affinity can be configured using the affinity property in the following resources:
- Kafka.spec.kafka.template.pod
- Kafka.spec.zookeeper.template.pod
- Kafka.spec.entityOperator.template.pod
- KafkaConnect.spec.template.pod
- KafkaConnectS2I.spec.template.pod
- KafkaBridge.spec.template.pod
The affinity configuration can include different types of affinity:
- Pod affinity and anti-affinity
- Node affinity
The format of the affinity property follows the OpenShift specification. For more details, see the Kubernetes node and pod affinity documentation.
3.5.8.3.3. Tolerations
Tolerations can be configured using the tolerations property in the following resources:
- Kafka.spec.kafka.template.pod
- Kafka.spec.zookeeper.template.pod
- Kafka.spec.entityOperator.template.pod
- KafkaConnect.spec.template.pod
- KafkaConnectS2I.spec.template.pod
- KafkaBridge.spec.template.pod
The format of the tolerations property follows the OpenShift specification. For more details, see the Kubernetes taints and tolerations documentation.
3.5.8.3.4. Setting up dedicated nodes and scheduling pods on them
Prerequisites
- An OpenShift cluster
- A running Cluster Operator
Procedure
- Select the nodes which should be used as dedicated.
- Make sure there are no workloads scheduled on these nodes.
Set the taints on the selected nodes. On OpenShift this can be done using oc adm taint:

oc adm taint node your-node dedicated=Kafka:NoSchedule
Add a label to the selected nodes as well. On OpenShift this can be done using oc label:

oc label node your-node dedicated=Kafka
Edit the affinity and tolerations properties in the resource specifying the cluster deployment. For example:

apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
spec:
  kafka:
    # ...
    template:
      pod:
        tolerations:
          - key: "dedicated"
            operator: "Equal"
            value: "Kafka"
            effect: "NoSchedule"
        affinity:
          nodeAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
              nodeSelectorTerms:
                - matchExpressions:
                    - key: dedicated
                      operator: In
                      values:
                        - Kafka
    # ...
  zookeeper:
    # ...
Create or update the resource. On OpenShift this can be done using oc apply:

oc apply -f your-file
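To verify the taint and label on a dedicated node, describe the node; the node name your-node follows the example above:

# The Taints and Labels sections confirm the dedicated=Kafka settings
oc describe node your-node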
3.5.9. List of resources created as part of Kafka Bridge cluster
The following resources are created by the Cluster Operator in the OpenShift cluster:
- bridge-cluster-name-bridge
- Deployment in charge of creating the Kafka Bridge worker node pods.
- bridge-cluster-name-bridge-service
- Service which exposes the REST interface of the Kafka Bridge cluster.
- bridge-cluster-name-bridge-config
- ConfigMap which contains the Kafka Bridge ancillary configuration and is mounted as a volume by the Kafka Bridge pods.
- bridge-cluster-name-bridge
- Pod Disruption Budget configured for the Kafka Bridge worker nodes.
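One way to see these resources together is to filter on the strimzi.io/cluster label that the Cluster Operator applies to the resources it manages. A hedged sketch, assuming a Kafka Bridge resource named my-bridge:

# Lists the Deployment, Service, and ConfigMap created for the Bridge
oc get deployment,service,configmap -l strimzi.io/cluster=my-bridge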
3.6. Customizing deployments
AMQ Streams creates several OpenShift resources, such as Deployments, StatefulSets, Pods, and Services, which are managed by OpenShift operators. Only the operator that is responsible for managing a particular OpenShift resource can change that resource. If you try to manually change an operator-managed OpenShift resource, the operator will revert your changes.
However, changing an operator-managed OpenShift resource can be useful if you want to perform certain tasks, such as:
- Adding custom labels or annotations that control how Pods are treated by Istio or other services
- Managing how Loadbalancer-type Services are created by the cluster
You can make these types of changes using the template property in the AMQ Streams custom resources.
3.6.1. Template properties
You can use the template property to configure aspects of the resource creation process. You can include it in the following resources and properties:
- Kafka.spec.kafka
- Kafka.spec.zookeeper
- Kafka.spec.entityOperator
- KafkaConnect.spec
- KafkaConnectS2I.spec
- KafkaMirrorMaker.spec
In the following example, the template property is used to modify the labels in a Kafka broker’s StatefulSet:
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
  labels:
    app: my-cluster
spec:
  kafka:
    # ...
    template:
      statefulset:
        metadata:
          labels:
            mylabel: myvalue
    # ...
Supported resources in Kafka cluster
When defined in a Kafka cluster, the template object can have the following fields:
statefulset
- Configures the StatefulSet used by the Kafka broker.
pod
- Configures the Kafka broker Pods created by the StatefulSet.
bootstrapService
- Configures the bootstrap service used by clients running within OpenShift to connect to the Kafka broker.
brokersService
- Configures the headless service.
externalBootstrapService
- Configures the bootstrap service used by clients connecting to Kafka brokers from outside of OpenShift.
perPodService
- Configures the per-Pod services used by clients connecting to the Kafka broker from outside OpenShift to access individual brokers.
externalBootstrapRoute
- Configures the bootstrap route used by clients connecting to the Kafka brokers from outside of OpenShift using OpenShift Routes.
perPodRoute
- Configures the per-Pod routes used by clients connecting to the Kafka broker from outside OpenShift to access individual brokers using OpenShift Routes.
podDisruptionBudget
- Configures the Pod Disruption Budget for the Kafka broker StatefulSet.
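As an illustration of how these fields combine, the following sketch customizes two of them at once; the annotation and label values are illustrative only:

spec:
  kafka:
    # ...
    template:
      externalBootstrapService:
        metadata:
          annotations:
            # Illustrative annotation, for example to request an internal load balancer
            service.beta.kubernetes.io/aws-load-balancer-internal: "true"
      podDisruptionBudget:
        metadata:
          labels:
            tier: messaging
    # ...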
Supported resources in Zookeeper cluster
When defined in a Zookeeper cluster, the template object can have the following fields:
statefulset
- Configures the Zookeeper StatefulSet.
pod
- Configures the Zookeeper Pods created by the StatefulSet.
clientsService
- Configures the service used by clients to access Zookeeper.
nodesService
- Configures the headless service.
podDisruptionBudget
- Configures the Pod Disruption Budget for the Zookeeper StatefulSet.
Supported resources in Entity Operator
When defined in an Entity Operator, the template object can have the following fields:
deployment
- Configures the Deployment used by the Entity Operator.
pod
- Configures the Entity Operator Pod created by the Deployment.
Supported resources in Kafka Connect and Kafka Connect with Source2Image support
When used with Kafka Connect and Kafka Connect with Source2Image support, the template object can have the following fields:
deployment
- Configures the Kafka Connect Deployment.
pod
- Configures the Kafka Connect Pods created by the Deployment.
apiService
- Configures the service used by the Kafka Connect REST API.
podDisruptionBudget
- Configures the Pod Disruption Budget for the Kafka Connect Deployment.
Supported resources in Kafka Mirror Maker
When used with Kafka Mirror Maker, the template object can have the following fields:
deployment
- Configures the Kafka Mirror Maker Deployment.
pod
- Configures the Kafka Mirror Maker Pods created by the Deployment.
podDisruptionBudget
- Configures the Pod Disruption Budget for the Kafka Mirror Maker Deployment.
3.6.2. Labels and Annotations
For every resource, you can configure additional Labels and Annotations. Labels and Annotations are configured in the metadata property. For example:
# ...
template:
  statefulset:
    metadata:
      labels:
        label1: value1
        label2: value2
      annotations:
        annotation1: value1
        annotation2: value2
# ...
The labels and annotations fields can contain any labels or annotations that do not contain the reserved string strimzi.io. Labels and annotations containing strimzi.io are used internally by AMQ Streams and cannot be configured by the user.
3.6.3. Customizing Pods
In addition to Labels and Annotations, you can customize some other fields on Pods. These fields are described in the following table and affect how the Pod is created.
Field | Description |
---|---|
terminationGracePeriodSeconds | Defines the period of time, in seconds, by which the Pod must have terminated gracefully. After the grace period, the Pod and its containers are forcefully terminated (killed). The default value is 30 seconds. NOTE: You might need to increase the grace period for very large Kafka clusters, so that the Kafka brokers have enough time to transfer their work to another broker before they are terminated. |
imagePullSecrets | Defines a list of references to OpenShift Secrets that can be used for pulling container images from private repositories. For more information about how to create a Secret with the credentials, see Pull an Image from a Private Registry. NOTE: When both the STRIMZI_IMAGE_PULL_SECRETS environment variable in the Cluster Operator and the imagePullSecrets option are specified, only the imagePullSecrets value is used. |
securityContext | Configures pod-level security attributes for containers running as part of a given Pod. For more information about configuring SecurityContext, see Configure a Security Context for a Pod or Container. |
These fields are effective on each type of cluster (Kafka and Zookeeper; Kafka Connect and Kafka Connect with S2I support; and Kafka Mirror Maker).
The following example shows these customized fields on a template property:

# ...
template:
  pod:
    metadata:
      labels:
        label1: value1
    imagePullSecrets:
      - name: my-docker-credentials
    securityContext:
      runAsUser: 1000001
      fsGroup: 0
    terminationGracePeriodSeconds: 120
# ...
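The Secret referenced by imagePullSecrets can be created with oc. A hypothetical example, with the registry address and credentials as placeholder values:

# Creates a docker-registry type Secret named my-docker-credentials
oc create secret docker-registry my-docker-credentials \
  --docker-server=my-registry.example.com \
  --docker-username=my-user \
  --docker-password=my-password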
Additional resources
- For more information, see Section C.40, “PodTemplate schema reference”.
3.6.4. Customizing the image pull policy
AMQ Streams allows you to customize the image pull policy for containers in all pods deployed by the Cluster Operator. The image pull policy is configured using the STRIMZI_IMAGE_PULL_POLICY environment variable in the Cluster Operator deployment, which can be set to three different values:
Always
- Container images are pulled from the registry every time the pod is started or restarted.
IfNotPresent
- Container images are pulled from the registry only if they are not already present on the host.
Never
- Container images are never pulled from the registry.
Currently, the image pull policy can be customized only for all Kafka, Kafka Connect, and Kafka Mirror Maker clusters at once. Changing the policy results in a rolling update of all your Kafka, Kafka Connect, and Kafka Mirror Maker clusters.
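For example, the variable can be changed on a running Cluster Operator with oc set env; the Deployment name strimzi-cluster-operator is an assumption:

# Changing the variable triggers a rolling update of all Kafka, Kafka Connect,
# and Kafka Mirror Maker clusters
oc set env deployment/strimzi-cluster-operator STRIMZI_IMAGE_PULL_POLICY=IfNotPresent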
Additional resources
- For more information about Cluster Operator configuration, see Section 4.1, “Cluster Operator”.
- For more information about image pull policies, see the Images section of the Kubernetes documentation.
3.6.5. Customizing Pod Disruption Budgets
AMQ Streams creates a pod disruption budget for every new StatefulSet or Deployment. By default, these pod disruption budgets allow only a single pod to be unavailable at a given time, by setting the maxUnavailable value in the PodDisruptionBudget.spec resource to 1. You can change the number of unavailable pods allowed by changing the default value of maxUnavailable in the pod disruption budget template. This template applies to each type of cluster (Kafka and Zookeeper; Kafka Connect and Kafka Connect with S2I support; and Kafka Mirror Maker).
The following example shows customized podDisruptionBudget fields on a template property:
# ...
template:
  podDisruptionBudget:
    metadata:
      labels:
        key1: label1
        key2: label2
      annotations:
        key1: label1
        key2: label2
    maxUnavailable: 1
# ...
Additional resources
- For more information, see Section C.41, “PodDisruptionBudgetTemplate schema reference”.
- The Disruptions chapter of the Kubernetes documentation.
3.6.6. Customizing deployments
This procedure describes how to customize Labels
of a Kafka cluster.
Prerequisites
- An OpenShift cluster.
- A running Cluster Operator.
Procedure
Edit the template property in the Kafka, KafkaConnect, KafkaConnectS2I, or KafkaMirrorMaker resource. For example, to modify the labels for the Kafka broker StatefulSet, use:

apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
  labels:
    app: my-cluster
spec:
  kafka:
    # ...
    template:
      statefulset:
        metadata:
          labels:
            mylabel: myvalue
    # ...
Create or update the resource. On OpenShift, use oc apply:

oc apply -f your-file

Alternatively, use oc edit:

oc edit Resource ClusterName
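To confirm that the custom label was propagated, display the labels on the generated StatefulSet. A hypothetical check, assuming the cluster is named my-cluster (the Kafka broker StatefulSet is named cluster-name-kafka):

# Shows all labels on the Kafka broker StatefulSet
oc get statefulset my-cluster-kafka --show-labels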