
Chapter 13. Custom resource API reference


13.1. Common configuration properties

Common configuration properties apply to more than one resource.

13.1.1. replicas

Use the replicas property to configure replicas.

The type of replication depends on the resource.

  • KafkaTopic uses a replication factor to configure the number of replicas of each partition within a Kafka cluster.
  • Kafka components use replicas to configure the number of pods in a deployment to provide better availability and scalability.
Note

When running a Kafka component on OpenShift, it may not be necessary to run multiple replicas for high availability. When the node where the component is deployed crashes, OpenShift automatically reschedules the Kafka component pod to a different node. However, running Kafka components with multiple replicas can provide faster failover times, as the other nodes will already be up and running.
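
For example, the following sketch shows replicas configured for the Kafka and ZooKeeper pods and a replication factor of 3 for a KafkaTopic. The cluster and topic names are illustrative.

Example replicas configuration

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 3
    # ...
  zookeeper:
    replicas: 3
    # ...
---
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-topic
  labels:
    strimzi.io/cluster: my-cluster
spec:
  partitions: 3
  replicas: 3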

13.1.2. bootstrapServers

Use the bootstrapServers property to configure a list of bootstrap servers.

The bootstrap server lists can refer to Kafka clusters that are not deployed in the same OpenShift cluster. They can also refer to a Kafka cluster not deployed by AMQ Streams.

If the Kafka cluster is deployed by AMQ Streams in the same OpenShift cluster, each list should ideally contain the Kafka cluster bootstrap service, which is named CLUSTER-NAME-kafka-bootstrap, and a port number. If the Kafka cluster is deployed by AMQ Streams but on a different OpenShift cluster, the list content depends on the approach used for exposing the clusters (routes, ingress, nodeports or loadbalancers).

When using Kafka with a Kafka cluster not managed by AMQ Streams, you can specify the bootstrap servers list according to the configuration of the given cluster.
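
For example, a minimal sketch for a KafkaBridge resource connecting to a Kafka cluster named my-cluster deployed by AMQ Streams in the same OpenShift cluster. The cluster name and the use of a plain listener on port 9092 are assumptions; use the values for your deployment.

Example bootstrapServers configuration

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaBridge
metadata:
  name: my-bridge
spec:
  bootstrapServers: my-cluster-kafka-bootstrap:9092
  # ...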

13.1.3. ssl

Use the three allowed ssl configuration options for client connection using a specific cipher suite for a TLS version. A cipher suite combines algorithms for secure connection and data transfer.

You can also configure the ssl.endpoint.identification.algorithm property to enable or disable hostname verification.

Example SSL configuration

# ...
spec:
  config:
    ssl.cipher.suites: "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" 1
    ssl.enabled.protocols: "TLSv1.2" 2
    ssl.protocol: "TLSv1.2" 3
    ssl.endpoint.identification.algorithm: HTTPS 4
# ...

1
The cipher suite for TLS using a combination of ECDHE key exchange mechanism, RSA authentication algorithm, AES bulk encryption algorithm, and SHA384 MAC algorithm.
2
The SSL protocol TLSv1.2 is enabled.
3
Specifies the TLSv1.2 protocol to generate the SSL context. Allowed values are TLSv1.1 and TLSv1.2.
4
Hostname verification is enabled by setting to HTTPS. An empty string disables the verification.

13.1.4. trustedCertificates

Having set tls to configure TLS encryption, use the trustedCertificates property to provide a list of secrets with key names under which the certificates are stored in X.509 format.

You can use the secrets created by the Cluster Operator for the Kafka cluster, or you can create your own TLS certificate file, then create a Secret from the file:

oc create secret generic MY-SECRET \
--from-file=MY-TLS-CERTIFICATE-FILE.crt

Example TLS encryption configuration

tls:
  trustedCertificates:
    - secretName: my-cluster-cluster-cert
      certificate: ca.crt
    - secretName: my-cluster-cluster-cert
      certificate: ca2.crt

If certificates are stored in the same secret, the secret can be listed multiple times.

If you want to enable TLS, but use the default set of public certification authorities shipped with Java, you can specify trustedCertificates as an empty array:

Example of enabling TLS with the default Java certificates

tls:
  trustedCertificates: []

For information on configuring TLS client authentication, see KafkaClientAuthenticationTls schema reference.

13.1.5. resources

You request CPU and memory resources for components. Limits specify the maximum resources that can be consumed by a given container.

Resource requests and limits for the Topic Operator and User Operator are set in the Kafka resource.

Use the resources.requests and resources.limits properties to configure resource requests and limits.

For every deployed container, AMQ Streams allows you to request specific resources and define the maximum consumption of those resources.

AMQ Streams supports requests and limits for the following types of resources:

  • cpu
  • memory

AMQ Streams uses the OpenShift syntax for specifying these resources.

For more information about managing computing resources on OpenShift, see Managing Compute Resources for Containers.

Resource requests

Requests specify the resources to reserve for a given container. Reserving the resources ensures that they are always available.

Important

If the resource request is for more than the available free resources in the OpenShift cluster, the pod is not scheduled.

A request may be configured for one or more supported resources.

Example resource requests configuration

# ...
resources:
  requests:
    cpu: 12
    memory: 64Gi
# ...

Resource limits

Limits specify the maximum resources that can be consumed by a given container. The limit is not reserved and might not always be available. A container can use the resources up to the limit only when they are available. Resource limits should always be higher than the resource requests.

A limit may be configured for one or more supported resources.

Example resource limits configuration

# ...
resources:
  limits:
    cpu: 12
    memory: 64Gi
# ...

Supported CPU formats

CPU requests and limits are supported in the following formats:

  • Number of CPU cores as integer (5 CPU core) or decimal (2.5 CPU core).
  • Number of millicpus / millicores (100m), where 1000 millicores is the same as 1 CPU core.

Example CPU units

# ...
resources:
  requests:
    cpu: 500m
  limits:
    cpu: 2.5
# ...

Note

The computing power of 1 CPU core may differ depending on the platform where OpenShift is deployed.

For more information on CPU specification, see the Meaning of CPU.

Supported memory formats

Memory requests and limits are specified in megabytes, gigabytes, mebibytes, and gibibytes.

  • To specify memory in megabytes, use the M suffix. For example 1000M.
  • To specify memory in gigabytes, use the G suffix. For example 1G.
  • To specify memory in mebibytes, use the Mi suffix. For example 1000Mi.
  • To specify memory in gibibytes, use the Gi suffix. For example 1Gi.

Example resources using different memory units

# ...
resources:
  requests:
    memory: 512Mi
  limits:
    memory: 2Gi
# ...

For more details about memory specification and additional supported units, see Meaning of memory.

13.1.6. image

Use the image property to configure the container image used by the component.

Overriding container images is recommended only in special situations where you need to use a different container registry or a customized image.

For example, if your network does not allow access to the container repository used by AMQ Streams, you can copy the AMQ Streams images or build them from the source. However, if the configured image is not compatible with AMQ Streams images, it might not work properly.

A copy of the container image might also be customized and used for debugging.

You can specify which container image to use for a component using the image property in the following resources:

  • Kafka.spec.kafka
  • Kafka.spec.zookeeper
  • Kafka.spec.entityOperator.topicOperator
  • Kafka.spec.entityOperator.userOperator
  • Kafka.spec.entityOperator.tlsSidecar
  • KafkaConnect.spec
  • KafkaConnectS2I.spec
  • KafkaMirrorMaker.spec
  • KafkaMirrorMaker2.spec
  • KafkaBridge.spec

Configuring the image property for Kafka, Kafka Connect, and Kafka MirrorMaker

Kafka, Kafka Connect (including Kafka Connect with S2I support), and Kafka MirrorMaker support multiple versions of Kafka. Each component requires its own image. The default images for the different Kafka versions are configured in the following environment variables:

  • STRIMZI_KAFKA_IMAGES
  • STRIMZI_KAFKA_CONNECT_IMAGES
  • STRIMZI_KAFKA_CONNECT_S2I_IMAGES
  • STRIMZI_KAFKA_MIRROR_MAKER_IMAGES

These environment variables contain mappings between the Kafka versions and their corresponding images. The mappings are used together with the image and version properties:

  • If neither image nor version are given in the custom resource then the version will default to the Cluster Operator’s default Kafka version, and the image will be the one corresponding to this version in the environment variable.
  • If image is given but version is not, then the given image is used and the version is assumed to be the Cluster Operator’s default Kafka version.
  • If version is given but image is not, then the image that corresponds to the given version in the environment variable is used.
  • If both version and image are given, then the given image is used. The image is assumed to contain a Kafka image with the given version.

The image and version for the different components can be configured in the following properties:

  • For Kafka in spec.kafka.image and spec.kafka.version.
  • For Kafka Connect, Kafka Connect S2I, and Kafka MirrorMaker in spec.image and spec.version.
Warning

It is recommended to provide only the version and leave the image property unspecified. This reduces the chance of making a mistake when configuring the custom resource. If you need to change the images used for different versions of Kafka, it is preferable to configure the Cluster Operator’s environment variables.

Configuring the image property in other resources

For the image property in the other custom resources, the given value will be used during deployment. If the image property is missing, the image specified in the Cluster Operator configuration will be used. If the image name is not defined in the Cluster Operator configuration, then the default value will be used.

  • For Topic Operator:

    1. Container image specified in the STRIMZI_DEFAULT_TOPIC_OPERATOR_IMAGE environment variable from the Cluster Operator configuration.
    2. registry.redhat.io/amq7/amq-streams-rhel7-operator:1.7.0 container image.
  • For User Operator:

    1. Container image specified in the STRIMZI_DEFAULT_USER_OPERATOR_IMAGE environment variable from the Cluster Operator configuration.
    2. registry.redhat.io/amq7/amq-streams-rhel7-operator:1.7.0 container image.
  • For Entity Operator TLS sidecar:

    1. Container image specified in the STRIMZI_DEFAULT_TLS_SIDECAR_ENTITY_OPERATOR_IMAGE environment variable from the Cluster Operator configuration.
    2. registry.redhat.io/amq7/amq-streams-kafka-27-rhel7:1.7.0 container image.
  • For Kafka Exporter:

    1. Container image specified in the STRIMZI_DEFAULT_KAFKA_EXPORTER_IMAGE environment variable from the Cluster Operator configuration.
    2. registry.redhat.io/amq7/amq-streams-kafka-27-rhel7:1.7.0 container image.
  • For Kafka Bridge:

    1. Container image specified in the STRIMZI_DEFAULT_KAFKA_BRIDGE_IMAGE environment variable from the Cluster Operator configuration.
    2. registry.redhat.io/amq7/amq-streams-bridge-rhel7:1.7.0 container image.
  • For Kafka broker initializer:

    1. Container image specified in the STRIMZI_DEFAULT_KAFKA_INIT_IMAGE environment variable from the Cluster Operator configuration.
    2. registry.redhat.io/amq7/amq-streams-rhel7-operator:1.7.0 container image.

Example of container image configuration

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    image: my-org/my-image:latest
    # ...
  zookeeper:
    # ...

13.1.7. livenessProbe and readinessProbe healthchecks

Use the livenessProbe and readinessProbe properties to configure healthcheck probes supported in AMQ Streams.

Healthchecks are periodic tests which verify the health of an application. When a healthcheck probe fails, OpenShift assumes that the application is not healthy and attempts to fix it.

For more details about the probes, see Configure Liveness and Readiness Probes.

Both livenessProbe and readinessProbe support the following options:

  • initialDelaySeconds
  • timeoutSeconds
  • periodSeconds
  • successThreshold
  • failureThreshold

Example of liveness and readiness probe configuration

# ...
readinessProbe:
  initialDelaySeconds: 15
  timeoutSeconds: 5
livenessProbe:
  initialDelaySeconds: 15
  timeoutSeconds: 5
# ...

For more information about the livenessProbe and readinessProbe options, see Probe schema reference.

13.1.8. metricsConfig

Use the metricsConfig property to enable and configure Prometheus metrics.

The metricsConfig property contains a reference to a ConfigMap containing additional configuration for the Prometheus JMX exporter. AMQ Streams supports Prometheus metrics using Prometheus JMX exporter to convert the JMX metrics supported by Apache Kafka and ZooKeeper to Prometheus metrics.

To enable Prometheus metrics export without further configuration, you can reference a ConfigMap containing an empty file under metricsConfig.valueFrom.configMapKeyRef.key. When referencing an empty file, all metrics are exposed as long as they have not been renamed.

Example ConfigMap with metrics configuration for Kafka

kind: ConfigMap
apiVersion: v1
metadata:
  name: my-configmap
data:
  my-key: |
    lowercaseOutputName: true
    rules:
    # Special cases and very specific rules
    - pattern: kafka.server<type=(.+), name=(.+), clientId=(.+), topic=(.+), partition=(.*)><>Value
      name: kafka_server_$1_$2
      type: GAUGE
      labels:
       clientId: "$3"
       topic: "$4"
       partition: "$5"
    # further configuration

Example metrics configuration for Kafka

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    metricsConfig:
      type: jmxPrometheusExporter
      valueFrom:
        configMapKeyRef:
          name: my-config-map
          key: my-key
    # ...
  zookeeper:
    # ...

When metrics are enabled, they are exposed on port 9404.

When the metricsConfig (or deprecated metrics) property is not defined in the resource, the Prometheus metrics are disabled.

For more information about setting up and deploying Prometheus and Grafana, see Introducing Metrics to Kafka in the Deploying and Upgrading AMQ Streams on OpenShift guide.

13.1.9. jvmOptions

The following AMQ Streams components run inside a Java Virtual Machine (JVM):

  • Apache Kafka
  • Apache ZooKeeper
  • Apache Kafka Connect
  • Apache Kafka MirrorMaker
  • AMQ Streams Kafka Bridge

To optimize their performance on different platforms and architectures, you configure the jvmOptions property in the following resources:

  • Kafka.spec.kafka
  • Kafka.spec.zookeeper
  • KafkaConnect.spec
  • KafkaConnectS2I.spec
  • KafkaMirrorMaker.spec
  • KafkaMirrorMaker2.spec
  • KafkaBridge.spec

You can specify the following options in your configuration:

-Xms
Minimum initial allocation heap size when the JVM starts.
-Xmx
Maximum heap size.
-XX
Advanced runtime options for the JVM.
javaSystemProperties
Additional system properties.
gcLoggingEnabled
Enables garbage collector logging.

The full schema of jvmOptions is described in JvmOptions schema reference.

Note

The units accepted by JVM settings, such as -Xmx and -Xms, are the same units accepted by the JDK java binary in the corresponding image. Therefore, 1g or 1G means 1,073,741,824 bytes, and Gi is not a valid unit suffix. This is different from the units used for memory requests and limits, which follow the OpenShift convention where 1G means 1,000,000,000 bytes, and 1Gi means 1,073,741,824 bytes.

-Xms and -Xmx options

The default values used for -Xms and -Xmx depend on whether there is a memory limit configured for the container.

  • If there is a memory limit, the JVM’s minimum and maximum memory is set to a value corresponding to the limit.
  • If there is no memory limit, the JVM’s minimum memory is set to 128M. The JVM’s maximum memory is not defined to allow the memory to increase as needed. This is ideal for single node environments in test and development.

Before setting -Xmx explicitly consider the following:

  • The JVM’s overall memory usage will be approximately 4 × the maximum heap, as configured by -Xmx.
  • If -Xmx is set without also setting an appropriate OpenShift memory limit, it is possible that the container will be killed should the OpenShift node experience memory pressure from other Pods running on it.
  • If -Xmx is set without also setting an appropriate OpenShift memory request, it is possible that the container will be scheduled to a node with insufficient memory. In this case, the container will not start but crash immediately if -Xms is set to -Xmx, or at a later time if not.

It is recommended to:

  • Set the memory request and the memory limit to the same value
  • Use a memory request that is at least 4.5 × the -Xmx
  • Consider setting -Xms to the same value as -Xmx

In this example, the JVM uses 2 GiB (=2,147,483,648 bytes) for its heap. Its total memory usage is approximately 8 GiB.

Example -Xmx and -Xms configuration

# ...
jvmOptions:
  "-Xmx": "2g"
  "-Xms": "2g"
# ...

Setting the same value for initial (-Xms) and maximum (-Xmx) heap sizes avoids the JVM having to allocate memory after startup, at the cost of possibly allocating more heap than is really needed.

Important

Containers performing lots of disk I/O, such as Kafka broker containers, require available memory for use as an operating system page cache. On such containers, the requested memory should be significantly higher than the memory used by the JVM.

-XX option

-XX options are used to configure the KAFKA_JVM_PERFORMANCE_OPTS option of Apache Kafka.

Example -XX configuration

jvmOptions:
  "-XX":
    "UseG1GC": true
    "MaxGCPauseMillis": 20
    "InitiatingHeapOccupancyPercent": 35
    "ExplicitGCInvokesConcurrent": true

JVM options resulting from the -XX configuration

-XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:+ExplicitGCInvokesConcurrent -XX:-UseParNewGC

Note

When no -XX options are specified, the default Apache Kafka configuration of KAFKA_JVM_PERFORMANCE_OPTS is used.

javaSystemProperties

javaSystemProperties are used to configure additional Java system properties, such as debugging utilities.

Example javaSystemProperties configuration

jvmOptions:
  javaSystemProperties:
    - name: javax.net.debug
      value: ssl

13.1.10. Garbage collector logging

The jvmOptions property also allows you to enable and disable garbage collector (GC) logging. GC logging is disabled by default. To enable it, set the gcLoggingEnabled property as follows:

Example GC logging configuration

# ...
jvmOptions:
  gcLoggingEnabled: true
# ...

13.2. Schema properties

13.2.1. Kafka schema reference

Property | Description

spec

The specification of the Kafka and ZooKeeper clusters, and Topic Operator.

KafkaSpec

status

The status of the Kafka and ZooKeeper clusters, and Topic Operator.

KafkaStatus
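
To inspect the status reported in a deployed Kafka resource, you can, for example, retrieve it with oc. This is a minimal sketch, assuming a cluster named my-cluster:

oc get kafka my-cluster -o jsonpath='{.status.conditions}{"\n"}'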

13.2.2. KafkaSpec schema reference

Used in: Kafka

Property | Description

kafka

Configuration of the Kafka cluster.

KafkaClusterSpec

zookeeper

Configuration of the ZooKeeper cluster.

ZookeeperClusterSpec

topicOperator

The topicOperator property has been deprecated, and should now be configured using spec.entityOperator.topicOperator. The property topicOperator is removed in API version v1beta2. Configuration of the Topic Operator.

TopicOperatorSpec

entityOperator

Configuration of the Entity Operator.

EntityOperatorSpec

clusterCa

Configuration of the cluster certificate authority.

CertificateAuthority

clientsCa

Configuration of the clients certificate authority.

CertificateAuthority

cruiseControl

Configuration for Cruise Control deployment. Deploys a Cruise Control instance when specified.

CruiseControlSpec

kafkaExporter

Configuration of the Kafka Exporter. Kafka Exporter can provide additional metrics, for example lag of consumer group at topic/partition.

KafkaExporterSpec

maintenanceTimeWindows

A list of time windows for maintenance tasks (that is, certificates renewal). Each time window is defined by a cron expression.

string array
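
For example, the following sketch shows a Kafka resource that deploys the Entity Operator and restricts maintenance tasks such as certificate renewal to a maintenance window. The cluster name and the cron expression are illustrative; adjust them to your environment.

Example KafkaSpec configuration with Entity Operator and maintenance time windows

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
  zookeeper:
    # ...
  entityOperator:
    topicOperator: {}
    userOperator: {}
  maintenanceTimeWindows:
    - "* * 0-1 ? * SUN,MON,TUE,WED,THU,FRI,SAT"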

13.2.3. KafkaClusterSpec schema reference

Used in: KafkaSpec

Full list of KafkaClusterSpec schema properties

Configures a Kafka cluster.

13.2.3.1. listeners

Use the listeners property to configure listeners to provide access to Kafka brokers.

Example configuration of a plain (unencrypted) listener without authentication

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
spec:
  kafka:
    # ...
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
    # ...
  zookeeper:
    # ...

13.2.3.2. config

Use the config properties to configure Kafka broker options as keys.

Standard Apache Kafka configuration may be provided, restricted to those properties not managed directly by AMQ Streams.

Configuration options that cannot be configured relate to:

  • Security (Encryption, Authentication, and Authorization)
  • Listener configuration
  • Broker ID configuration
  • Configuration of log data directories
  • Inter-broker communication
  • ZooKeeper connectivity

The values can be one of the following JSON types:

  • String
  • Number
  • Boolean

You can specify and configure the options listed in the Apache Kafka documentation with the exception of those options that are managed directly by AMQ Streams. Specifically, all configuration options with keys equal to or starting with one of the following strings are forbidden:

  • listeners
  • advertised.
  • broker.
  • listener.
  • host.name
  • port
  • inter.broker.listener.name
  • sasl.
  • ssl.
  • security.
  • password.
  • principal.builder.class
  • log.dir
  • zookeeper.connect
  • zookeeper.set.acl
  • authorizer.
  • super.user

When a forbidden option is present in the config property, it is ignored and a warning message is printed to the Cluster Operator log file. All other supported options are passed to Kafka.

There are exceptions to the forbidden options. For client connection using a specific cipher suite for a TLS version, you can configure allowed ssl properties. You can also configure the zookeeper.connection.timeout.ms property to set the maximum time allowed for establishing a ZooKeeper connection.

Example Kafka broker configuration

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    config:
      num.partitions: 1
      num.recovery.threads.per.data.dir: 1
      default.replication.factor: 3
      offsets.topic.replication.factor: 3
      transaction.state.log.replication.factor: 3
      transaction.state.log.min.isr: 1
      log.retention.hours: 168
      log.segment.bytes: 1073741824
      log.retention.check.interval.ms: 300000
      num.network.threads: 3
      num.io.threads: 8
      socket.send.buffer.bytes: 102400
      socket.receive.buffer.bytes: 102400
      socket.request.max.bytes: 104857600
      group.initial.rebalance.delay.ms: 0
      ssl.cipher.suites: "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"
      ssl.enabled.protocols: "TLSv1.2"
      ssl.protocol: "TLSv1.2"
      zookeeper.connection.timeout.ms: 6000
    # ...

13.2.3.3. brokerRackInitImage

When rack awareness is enabled, Kafka broker pods use an init container to collect the labels from the OpenShift cluster nodes. The container image used for this container can be configured using the brokerRackInitImage property. When the brokerRackInitImage field is missing, the following images are used in order of priority:

  1. Container image specified in STRIMZI_DEFAULT_KAFKA_INIT_IMAGE environment variable in the Cluster Operator configuration.
  2. registry.redhat.io/amq7/amq-streams-rhel7-operator:1.7.0 container image.

Example brokerRackInitImage configuration

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    rack:
      topologyKey: topology.kubernetes.io/zone
    brokerRackInitImage: my-org/my-image:latest
    # ...

Note

Overriding container images is recommended only in special situations where you need to use a different container registry, for example, because your network does not allow access to the container registry used by AMQ Streams. In this case, you should either copy the AMQ Streams images or build them from the source. If the configured image is not compatible with AMQ Streams images, it might not work properly.

13.2.3.4. logging

Kafka has its own configurable loggers:

  • log4j.logger.org.I0Itec.zkclient.ZkClient
  • log4j.logger.org.apache.zookeeper
  • log4j.logger.kafka
  • log4j.logger.org.apache.kafka
  • log4j.logger.kafka.request.logger
  • log4j.logger.kafka.network.Processor
  • log4j.logger.kafka.server.KafkaApis
  • log4j.logger.kafka.network.RequestChannel$
  • log4j.logger.kafka.controller
  • log4j.logger.kafka.log.LogCleaner
  • log4j.logger.state.change.logger
  • log4j.logger.kafka.authorizer.logger

Kafka uses the Apache log4j logger implementation.

Use the logging property to configure loggers and logger levels.

You can set the log levels by specifying the logger and level directly (inline) or use a custom (external) ConfigMap. If a ConfigMap is used, you set the logging.valueFrom.configMapKeyRef.name property to the name of the ConfigMap containing the external logging configuration. Inside the ConfigMap, the logging configuration is described using log4j.properties. Both logging.valueFrom.configMapKeyRef.name and logging.valueFrom.configMapKeyRef.key properties are mandatory.

A ConfigMap using the exact logging configuration specified is created with the custom resource when the Cluster Operator is running, then recreated after each reconciliation. If you do not specify a custom ConfigMap, default logging settings are used. If a specific logger value is not set, upper-level logger settings are inherited for that logger.

For more information about log levels, see Apache logging services.

Here we see examples of inline and external logging.

Inline logging

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
spec:
  # ...
  kafka:
    # ...
    logging:
      type: inline
      loggers:
        kafka.root.logger.level: "INFO"
  # ...

External logging

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
spec:
  # ...
  logging:
    type: external
    valueFrom:
      configMapKeyRef:
        name: customConfigMap
        key: kafka-log4j.properties
  # ...

Any available loggers that are not configured have their level set to OFF.

If Kafka was deployed using the Cluster Operator, changes to Kafka logging levels are applied dynamically.

If you use external logging, a rolling update is triggered when logging appenders are changed.

Garbage collector (GC)

Garbage collector logging can also be enabled (or disabled) using the jvmOptions property.

13.2.3.5. KafkaClusterSpec schema properties

Property | Description

version

The Kafka broker version. Defaults to 2.7.0. Consult the user documentation to understand the process required to upgrade or downgrade the version.

string

replicas

The number of pods in the cluster.

integer

image

The docker image for the pods. The default value depends on the configured Kafka.spec.kafka.version.

string

listeners

Configures listeners of Kafka brokers.

GenericKafkaListener array or KafkaListeners

config

Kafka broker config properties with the following prefixes cannot be set: listeners, advertised., broker., listener., host.name, port, inter.broker.listener.name, sasl., ssl., security., password., principal.builder.class, log.dir, zookeeper.connect, zookeeper.set.acl, zookeeper.ssl, zookeeper.clientCnxnSocket, authorizer., super.user, cruise.control.metrics.topic, cruise.control.metrics.reporter.bootstrap.servers (with the exception of: zookeeper.connection.timeout.ms, ssl.cipher.suites, ssl.protocol, ssl.enabled.protocols, cruise.control.metrics.topic.num.partitions, cruise.control.metrics.topic.replication.factor, cruise.control.metrics.topic.retention.ms, cruise.control.metrics.topic.auto.create.retries, cruise.control.metrics.topic.auto.create.timeout.ms, cruise.control.metrics.topic.min.insync.replicas).

map

storage

Storage configuration (disk). Cannot be updated. The type depends on the value of the storage.type property within the given object, which must be one of [ephemeral, persistent-claim, jbod].

EphemeralStorage, PersistentClaimStorage, JbodStorage

authorization

Authorization configuration for Kafka brokers. The type depends on the value of the authorization.type property within the given object, which must be one of [simple, opa, keycloak].

KafkaAuthorizationSimple, KafkaAuthorizationOpa, KafkaAuthorizationKeycloak

rack

Configuration of the broker.rack broker config.

Rack

brokerRackInitImage

The image of the init container used for initializing the broker.rack.

string

affinity

The affinity property has been deprecated, and should now be configured using spec.kafka.template.pod.affinity. The property affinity is removed in API version v1beta2. The pod’s affinity rules. For more information, see the external documentation for core/v1 affinity.

Affinity

tolerations

The tolerations property has been deprecated, and should now be configured using spec.kafka.template.pod.tolerations. The property tolerations is removed in API version v1beta2. The pod’s tolerations. For more information, see the external documentation for core/v1 toleration.

Toleration array

livenessProbe

Pod liveness checking.

Probe

readinessProbe

Pod readiness checking.

Probe

jvmOptions

JVM Options for pods.

JvmOptions

jmxOptions

JMX Options for Kafka brokers.

KafkaJmxOptions

resources

CPU and memory resources to reserve. For more information, see the external documentation for core/v1 resourcerequirements.

ResourceRequirements

metrics

The metrics property has been deprecated, and should now be configured using spec.kafka.metricsConfig. The property metrics is removed in API version v1beta2. The Prometheus JMX Exporter configuration. See https://github.com/prometheus/jmx_exporter for details of the structure of this configuration.

map

metricsConfig

Metrics configuration. The type depends on the value of the metricsConfig.type property within the given object, which must be one of [jmxPrometheusExporter].

JmxPrometheusExporterMetrics

logging

Logging configuration for Kafka. The type depends on the value of the logging.type property within the given object, which must be one of [inline, external].

InlineLogging, ExternalLogging

tlsSidecar

The tlsSidecar property has been deprecated. The property tlsSidecar is removed in API version v1beta2. TLS sidecar configuration.

TlsSidecar

template

Template for Kafka cluster resources. The template allows users to specify how the StatefulSet, Pods and Services are generated.

KafkaClusterTemplate

13.2.4. GenericKafkaListener schema reference

Used in: KafkaClusterSpec

Full list of GenericKafkaListener schema properties

Configures listeners to connect to Kafka brokers within and outside OpenShift.

You configure the listeners in the Kafka resource.

Example Kafka resource showing listener configuration

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    #...
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
      - name: tls
        port: 9093
        type: internal
        tls: true
        authentication:
          type: tls
      - name: external1
        port: 9094
        type: route
        tls: true
      - name: external2
        port: 9095
        type: ingress
        tls: true
        authentication:
          type: tls
        configuration:
          bootstrap:
            host: bootstrap.myingress.com
          brokers:
          - broker: 0
            host: broker-0.myingress.com
          - broker: 1
            host: broker-1.myingress.com
          - broker: 2
            host: broker-2.myingress.com
    #...

13.2.4.1. listeners

You configure Kafka broker listeners using the listeners property in the Kafka resource. Listeners are defined as an array.

Example listener configuration

listeners:
  - name: plain
    port: 9092
    type: internal
    tls: false

The name and port must be unique within the Kafka cluster. The name can be up to 25 characters long, comprising lower-case letters and numbers. Allowed port numbers are 9092 and higher with the exception of ports 9404 and 9999, which are already used for Prometheus and JMX.

By specifying a unique name and port for each listener, you can configure multiple listeners.

13.2.4.2. type

The type is set as internal, or for external listeners, as route, loadbalancer, nodeport or ingress.

internal

You can configure internal listeners with or without encryption using the tls property.

Example internal listener configuration

#...
spec:
  kafka:
    #...
    listeners:
      #...
      - name: plain
        port: 9092
        type: internal
        tls: false
      - name: tls
        port: 9093
        type: internal
        tls: true
        authentication:
          type: tls
    #...

route

Configures an external listener to expose Kafka using OpenShift Routes and the HAProxy router.

A dedicated Route is created for every Kafka broker pod. An additional Route is created to serve as a Kafka bootstrap address. Kafka clients can use these Routes to connect to Kafka on port 443. The client connects on port 443, the default router port, but traffic is then routed to the port you configure, which is 9094 in this example.

Example route listener configuration

#...
spec:
  kafka:
    #...
    listeners:
      #...
      - name: external1
        port: 9094
        type: route
        tls: true
    #...

ingress

Configures an external listener to expose Kafka using Kubernetes Ingress and the NGINX Ingress Controller for Kubernetes.

A dedicated Ingress resource is created for every Kafka broker pod. An additional Ingress resource is created to serve as a Kafka bootstrap address. Kafka clients can use these Ingress resources to connect to Kafka on port 443. The client connects on port 443, the default controller port, but traffic is then routed to the port you configure, which is 9095 in the following example.

You must specify the hostnames used by the bootstrap and per-broker services using GenericKafkaListenerConfigurationBootstrap and GenericKafkaListenerConfigurationBroker properties.

Example ingress listener configuration

#...
spec:
  kafka:
    #...
    listeners:
      #...
      - name: external2
        port: 9095
        type: ingress
        tls: true
        authentication:
          type: tls
        configuration:
          bootstrap:
            host: bootstrap.myingress.com
          brokers:
          - broker: 0
            host: broker-0.myingress.com
          - broker: 1
            host: broker-1.myingress.com
          - broker: 2
            host: broker-2.myingress.com
  #...

Note

External listeners using Ingress are currently only tested with the NGINX Ingress Controller for Kubernetes.

loadbalancer

Configures an external listener to expose Kafka using Loadbalancer type Services.

A new loadbalancer service is created for every Kafka broker pod. An additional loadbalancer is created to serve as a Kafka bootstrap address. Loadbalancers listen to the specified port number, which is port 9094 in the following example.

You can use the loadBalancerSourceRanges property to configure source ranges to restrict access to the specified IP addresses.

Example loadbalancer listener configuration

#...
spec:
  kafka:
    #...
    listeners:
      - name: external3
        port: 9094
        type: loadbalancer
        tls: true
        configuration:
          loadBalancerSourceRanges:
            - 10.0.0.0/8
            - 88.208.76.87/32
    #...

nodeport

Configures an external listener to expose Kafka using NodePort type Services.

Kafka clients connect directly to the nodes of OpenShift. An additional NodePort type of service is created to serve as a Kafka bootstrap address.

When configuring the advertised addresses for the Kafka broker pods, AMQ Streams uses the address of the node on which the given pod is running. You can use the preferredNodePortAddressType property to configure the first address type checked as the node address.

Example nodeport listener configuration

#...
spec:
  kafka:
    #...
    listeners:
      #...
      - name: external4
        port: 9095
        type: nodeport
        tls: false
        configuration:
          preferredNodePortAddressType: InternalDNS
    #...

Note

TLS hostname verification is not currently supported when exposing Kafka clusters using node ports.

13.2.4.3. port

The port number is the port used in the Kafka cluster, which might not be the same port used for access by a client.

  • loadbalancer listeners use the specified port number, as do internal listeners
  • ingress and route listeners use port 443 for access
  • nodeport listeners use the port number assigned by OpenShift

For client connection, use the address and port for the bootstrap service of the listener. You can retrieve this from the status of the Kafka resource.

Example command to retrieve the address and port for client connection

oc get kafka KAFKA-CLUSTER-NAME -o=jsonpath='{.status.listeners[?(@.type=="external")].bootstrapServers}{"\n"}'

Note

Listeners cannot be configured to use the ports set aside for interbroker communication (9091) and metrics (9404).

13.2.4.4. tls

The tls property is required.

By default, TLS encryption is not enabled. To enable it, set the tls property to true.

TLS encryption is always used with route listeners.
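
For example, a minimal sketch of an internal listener with TLS encryption enabled (the listener name and port are illustrative):

listeners:
  #...
  - name: tls
    port: 9093
    type: internal
    tls: true
# ...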

13.2.4.5. authentication

Authentication for the listener can be specified as:

  • Mutual TLS (tls)
  • SCRAM-SHA-512 (scram-sha-512)
  • Token-based OAuth 2.0 (oauth).
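
For example, the following sketch enables SCRAM-SHA-512 authentication on an internal TLS listener (the listener name and port are illustrative):

listeners:
  #...
  - name: tls
    port: 9093
    type: internal
    tls: true
    authentication:
      type: scram-sha-512
# ...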

13.2.4.6. networkPolicyPeers

Use networkPolicyPeers to configure network policies that restrict access to a listener at the network level. The following example shows a networkPolicyPeers configuration for a plain and a tls listener.

listeners:
  #...
  - name: plain
    port: 9092
    type: internal
    tls: true
    authentication:
      type: scram-sha-512
    networkPolicyPeers:
      - podSelector:
          matchLabels:
            app: kafka-sasl-consumer
      - podSelector:
          matchLabels:
            app: kafka-sasl-producer
  - name: tls
    port: 9093
    type: internal
    tls: true
    authentication:
      type: tls
    networkPolicyPeers:
      - namespaceSelector:
          matchLabels:
            project: myproject
      - namespaceSelector:
          matchLabels:
            project: myproject2
# ...

In the example:

  • Only application pods matching the labels app: kafka-sasl-consumer and app: kafka-sasl-producer can connect to the plain listener. The application pods must be running in the same namespace as the Kafka broker.
  • Only application pods running in namespaces matching the labels project: myproject and project: myproject2 can connect to the tls listener.

The syntax of the networkPolicyPeers field is the same as the from field in NetworkPolicy resources.

Backwards compatibility with KafkaListeners

GenericKafkaListener replaces the KafkaListeners schema, which is now deprecated.

To convert the listeners configured using the KafkaListeners schema into the format of the GenericKafkaListener schema, with backwards compatibility, use the following names, ports and types:

listeners:
  #...
  - name: plain
    port: 9092
    type: internal
    tls: false
  - name: tls
    port: 9093
    type: internal
    tls: true
  - name: external
    port: 9094
    type: EXTERNAL-LISTENER-TYPE 1
    tls: true
# ...
1
Options: ingress, loadbalancer, nodeport, route

13.2.4.7. GenericKafkaListener schema properties

Property | Description

name

Name of the listener. The name will be used to identify the listener and the related OpenShift objects. The name has to be unique within a given Kafka cluster. The name can consist of lowercase characters and numbers and be up to 11 characters long.

string

port

Port number used by the listener inside Kafka. The port number has to be unique within a given Kafka cluster. Allowed port numbers are 9092 and higher with the exception of ports 9404 and 9999, which are already used for Prometheus and JMX. Depending on the listener type, the port number might not be the same as the port number that connects Kafka clients.

integer

type

Type of the listener. Currently the supported types are internal, route, loadbalancer, nodeport and ingress.

  • internal type exposes Kafka internally only within the OpenShift cluster.
  • route type uses OpenShift Routes to expose Kafka.
  • loadbalancer type uses LoadBalancer type services to expose Kafka.
  • nodeport type uses NodePort type services to expose Kafka.
  • ingress type uses OpenShift Nginx Ingress to expose Kafka.

string (one of [ingress, internal, route, loadbalancer, nodeport])

tls

Enables TLS encryption on the listener. This is a required property.

boolean

authentication

Authentication configuration for this listener. The type depends on the value of the authentication.type property within the given object, which must be one of [tls, scram-sha-512, oauth].

KafkaListenerAuthenticationTls, KafkaListenerAuthenticationScramSha512, KafkaListenerAuthenticationOAuth

configuration

Additional listener configuration.

GenericKafkaListenerConfiguration

networkPolicyPeers

List of peers which should be able to connect to this listener. Peers in this list are combined using a logical OR operation. If this field is empty or missing, all connections will be allowed for this listener. If this field is present and contains at least one item, the listener only allows the traffic which matches at least one item in this list. For more information, see the external documentation for networking.k8s.io/v1 networkpolicypeer.

NetworkPolicyPeer array

13.2.5. KafkaListenerAuthenticationTls schema reference

Used in: GenericKafkaListener, KafkaListenerExternalIngress, KafkaListenerExternalLoadBalancer, KafkaListenerExternalNodePort, KafkaListenerExternalRoute, KafkaListenerPlain, KafkaListenerTls

The type property is a discriminator that distinguishes use of the KafkaListenerAuthenticationTls type from KafkaListenerAuthenticationScramSha512, KafkaListenerAuthenticationOAuth. It must have the value tls for the type KafkaListenerAuthenticationTls.

Property | Description

type

Must be tls.

string

13.2.6. KafkaListenerAuthenticationScramSha512 schema reference

Used in: GenericKafkaListener, KafkaListenerExternalIngress, KafkaListenerExternalLoadBalancer, KafkaListenerExternalNodePort, KafkaListenerExternalRoute, KafkaListenerPlain, KafkaListenerTls

The type property is a discriminator that distinguishes use of the KafkaListenerAuthenticationScramSha512 type from KafkaListenerAuthenticationTls, KafkaListenerAuthenticationOAuth. It must have the value scram-sha-512 for the type KafkaListenerAuthenticationScramSha512.

Property | Description

type

Must be scram-sha-512.

string

13.2.7. KafkaListenerAuthenticationOAuth schema reference

Used in: GenericKafkaListener, KafkaListenerExternalIngress, KafkaListenerExternalLoadBalancer, KafkaListenerExternalNodePort, KafkaListenerExternalRoute, KafkaListenerPlain, KafkaListenerTls

The type property is a discriminator that distinguishes use of the KafkaListenerAuthenticationOAuth type from KafkaListenerAuthenticationTls, KafkaListenerAuthenticationScramSha512. It must have the value oauth for the type KafkaListenerAuthenticationOAuth.

Property | Description

accessTokenIsJwt

Configure whether the access token is treated as JWT. This must be set to false if the authorization server returns opaque tokens. Defaults to true.

boolean

checkAccessTokenType

Configure whether the access token type check is performed or not. This should be set to false if the authorization server does not include 'typ' claim in JWT token. Defaults to true.

boolean

checkAudience

Enable or disable audience checking. Audience checks identify the recipients of tokens. If audience checking is enabled, the OAuth Client ID also has to be configured using the clientId property. The Kafka broker will reject tokens that do not have its clientId in their aud (audience) claim. Default value is false.

boolean

checkIssuer

Enable or disable issuer checking. By default issuer is checked using the value configured by validIssuerUri. Default value is true.

boolean

clientId

OAuth Client ID which the Kafka broker can use to authenticate against the authorization server and use the introspect endpoint URI.

string

clientSecret

Link to OpenShift Secret containing the OAuth client secret which the Kafka broker can use to authenticate against the authorization server and use the introspect endpoint URI.

GenericSecretSource

customClaimCheck

JsonPath filter query to be applied to the JWT token or to the response of the introspection endpoint for additional token validation. Not set by default.

string

disableTlsHostnameVerification

Enable or disable TLS hostname verification. Default value is false.

boolean

enableECDSA

Enable or disable ECDSA support by installing BouncyCastle crypto provider. Default value is false.

boolean

enableOauthBearer

Enable or disable OAuth authentication over SASL_OAUTHBEARER. Default value is true.

boolean

enablePlain

Enable or disable OAuth authentication over SASL_PLAIN. There is no re-authentication support when this mechanism is used. Default value is false.

boolean

fallbackUserNameClaim

The fallback username claim to be used for the user id if the claim specified by userNameClaim is not present. This is useful when client_credentials authentication only results in the client id being provided in another claim. It only takes effect if userNameClaim is set.

string

fallbackUserNamePrefix

The prefix to use with the value of fallbackUserNameClaim to construct the user id. This only takes effect if fallbackUserNameClaim is true, and the value is present for the claim. Mapping usernames and client ids into the same user id space is useful in preventing name collisions.

string

introspectionEndpointUri

URI of the token introspection endpoint which can be used to validate opaque non-JWT tokens.

string

jwksEndpointUri

URI of the JWKS certificate endpoint, which can be used for local JWT validation.

string

jwksExpirySeconds

Configures how often the JWKS certificates are considered valid. The expiry interval has to be at least 60 seconds longer than the refresh interval specified in jwksRefreshSeconds. Defaults to 360 seconds.

integer

jwksMinRefreshPauseSeconds

The minimum pause between two consecutive refreshes. When an unknown signing key is encountered the refresh is scheduled immediately, but will always wait for this minimum pause. Defaults to 1 second.

integer

jwksRefreshSeconds

Configures how often the JWKS certificates are refreshed. The refresh interval has to be at least 60 seconds shorter than the expiry interval specified in jwksExpirySeconds. Defaults to 300 seconds.

integer

maxSecondsWithoutReauthentication

Maximum number of seconds the authenticated session remains valid without re-authentication. This enables Apache Kafka re-authentication feature, and causes sessions to expire when the access token expires. If the access token expires before max time or if max time is reached, the client has to re-authenticate, otherwise the server will drop the connection. Not set by default - the authenticated session does not expire when the access token expires. This option only applies to SASL_OAUTHBEARER authentication mechanism (when enableOauthBearer is true).

integer

tlsTrustedCertificates

Trusted certificates for TLS connection to the OAuth server.

CertSecretSource array

tokenEndpointUri

URI of the Token Endpoint to use with SASL_PLAIN mechanism when the client authenticates with clientId and a secret.

string

type

Must be oauth.

string

userInfoEndpointUri

URI of the User Info Endpoint to use as a fallback to obtaining the user id when the Introspection Endpoint does not return information that can be used for the user id.

string

userNameClaim

Name of the claim from the JWT authentication token, Introspection Endpoint response or User Info Endpoint response which will be used to extract the user id. Defaults to sub.

string

validIssuerUri

URI of the token issuer used for authentication.

string

validTokenType

Valid value for the token_type attribute returned by the Introspection Endpoint. No default value, and not checked by default.

string
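
As an illustration only, the following sketch combines some of these properties for a listener using oauth authentication. The authorization server address, realm, claim name and Secret name are assumptions; adjust them to match your authorization server.

Example oauth listener configuration (sketch)

listeners:
  #...
  - name: tls
    port: 9093
    type: internal
    tls: true
    authentication:
      type: oauth
      validIssuerUri: https://AUTH-SERVER-ADDRESS/auth/realms/my-realm
      jwksEndpointUri: https://AUTH-SERVER-ADDRESS/auth/realms/my-realm/protocol/openid-connect/certs
      userNameClaim: preferred_username
      tlsTrustedCertificates:
        - secretName: oauth-server-cert
          certificate: ca.crt
# ...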

13.2.8. GenericSecretSource schema reference

Used in: KafkaClientAuthenticationOAuth, KafkaListenerAuthenticationOAuth

Property | Description

key

The key under which the secret value is stored in the OpenShift Secret.

string

secretName

The name of the OpenShift Secret containing the secret value.

string
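
For example, a sketch of how the clientSecret property of an oauth listener references an OpenShift Secret (the Secret name, key and client ID are illustrative):

authentication:
  type: oauth
  # ...
  clientId: kafka-broker
  clientSecret:
    secretName: my-oauth-secret
    key: client-secret
# ...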

13.2.9. CertSecretSource schema reference

Used in: KafkaAuthorizationKeycloak, KafkaBridgeTls, KafkaClientAuthenticationOAuth, KafkaConnectTls, KafkaListenerAuthenticationOAuth, KafkaMirrorMaker2Tls, KafkaMirrorMakerTls

Property | Description

certificate

The name of the file certificate in the Secret.

string

secretName

The name of the Secret containing the certificate.

string

13.2.10. GenericKafkaListenerConfiguration schema reference

Used in: GenericKafkaListener

Full list of GenericKafkaListenerConfiguration schema properties

Configuration for Kafka listeners.

13.2.10.1. brokerCertChainAndKey

The brokerCertChainAndKey property is only used with listeners that have TLS encryption enabled. You can use the property to provide your own Kafka listener certificates.

Example configuration for a loadbalancer external listener with TLS encryption enabled

listeners:
  #...
  - name: external
    port: 9094
    type: loadbalancer
    tls: true
    authentication:
      type: tls
    configuration:
      brokerCertChainAndKey:
        secretName: my-secret
        certificate: my-listener-certificate.crt
        key: my-listener-key.key
# ...

13.2.10.2. externalTrafficPolicy

The externalTrafficPolicy property is used with loadbalancer and nodeport listeners. When exposing Kafka outside of OpenShift, you can choose Local or Cluster. Local avoids hops to other nodes and preserves the client IP, whereas Cluster does neither. The default is Cluster.
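
For example, a minimal sketch setting the Local policy on a loadbalancer listener (the listener name and port are illustrative):

listeners:
  #...
  - name: external
    port: 9094
    type: loadbalancer
    tls: true
    configuration:
      externalTrafficPolicy: Local
    # ...
# ...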

13.2.10.3. loadBalancerSourceRanges

The loadBalancerSourceRanges property is only used with loadbalancer listeners. When exposing Kafka outside of OpenShift, use source ranges, in addition to labels and annotations, to customize how a service is created.

Example source ranges configured for a loadbalancer listener

listeners:
  #...
  - name: external
    port: 9094
    type: loadbalancer
    tls: false
    configuration:
      externalTrafficPolicy: Local
      loadBalancerSourceRanges:
        - 10.0.0.0/8
        - 88.208.76.87/32
      # ...
# ...

13.2.10.4. class

The class property is only used with ingress listeners. You can configure the Ingress class using the class property.

Example of an external listener of type ingress using Ingress class nginx-internal

listeners:
  #...
  - name: external
    port: 9094
    type: ingress
    tls: true
    configuration:
      class: nginx-internal
    # ...
# ...

13.2.10.5. preferredNodePortAddressType

The preferredNodePortAddressType property is only used with nodeport listeners.

Use the preferredNodePortAddressType property in your listener configuration to specify the first address type checked as the node address. This property is useful, for example, if your deployment does not have DNS support, or you only want to expose a broker internally through an internal DNS or IP address. If an address of this type is found, it is used. If the preferred address type is not found, AMQ Streams proceeds through the types in the standard order of priority:

  1. ExternalDNS
  2. ExternalIP
  3. Hostname
  4. InternalDNS
  5. InternalIP

Example of an external listener configured with a preferred node port address type

listeners:
  #...
  - name: external
    port: 9094
    type: nodeport
    tls: false
    configuration:
      preferredNodePortAddressType: InternalDNS
      # ...
# ...

13.2.10.6. useServiceDnsDomain

The useServiceDnsDomain property is only used with internal listeners. It defines whether the fully-qualified DNS names that include the cluster service suffix (usually .cluster.local) are used. With useServiceDnsDomain set as false, the advertised addresses are generated without the service suffix; for example, my-cluster-kafka-0.my-cluster-kafka-brokers.myproject.svc. With useServiceDnsDomain set as true, the advertised addresses are generated with the service suffix; for example, my-cluster-kafka-0.my-cluster-kafka-brokers.myproject.svc.cluster.local. Default is false.

Example of an internal listener configured to use the Service DNS domain

listeners:
  #...
  - name: plain
    port: 9092
    type: internal
    tls: false
    configuration:
      useServiceDnsDomain: true
      # ...
# ...

If your OpenShift cluster uses a different service suffix than .cluster.local, you can configure the suffix using the KUBERNETES_SERVICE_DNS_DOMAIN environment variable in the Cluster Operator configuration. See Section 5.1.1, “Cluster Operator configuration” for more details.

13.2.10.7. GenericKafkaListenerConfiguration schema properties

Property | Description

brokerCertChainAndKey

Reference to the Secret which holds the certificate and private key pair which will be used for this listener. The certificate can optionally contain the whole chain. This field can be used only with listeners with enabled TLS encryption.

CertAndKeySecretSource

externalTrafficPolicy

Specifies whether the service routes external traffic to node-local or cluster-wide endpoints. Cluster may cause a second hop to another node and obscures the client source IP. Local avoids a second hop for LoadBalancer and Nodeport type services and preserves the client source IP (when supported by the infrastructure). If unspecified, OpenShift will use Cluster as the default. This field can be used only with loadbalancer or nodeport type listener.

string (one of [Local, Cluster])

loadBalancerSourceRanges

A list of CIDR ranges (for example 10.0.0.0/8 or 130.211.204.1/32) from which clients can connect to load balancer type listeners. If supported by the platform, traffic through the loadbalancer is restricted to the specified CIDR ranges. This field is applicable only for loadbalancer type services and is ignored if the cloud provider does not support the feature. For more information, see https://v1-17.docs.kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/. This field can be used only with loadbalancer type listener.

string array

bootstrap

Bootstrap configuration.

GenericKafkaListenerConfigurationBootstrap

brokers

Per-broker configurations.

GenericKafkaListenerConfigurationBroker array

class

Configures the Ingress class that defines which Ingress controller will be used. This field can be used only with ingress type listener. If not specified, the default Ingress controller will be used.

string

preferredNodePortAddressType

Defines which address type should be used as the node address. Available types are: ExternalDNS, ExternalIP, InternalDNS, InternalIP and Hostname. By default, the addresses will be used in the following order (the first one found will be used):

  1. ExternalDNS
  2. ExternalIP
  3. InternalDNS
  4. InternalIP
  5. Hostname

This field can be used to select the address type which will be used as the preferred type and checked first. If no address is found for this address type, the other types will be used in the default order. This field can be used only with nodeport type listener.

string (one of [ExternalDNS, ExternalIP, Hostname, InternalIP, InternalDNS])

useServiceDnsDomain

Configures whether the OpenShift service DNS domain should be used or not. If set to true, the generated addresses will contain the service DNS domain suffix (by default .cluster.local, can be configured using environment variable KUBERNETES_SERVICE_DNS_DOMAIN). Defaults to false. This field can be used only with internal type listener.

boolean

13.2.11. CertAndKeySecretSource schema reference

Used in: GenericKafkaListenerConfiguration, IngressListenerConfiguration, KafkaClientAuthenticationTls, KafkaListenerExternalConfiguration, NodePortListenerConfiguration, TlsListenerConfiguration

Property | Description

certificate

The name of the file certificate in the Secret.

string

key

The name of the private key in the Secret.

string

secretName

The name of the Secret containing the certificate.

string

13.2.12. GenericKafkaListenerConfigurationBootstrap schema reference

Used in: GenericKafkaListenerConfiguration

Full list of GenericKafkaListenerConfigurationBootstrap schema properties

Broker service equivalents of nodePort, host, loadBalancerIP and annotations properties are configured in the GenericKafkaListenerConfigurationBroker schema.

13.2.12.1. alternativeNames

You can specify alternative names for the bootstrap service. The names are added to the broker certificates and can be used for TLS hostname verification. The alternativeNames property is applicable to all types of listeners.

Example of an external route listener configured with an additional bootstrap address

listeners:
  #...
  - name: external
    port: 9094
    type: route
    tls: true
    authentication:
      type: tls
    configuration:
      bootstrap:
        alternativeNames:
          - example.hostname1
          - example.hostname2
# ...

13.2.12.2. host

The host property is used with route and ingress listeners to specify the hostnames used by the bootstrap and per-broker services.

A host property value is mandatory for ingress listener configuration, as the Ingress controller does not assign any hostnames automatically. Make sure that the hostnames resolve to the Ingress endpoints. AMQ Streams will not perform any validation that the requested hosts are available and properly routed to the Ingress endpoints.

Example of host configuration for an ingress listener

listeners:
  #...
  - name: external
    port: 9094
    type: ingress
    tls: true
    authentication:
      type: tls
    configuration:
      bootstrap:
        host: bootstrap.myingress.com
      brokers:
      - broker: 0
        host: broker-0.myingress.com
      - broker: 1
        host: broker-1.myingress.com
      - broker: 2
        host: broker-2.myingress.com
# ...

By default, route listener hosts are automatically assigned by OpenShift. However, you can override the assigned route hosts by specifying hosts.

AMQ Streams does not perform any validation that the requested hosts are available. You must ensure that they are free and can be used.

Example of host configuration for a route listener

# ...
listeners:
  #...
  - name: external
    port: 9094
    type: route
    tls: true
    authentication:
      type: tls
    configuration:
      bootstrap:
        host: bootstrap.myrouter.com
      brokers:
      - broker: 0
        host: broker-0.myrouter.com
      - broker: 1
        host: broker-1.myrouter.com
      - broker: 2
        host: broker-2.myrouter.com
# ...

13.2.12.3. nodePort

By default, the port numbers used for the bootstrap and broker services are automatically assigned by OpenShift. You can override the assigned node ports for nodeport listeners by specifying the requested port numbers.

AMQ Streams does not perform any validation on the requested ports. You must ensure that they are free and available for use.

Example of an external listener configured with overrides for node ports

# ...
listeners:
  #...
  - name: external
    port: 9094
    type: nodeport
    tls: true
    authentication:
      type: tls
    configuration:
      bootstrap:
        nodePort: 32100
      brokers:
      - broker: 0
        nodePort: 32000
      - broker: 1
        nodePort: 32001
      - broker: 2
        nodePort: 32002
# ...

13.2.12.4. loadBalancerIP

Use the loadBalancerIP property to request a specific IP address when creating a loadbalancer. Use this property when you need to use a loadbalancer with a specific IP address. The loadBalancerIP field is ignored if the cloud provider does not support the feature.

Example of an external listener of type loadbalancer with specific loadbalancer IP address requests

# ...
listeners:
  #...
  - name: external
    port: 9094
    type: loadbalancer
    tls: true
    authentication:
      type: tls
    configuration:
      bootstrap:
        loadBalancerIP: 172.29.3.10
      brokers:
      - broker: 0
        loadBalancerIP: 172.29.3.1
      - broker: 1
        loadBalancerIP: 172.29.3.2
      - broker: 2
        loadBalancerIP: 172.29.3.3
# ...

13.2.12.5. annotations

Use the annotations property to add annotations to OpenShift resources related to the listeners. You can use these annotations, for example, to instrument DNS tooling such as External DNS, which automatically assigns DNS names to the loadbalancer services.

Example of an external listener of type loadbalancer using annotations

# ...
listeners:
  #...
  - name: external
    port: 9094
    type: loadbalancer
    tls: true
    authentication:
      type: tls
    configuration:
      bootstrap:
        annotations:
          external-dns.alpha.kubernetes.io/hostname: kafka-bootstrap.mydomain.com.
          external-dns.alpha.kubernetes.io/ttl: "60"
      brokers:
      - broker: 0
        annotations:
          external-dns.alpha.kubernetes.io/hostname: kafka-broker-0.mydomain.com.
          external-dns.alpha.kubernetes.io/ttl: "60"
      - broker: 1
        annotations:
          external-dns.alpha.kubernetes.io/hostname: kafka-broker-1.mydomain.com.
          external-dns.alpha.kubernetes.io/ttl: "60"
      - broker: 2
        annotations:
          external-dns.alpha.kubernetes.io/hostname: kafka-broker-2.mydomain.com.
          external-dns.alpha.kubernetes.io/ttl: "60"
# ...

13.2.12.6. GenericKafkaListenerConfigurationBootstrap schema properties

PropertyDescription

alternativeNames

Additional alternative names for the bootstrap service. The alternative names will be added to the list of subject alternative names of the TLS certificates.

string array

host

The bootstrap host. This field will be used in the Ingress resource or in the Route resource to specify the desired hostname. This field can be used only with route (optional) or ingress (required) type listeners.

string

nodePort

Node port for the bootstrap service. This field can be used only with nodeport type listener.

integer

loadBalancerIP

The loadbalancer is requested with the IP address specified in this field. This feature depends on whether the underlying cloud provider supports specifying the loadBalancerIP when a load balancer is created. This field is ignored if the cloud provider does not support the feature. This field can be used only with loadbalancer type listener.

string

annotations

Annotations that will be added to the Ingress, Route, or Service resource. You can use this field to configure DNS providers such as External DNS. This field can be used only with loadbalancer, nodeport, route, or ingress type listeners.

map

labels

Labels that will be added to the Ingress, Route, or Service resource. This field can be used only with loadbalancer, nodeport, route, or ingress type listeners.

map

13.2.13. GenericKafkaListenerConfigurationBroker schema reference

Used in: GenericKafkaListenerConfiguration

Full list of GenericKafkaListenerConfigurationBroker schema properties

You can see example configuration for the nodePort, host, loadBalancerIP and annotations properties in the GenericKafkaListenerConfigurationBootstrap schema, which configures bootstrap service overrides.

Advertised addresses for brokers

By default, AMQ Streams tries to automatically determine the hostnames and ports that your Kafka cluster advertises to its clients. This is not sufficient in all situations, because the infrastructure on which AMQ Streams is running might not provide the right hostname or port through which Kafka can be accessed.

You can specify a broker ID and customize the advertised hostname and port in the configuration property of the listener. AMQ Streams will then automatically configure the advertised address in the Kafka brokers and add it to the broker certificates so it can be used for TLS hostname verification. Overriding the advertised host and ports is available for all types of listeners.

Example of an external route listener configured with overrides for advertised addresses

listeners:
  #...
  - name: external
    port: 9094
    type: route
    tls: true
    authentication:
      type: tls
    configuration:
      brokers:
      - broker: 0
        advertisedHost: example.hostname.0
        advertisedPort: 12340
      - broker: 1
        advertisedHost: example.hostname.1
        advertisedPort: 12341
      - broker: 2
        advertisedHost: example.hostname.2
        advertisedPort: 12342
# ...

13.2.13.1. GenericKafkaListenerConfigurationBroker schema properties

PropertyDescription

broker

ID of the Kafka broker (broker identifier). Broker IDs start from 0 and correspond to the number of broker replicas.

integer

advertisedHost

The host name which will be used in the brokers' advertised.listeners.

string

advertisedPort

The port number which will be used in the brokers' advertised.listeners.

integer

host

The broker host. This field will be used in the Ingress resource or in the Route resource to specify the desired hostname. This field can be used only with route (optional) or ingress (required) type listeners.

string

nodePort

Node port for the per-broker service. This field can be used only with nodeport type listener.

integer

loadBalancerIP

The loadbalancer is requested with the IP address specified in this field. This feature depends on whether the underlying cloud provider supports specifying the loadBalancerIP when a load balancer is created. This field is ignored if the cloud provider does not support the feature. This field can be used only with loadbalancer type listener.

string

annotations

Annotations that will be added to the Ingress or Service resource. You can use this field to configure DNS providers such as External DNS. This field can be used only with loadbalancer, nodeport, or ingress type listeners.

map

labels

Labels that will be added to the Ingress, Route, or Service resource. This field can be used only with loadbalancer, nodeport, route, or ingress type listeners.

map

13.2.14. KafkaListeners schema reference

The type KafkaListeners has been deprecated and is removed in API version v1beta2. Please use GenericKafkaListener instead.

Used in: KafkaClusterSpec

Refer to previous documentation for example configuration.

PropertyDescription

plain

Configures plain listener on port 9092.

KafkaListenerPlain

tls

Configures TLS listener on port 9093.

KafkaListenerTls

external

Configures external listener on port 9094. The type depends on the value of the external.type property within the given object, which must be one of [route, loadbalancer, nodeport, ingress].

KafkaListenerExternalRoute, KafkaListenerExternalLoadBalancer, KafkaListenerExternalNodePort, KafkaListenerExternalIngress

13.2.15. KafkaListenerPlain schema reference

Used in: KafkaListeners

PropertyDescription

authentication

Authentication configuration for this listener. Since this listener does not use TLS transport you cannot configure an authentication with type: tls. The type depends on the value of the authentication.type property within the given object, which must be one of [tls, scram-sha-512, oauth].

KafkaListenerAuthenticationTls, KafkaListenerAuthenticationScramSha512, KafkaListenerAuthenticationOAuth

networkPolicyPeers

List of peers which should be able to connect to this listener. Peers in this list are combined using a logical OR operation. If this field is empty or missing, all connections will be allowed for this listener. If this field is present and contains at least one item, the listener only allows the traffic which matches at least one item in this list. For more information, see the external documentation for networking.k8s.io/v1 networkpolicypeer.

NetworkPolicyPeer array

13.2.16. KafkaListenerTls schema reference

Used in: KafkaListeners

PropertyDescription

authentication

Authentication configuration for this listener. The type depends on the value of the authentication.type property within the given object, which must be one of [tls, scram-sha-512, oauth].

KafkaListenerAuthenticationTls, KafkaListenerAuthenticationScramSha512, KafkaListenerAuthenticationOAuth

configuration

Configuration of TLS listener.

TlsListenerConfiguration

networkPolicyPeers

List of peers which should be able to connect to this listener. Peers in this list are combined using a logical OR operation. If this field is empty or missing, all connections will be allowed for this listener. If this field is present and contains at least one item, the listener only allows the traffic which matches at least one item in this list. For more information, see the external documentation for networking.k8s.io/v1 networkpolicypeer.

NetworkPolicyPeer array

13.2.17. TlsListenerConfiguration schema reference

Used in: KafkaListenerTls

PropertyDescription

brokerCertChainAndKey

Reference to the Secret which holds the certificate and private key pair. The certificate can optionally contain the whole chain.

CertAndKeySecretSource

13.2.18. KafkaListenerExternalRoute schema reference

Used in: KafkaListeners

The type property is a discriminator that distinguishes use of the KafkaListenerExternalRoute type from KafkaListenerExternalLoadBalancer, KafkaListenerExternalNodePort, KafkaListenerExternalIngress. It must have the value route for the type KafkaListenerExternalRoute.

PropertyDescription

type

Must be route.

string

authentication

Authentication configuration for Kafka brokers. The type depends on the value of the authentication.type property within the given object, which must be one of [tls, scram-sha-512, oauth].

KafkaListenerAuthenticationTls, KafkaListenerAuthenticationScramSha512, KafkaListenerAuthenticationOAuth

overrides

Overrides for external bootstrap and broker services and externally advertised addresses.

RouteListenerOverride

configuration

External listener configuration.

KafkaListenerExternalConfiguration

networkPolicyPeers

List of peers which should be able to connect to this listener. Peers in this list are combined using a logical OR operation. If this field is empty or missing, all connections will be allowed for this listener. If this field is present and contains at least one item, the listener only allows the traffic which matches at least one item in this list. For more information, see the external documentation for networking.k8s.io/v1 networkpolicypeer.

NetworkPolicyPeer array

13.2.19. RouteListenerOverride schema reference

Used in: KafkaListenerExternalRoute

PropertyDescription

bootstrap

External bootstrap service configuration.

RouteListenerBootstrapOverride

brokers

External broker services configuration.

RouteListenerBrokerOverride array

13.2.20. RouteListenerBootstrapOverride schema reference

Used in: RouteListenerOverride

PropertyDescription

address

Additional address name for the bootstrap service. The address will be added to the list of subject alternative names of the TLS certificates.

string

host

Host for the bootstrap route. This field will be used in the spec.host field of the OpenShift Route.

string

13.2.21. RouteListenerBrokerOverride schema reference

Used in: RouteListenerOverride

PropertyDescription

broker

ID of the Kafka broker (broker identifier).

integer

advertisedHost

The host name which will be used in the brokers' advertised.listeners.

string

advertisedPort

The port number which will be used in the brokers' advertised.listeners.

integer

host

Host for the broker route. This field will be used in the spec.host field of the OpenShift Route.

string

13.2.22. KafkaListenerExternalConfiguration schema reference

Used in: KafkaListenerExternalLoadBalancer, KafkaListenerExternalRoute

PropertyDescription

brokerCertChainAndKey

Reference to the Secret which holds the certificate and private key pair. The certificate can optionally contain the whole chain.

CertAndKeySecretSource

13.2.23. KafkaListenerExternalLoadBalancer schema reference

Used in: KafkaListeners

The type property is a discriminator that distinguishes use of the KafkaListenerExternalLoadBalancer type from KafkaListenerExternalRoute, KafkaListenerExternalNodePort, KafkaListenerExternalIngress. It must have the value loadbalancer for the type KafkaListenerExternalLoadBalancer.

PropertyDescription

type

Must be loadbalancer.

string

authentication

Authentication configuration for Kafka brokers. The type depends on the value of the authentication.type property within the given object, which must be one of [tls, scram-sha-512, oauth].

KafkaListenerAuthenticationTls, KafkaListenerAuthenticationScramSha512, KafkaListenerAuthenticationOAuth

overrides

Overrides for external bootstrap and broker services and externally advertised addresses.

LoadBalancerListenerOverride

configuration

External listener configuration.

KafkaListenerExternalConfiguration

networkPolicyPeers

List of peers which should be able to connect to this listener. Peers in this list are combined using a logical OR operation. If this field is empty or missing, all connections will be allowed for this listener. If this field is present and contains at least one item, the listener only allows the traffic which matches at least one item in this list. For more information, see the external documentation for networking.k8s.io/v1 networkpolicypeer.

NetworkPolicyPeer array

tls

Enables TLS encryption on the listener. Defaults to true, which means TLS encryption is enabled.

boolean

13.2.24. LoadBalancerListenerOverride schema reference

Used in: KafkaListenerExternalLoadBalancer

PropertyDescription

bootstrap

External bootstrap service configuration.

LoadBalancerListenerBootstrapOverride

brokers

External broker services configuration.

LoadBalancerListenerBrokerOverride array

13.2.25. LoadBalancerListenerBootstrapOverride schema reference

Used in: LoadBalancerListenerOverride

PropertyDescription

address

Additional address name for the bootstrap service. The address will be added to the list of subject alternative names of the TLS certificates.

string

dnsAnnotations

Annotations that will be added to the Service resource. You can use this field to configure DNS providers such as External DNS.

map

loadBalancerIP

The loadbalancer is requested with the IP address specified in this field. This feature depends on whether the underlying cloud provider supports specifying the loadBalancerIP when a load balancer is created. This field is ignored if the cloud provider does not support the feature.

string

13.2.26. LoadBalancerListenerBrokerOverride schema reference

Used in: LoadBalancerListenerOverride

PropertyDescription

broker

ID of the Kafka broker (broker identifier).

integer

advertisedHost

The host name which will be used in the brokers' advertised.listeners.

string

advertisedPort

The port number which will be used in the brokers' advertised.listeners.

integer

dnsAnnotations

Annotations that will be added to the Service resources for individual brokers. You can use this field to configure DNS providers such as External DNS.

map

loadBalancerIP

The loadbalancer is requested with the IP address specified in this field. This feature depends on whether the underlying cloud provider supports specifying the loadBalancerIP when a load balancer is created. This field is ignored if the cloud provider does not support the feature.

string

13.2.27. KafkaListenerExternalNodePort schema reference

Used in: KafkaListeners

The type property is a discriminator that distinguishes use of the KafkaListenerExternalNodePort type from KafkaListenerExternalRoute, KafkaListenerExternalLoadBalancer, KafkaListenerExternalIngress. It must have the value nodeport for the type KafkaListenerExternalNodePort.

PropertyDescription

type

Must be nodeport.

string

authentication

Authentication configuration for Kafka brokers. The type depends on the value of the authentication.type property within the given object, which must be one of [tls, scram-sha-512, oauth].

KafkaListenerAuthenticationTls, KafkaListenerAuthenticationScramSha512, KafkaListenerAuthenticationOAuth

overrides

Overrides for external bootstrap and broker services and externally advertised addresses.

NodePortListenerOverride

configuration

External listener configuration.

NodePortListenerConfiguration

networkPolicyPeers

List of peers which should be able to connect to this listener. Peers in this list are combined using a logical OR operation. If this field is empty or missing, all connections will be allowed for this listener. If this field is present and contains at least one item, the listener only allows the traffic which matches at least one item in this list. For more information, see the external documentation for networking.k8s.io/v1 networkpolicypeer.

NetworkPolicyPeer array

tls

Enables TLS encryption on the listener. Defaults to true, which means TLS encryption is enabled.

boolean

13.2.28. NodePortListenerOverride schema reference

Used in: KafkaListenerExternalNodePort

PropertyDescription

bootstrap

External bootstrap service configuration.

NodePortListenerBootstrapOverride

brokers

External broker services configuration.

NodePortListenerBrokerOverride array

13.2.29. NodePortListenerBootstrapOverride schema reference

Used in: NodePortListenerOverride

PropertyDescription

address

Additional address name for the bootstrap service. The address will be added to the list of subject alternative names of the TLS certificates.

string

dnsAnnotations

Annotations that will be added to the Service resource. You can use this field to configure DNS providers such as External DNS.

map

nodePort

Node port for the bootstrap service.

integer

13.2.30. NodePortListenerBrokerOverride schema reference

Used in: NodePortListenerOverride

PropertyDescription

broker

ID of the Kafka broker (broker identifier).

integer

advertisedHost

The host name which will be used in the brokers' advertised.listeners.

string

advertisedPort

The port number which will be used in the brokers' advertised.listeners.

integer

nodePort

Node port for the broker service.

integer

dnsAnnotations

Annotations that will be added to the Service resources for individual brokers. You can use this field to configure DNS providers such as External DNS.

map

13.2.31. NodePortListenerConfiguration schema reference

Used in: KafkaListenerExternalNodePort

PropertyDescription

brokerCertChainAndKey

Reference to the Secret which holds the certificate and private key pair. The certificate can optionally contain the whole chain.

CertAndKeySecretSource

preferredAddressType

Defines which address type should be used as the node address. Available types are: ExternalDNS, ExternalIP, InternalDNS, InternalIP, and Hostname. By default, the addresses are used in the following order (the first one found is used): ExternalDNS, ExternalIP, InternalDNS, InternalIP, Hostname.

This field can be used to select the address type which is used as the preferred type and checked first. If no address is found for this address type, the other types are used in the default order.

string (one of [ExternalDNS, ExternalIP, Hostname, InternalIP, InternalDNS])

13.2.32. KafkaListenerExternalIngress schema reference

Used in: KafkaListeners

The type property is a discriminator that distinguishes use of the KafkaListenerExternalIngress type from KafkaListenerExternalRoute, KafkaListenerExternalLoadBalancer, KafkaListenerExternalNodePort. It must have the value ingress for the type KafkaListenerExternalIngress.

PropertyDescription

type

Must be ingress.

string

authentication

Authentication configuration for Kafka brokers. The type depends on the value of the authentication.type property within the given object, which must be one of [tls, scram-sha-512, oauth].

KafkaListenerAuthenticationTls, KafkaListenerAuthenticationScramSha512, KafkaListenerAuthenticationOAuth

class

Configures the Ingress class that defines which Ingress controller will be used.

string

configuration

External listener configuration.

IngressListenerConfiguration

networkPolicyPeers

List of peers which should be able to connect to this listener. Peers in this list are combined using a logical OR operation. If this field is empty or missing, all connections will be allowed for this listener. If this field is present and contains at least one item, the listener only allows the traffic which matches at least one item in this list. For more information, see the external documentation for networking.k8s.io/v1 networkpolicypeer.

NetworkPolicyPeer array

13.2.33. IngressListenerConfiguration schema reference

Used in: KafkaListenerExternalIngress

PropertyDescription

bootstrap

External bootstrap ingress configuration.

IngressListenerBootstrapConfiguration

brokers

External broker ingress configuration.

IngressListenerBrokerConfiguration array

brokerCertChainAndKey

Reference to the Secret which holds the certificate and private key pair. The certificate can optionally contain the whole chain.

CertAndKeySecretSource

13.2.34. IngressListenerBootstrapConfiguration schema reference

Used in: IngressListenerConfiguration

PropertyDescription

address

Additional address name for the bootstrap service. The address will be added to the list of subject alternative names of the TLS certificates.

string

dnsAnnotations

Annotations that will be added to the Ingress resource. You can use this field to configure DNS providers such as External DNS.

map

host

Host for the bootstrap route. This field will be used in the Ingress resource.

string

13.2.35. IngressListenerBrokerConfiguration schema reference

Used in: IngressListenerConfiguration

PropertyDescription

broker

ID of the Kafka broker (broker identifier).

integer

advertisedHost

The host name which will be used in the brokers' advertised.listeners.

string

advertisedPort

The port number which will be used in the brokers' advertised.listeners.

integer

host

Host for the broker ingress. This field will be used in the Ingress resource.

string

dnsAnnotations

Annotations that will be added to the Ingress resources for individual brokers. You can use this field to configure DNS providers such as External DNS.

map

13.2.36. EphemeralStorage schema reference

Used in: JbodStorage, KafkaClusterSpec, ZookeeperClusterSpec

The type property is a discriminator that distinguishes use of the EphemeralStorage type from PersistentClaimStorage. It must have the value ephemeral for the type EphemeralStorage.

PropertyDescription

id

Storage identification number. It is mandatory only for storage volumes defined in a storage of type 'jbod'.

integer

sizeLimit

When type=ephemeral, defines the total amount of local storage required for this EmptyDir volume (for example 1Gi).

string

type

Must be ephemeral.

string

13.2.37. PersistentClaimStorage schema reference

Used in: JbodStorage, KafkaClusterSpec, ZookeeperClusterSpec

The type property is a discriminator that distinguishes use of the PersistentClaimStorage type from EphemeralStorage. It must have the value persistent-claim for the type PersistentClaimStorage.

PropertyDescription

type

Must be persistent-claim.

string

size

When type=persistent-claim, defines the size of the persistent volume claim (for example, 1Gi). Mandatory when type=persistent-claim.

string

selector

Specifies a specific persistent volume to use. It contains key:value pairs representing labels for selecting such a volume.

map

deleteClaim

Specifies if the persistent volume claim has to be deleted when the cluster is undeployed.

boolean

class

The storage class to use for dynamic volume allocation.

string

id

Storage identification number. It is mandatory only for storage volumes defined in a storage of type 'jbod'.

integer

overrides

Overrides for individual brokers. The overrides field allows you to specify a different configuration for different brokers.

PersistentClaimStorageOverride array
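As an illustrative sketch only, the following configuration requests a 100Gi persistent volume claim for each broker and overrides the storage class for individual brokers. The storage class names are placeholder values.

Example persistent storage configuration with per-broker overrides

# ...
storage:
  type: persistent-claim
  size: 100Gi
  class: my-storage-class
  deleteClaim: false
  overrides:
    - broker: 0
      class: my-storage-class-zone-1a
    - broker: 1
      class: my-storage-class-zone-1b
# ...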

13.2.38. PersistentClaimStorageOverride schema reference

Used in: PersistentClaimStorage

PropertyDescription

class

The storage class to use for dynamic volume allocation for this broker.

string

broker

ID of the Kafka broker (broker identifier).

integer

13.2.39. JbodStorage schema reference

Used in: KafkaClusterSpec

The type property is a discriminator that distinguishes use of the JbodStorage type from EphemeralStorage, PersistentClaimStorage. It must have the value jbod for the type JbodStorage.

PropertyDescription

type

Must be jbod.

string

volumes

List of volumes as Storage objects representing the JBOD disks array.

EphemeralStorage, PersistentClaimStorage array
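The following sketch, with placeholder sizes, shows a JBOD disk array that mixes a persistent-claim volume and an ephemeral volume. Each volume must have a unique id.

Example JBOD storage configuration

# ...
storage:
  type: jbod
  volumes:
    - id: 0
      type: persistent-claim
      size: 100Gi
      deleteClaim: false
    - id: 1
      type: ephemeral
      sizeLimit: 10Gi
# ...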

13.2.40. KafkaAuthorizationSimple schema reference

Used in: KafkaClusterSpec

Full list of KafkaAuthorizationSimple schema properties

Simple authorization in AMQ Streams uses the AclAuthorizer plugin, the default Access Control Lists (ACLs) authorization plugin provided with Apache Kafka. ACLs allow you to define which users have access to which resources at a granular level.

Configure the Kafka custom resource to use simple authorization. Set the type property in the authorization section to the value simple, and configure a list of super users.

Access rules are configured for the KafkaUser, as described in the ACLRule schema reference.

13.2.40.1. superUsers

A list of user principals treated as super users, so that they are always allowed without querying ACL rules. For more information see Kafka authorization.

An example of simple authorization configuration

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
  namespace: myproject
spec:
  kafka:
    # ...
    authorization:
      type: simple
      superUsers:
        - CN=client_1
        - user_2
        - CN=client_3
    # ...

Note

The super.user configuration option in the config property in Kafka.spec.kafka is ignored. Designate super users in the authorization property instead. For more information, see Kafka broker configuration.

13.2.40.2. KafkaAuthorizationSimple schema properties

The type property is a discriminator that distinguishes use of the KafkaAuthorizationSimple type from KafkaAuthorizationOpa, KafkaAuthorizationKeycloak. It must have the value simple for the type KafkaAuthorizationSimple.

PropertyDescription

type

Must be simple.

string

superUsers

List of super users. Should contain a list of user principals which should get unlimited access rights.

string array

13.2.41. KafkaAuthorizationOpa schema reference

Used in: KafkaClusterSpec

Full list of KafkaAuthorizationOpa schema properties

To use Open Policy Agent authorization, set the type property in the authorization section to the value opa, and configure OPA properties as required.

13.2.41.1. url

The URL used to connect to the Open Policy Agent server. The URL has to include the policy which will be queried by the authorizer. Required.

13.2.41.2. allowOnError

Defines whether a Kafka client should be allowed or denied by default when the authorizer fails to query the Open Policy Agent, for example, when it is temporarily unavailable. Defaults to false - all actions will be denied.

13.2.41.3. initialCacheCapacity

Initial capacity of the local cache used by the authorizer to avoid querying the Open Policy Agent for every request. Defaults to 5000.

13.2.41.4. maximumCacheSize

Maximum capacity of the local cache used by the authorizer to avoid querying the Open Policy Agent for every request. Defaults to 50000.

13.2.41.5. expireAfterMs

The expiration of the records kept in the local cache to avoid querying the Open Policy Agent for every request. Defines how often the cached authorization decisions are reloaded from the Open Policy Agent server. In milliseconds. Defaults to 3600000 milliseconds (1 hour).

13.2.41.6. superUsers

A list of user principals treated as super users, so that they are always allowed without querying the Open Policy Agent policy. For more information see Kafka authorization.

An example of Open Policy Agent authorizer configuration

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
  namespace: myproject
spec:
  kafka:
    # ...
    authorization:
      type: opa
      url: http://opa:8181/v1/data/kafka/allow
      allowOnError: false
      initialCacheCapacity: 1000
      maximumCacheSize: 10000
      expireAfterMs: 60000
      superUsers:
        - CN=fred
        - sam
        - CN=edward
    # ...

13.2.41.7. KafkaAuthorizationOpa schema properties

The type property is a discriminator that distinguishes use of the KafkaAuthorizationOpa type from KafkaAuthorizationSimple, KafkaAuthorizationKeycloak. It must have the value opa for the type KafkaAuthorizationOpa.

PropertyDescription

type

Must be opa.

string

url

The URL used to connect to the Open Policy Agent server. The URL has to include the policy which will be queried by the authorizer. This option is required.

string

allowOnError

Defines whether a Kafka client should be allowed or denied by default when the authorizer fails to query the Open Policy Agent, for example, when it is temporarily unavailable. Defaults to false - all actions will be denied.

boolean

initialCacheCapacity

Initial capacity of the local cache used by the authorizer to avoid querying the Open Policy Agent for every request. Defaults to 5000.

integer

maximumCacheSize

Maximum capacity of the local cache used by the authorizer to avoid querying the Open Policy Agent for every request. Defaults to 50000.

integer

expireAfterMs

The expiration of the records kept in the local cache to avoid querying the Open Policy Agent for every request. Defines how often the cached authorization decisions are reloaded from the Open Policy Agent server. In milliseconds. Defaults to 3600000.

integer

superUsers

List of super users, which is specifically a list of user principals that have unlimited access rights.

string array

13.2.42. KafkaAuthorizationKeycloak schema reference

Used in: KafkaClusterSpec

The type property is a discriminator that distinguishes use of the KafkaAuthorizationKeycloak type from KafkaAuthorizationSimple, KafkaAuthorizationOpa. It must have the value keycloak for the type KafkaAuthorizationKeycloak.

PropertyDescription

type

Must be keycloak.

string

clientId

OAuth Client ID which the Kafka client can use to authenticate against the OAuth server and use the token endpoint URI.

string

tokenEndpointUri

Authorization server token endpoint URI.

string

tlsTrustedCertificates

Trusted certificates for TLS connection to the OAuth server.

CertSecretSource array

disableTlsHostnameVerification

Enable or disable TLS hostname verification. Default value is false.

boolean

delegateToKafkaAcls

Whether authorization decision should be delegated to the 'Simple' authorizer if DENIED by Red Hat Single Sign-On Authorization Services policies. Default value is false.

boolean

grantsRefreshPeriodSeconds

The time between two consecutive grants refresh runs in seconds. The default value is 60.

integer

grantsRefreshPoolSize

The number of threads to use to refresh grants for active sessions. The more threads, the more parallelism, so the sooner the job completes. However, using more threads places a heavier load on the authorization server. The default value is 5.

integer

superUsers

List of super users. Should contain a list of user principals which should get unlimited access rights.

string array
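No example is given above for Keycloak authorization, so the following sketch is illustrative only. The token endpoint URI, Secret name, certificate file name, and super user principals are placeholder values for a Red Hat Single Sign-On deployment.

An example of Keycloak authorization configuration

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
  namespace: myproject
spec:
  kafka:
    # ...
    authorization:
      type: keycloak
      clientId: kafka
      tokenEndpointUri: https://sso.myproject.svc:8443/auth/realms/kafka-authz/protocol/openid-connect/token
      tlsTrustedCertificates:
        - secretName: oauth-server-cert
          certificate: sso.crt
      delegateToKafkaAcls: false
      superUsers:
        - CN=my-admin-client
    # ...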

13.2.43. Rack schema reference

Used in: KafkaClusterSpec, KafkaConnectS2ISpec, KafkaConnectSpec

Full list of Rack schema properties

Configures rack awareness to spread partition replicas across different racks.

A rack can represent an availability zone, data center, or an actual rack in your data center. By configuring a rack for a Kafka cluster, consumers can fetch data from the closest replica. This is useful for reducing the load on your network when a Kafka cluster spans multiple datacenters.

To configure Kafka brokers for rack awareness, you specify a topologyKey value to match the label of the cluster node used by OpenShift when scheduling Kafka broker pods to nodes.

If the OpenShift cluster is running on a cloud provider platform, the label must represent the availability zone where the node is running. Usually, nodes are labeled with the topology.kubernetes.io/zone label (or failure-domain.beta.kubernetes.io/zone on older OpenShift versions), which can be used as the topologyKey value.

The rack awareness configuration spreads the broker pods and partition replicas across zones, improving resiliency, and also sets a broker.rack configuration for each Kafka broker. The broker.rack configuration assigns a rack ID to each broker.

Consult your OpenShift administrator regarding the node label that represents the zone or rack into which the node is deployed.

Example rack configuration for Kafka

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    rack:
      topologyKey: topology.kubernetes.io/zone
    config:
      # ...
      replica.selector.class: org.apache.kafka.common.replica.RackAwareReplicaSelector
    # ...

Use the RackAwareReplicaSelector implementation for the Kafka ReplicaSelector plugin if you want clients to consume from the closest replica. The ReplicaSelector plugin provides the logic that enables clients to consume from the nearest replica. Specify RackAwareReplicaSelector for replica.selector.class to switch from the default implementation, which uses LeaderSelector to always select the leader replica for the client. Switching from the leader replica to a follower replica incurs some additional latency. If required, you can also provide your own implementation.

For clients, including Kafka Connect, you specify the same topology key as the broker that the client will use to consume messages.

Example rack configuration for Kafka Connect

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
# ...
spec:
  # ...
  rack:
    topologyKey: topology.kubernetes.io/zone
  # ...

The client is assigned a client.rack ID.

RackAwareReplicaSelector associates matching broker.rack and client.rack IDs, so the client can consume from the nearest replica.

Figure 13.1. Example showing client consuming from replicas in the same availability zone

If there are multiple replicas in the same rack, RackAwareReplicaSelector always selects the most up-to-date replica. If the rack ID is not specified, or if it cannot find a replica with the same rack ID, it will fall back to the leader replica.

For more information about OpenShift node labels, see Well-Known Labels, Annotations and Taints.

13.2.43.1. Rack schema properties

PropertyDescription

topologyKey

A key that matches labels assigned to the OpenShift cluster nodes. The value of the label is used to set the broker’s broker.rack config and client.rack in Kafka Connect.

string

13.2.44. Probe schema reference

Used in: CruiseControlSpec, EntityTopicOperatorSpec, EntityUserOperatorSpec, KafkaBridgeSpec, KafkaClusterSpec, KafkaConnectS2ISpec, KafkaConnectSpec, KafkaExporterSpec, KafkaMirrorMaker2Spec, KafkaMirrorMakerSpec, TlsSidecar, TopicOperatorSpec, ZookeeperClusterSpec

PropertyDescription

failureThreshold

Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1.

integer

initialDelaySeconds

The initial delay before the health is first checked. Defaults to 15 seconds. Minimum value is 0.

integer

periodSeconds

How often (in seconds) to perform the probe. Defaults to 10 seconds. Minimum value is 1.

integer

successThreshold

Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness. Minimum value is 1.

integer

timeoutSeconds

The timeout for each attempted health check. Defaults to 5 seconds. Minimum value is 1.

integer
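As an illustrative sketch with assumed timing values, the following configuration tunes the healthcheck probes for the Kafka brokers. The same probe fields apply to the other components listed above.

Example probe configuration

# ...
spec:
  kafka:
    # ...
    readinessProbe:
      initialDelaySeconds: 15
      timeoutSeconds: 5
    livenessProbe:
      initialDelaySeconds: 15
      timeoutSeconds: 5
    # ...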

13.2.45. JvmOptions schema reference

Used in: CruiseControlSpec, EntityTopicOperatorSpec, EntityUserOperatorSpec, KafkaBridgeSpec, KafkaClusterSpec, KafkaConnectS2ISpec, KafkaConnectSpec, KafkaMirrorMaker2Spec, KafkaMirrorMakerSpec, TopicOperatorSpec, ZookeeperClusterSpec

PropertyDescription

-XX

A map of -XX options to the JVM.

map

-Xms

-Xms option to the JVM.

string

-Xmx

-Xmx option to the JVM.

string

gcLoggingEnabled

Specifies whether the Garbage Collection logging is enabled. The default is false.

boolean

javaSystemProperties

A map of additional system properties which will be passed using the -D option to the JVM.

SystemProperty array
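The following sketch, with assumed heap sizes and JVM options, shows how the JvmOptions fields are typically combined for the Kafka brokers.

Example jvmOptions configuration

# ...
spec:
  kafka:
    # ...
    jvmOptions:
      "-Xmx": "2g"
      "-Xms": "2g"
      "-XX":
        "UseG1GC": true
        "MaxGCPauseMillis": 20
      gcLoggingEnabled: false
      javaSystemProperties:
        - name: javax.net.debug
          value: ssl
    # ...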

13.2.46. SystemProperty schema reference

Used in: JvmOptions

PropertyDescription

name

The system property name.

string

value

The system property value.

string

13.2.47. KafkaJmxOptions schema reference

Used in: KafkaClusterSpec, KafkaConnectS2ISpec, KafkaConnectSpec, KafkaMirrorMaker2Spec

Full list of KafkaJmxOptions schema properties

Configures JMX connection options.

JMX metrics are obtained from Kafka brokers, Kafka Connect, and MirrorMaker 2.0 by opening a JMX port on 9999. Use the jmxOptions property to configure a password-protected or an unprotected JMX port. Using password protection prevents unauthorized pods from accessing the port.

You can then obtain metrics about the component.

For example, for each Kafka broker you can obtain bytes-per-second usage data from clients, or the request rate of the network of the broker.

To enable security for the JMX port, set the type parameter in the authentication field to password.

Example password-protected JMX configuration

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    jmxOptions:
      authentication:
        type: "password"
    # ...
  zookeeper:
    # ...

You can then deploy a pod into a cluster and obtain JMX metrics using the headless service by specifying which broker you want to address.

For example, to get JMX metrics from broker 0 you specify:

"CLUSTER-NAME-kafka-0.CLUSTER-NAME-kafka-brokers"

CLUSTER-NAME-kafka-0 is the name of the broker pod, and CLUSTER-NAME-kafka-brokers is the name of the headless service that returns the IPs of the broker pods.

If the JMX port is secured, you can get the username and password by referencing them from the JMX Secret in the deployment of your pod.
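For example, the Pod definition below is an illustrative sketch that maps the JMX credentials into environment variables. It assumes the JMX Secret is named CLUSTER-NAME-kafka-jmx and contains the keys jmx-username and jmx-password; the client image is a placeholder.

Example Pod referencing the JMX Secret

apiVersion: v1
kind: Pod
metadata:
  name: jmx-client
spec:
  containers:
    - name: jmx-client
      image: my-org/my-jmx-client:latest # placeholder image
      env:
        - name: JMX_USERNAME
          valueFrom:
            secretKeyRef:
              name: my-cluster-kafka-jmx # assumed Secret name (CLUSTER-NAME-kafka-jmx)
              key: jmx-username
        - name: JMX_PASSWORD
          valueFrom:
            secretKeyRef:
              name: my-cluster-kafka-jmx
              key: jmx-password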

For an unprotected JMX port, use an empty object {} to open the JMX port on the headless service. You deploy a pod and obtain metrics in the same way as for the protected port, but in this case any pod can read from the JMX port.

Example open port JMX configuration

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    jmxOptions: {}
    # ...
  zookeeper:
    # ...


13.2.47.1. KafkaJmxOptions schema properties

PropertyDescription

authentication

Authentication configuration for connecting to the JMX port. The type depends on the value of the authentication.type property within the given object, which must be one of [password].

KafkaJmxAuthenticationPassword

13.2.48. KafkaJmxAuthenticationPassword schema reference

Used in: KafkaJmxOptions

The type property is a discriminator that distinguishes use of the KafkaJmxAuthenticationPassword type from other subtypes which may be added in the future. It must have the value password for the type KafkaJmxAuthenticationPassword.

PropertyDescription

type

Must be password.

string

13.2.49. JmxPrometheusExporterMetrics schema reference

Used in: CruiseControlSpec, KafkaClusterSpec, KafkaConnectS2ISpec, KafkaConnectSpec, KafkaMirrorMaker2Spec, KafkaMirrorMakerSpec, ZookeeperClusterSpec

The type property is a discriminator that distinguishes use of the JmxPrometheusExporterMetrics type from other subtypes which may be added in the future. It must have the value jmxPrometheusExporter for the type JmxPrometheusExporterMetrics.

PropertyDescription

type

Must be jmxPrometheusExporter.

string

valueFrom

ConfigMap entry where the Prometheus JMX Exporter configuration is stored. For details of the structure of this configuration, see the JMX Exporter documentation.

ExternalConfigurationReference
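As an illustrative sketch, the following configuration references a Prometheus JMX Exporter configuration stored in a ConfigMap, assuming it is set through the metricsConfig property of the component. The ConfigMap name and key are placeholders.

Example metrics configuration

# ...
metricsConfig:
  type: jmxPrometheusExporter
  valueFrom:
    configMapKeyRef:
      name: my-kafka-metrics
      key: kafka-metrics-config.yml
# ...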

13.2.50. ExternalConfigurationReference schema reference

Used in: ExternalLogging, JmxPrometheusExporterMetrics

PropertyDescription

configMapKeyRef

Reference to the key in the ConfigMap containing the configuration. For more information, see the external documentation for core/v1 configmapkeyselector.

ConfigMapKeySelector

13.2.51. InlineLogging schema reference

Used in: CruiseControlSpec, EntityTopicOperatorSpec, EntityUserOperatorSpec, KafkaBridgeSpec, KafkaClusterSpec, KafkaConnectS2ISpec, KafkaConnectSpec, KafkaMirrorMaker2Spec, KafkaMirrorMakerSpec, TopicOperatorSpec, ZookeeperClusterSpec

The type property is a discriminator that distinguishes use of the InlineLogging type from ExternalLogging. It must have the value inline for the type InlineLogging.

PropertyDescription

type

Must be inline.

string

loggers

A Map from logger name to logger level.

map
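The following sketch shows inline logging, for example under the logging property of spec.kafka. The logger names shown are Kafka broker loggers given as examples; logger names differ for other components.

Example inline logging configuration

# ...
logging:
  type: inline
  loggers:
    kafka.root.logger.level: "INFO"
    log4j.logger.kafka.request.logger: "DEBUG"
# ...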

13.2.52. ExternalLogging schema reference

Used in: CruiseControlSpec, EntityTopicOperatorSpec, EntityUserOperatorSpec, KafkaBridgeSpec, KafkaClusterSpec, KafkaConnectS2ISpec, KafkaConnectSpec, KafkaMirrorMaker2Spec, KafkaMirrorMakerSpec, TopicOperatorSpec, ZookeeperClusterSpec

The type property is a discriminator that distinguishes use of the ExternalLogging type from InlineLogging. It must have the value external for the type ExternalLogging.

PropertyDescription

type

Must be external.

string

name

The name property has been deprecated, and should now be configured using valueFrom. The property name is removed in API version v1beta2. The name of the ConfigMap from which to get the logging configuration.

string

valueFrom

ConfigMap entry where the logging configuration is stored.

ExternalConfigurationReference
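As an illustrative sketch, the following external logging configuration takes the logging configuration from a key in a ConfigMap. The ConfigMap name and key are placeholders.

Example external logging configuration

# ...
logging:
  type: external
  valueFrom:
    configMapKeyRef:
      name: customConfigMap
      key: kafka-log4j.properties
# ...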

13.2.53. TlsSidecar schema reference

Used in: CruiseControlSpec, EntityOperatorSpec, KafkaClusterSpec, TopicOperatorSpec, ZookeeperClusterSpec

Full list of TlsSidecar schema properties

Configures a TLS sidecar, which is a container that runs in a pod, but serves a supporting purpose. In AMQ Streams, the TLS sidecar uses TLS to encrypt and decrypt communication between components and ZooKeeper.

The TLS sidecar is used in:

  • Entity Operator
  • Cruise Control

The TLS sidecar is configured using the tlsSidecar property in:

  • Kafka.spec.entityOperator
  • Kafka.spec.cruiseControl

The TLS sidecar supports the following additional options:

  • image
  • resources
  • logLevel
  • readinessProbe
  • livenessProbe

The resources property specifies the memory and CPU resources allocated for the TLS sidecar.

The image property configures the container image which will be used.

The readinessProbe and livenessProbe properties configure healthcheck probes for the TLS sidecar.

The logLevel property specifies the logging level. The following logging levels are supported:

  • emerg
  • alert
  • crit
  • err
  • warning
  • notice
  • info
  • debug

The default value is notice.

Example TLS sidecar configuration

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  # ...
  entityOperator:
    # ...
    tlsSidecar:
      resources:
        requests:
          cpu: 200m
          memory: 64Mi
        limits:
          cpu: 500m
          memory: 128Mi
    # ...
  cruiseControl:
    # ...
    tlsSidecar:
      image: my-org/my-image:latest
      resources:
        requests:
          cpu: 200m
          memory: 64Mi
        limits:
          cpu: 500m
          memory: 128Mi
      logLevel: debug
      readinessProbe:
        initialDelaySeconds: 15
        timeoutSeconds: 5
      livenessProbe:
        initialDelaySeconds: 15
        timeoutSeconds: 5
    # ...

13.2.53.1. TlsSidecar schema properties

PropertyDescription

image

The docker image for the container.

string

livenessProbe

Pod liveness checking.

Probe

logLevel

The log level for the TLS sidecar. Default value is notice.

string (one of [emerg, debug, crit, err, alert, warning, notice, info])

readinessProbe

Pod readiness checking.

Probe

resources

CPU and memory resources to reserve. For more information, see the external documentation for core/v1 resourcerequirements.

ResourceRequirements

13.2.54. KafkaClusterTemplate schema reference

Used in: KafkaClusterSpec

PropertyDescription

statefulset

Template for Kafka StatefulSet.

StatefulSetTemplate

pod

Template for Kafka Pods.

PodTemplate

bootstrapService

Template for Kafka bootstrap Service.

ResourceTemplate

brokersService

Template for Kafka broker Service.

ResourceTemplate

externalBootstrapService

Template for Kafka external bootstrap Service.

ExternalServiceTemplate

perPodService

Template for Kafka per-pod Services used for access from outside of OpenShift.

ExternalServiceTemplate

externalBootstrapRoute

Template for Kafka external bootstrap Route.

ResourceTemplate

perPodRoute

Template for Kafka per-pod Routes used for access from outside of OpenShift.

ResourceTemplate

externalBootstrapIngress

Template for Kafka external bootstrap Ingress.

ResourceTemplate

perPodIngress

Template for Kafka per-pod Ingress used for access from outside of OpenShift.

ResourceTemplate

persistentVolumeClaim

Template for all Kafka PersistentVolumeClaims.

ResourceTemplate

podDisruptionBudget

Template for Kafka PodDisruptionBudget.

PodDisruptionBudgetTemplate

kafkaContainer

Template for the Kafka broker container.

ContainerTemplate

tlsSidecarContainer

The tlsSidecarContainer property has been deprecated. The property tlsSidecarContainer is removed in API version v1beta2. Template for the Kafka broker TLS sidecar container.

ContainerTemplate

initContainer

Template for the Kafka init container.

ContainerTemplate

clusterCaCert

Template for Secret with Kafka Cluster certificate public key.

ResourceTemplate

clusterRoleBinding

Template for the Kafka ClusterRoleBinding.

ResourceTemplate

13.2.55. StatefulSetTemplate schema reference

Used in: KafkaClusterTemplate, ZookeeperClusterTemplate

PropertyDescription

metadata

Metadata applied to the resource.

MetadataTemplate

podManagementPolicy

PodManagementPolicy which will be used for this StatefulSet. Valid values are Parallel and OrderedReady. Defaults to Parallel.

string (one of [OrderedReady, Parallel])
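For example, the following minimal sketch switches the pod management policy for the Kafka StatefulSet from the default Parallel to OrderedReady.

Example StatefulSet template configuration

# ...
template:
  statefulset:
    podManagementPolicy: OrderedReady
# ...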

13.2.56. MetadataTemplate schema reference

Used in: DeploymentTemplate, ExternalServiceTemplate, PodDisruptionBudgetTemplate, PodTemplate, ResourceTemplate, StatefulSetTemplate

Full list of MetadataTemplate schema properties

Labels and Annotations are used to identify and organize resources, and are configured in the metadata property.

For example:

# ...
template:
  statefulset:
    metadata:
      labels:
        label1: value1
        label2: value2
      annotations:
        annotation1: value1
        annotation2: value2
# ...

The labels and annotations fields can contain any labels or annotations that do not contain the reserved string strimzi.io. Labels and annotations containing strimzi.io are used internally by AMQ Streams and cannot be configured.

13.2.56.1. MetadataTemplate schema properties

PropertyDescription

labels

Labels added to the resource template. Can be applied to different resources such as StatefulSets, Deployments, Pods, and Services.

map

annotations

Annotations added to the resource template. Can be applied to different resources such as StatefulSets, Deployments, Pods, and Services.

map

13.2.57. PodTemplate schema reference

Used in: CruiseControlTemplate, EntityOperatorTemplate, KafkaBridgeTemplate, KafkaClusterTemplate, KafkaConnectTemplate, KafkaExporterTemplate, KafkaMirrorMakerTemplate, ZookeeperClusterTemplate

Full list of PodTemplate schema properties

Configures the template for Kafka pods.

Example PodTemplate configuration

# ...
template:
  pod:
    metadata:
      labels:
        label1: value1
      annotations:
        anno1: value1
    imagePullSecrets:
      - name: my-docker-credentials
    securityContext:
      runAsUser: 1000001
      fsGroup: 0
    terminationGracePeriodSeconds: 120
# ...

13.2.57.1. hostAliases

Use the hostAliases property to specify a list of hosts and IP addresses, which are injected into the /etc/hosts file of the pod.

This configuration is especially useful for Kafka Connect or MirrorMaker when a connection outside of the cluster is also requested by users.

Example hostAliases configuration

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
#...
spec:
  # ...
  template:
    pod:
      hostAliases:
      - ip: "192.168.1.86"
        hostnames:
        - "my-host-1"
        - "my-host-2"
      #...

13.2.57.2. PodTemplate schema properties

PropertyDescription

metadata

Metadata applied to the resource.

MetadataTemplate

imagePullSecrets

List of references to secrets in the same namespace to use for pulling any of the images used by this Pod. When the STRIMZI_IMAGE_PULL_SECRETS environment variable in Cluster Operator and the imagePullSecrets option are specified, only the imagePullSecrets variable is used and the STRIMZI_IMAGE_PULL_SECRETS variable is ignored. For more information, see the external documentation for core/v1 localobjectreference.

LocalObjectReference array

securityContext

Configures pod-level security attributes and common container settings. For more information, see the external documentation for core/v1 podsecuritycontext.

PodSecurityContext

terminationGracePeriodSeconds

The grace period is the duration in seconds between when the processes running in the pod are sent a termination signal and when the processes are forcibly halted with a kill signal. Set this value to longer than the expected cleanup time for your process. Value must be a non-negative integer. A zero value indicates delete immediately. You might need to increase the grace period for very large Kafka clusters, so that the Kafka brokers have enough time to transfer their work to another broker before they are terminated. Defaults to 30 seconds.

integer

affinity

The pod’s affinity rules. For more information, see the external documentation for core/v1 affinity.

Affinity

tolerations

The pod’s tolerations. For more information, see the external documentation for core/v1 toleration.

Toleration array

priorityClassName

The name of the priority class used to assign priority to the pods. For more information about priority classes, see Pod Priority and Preemption.

string

schedulerName

The name of the scheduler used to dispatch this Pod. If not specified, the default scheduler will be used.

string

hostAliases

The pod’s HostAliases. HostAliases is an optional list of hosts and IPs that will be injected into the pod’s hosts file if specified. For more information, see the external documentation for core/v1 HostAlias.

HostAlias array

topologySpreadConstraints

The pod’s topology spread constraints. For more information, see the external documentation for core/v1 topologyspreadconstraint.

TopologySpreadConstraint array

13.2.58. ResourceTemplate schema reference

Used in: CruiseControlTemplate, EntityOperatorTemplate, KafkaBridgeTemplate, KafkaClusterTemplate, KafkaConnectTemplate, KafkaExporterTemplate, KafkaUserTemplate, ZookeeperClusterTemplate

PropertyDescription

metadata

Metadata applied to the resource.

MetadataTemplate

13.2.59. ExternalServiceTemplate schema reference

Used in: KafkaClusterTemplate

Full list of ExternalServiceTemplate schema properties

When exposing Kafka outside of OpenShift using loadbalancers or node ports, you can use properties, in addition to labels and annotations, to customize how a Service is created.

An example showing customized external services

# ...
template:
  externalBootstrapService:
    externalTrafficPolicy: Local
    loadBalancerSourceRanges:
      - 10.0.0.0/8
      - 88.208.76.87/32
  perPodService:
    externalTrafficPolicy: Local
    loadBalancerSourceRanges:
      - 10.0.0.0/8
      - 88.208.76.87/32
# ...

13.2.59.1. ExternalServiceTemplate schema properties

PropertyDescription

metadata

Metadata applied to the resource.

MetadataTemplate

externalTrafficPolicy

The externalTrafficPolicy property has been deprecated, and should now be configured using spec.kafka.listeners[].configuration. The property externalTrafficPolicy is removed in API version v1beta2. Specifies whether the service routes external traffic to node-local or cluster-wide endpoints. Cluster may cause a second hop to another node and obscures the client source IP. Local avoids a second hop for LoadBalancer and Nodeport type services and preserves the client source IP (when supported by the infrastructure). If unspecified, OpenShift will use Cluster as the default.

string (one of [Local, Cluster])

loadBalancerSourceRanges

The loadBalancerSourceRanges property has been deprecated, and should now be configured using spec.kafka.listeners[].configuration. The property loadBalancerSourceRanges is removed in API version v1beta2. A list of CIDR ranges (for example 10.0.0.0/8 or 130.211.204.1/32) from which clients can connect to load balancer type listeners. If supported by the platform, traffic through the loadbalancer is restricted to the specified CIDR ranges. This field is applicable only for loadbalancer type services and is ignored if the cloud provider does not support the feature. For more information, see https://v1-17.docs.kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/.

string array

13.2.60. PodDisruptionBudgetTemplate schema reference

Used in: CruiseControlTemplate, KafkaBridgeTemplate, KafkaClusterTemplate, KafkaConnectTemplate, KafkaMirrorMakerTemplate, ZookeeperClusterTemplate

Full list of PodDisruptionBudgetTemplate schema properties

AMQ Streams creates a PodDisruptionBudget for every new StatefulSet or Deployment. By default, pod disruption budgets only allow a single pod to be unavailable at a given time. You can increase the number of unavailable pods allowed by changing the default value of the maxUnavailable property in the PodDisruptionBudget.spec resource.

An example of PodDisruptionBudget template

# ...
template:
    podDisruptionBudget:
        metadata:
            labels:
                key1: label1
                key2: label2
            annotations:
                key1: label1
                key2: label2
        maxUnavailable: 1
# ...

13.2.60.1. PodDisruptionBudgetTemplate schema properties

PropertyDescription

metadata

Metadata to apply to the PodDisruptionBudgetTemplate resource.

MetadataTemplate

maxUnavailable

Maximum number of unavailable pods to allow automatic Pod eviction. A Pod eviction is allowed when the maxUnavailable number of pods or fewer are unavailable after the eviction. Setting this value to 0 prevents all voluntary evictions, so the pods must be evicted manually. Defaults to 1.

integer

13.2.61. ContainerTemplate schema reference

Used in: CruiseControlTemplate, EntityOperatorTemplate, KafkaBridgeTemplate, KafkaClusterTemplate, KafkaConnectTemplate, KafkaExporterTemplate, KafkaMirrorMakerTemplate, ZookeeperClusterTemplate

Full list of ContainerTemplate schema properties

You can set custom security context and environment variables for a container.

The environment variables are defined under the env property as a list of objects with name and value fields. The following example shows two custom environment variables and a custom security context set for the Kafka broker containers:

# ...
template:
  kafkaContainer:
    env:
    - name: EXAMPLE_ENV_1
      value: example.env.one
    - name: EXAMPLE_ENV_2
      value: example.env.two
    securityContext:
      runAsUser: 2000
# ...

Environment variables prefixed with KAFKA_ are internal to AMQ Streams and should be avoided. If you set a custom environment variable that is already in use by AMQ Streams, it is ignored and a warning is recorded in the log.

13.2.61.1. ContainerTemplate schema properties

PropertyDescription

env

Environment variables which should be applied to the container.

ContainerEnvVar array

securityContext

Security context for the container. For more information, see the external documentation for core/v1 securitycontext.

SecurityContext

13.2.62. ContainerEnvVar schema reference

Used in: ContainerTemplate

PropertyDescription

name

The environment variable key.

string

value

The environment variable value.

string

13.2.63. ZookeeperClusterSpec schema reference

Used in: KafkaSpec

Full list of ZookeeperClusterSpec schema properties

Configures a ZooKeeper cluster.

13.2.63.1. config

Use the config properties to configure ZooKeeper options as keys.

Standard Apache ZooKeeper configuration may be provided, restricted to those properties not managed directly by AMQ Streams.

Configuration options that cannot be configured relate to:

  • Security (Encryption, Authentication, and Authorization)
  • Listener configuration
  • Configuration of data directories
  • ZooKeeper cluster composition

The values can be one of the following JSON types:

  • String
  • Number
  • Boolean

You can specify and configure the options listed in the ZooKeeper documentation with the exception of those managed directly by AMQ Streams. Specifically, all configuration options with keys equal to or starting with one of the following strings are forbidden:

  • server.
  • dataDir
  • dataLogDir
  • clientPort
  • authProvider
  • quorum.auth
  • requireClientAuthScheme

When a forbidden option is present in the config property, it is ignored and a warning message is printed to the Cluster Operator log file. All other supported options are passed to ZooKeeper.

There are exceptions to the forbidden options. For client connection using a specific cipher suite for a TLS version, you can configure allowed ssl properties.

Example ZooKeeper configuration

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
spec:
  kafka:
    # ...
  zookeeper:
    # ...
    config:
      autopurge.snapRetainCount: 3
      autopurge.purgeInterval: 1
      ssl.cipher.suites: "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"
      ssl.enabled.protocols: "TLSv1.2"
      ssl.protocol: "TLSv1.2"
    # ...

13.2.63.2. logging

ZooKeeper has a configurable logger:

  • zookeeper.root.logger

ZooKeeper uses the Apache log4j logger implementation.

Use the logging property to configure loggers and logger levels.

You can set the log levels by specifying the logger and level directly (inline) or use a custom (external) ConfigMap. If a ConfigMap is used, you set the logging.valueFrom.configMapKeyRef.name property to the name of the ConfigMap containing the external logging configuration. Inside the ConfigMap, the logging configuration is described using log4j.properties. Both the logging.valueFrom.configMapKeyRef.name and logging.valueFrom.configMapKeyRef.key properties are mandatory. A ConfigMap using the exact logging configuration specified is created with the custom resource when the Cluster Operator is running, then recreated after each reconciliation. If you do not specify a custom ConfigMap, default logging settings are used. If a specific logger value is not set, upper-level logger settings are inherited for that logger. For more information about log levels, see Apache logging services.

Here we see examples of inline and external logging.

Inline logging

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
spec:
  # ...
  zookeeper:
    # ...
    logging:
      type: inline
      loggers:
        zookeeper.root.logger: "INFO"
    # ...

External logging

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
spec:
  # ...
  zookeeper:
    # ...
    logging:
      type: external
      valueFrom:
        configMapKeyRef:
          name: customConfigMap
          key: zookeeper-log4j.properties
  # ...

Garbage collector (GC)

Garbage collector logging can also be enabled (or disabled) using the jvmOptions property.
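
For example, a minimal sketch of enabling garbage collector logging, assuming the gcLoggingEnabled option of jvmOptions:

# ...
zookeeper:
  # ...
  jvmOptions:
    gcLoggingEnabled: true
  # ...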

13.2.63.3. ZookeeperClusterSpec schema properties

PropertyDescription

replicas

The number of pods in the cluster.

integer

image

The docker image for the pods.

string

storage

Storage configuration (disk). Cannot be updated. The type depends on the value of the storage.type property within the given object, which must be one of [ephemeral, persistent-claim].

EphemeralStorage, PersistentClaimStorage

config

The ZooKeeper broker config. Properties with the following prefixes cannot be set: server., dataDir, dataLogDir, clientPort, authProvider, quorum.auth, requireClientAuthScheme, snapshot.trust.empty, standaloneEnabled, reconfigEnabled, 4lw.commands.whitelist, secureClientPort, ssl., serverCnxnFactory, sslQuorum (with the exception of: ssl.protocol, ssl.quorum.protocol, ssl.enabledProtocols, ssl.quorum.enabledProtocols, ssl.ciphersuites, ssl.quorum.ciphersuites, ssl.hostnameVerification, ssl.quorum.hostnameVerification).

map

affinity

The affinity property has been deprecated, and should now be configured using spec.zookeeper.template.pod.affinity. The property affinity is removed in API version v1beta2. The pod’s affinity rules. For more information, see the external documentation for core/v1 affinity.

Affinity

tolerations

The tolerations property has been deprecated, and should now be configured using spec.zookeeper.template.pod.tolerations. The property tolerations is removed in API version v1beta2. The pod’s tolerations. For more information, see the external documentation for core/v1 toleration.

Toleration array

livenessProbe

Pod liveness checking.

Probe

readinessProbe

Pod readiness checking.

Probe

jvmOptions

JVM Options for pods.

JvmOptions

resources

CPU and memory resources to reserve. For more information, see the external documentation for core/v1 resourcerequirements.

ResourceRequirements

metrics

The metrics property has been deprecated, and should now be configured using spec.zookeeper.metricsConfig. The property metrics is removed in API version v1beta2. The Prometheus JMX Exporter configuration. See https://github.com/prometheus/jmx_exporter for details of the structure of this configuration.

map

metricsConfig

Metrics configuration. The type depends on the value of the metricsConfig.type property within the given object, which must be one of [jmxPrometheusExporter].

JmxPrometheusExporterMetrics

logging

Logging configuration for ZooKeeper. The type depends on the value of the logging.type property within the given object, which must be one of [inline, external].

InlineLogging, ExternalLogging

template

Template for ZooKeeper cluster resources. The template allows users to specify how the StatefulSet, Pods, and Services are generated.

ZookeeperClusterTemplate

tlsSidecar

The tlsSidecar property has been deprecated. The property tlsSidecar is removed in API version v1beta2. TLS sidecar configuration. The TLS sidecar is not used anymore and this option will be ignored.

TlsSidecar

13.2.64. ZookeeperClusterTemplate schema reference

Used in: ZookeeperClusterSpec

PropertyDescription

statefulset

Template for ZooKeeper StatefulSet.

StatefulSetTemplate

pod

Template for ZooKeeper Pods.

PodTemplate

clientService

Template for ZooKeeper client Service.

ResourceTemplate

nodesService

Template for ZooKeeper nodes Service.

ResourceTemplate

persistentVolumeClaim

Template for all ZooKeeper PersistentVolumeClaims.

ResourceTemplate

podDisruptionBudget

Template for ZooKeeper PodDisruptionBudget.

PodDisruptionBudgetTemplate

zookeeperContainer

Template for the ZooKeeper container.

ContainerTemplate

tlsSidecarContainer

The tlsSidecarContainer property has been deprecated. The property tlsSidecarContainer is removed in API version v1beta2. Template for the ZooKeeper server TLS sidecar container. The TLS sidecar is not used anymore and this option will be ignored.

ContainerTemplate

13.2.65. TopicOperatorSpec schema reference

The type TopicOperatorSpec has been deprecated and is removed in API version v1beta2. Please use EntityTopicOperatorSpec instead.

Used in: KafkaSpec

PropertyDescription

watchedNamespace

The namespace the Topic Operator should watch.

string

image

The image to use for the Topic Operator.

string

reconciliationIntervalSeconds

Interval between periodic reconciliations.

integer

zookeeperSessionTimeoutSeconds

Timeout for the ZooKeeper session.

integer

affinity

The affinity property has been deprecated, and should now be configured using spec.entityOperator.template.pod.affinity. The property affinity is removed in API version v1beta2. Pod affinity rules. For more information, see the external documentation for core/v1 affinity.

Affinity

resources

CPU and memory resources to reserve. For more information, see the external documentation for core/v1 resourcerequirements.

ResourceRequirements

topicMetadataMaxAttempts

The number of attempts at getting topic metadata.

integer

tlsSidecar

The tlsSidecar property has been deprecated, and should now be configured using spec.entityOperator.tlsSidecar. The property tlsSidecar is removed in API version v1beta2. TLS sidecar configuration.

TlsSidecar

logging

Logging configuration. The type depends on the value of the logging.type property within the given object, which must be one of [inline, external].

InlineLogging, ExternalLogging

jvmOptions

JVM Options for pods.

JvmOptions

livenessProbe

Pod liveness checking.

Probe

readinessProbe

Pod readiness checking.

Probe

startupProbe

Pod startup checking.

Probe

13.2.66. EntityOperatorSpec schema reference

Used in: KafkaSpec

PropertyDescription

topicOperator

Configuration of the Topic Operator.

EntityTopicOperatorSpec

userOperator

Configuration of the User Operator.

EntityUserOperatorSpec

affinity

The affinity property has been deprecated, and should now be configured using spec.entityOperator.template.pod.affinity. The property affinity is removed in API version v1beta2. The pod’s affinity rules. For more information, see the external documentation for core/v1 affinity.

Affinity

tolerations

The tolerations property has been deprecated, and should now be configured using spec.entityOperator.template.pod.tolerations. The property tolerations is removed in API version v1beta2. The pod’s tolerations. For more information, see the external documentation for core/v1 toleration.

Toleration array

tlsSidecar

TLS sidecar configuration.

TlsSidecar

template

Template for Entity Operator resources. The template allows users to specify how the Deployment and Pods are generated.

EntityOperatorTemplate

13.2.67. EntityTopicOperatorSpec schema reference

Used in: EntityOperatorSpec

Full list of EntityTopicOperatorSpec schema properties

Configures the Topic Operator.

13.2.67.1. logging

The Topic Operator has a configurable logger:

  • rootLogger.level

The Topic Operator uses the Apache log4j2 logger implementation.

Use the logging property in the entityOperator.topicOperator field of the Kafka resource to configure loggers and logger levels.

You can set the log levels by specifying the logger and level directly (inline) or use a custom (external) ConfigMap. If a ConfigMap is used, you set the logging.valueFrom.configMapKeyRef.name property to the name of the ConfigMap containing the external logging configuration. Inside the ConfigMap, the logging configuration is described using log4j2.properties. Both the logging.valueFrom.configMapKeyRef.name and logging.valueFrom.configMapKeyRef.key properties are mandatory. A ConfigMap using the exact logging configuration specified is created with the custom resource when the Cluster Operator is running, then recreated after each reconciliation. If you do not specify a custom ConfigMap, default logging settings are used. If a specific logger value is not set, upper-level logger settings are inherited for that logger. For more information about log levels, see Apache logging services.

Here we see examples of inline and external logging.

Inline logging

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
  zookeeper:
    # ...
  entityOperator:
    # ...
    topicOperator:
      watchedNamespace: my-topic-namespace
      reconciliationIntervalSeconds: 60
      logging:
        type: inline
        loggers:
          rootLogger.level: INFO
  # ...

External logging

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
  zookeeper:
    # ...
  entityOperator:
    # ...
    topicOperator:
      watchedNamespace: my-topic-namespace
      reconciliationIntervalSeconds: 60
      logging:
        type: external
        valueFrom:
          configMapKeyRef:
            name: customConfigMap
            key: topic-operator-log4j2.properties
  # ...

Garbage collector (GC)

Garbage collector logging can also be enabled (or disabled) using the jvmOptions property.

13.2.67.2. EntityTopicOperatorSpec schema properties

PropertyDescription

watchedNamespace

The namespace the Topic Operator should watch.

string

image

The image to use for the Topic Operator.

string

reconciliationIntervalSeconds

Interval between periodic reconciliations.

integer

zookeeperSessionTimeoutSeconds

Timeout for the ZooKeeper session.

integer

startupProbe

Pod startup checking.

Probe

livenessProbe

Pod liveness checking.

Probe

readinessProbe

Pod readiness checking.

Probe

resources

CPU and memory resources to reserve. For more information, see the external documentation for core/v1 resourcerequirements.

ResourceRequirements

topicMetadataMaxAttempts

The number of attempts at getting topic metadata.

integer

logging

Logging configuration. The type depends on the value of the logging.type property within the given object, which must be one of [inline, external].

InlineLogging, ExternalLogging

jvmOptions

JVM Options for pods.

JvmOptions

13.2.68. EntityUserOperatorSpec schema reference

Used in: EntityOperatorSpec

Full list of EntityUserOperatorSpec schema properties

Configures the User Operator.

13.2.68.1. logging

The User Operator has a configurable logger:

  • rootLogger.level

The User Operator uses the Apache log4j2 logger implementation.

Use the logging property in the entityOperator.userOperator field of the Kafka resource to configure loggers and logger levels.

You can set the log levels by specifying the logger and level directly (inline) or use a custom (external) ConfigMap. If a ConfigMap is used, you set the logging.valueFrom.configMapKeyRef.name property to the name of the ConfigMap containing the external logging configuration. Inside the ConfigMap, the logging configuration is described using log4j2.properties. Both the logging.valueFrom.configMapKeyRef.name and logging.valueFrom.configMapKeyRef.key properties are mandatory. A ConfigMap using the exact logging configuration specified is created with the custom resource when the Cluster Operator is running, then recreated after each reconciliation. If you do not specify a custom ConfigMap, default logging settings are used. If a specific logger value is not set, upper-level logger settings are inherited for that logger. For more information about log levels, see Apache logging services.

Here we see examples of inline and external logging.

Inline logging

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
  zookeeper:
    # ...
  entityOperator:
    # ...
    userOperator:
      watchedNamespace: my-topic-namespace
      reconciliationIntervalSeconds: 60
      logging:
        type: inline
        loggers:
          rootLogger.level: INFO
  # ...

External logging

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
  zookeeper:
    # ...
  entityOperator:
    # ...
    userOperator:
      watchedNamespace: my-topic-namespace
      reconciliationIntervalSeconds: 60
      logging:
        type: external
        valueFrom:
          configMapKeyRef:
            name: customConfigMap
            key: user-operator-log4j2.properties
   # ...

Garbage collector (GC)

Garbage collector logging can also be enabled (or disabled) using the jvmOptions property.

13.2.68.2. EntityUserOperatorSpec schema properties

PropertyDescription

watchedNamespace

The namespace the User Operator should watch.

string

image

The image to use for the User Operator.

string

reconciliationIntervalSeconds

Interval between periodic reconciliations.

integer

zookeeperSessionTimeoutSeconds

Timeout for the ZooKeeper session.

integer

secretPrefix

The prefix that will be added to the KafkaUser name to be used as the Secret name.

string

livenessProbe

Pod liveness checking.

Probe

readinessProbe

Pod readiness checking.

Probe

resources

CPU and memory resources to reserve. For more information, see the external documentation for core/v1 resourcerequirements.

ResourceRequirements

logging

Logging configuration. The type depends on the value of the logging.type property within the given object, which must be one of [inline, external].

InlineLogging, ExternalLogging

jvmOptions

JVM Options for pods.

JvmOptions

13.2.69. EntityOperatorTemplate schema reference

Used in: EntityOperatorSpec

PropertyDescription

deployment

Template for Entity Operator Deployment.

ResourceTemplate

pod

Template for Entity Operator Pods.

PodTemplate

tlsSidecarContainer

Template for the Entity Operator TLS sidecar container.

ContainerTemplate

topicOperatorContainer

Template for the Entity Topic Operator container.

ContainerTemplate

userOperatorContainer

Template for the Entity User Operator container.

ContainerTemplate

13.2.70. CertificateAuthority schema reference

Used in: KafkaSpec

Configuration of how TLS certificates are used within the cluster. This applies to certificates used for both internal communication within the cluster and to certificates used for client access via Kafka.spec.kafka.listeners.tls.

PropertyDescription

generateCertificateAuthority

If true then Certificate Authority certificates will be generated automatically. Otherwise the user will need to provide a Secret with the CA certificate. Default is true.

boolean

generateSecretOwnerReference

If true, the Cluster and Client CA Secrets are configured with the ownerReference set to the Kafka resource. If the Kafka resource is deleted when true, the CA Secrets are also deleted. If false, the ownerReference is disabled. If the Kafka resource is deleted when false, the CA Secrets are retained and available for reuse. Default is true.

boolean

validityDays

The number of days generated certificates should be valid for. The default is 365.

integer

renewalDays

The number of days in the certificate renewal period. This is the number of days before a certificate expires during which renewal actions may be performed. When generateCertificateAuthority is true, this will cause the generation of a new certificate. When generateCertificateAuthority is false, this will cause extra logging at WARN level about the pending certificate expiry. Default is 30.

integer

certificateExpirationPolicy

How CA certificate expiration should be handled when generateCertificateAuthority=true. The default is for a new CA certificate to be generated reusing the existing private key.

string (one of [replace-key, renew-certificate])
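
The following is a minimal sketch of how these properties might be set, assuming the cluster CA is configured through the clusterCa field of the Kafka resource; the values shown are the documented defaults except for the expiration policy.

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
  zookeeper:
    # ...
  clusterCa:
    generateCertificateAuthority: true
    validityDays: 365
    renewalDays: 30
    certificateExpirationPolicy: renew-certificate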

13.2.71. CruiseControlSpec schema reference

Used in: KafkaSpec

PropertyDescription

image

The docker image for the pods.

string

tlsSidecar

TLS sidecar configuration.

TlsSidecar

resources

CPU and memory resources to reserve for the Cruise Control container. For more information, see the external documentation for core/v1 resourcerequirements.

ResourceRequirements

livenessProbe

Pod liveness checking for the Cruise Control container.

Probe

readinessProbe

Pod readiness checking for the Cruise Control container.

Probe

jvmOptions

JVM Options for the Cruise Control container.

JvmOptions

logging

Logging configuration (Log4j 2) for Cruise Control. The type depends on the value of the logging.type property within the given object, which must be one of [inline, external].

InlineLogging, ExternalLogging

template

Template to specify how Cruise Control resources, Deployments and Pods, are generated.

CruiseControlTemplate

brokerCapacity

The Cruise Control brokerCapacity configuration.

BrokerCapacity

config

The Cruise Control configuration. For a full list of configuration options refer to https://github.com/linkedin/cruise-control/wiki/Configurations. Note that properties with the following prefixes cannot be set: bootstrap.servers, client.id, zookeeper., network., security., failed.brokers.zk.path, webserver.http., webserver.api.urlprefix, webserver.session.path, webserver.accesslog., two.step., request.reason.required, metric.reporter.sampler.bootstrap.servers, metric.reporter.topic, partition.metric.sample.store.topic, broker.metric.sample.store.topic, capacity.config.file, self.healing., anomaly.detection., ssl. (with the exception of: ssl.cipher.suites, ssl.protocol, ssl.enabled.protocols, webserver.http.cors.enabled, webserver.http.cors.origin, webserver.http.cors.exposeheaders).

map

metrics

The metrics property has been deprecated, and should now be configured using spec.cruiseControl.metricsConfig. The property metrics is removed in API version v1beta2. The Prometheus JMX Exporter configuration. See https://github.com/prometheus/jmx_exporter for details of the structure of this configuration.

map

metricsConfig

Metrics configuration. The type depends on the value of the metricsConfig.type property within the given object, which must be one of [jmxPrometheusExporter].

JmxPrometheusExporterMetrics

13.2.72. CruiseControlTemplate schema reference

Used in: CruiseControlSpec

PropertyDescription

deployment

Template for Cruise Control Deployment.

ResourceTemplate

pod

Template for Cruise Control Pods.

PodTemplate

apiService

Template for Cruise Control API Service.

ResourceTemplate

podDisruptionBudget

Template for Cruise Control PodDisruptionBudget.

PodDisruptionBudgetTemplate

cruiseControlContainer

Template for the Cruise Control container.

ContainerTemplate

tlsSidecarContainer

Template for the Cruise Control TLS sidecar container.

ContainerTemplate

13.2.73. BrokerCapacity schema reference

Used in: CruiseControlSpec

PropertyDescription

disk

Broker capacity for disk in bytes, for example, 100Gi.

string

cpuUtilization

Broker capacity for CPU resource utilization as a percentage (0 - 100).

integer

inboundNetwork

Broker capacity for inbound network throughput in bytes per second, for example, 10000KB/s.

string

outboundNetwork

Broker capacity for outbound network throughput in bytes per second, for example 10000KB/s.

string
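
The following is a minimal sketch of a brokerCapacity configuration, assuming it is set through the cruiseControl field of the Kafka resource; the capacity values are illustrative only.

# ...
cruiseControl:
  # ...
  brokerCapacity:
    disk: 100Gi
    cpuUtilization: 100
    inboundNetwork: 10000KB/s
    outboundNetwork: 10000KB/s
  # ...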

13.2.74. KafkaExporterSpec schema reference

Used in: KafkaSpec

PropertyDescription

image

The docker image for the pods.

string

groupRegex

Regular expression to specify which consumer groups to collect. Default value is .*.

string

topicRegex

Regular expression to specify which topics to collect. Default value is .*.

string

resources

CPU and memory resources to reserve. For more information, see the external documentation for core/v1 resourcerequirements.

ResourceRequirements

logging

Only log messages with the given severity or above. Valid levels: [debug, info, warn, error, fatal]. Default log level is info.

string

enableSaramaLogging

Enable Sarama logging. Sarama is a Go client library used by the Kafka Exporter.

boolean

template

Customization of deployment templates and pods.

KafkaExporterTemplate

livenessProbe

Pod liveness check.

Probe

readinessProbe

Pod readiness check.

Probe
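
The following is a minimal sketch of a Kafka Exporter configuration, assuming it is set through the kafkaExporter field of the Kafka resource; the regular expressions and log level shown are illustrative.

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  # ...
  kafkaExporter:
    groupRegex: ".*"
    topicRegex: ".*"
    logging: debug
    enableSaramaLogging: true
  # ...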

13.2.75. KafkaExporterTemplate schema reference

Used in: KafkaExporterSpec

PropertyDescription

deployment

Template for Kafka Exporter Deployment.

ResourceTemplate

pod

Template for Kafka Exporter Pods.

PodTemplate

service

Template for Kafka Exporter Service.

ResourceTemplate

container

Template for the Kafka Exporter container.

ContainerTemplate

13.2.76. KafkaStatus schema reference

Used in: Kafka

PropertyDescription

conditions

List of status conditions.

Condition array

observedGeneration

The generation of the CRD that was last reconciled by the operator.

integer

listeners

Addresses of the internal and external listeners.

ListenerStatus array

clusterId

Kafka cluster ID.

string

13.2.77. Condition schema reference

Used in: KafkaBridgeStatus, KafkaConnectorStatus, KafkaConnectS2IStatus, KafkaConnectStatus, KafkaMirrorMaker2Status, KafkaMirrorMakerStatus, KafkaRebalanceStatus, KafkaStatus, KafkaTopicStatus, KafkaUserStatus

PropertyDescription

type

The unique identifier of a condition, used to distinguish between other conditions in the resource.

string

status

The status of the condition, either True, False or Unknown.

string

lastTransitionTime

Last time the condition of a type changed from one status to another. The required format is 'yyyy-MM-ddTHH:mm:ssZ', in the UTC time zone.

string

reason

The reason for the condition’s last transition (a single word in CamelCase).

string

message

Human-readable message indicating details about the condition’s last transition.

string

13.2.78. ListenerStatus schema reference

Used in: KafkaStatus

PropertyDescription

type

The type of the listener. Can be one of the following three types: plain, tls, and external.

string

addresses

A list of the addresses for this listener.

ListenerAddress array

bootstrapServers

A comma-separated list of host:port pairs for connecting to the Kafka cluster using this listener.

string

certificates

A list of TLS certificates which can be used to verify the identity of the server when connecting to the given listener. Set only for tls and external listeners.

string array

13.2.79. ListenerAddress schema reference

Used in: ListenerStatus

PropertyDescription

host

The DNS name or IP address of the Kafka bootstrap service.

string

port

The port of the Kafka bootstrap service.

integer
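
Together, these schemas describe the status reported on a Kafka resource. The following is a minimal sketch of such a status; all names, timestamps, and addresses are illustrative.

status:
  conditions:
    - type: Ready
      status: "True"
      lastTransitionTime: "2021-03-23T10:00:00Z"
  observedGeneration: 2
  clusterId: CLUSTER-ID
  listeners:
    - type: plain
      addresses:
        - host: my-cluster-kafka-bootstrap.my-namespace.svc
          port: 9092
      bootstrapServers: my-cluster-kafka-bootstrap.my-namespace.svc:9092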

13.2.80. KafkaConnect schema reference

PropertyDescription

spec

The specification of the Kafka Connect cluster.

KafkaConnectSpec

status

The status of the Kafka Connect cluster.

KafkaConnectStatus

13.2.81. KafkaConnectSpec schema reference

Used in: KafkaConnect

Full list of KafkaConnectSpec schema properties

Configures a Kafka Connect cluster.

13.2.81.1. config

Use the config properties to configure Kafka options as keys.

Standard Apache Kafka Connect configuration may be provided, restricted to those properties not managed directly by AMQ Streams.

Configuration options that cannot be configured relate to:

  • Kafka cluster bootstrap address
  • Security (Encryption, Authentication, and Authorization)
  • Listener / REST interface configuration
  • Plugin path configuration

The values can be one of the following JSON types:

  • String
  • Number
  • Boolean

You can specify and configure the options listed in the Apache Kafka documentation with the exception of those options that are managed directly by AMQ Streams. Specifically, configuration options with keys equal to or starting with one of the following strings are forbidden:

  • ssl.
  • sasl.
  • security.
  • listeners
  • plugin.path
  • rest.
  • bootstrap.servers

When a forbidden option is present in the config property, it is ignored and a warning message is printed to the Cluster Operator log file. All other options are passed to Kafka Connect.

Important

The Cluster Operator does not validate keys or values in the config object provided. When an invalid configuration is provided, the Kafka Connect cluster might not start or might become unstable. In this circumstance, fix the configuration in the KafkaConnect.spec.config or KafkaConnectS2I.spec.config object so that the Cluster Operator can roll out the new configuration to all Kafka Connect nodes.

Certain options have default values:

  • group.id with default value connect-cluster
  • offset.storage.topic with default value connect-cluster-offsets
  • config.storage.topic with default value connect-cluster-configs
  • status.storage.topic with default value connect-cluster-status
  • key.converter with default value org.apache.kafka.connect.json.JsonConverter
  • value.converter with default value org.apache.kafka.connect.json.JsonConverter

These options are automatically configured if they are not present in the KafkaConnect.spec.config or KafkaConnectS2I.spec.config properties.

There are exceptions to the forbidden options. You can use three allowed ssl configuration options for client connection using a specific cipher suite for a TLS version. A cipher suite combines algorithms for secure connection and data transfer. You can also configure the ssl.endpoint.identification.algorithm property to enable or disable hostname verification.

Example Kafka Connect configuration

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect
spec:
  # ...
  config:
    group.id: my-connect-cluster
    offset.storage.topic: my-connect-cluster-offsets
    config.storage.topic: my-connect-cluster-configs
    status.storage.topic: my-connect-cluster-status
    key.converter: org.apache.kafka.connect.json.JsonConverter
    value.converter: org.apache.kafka.connect.json.JsonConverter
    key.converter.schemas.enable: true
    value.converter.schemas.enable: true
    config.storage.replication.factor: 3
    offset.storage.replication.factor: 3
    status.storage.replication.factor: 3
    ssl.cipher.suites: "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"
    ssl.enabled.protocols: "TLSv1.2"
    ssl.protocol: "TLSv1.2"
    ssl.endpoint.identification.algorithm: HTTPS
  # ...

For client connection using a specific cipher suite for a TLS version, you can configure allowed ssl properties. You can also configure the ssl.endpoint.identification.algorithm property to enable or disable hostname verification.

13.2.81.2. logging

Kafka Connect (and Kafka Connect with Source2Image support) has its own configurable loggers:

  • connect.root.logger.level
  • log4j.logger.org.reflections

Further loggers are added depending on the Kafka Connect plugins running.

Use a curl request to get a complete list of Kafka Connect loggers running from any Kafka broker pod:

curl -s http://<connect-cluster-name>-connect-api:8083/admin/loggers/

Kafka Connect uses the Apache log4j logger implementation.

Use the logging property to configure loggers and logger levels.

You can set the log levels by specifying the logger and level directly (inline) or use a custom (external) ConfigMap. If a ConfigMap is used, you set the logging.valueFrom.configMapKeyRef.name property to the name of the ConfigMap containing the external logging configuration. Inside the ConfigMap, the logging configuration is described using log4j.properties. Both the logging.valueFrom.configMapKeyRef.name and logging.valueFrom.configMapKeyRef.key properties are mandatory. A ConfigMap using the exact logging configuration specified is created with the custom resource when the Cluster Operator is running, then recreated after each reconciliation. If you do not specify a custom ConfigMap, default logging settings are used. If a specific logger value is not set, upper-level logger settings are inherited for that logger. For more information about log levels, see Apache logging services.

Here we see examples of inline and external logging.

Inline logging

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
spec:
  # ...
  logging:
    type: inline
    loggers:
      connect.root.logger.level: "INFO"
  # ...

External logging

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
spec:
  # ...
  logging:
    type: external
    valueFrom:
      configMapKeyRef:
        name: customConfigMap
        key: connect-logging.log4j
  # ...

Any available loggers that are not configured have their level set to OFF.

If Kafka Connect was deployed using the Cluster Operator, changes to Kafka Connect logging levels are applied dynamically.

If you use external logging, a rolling update is triggered when logging appenders are changed.

Garbage collector (GC)

Garbage collector logging can also be enabled (or disabled) using the jvmOptions property.

13.2.81.3. KafkaConnectSpec schema properties

PropertyDescription

version

The Kafka Connect version. Defaults to 2.7.0. Consult the user documentation to understand the process required to upgrade or downgrade the version.

string

replicas

The number of pods in the Kafka Connect group.

integer

image

The docker image for the pods.

string

bootstrapServers

Bootstrap servers to connect to. This should be given as a comma-separated list of <hostname>:<port> pairs.

string

tls

TLS configuration.

KafkaConnectTls

authentication

Authentication configuration for Kafka Connect. The type depends on the value of the authentication.type property within the given object, which must be one of [tls, scram-sha-512, plain, oauth].

KafkaClientAuthenticationTls, KafkaClientAuthenticationScramSha512, KafkaClientAuthenticationPlain, KafkaClientAuthenticationOAuth

config

The Kafka Connect configuration. Properties with the following prefixes cannot be set: ssl., sasl., security., listeners, plugin.path, rest., bootstrap.servers, consumer.interceptor.classes, producer.interceptor.classes (with the exception of: ssl.endpoint.identification.algorithm, ssl.cipher.suites, ssl.protocol, ssl.enabled.protocols).

map

resources

The maximum limits for CPU and memory resources and the requested initial resources. For more information, see the external documentation for core/v1 resourcerequirements.

ResourceRequirements

livenessProbe

Pod liveness checking.

Probe

readinessProbe

Pod readiness checking.

Probe

jvmOptions

JVM Options for pods.

JvmOptions

jmxOptions

JMX Options.

KafkaJmxOptions

affinity

The affinity property has been deprecated, and should now be configured using spec.template.pod.affinity. The property affinity is removed in API version v1beta2. The pod’s affinity rules. For more information, see the external documentation for core/v1 affinity.

Affinity

tolerations

The tolerations property has been deprecated, and should now be configured using spec.template.pod.tolerations. The property tolerations is removed in API version v1beta2. The pod’s tolerations. For more information, see the external documentation for core/v1 toleration.

Toleration array

logging

Logging configuration for Kafka Connect. The type depends on the value of the logging.type property within the given object, which must be one of [inline, external].

InlineLogging, ExternalLogging

metrics

The metrics property has been deprecated, and should now be configured using spec.metricsConfig. The property metrics is removed in API version v1beta2. The Prometheus JMX Exporter configuration. See https://github.com/prometheus/jmx_exporter for details of the structure of this configuration.

map

tracing

The configuration of tracing in Kafka Connect. The type depends on the value of the tracing.type property within the given object, which must be one of [jaeger].

JaegerTracing

template

Template for Kafka Connect and Kafka Connect S2I resources. The template allows users to specify how the Deployment, Pods and Service are generated.

KafkaConnectTemplate

externalConfiguration

Pass data from Secrets or ConfigMaps to the Kafka Connect pods and use them to configure connectors.

ExternalConfiguration

build

Configures how the Connect container image should be built. Optional.

Build

clientRackInitImage

The image of the init container used for initializing the client.rack.

string

metricsConfig

Metrics configuration. The type depends on the value of the metricsConfig.type property within the given object, which must be one of [jmxPrometheusExporter].

JmxPrometheusExporterMetrics

rack

Configuration of the node label which will be used as the client.rack consumer configuration.

Rack

13.2.82. KafkaConnectTls schema reference

Used in: KafkaConnectS2ISpec, KafkaConnectSpec

Full list of KafkaConnectTls schema properties

Configures TLS trusted certificates for connecting Kafka Connect to the cluster.

13.2.82.1. trustedCertificates

Provide a list of secrets using the trustedCertificates property.

13.2.82.2. KafkaConnectTls schema properties

PropertyDescription

trustedCertificates

Trusted certificates for TLS connection.

CertSecretSource array
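
The following is a minimal sketch of a Kafka Connect TLS configuration, assuming the CA certificate for the target cluster is stored under ca.crt in a Secret named my-cluster-cluster-ca-cert; both names are illustrative.

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect
spec:
  # ...
  tls:
    trustedCertificates:
      - secretName: my-cluster-cluster-ca-cert
        certificate: ca.crt
  # ...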

13.2.83. KafkaClientAuthenticationTls schema reference

Used in: KafkaBridgeSpec, KafkaConnectS2ISpec, KafkaConnectSpec, KafkaMirrorMaker2ClusterSpec, KafkaMirrorMakerConsumerSpec, KafkaMirrorMakerProducerSpec

Full list of KafkaClientAuthenticationTls schema properties

To configure TLS client authentication, set the type property to the value tls. TLS client authentication uses a TLS certificate to authenticate.

13.2.83.1. certificateAndKey

The certificate is specified in the certificateAndKey property and is always loaded from an OpenShift secret. In the secret, the certificate must be stored in X509 format under two different keys: public and private.

You can use the secrets created by the User Operator, or you can create your own TLS certificate file, with the keys used for authentication, then create a Secret from the file:

oc create secret generic MY-SECRET \
--from-file=MY-PUBLIC-TLS-CERTIFICATE-FILE.crt \
--from-file=MY-PRIVATE.key

Note

TLS client authentication can only be used with TLS connections.

Example TLS client authentication configuration

authentication:
  type: tls
  certificateAndKey:
    secretName: my-secret
    certificate: my-public-tls-certificate-file.crt
    key: my-private.key

13.2.83.2. KafkaClientAuthenticationTls schema properties

The type property is a discriminator that distinguishes use of the KafkaClientAuthenticationTls type from KafkaClientAuthenticationScramSha512, KafkaClientAuthenticationPlain, KafkaClientAuthenticationOAuth. It must have the value tls for the type KafkaClientAuthenticationTls.

PropertyDescription

certificateAndKey

Reference to the Secret which holds the certificate and private key pair.

CertAndKeySecretSource

type

Must be tls.

string

13.2.84. KafkaClientAuthenticationScramSha512 schema reference

Used in: KafkaBridgeSpec, KafkaConnectS2ISpec, KafkaConnectSpec, KafkaMirrorMaker2ClusterSpec, KafkaMirrorMakerConsumerSpec, KafkaMirrorMakerProducerSpec

Full list of KafkaClientAuthenticationScramSha512 schema properties

To configure SASL-based SCRAM-SHA-512 authentication, set the type property to scram-sha-512. The SCRAM-SHA-512 authentication mechanism requires a username and password.

13.2.84.1. username

Specify the username in the username property.

13.2.84.2. passwordSecret

In the passwordSecret property, specify a link to a Secret containing the password.

You can use the secrets created by the User Operator.

If required, you can create a text file that contains the password, in cleartext, to use for authentication:

echo -n PASSWORD > MY-PASSWORD.txt

You can then create a Secret from the text file, setting your own field name (key) for the password:

oc create secret generic MY-CONNECT-SECRET-NAME --from-file=MY-PASSWORD-FIELD-NAME=./MY-PASSWORD.txt

Example Secret for SCRAM-SHA-512 client authentication for Kafka Connect

apiVersion: v1
kind: Secret
metadata:
  name: my-connect-secret-name
type: Opaque
data:
  my-connect-password-field: LFTIyFRFlMmU2N2Tm

The secretName property contains the name of the Secret, and the password property contains the name of the key under which the password is stored inside the Secret.

Important

Do not specify the actual password in the password property.

Example SASL-based SCRAM-SHA-512 client authentication configuration for Kafka Connect

authentication:
  type: scram-sha-512
  username: my-connect-username
  passwordSecret:
    secretName: my-connect-secret-name
    password: my-connect-password-field

13.2.84.3. KafkaClientAuthenticationScramSha512 schema properties

The type property is a discriminator that distinguishes use of the KafkaClientAuthenticationScramSha512 type from KafkaClientAuthenticationTls, KafkaClientAuthenticationPlain, KafkaClientAuthenticationOAuth. It must have the value scram-sha-512 for the type KafkaClientAuthenticationScramSha512.

PropertyDescription

passwordSecret

Reference to the Secret which holds the password.

PasswordSecretSource

type

Must be scram-sha-512.

string

username

Username used for the authentication.

string

13.2.85. PasswordSecretSource schema reference

Used in: KafkaClientAuthenticationPlain, KafkaClientAuthenticationScramSha512

PropertyDescription

password

The name of the key in the Secret under which the password is stored.

string

secretName

The name of the Secret containing the password.

string

13.2.86. KafkaClientAuthenticationPlain schema reference

Used in: KafkaBridgeSpec, KafkaConnectS2ISpec, KafkaConnectSpec, KafkaMirrorMaker2ClusterSpec, KafkaMirrorMakerConsumerSpec, KafkaMirrorMakerProducerSpec

Full list of KafkaClientAuthenticationPlain schema properties

To configure SASL-based PLAIN authentication, set the type property to plain. The SASL PLAIN authentication mechanism requires a username and password.

Warning

The SASL PLAIN mechanism will transfer the username and password across the network in cleartext. Only use SASL PLAIN authentication if TLS encryption is enabled.

13.2.86.1. username

Specify the username in the username property.

13.2.86.2. passwordSecret

In the passwordSecret property, specify a link to a Secret containing the password.

You can use the secrets created by the User Operator.

If required, create a text file that contains the password, in cleartext, to use for authentication:

echo -n PASSWORD > MY-PASSWORD.txt

You can then create a Secret from the text file, setting your own field name (key) for the password:

oc create secret generic MY-CONNECT-SECRET-NAME --from-file=MY-PASSWORD-FIELD-NAME=./MY-PASSWORD.txt

Example Secret for PLAIN client authentication for Kafka Connect

apiVersion: v1
kind: Secret
metadata:
  name: my-connect-secret-name
type: Opaque
data:
  my-password-field-name: LFTIyFRFlMmU2N2Tm

The secretName property contains the name of the Secret and the password property contains the name of the key under which the password is stored inside the Secret.

Important

Do not specify the actual password in the password property.

An example SASL based PLAIN client authentication configuration

authentication:
  type: plain
  username: my-connect-username
  passwordSecret:
    secretName: my-connect-secret-name
    password: my-password-field-name

13.2.86.3. KafkaClientAuthenticationPlain schema properties

The type property is a discriminator that distinguishes use of the KafkaClientAuthenticationPlain type from KafkaClientAuthenticationTls, KafkaClientAuthenticationScramSha512, KafkaClientAuthenticationOAuth. It must have the value plain for the type KafkaClientAuthenticationPlain.

PropertyDescription

passwordSecret

Reference to the Secret which holds the password.

PasswordSecretSource

type

Must be plain.

string

username

Username used for the authentication.

string

13.2.87. KafkaClientAuthenticationOAuth schema reference

Used in: KafkaBridgeSpec, KafkaConnectS2ISpec, KafkaConnectSpec, KafkaMirrorMaker2ClusterSpec, KafkaMirrorMakerConsumerSpec, KafkaMirrorMakerProducerSpec

Full list of KafkaClientAuthenticationOAuth schema properties

To configure OAuth client authentication, set the type property to oauth.

OAuth authentication can be configured using one of the following options:

  • Client ID and secret
  • Client ID and refresh token
  • Access token
  • TLS

Client ID and secret

You can configure the address of your authorization server in the tokenEndpointUri property together with the client ID and client secret used in authentication. The OAuth client will connect to the OAuth server, authenticate using the client ID and secret and get an access token which it will use to authenticate with the Kafka broker. In the clientSecret property, specify a link to a Secret containing the client secret.

An example of OAuth client authentication using client ID and client secret

authentication:
  type: oauth
  tokenEndpointUri: https://sso.myproject.svc:8443/auth/realms/internal/protocol/openid-connect/token
  clientId: my-client-id
  clientSecret:
    secretName: my-client-oauth-secret
    key: client-secret

Client ID and refresh token

You can configure the address of your OAuth server in the tokenEndpointUri property together with the OAuth client ID and refresh token. The OAuth client will connect to the OAuth server, authenticate using the client ID and refresh token and get an access token which it will use to authenticate with the Kafka broker. In the refreshToken property, specify a link to a Secret containing the refresh token.

An example of OAuth client authentication using client ID and refresh token

authentication:
  type: oauth
  tokenEndpointUri: https://sso.myproject.svc:8443/auth/realms/internal/protocol/openid-connect/token
  clientId: my-client-id
  refreshToken:
    secretName: my-refresh-token-secret
    key: refresh-token

Access token

You can configure the access token used for authentication with the Kafka broker directly. In this case, you do not specify the tokenEndpointUri. In the accessToken property, specify a link to a Secret containing the access token.

An example of OAuth client authentication using only an access token

authentication:
  type: oauth
  accessToken:
    secretName: my-access-token-secret
    key: access-token

TLS

Accessing the OAuth server using the HTTPS protocol does not require any additional configuration as long as the TLS certificates used by it are signed by a trusted certification authority and its hostname is listed in the certificate.

If your OAuth server is using certificates which are self-signed or are signed by a certification authority which is not trusted, you can configure a list of trusted certificates in the custom resource. The tlsTrustedCertificates property contains a list of secrets with key names under which the certificates are stored. The certificates must be stored in X509 format.

An example of TLS certificates provided

authentication:
  type: oauth
  tokenEndpointUri: https://sso.myproject.svc:8443/auth/realms/internal/protocol/openid-connect/token
  clientId: my-client-id
  refreshToken:
    secretName: my-refresh-token-secret
    key: refresh-token
  tlsTrustedCertificates:
    - secretName: oauth-server-ca
      certificate: tls.crt

The OAuth client will by default verify that the hostname of your OAuth server matches either the certificate subject or one of the alternative DNS names. If hostname verification is not required, you can disable it.

An example of disabled TLS hostname verification

authentication:
  type: oauth
  tokenEndpointUri: https://sso.myproject.svc:8443/auth/realms/internal/protocol/openid-connect/token
  clientId: my-client-id
  refreshToken:
    secretName: my-refresh-token-secret
    key: refresh-token
  disableTlsHostnameVerification: true

13.2.87.1. KafkaClientAuthenticationOAuth schema properties

The type property is a discriminator that distinguishes use of the KafkaClientAuthenticationOAuth type from KafkaClientAuthenticationTls, KafkaClientAuthenticationScramSha512, KafkaClientAuthenticationPlain. It must have the value oauth for the type KafkaClientAuthenticationOAuth.

PropertyDescription

accessToken

Link to OpenShift Secret containing the access token which was obtained from the authorization server.

GenericSecretSource

accessTokenIsJwt

Configure whether the access token should be treated as JWT. This should be set to false if the authorization server returns opaque tokens. Defaults to true.

boolean

clientId

OAuth Client ID which the Kafka client can use to authenticate against the OAuth server and use the token endpoint URI.

string

clientSecret

Link to OpenShift Secret containing the OAuth client secret which the Kafka client can use to authenticate against the OAuth server and use the token endpoint URI.

GenericSecretSource

disableTlsHostnameVerification

Enable or disable TLS hostname verification. Default value is false.

boolean

maxTokenExpirySeconds

Set or limit time-to-live of the access tokens to the specified number of seconds. This should be set if the authorization server returns opaque tokens.

integer

refreshToken

Link to OpenShift Secret containing the refresh token which can be used to obtain access token from the authorization server.

GenericSecretSource

scope

OAuth scope to use when authenticating against the authorization server. Some authorization servers require this to be set. The possible values depend on how the authorization server is configured. By default, scope is not specified when doing the token endpoint request.

string

tlsTrustedCertificates

Trusted certificates for TLS connection to the OAuth server.

CertSecretSource array

tokenEndpointUri

Authorization server token endpoint URI.

string

type

Must be oauth.

string

13.2.88. JaegerTracing schema reference

Used in: KafkaBridgeSpec, KafkaConnectS2ISpec, KafkaConnectSpec, KafkaMirrorMaker2Spec, KafkaMirrorMakerSpec

The type property is a discriminator that distinguishes use of the JaegerTracing type from other subtypes which may be added in the future. It must have the value jaeger for the type JaegerTracing.

PropertyDescription

type

Must be jaeger.

string
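
The following is a minimal sketch of enabling Jaeger tracing for one of the supported resources, in this case Kafka Connect, using the tracing property documented earlier in this reference.

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect
spec:
  # ...
  tracing:
    type: jaeger
  # ...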

13.2.89. KafkaConnectTemplate schema reference

Used in: KafkaConnectS2ISpec, KafkaConnectSpec, KafkaMirrorMaker2Spec

PropertyDescription

deployment

Template for Kafka Connect Deployment.

DeploymentTemplate

pod

Template for Kafka Connect Pods.

PodTemplate

apiService

Template for Kafka Connect API Service.

ResourceTemplate

buildConfig

Template for the Kafka Connect BuildConfig used to build new container images. The BuildConfig is used only on OpenShift.

ResourceTemplate

buildContainer

Template for the Kafka Connect Build container. The build container is used only on OpenShift.

ContainerTemplate

buildPod

Template for Kafka Connect Build Pods. The build pod is used only on OpenShift.

PodTemplate

clusterRoleBinding

Template for the Kafka Connect ClusterRoleBinding.

ResourceTemplate

connectContainer

Template for the Kafka Connect container.

ContainerTemplate

initContainer

Template for the Kafka init container.

ContainerTemplate

podDisruptionBudget

Template for Kafka Connect PodDisruptionBudget.

PodDisruptionBudgetTemplate

13.2.90. DeploymentTemplate schema reference

Used in: KafkaBridgeTemplate, KafkaConnectTemplate, KafkaMirrorMakerTemplate

PropertyDescription

metadata

Metadata applied to the resource.

MetadataTemplate

deploymentStrategy

DeploymentStrategy which will be used for this Deployment. Valid values are RollingUpdate and Recreate. Defaults to RollingUpdate.

string (one of [RollingUpdate, Recreate])
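
The following is a minimal sketch showing the deploymentStrategy set on a Kafka Connect Deployment template.

# ...
template:
  deployment:
    deploymentStrategy: Recreate
# ...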

13.2.91. ExternalConfiguration schema reference

Used in: KafkaConnectS2ISpec, KafkaConnectSpec, KafkaMirrorMaker2Spec

Full list of ExternalConfiguration schema properties

Configures external storage properties that define configuration options for Kafka Connect connectors.

You can mount ConfigMaps or Secrets into a Kafka Connect pod as environment variables or volumes. Volumes and environment variables are configured in the externalConfiguration property in KafkaConnect.spec and KafkaConnectS2I.spec.

When applied, the environment variables and volumes are available for use when developing your connectors.

13.2.91.1. env

The env property is used to specify one or more environment variables. These variables can contain a value from either a ConfigMap or a Secret.

Example Secret containing values for environment variables

apiVersion: v1
kind: Secret
metadata:
  name: aws-creds
type: Opaque
data:
  awsAccessKey: QUtJQVhYWFhYWFhYWFhYWFg=
  awsSecretAccessKey: Ylhsd1lYTnpkMjl5WkE=

Note

The names of user-defined environment variables cannot start with KAFKA_ or STRIMZI_.

To mount a value from a Secret to an environment variable, use the valueFrom property and the secretKeyRef.

Example environment variables set to values from a Secret

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect
spec:
  # ...
  externalConfiguration:
    env:
      - name: AWS_ACCESS_KEY_ID
        valueFrom:
          secretKeyRef:
            name: aws-creds
            key: awsAccessKey
      - name: AWS_SECRET_ACCESS_KEY
        valueFrom:
          secretKeyRef:
            name: aws-creds
            key: awsSecretAccessKey

A common use case for mounting Secrets to environment variables is when your connector needs to communicate with Amazon AWS and needs to read the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables with credentials.

To mount a value from a ConfigMap to an environment variable, use configMapKeyRef in the valueFrom property as shown in the following example.

Example environment variables set to values from a ConfigMap

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect
spec:
  # ...
  externalConfiguration:
    env:
      - name: MY_ENVIRONMENT_VARIABLE
        valueFrom:
          configMapKeyRef:
            name: my-config-map
            key: my-key

13.2.91.2. volumes

You can also mount ConfigMaps or Secrets to a Kafka Connect pod as volumes.

Using volumes instead of environment variables is useful in the following scenarios:

  • Mounting truststores or keystores with TLS certificates
  • Mounting a properties file that is used to configure Kafka Connect connectors

Example Secret with properties

apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
stringData:
  connector.properties: |- 1
    dbUsername: my-user 2
    dbPassword: my-password

1
The connector configuration in properties file format.
2
Database username and password properties used in the configuration.

In this example, a Secret named mysecret is mounted to a volume named connector-config. In the config property, a configuration provider (FileConfigProvider) is specified, which will load configuration values from external sources. The Kafka FileConfigProvider is given the alias file, and will read and extract database username and password property values from the file to use in the connector configuration.

Example external volumes set to values from a Secret

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect
spec:
  # ...
  config:
    config.providers: file 1
    config.providers.file.class: org.apache.kafka.common.config.provider.FileConfigProvider 2
  #...
  externalConfiguration:
    volumes:
      - name: connector-config 3
        secret:
          secretName: mysecret 4

1
The alias for the configuration provider, which is used to define other configuration parameters. Use a comma-separated list if you want to add more than one provider.
2
The FileConfigProvider is the configuration provider that provides values from properties files. The parameter uses the alias from config.providers, taking the form config.providers.${alias}.class.
3
The name of the volume containing the Secret. Each volume must specify a name in the name property and a reference to ConfigMap or Secret.
4
The name of the Secret.

The volumes are mounted inside the Kafka Connect containers in the path /opt/kafka/external-configuration/<volume-name>. For example, the files from a volume named connector-config would appear in the directory /opt/kafka/external-configuration/connector-config.

The FileConfigProvider is used to read the values from the mounted properties files in connector configurations.
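The following is a minimal sketch of how a connector might reference those values, assuming a KafkaConnector resource and the mysecret Secret mounted in the connector-config volume from the example above; the database.user and database.password option names are illustrative and depend on the connector you are configuring:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: my-connector
  labels:
    strimzi.io/cluster: my-connect
spec:
  # ...
  config:
    # values resolved by FileConfigProvider (alias "file") from the mounted properties file
    database.user: "${file:/opt/kafka/external-configuration/connector-config/connector.properties:dbUsername}"
    database.password: "${file:/opt/kafka/external-configuration/connector-config/connector.properties:dbPassword}"
  # ...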

13.2.91.3. ExternalConfiguration schema properties

PropertyDescription

env

Allows passing data from a Secret or ConfigMap to the Kafka Connect pods as environment variables.

ExternalConfigurationEnv array

volumes

Allows passing data from a Secret or ConfigMap to the Kafka Connect pods as volumes.

ExternalConfigurationVolumeSource array

13.2.92. ExternalConfigurationEnv schema reference

Used in: ExternalConfiguration

PropertyDescription

name

Name of the environment variable which will be passed to the Kafka Connect pods. The name of the environment variable cannot start with KAFKA_ or STRIMZI_.

string

valueFrom

Value of the environment variable which will be passed to the Kafka Connect pods. It can be passed either as a reference to Secret or ConfigMap field. The field has to specify exactly one Secret or ConfigMap.

ExternalConfigurationEnvVarSource

13.2.93. ExternalConfigurationEnvVarSource schema reference

Used in: ExternalConfigurationEnv

PropertyDescription

configMapKeyRef

Reference to a key in a ConfigMap. For more information, see the external documentation for core/v1 configmapkeyselector.

ConfigMapKeySelector

secretKeyRef

Reference to a key in a Secret. For more information, see the external documentation for core/v1 secretkeyselector.

SecretKeySelector

13.2.94. ExternalConfigurationVolumeSource schema reference

Used in: ExternalConfiguration

PropertyDescription

configMap

Reference to a key in a ConfigMap. Exactly one Secret or ConfigMap has to be specified. For more information, see the external documentation for core/v1 configmapvolumesource.

ConfigMapVolumeSource

name

Name of the volume which will be added to the Kafka Connect pods.

string

secret

Reference to a key in a Secret. Exactly one Secret or ConfigMap has to be specified. For more information, see the external documentation for core/v1 secretvolumesource.

SecretVolumeSource

13.2.95. Build schema reference

Used in: KafkaConnectS2ISpec, KafkaConnectSpec

Full list of Build schema properties

Configures additional connectors for Kafka Connect deployments.

13.2.95.1. output

To build new container images with additional connector plugins, AMQ Streams requires a container registry where the images can be pushed to, stored, and pulled from. AMQ Streams does not run its own container registry, so a registry must be provided. AMQ Streams supports private container registries as well as public registries such as Quay or Docker Hub. The container registry is configured in the .spec.build.output section of the KafkaConnect custom resource. The output configuration, which is required, supports two types: docker and imagestream.

Using Docker registry

To use a Docker registry, you have to specify the type as docker, and the image field with the full name of the new container image. The full name must include:

  • The address of the registry
  • Port number (if listening on a non-standard port)
  • The tag of the new container image

Example valid container image names:

  • docker.io/my-org/my-image:my-tag
  • quay.io/my-org/my-image:my-tag
  • image-registry.image-registry.svc:5000/myproject/kafka-connect-build:latest

Each Kafka Connect deployment must use a separate image, which can mean different tags at the most basic level.

If the registry requires authentication, use the pushSecret property to set the name of the Secret with the registry credentials. For the Secret, use the kubernetes.io/dockerconfigjson type and a .dockerconfigjson file to contain the Docker credentials. For more information on pulling an image from a private registry, see Create a Secret based on existing Docker credentials.
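One way to create such a Secret is with the oc create secret docker-registry command; the registry address and credentials shown here are placeholders:

oc create secret docker-registry my-registry-credentials \
--docker-server=my-registry.io \
--docker-username=MY-USERNAME \
--docker-password=MY-PASSWORD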

Example output configuration

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect-cluster
spec:
  #...
  build:
    output:
      type: docker 1
      image: my-registry.io/my-org/my-connect-cluster:latest 2
      pushSecret: my-registry-credentials 3
  #...

1
(Required) Type of output used by AMQ Streams.
2
(Required) Full name of the image used, including the repository and tag.
3
(Optional) Name of the secret with the container registry credentials.

Using OpenShift ImageStream

Instead of Docker, you can use OpenShift ImageStream to store a new container image. The ImageStream has to be created manually before deploying Kafka Connect. To use ImageStream, set the type to imagestream, and use the image property to specify the name of the ImageStream and the tag used. For example, my-connect-image-stream:latest.
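The following is a minimal sketch of such an ImageStream, assuming the name my-connect-build used in the example below; create it in the namespace where Kafka Connect is deployed:

apiVersion: image.openshift.io/v1
kind: ImageStream
metadata:
  name: my-connect-build
spec:
  lookupPolicy:
    local: false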

Example output configuration

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect-cluster
spec:
  #...
  build:
    output:
      type: imagestream 1
      image: my-connect-build:latest 2
  #...

1
(Required) Type of output used by AMQ Streams.
2
(Required) Name of the ImageStream and tag.

13.2.95.2. plugins

Connector plugins are a set of files that define the implementation required to connect to certain types of external system. The connector plugins required for a container image must be configured using the .spec.build.plugins property of the KafkaConnect custom resource. Each connector plugin must have a name which is unique within the Kafka Connect deployment. Additionally, the plugin artifacts must be listed. These artifacts are downloaded by AMQ Streams, added to the new container image, and used in the Kafka Connect deployment. The connector plugin artifacts can also include additional components, such as (de)serializers. Each connector plugin is downloaded into a separate directory so that the different connectors and their dependencies are properly sandboxed. Each plugin must be configured with at least one artifact.

Example plugins configuration with two connector plugins

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect-cluster
spec:
  #...
  build:
    output:
      #...
    plugins: 1
      - name: debezium-postgres-connector
        artifacts:
          - type: tgz
            url: https://repo1.maven.org/maven2/io/debezium/debezium-connector-postgres/1.3.1.Final/debezium-connector-postgres-1.3.1.Final-plugin.tar.gz
            sha512sum: 962a12151bdf9a5a30627eebac739955a4fd95a08d373b86bdcea2b4d0c27dd6e1edd5cb548045e115e33a9e69b1b2a352bee24df035a0447cb820077af00c03
      - name: camel-telegram
        artifacts:
          - type: tgz
            url: https://repo.maven.apache.org/maven2/org/apache/camel/kafkaconnector/camel-telegram-kafka-connector/0.7.0/camel-telegram-kafka-connector-0.7.0-package.tar.gz
            sha512sum: a9b1ac63e3284bea7836d7d24d84208c49cdf5600070e6bd1535de654f6920b74ad950d51733e8020bf4187870699819f54ef5859c7846ee4081507f48873479
  #...

1
(Required) List of connector plugins and their artifacts.

AMQ Streams supports two types of artifacts:

  • JAR files, which are downloaded and used directly
  • TGZ archives, which are downloaded and unpacked

Important

AMQ Streams does not perform any security scanning of the downloaded artifacts. For security reasons, you should first verify the artifacts manually, and configure the checksum verification to make sure the same artifact is used in the automated build and in the Kafka Connect deployment.

Using JAR artifacts

A JAR artifact represents a resource that is downloaded and added to a container image. JAR artifacts are mainly used for downloading JAR files, but they can also be used to download other file types. To use a JAR artifact, set the type property to jar, and specify the download location using the url property.

Additionally, you can specify a SHA-512 checksum of the artifact. If specified, AMQ Streams will verify the checksum of the artifact while building the new container image.
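One way to generate the checksum locally, assuming you have already downloaded the artifact for manual verification, is to run sha512sum and copy the printed digest into the sha512sum property:

sha512sum my-jar.jar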

Example JAR artifact

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect-cluster
spec:
  #...
  build:
    output:
      #...
    plugins:
      - name: my-plugin
        artifacts:
          - type: jar 1
            url: https://my-domain.tld/my-jar.jar 2
            sha512sum: 589...ab4 3
          - type: jar
            url: https://my-domain.tld/my-jar2.jar
  #...

1
(Required) Type of artifact.
2
(Required) URL from which the artifact is downloaded.
3
(Optional) SHA-512 checksum to verify the artifact.

Using TGZ artifacts

TGZ artifacts are used to download TAR archives that have been compressed using Gzip compression. The TGZ artifact can contain the whole Kafka Connect connector, even when comprising multiple different files. The TGZ artifact is automatically downloaded and unpacked by AMQ Streams while building the new container image. To use TGZ artifacts, set the type property to tgz, and specify the download location using the url property.

Additionally, you can specify a SHA-512 checksum of the artifact. If specified, AMQ Streams will verify the checksum before unpacking it and building the new container image.

Example TGZ artifact

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect-cluster
spec:
  #...
  build:
    output:
      #...
    plugins:
      - name: my-plugin
        artifacts:
          - type: tgz 1
            url: https://my-domain.tld/my-connector-archive.tgz 2
            sha512sum: 158...jg10 3
  #...

1
(Required) Type of artifact.
2
(Required) URL from which the archive is downloaded.
3
(Optional) SHA-512 checksum to verify the artifact.

13.2.95.3. Build schema properties

PropertyDescription

output

Configures where the newly built image should be stored. Required. The type depends on the value of the output.type property within the given object, which must be one of [docker, imagestream].

DockerOutput, ImageStreamOutput

resources

CPU and memory resources to reserve for the build. For more information, see the external documentation for core/v1 resourcerequirements.

ResourceRequirements

plugins

List of connector plugins which should be added to the Kafka Connect deployment. Required.

Plugin array

13.2.96. DockerOutput schema reference

Used in: Build

The type property is a discriminator that distinguishes use of the DockerOutput type from ImageStreamOutput. It must have the value docker for the type DockerOutput.

PropertyDescription

image

The full name which should be used for tagging and pushing the newly built image. For example quay.io/my-organization/my-custom-connect:latest. Required.

string

pushSecret

Container Registry Secret with the credentials for pushing the newly built image.

string

additionalKanikoOptions

Configures additional options which will be passed to the Kaniko executor when building the new Connect image. Allowed options are: --customPlatform, --insecure, --insecure-pull, --insecure-registry, --log-format, --log-timestamp, --registry-mirror, --reproducible, --single-snapshot, --skip-tls-verify, --skip-tls-verify-pull, --skip-tls-verify-registry, --verbosity, --snapshotMode, --use-new-run. These options will be used only where the Kaniko executor is used to build the image; they will be ignored on OpenShift, where the image is built using OpenShift Builds. The options are described in the Kaniko GitHub repository. Changing this field does not trigger a new build of the Kafka Connect image.

string array

type

Must be docker.

string

13.2.97. ImageStreamOutput schema reference

Used in: Build

The type property is a discriminator that distinguishes use of the ImageStreamOutput type from DockerOutput. It must have the value imagestream for the type ImageStreamOutput.

PropertyDescription

image

The name and tag of the ImageStream where the newly built image will be pushed. For example my-custom-connect:latest. Required.

string

type

Must be imagestream.

string

13.2.98. Plugin schema reference

Used in: Build

PropertyDescription

name

The unique name of the connector plugin. It is used to generate the path where the connector artifacts are stored. The name has to be unique within the KafkaConnect resource and must follow the pattern ^[a-z][-_a-z0-9]*[a-z]$. Required.

string

artifacts

List of artifacts which belong to this connector plugin. Required.

JarArtifact, TgzArtifact, ZipArtifact array

13.2.99. JarArtifact schema reference

Used in: Plugin

PropertyDescription

url

URL of the artifact which will be downloaded. AMQ Streams does not do any security scanning of the downloaded artifacts. For security reasons, you should first verify the artifacts manually and configure the checksum verification to make sure the same artifact is used in the automated build. Required.

string

sha512sum

SHA512 checksum of the artifact. Optional. If specified, the checksum will be verified while building the new container. If not specified, the downloaded artifact will not be verified.

string

type

Must be jar.

string

13.2.100. TgzArtifact schema reference

Used in: Plugin

PropertyDescription

url

URL of the artifact which will be downloaded. AMQ Streams does not do any security scanning of the downloaded artifacts. For security reasons, you should first verify the artifacts manually and configure the checksum verification to make sure the same artifact is used in the automated build. Required.

string

sha512sum

SHA512 checksum of the artifact. Optional. If specified, the checksum will be verified while building the new container. If not specified, the downloaded artifact will not be verified.

string

type

Must be tgz.

string

13.2.101. ZipArtifact schema reference

Used in: Plugin

PropertyDescription

url

URL of the artifact which will be downloaded. AMQ Streams does not do any security scanning of the downloaded artifacts. For security reasons, you should first verify the artifacts manually and configure the checksum verification to make sure the same artifact is used in the automated build. Required.

string

sha512sum

SHA512 checksum of the artifact. Optional. If specified, the checksum will be verified while building the new container. If not specified, the downloaded artifact will not be verified.

string

type

Must be zip.

string

13.2.102. KafkaConnectStatus schema reference

Used in: KafkaConnect

PropertyDescription

conditions

List of status conditions.

Condition array

observedGeneration

The generation of the CRD that was last reconciled by the operator.

integer

url

The URL of the REST API endpoint for managing and monitoring Kafka Connect connectors.

string

connectorPlugins

The list of connector plugins available in this Kafka Connect deployment.

ConnectorPlugin array

labelSelector

Label selector for pods providing this resource.

string

replicas

The current number of pods being used to provide this resource.

integer

13.2.103. ConnectorPlugin schema reference

Used in: KafkaConnectS2IStatus, KafkaConnectStatus, KafkaMirrorMaker2Status

PropertyDescription

type

The type of the connector plugin. The available types are sink and source.

string

version

The version of the connector plugin.

string

class

The class of the connector plugin.

string

13.2.104. KafkaConnectS2I schema reference

The type KafkaConnectS2I has been deprecated. Please use Build instead.

PropertyDescription

spec

The specification of the Kafka Connect Source-to-Image (S2I) cluster.

KafkaConnectS2ISpec

status

The status of the Kafka Connect Source-to-Image (S2I) cluster.

KafkaConnectS2IStatus

13.2.105. KafkaConnectS2ISpec schema reference

Used in: KafkaConnectS2I

Full list of KafkaConnectS2ISpec schema properties

Configures a Kafka Connect cluster with Source-to-Image (S2I) support.

When extending Kafka Connect with connector plugins on OpenShift (only), you can use OpenShift builds and S2I to create a container image that is used by the Kafka Connect deployment.

The configuration options are similar to Kafka Connect configuration using the KafkaConnectSpec schema.

13.2.105.1. KafkaConnectS2ISpec schema properties

PropertyDescription

version

The Kafka Connect version. Defaults to 2.7.0. Consult the user documentation to understand the process required to upgrade or downgrade the version.

string

replicas

The number of pods in the Kafka Connect group.

integer

image

The docker image for the pods.

string

buildResources

CPU and memory resources to reserve. For more information, see the external documentation for core/v1 resourcerequirements.

ResourceRequirements

bootstrapServers

Bootstrap servers to connect to. This should be given as a comma-separated list of <hostname>:<port> pairs.

string

tls

TLS configuration.

KafkaConnectTls

authentication

Authentication configuration for Kafka Connect. The type depends on the value of the authentication.type property within the given object, which must be one of [tls, scram-sha-512, plain, oauth].

KafkaClientAuthenticationTls, KafkaClientAuthenticationScramSha512, KafkaClientAuthenticationPlain, KafkaClientAuthenticationOAuth

config

The Kafka Connect configuration. Properties with the following prefixes cannot be set: ssl., sasl., security., listeners, plugin.path, rest., bootstrap.servers, consumer.interceptor.classes, producer.interceptor.classes (with the exception of: ssl.endpoint.identification.algorithm, ssl.cipher.suites, ssl.protocol, ssl.enabled.protocols).

map

resources

The maximum limits for CPU and memory resources and the requested initial resources. For more information, see the external documentation for core/v1 resourcerequirements.

ResourceRequirements

livenessProbe

Pod liveness checking.

Probe

readinessProbe

Pod readiness checking.

Probe

jvmOptions

JVM Options for pods.

JvmOptions

jmxOptions

JMX Options.

KafkaJmxOptions

affinity

The affinity property has been deprecated, and should now be configured using spec.template.pod.affinity. The property affinity is removed in API version v1beta2. The pod’s affinity rules. For more information, see the external documentation for core/v1 affinity.

Affinity

tolerations

The tolerations property has been deprecated, and should now be configured using spec.template.pod.tolerations. The property tolerations is removed in API version v1beta2. The pod’s tolerations. For more information, see the external documentation for core/v1 toleration.

Toleration array

logging

Logging configuration for Kafka Connect. The type depends on the value of the logging.type property within the given object, which must be one of [inline, external].

InlineLogging, ExternalLogging

metrics

The metrics property has been deprecated, and should now be configured using spec.metricsConfig. The property metrics is removed in API version v1beta2. The Prometheus JMX Exporter configuration. See https://github.com/prometheus/jmx_exporter for details of the structure of this configuration.

map

tracing

The configuration of tracing in Kafka Connect. The type depends on the value of the tracing.type property within the given object, which must be one of [jaeger].

JaegerTracing

template

Template for Kafka Connect and Kafka Connect S2I resources. The template allows users to specify how the Deployment, Pods and Service are generated.

KafkaConnectTemplate

externalConfiguration

Pass data from Secrets or ConfigMaps to the Kafka Connect pods and use them to configure connectors.

ExternalConfiguration

build

Configures how the Connect container image should be built. Optional.

Build

clientRackInitImage

The image of the init container used for initializing the client.rack.

string

insecureSourceRepository

When true this configures the source repository with the 'Local' reference policy and an import policy that accepts insecure source tags.

boolean

metricsConfig

Metrics configuration. The type depends on the value of the metricsConfig.type property within the given object, which must be one of [jmxPrometheusExporter].

JmxPrometheusExporterMetrics

rack

Configuration of the node label which will be used as the client.rack consumer configuration.

Rack

13.2.106. KafkaConnectS2IStatus schema reference

Used in: KafkaConnectS2I

PropertyDescription

conditions

List of status conditions.

Condition array

observedGeneration

The generation of the CRD that was last reconciled by the operator.

integer

url

The URL of the REST API endpoint for managing and monitoring Kafka Connect connectors.

string

connectorPlugins

The list of connector plugins available in this Kafka Connect deployment.

ConnectorPlugin array

buildConfigName

The name of the build configuration.

string

labelSelector

Label selector for pods providing this resource.

string

replicas

The current number of pods being used to provide this resource.

integer

13.2.107. KafkaTopic schema reference

PropertyDescription

spec

The specification of the topic.

KafkaTopicSpec

status

The status of the topic.

KafkaTopicStatus

13.2.108. KafkaTopicSpec schema reference

Used in: KafkaTopic

PropertyDescription

partitions

The number of partitions the topic should have. This cannot be decreased after topic creation. It can be increased after topic creation, but it is important to understand the consequences that has, especially for topics with semantic partitioning.

integer

replicas

The number of replicas the topic should have.

integer

config

The topic configuration.

map

topicName

The name of the topic. When absent this will default to the metadata.name of the topic. It is recommended to not set this unless the topic name is not a valid OpenShift resource name.

string

13.2.109. KafkaTopicStatus schema reference

Used in: KafkaTopic

PropertyDescription

conditions

List of status conditions.

Condition array

observedGeneration

The generation of the CRD that was last reconciled by the operator.

integer

13.2.110. KafkaUser schema reference

PropertyDescription

spec

The specification of the user.

KafkaUserSpec

status

The status of the Kafka User.

KafkaUserStatus

13.2.111. KafkaUserSpec schema reference

Used in: KafkaUser

PropertyDescription

authentication

Authentication mechanism enabled for this Kafka user. The type depends on the value of the authentication.type property within the given object, which must be one of [tls, scram-sha-512].

KafkaUserTlsClientAuthentication, KafkaUserScramSha512ClientAuthentication

authorization

Authorization rules for this Kafka user. The type depends on the value of the authorization.type property within the given object, which must be one of [simple].

KafkaUserAuthorizationSimple

quotas

Quotas on requests to control the broker resources used by clients. Network bandwidth and request rate quotas can be enforced. Kafka documentation for Kafka User quotas can be found at http://kafka.apache.org/documentation/#design_quotas.

KafkaUserQuotas

template

Template to specify how Kafka User Secrets are generated.

KafkaUserTemplate

13.2.112. KafkaUserTlsClientAuthentication schema reference

Used in: KafkaUserSpec

The type property is a discriminator that distinguishes use of the KafkaUserTlsClientAuthentication type from KafkaUserScramSha512ClientAuthentication. It must have the value tls for the type KafkaUserTlsClientAuthentication.

PropertyDescription

type

Must be tls.

string

13.2.113. KafkaUserScramSha512ClientAuthentication schema reference

Used in: KafkaUserSpec

The type property is a discriminator that distinguishes use of the KafkaUserScramSha512ClientAuthentication type from KafkaUserTlsClientAuthentication. It must have the value scram-sha-512 for the type KafkaUserScramSha512ClientAuthentication.

PropertyDescription

type

Must be scram-sha-512.

string

13.2.114. KafkaUserAuthorizationSimple schema reference

Used in: KafkaUserSpec

The type property is a discriminator that distinguishes use of the KafkaUserAuthorizationSimple type from other subtypes which may be added in the future. It must have the value simple for the type KafkaUserAuthorizationSimple.

PropertyDescription

type

Must be simple.

string

acls

List of ACL rules which should be applied to this user.

AclRule array

13.2.115. AclRule schema reference

Used in: KafkaUserAuthorizationSimple

Full list of AclRule schema properties

Configures access control rule for a KafkaUser when brokers are using the AclAuthorizer.

Example KafkaUser configuration with authorization

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: my-user
  labels:
    strimzi.io/cluster: my-cluster
spec:
  # ...
  authorization:
    type: simple
    acls:
      - resource:
          type: topic
          name: my-topic
          patternType: literal
        operation: Read
      - resource:
          type: topic
          name: my-topic
          patternType: literal
        operation: Describe
      - resource:
          type: group
          name: my-group
          patternType: prefix
        operation: Read

13.2.115.1. resource

Use the resource property to specify the resource that the rule applies to.

Simple authorization supports four resource types, which are specified in the type property:

  • Topics (topic)
  • Consumer Groups (group)
  • Clusters (cluster)
  • Transactional IDs (transactionalId)

For Topic, Group, and Transactional ID resources you can specify the name of the resource the rule applies to in the name property.

Cluster type resources have no name.

A name is specified as a literal or a prefix using the patternType property.

  • Literal names are taken exactly as they are specified in the name field.
  • Prefix names use the value from the name as a prefix, and will apply the rule to all resources with names starting with the value.

13.2.115.2. type

The type of rule, which either allows or denies (deny is not currently supported) an operation.

The type field is optional. If type is unspecified, the ACL rule is treated as an allow rule.

13.2.115.3. operation

Specify an operation for the rule to allow or deny.

The following operations are supported:

  • Read
  • Write
  • Delete
  • Alter
  • Describe
  • All
  • IdempotentWrite
  • ClusterAction
  • Create
  • AlterConfigs
  • DescribeConfigs

Only certain operations work with each resource.

For more details about AclAuthorizer, ACLs and supported combinations of resources and operations, see Authorization and ACLs.

13.2.115.4. host

Use the host property to specify a remote host from which the rule is allowed or denied.

Use an asterisk (*) to allow or deny the operation from all hosts. The host field is optional. If host is unspecified, the * value is used by default.
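The following is a minimal sketch of a rule that combines the type and host properties, assuming a topic named my-topic; type: allow and host: "*" match the defaults described above:

spec:
  # ...
  authorization:
    type: simple
    acls:
      - resource:
          type: topic
          name: my-topic
          patternType: literal
        operation: Write
        type: allow
        host: "*"
  # ...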

13.2.115.5. AclRule schema properties

PropertyDescription

host

The host from which the action described in the ACL rule is allowed or denied.

string

operation

Operation which will be allowed or denied. Supported operations are: Read, Write, Create, Delete, Alter, Describe, ClusterAction, AlterConfigs, DescribeConfigs, IdempotentWrite and All.

string (one of [Read, Write, Delete, Alter, Describe, All, IdempotentWrite, ClusterAction, Create, AlterConfigs, DescribeConfigs])

resource

Indicates the resource for which the given ACL rule applies. The type depends on the value of the resource.type property within the given object, which must be one of [topic, group, cluster, transactionalId].

AclRuleTopicResource, AclRuleGroupResource, AclRuleClusterResource, AclRuleTransactionalIdResource

type

The type of the rule. Currently the only supported type is allow. ACL rules with type allow are used to allow the user to execute the specified operations. Default value is allow.

string (one of [allow, deny])

13.2.116. AclRuleTopicResource schema reference

Used in: AclRule

The type property is a discriminator that distinguishes use of the AclRuleTopicResource type from AclRuleGroupResource, AclRuleClusterResource, AclRuleTransactionalIdResource. It must have the value topic for the type AclRuleTopicResource.

PropertyDescription

type

Must be topic.

string

name

Name of the resource for which the given ACL rule applies. Can be combined with the patternType field to use a prefix pattern.

string

patternType

Describes the pattern used in the resource field. The supported types are literal and prefix. With literal pattern type, the resource field will be used as a definition of a full topic name. With prefix pattern type, the resource name will be used only as a prefix. Default value is literal.

string (one of [prefix, literal])

13.2.117. AclRuleGroupResource schema reference

Used in: AclRule

The type property is a discriminator that distinguishes use of the AclRuleGroupResource type from AclRuleTopicResource, AclRuleClusterResource, AclRuleTransactionalIdResource. It must have the value group for the type AclRuleGroupResource.

PropertyDescription

type

Must be group.

string

name

Name of the resource for which the given ACL rule applies. Can be combined with the patternType field to use a prefix pattern.

string

patternType

Describes the pattern used in the resource field. The supported types are literal and prefix. With literal pattern type, the resource field will be used as a definition of a full topic name. With prefix pattern type, the resource name will be used only as a prefix. Default value is literal.

string (one of [prefix, literal])

13.2.118. AclRuleClusterResource schema reference

Used in: AclRule

The type property is a discriminator that distinguishes use of the AclRuleClusterResource type from AclRuleTopicResource, AclRuleGroupResource, AclRuleTransactionalIdResource. It must have the value cluster for the type AclRuleClusterResource.

PropertyDescription

type

Must be cluster.

string

13.2.119. AclRuleTransactionalIdResource schema reference

Used in: AclRule

The type property is a discriminator that distinguishes use of the AclRuleTransactionalIdResource type from AclRuleTopicResource, AclRuleGroupResource, AclRuleClusterResource. It must have the value transactionalId for the type AclRuleTransactionalIdResource.

PropertyDescription

type

Must be transactionalId.

string

name

Name of the resource for which the given ACL rule applies. Can be combined with the patternType field to use a prefix pattern.

string

patternType

Describes the pattern used in the resource field. The supported types are literal and prefix. With literal pattern type, the resource field will be used as a definition of a full name. With prefix pattern type, the resource name will be used only as a prefix. Default value is literal.

string (one of [prefix, literal])

13.2.120. KafkaUserQuotas schema reference

Used in: KafkaUserSpec

Full list of KafkaUserQuotas schema properties

Kafka allows a user to set quotas to control the use of resources by clients.

13.2.120.1. quotas

Quotas are split into two categories:

  • Network usage quotas, which are defined as the byte rate threshold for each group of clients sharing a quota
  • CPU utilization quotas, which are defined as the percentage of time a client can utilize on request handler I/O threads and network threads of each broker within a quota window

Using quotas for Kafka clients might be useful in a number of situations. Consider a wrongly configured Kafka producer which is sending requests at too high a rate. Such misconfiguration can cause a denial of service to other clients, so the problematic client ought to be blocked. By using a network limiting quota, it is possible to prevent this situation from significantly impacting other clients.

AMQ Streams supports user-level quotas, but not client-level quotas.

Example Kafka user quota configuration

spec:
  quotas:
    producerByteRate: 1048576
    consumerByteRate: 2097152
    requestPercentage: 55

For more info about Kafka user quotas, refer to the Apache Kafka documentation.

13.2.120.2. KafkaUserQuotas schema properties

PropertyDescription

consumerByteRate

A quota on the maximum bytes per-second that each client group can fetch from a broker before the clients in the group are throttled. Defined on a per-broker basis.

integer

producerByteRate

A quota on the maximum bytes per-second that each client group can publish to a broker before the clients in the group are throttled. Defined on a per-broker basis.

integer

requestPercentage

A quota on the maximum CPU utilization of each client group as a percentage of network and I/O threads.

integer

13.2.121. KafkaUserTemplate schema reference

Used in: KafkaUserSpec

Full list of KafkaUserTemplate schema properties

Specify additional labels and annotations for the secret created by the User Operator.

An example showing the KafkaUserTemplate

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: my-user
  labels:
    strimzi.io/cluster: my-cluster
spec:
  authentication:
    type: tls
  template:
    secret:
      metadata:
        labels:
          label1: value1
        annotations:
          anno1: value1
  # ...

13.2.121.1. KafkaUserTemplate schema properties

PropertyDescription

secret

Template for KafkaUser resources. The template allows users to specify how the Secret with password or TLS certificates is generated.

ResourceTemplate

13.2.122. KafkaUserStatus schema reference

Used in: KafkaUser

PropertyDescription

conditions

List of status conditions.

Condition array

observedGeneration

The generation of the CRD that was last reconciled by the operator.

integer

username

Username.

string

secret

The name of Secret where the credentials are stored.

string

13.2.123. KafkaMirrorMaker schema reference

PropertyDescription

spec

The specification of Kafka MirrorMaker.

KafkaMirrorMakerSpec

status

The status of Kafka MirrorMaker.

KafkaMirrorMakerStatus

13.2.124. KafkaMirrorMakerSpec schema reference

Used in: KafkaMirrorMaker

Full list of KafkaMirrorMakerSpec schema properties

Configures Kafka MirrorMaker.

13.2.124.1. whitelist

Use the whitelist property to configure a list of topics that Kafka MirrorMaker mirrors from the source to the target Kafka cluster.

The property allows any regular expression from the simplest case with a single topic name to complex patterns. For example, you can mirror topics A and B using "A|B" or all topics using "*". You can also pass multiple regular expressions separated by commas to the Kafka MirrorMaker.

13.2.124.2. KafkaMirrorMakerConsumerSpec and KafkaMirrorMakerProducerSpec

Use the KafkaMirrorMakerConsumerSpec and KafkaMirrorMakerProducerSpec to configure source (consumer) and target (producer) clusters.

Kafka MirrorMaker always works together with two Kafka clusters (source and target). To establish a connection, the bootstrap servers for the source and the target Kafka clusters are specified as comma-separated lists of HOSTNAME:PORT pairs. Each comma-separated list contains one or more Kafka brokers or a Service pointing to Kafka brokers specified as a HOSTNAME:PORT pair.
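The following is a minimal sketch that combines the whitelist with the source and target bootstrap servers, assuming two clusters deployed by AMQ Streams named my-source-cluster and my-target-cluster:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaMirrorMaker
metadata:
  name: my-mirror-maker
spec:
  # ...
  whitelist: "my-topic|other-topic"
  consumer:
    bootstrapServers: my-source-cluster-kafka-bootstrap:9092
    groupId: my-source-group-id
  producer:
    bootstrapServers: my-target-cluster-kafka-bootstrap:9092
  # ...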

13.2.124.3. logging

Kafka MirrorMaker has its own configurable logger:

  • mirrormaker.root.logger

MirrorMaker uses the Apache log4j logger implementation.

Use the logging property to configure loggers and logger levels.

You can set the log levels by specifying the logger and level directly (inline) or use a custom (external) ConfigMap. If a ConfigMap is used, you set the logging.valueFrom.configMapKeyRef.name property to the name of the ConfigMap containing the external logging configuration. Inside the ConfigMap, the logging configuration is described using log4j.properties. Both the logging.valueFrom.configMapKeyRef.name and logging.valueFrom.configMapKeyRef.key properties are mandatory. A ConfigMap using the exact logging configuration specified is created with the custom resource when the Cluster Operator is running, then recreated after each reconciliation. If you do not specify a custom ConfigMap, default logging settings are used. If a specific logger value is not set, upper-level logger settings are inherited for that logger. For more information about log levels, see Apache logging services.

Here we see examples of inline and external logging.

Inline logging

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaMirrorMaker
spec:
  # ...
  logging:
    type: inline
    loggers:
      mirrormaker.root.logger: "INFO"
  # ...

External logging

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaMirrorMaker
spec:
  # ...
  logging:
    type: external
    valueFrom:
      configMapKeyRef:
        name: customConfigMap
        key: mirror-maker-log4j.properties
  # ...

Garbage collector (GC)

Garbage collector logging can also be enabled (or disabled) using the jvmOptions property.
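The following is a minimal sketch of enabling GC logging; it assumes the gcLoggingEnabled option of the JvmOptions schema:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaMirrorMaker
spec:
  # ...
  jvmOptions:
    gcLoggingEnabled: true
  # ...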

13.2.124.4. KafkaMirrorMakerSpec schema properties

PropertyDescription

version

The Kafka MirrorMaker version. Defaults to 2.7.0. Consult the documentation to understand the process required to upgrade or downgrade the version.

string

replicas

The number of pods in the Deployment.

integer

image

The docker image for the pods.

string

consumer

Configuration of source cluster.

KafkaMirrorMakerConsumerSpec

producer

Configuration of target cluster.

KafkaMirrorMakerProducerSpec

resources

CPU and memory resources to reserve. For more information, see the external documentation for core/v1 resourcerequirements.

ResourceRequirements

whitelist

List of topics which are included for mirroring. This option allows any regular expression using Java-style regular expressions. Mirroring two topics named A and B is achieved by using the whitelist 'A|B'. Or, as a special case, you can mirror all topics using the whitelist '*'. You can also specify multiple regular expressions separated by commas.

string

affinity

The affinity property has been deprecated, and should now be configured using spec.template.pod.affinity. The property affinity is removed in API version v1beta2. The pod’s affinity rules. For more information, see the external documentation for core/v1 affinity.

Affinity

tolerations

The tolerations property has been deprecated, and should now be configured using spec.template.pod.tolerations. The property tolerations is removed in API version v1beta2. The pod’s tolerations. For more information, see the external documentation for core/v1 toleration.

Toleration array

jvmOptions

JVM Options for pods.

JvmOptions

logging

Logging configuration for MirrorMaker. The type depends on the value of the logging.type property within the given object, which must be one of [inline, external].

InlineLogging, ExternalLogging

metrics

The metrics property has been deprecated, and should now be configured using spec.metricsConfig. The property metrics is removed in API version v1beta2. The Prometheus JMX Exporter configuration. See JMX Exporter documentation for details of the structure of this configuration.

map

metricsConfig

Metrics configuration. The type depends on the value of the metricsConfig.type property within the given object, which must be one of [jmxPrometheusExporter].

JmxPrometheusExporterMetrics

tracing

The configuration of tracing in Kafka MirrorMaker. The type depends on the value of the tracing.type property within the given object, which must be one of [jaeger].

JaegerTracing

template

Template to specify how Kafka MirrorMaker resources, Deployments and Pods, are generated.

KafkaMirrorMakerTemplate

livenessProbe

Pod liveness checking.

Probe

readinessProbe

Pod readiness checking.

Probe

13.2.125. KafkaMirrorMakerConsumerSpec schema reference

Used in: KafkaMirrorMakerSpec

Full list of KafkaMirrorMakerConsumerSpec schema properties

Configures a MirrorMaker consumer.

13.2.125.1. numStreams

Use the consumer.numStreams property to configure the number of streams for the consumer.

You can increase the throughput in mirroring topics by increasing the number of consumer threads. Consumer threads belong to the consumer group specified for Kafka MirrorMaker. Topic partitions are assigned across the consumer threads, which consume messages in parallel.

13.2.125.2. offsetCommitInterval

Use the consumer.offsetCommitInterval property to configure an offset auto-commit interval for the consumer.

You can specify the regular time interval at which an offset is committed after Kafka MirrorMaker has consumed data from the source Kafka cluster. The time interval is set in milliseconds, with a default value of 60,000.

13.2.125.3. config

Use the consumer.config properties to configure Kafka options for the consumer.

The config property contains the Kafka MirrorMaker consumer configuration options as keys, with values set in one of the following JSON types:

  • String
  • Number
  • Boolean

For client connection using a specific cipher suite for a TLS version, you can configure allowed ssl properties. You can also configure the ssl.endpoint.identification.algorithm property to enable or disable hostname verification.

Exceptions

You can specify and configure the options listed in the Apache Kafka configuration documentation for consumers.

However, there are exceptions for options automatically configured and managed directly by AMQ Streams related to:

  • Kafka cluster bootstrap address
  • Security (encryption, authentication, and authorization)
  • Consumer group identifier
  • Interceptors

Specifically, all configuration options with keys equal to or starting with one of the following strings are forbidden:

  • ssl.
  • sasl.
  • security.
  • bootstrap.servers
  • group.id
  • interceptor.classes

When a forbidden option is present in the config property, it is ignored and a warning message is printed to the Cluster Operator log file. All other options are passed to Kafka MirrorMaker.

Important

The Cluster Operator does not validate keys or values in the provided config object. When an invalid configuration is provided, the Kafka MirrorMaker might not start or might become unstable. In such cases, the configuration in the KafkaMirrorMaker.spec.consumer.config object should be fixed and the Cluster Operator will roll out the new configuration for Kafka MirrorMaker.

13.2.125.4. groupId

Use the consumer.groupId property to configure a consumer group identifier for the consumer.

Kafka MirrorMaker uses a Kafka consumer to consume messages, behaving like any other Kafka consumer client. Messages consumed from the source Kafka cluster are mirrored to a target Kafka cluster. A group identifier is required, as the consumer needs to be part of a consumer group for the assignment of partitions.
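The following is a minimal sketch that brings together the consumer properties described above; the groupId, numStreams, offsetCommitInterval, and config values are illustrative:

spec:
  # ...
  consumer:
    bootstrapServers: my-source-cluster-kafka-bootstrap:9092
    groupId: my-source-group-id
    numStreams: 2
    offsetCommitInterval: 120000
    config:
      max.poll.records: 100
      receive.buffer.bytes: 32768
  # ...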

13.2.125.5. KafkaMirrorMakerConsumerSpec schema properties

PropertyDescription

numStreams

Specifies the number of consumer stream threads to create.

integer

offsetCommitInterval

Specifies the offset auto-commit interval in ms. Default value is 60000.

integer

bootstrapServers

A list of host:port pairs for establishing the initial connection to the Kafka cluster.

string

groupId

A unique string that identifies the consumer group this consumer belongs to.

string

authentication

Authentication configuration for connecting to the cluster. The type depends on the value of the authentication.type property within the given object, which must be one of [tls, scram-sha-512, plain, oauth].

KafkaClientAuthenticationTls, KafkaClientAuthenticationScramSha512, KafkaClientAuthenticationPlain, KafkaClientAuthenticationOAuth

config

The MirrorMaker consumer config. Properties with the following prefixes cannot be set: ssl., bootstrap.servers, group.id, sasl., security., interceptor.classes (with the exception of: ssl.endpoint.identification.algorithm, ssl.cipher.suites, ssl.protocol, ssl.enabled.protocols).

map

tls

TLS configuration for connecting MirrorMaker to the cluster.

KafkaMirrorMakerTls

13.2.126. KafkaMirrorMakerTls schema reference

Used in: KafkaMirrorMakerConsumerSpec, KafkaMirrorMakerProducerSpec

Full list of KafkaMirrorMakerTls schema properties

Configures TLS trusted certificates for connecting MirrorMaker to the cluster.

13.2.126.1. trustedCertificates

Provide a list of secrets using the trustedCertificates property.

13.2.126.2. KafkaMirrorMakerTls schema properties

PropertyDescription

trustedCertificates

Trusted certificates for TLS connection.

CertSecretSource array

13.2.127. KafkaMirrorMakerProducerSpec schema reference

Used in: KafkaMirrorMakerSpec

Full list of KafkaMirrorMakerProducerSpec schema properties

Configures a MirrorMaker producer.

13.2.127.1. abortOnSendFailure

Use the producer.abortOnSendFailure property to configure how to handle message send failure from the producer.

By default, if an error occurs when sending a message from Kafka MirrorMaker to a Kafka cluster:

  • The Kafka MirrorMaker container is terminated in OpenShift.
  • The container is then recreated.

If the abortOnSendFailure option is set to false, message sending errors are ignored.

13.2.127.2. config

Use the producer.config properties to configure Kafka options for the producer.

The config property contains the Kafka MirrorMaker producer configuration options as keys, with values set in one of the following JSON types:

  • String
  • Number
  • Boolean

For client connection using a specific cipher suite for a TLS version, you can configure allowed ssl properties. You can also configure the ssl.endpoint.identification.algorithm property to enable or disable hostname verification.

Exceptions

You can specify and configure the options listed in the Apache Kafka configuration documentation for producers.

However, there are exceptions for options automatically configured and managed directly by AMQ Streams related to:

  • Kafka cluster bootstrap address
  • Security (encryption, authentication, and authorization)
  • Interceptors

Specifically, all configuration options with keys equal to or starting with one of the following strings are forbidden:

  • ssl.
  • sasl.
  • security.
  • bootstrap.servers
  • interceptor.classes

When a forbidden option is present in the config property, it is ignored and a warning message is printed to the Cluster Operator log file. All other options are passed to Kafka MirrorMaker.

Important

The Cluster Operator does not validate keys or values in the provided config object. When an invalid configuration is provided, the Kafka MirrorMaker might not start or might become unstable. In such cases, the configuration in the KafkaMirrorMaker.spec.producer.config object should be fixed and the Cluster Operator will roll out the new configuration for Kafka MirrorMaker.
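The following is a minimal sketch that combines abortOnSendFailure with producer config options that are not in the forbidden list; the values are illustrative:

spec:
  # ...
  producer:
    bootstrapServers: my-target-cluster-kafka-bootstrap:9092
    abortOnSendFailure: false
    config:
      compression.type: gzip
      batch.size: 8192
  # ...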

13.2.127.3. KafkaMirrorMakerProducerSpec schema properties

PropertyDescription

bootstrapServers

A list of host:port pairs for establishing the initial connection to the Kafka cluster.

string

abortOnSendFailure

Flag to set the MirrorMaker to exit on a failed send. Default value is true.

boolean

authentication

Authentication configuration for connecting to the cluster. The type depends on the value of the authentication.type property within the given object, which must be one of [tls, scram-sha-512, plain, oauth].

KafkaClientAuthenticationTls, KafkaClientAuthenticationScramSha512, KafkaClientAuthenticationPlain, KafkaClientAuthenticationOAuth

config

The MirrorMaker producer config. Properties with the following prefixes cannot be set: ssl., bootstrap.servers, sasl., security., interceptor.classes (with the exception of: ssl.endpoint.identification.algorithm, ssl.cipher.suites, ssl.protocol, ssl.enabled.protocols).

map

tls

TLS configuration for connecting MirrorMaker to the cluster.

KafkaMirrorMakerTls

13.2.128. KafkaMirrorMakerTemplate schema reference

Used in: KafkaMirrorMakerSpec

PropertyDescription

deployment

Template for Kafka MirrorMaker Deployment.

DeploymentTemplate

pod

Template for Kafka MirrorMaker Pods.

PodTemplate

mirrorMakerContainer

Template for Kafka MirrorMaker container.

ContainerTemplate

podDisruptionBudget

Template for Kafka MirrorMaker PodDisruptionBudget.

PodDisruptionBudgetTemplate

13.2.129. KafkaMirrorMakerStatus schema reference

Used in: KafkaMirrorMaker

PropertyDescription

conditions

List of status conditions.

Condition array

observedGeneration

The generation of the CRD that was last reconciled by the operator.

integer

labelSelector

Label selector for pods providing this resource.

string

replicas

The current number of pods being used to provide this resource.

integer

13.2.130. KafkaBridge schema reference

PropertyDescription

spec

The specification of the Kafka Bridge.

KafkaBridgeSpec

status

The status of the Kafka Bridge.

KafkaBridgeStatus

13.2.131. KafkaBridgeSpec schema reference

Used in: KafkaBridge

Full list of KafkaBridgeSpec schema properties

Configures a Kafka Bridge cluster.

Configuration options relate to:

  • Kafka cluster bootstrap address
  • Security (Encryption, Authentication, and Authorization)
  • Consumer configuration
  • Producer configuration
  • HTTP configuration

13.2.131.1. logging

Kafka Bridge has its own configurable loggers:

  • logger.bridge
  • logger.<operation-id>

You can replace <operation-id> in the logger.<operation-id> logger to set log levels for specific operations:

  • createConsumer
  • deleteConsumer
  • subscribe
  • unsubscribe
  • poll
  • assign
  • commit
  • send
  • sendToPartition
  • seekToBeginning
  • seekToEnd
  • seek
  • healthy
  • ready
  • openapi

Each operation is defined according to the OpenAPI specification, and has a corresponding API endpoint through which the bridge receives requests from HTTP clients. You can change the log level on each endpoint to create fine-grained logging information about the incoming and outgoing HTTP requests.

Each logger has to be configured by assigning it a name of the form http.openapi.operation.<operation-id>. For example, configuring the logging level for the send operation logger means defining the following:

logger.send.name = http.openapi.operation.send
logger.send.level = DEBUG

Kafka Bridge uses the Apache log4j2 logger implementation. Loggers are defined in the log4j2.properties file, which has the following default configuration for healthy and ready endpoints:

logger.healthy.name = http.openapi.operation.healthy
logger.healthy.level = WARN
logger.ready.name = http.openapi.operation.ready
logger.ready.level = WARN

The log level of all other operations is set to INFO by default.

Use the logging property to configure loggers and logger levels.

You can set the log levels by specifying the logger and level directly (inline) or use a custom (external) ConfigMap. If a ConfigMap is used, you set the logging.valueFrom.configMapKeyRef.name property to the name of the ConfigMap containing the external logging configuration. The logging.valueFrom.configMapKeyRef.name and logging.valueFrom.configMapKeyRef.key properties are mandatory. Default logging is used if the name or key is not set. Inside the ConfigMap, the logging configuration is described using log4j.properties. For more information about log levels, see Apache logging services.

Here we see examples of inline and external logging.

Inline logging

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaBridge
spec:
  # ...
  logging:
    type: inline
    loggers:
      logger.bridge.level: "INFO"
      # enabling DEBUG just for send operation
      logger.send.name: "http.openapi.operation.send"
      logger.send.level: "DEBUG"
  # ...

External logging

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaBridge
spec:
  # ...
  logging:
    type: external
    valueFrom:
      configMapKeyRef:
        name: customConfigMap
        key: bridge-log4j2.properties
  # ...

Any available loggers that are not configured have their level set to OFF.

If the Kafka Bridge was deployed using the Cluster Operator, changes to Kafka Bridge logging levels are applied dynamically.

If you use external logging, a rolling update is triggered when logging appenders are changed.

Garbage collector (GC)

Garbage collector logging can also be enabled (or disabled) using the jvmOptions property.

13.2.131.2. KafkaBridgeSpec schema properties

PropertyDescription

replicas

The number of pods in the Deployment.

integer

image

The docker image for the pods.

string

bootstrapServers

A list of host:port pairs for establishing the initial connection to the Kafka cluster.

string

tls

TLS configuration for connecting Kafka Bridge to the cluster.

KafkaBridgeTls

authentication

Authentication configuration for connecting to the cluster. The type depends on the value of the authentication.type property within the given object, which must be one of [tls, scram-sha-512, plain, oauth].

KafkaClientAuthenticationTls, KafkaClientAuthenticationScramSha512, KafkaClientAuthenticationPlain, KafkaClientAuthenticationOAuth

http

The HTTP related configuration.

KafkaBridgeHttpConfig

consumer

Kafka consumer related configuration.

KafkaBridgeConsumerSpec

producer

Kafka producer related configuration.

KafkaBridgeProducerSpec

resources

CPU and memory resources to reserve. For more information, see the external documentation for core/v1 resourcerequirements.

ResourceRequirements

jvmOptions

JVM Options for pods. Currently not supported.

JvmOptions

logging

Logging configuration for Kafka Bridge. The type depends on the value of the logging.type property within the given object, which must be one of [inline, external].

InlineLogging, ExternalLogging

enableMetrics

Enable the metrics for the Kafka Bridge. Default is false.

boolean

livenessProbe

Pod liveness checking.

Probe

readinessProbe

Pod readiness checking.

Probe

template

Template for Kafka Bridge resources. The template allows users to specify how the Deployment and Pods are generated.

KafkaBridgeTemplate

tracing

The configuration of tracing in Kafka Bridge. The type depends on the value of the tracing.type property within the given object, which must be one of [jaeger].

JaegerTracing

13.2.132. KafkaBridgeTls schema reference

Used in: KafkaBridgeSpec

PropertyDescription

trustedCertificates

Trusted certificates for TLS connection.

CertSecretSource array

13.2.133. KafkaBridgeHttpConfig schema reference

Used in: KafkaBridgeSpec

Full list of KafkaBridgeHttpConfig schema properties

Configures HTTP access to a Kafka cluster for the Kafka Bridge.

The default HTTP configuration is for the Kafka Bridge to listen on port 8080.

13.2.133.1. cors

As well as enabling HTTP access to a Kafka cluster, HTTP properties provide the capability to enable and define access control for the Kafka Bridge through Cross-Origin Resource Sharing (CORS). CORS is an HTTP mechanism that allows browser access to selected resources from more than one origin. To configure CORS, you define a list of allowed resource origins and HTTP access methods. For the origins, you can use a URL or a Java regular expression.

Example Kafka Bridge HTTP configuration

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaBridge
metadata:
  name: my-bridge
spec:
  # ...
  http:
    port: 8080
    cors:
      allowedOrigins: "https://strimzi.io"
      allowedMethods: "GET,POST,PUT,DELETE,OPTIONS,PATCH"
  # ...

13.2.133.2. KafkaBridgeHttpConfig schema properties

PropertyDescription

port

The port on which the server is listening.

integer

cors

CORS configuration for the HTTP Bridge.

KafkaBridgeHttpCors

13.2.134. KafkaBridgeHttpCors schema reference

Used in: KafkaBridgeHttpConfig

PropertyDescription

allowedOrigins

List of allowed origins. Java regular expressions can be used.

string array

allowedMethods

List of allowed HTTP methods.

string array

13.2.135. KafkaBridgeConsumerSpec schema reference

Used in: KafkaBridgeSpec

Full list of KafkaBridgeConsumerSpec schema properties

Configures consumer options for the Kafka Bridge as keys.

The values can be one of the following JSON types:

  • String
  • Number
  • Boolean

You can specify and configure the options listed in the Apache Kafka configuration documentation for consumers with the exception of those options which are managed directly by AMQ Streams. Specifically, all configuration options with keys equal to or starting with one of the following strings are forbidden:

  • ssl.
  • sasl.
  • security.
  • bootstrap.servers
  • group.id

When one of the forbidden options is present in the config property, it is ignored, and a warning message is printed to the Cluster Operator log file. All other options are passed to Kafka.

Important

The Cluster Operator does not validate keys or values in the config object. If an invalid configuration is provided, the Kafka Bridge cluster might not start or might become unstable. Fix the configuration so that the Cluster Operator can roll out the new configuration to all Kafka Bridge nodes.

There are exceptions to the forbidden options. For client connection using a specific cipher suite for a TLS version, you can configure allowed ssl properties.

Example Kafka Bridge consumer configuration

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaBridge
metadata:
  name: my-bridge
spec:
  # ...
  consumer:
    config:
      auto.offset.reset: earliest
      enable.auto.commit: true
      ssl.cipher.suites: "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"
      ssl.enabled.protocols: "TLSv1.2"
      ssl.protocol: "TLSv1.2"
      ssl.endpoint.identification.algorithm: HTTPS
    # ...

13.2.135.1. KafkaBridgeConsumerSpec schema properties

PropertyDescription

config

The Kafka consumer configuration used for consumer instances created by the bridge. Properties with the following prefixes cannot be set: ssl., bootstrap.servers, group.id, sasl., security. (with the exception of: ssl.endpoint.identification.algorithm, ssl.cipher.suites, ssl.protocol, ssl.enabled.protocols).

map

13.2.136. KafkaBridgeProducerSpec schema reference

Used in: KafkaBridgeSpec

Full list of KafkaBridgeProducerSpec schema properties

Configures producer options for the Kafka Bridge as keys.

The values can be one of the following JSON types:

  • String
  • Number
  • Boolean

You can specify and configure the options listed in the Apache Kafka configuration documentation for producers with the exception of those options which are managed directly by AMQ Streams. Specifically, all configuration options with keys equal to or starting with one of the following strings are forbidden:

  • ssl.
  • sasl.
  • security.
  • bootstrap.servers

When one of the forbidden options is present in the config property, it is ignored, and a warning message is printed to the Cluster Operator log file. All other options are passed to Kafka.

Important

The Cluster Operator does not validate keys or values in the config object. If an invalid configuration is provided, the Kafka Bridge cluster might not start or might become unstable. Fix the configuration so that the Cluster Operator can roll out the new configuration to all Kafka Bridge nodes.

There are exceptions to the forbidden options. For client connection using a specific cipher suite for a TLS version, you can configure allowed ssl properties.

Example Kafka Bridge producer configuration

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaBridge
metadata:
  name: my-bridge
spec:
  # ...
  producer:
    config:
      acks: 1
      delivery.timeout.ms: 300000
      ssl.cipher.suites: "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"
      ssl.enabled.protocols: "TLSv1.2"
      ssl.protocol: "TLSv1.2"
      ssl.endpoint.identification.algorithm: HTTPS
    # ...

13.2.136.1. KafkaBridgeProducerSpec schema properties

PropertyDescription

config

The Kafka producer configuration used for producer instances created by the bridge. Properties with the following prefixes cannot be set: ssl., bootstrap.servers, sasl., security. (with the exception of: ssl.endpoint.identification.algorithm, ssl.cipher.suites, ssl.protocol, ssl.enabled.protocols).

map

13.2.137. KafkaBridgeTemplate schema reference

Used in: KafkaBridgeSpec

PropertyDescription

deployment

Template for Kafka Bridge Deployment.

DeploymentTemplate

pod

Template for Kafka Bridge Pods.

PodTemplate

apiService

Template for Kafka Bridge API Service.

ResourceTemplate

bridgeContainer

Template for the Kafka Bridge container.

ContainerTemplate

podDisruptionBudget

Template for Kafka Bridge PodDisruptionBudget.

PodDisruptionBudgetTemplate
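
A sketch of how the template properties can customize the generated resources, using assumed label and environment variable values for illustration:

Example Kafka Bridge template configuration (illustrative)

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaBridge
metadata:
  name: my-bridge
spec:
  # ...
  template:
    deployment:
      metadata:
        labels:
          app.kubernetes.io/part-of: my-platform  # assumed label
    pod:
      metadata:
        labels:
          app.kubernetes.io/part-of: my-platform  # assumed label
    bridgeContainer:
      env:
        - name: EXAMPLE_ENV_VAR                   # assumed environment variable
          value: example-value
  # ...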

13.2.138. KafkaBridgeStatus schema reference

Used in: KafkaBridge

PropertyDescription

conditions

List of status conditions.

Condition array

observedGeneration

The generation of the CRD that was last reconciled by the operator.

integer

url

The URL at which external client applications can access the Kafka Bridge.

string

labelSelector

Label selector for pods providing this resource.

string

replicas

The current number of pods being used to provide this resource.

integer

13.2.139. KafkaConnector schema reference

PropertyDescription

spec

The specification of the Kafka Connector.

KafkaConnectorSpec

status

The status of the Kafka Connector.

KafkaConnectorStatus

13.2.140. KafkaConnectorSpec schema reference

Used in: KafkaConnector

PropertyDescription

class

The Class for the Kafka Connector.

string

tasksMax

The maximum number of tasks for the Kafka Connector.

integer

config

The Kafka Connector configuration. The following properties cannot be set: connector.class, tasks.max.

map

pause

Whether the connector should be paused. Defaults to false.

boolean
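
A minimal sketch of a KafkaConnector resource, assuming a Kafka Connect cluster named my-connect-cluster (referenced through the strimzi.io/cluster label) and the FileStreamSourceConnector plugin shipped with Apache Kafka:

Example Kafka Connector configuration (illustrative)

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: my-source-connector
  labels:
    strimzi.io/cluster: my-connect-cluster  # assumed Kafka Connect cluster name
spec:
  class: org.apache.kafka.connect.file.FileStreamSourceConnector
  tasksMax: 1
  config:
    file: "/opt/kafka/LICENSE"   # assumed source file
    topic: my-topic              # assumed target topic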

13.2.141. KafkaConnectorStatus schema reference

Used in: KafkaConnector

PropertyDescription

conditions

List of status conditions.

Condition array

observedGeneration

The generation of the CRD that was last reconciled by the operator.

integer

connectorStatus

The connector status, as reported by the Kafka Connect REST API.

map

tasksMax

The maximum number of tasks for the Kafka Connector.

integer

topics

The list of topics used by the Kafka Connector.

string array

13.2.142. KafkaMirrorMaker2 schema reference

PropertyDescription

spec

The specification of the Kafka MirrorMaker 2.0 cluster.

KafkaMirrorMaker2Spec

status

The status of the Kafka MirrorMaker 2.0 cluster.

KafkaMirrorMaker2Status

13.2.143. KafkaMirrorMaker2Spec schema reference

Used in: KafkaMirrorMaker2

PropertyDescription

version

The Kafka Connect version. Defaults to 2.7.0. Consult the user documentation to understand the process required to upgrade or downgrade the version.

string

replicas

The number of pods in the Kafka Connect group.

integer

image

The docker image for the pods.

string

connectCluster

The cluster alias used for Kafka Connect. The alias must match a cluster in the list at spec.clusters.

string

clusters

Kafka clusters for mirroring.

KafkaMirrorMaker2ClusterSpec array

mirrors

Configuration of the MirrorMaker 2.0 connectors.

KafkaMirrorMaker2MirrorSpec array

resources

The maximum limits for CPU and memory resources and the requested initial resources. For more information, see the external documentation for core/v1 resourcerequirements.

ResourceRequirements

livenessProbe

Pod liveness checking.

Probe

readinessProbe

Pod readiness checking.

Probe

jvmOptions

JVM Options for pods.

JvmOptions

jmxOptions

JMX Options.

KafkaJmxOptions

affinity

The affinity property has been deprecated, and should now be configured using spec.template.pod.affinity. The property affinity is removed in API version v1beta2. The pod’s affinity rules. For more information, see the external documentation for core/v1 affinity.

Affinity

tolerations

The tolerations property has been deprecated, and should now be configured using spec.template.pod.tolerations. The property tolerations is removed in API version v1beta2. The pod’s tolerations. For more information, see the external documentation for core/v1 toleration.

Toleration array

logging

Logging configuration for Kafka Connect. The type depends on the value of the logging.type property within the given object, which must be one of [inline, external].

InlineLogging, ExternalLogging

metrics

The metrics property has been deprecated, and should now be configured using spec.metricsConfig. The property metrics is removed in API version v1beta2. The Prometheus JMX Exporter configuration. See https://github.com/prometheus/jmx_exporter for details of the structure of this configuration.

map

tracing

The configuration of tracing in Kafka Connect. The type depends on the value of the tracing.type property within the given object, which must be one of [jaeger].

JaegerTracing

template

Template for Kafka Connect and Kafka Connect S2I resources. The template allows users to specify how the Deployment, Pods and Service are generated.

KafkaConnectTemplate

externalConfiguration

Pass data from Secrets or ConfigMaps to the Kafka Connect pods and use them to configure connectors.

ExternalConfiguration

metricsConfig

Metrics configuration. The type depends on the value of the metricsConfig.type property within the given object, which must be one of [jmxPrometheusExporter].

JmxPrometheusExporterMetrics

13.2.144. KafkaMirrorMaker2ClusterSpec schema reference

Used in: KafkaMirrorMaker2Spec

Full list of KafkaMirrorMaker2ClusterSpec schema properties

Configures Kafka clusters for mirroring.

13.2.144.1. config

Use the config properties to configure Kafka options.

Standard Apache Kafka configuration may be provided, restricted to those properties not managed directly by AMQ Streams.

For client connection using a specific cipher suite for a TLS version, you can configure allowed ssl properties. You can also configure the ssl.endpoint.identification.algorithm property to enable or disable hostname verification.
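
A sketch of a cluster entry with the allowed ssl properties, following the same pattern as the consumer and producer examples earlier in this chapter; the alias and bootstrap address are assumptions:

Example MirrorMaker 2.0 cluster configuration (illustrative)

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaMirrorMaker2
metadata:
  name: my-mirror-maker-2
spec:
  # ...
  clusters:
    - alias: "my-target-cluster"                                # assumed alias
      bootstrapServers: my-target-cluster-kafka-bootstrap:9092  # assumed bootstrap address
      config:
        ssl.cipher.suites: "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"
        ssl.enabled.protocols: "TLSv1.2"
        ssl.protocol: "TLSv1.2"
        ssl.endpoint.identification.algorithm: HTTPS
  # ...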

13.2.144.2. KafkaMirrorMaker2ClusterSpec schema properties

PropertyDescription

alias

Alias used to reference the Kafka cluster.

string

bootstrapServers

A comma-separated list of host:port pairs for establishing the connection to the Kafka cluster.

string

tls

TLS configuration for connecting MirrorMaker 2.0 connectors to a cluster.

KafkaMirrorMaker2Tls

authentication

Authentication configuration for connecting to the cluster. The type depends on the value of the authentication.type property within the given object, which must be one of [tls, scram-sha-512, plain, oauth].

KafkaClientAuthenticationTls, KafkaClientAuthenticationScramSha512, KafkaClientAuthenticationPlain, KafkaClientAuthenticationOAuth

config

The MirrorMaker 2.0 cluster config. Properties with the following prefixes cannot be set: ssl., sasl., security., listeners, plugin.path, rest., bootstrap.servers, consumer.interceptor.classes, producer.interceptor.classes (with the exception of: ssl.endpoint.identification.algorithm, ssl.cipher.suites, ssl.protocol, ssl.enabled.protocols).

map

13.2.145. KafkaMirrorMaker2Tls schema reference

Used in: KafkaMirrorMaker2ClusterSpec

PropertyDescription

trustedCertificates

Trusted certificates for TLS connection.

CertSecretSource array

13.2.146. KafkaMirrorMaker2MirrorSpec schema reference

Used in: KafkaMirrorMaker2Spec

PropertyDescription

sourceCluster

The alias of the source cluster used by the Kafka MirrorMaker 2.0 connectors. The alias must match a cluster in the list at spec.clusters.

string

targetCluster

The alias of the target cluster used by the Kafka MirrorMaker 2.0 connectors. The alias must match a cluster in the list at spec.clusters.

string

sourceConnector

The specification of the Kafka MirrorMaker 2.0 source connector.

KafkaMirrorMaker2ConnectorSpec

heartbeatConnector

The specification of the Kafka MirrorMaker 2.0 heartbeat connector.

KafkaMirrorMaker2ConnectorSpec

checkpointConnector

The specification of the Kafka MirrorMaker 2.0 checkpoint connector.

KafkaMirrorMaker2ConnectorSpec

topicsPattern

A regular expression matching the topics to be mirrored, for example, "topic1|topic2|topic3". Comma-separated lists are also supported.

string

topicsBlacklistPattern

A regular expression matching the topics to exclude from mirroring. Comma-separated lists are also supported.

string

groupsPattern

A regular expression matching the consumer groups to be mirrored. Comma-separated lists are also supported.

string

groupsBlacklistPattern

A regular expression matching the consumer groups to exclude from mirroring. Comma-separated lists are also supported.

string
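
A sketch of a mirrors entry that ties the source and target cluster aliases to the connector specifications; the aliases, patterns, and connector options shown are assumptions:

Example MirrorMaker 2.0 mirrors configuration (illustrative)

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaMirrorMaker2
metadata:
  name: my-mirror-maker-2
spec:
  # ...
  connectCluster: "my-target-cluster"        # must match an alias in spec.clusters
  mirrors:
    - sourceCluster: "my-source-cluster"     # assumed alias defined in spec.clusters
      targetCluster: "my-target-cluster"     # assumed alias defined in spec.clusters
      sourceConnector:
        tasksMax: 2
        config:
          replication.factor: 1
          offset-syncs.topic.replication.factor: 1
      checkpointConnector:
        config:
          checkpoints.topic.replication.factor: 1
      topicsPattern: ".*"                    # mirror all topics
      groupsPattern: ".*"                    # mirror all consumer groups
  # ...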

13.2.147. KafkaMirrorMaker2ConnectorSpec schema reference

Used in: KafkaMirrorMaker2MirrorSpec

PropertyDescription

tasksMax

The maximum number of tasks for the Kafka Connector.

integer

config

The Kafka Connector configuration. The following properties cannot be set: connector.class, tasks.max.

map

pause

Whether the connector should be paused. Defaults to false.

boolean

13.2.148. KafkaMirrorMaker2Status schema reference

Used in: KafkaMirrorMaker2

PropertyDescription

conditions

List of status conditions.

Condition array

observedGeneration

The generation of the CRD that was last reconciled by the operator.

integer

url

The URL of the REST API endpoint for managing and monitoring Kafka Connect connectors.

string

connectorPlugins

The list of connector plugins available in this Kafka Connect deployment.

ConnectorPlugin array

connectors

List of MirrorMaker 2.0 connector statuses, as reported by the Kafka Connect REST API.

map array

labelSelector

Label selector for pods providing this resource.

string

replicas

The current number of pods being used to provide this resource.

integer

13.2.149. KafkaRebalance schema reference

PropertyDescription

spec

The specification of the Kafka rebalance.

KafkaRebalanceSpec

status

The status of the Kafka rebalance.

KafkaRebalanceStatus

13.2.150. KafkaRebalanceSpec schema reference

Used in: KafkaRebalance

PropertyDescription

goals

A list of goals, ordered by decreasing priority, to use for generating and executing the rebalance proposal. The supported goals are available at https://github.com/linkedin/cruise-control#goals. If an empty goals list is provided, the goals declared in the default.goals Cruise Control configuration parameter are used.

string array

skipHardGoalCheck

Whether to allow the hard goals specified in the Kafka CR to be skipped in optimization proposal generation. This can be useful when some of those hard goals are preventing a balance solution being found. Default is false.

boolean

excludedTopics

A regular expression where any matching topics will be excluded from the calculation of optimization proposals. This expression will be parsed by the java.util.regex.Pattern class; for more information on the supported format, consult the documentation for that class.

string

concurrentPartitionMovementsPerBroker

The upper bound of ongoing partition replica movements going into/out of each broker. Default is 5.

integer

concurrentIntraBrokerPartitionMovements

The upper bound of ongoing partition replica movements between disks within each broker. Default is 2.

integer

concurrentLeaderMovements

The upper bound of ongoing partition leadership movements. Default is 1000.

integer

replicationThrottle

The upper bound, in bytes per second, on the bandwidth used to move replicas. There is no limit by default.

integer

replicaMovementStrategies

A list of strategy class names used to determine the execution order for the replica movements in the generated optimization proposal. By default BaseReplicaMovementStrategy is used, which will execute the replica movements in the order that they were generated.

string array
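
A minimal sketch of a KafkaRebalance resource, assuming a Kafka cluster named my-cluster (referenced through the strimzi.io/cluster label) and a subset of the supported Cruise Control goals:

Example Kafka rebalance configuration (illustrative)

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaRebalance
metadata:
  name: my-rebalance
  labels:
    strimzi.io/cluster: my-cluster          # assumed Kafka cluster name
spec:
  goals:
    - RackAwareGoal
    - ReplicaCapacityGoal
    - DiskCapacityGoal
  skipHardGoalCheck: false
  excludedTopics: "heartbeats|connect-.*"   # assumed exclusion pattern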

13.2.151. KafkaRebalanceStatus schema reference

Used in: KafkaRebalance

PropertyDescription

conditions

List of status conditions.

Condition array

observedGeneration

The generation of the CRD that was last reconciled by the operator.

integer

sessionId

The session identifier for requests to Cruise Control pertaining to this KafkaRebalance resource. This is used by the Kafka Rebalance operator to track the status of ongoing rebalancing operations.

string

optimizationResult

A JSON object describing the optimization result.

map
