Appendix B. Custom Resource API Reference
B.1. Common configuration properties
Common configuration properties apply to more than one resource.
B.1.1. replicas
Use the replicas property to configure replicas.
The type of replication depends on the resource.
- KafkaTopic uses a replication factor to configure the number of replicas of each partition within a Kafka cluster.
- Kafka components use replicas to configure the number of pods in a deployment to provide better availability and scalability.
When running a Kafka component on OpenShift it may not be necessary to run multiple replicas for high availability. When the node where the component is deployed crashes, OpenShift will automatically reschedule the Kafka component pod to a different node. However, running Kafka components with multiple replicas can provide faster failover times as the other nodes will be up and running.
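For example, the following sketch sets three replicas for a Kafka Connect deployment; the my-connect-cluster name is illustrative:
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaConnect
metadata:
  name: my-connect-cluster
spec:
  # Three Kafka Connect pods for better availability and faster failover
  replicas: 3
  # ...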
B.1.2. bootstrapServers
Use the bootstrapServers property to configure a list of bootstrap servers.
The bootstrap server lists can refer to Kafka clusters that are not deployed in the same OpenShift cluster. They can also refer to a Kafka cluster not deployed by AMQ Streams.
If on the same OpenShift cluster, each list should ideally contain the Kafka cluster bootstrap service, which is named CLUSTER-NAME-kafka-bootstrap, and a port number. If deployed by AMQ Streams but on different OpenShift clusters, the list content depends on the approach used for exposing the clusters (routes, nodeports or loadbalancers).
When using Kafka with a Kafka cluster not managed by AMQ Streams, you can specify the bootstrap servers list according to the configuration of the given cluster.
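As a minimal sketch, assuming a cluster named my-cluster deployed by AMQ Streams in the same OpenShift cluster, the bootstrap servers list can point at the bootstrap service and client port:
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaBridge
metadata:
  name: my-bridge
spec:
  # CLUSTER-NAME-kafka-bootstrap service and port number
  bootstrapServers: my-cluster-kafka-bootstrap:9092
  # ...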
B.1.3. ssl
Use the three allowed ssl configuration options for client connection using a specific cipher suite for a TLS version. A cipher suite combines algorithms for secure connection and data transfer.
You can also configure the ssl.endpoint.identification.algorithm property to enable or disable hostname verification.
Example SSL configuration
# ...
spec:
  config:
    ssl.cipher.suites: "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" 1
    ssl.enabled.protocols: "TLSv1.2" 2
    ssl.protocol: "TLSv1.2" 3
    ssl.endpoint.identification.algorithm: HTTPS 4
# ...
- 1 - The cipher suite for TLS using a combination of the ECDHE key exchange mechanism, RSA authentication algorithm, AES bulk encryption algorithm and SHA384 MAC algorithm.
- 2 - The SSL protocol TLSv1.2 is enabled.
- 3 - Specifies the TLSv1.2 protocol to generate the SSL context. Allowed values are TLSv1.1 and TLSv1.2.
- 4 - Hostname verification is enabled by setting to HTTPS. An empty string disables the verification.
B.1.4. trustedCertificates
Having set tls to configure TLS encryption, use the trustedCertificates property to provide a list of secrets with key names under which the certificates are stored in X.509 format.
You can use the secrets created by the Cluster Operator for the Kafka cluster, or you can create your own TLS certificate file, then create a Secret from the file:
oc create secret generic MY-SECRET \
  --from-file=MY-TLS-CERTIFICATE-FILE.crt
Example TLS encryption configuration
tls:
  trustedCertificates:
    - secretName: my-cluster-cluster-cert
      certificate: ca.crt
    - secretName: my-cluster-cluster-cert
      certificate: ca2.crt
If certificates are stored in the same secret, the secret can be listed multiple times.
If you want to enable TLS, but use the default set of public certification authorities shipped with Java, you can specify trustedCertificates as an empty array:
Example of enabling TLS with the default Java certificates
tls:
  trustedCertificates: []
For information on configuring TLS client authentication, see KafkaClientAuthenticationTls schema reference.
B.1.5. resources
You request CPU and memory resources for components. Limits specify the maximum resources that can be consumed by a given container.
Resource requests and limits for the Topic Operator and User Operator are set in the Kafka resource.
Use the resources.requests and resources.limits properties to configure resource requests and limits.
For every deployed container, AMQ Streams allows you to request specific resources and define the maximum consumption of those resources.
AMQ Streams supports requests and limits for the following types of resources:
- cpu
- memory
AMQ Streams uses the OpenShift syntax for specifying these resources.
For more information about managing computing resources on OpenShift, see Managing Compute Resources for Containers.
Resource requests
Requests specify the resources to reserve for a given container. Reserving the resources ensures that they are always available.
If the resource request is for more than the available free resources in the OpenShift cluster, the pod is not scheduled.
A request may be configured for one or more supported resources.
Resource limits
Limits specify the maximum resources that can be consumed by a given container. The limit is not reserved and might not always be available. A container can use the resources up to the limit only when they are available. Resource limits should always be higher than the resource requests.
A resource may be configured for one or more supported limits.
Supported CPU formats
CPU requests and limits are supported in the following formats:
- Number of CPU cores as integer (5 CPU core) or decimal (2.5 CPU core).
- Number of millicpus / millicores (100m) where 1000 millicores is the same as 1 CPU core.
The computing power of 1 CPU core may differ depending on the platform where OpenShift is deployed.
For more information on CPU specification, see the Meaning of CPU.
Supported memory formats
Memory requests and limits are specified in megabytes, gigabytes, mebibytes, and gibibytes.
- To specify memory in megabytes, use the M suffix. For example, 1000M.
- To specify memory in gigabytes, use the G suffix. For example, 1G.
- To specify memory in mebibytes, use the Mi suffix. For example, 1000Mi.
- To specify memory in gibibytes, use the Gi suffix. For example, 1Gi.
For more details about memory specification and additional supported units, see Meaning of memory.
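The following sketch combines the CPU and memory formats described above; the values are illustrative only:
# ...
resources:
  requests:
    cpu: "1"      # one full CPU core
    memory: 2Gi   # 2 gibibytes reserved for the container
  limits:
    cpu: 2000m    # 2000 millicores, the same as 2 CPU cores
    memory: 4Gi   # maximum memory the container can consume
# ...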
B.1.6. image
Use the image property to configure the container image used by the component.
Overriding container images is recommended only in special situations where you need to use a different container registry or a customized image.
For example, if your network does not allow access to the container repository used by AMQ Streams, you can copy the AMQ Streams images or build them from the source. However, if the configured image is not compatible with AMQ Streams images, it might not work properly.
A copy of the container image might also be customized and used for debugging.
You can specify which container image to use for a component using the image property in the following resources:
- Kafka.spec.kafka
- Kafka.spec.zookeeper
- Kafka.spec.entityOperator.topicOperator
- Kafka.spec.entityOperator.userOperator
- Kafka.spec.entityOperator.tlsSidecar
- KafkaConnect.spec
- KafkaConnectS2I.spec
- KafkaMirrorMaker.spec
- KafkaMirrorMaker2.spec
- KafkaBridge.spec
Configuring the image property for Kafka, Kafka Connect, and Kafka MirrorMaker
Kafka, Kafka Connect (including Kafka Connect with S2I support), and Kafka MirrorMaker support multiple versions of Kafka. Each component requires its own image. The default images for the different Kafka versions are configured in the following environment variables:
- STRIMZI_KAFKA_IMAGES
- STRIMZI_KAFKA_CONNECT_IMAGES
- STRIMZI_KAFKA_CONNECT_S2I_IMAGES
- STRIMZI_KAFKA_MIRROR_MAKER_IMAGES
These environment variables contain mappings between the Kafka versions and their corresponding images. The mappings are used together with the image and version properties:
- If neither image nor version are given in the custom resource, then the version will default to the Cluster Operator's default Kafka version, and the image will be the one corresponding to this version in the environment variable.
- If image is given but version is not, then the given image is used and the version is assumed to be the Cluster Operator's default Kafka version.
- If version is given but image is not, then the image that corresponds to the given version in the environment variable is used.
- If both version and image are given, then the given image is used. The image is assumed to contain a Kafka image with the given version.
The image and version for the different components can be configured in the following properties:
- For Kafka in spec.kafka.image and spec.kafka.version.
- For Kafka Connect, Kafka Connect S2I, and Kafka MirrorMaker in spec.image and spec.version.
It is recommended to provide only the version and leave the image property unspecified. This reduces the chance of making a mistake when configuring the custom resource. If you need to change the images used for different versions of Kafka, it is preferable to configure the Cluster Operator's environment variables.
Configuring the image property in other resources
For the image property in the other custom resources, the given value will be used during deployment. If the image property is missing, the image specified in the Cluster Operator configuration will be used. If the image name is not defined in the Cluster Operator configuration, then the default value will be used.
For Topic Operator:
- Container image specified in the STRIMZI_DEFAULT_TOPIC_OPERATOR_IMAGE environment variable from the Cluster Operator configuration.
- registry.redhat.io/amq7/amq-streams-rhel7-operator:1.6.7 container image.
For User Operator:
- Container image specified in the STRIMZI_DEFAULT_USER_OPERATOR_IMAGE environment variable from the Cluster Operator configuration.
- registry.redhat.io/amq7/amq-streams-rhel7-operator:1.6.7 container image.
For Entity Operator TLS sidecar:
- Container image specified in the STRIMZI_DEFAULT_TLS_SIDECAR_ENTITY_OPERATOR_IMAGE environment variable from the Cluster Operator configuration.
- registry.redhat.io/amq7/amq-streams-kafka-26-rhel7:1.6.7 container image.
For Kafka Exporter:
- Container image specified in the STRIMZI_DEFAULT_KAFKA_EXPORTER_IMAGE environment variable from the Cluster Operator configuration.
- registry.redhat.io/amq7/amq-streams-kafka-26-rhel7:1.6.7 container image.
For Kafka Bridge:
- Container image specified in the STRIMZI_DEFAULT_KAFKA_BRIDGE_IMAGE environment variable from the Cluster Operator configuration.
- registry.redhat.io/amq7/amq-streams-bridge-rhel7:1.6.7 container image.
For Kafka broker initializer:
- Container image specified in the STRIMZI_DEFAULT_KAFKA_INIT_IMAGE environment variable from the Cluster Operator configuration.
- registry.redhat.io/amq7/amq-streams-rhel7-operator:1.6.7 container image.
Example of container image configuration
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    image: my-org/my-image:latest
    # ...
  zookeeper:
    # ...
B.1.7. livenessProbe and readinessProbe healthchecks
Use the livenessProbe and readinessProbe properties to configure healthcheck probes supported in AMQ Streams.
Healthchecks are periodic tests which verify the health of an application. When a healthcheck probe fails, OpenShift assumes that the application is not healthy and attempts to fix it.
For more details about the probes, see Configure Liveness and Readiness Probes.
Both livenessProbe and readinessProbe support the following options:
- initialDelaySeconds
- timeoutSeconds
- periodSeconds
- successThreshold
- failureThreshold
Example of liveness and readiness probe configuration
# ...
readinessProbe:
  initialDelaySeconds: 15
  timeoutSeconds: 5
livenessProbe:
  initialDelaySeconds: 15
  timeoutSeconds: 5
# ...
For more information about the livenessProbe and readinessProbe options, see Probe schema reference.
B.1.8. metrics
Use the metrics property to enable and configure Prometheus metrics.
The metrics property can also contain additional configuration for the Prometheus JMX exporter. AMQ Streams supports Prometheus metrics using Prometheus JMX exporter to convert the JMX metrics supported by Apache Kafka and ZooKeeper to Prometheus metrics.
To enable Prometheus metrics export without any further configuration, you can set it to an empty object ({}).
When metrics are enabled, they are exposed on port 9404.
When the metrics property is not defined in the resource, the Prometheus metrics are disabled.
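For example, a minimal sketch that enables the metrics exporter with its default configuration:
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    metrics: {}   # empty object enables metrics on port 9404 with default settings
    # ...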
For more information about setting up and deploying Prometheus and Grafana, see Introducing Metrics to Kafka in the Deploying and Upgrading AMQ Streams on OpenShift guide.
B.1.9. jvmOptions
JVM options can be configured using the jvmOptions property in the following resources:
- Kafka.spec.kafka
- Kafka.spec.zookeeper
- KafkaConnect.spec
- KafkaConnectS2I.spec
- KafkaMirrorMaker.spec
- KafkaMirrorMaker2.spec
- KafkaBridge.spec
Only the following JVM options are supported:
-Xms
Configures the minimum initial allocation heap size when the JVM starts.
-Xmx
Configures the maximum heap size.
The units accepted by JVM settings such as -Xmx and -Xms are those accepted by the JDK java binary in the corresponding image. Accordingly, 1g or 1G means 1,073,741,824 bytes, and Gi is not a valid unit suffix. This is in contrast to the units used for memory requests and limits, which follow the OpenShift convention where 1G means 1,000,000,000 bytes, and 1Gi means 1,073,741,824 bytes.
The default values used for -Xms and -Xmx depend on whether there is a memory limit configured for the container.
- If there is a memory limit, the JVM’s minimum and maximum memory is set to a value corresponding to the limit.
- If there is no memory limit, the JVM's minimum memory is set to 128M. The JVM's maximum memory is not defined to allow the memory to grow as needed, which is ideal for single node environments in test and development.
Setting -Xmx explicitly requires some care:
- The JVM's overall memory usage will be approximately 4 × the maximum heap, as configured by -Xmx.
- If -Xmx is set without also setting an appropriate OpenShift memory limit, it is possible that the container will be killed should the OpenShift node experience memory pressure (from other Pods running on it).
- If -Xmx is set without also setting an appropriate OpenShift memory request, it is possible that the container will be scheduled to a node with insufficient memory. In this case, the container will not start but crash (immediately if -Xms is set to -Xmx, or some later time if not).
When setting -Xmx explicitly, it is recommended to:
- Set the memory request and the memory limit to the same value
- Use a memory request that is at least 4.5 × the -Xmx
- Consider setting -Xms to the same value as -Xmx
Containers doing lots of disk I/O (such as Kafka broker containers) will need to leave some memory available for use as an operating system page cache. On such containers, the requested memory should be significantly higher than the memory used by the JVM.
Example fragment configuring -Xmx and -Xms
# ...
jvmOptions:
  "-Xmx": "2g"
  "-Xms": "2g"
# ...
In the above example, the JVM will use 2 GiB (=2,147,483,648 bytes) for its heap. Its total memory usage will be approximately 8 GiB.
Setting the same value for initial (-Xms) and maximum (-Xmx) heap sizes avoids the JVM having to allocate memory after startup, at the cost of possibly allocating more heap than is really needed. For Kafka and ZooKeeper pods such allocation could cause unwanted latency. For Kafka Connect avoiding over-allocation may be the most important concern, especially in distributed mode where the effects of over-allocation are multiplied by the number of consumers.
-server
-server enables the server JVM. This option can be set to true or false.
Example fragment configuring -server
# ...
jvmOptions:
  "-server": true
# ...
When neither of the two options (-server and -XX) are specified, the default Apache Kafka configuration of KAFKA_JVM_PERFORMANCE_OPTS is used.
-XX
The -XX object can be used for configuring advanced runtime options of a JVM. The -server and -XX options are used to configure the KAFKA_JVM_PERFORMANCE_OPTS option of Apache Kafka.
Example showing the use of the -XX object
jvmOptions:
  "-XX":
    "UseG1GC": true
    "MaxGCPauseMillis": 20
    "InitiatingHeapOccupancyPercent": 35
    "ExplicitGCInvokesConcurrent": true
The example configuration above will result in the following JVM options:
-XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:+ExplicitGCInvokesConcurrent -XX:-UseParNewGC
When neither of the two options (-server and -XX) are specified, the default Apache Kafka configuration of KAFKA_JVM_PERFORMANCE_OPTS is used.
B.1.10. Garbage collector logging
The jvmOptions property also allows you to enable and disable garbage collector (GC) logging. GC logging is disabled by default. To enable it, set the gcLoggingEnabled property as follows:
Example of enabling GC logging
# ...
jvmOptions:
  gcLoggingEnabled: true
# ...
B.2. Kafka schema reference
Property | Description |
---|---|
spec | The specification of the Kafka and ZooKeeper clusters, and Topic Operator. |
status | The status of the Kafka and ZooKeeper clusters, and Topic Operator. |
B.3. KafkaSpec schema reference
Used in: Kafka
Property | Description |
---|---|
kafka | Configuration of the Kafka cluster. |
zookeeper | Configuration of the ZooKeeper cluster. |
topicOperator |
The property topicOperator has been deprecated. This feature should now be configured using spec.entityOperator.topicOperator. |
entityOperator | Configuration of the Entity Operator. |
clusterCa | Configuration of the cluster certificate authority. |
clientsCa | Configuration of the clients certificate authority. |
cruiseControl | Configuration for Cruise Control deployment. Deploys a Cruise Control instance when specified. |
kafkaExporter | Configuration of the Kafka Exporter. Kafka Exporter can provide additional metrics, for example, consumer group lag at the topic/partition level. |
maintenanceTimeWindows | A list of time windows for maintenance tasks (that is, certificates renewal). Each time window is defined by a cron expression. |
string array |
B.4. KafkaClusterSpec schema reference
Used in: KafkaSpec
Configures a Kafka cluster.
B.4.1. listeners
Use the listeners property to configure listeners to provide access to Kafka brokers.
Example configuration of a plain (unencrypted) listener without authentication
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
spec:
  kafka:
    # ...
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
    # ...
  zookeeper:
    # ...
B.4.2. config
Use the config properties to configure Kafka brokers as keys with values in one of the following JSON types:
- String
- Number
- Boolean
You can specify and configure all of the options in the "Broker Configs" section of the Apache Kafka documentation apart from those managed directly by AMQ Streams. Specifically, all configuration options with keys equal to or starting with one of the following strings are forbidden:
- listeners
- advertised.
- broker.
- listener.
- host.name
- port
- inter.broker.listener.name
- sasl.
- ssl.
- security.
- password.
- principal.builder.class
- log.dir
- zookeeper.connect
- zookeeper.set.acl
- authorizer.
- super.user
When a forbidden option is present in the config property, it is ignored and a warning message is printed to the Cluster Operator log file. All other supported options are passed to Kafka.
There are exceptions to the forbidden options. For client connection using a specific cipher suite for a TLS version, you can configure allowed ssl properties. You can also configure the zookeeper.connection.timeout.ms property to set the maximum time allowed for establishing a ZooKeeper connection.
Example Kafka broker configuration
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    config:
      num.partitions: 1
      num.recovery.threads.per.data.dir: 1
      default.replication.factor: 3
      offsets.topic.replication.factor: 3
      transaction.state.log.replication.factor: 3
      transaction.state.log.min.isr: 1
      log.retention.hours: 168
      log.segment.bytes: 1073741824
      log.retention.check.interval.ms: 300000
      num.network.threads: 3
      num.io.threads: 8
      socket.send.buffer.bytes: 102400
      socket.receive.buffer.bytes: 102400
      socket.request.max.bytes: 104857600
      group.initial.rebalance.delay.ms: 0
      ssl.cipher.suites: "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"
      ssl.enabled.protocols: "TLSv1.2"
      ssl.protocol: "TLSv1.2"
      zookeeper.connection.timeout.ms: 6000
    # ...
Property | Description |
---|---|
replicas | The number of pods in the cluster. |
integer | |
image |
The docker image for the pods. The default value depends on the configured |
string | |
storage |
Storage configuration (disk). Cannot be updated. The type depends on the value of the storage.type property. |
listeners | Configures listeners of Kafka brokers. |
| |
authorization |
Authorization configuration for Kafka brokers. The type depends on the value of the authorization.type property. |
| |
config | Kafka broker config properties with the following prefixes cannot be set: listeners, advertised., broker., listener., host.name, port, inter.broker.listener.name, sasl., ssl., security., password., principal.builder.class, log.dir, zookeeper.connect, zookeeper.set.acl, zookeeper.ssl, zookeeper.clientCnxnSocket, authorizer., super.user, cruise.control.metrics.topic, cruise.control.metrics.reporter.bootstrap.servers (with the exception of: zookeeper.connection.timeout.ms, ssl.cipher.suites, ssl.protocol, ssl.enabled.protocols, cruise.control.metrics.topic.num.partitions, cruise.control.metrics.topic.replication.factor, cruise.control.metrics.topic.retention.ms, cruise.control.metrics.topic.auto.create.retries, cruise.control.metrics.topic.auto.create.timeout.ms). |
map | |
rack |
Configuration of the broker.rack broker config. |
brokerRackInitImage |
The image of the init container used for initializing the broker.rack. |
string | |
affinity |
The property affinity has been deprecated. This feature should now be configured using spec.kafka.template.pod.affinity. |
tolerations |
The property tolerations has been deprecated. This feature should now be configured using spec.kafka.template.pod.tolerations. |
Toleration array | |
livenessProbe | Pod liveness checking. |
readinessProbe | Pod readiness checking. |
jvmOptions | JVM Options for pods. |
jmxOptions | JMX Options for Kafka brokers. |
resources | CPU and memory resources to reserve. See external documentation of core/v1 resourcerequirements. |
metrics | The Prometheus JMX Exporter configuration. See https://github.com/prometheus/jmx_exporter for details of the structure of this configuration. |
map | |
logging |
Logging configuration for Kafka. The type depends on the value of the logging.type property. |
tlsSidecar |
The property tlsSidecar has been deprecated. TLS sidecar configuration. |
template |
Template for Kafka cluster resources. The template allows users to specify how the StatefulSet, Pods, and Services are generated. |
version | The Kafka broker version. Defaults to 2.6.0. Consult the user documentation to understand the process required to upgrade or downgrade the version. |
string |
B.5. EphemeralStorage schema reference
Used in: JbodStorage, KafkaClusterSpec, ZookeeperClusterSpec
The type property is a discriminator that distinguishes the use of the type EphemeralStorage from PersistentClaimStorage. It must have the value ephemeral for the type EphemeralStorage.
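A minimal sketch of an ephemeral storage configuration; the sizeLimit value is illustrative:
# ...
storage:
  type: ephemeral
  sizeLimit: 1Gi   # optional cap on the EmptyDir volume
# ...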
Property | Description |
---|---|
id | Storage identification number. It is mandatory only for storage volumes defined in a storage of type 'jbod'. |
integer | |
sizeLimit | When type=ephemeral, defines the total amount of local storage required for this EmptyDir volume (for example 1Gi). |
string | |
type |
Must be ephemeral. |
string |
B.6. PersistentClaimStorage schema reference
Used in: JbodStorage, KafkaClusterSpec, ZookeeperClusterSpec
The type property is a discriminator that distinguishes the use of the type PersistentClaimStorage from EphemeralStorage. It must have the value persistent-claim for the type PersistentClaimStorage.
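A minimal sketch of a persistent-claim storage configuration; the storage class name is an assumption:
# ...
storage:
  type: persistent-claim
  size: 100Gi
  class: my-storage-class   # assumed storage class name
  deleteClaim: false
# ...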
Property | Description |
---|---|
type |
Must be persistent-claim. |
string | |
size | When type=persistent-claim, defines the size of the persistent volume claim (for example, 1Gi). Mandatory when type=persistent-claim. |
string | |
selector | Specifies a specific persistent volume to use. It contains key:value pairs representing labels for selecting such a volume. |
map | |
deleteClaim | Specifies if the persistent volume claim has to be deleted when the cluster is un-deployed. |
boolean | |
class | The storage class to use for dynamic volume allocation. |
string | |
id | Storage identification number. It is mandatory only for storage volumes defined in a storage of type 'jbod'. |
integer | |
overrides |
Overrides for individual brokers. The overrides field allows you to specify a different configuration for individual brokers. |
B.7. PersistentClaimStorageOverride schema reference
Used in: PersistentClaimStorage
Property | Description |
---|---|
class | The storage class to use for dynamic volume allocation for this broker. |
string | |
broker | Id of the kafka broker (broker identifier). |
integer |
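As a sketch, overrides can assign a different storage class to individual brokers, for example to pin volumes to availability zones; the class names are assumptions:
# ...
storage:
  type: persistent-claim
  size: 100Gi
  class: my-storage-class
  overrides:
    - broker: 0
      class: my-storage-class-zone-1a   # assumed zone-specific class
    - broker: 1
      class: my-storage-class-zone-1b
# ...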
B.8. JbodStorage schema reference
Used in: KafkaClusterSpec
The type property is a discriminator that distinguishes the use of the type JbodStorage from EphemeralStorage and PersistentClaimStorage. It must have the value jbod for the type JbodStorage.
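A minimal sketch of a JBOD configuration with two persistent volumes; sizes and IDs are illustrative:
# ...
storage:
  type: jbod
  volumes:
    - id: 0
      type: persistent-claim
      size: 100Gi
      deleteClaim: false
    - id: 1
      type: persistent-claim
      size: 100Gi
      deleteClaim: false
# ...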
Property | Description |
---|---|
type |
Must be jbod. |
string | |
volumes | List of volumes as Storage objects representing the JBOD disks array. |
B.9. GenericKafkaListener schema reference
Used in: KafkaClusterSpec
Configures listeners to connect to Kafka brokers within and outside OpenShift.
You configure the listeners in the Kafka resource.
Example Kafka resource showing listener configuration
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    #...
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
      - name: tls
        port: 9093
        type: internal
        tls: true
        authentication:
          type: tls
      - name: external1
        port: 9094
        type: route
        tls: true
      - name: external2
        port: 9095
        type: ingress
        tls: true
        authentication:
          type: tls
        configuration:
          bootstrap:
            host: bootstrap.myingress.com
          brokers:
            - broker: 0
              host: broker-0.myingress.com
            - broker: 1
              host: broker-1.myingress.com
            - broker: 2
              host: broker-2.myingress.com
    #...
B.9.1. listeners
You configure Kafka broker listeners using the listeners property in the Kafka resource. Listeners are defined as an array.
Example listener configuration
listeners:
  - name: plain
    port: 9092
    type: internal
    tls: false
The name and port must be unique within the Kafka cluster. The name can be up to 25 characters long, comprising lower-case letters and numbers. Allowed port numbers are 9092 and higher with the exception of ports 9404 and 9999, which are already used for Prometheus and JMX.
By specifying a unique name and port for each listener, you can configure multiple listeners.
B.9.2. type
The type is set as internal
, or for external listeners, as route
, loadbalancer
, nodeport
or ingress
.
- internal
You can configure internal listeners with or without encryption using the tls property.
Example internal listener configuration
#...
spec:
  kafka:
    #...
    listeners:
      #...
      - name: plain
        port: 9092
        type: internal
        tls: false
      - name: tls
        port: 9093
        type: internal
        tls: true
        authentication:
          type: tls
#...
- route
Configures an external listener to expose Kafka using OpenShift Routes and the HAProxy router.
A dedicated Route is created for every Kafka broker pod. An additional Route is created to serve as a Kafka bootstrap address. Kafka clients can use these Routes to connect to Kafka on port 443. The client connects on port 443, the default router port, but traffic is then routed to the port you configure, which is 9094 in this example.
Example route listener configuration
#...
spec:
  kafka:
    #...
    listeners:
      #...
      - name: external1
        port: 9094
        type: route
        tls: true
#...
- ingress
Configures an external listener to expose Kafka using Kubernetes Ingress and the NGINX Ingress Controller for Kubernetes.
A dedicated Ingress resource is created for every Kafka broker pod. An additional Ingress resource is created to serve as a Kafka bootstrap address. Kafka clients can use these Ingress resources to connect to Kafka on port 443. The client connects on port 443, the default controller port, but traffic is then routed to the port you configure, which is 9095 in the following example.
You must specify the hostnames used by the bootstrap and per-broker services using the GenericKafkaListenerConfigurationBootstrap and GenericKafkaListenerConfigurationBroker properties.
Example ingress listener configuration
#...
spec:
  kafka:
    #...
    listeners:
      #...
      - name: external2
        port: 9095
        type: ingress
        tls: true
        authentication:
          type: tls
        configuration:
          bootstrap:
            host: bootstrap.myingress.com
          brokers:
            - broker: 0
              host: broker-0.myingress.com
            - broker: 1
              host: broker-1.myingress.com
            - broker: 2
              host: broker-2.myingress.com
#...
Note: External listeners using Ingress are currently only tested with the NGINX Ingress Controller for Kubernetes.
- loadbalancer
Configures an external listener to expose Kafka using Loadbalancer type Services.
A new loadbalancer service is created for every Kafka broker pod. An additional loadbalancer is created to serve as a Kafka bootstrap address. Loadbalancers listen to the specified port number, which is port 9094 in the following example.
You can use the loadBalancerSourceRanges property to configure source ranges to restrict access to the specified IP addresses.
Example loadbalancer listener configuration
#...
spec:
  kafka:
    #...
    listeners:
      - name: external3
        port: 9094
        type: loadbalancer
        tls: true
        configuration:
          loadBalancerSourceRanges:
            - 10.0.0.0/8
            - 88.208.76.87/32
#...
- nodeport
Configures an external listener to expose Kafka using NodePort type Services.
Kafka clients connect directly to the nodes of OpenShift. An additional NodePort type of service is created to serve as a Kafka bootstrap address.
When configuring the advertised addresses for the Kafka broker pods, AMQ Streams uses the address of the node on which the given pod is running. You can use the preferredNodePortAddressType property to configure the first address type checked as the node address.
Example nodeport listener configuration
#...
spec:
  kafka:
    #...
    listeners:
      #...
      - name: external4
        port: 9095
        type: nodeport
        tls: false
        configuration:
          preferredNodePortAddressType: InternalDNS
#...
Note: TLS hostname verification is not currently supported when exposing Kafka clusters using node ports.
B.9.3. port
The port number is the port used in the Kafka cluster, which might not be the same port used for access by a client.
- loadbalancer listeners use the specified port number, as do internal listeners
- ingress and route listeners use port 443 for access
- nodeport listeners use the port number assigned by OpenShift
For client connection, use the address and port for the bootstrap service of the listener. You can retrieve this from the status of the Kafka resource.
Example command to retrieve the address and port for client connection
oc get kafka KAFKA-CLUSTER-NAME -o=jsonpath='{.status.listeners[?(@.type=="external")].bootstrapServers}{"\n"}'
Listeners cannot be configured to use the ports set aside for interbroker communication (9091) and metrics (9404).
B.9.4. tls
The tls property is required.
By default, TLS encryption is not enabled. To enable it, set the tls property to true.
TLS encryption is always used with route listeners.
B.9.5. authentication
Authentication for the listener can be specified as:
- Mutual TLS (tls)
- SCRAM-SHA-512 (scram-sha-512)
- Token-based OAuth 2.0 (oauth)
B.9.6. networkPolicyPeers
Use networkPolicyPeers to configure network policies that restrict access to a listener at the network level. The following example shows a networkPolicyPeers configuration for a plain and a tls listener.
listeners:
  #...
  - name: plain
    port: 9092
    type: internal
    tls: true
    authentication:
      type: scram-sha-512
    networkPolicyPeers:
      - podSelector:
          matchLabels:
            app: kafka-sasl-consumer
      - podSelector:
          matchLabels:
            app: kafka-sasl-producer
  - name: tls
    port: 9093
    type: internal
    tls: true
    authentication:
      type: tls
    networkPolicyPeers:
      - namespaceSelector:
          matchLabels:
            project: myproject
      - namespaceSelector:
          matchLabels:
            project: myproject2
# ...
In the example:
- Only application pods matching the labels app: kafka-sasl-consumer and app: kafka-sasl-producer can connect to the plain listener. The application pods must be running in the same namespace as the Kafka broker.
- Only application pods running in namespaces matching the labels project: myproject and project: myproject2 can connect to the tls listener.
The syntax of the networkPolicyPeers field is the same as the from field in NetworkPolicy resources.
Backwards compatibility with KafkaListeners
GenericKafkaListener replaces the KafkaListeners schema, which is now deprecated.
To convert the listeners configured using the KafkaListeners schema into the format of the GenericKafkaListener schema, with backwards compatibility, use the following names, ports and types:
listeners:
  #...
  - name: plain
    port: 9092
    type: internal
    tls: false
  - name: tls
    port: 9093
    type: internal
    tls: true
  - name: external
    port: 9094
    type: EXTERNAL-LISTENER-TYPE 1
    tls: true
# ...
- 1 - Options: ingress, loadbalancer, nodeport, route
Property | Description |
---|---|
name | Name of the listener. The name will be used to identify the listener and the related OpenShift objects. The name has to be unique within a given Kafka cluster. The name can consist of lowercase characters and numbers and be up to 11 characters long. |
string | |
port | Port number used by the listener inside Kafka. The port number has to be unique within a given Kafka cluster. Allowed port numbers are 9092 and higher with the exception of ports 9404 and 9999, which are already used for Prometheus and JMX. Depending on the listener type, the port number might not be the same as the port number that connects Kafka clients. |
integer | |
type |
Type of the listener. Currently the supported types are internal, route, loadbalancer, nodeport, and ingress. |
string (one of [ingress, internal, route, loadbalancer, nodeport]) | |
tls | Enables TLS encryption on the listener. This is a required property. |
boolean | |
authentication |
Authentication configuration for this listener. The type depends on the value of the authentication.type property. |
| |
configuration | Additional listener configuration. |
networkPolicyPeers | List of peers which should be able to connect to this listener. Peers in this list are combined using a logical OR operation. If this field is empty or missing, all connections will be allowed for this listener. If this field is present and contains at least one item, the listener only allows the traffic which matches at least one item in this list. See external documentation of networking.k8s.io/v1 networkpolicypeer. |
NetworkPolicyPeer array |
B.10. KafkaListenerAuthenticationTls schema reference
Used in: GenericKafkaListener, KafkaListenerExternalIngress, KafkaListenerExternalLoadBalancer, KafkaListenerExternalNodePort, KafkaListenerExternalRoute, KafkaListenerPlain, KafkaListenerTls
The type property is a discriminator that distinguishes the use of the type KafkaListenerAuthenticationTls from KafkaListenerAuthenticationScramSha512 and KafkaListenerAuthenticationOAuth. It must have the value tls for the type KafkaListenerAuthenticationTls.
Property | Description |
---|---|
type |
Must be tls. |
string |
B.11. KafkaListenerAuthenticationScramSha512 schema reference
Used in: GenericKafkaListener, KafkaListenerExternalIngress, KafkaListenerExternalLoadBalancer, KafkaListenerExternalNodePort, KafkaListenerExternalRoute, KafkaListenerPlain, KafkaListenerTls
The type property is a discriminator that distinguishes the use of the type KafkaListenerAuthenticationScramSha512 from KafkaListenerAuthenticationTls and KafkaListenerAuthenticationOAuth. It must have the value scram-sha-512 for the type KafkaListenerAuthenticationScramSha512.
Property | Description |
---|---|
type |
Must be scram-sha-512. |
string |
B.12. KafkaListenerAuthenticationOAuth schema reference
Used in: GenericKafkaListener, KafkaListenerExternalIngress, KafkaListenerExternalLoadBalancer, KafkaListenerExternalNodePort, KafkaListenerExternalRoute, KafkaListenerPlain, KafkaListenerTls
The type property is a discriminator that distinguishes the use of the type KafkaListenerAuthenticationOAuth from KafkaListenerAuthenticationTls and KafkaListenerAuthenticationScramSha512. It must have the value oauth for the type KafkaListenerAuthenticationOAuth.
Property | Description |
---|---|
accessTokenIsJwt |
Configure whether the access token is treated as JWT. This must be set to false if the authorization server returns opaque tokens. Defaults to true. |
boolean | |
checkAccessTokenType |
Configure whether the access token type check is performed or not. This should be set to |
boolean | |
checkIssuer |
Enable or disable issuer checking. By default issuer is checked using the value configured by validIssuerUri. Default value is true. |
boolean | |
clientId | OAuth Client ID which the Kafka broker can use to authenticate against the authorization server and use the introspect endpoint URI. |
string | |
clientSecret | Link to OpenShift Secret containing the OAuth client secret which the Kafka broker can use to authenticate against the authorization server and use the introspect endpoint URI. |
disableTlsHostnameVerification |
Enable or disable TLS hostname verification. Default value is false. |
boolean | |
enableECDSA |
Enable or disable ECDSA support by installing BouncyCastle crypto provider. Default value is false. |
boolean | |
fallbackUserNameClaim |
The fallback username claim to be used for the user id if the claim specified by userNameClaim is not present. |
string | |
fallbackUserNamePrefix |
The prefix to use with the value of fallbackUserNameClaim to construct the user ID. |
string | |
introspectionEndpointUri | URI of the token introspection endpoint which can be used to validate opaque non-JWT tokens. |
string | |
jwksEndpointUri | URI of the JWKS certificate endpoint, which can be used for local JWT validation. |
string | |
jwksExpirySeconds |
Configures how often the JWKS certificates are considered valid. The expiry interval has to be at least 60 seconds longer than the refresh interval specified in jwksRefreshSeconds. Defaults to 360 seconds. |
integer | |
jwksMinRefreshPauseSeconds | The minimum pause between two consecutive refreshes. When an unknown signing key is encountered the refresh is scheduled immediately, but will always wait for this minimum pause. Defaults to 1 second. |
integer | |
jwksRefreshSeconds |
Configures how often the JWKS certificates are refreshed. The refresh interval has to be at least 60 seconds shorter than the expiry interval specified in jwksExpirySeconds. Defaults to 300 seconds. |
integer | |
maxSecondsWithoutReauthentication | Maximum number of seconds the authenticated session remains valid without re-authentication. This enables Apache Kafka re-authentication feature, and causes sessions to expire when the access token expires. If the access token expires before max time or if max time is reached, the client has to re-authenticate, otherwise the server will drop the connection. Not set by default - the authenticated session does not expire when the access token expires. |
integer | |
tlsTrustedCertificates | Trusted certificates for TLS connection to the OAuth server. |
| |
type |
Must be oauth. |
string | |
userInfoEndpointUri | URI of the User Info Endpoint to use as a fallback to obtaining the user id when the Introspection Endpoint does not return information that can be used for the user id. |
string | |
userNameClaim |
Name of the claim from the JWT authentication token, Introspection Endpoint response or User Info Endpoint response which will be used to extract the user id. Defaults to sub. |
string | |
validIssuerUri | URI of the token issuer used for authentication. |
string | |
validTokenType |
Valid value for the |
string |
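As a sketch, an oauth listener pointing at a hypothetical authorization server; the URIs and realm name are assumptions:
listeners:
  #...
  - name: external
    port: 9094
    type: route
    tls: true
    authentication:
      type: oauth
      validIssuerUri: https://auth.example.com/auth/realms/my-realm
      jwksEndpointUri: https://auth.example.com/auth/realms/my-realm/protocol/openid-connect/certs
      userNameClaim: preferred_username
# ...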
B.13. GenericSecretSource schema reference
Used in: KafkaClientAuthenticationOAuth, KafkaListenerAuthenticationOAuth
Property | Description |
---|---|
key | The key under which the secret value is stored in the OpenShift Secret. |
string | |
secretName | The name of the OpenShift Secret containing the secret value. |
string |
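As a sketch, the clientSecret property of OAuth authentication references a value in an OpenShift Secret using this schema; the Secret name and key are assumptions:
# ...
clientSecret:
  secretName: my-oauth-secret   # assumed Secret name
  key: client-secret            # key under which the secret value is stored
# ...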
B.14. CertSecretSource schema reference
Used in: KafkaAuthorizationKeycloak, KafkaBridgeTls, KafkaClientAuthenticationOAuth, KafkaConnectTls, KafkaListenerAuthenticationOAuth, KafkaMirrorMaker2Tls, KafkaMirrorMakerTls
Property | Description |
---|---|
certificate | The name of the file certificate in the Secret. |
string | |
secretName | The name of the Secret containing the certificate. |
string |
B.15. GenericKafkaListenerConfiguration schema reference
Used in: GenericKafkaListener
Configuration for Kafka listeners.
B.15.1. brokerCertChainAndKey
The brokerCertChainAndKey property is only used with listeners that have TLS encryption enabled. You can use the property to provide your own Kafka listener certificates.
Example configuration for a loadbalancer external listener with TLS encryption enabled
listeners:
  #...
  - name: external
    port: 9094
    type: loadbalancer
    tls: true
    authentication:
      type: tls
    configuration:
      brokerCertChainAndKey:
        secretName: my-secret
        certificate: my-listener-certificate.crt
        key: my-listener-key.key
# ...
B.15.2. externalTrafficPolicy
The externalTrafficPolicy property is used with loadbalancer and nodeport listeners. When exposing Kafka outside of OpenShift, you can choose Local or Cluster. Local avoids hops to other nodes and preserves the client IP, whereas Cluster does neither. The default is Cluster.
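For example, a sketch of a loadbalancer listener that preserves the client IP:
listeners:
  #...
  - name: external
    port: 9094
    type: loadbalancer
    tls: true
    configuration:
      externalTrafficPolicy: Local   # avoids extra hops and preserves the client IP
# ...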
B.15.3. loadBalancerSourceRanges
The loadBalancerSourceRanges property is only used with loadbalancer listeners. When exposing Kafka outside of OpenShift, use source ranges, in addition to labels and annotations, to customize how a service is created.
Example source ranges configured for a loadbalancer listener
listeners:
  #...
  - name: external
    port: 9094
    type: loadbalancer
    tls: false
    configuration:
      externalTrafficPolicy: Local
      loadBalancerSourceRanges:
        - 10.0.0.0/8
        - 88.208.76.87/32
# ...
B.15.4. class
The class property is only used with ingress listeners.
By default, the Ingress class is set to nginx. You can change the Ingress class using the class property.
Example of an external listener of type ingress using Ingress class nginx-internal
listeners:
  #...
  - name: external
    port: 9094
    type: ingress
    tls: true
    configuration:
      class: nginx-internal
# ...
B.15.5. preferredNodePortAddressType
The preferredNodePortAddressType property is only used with nodeport listeners.
Use the preferredNodePortAddressType property in your listener configuration to specify the first address type checked as the node address. This property is useful, for example, if your deployment does not have DNS support, or you only want to expose a broker internally through an internal DNS or IP address. If an address of this type is found, it is used. If the preferred address type is not found, AMQ Streams proceeds through the types in the standard order of priority:
- ExternalDNS
- ExternalIP
- Hostname
- InternalDNS
- InternalIP
Example of an external listener configured with a preferred node port address type
listeners:
  #...
  - name: external
    port: 9094
    type: nodeport
    tls: false
    configuration:
      preferredNodePortAddressType: InternalDNS
# ...
B.15.6. useServiceDnsDomain
The useServiceDnsDomain property is only used with internal listeners. It defines whether the fully-qualified DNS names that include the cluster service suffix (usually .cluster.local) are used. With useServiceDnsDomain set as false, the advertised addresses are generated without the service suffix; for example, my-cluster-kafka-0.my-cluster-kafka-brokers.myproject.svc. With useServiceDnsDomain set as true, the advertised addresses are generated with the service suffix; for example, my-cluster-kafka-0.my-cluster-kafka-brokers.myproject.svc.cluster.local. Default is false.
Example of an internal listener configured to use the Service DNS domain
listeners:
  #...
  - name: plain
    port: 9092
    type: internal
    tls: false
    configuration:
      useServiceDnsDomain: true
# ...
If your OpenShift cluster uses a different service suffix than .cluster.local, you can configure the suffix using the KUBERNETES_SERVICE_DNS_DOMAIN environment variable in the Cluster Operator configuration. See Section 5.1.1, “Cluster Operator configuration” for more details.
Property | Description |
---|---|
brokerCertChainAndKey |
Reference to the secret which holds the certificate and private key pair. The certificate can optionally contain the whole chain. |
externalTrafficPolicy |
Specifies whether the service routes external traffic to node-local or cluster-wide endpoints. |
string (one of [Local, Cluster]) | |
loadBalancerSourceRanges |
A list of CIDR ranges (for example 10.0.0.0/8 or 88.208.76.87/32) from which clients can connect. This field can be used only with loadbalancer type listeners. |
string array | |
bootstrap | Bootstrap configuration. |
brokers | Per-broker configurations. |
class |
Configures the Ingress class that defines which Ingress controller will be used. By default set to nginx. This field can be used only with ingress type listeners. |
string | |
preferredNodePortAddressType |
Defines which address type should be used as the node address. Available types are: ExternalDNS, ExternalIP, InternalDNS, InternalIP and Hostname. This field can be used to select the address type which will be used as the preferred type and checked first. In case no address is found for this address type, the other types are used in the default order. This field can be used only with nodeport type listeners. |
string (one of [ExternalDNS, ExternalIP, Hostname, InternalIP, InternalDNS]) | |
useServiceDnsDomain |
Configures whether the OpenShift service DNS domain should be used or not. If set to true, the generated addresses will contain the service DNS domain suffix (by default .cluster.local). Defaults to false. This field can be used only with internal type listeners. |
boolean |
B.16. CertAndKeySecretSource schema reference
Used in: GenericKafkaListenerConfiguration, IngressListenerConfiguration, KafkaClientAuthenticationTls, KafkaListenerExternalConfiguration, NodePortListenerConfiguration, TlsListenerConfiguration
Property | Description |
---|---|
certificate | The name of the file certificate in the Secret. |
string | |
key | The name of the private key in the Secret. |
string | |
secretName | The name of the Secret containing the certificate. |
string |
B.17. GenericKafkaListenerConfigurationBootstrap schema reference
Used in: GenericKafkaListenerConfiguration
Configures bootstrap service overrides for external listeners.
Broker service equivalents of the nodePort, host, loadBalancerIP and annotations properties are configured in the GenericKafkaListenerConfigurationBroker schema.
B.17.1. alternativeNames
You can specify alternative names for the bootstrap service. The names are added to the broker certificates and can be used for TLS hostname verification. The alternativeNames property is applicable to all types of external listeners.
Example of an external route listener configured with an additional bootstrap address
listeners:
  #...
  - name: external
    port: 9094
    type: route
    tls: true
    authentication:
      type: tls
    configuration:
      bootstrap:
        alternativeNames:
          - example.hostname1
          - example.hostname2
# ...
B.17.2. host
The host property is used with route and ingress listeners to specify the hostnames used by the bootstrap and per-broker services.
A host property value is mandatory for ingress listener configuration, as the Ingress controller does not assign any hostnames automatically. Make sure that the hostnames resolve to the Ingress endpoints. AMQ Streams will not perform any validation that the requested hosts are available and properly routed to the Ingress endpoints.
Example of host configuration for an ingress listener
listeners:
  #...
  - name: external
    port: 9094
    type: ingress
    tls: true
    authentication:
      type: tls
    configuration:
      bootstrap:
        host: bootstrap.myingress.com
      brokers:
        - broker: 0
          host: broker-0.myingress.com
        - broker: 1
          host: broker-1.myingress.com
        - broker: 2
          host: broker-2.myingress.com
# ...
By default, route listener hosts are automatically assigned by OpenShift. However, you can override the assigned route hosts by specifying hosts.
AMQ Streams does not perform any validation that the requested hosts are available. You must ensure that they are free and can be used.
Example of host configuration for a route listener
# ...
listeners:
  #...
  - name: external
    port: 9094
    type: route
    tls: true
    authentication:
      type: tls
    configuration:
      bootstrap:
        host: bootstrap.myrouter.com
      brokers:
        - broker: 0
          host: broker-0.myrouter.com
        - broker: 1
          host: broker-1.myrouter.com
        - broker: 2
          host: broker-2.myrouter.com
# ...
B.17.3. nodePort
By default, the port numbers used for the bootstrap and broker services are automatically assigned by OpenShift. You can override the assigned node ports for nodeport listeners by specifying the requested port numbers.
AMQ Streams does not perform any validation on the requested ports. You must ensure that they are free and available for use.
Example of an external listener configured with overrides for node ports
# ...
listeners:
  #...
  - name: external
    port: 9094
    type: nodeport
    tls: true
    authentication:
      type: tls
    configuration:
      bootstrap:
        nodePort: 32100
      brokers:
        - broker: 0
          nodePort: 32000
        - broker: 1
          nodePort: 32001
        - broker: 2
          nodePort: 32002
# ...
B.17.4. loadBalancerIP
Use the loadBalancerIP property to request a specific IP address when creating a loadbalancer. Use this property when you need to use a loadbalancer with a specific IP address. The loadBalancerIP field is ignored if the cloud provider does not support the feature.
Example of an external listener of type loadbalancer with specific loadbalancer IP address requests
# ...
listeners:
  #...
  - name: external
    port: 9094
    type: loadbalancer
    tls: true
    authentication:
      type: tls
    configuration:
      bootstrap:
        loadBalancerIP: 172.29.3.10
      brokers:
        - broker: 0
          loadBalancerIP: 172.29.3.1
        - broker: 1
          loadBalancerIP: 172.29.3.2
        - broker: 2
          loadBalancerIP: 172.29.3.3
# ...
B.17.5. annotations
Use the annotations property to add annotations to loadbalancer, nodeport or ingress listeners. You can use these annotations to instrument DNS tooling such as External DNS, which automatically assigns DNS names to the loadbalancer services.
Example of an external listener of type loadbalancer using annotations
# ...
listeners:
  #...
  - name: external
    port: 9094
    type: loadbalancer
    tls: true
    authentication:
      type: tls
    configuration:
      bootstrap:
        annotations:
          external-dns.alpha.kubernetes.io/hostname: kafka-bootstrap.mydomain.com.
          external-dns.alpha.kubernetes.io/ttl: "60"
      brokers:
        - broker: 0
          annotations:
            external-dns.alpha.kubernetes.io/hostname: kafka-broker-0.mydomain.com.
            external-dns.alpha.kubernetes.io/ttl: "60"
        - broker: 1
          annotations:
            external-dns.alpha.kubernetes.io/hostname: kafka-broker-1.mydomain.com.
            external-dns.alpha.kubernetes.io/ttl: "60"
        - broker: 2
          annotations:
            external-dns.alpha.kubernetes.io/hostname: kafka-broker-2.mydomain.com.
            external-dns.alpha.kubernetes.io/ttl: "60"
# ...
Property | Description |
---|---|
alternativeNames | Additional alternative names for the bootstrap service. The alternative names will be added to the list of subject alternative names of the TLS certificates. |
string array | |
host |
The bootstrap host. This field will be used in the Ingress resource or in the Route resource to specify the desired hostname. This field can be used only with route and ingress type listeners. |
string | |
nodePort |
Node port for the bootstrap service. This field can be used only with nodeport type listeners. |
integer | |
loadBalancerIP |
The loadbalancer is requested with the IP address specified in this field. This feature depends on whether the underlying cloud provider supports specifying the loadBalancerIP when a loadbalancer is created. This field is ignored if the cloud provider does not support the feature. This field can be used only with loadbalancer type listeners. |
string | |
annotations |
Annotations that will be added to the Ingress, Route, or Service resource. You can use this field to configure DNS providers such as External DNS. This field can be used only with loadbalancer, nodeport, or ingress type listeners. |
map |
B.18. GenericKafkaListenerConfigurationBroker schema reference
Used in: GenericKafkaListenerConfiguration
Configures broker service overrides for external listeners.
You can see example configuration for the nodePort, host, loadBalancerIP and annotations properties in the GenericKafkaListenerConfigurationBootstrap schema, which configures bootstrap service overrides for external listeners.
Advertised addresses for brokers
By default, AMQ Streams tries to automatically determine the hostnames and ports that your Kafka cluster advertises to its clients. This is not sufficient in all situations, because the infrastructure on which AMQ Streams is running might not provide the right hostname or port through which Kafka can be accessed.
You can specify a broker ID and customize the advertised hostname and port in the configuration property of the external listener. AMQ Streams will then automatically configure the advertised address in the Kafka brokers and add it to the broker certificates so it can be used for TLS hostname verification. Overriding the advertised host and ports is available for all types of external listeners.
Example of an external route listener configured with overrides for advertised addresses
listeners:
  #...
  - name: external
    port: 9094
    type: route
    tls: true
    authentication:
      type: tls
    configuration:
      brokers:
        - broker: 0
          advertisedHost: example.hostname.0
          advertisedPort: 12340
        - broker: 1
          advertisedHost: example.hostname.1
          advertisedPort: 12341
        - broker: 2
          advertisedHost: example.hostname.2
          advertisedPort: 12342
# ...
Property | Description |
---|---|
broker | ID of the kafka broker (broker identifier). Broker IDs start from 0 and correspond to the number of broker replicas. |
integer | |
advertisedHost |
The host name which will be used in the brokers' |
string | |
advertisedPort |
The port number which will be used in the brokers' |
integer | |
host |
The broker host. This field will be used in the Ingress resource or in the Route resource to specify the desired hostname. This field can be used only with route and ingress type listeners. |
string | |
nodePort |
Node port for the per-broker service. This field can be used only with nodeport type listeners. |
integer | |
loadBalancerIP |
The loadbalancer is requested with the IP address specified in this field. This feature depends on whether the underlying cloud provider supports specifying the loadBalancerIP when a loadbalancer is created. This field is ignored if the cloud provider does not support the feature. This field can be used only with loadbalancer type listeners. |
string | |
annotations |
Annotations that will be added to the Ingress, Route, or Service resource. You can use this field to configure DNS providers such as External DNS. This field can be used only with loadbalancer, nodeport, or ingress type listeners. |
map |
B.19. KafkaListeners schema reference
The type KafkaListeners has been deprecated. Please use GenericKafkaListener instead.
Used in: KafkaClusterSpec
Refer to previous documentation for example configuration.
Property | Description |
---|---|
plain | Configures plain listener on port 9092. |
tls | Configures TLS listener on port 9093. |
external |
Configures external listener on port 9094. The type depends on the value of the external.type property. |
|
B.20. KafkaListenerPlain schema reference
Used in: KafkaListeners
Property | Description |
---|---|
authentication |
Authentication configuration for this listener. Since this listener does not use TLS transport you cannot configure an authentication with type: tls. |
| |
networkPolicyPeers | List of peers which should be able to connect to this listener. Peers in this list are combined using a logical OR operation. If this field is empty or missing, all connections will be allowed for this listener. If this field is present and contains at least one item, the listener only allows the traffic which matches at least one item in this list. See external documentation of networking.k8s.io/v1 networkpolicypeer. |
NetworkPolicyPeer array |
B.21. KafkaListenerTls schema reference
Used in: KafkaListeners
Property | Description |
---|---|
authentication |
Authentication configuration for this listener. The type depends on the value of the authentication.type property. |
| |
configuration | Configuration of TLS listener. |
networkPolicyPeers | List of peers which should be able to connect to this listener. Peers in this list are combined using a logical OR operation. If this field is empty or missing, all connections will be allowed for this listener. If this field is present and contains at least one item, the listener only allows the traffic which matches at least one item in this list. See external documentation of networking.k8s.io/v1 networkpolicypeer. |
NetworkPolicyPeer array |
B.22. TlsListenerConfiguration schema reference
Used in: KafkaListenerTls
Property | Description |
---|---|
brokerCertChainAndKey |
Reference to the secret which holds the certificate and private key pair. The certificate can optionally contain the whole chain. |
B.23. KafkaListenerExternalRoute schema reference
Used in: KafkaListeners
The type property is a discriminator that distinguishes the use of the type KafkaListenerExternalRoute from KafkaListenerExternalLoadBalancer, KafkaListenerExternalNodePort and KafkaListenerExternalIngress. It must have the value route for the type KafkaListenerExternalRoute.
Property | Description |
---|---|
type |
Must be route. |
string | |
authentication |
Authentication configuration for Kafka brokers. The type depends on the value of the authentication.type property. |
| |
overrides | Overrides for external bootstrap and broker services and externally advertised addresses. |
configuration | External listener configuration. |
networkPolicyPeers | List of peers which should be able to connect to this listener. Peers in this list are combined using a logical OR operation. If this field is empty or missing, all connections will be allowed for this listener. If this field is present and contains at least one item, the listener only allows the traffic which matches at least one item in this list. See external documentation of networking.k8s.io/v1 networkpolicypeer. |
NetworkPolicyPeer array |
B.24. RouteListenerOverride schema reference
Used in: KafkaListenerExternalRoute
Property | Description |
---|---|
bootstrap | External bootstrap service configuration. |
brokers | External broker services configuration. |
B.25. RouteListenerBootstrapOverride schema reference
Used in: RouteListenerOverride
Property | Description |
---|---|
address | Additional address name for the bootstrap service. The address will be added to the list of subject alternative names of the TLS certificates. |
string | |
host |
Host for the bootstrap route. This field will be used in the `spec.host` field of the `Route` resource. |
string |
B.26. RouteListenerBrokerOverride
schema reference
Used in: RouteListenerOverride
Property | Description |
---|---|
broker | ID of the Kafka broker (broker identifier). |
integer | |
advertisedHost |
The host name which will be used in the brokers' `advertised.listeners` configuration. |
string | |
advertisedPort |
The port number which will be used in the brokers' `advertised.listeners` configuration. |
integer | |
host |
Host for the broker route. This field will be used in the `spec.host` field of the `Route` resource. |
string |
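For illustration, the bootstrap and broker route overrides might be combined under a route type external listener as follows; the host names are placeholders, not defaults:

# ...
listeners:
  external:
    type: route
    overrides:
      bootstrap:
        host: bootstrap.myrouter.com  # placeholder host
      brokers:
        - broker: 0
          host: broker-0.myrouter.com  # placeholder host
        - broker: 1
          host: broker-1.myrouter.com
# ...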
B.27. KafkaListenerExternalConfiguration
schema reference
Used in: KafkaListenerExternalLoadBalancer
, KafkaListenerExternalRoute
Property | Description |
---|---|
brokerCertChainAndKey |
Reference to the `Secret` which holds the certificate and private key pair. The certificate can optionally contain the whole chain. |
B.28. KafkaListenerExternalLoadBalancer
schema reference
Used in: KafkaListeners
The type
property is a discriminator that distinguishes the use of the type KafkaListenerExternalLoadBalancer
from KafkaListenerExternalRoute
, KafkaListenerExternalNodePort
, KafkaListenerExternalIngress
. It must have the value loadbalancer
for the type KafkaListenerExternalLoadBalancer
.
Property | Description |
---|---|
type |
Must be `loadbalancer`. |
string | |
authentication |
Authentication configuration for Kafka brokers. The type depends on the value of the `authentication.type` property within the given object, which must be one of [tls, scram-sha-512, oauth]. |
| |
overrides | Overrides for external bootstrap and broker services and externally advertised addresses. |
configuration | External listener configuration. |
networkPolicyPeers | List of peers which should be able to connect to this listener. Peers in this list are combined using a logical OR operation. If this field is empty or missing, all connections will be allowed for this listener. If this field is present and contains at least one item, the listener only allows the traffic which matches at least one item in this list. See external documentation of networking.k8s.io/v1 networkpolicypeer. |
NetworkPolicyPeer array | |
tls |
Enables TLS encryption on the listener. By default set to `true` for enabled TLS encryption. |
boolean |
B.29. LoadBalancerListenerOverride
schema reference
Used in: KafkaListenerExternalLoadBalancer
Property | Description |
---|---|
bootstrap | External bootstrap service configuration. |
brokers | External broker services configuration. |
B.30. LoadBalancerListenerBootstrapOverride
schema reference
Used in: LoadBalancerListenerOverride
Property | Description |
---|---|
address | Additional address name for the bootstrap service. The address will be added to the list of subject alternative names of the TLS certificates. |
string | |
dnsAnnotations |
Annotations that will be added to the `Service` resource. You can use this field to instrument DNS providers such as External DNS. |
map | |
loadBalancerIP |
The loadbalancer is requested with the IP address specified in this field. This feature depends on whether the underlying cloud provider supports specifying the `loadBalancerIP` when a load balancer is created. This field is ignored if the cloud provider does not support the feature. |
string |
B.31. LoadBalancerListenerBrokerOverride
schema reference
Used in: LoadBalancerListenerOverride
Property | Description |
---|---|
broker | ID of the Kafka broker (broker identifier). |
integer | |
advertisedHost |
The host name which will be used in the brokers' `advertised.listeners` configuration. |
string | |
advertisedPort |
The port number which will be used in the brokers' `advertised.listeners` configuration. |
integer | |
dnsAnnotations |
Annotations that will be added to the `Service` resource. You can use this field to instrument DNS providers such as External DNS. |
map | |
loadBalancerIP |
The loadbalancer is requested with the IP address specified in this field. This feature depends on whether the underlying cloud provider supports specifying the `loadBalancerIP` when a load balancer is created. This field is ignored if the cloud provider does not support the feature. |
string |
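For illustration, a sketch combining these loadbalancer overrides; the IP addresses are placeholders and the External DNS annotation is one possible choice of DNS provider annotation:

# ...
listeners:
  external:
    type: loadbalancer
    overrides:
      bootstrap:
        loadBalancerIP: 172.29.3.10  # placeholder address
        dnsAnnotations:
          external-dns.alpha.kubernetes.io/hostname: bootstrap.mydomain.com
      brokers:
        - broker: 0
          loadBalancerIP: 172.29.3.11  # placeholder address
          dnsAnnotations:
            external-dns.alpha.kubernetes.io/hostname: broker-0.mydomain.com
# ...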
B.32. KafkaListenerExternalNodePort
schema reference
Used in: KafkaListeners
The type
property is a discriminator that distinguishes the use of the type KafkaListenerExternalNodePort
from KafkaListenerExternalRoute
, KafkaListenerExternalLoadBalancer
, KafkaListenerExternalIngress
. It must have the value nodeport
for the type KafkaListenerExternalNodePort
.
Property | Description |
---|---|
type |
Must be `nodeport`. |
string | |
authentication |
Authentication configuration for Kafka brokers. The type depends on the value of the `authentication.type` property within the given object, which must be one of [tls, scram-sha-512, oauth]. |
| |
overrides | Overrides for external bootstrap and broker services and externally advertised addresses. |
configuration | External listener configuration. |
networkPolicyPeers | List of peers which should be able to connect to this listener. Peers in this list are combined using a logical OR operation. If this field is empty or missing, all connections will be allowed for this listener. If this field is present and contains at least one item, the listener only allows the traffic which matches at least one item in this list. See external documentation of networking.k8s.io/v1 networkpolicypeer. |
NetworkPolicyPeer array | |
tls |
Enables TLS encryption on the listener. By default set to `true` for enabled TLS encryption. |
boolean |
B.33. NodePortListenerOverride
schema reference
Used in: KafkaListenerExternalNodePort
Property | Description |
---|---|
bootstrap | External bootstrap service configuration. |
brokers | External broker services configuration. |
B.34. NodePortListenerBootstrapOverride
schema reference
Used in: NodePortListenerOverride
Property | Description |
---|---|
address | Additional address name for the bootstrap service. The address will be added to the list of subject alternative names of the TLS certificates. |
string | |
dnsAnnotations |
Annotations that will be added to the `Service` resource. You can use this field to instrument DNS providers such as External DNS. |
map | |
nodePort | Node port for the bootstrap service. |
integer |
B.35. NodePortListenerBrokerOverride
schema reference
Used in: NodePortListenerOverride
Property | Description |
---|---|
broker | ID of the Kafka broker (broker identifier). |
integer | |
advertisedHost |
The host name which will be used in the brokers' `advertised.listeners` configuration. |
string | |
advertisedPort |
The port number which will be used in the brokers' `advertised.listeners` configuration. |
integer | |
nodePort | Node port for the broker service. |
integer | |
dnsAnnotations |
Annotations that will be added to the `Service` resource. You can use this field to instrument DNS providers such as External DNS. |
map |
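For illustration, a sketch pinning the bootstrap and per-broker node ports; the port numbers are placeholders:

# ...
listeners:
  external:
    type: nodeport
    overrides:
      bootstrap:
        nodePort: 32100  # placeholder port
      brokers:
        - broker: 0
          nodePort: 32000  # placeholder port
        - broker: 1
          nodePort: 32001
# ...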
B.36. NodePortListenerConfiguration
schema reference
Used in: KafkaListenerExternalNodePort
Property | Description |
---|---|
brokerCertChainAndKey |
Reference to the `Secret` which holds the certificate and private key pair. The certificate can optionally contain the whole chain. |
preferredAddressType |
Defines which address type should be used as the node address. Available types are: `ExternalDNS`, `ExternalIP`, `InternalDNS`, `InternalIP` and `Hostname`. By default, the addresses are used in the following order (the first one found is used): `ExternalDNS`, `ExternalIP`, `InternalDNS`, `InternalIP`, `Hostname`. This field can be used to select the address type which will be used as the preferred type and checked first. In case no address is found for this address type, the other types are used in the default order. |
string (one of [ExternalDNS, ExternalIP, Hostname, InternalIP, InternalDNS]) |
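A sketch of a nodeport listener that prefers internal DNS names when advertising node addresses:

# ...
listeners:
  external:
    type: nodeport
    tls: true
    configuration:
      preferredAddressType: InternalDNS  # checked first; the other types follow in the default order
# ...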
B.37. KafkaListenerExternalIngress
schema reference
Used in: KafkaListeners
The type
property is a discriminator that distinguishes the use of the type KafkaListenerExternalIngress
from KafkaListenerExternalRoute
, KafkaListenerExternalLoadBalancer
, KafkaListenerExternalNodePort
. It must have the value ingress
for the type KafkaListenerExternalIngress
.
Property | Description |
---|---|
type |
Must be `ingress`. |
string | |
authentication |
Authentication configuration for Kafka brokers. The type depends on the value of the `authentication.type` property within the given object, which must be one of [tls, scram-sha-512, oauth]. |
| |
class |
Configures the `Ingress` class that defines which `Ingress` controller will be used. If not set, the `Ingress` class is set to `nginx`. |
string | |
configuration | External listener configuration. |
networkPolicyPeers | List of peers which should be able to connect to this listener. Peers in this list are combined using a logical OR operation. If this field is empty or missing, all connections will be allowed for this listener. If this field is present and contains at least one item, the listener only allows the traffic which matches at least one item in this list. See external documentation of networking.k8s.io/v1 networkpolicypeer. |
NetworkPolicyPeer array |
B.38. IngressListenerConfiguration
schema reference
Used in: KafkaListenerExternalIngress
Property | Description |
---|---|
bootstrap | External bootstrap ingress configuration. |
brokers | External broker ingress configuration. |
brokerCertChainAndKey |
Reference to the `Secret` which holds the certificate and private key pair. The certificate can optionally contain the whole chain. |
B.39. IngressListenerBootstrapConfiguration
schema reference
Used in: IngressListenerConfiguration
Property | Description |
---|---|
address | Additional address name for the bootstrap service. The address will be added to the list of subject alternative names of the TLS certificates. |
string | |
dnsAnnotations |
Annotations that will be added to the `Ingress` resource. You can use this field to instrument DNS providers such as External DNS. |
map | |
host | Host for the bootstrap ingress. This field will be used in the Ingress resource. |
string |
B.40. IngressListenerBrokerConfiguration
schema reference
Used in: IngressListenerConfiguration
Property | Description |
---|---|
broker | ID of the Kafka broker (broker identifier). |
integer | |
advertisedHost |
The host name which will be used in the brokers' `advertised.listeners` configuration. |
string | |
advertisedPort |
The port number which will be used in the brokers' `advertised.listeners` configuration. |
integer | |
host | Host for the broker ingress. This field will be used in the Ingress resource. |
string | |
dnsAnnotations |
Annotations that will be added to the `Ingress` resource. You can use this field to instrument DNS providers such as External DNS. |
map |
B.41. KafkaAuthorizationSimple
schema reference
Used in: KafkaClusterSpec
Simple authorization in AMQ Streams uses the AclAuthorizer
plugin, the default Access Control Lists (ACLs) authorization plugin provided with Apache Kafka. ACLs allow you to define which users have access to which resources at a granular level.
Configure the Kafka
custom resource to use simple authorization. Set the type
property in the authorization
section to the value simple
, and configure a list of super users.
Access rules are configured for the KafkaUser
, as described in the ACLRule schema reference.
B.41.1. superUsers
A list of user principals treated as super users, so that they are always allowed without querying ACL rules. For more information see Kafka authorization.
An example of simple authorization configuration
authorization:
  type: simple
  superUsers:
    - CN=client_1
    - user_2
    - CN=client_3
The super.user
configuration option in the config
property in Kafka.spec.kafka
is ignored. Designate super users in the authorization
property instead. For more information, see Kafka broker configuration.
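For orientation, a sketch of where the authorization block sits within the Kafka custom resource; the cluster name is a placeholder:

apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster  # placeholder name
spec:
  kafka:
    # ...
    authorization:
      type: simple
      superUsers:
        - CN=client_1
    # ...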
The type
property is a discriminator that distinguishes the use of the type KafkaAuthorizationSimple
from KafkaAuthorizationOpa
, KafkaAuthorizationKeycloak
. It must have the value simple
for the type KafkaAuthorizationSimple
.
Property | Description |
---|---|
type |
Must be `simple`. |
string | |
superUsers | List of super users. Should contain list of user principals which should get unlimited access rights. |
string array |
B.42. KafkaAuthorizationOpa
schema reference
Used in: KafkaClusterSpec
To use Open Policy Agent authorization, set the type
property in the authorization
section to the value opa
, and configure OPA properties as required.
B.42.1. url
The URL used to connect to the Open Policy Agent server. The URL has to include the policy which will be queried by the authorizer. Required.
B.42.2. allowOnError
Defines whether a Kafka client should be allowed or denied by default when the authorizer fails to query the Open Policy Agent, for example, when it is temporarily unavailable. Defaults to false
- all actions will be denied.
B.42.3. initialCacheCapacity
Initial capacity of the local cache used by the authorizer to avoid querying the Open Policy Agent for every request. Defaults to 5000
.
B.42.4. maximumCacheSize
Maximum capacity of the local cache used by the authorizer to avoid querying the Open Policy Agent for every request. Defaults to 50000
.
B.42.5. expireAfterMs
The expiration of the records kept in the local cache to avoid querying the Open Policy Agent for every request. Defines how often the cached authorization decisions are reloaded from the Open Policy Agent server. In milliseconds. Defaults to 3600000
milliseconds (1 hour).
B.42.6. superUsers
A list of user principals treated as super users, so that they are always allowed without querying the Open Policy Agent policy. For more information see Kafka authorization.
An example of Open Policy Agent authorizer configuration
authorization:
  type: opa
  url: http://opa:8181/v1/data/kafka/allow
  allowOnError: false
  initialCacheCapacity: 1000
  maximumCacheSize: 10000
  expireAfterMs: 60000
  superUsers:
    - CN=fred
    - sam
    - CN=edward
The type
property is a discriminator that distinguishes the use of the type KafkaAuthorizationOpa
from KafkaAuthorizationSimple
, KafkaAuthorizationKeycloak
. It must have the value opa
for the type KafkaAuthorizationOpa
.
Property | Description |
---|---|
type |
Must be `opa`. |
string | |
url | The URL used to connect to the Open Policy Agent server. The URL has to include the policy which will be queried by the authorizer. This option is required. |
string | |
allowOnError |
Defines whether a Kafka client should be allowed or denied by default when the authorizer fails to query the Open Policy Agent, for example, when it is temporarily unavailable. Defaults to `false` - all actions will be denied. |
boolean | |
initialCacheCapacity |
Initial capacity of the local cache used by the authorizer to avoid querying the Open Policy Agent for every request. Defaults to `5000`. |
integer | |
maximumCacheSize |
Maximum capacity of the local cache used by the authorizer to avoid querying the Open Policy Agent for every request. Defaults to `50000`. |
integer | |
expireAfterMs |
The expiration of the records kept in the local cache to avoid querying the Open Policy Agent for every request. Defines how often the cached authorization decisions are reloaded from the Open Policy Agent server. In milliseconds. Defaults to `3600000`. |
integer | |
superUsers | List of super users, which is specifically a list of user principals that have unlimited access rights. |
string array |
B.43. KafkaAuthorizationKeycloak
schema reference
Used in: KafkaClusterSpec
The type
property is a discriminator that distinguishes the use of the type KafkaAuthorizationKeycloak
from KafkaAuthorizationSimple
, KafkaAuthorizationOpa
. It must have the value keycloak
for the type KafkaAuthorizationKeycloak
.
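A sketch of what a keycloak authorization configuration might look like; the token endpoint URI, client ID, and secret name are placeholders:

authorization:
  type: keycloak
  clientId: kafka  # placeholder client ID
  tokenEndpointUri: https://sso.myproject.svc:8443/auth/realms/my-realm/protocol/openid-connect/token  # placeholder URI
  tlsTrustedCertificates:
    - secretName: oauth-server-cert  # placeholder secret
      certificate: tls.crt
  delegateToKafkaAcls: false
  superUsers:
    - CN=my-super-user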
Property | Description |
---|---|
type |
Must be `keycloak`. |
string | |
clientId | OAuth Client ID which the Kafka client can use to authenticate against the OAuth server and use the token endpoint URI. |
string | |
tokenEndpointUri | Authorization server token endpoint URI. |
string | |
tlsTrustedCertificates | Trusted certificates for TLS connection to the OAuth server. |
| |
disableTlsHostnameVerification |
Enable or disable TLS hostname verification. Default value is `false`. |
boolean | |
delegateToKafkaAcls |
Whether authorization decision should be delegated to the 'Simple' authorizer if DENIED by Red Hat Single Sign-On Authorization Services policies. Default value is `false`. |
boolean | |
grantsRefreshPeriodSeconds | The time between two consecutive grants refresh runs in seconds. The default value is 60. |
integer | |
grantsRefreshPoolSize | The number of threads to use to refresh grants for active sessions. The more threads, the more parallelism, so the sooner the job completes. However, using more threads places a heavier load on the authorization server. The default value is 5. |
integer | |
superUsers | List of super users. Should contain list of user principals which should get unlimited access rights. |
string array |
B.44. Rack
schema reference
Used in: KafkaClusterSpec
, KafkaConnectS2ISpec
, KafkaConnectSpec
Property | Description |
---|---|
topologyKey |
A key that matches labels assigned to the OpenShift cluster nodes. The value of the label is used to set the broker's `broker.rack` config. |
string |
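An example of rack configuration; the zone label shown is the standard Kubernetes topology label, but any label assigned to your nodes can serve as the topology key:

# ...
rack:
  topologyKey: topology.kubernetes.io/zone  # any node label can be used here
# ...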
B.45. Probe
schema reference
Used in: CruiseControlSpec
, EntityTopicOperatorSpec
, EntityUserOperatorSpec
, KafkaBridgeSpec
, KafkaClusterSpec
, KafkaConnectS2ISpec
, KafkaConnectSpec
, KafkaExporterSpec
, KafkaMirrorMaker2Spec
, KafkaMirrorMakerSpec
, TlsSidecar
, TopicOperatorSpec
, ZookeeperClusterSpec
Property | Description |
---|---|
failureThreshold | Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. |
integer | |
initialDelaySeconds | The initial delay before the health check is first performed. Defaults to 15 seconds. Minimum value is 0. |
integer | |
periodSeconds | How often (in seconds) to perform the probe. Defaults to 10 seconds. Minimum value is 1. |
integer | |
successThreshold | Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness. Minimum value is 1. |
integer | |
timeoutSeconds | The timeout for each attempted health check. |
integer |
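A sketch of how these probe settings are typically tuned on a component, for example the Kafka brokers; the values are illustrative:

# ...
livenessProbe:
  initialDelaySeconds: 15  # illustrative values
  timeoutSeconds: 5
readinessProbe:
  initialDelaySeconds: 15
  timeoutSeconds: 5
# ...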
B.46. JvmOptions
schema reference
Used in: CruiseControlSpec
, EntityTopicOperatorSpec
, EntityUserOperatorSpec
, KafkaBridgeSpec
, KafkaClusterSpec
, KafkaConnectS2ISpec
, KafkaConnectSpec
, KafkaMirrorMaker2Spec
, KafkaMirrorMakerSpec
, TopicOperatorSpec
, ZookeeperClusterSpec
Property | Description |
---|---|
-XX | A map of -XX options to the JVM. |
map | |
-Xms | -Xms option to the JVM. |
string | |
-Xmx | -Xmx option to the JVM. |
string | |
gcLoggingEnabled | Specifies whether the Garbage Collection logging is enabled. The default is false. |
boolean | |
javaSystemProperties |
A map of additional system properties which will be passed using the `-D` option to the JVM. |
|
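A sketch of typical JVM tuning; the heap sizes and GC flags are illustrative, not recommendations:

# ...
jvmOptions:
  "-Xms": "2g"  # illustrative heap sizes
  "-Xmx": "2g"
  "-XX":
    "UseG1GC": true
    "MaxGCPauseMillis": 20
# ...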
B.47. SystemProperty
schema reference
Used in: JvmOptions
Property | Description |
---|---|
name | The system property name. |
string | |
value | The system property value. |
string |
B.48. KafkaJmxOptions
schema reference
Used in: KafkaClusterSpec
Property | Description |
---|---|
authentication |
Authentication configuration for connecting to the Kafka JMX port. The type depends on the value of the `authentication.type` property within the given object, which must be one of [password]. |
B.49. KafkaJmxAuthenticationPassword
schema reference
Used in: KafkaJmxOptions
The type
property is a discriminator that distinguishes the use of the type KafkaJmxAuthenticationPassword
from other subtypes which may be added in the future. It must have the value password
for the type KafkaJmxAuthenticationPassword
.
Property | Description |
---|---|
type |
Must be `password`. |
string |
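A sketch of enabling the JMX port on the Kafka brokers with password protection:

# ...
kafka:
  # ...
  jmxOptions:
    authentication:
      type: "password"
# ...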
B.50. InlineLogging
schema reference
Used in: CruiseControlSpec
, EntityTopicOperatorSpec
, EntityUserOperatorSpec
, KafkaBridgeSpec
, KafkaClusterSpec
, KafkaConnectS2ISpec
, KafkaConnectSpec
, KafkaMirrorMaker2Spec
, KafkaMirrorMakerSpec
, TopicOperatorSpec
, ZookeeperClusterSpec
The type
property is a discriminator that distinguishes the use of the type InlineLogging
from ExternalLogging
. It must have the value inline
for the type InlineLogging
.
Property | Description |
---|---|
type |
Must be `inline`. |
string | |
loggers | A Map from logger name to logger level. |
map |
B.51. ExternalLogging
schema reference
Used in: CruiseControlSpec
, EntityTopicOperatorSpec
, EntityUserOperatorSpec
, KafkaBridgeSpec
, KafkaClusterSpec
, KafkaConnectS2ISpec
, KafkaConnectSpec
, KafkaMirrorMaker2Spec
, KafkaMirrorMakerSpec
, TopicOperatorSpec
, ZookeeperClusterSpec
The type
property is a discriminator that distinguishes the use of the type ExternalLogging
from InlineLogging
. It must have the value external
for the type ExternalLogging
.
Property | Description |
---|---|
type |
Must be `external`. |
string | |
name |
The name of the `ConfigMap` from which to get the logging configuration. |
string |
B.52. TlsSidecar
schema reference
Used in: CruiseControlSpec
, EntityOperatorSpec
, KafkaClusterSpec
, TopicOperatorSpec
, ZookeeperClusterSpec
Property | Description |
---|---|
image | The docker image for the container. |
string | |
livenessProbe | Pod liveness checking. |
logLevel |
The log level for the TLS sidecar. Default value is `notice`. |
string (one of [emerg, debug, crit, err, alert, warning, notice, info]) | |
readinessProbe | Pod readiness checking. |
resources | CPU and memory resources to reserve. See external documentation of core/v1 resourcerequirements. |
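A sketch of a TLS sidecar configuration; the resource values are illustrative:

# ...
tlsSidecar:
  logLevel: notice
  resources:
    requests:
      cpu: 200m  # illustrative values
      memory: 64Mi
    limits:
      cpu: 500m
      memory: 128Mi
# ...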
B.53. KafkaClusterTemplate
schema reference
Used in: KafkaClusterSpec
Property | Description |
---|---|
statefulset | Template for Kafka `StatefulSet`. |
pod | Template for Kafka `Pods`. |
bootstrapService | Template for Kafka bootstrap `Service`. |
brokersService | Template for Kafka broker `Service`. |
externalBootstrapService | Template for Kafka external bootstrap `Service`. |
perPodService | Template for Kafka per-pod `Services` used for access from outside of OpenShift. |
externalBootstrapRoute | Template for Kafka external bootstrap `Route`. |
perPodRoute | Template for Kafka per-pod `Routes` used for access from outside of OpenShift. |
externalBootstrapIngress | Template for Kafka external bootstrap `Ingress`. |
perPodIngress | Template for Kafka per-pod `Ingresses` used for access from outside of OpenShift. |
persistentVolumeClaim | Template for all Kafka `PersistentVolumeClaims`. |
podDisruptionBudget | Template for Kafka `PodDisruptionBudget`. |
kafkaContainer | Template for the Kafka broker container. |
tlsSidecarContainer | The property `tlsSidecarContainer` has been deprecated. Template for the Kafka broker TLS sidecar container. |
initContainer | Template for the Kafka init container. |
B.54. StatefulSetTemplate
schema reference
Used in: KafkaClusterTemplate
, ZookeeperClusterTemplate
Property | Description |
---|---|
metadata | Metadata applied to the resource. |
podManagementPolicy |
PodManagementPolicy which will be used for this StatefulSet. Valid values are `Parallel` and `OrderedReady`. Defaults to `Parallel`. |
string (one of [OrderedReady, Parallel]) |
B.55. MetadataTemplate
schema reference
Used in: ExternalServiceTemplate
, PodDisruptionBudgetTemplate
, PodTemplate
, ResourceTemplate
, StatefulSetTemplate
Labels
and Annotations
are used to identify and organize resources, and are configured in the metadata
property.
For example:
# ...
template:
  statefulset:
    metadata:
      labels:
        label1: value1
        label2: value2
      annotations:
        annotation1: value1
        annotation2: value2
# ...
The labels
and annotations
fields can contain any labels or annotations that do not contain the reserved string strimzi.io
. Labels and annotations containing strimzi.io
are used internally by AMQ Streams and cannot be configured.
Property | Description |
---|---|
labels |
Labels added to the resource template. Can be applied to different resources such as `StatefulSets`, `Deployments`, `Pods`, and `Services`. |
map | |
annotations |
Annotations added to the resource template. Can be applied to different resources such as `StatefulSets`, `Deployments`, `Pods`, and `Services`. |
map |
B.56. PodTemplate
schema reference
Used in: CruiseControlTemplate
, EntityOperatorTemplate
, KafkaBridgeTemplate
, KafkaClusterTemplate
, KafkaConnectTemplate
, KafkaExporterTemplate
, KafkaMirrorMakerTemplate
, ZookeeperClusterTemplate
Example PodTemplate
configuration
# ...
template:
  pod:
    metadata:
      labels:
        label1: value1
      annotations:
        anno1: value1
    imagePullSecrets:
      - name: my-docker-credentials
    securityContext:
      runAsUser: 1000001
      fsGroup: 0
    terminationGracePeriodSeconds: 120
# ...
B.56.1. hostAliases
Use the hostAliases
property to specify a list of hosts and IP addresses, which are injected into the /etc/hosts
file of the pod.
This configuration is especially useful for Kafka Connect or MirrorMaker when a connection outside of the cluster is also requested by users.
Example hostAliases
configuration
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaConnect
#...
spec:
  # ...
  template:
    pod:
      hostAliases:
        - ip: "192.168.1.86"
          hostnames:
            - "my-host-1"
            - "my-host-2"
  #...
Property | Description |
---|---|
metadata | Metadata applied to the resource. |
imagePullSecrets |
List of references to secrets in the same namespace to use for pulling any of the images used by this Pod. When the `STRIMZI_IMAGE_PULL_SECRETS` environment variable in Cluster Operator and the `imagePullSecrets` option are specified, only the `imagePullSecrets` variable is used and the `STRIMZI_IMAGE_PULL_SECRETS` variable is ignored. |
LocalObjectReference array | |
securityContext | Configures pod-level security attributes and common container settings. See external documentation of core/v1 podsecuritycontext. |
terminationGracePeriodSeconds | The grace period is the duration in seconds after the processes running in the pod are sent a termination signal, and the time when the processes are forcibly halted with a kill signal. Set this value to longer than the expected cleanup time for your process. Value must be a non-negative integer. A zero value indicates delete immediately. You might need to increase the grace period for very large Kafka clusters, so that the Kafka brokers have enough time to transfer their work to another broker before they are terminated. Defaults to 30 seconds. |
integer | |
affinity | The pod’s affinity rules. See external documentation of core/v1 affinity. |
tolerations | The pod’s tolerations. See external documentation of core/v1 toleration. |
Toleration array | |
priorityClassName | The name of the priority class used to assign priority to the pods. For more information about priority classes, see Pod Priority and Preemption. |
string | |
schedulerName |
The name of the scheduler used to dispatch this `Pod`. If not specified, the default scheduler will be used. |
string | |
hostAliases | The pod’s HostAliases. HostAliases is an optional list of hosts and IPs that will be injected into the pod’s hosts file if specified. See external documentation of core/v1 HostAlias. |
HostAlias array |
B.57. ResourceTemplate
schema reference
Used in: CruiseControlTemplate
, EntityOperatorTemplate
, KafkaBridgeTemplate
, KafkaClusterTemplate
, KafkaConnectTemplate
, KafkaExporterTemplate
, KafkaMirrorMakerTemplate
, KafkaUserTemplate
, ZookeeperClusterTemplate
Property | Description |
---|---|
metadata | Metadata applied to the resource. |
B.58. ExternalServiceTemplate
schema reference
Used in: KafkaClusterTemplate
When exposing Kafka outside of OpenShift using loadbalancers or node ports, you can use properties, in addition to labels and annotations, to customize how a Service is created.
An example showing customized external services
# ...
template:
  externalBootstrapService:
    externalTrafficPolicy: Local
    loadBalancerSourceRanges:
      - 10.0.0.0/8
      - 88.208.76.87/32
  perPodService:
    externalTrafficPolicy: Local
    loadBalancerSourceRanges:
      - 10.0.0.0/8
      - 88.208.76.87/32
# ...
Property | Description |
---|---|
metadata | Metadata applied to the resource. |
externalTrafficPolicy |
The property `externalTrafficPolicy` has been deprecated. Specifies whether the Service routes external traffic to node-local or cluster-wide endpoints. `Cluster` may cause a second hop to another node and obscures the client source IP. `Local` avoids a second hop for `LoadBalancer` and `Nodeport` type services and preserves the client source IP (when supported by the infrastructure). If unspecified, OpenShift will use `Cluster` as the default. |
string (one of [Local, Cluster]) | |
loadBalancerSourceRanges |
The property `loadBalancerSourceRanges` has been deprecated. A list of CIDR ranges (for example `10.0.0.0/8` or `130.211.204.1/32`) from which clients can connect to loadbalancer type listeners. If supported by the platform, traffic through the loadbalancer is restricted to the specified CIDR ranges. This field is ignored if the cloud provider does not support the feature. |
string array |
B.59. PodDisruptionBudgetTemplate
schema reference
Used in: CruiseControlTemplate
, KafkaBridgeTemplate
, KafkaClusterTemplate
, KafkaConnectTemplate
, KafkaMirrorMakerTemplate
, ZookeeperClusterTemplate
AMQ Streams creates a PodDisruptionBudget
for every new StatefulSet
or Deployment
. By default, pod disruption budgets only allow a single pod to be unavailable at a given time. You can increase the amount of unavailable pods allowed by changing the default value of the maxUnavailable
property in the PodDisruptionBudget.spec
resource.
An example of PodDisruptionBudget
template
# ...
template:
  podDisruptionBudget:
    metadata:
      labels:
        key1: label1
        key2: label2
      annotations:
        key1: label1
        key2: label2
    maxUnavailable: 1
# ...
Property | Description |
---|---|
metadata |
Metadata to apply to the `PodDisruptionBudgetTemplate` resource. |
maxUnavailable |
Maximum number of unavailable pods to allow automatic Pod eviction. A Pod eviction is allowed when the `maxUnavailable` number of pods or fewer are unavailable after the eviction. Setting this value to 0 prevents all voluntary evictions, so the pods must be evicted manually. Defaults to 1. |
integer |
B.60. ContainerTemplate
schema reference
Used in: CruiseControlTemplate
, EntityOperatorTemplate
, KafkaBridgeTemplate
, KafkaClusterTemplate
, KafkaConnectTemplate
, KafkaExporterTemplate
, KafkaMirrorMakerTemplate
, ZookeeperClusterTemplate
You can set custom security context and environment variables for a container.
The environment variables are defined under the env
property as a list of objects with name
and value
fields. The following example shows two custom environment variables and a custom security context set for the Kafka broker containers:
# ...
template:
  kafkaContainer:
    env:
      - name: EXAMPLE_ENV_1
        value: example.env.one
      - name: EXAMPLE_ENV_2
        value: example.env.two
    securityContext:
      runAsUser: 2000
# ...
Environment variables prefixed with KAFKA_
are internal to AMQ Streams and should be avoided. If you set a custom environment variable that is already in use by AMQ Streams, it is ignored and a warning is recorded in the log.
Property | Description |
---|---|
env | Environment variables which should be applied to the container. |
| |
securityContext | Security context for the container. See external documentation of core/v1 securitycontext. |
B.61. ContainerEnvVar
schema reference
Used in: ContainerTemplate
Property | Description |
---|---|
name | The environment variable key. |
string | |
value | The environment variable value. |
string |
B.62. ZookeeperClusterSpec
schema reference
Used in: KafkaSpec
Property | Description |
---|---|
replicas | The number of pods in the cluster. |
integer | |
image | The docker image for the pods. |
string | |
storage |
Storage configuration (disk). Cannot be updated. The type depends on the value of the `storage.type` property within the given object, which must be one of [ephemeral, persistent-claim]. |
config | The ZooKeeper broker config. Properties with the following prefixes cannot be set: server., dataDir, dataLogDir, clientPort, authProvider, quorum.auth, requireClientAuthScheme, snapshot.trust.empty, standaloneEnabled, reconfigEnabled, 4lw.commands.whitelist, secureClientPort, ssl., serverCnxnFactory, sslQuorum (with the exception of: ssl.protocol, ssl.quorum.protocol, ssl.enabledProtocols, ssl.quorum.enabledProtocols, ssl.ciphersuites, ssl.quorum.ciphersuites, ssl.hostnameVerification, ssl.quorum.hostnameVerification). |
map | |
affinity |
The property `affinity` has been deprecated. Use `template.pod.affinity` instead. The pod's affinity rules. See external documentation of core/v1 affinity. |
tolerations |
The property `tolerations` has been deprecated. Use `template.pod.tolerations` instead. The pod's tolerations. See external documentation of core/v1 toleration. |
Toleration array | |
livenessProbe | Pod liveness checking. |
readinessProbe | Pod readiness checking. |
jvmOptions | JVM Options for pods. |
resources | CPU and memory resources to reserve. See external documentation of core/v1 resourcerequirements. |
metrics | The Prometheus JMX Exporter configuration. See https://github.com/prometheus/jmx_exporter for details of the structure of this configuration. |
map | |
logging |
Logging configuration for ZooKeeper. The type depends on the value of the `logging.type` property within the given object, which must be one of [inline, external]. |
template |
Template for ZooKeeper cluster resources. The template allows users to specify how the `StatefulSet`, `Pods`, and `Services` are generated. |
tlsSidecar |
The property `tlsSidecar` has been deprecated. TLS sidecar configuration. |
B.63. ZookeeperClusterTemplate
schema reference
Used in: ZookeeperClusterSpec
Property | Description |
---|---|
statefulset | Template for ZooKeeper `StatefulSet`. |
pod | Template for ZooKeeper `Pods`. |
clientService | Template for ZooKeeper client `Service`. |
nodesService | Template for ZooKeeper nodes `Service`. |
persistentVolumeClaim | Template for all ZooKeeper `PersistentVolumeClaims`. |
podDisruptionBudget | Template for ZooKeeper `PodDisruptionBudget`. |
zookeeperContainer | Template for the ZooKeeper container. |
tlsSidecarContainer | The property `tlsSidecarContainer` has been deprecated. Template for the ZooKeeper server TLS sidecar container. |
B.64. TopicOperatorSpec
schema reference
The type TopicOperatorSpec
has been deprecated. Please use EntityTopicOperatorSpec
instead.
Used in: KafkaSpec
Property | Description |
---|---|
watchedNamespace | The namespace the Topic Operator should watch. |
string | |
image | The image to use for the Topic Operator. |
string | |
reconciliationIntervalSeconds | Interval between periodic reconciliations. |
integer | |
zookeeperSessionTimeoutSeconds | Timeout for the ZooKeeper session. |
integer | |
affinity | Pod affinity rules. See external documentation of core/v1 affinity. |
resources | CPU and memory resources to reserve. See external documentation of core/v1 resourcerequirements. |
topicMetadataMaxAttempts | The number of attempts at getting topic metadata. |
integer | |
tlsSidecar | TLS sidecar configuration. |
logging |
Logging configuration. The type depends on the value of the `logging.type` property within the given object, which must be one of [inline, external]. |
jvmOptions | JVM Options for pods. |
livenessProbe | Pod liveness checking. |
readinessProbe | Pod readiness checking. |
B.65. EntityOperatorSpec
schema reference
Used in: KafkaSpec
Property | Description |
---|---|
topicOperator | Configuration of the Topic Operator. |
userOperator | Configuration of the User Operator. |
affinity |
The property `affinity` has been deprecated. Use `template.pod.affinity` instead. The pod's affinity rules. See external documentation of core/v1 affinity. |
tolerations |
The property `tolerations` has been deprecated. Use `template.pod.tolerations` instead. The pod's tolerations. See external documentation of core/v1 toleration. |
Toleration array | |
tlsSidecar | TLS sidecar configuration. |
template |
Template for Entity Operator resources. The template allows users to specify how the `Deployment` and `Pods` are generated. |
B.66. EntityTopicOperatorSpec
schema reference
Used in: EntityOperatorSpec
Property | Description |
---|---|
watchedNamespace | The namespace the Topic Operator should watch. |
string | |
image | The image to use for the Topic Operator. |
string | |
reconciliationIntervalSeconds | Interval between periodic reconciliations. |
integer | |
zookeeperSessionTimeoutSeconds | Timeout for the ZooKeeper session. |
integer | |
livenessProbe | Pod liveness checking. |
readinessProbe | Pod readiness checking. |
resources | CPU and memory resources to reserve. See external documentation of core/v1 resourcerequirements. |
topicMetadataMaxAttempts | The number of attempts at getting topic metadata. |
integer | |
logging |
Logging configuration. The type depends on the value of the `logging.type` property within the given object, which must be one of [inline, external]. |
jvmOptions | JVM Options for pods. |
B.67. EntityUserOperatorSpec
schema reference
Used in: EntityOperatorSpec
Property | Description |
---|---|
watchedNamespace | The namespace the User Operator should watch. |
string | |
image | The image to use for the User Operator. |
string | |
reconciliationIntervalSeconds | Interval between periodic reconciliations. |
integer | |
zookeeperSessionTimeoutSeconds | Timeout for the ZooKeeper session. |
integer | |
livenessProbe | Pod liveness checking. |
readinessProbe | Pod readiness checking. |
resources | CPU and memory resources to reserve. See external documentation of core/v1 resourcerequirements. |
logging |
Logging configuration. The type depends on the value of the `logging.type` property within the given object, which must be one of [inline, external]. |
jvmOptions | JVM Options for pods. |
B.68. EntityOperatorTemplate
schema reference
Used in: EntityOperatorSpec
Property | Description |
---|---|
deployment | Template for Entity Operator `Deployment`. |
pod | Template for Entity Operator `Pods`. |
tlsSidecarContainer | Template for the Entity Operator TLS sidecar container. |
topicOperatorContainer | Template for the Entity Topic Operator container. |
userOperatorContainer | Template for the Entity User Operator container. |
B.69. CertificateAuthority
schema reference
Used in: KafkaSpec
Configuration of how TLS certificates are used within the cluster. This applies to certificates used for both internal communication within the cluster and to certificates used for client access via Kafka.spec.kafka.listeners.tls
.
Property | Description |
---|---|
generateCertificateAuthority | If true then Certificate Authority certificates will be generated automatically. Otherwise the user will need to provide a Secret with the CA certificate. Default is true. |
boolean | |
validityDays | The number of days generated certificates should be valid for. The default is 365. |
integer | |
renewalDays |
The number of days in the certificate renewal period. This is the number of days before a certificate expires during which renewal actions may be performed. When `generateCertificateAuthority` is true, this will cause the generation of a new certificate. When `generateCertificateAuthority` is false, this will cause extra logging at WARN level about the pending certificate expiry. Default is 30. |
integer | |
certificateExpirationPolicy |
How should CA certificate expiration be handled when `generateCertificateAuthority=true`. The default is for a new CA certificate to be generated reusing the existing private key. |
string (one of [replace-key, renew-certificate]) |
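A sketch of how the cluster CA might be tuned in the Kafka resource; the day counts are illustrative:

# ...
spec:
  clusterCa:
    generateCertificateAuthority: true
    validityDays: 720  # illustrative values
    renewalDays: 60
    certificateExpirationPolicy: renew-certificate
# ...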
B.70. CruiseControlSpec
schema reference
Used in: KafkaSpec
Property | Description |
---|---|
image | The docker image for the pods. |
string | |
tlsSidecar | TLS sidecar configuration. |
resources | CPU and memory resources to reserve for the Cruise Control container. See external documentation of core/v1 resourcerequirements. |
livenessProbe | Pod liveness checking for the Cruise Control container. |
readinessProbe | Pod readiness checking for the Cruise Control container. |
jvmOptions | JVM Options for the Cruise Control container. |
logging |
Logging configuration (log4j1) for Cruise Control. The type depends on the value of the `logging.type` property within the given object, which must be one of [inline, external]. |
template |
Template to specify how Cruise Control resources, `Deployments` and `Pods`, are generated. |
brokerCapacity |
The Cruise Control `brokerCapacity` configuration. |
config | The Cruise Control configuration. For a full list of configuration options refer to https://github.com/linkedin/cruise-control/wiki/Configurations. Note that properties with the following prefixes cannot be set: bootstrap.servers, client.id, zookeeper., network., security., failed.brokers.zk.path,webserver.http., webserver.api.urlprefix, webserver.session.path, webserver.accesslog., two.step., request.reason.required,metric.reporter.sampler.bootstrap.servers, metric.reporter.topic, partition.metric.sample.store.topic, broker.metric.sample.store.topic,capacity.config.file, self.healing., anomaly.detection., ssl. |
map | |
metrics | The Prometheus JMX Exporter configuration. See https://github.com/prometheus/jmx_exporter for details of the structure of this configuration. |
map |
B.71. CruiseControlTemplate
schema reference
Used in: CruiseControlSpec
Property | Description |
---|---|
deployment | Template for Cruise Control `Deployment`. |
pod | Template for Cruise Control `Pods`. |
apiService | Template for Cruise Control API `Service`. |
podDisruptionBudget | Template for Cruise Control `PodDisruptionBudget`. |
cruiseControlContainer | Template for the Cruise Control container. |
tlsSidecarContainer | Template for the Cruise Control TLS sidecar container. |
B.72. BrokerCapacity
schema reference
Used in: CruiseControlSpec
Property | Description |
---|---|
disk | Broker capacity for disk in bytes, for example, 100Gi. |
string | |
cpuUtilization | Broker capacity for CPU resource utilization as a percentage (0 - 100). |
integer | |
inboundNetwork | Broker capacity for inbound network throughput in bytes per second, for example, 10000KB/s. |
string | |
outboundNetwork | Broker capacity for outbound network throughput in bytes per second, for example 10000KB/s. |
string |
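A sketch of a brokerCapacity block under cruiseControl; the capacities are illustrative:

# ...
cruiseControl:
  brokerCapacity:
    disk: 100Gi  # illustrative capacities
    cpuUtilization: 100
    inboundNetwork: 10000KB/s
    outboundNetwork: 10000KB/s
# ...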
B.73. KafkaExporterSpec
schema reference
Used in: KafkaSpec
Property | Description |
---|---|
image | The docker image for the pods. |
string | |
groupRegex |
Regular expression to specify which consumer groups to collect. Default value is `.*`. |
string | |
topicRegex |
Regular expression to specify which topics to collect. Default value is `.*`. |
string | |
resources | CPU and memory resources to reserve. See external documentation of core/v1 resourcerequirements. |
logging |
Only log messages with the given severity or above. Valid levels: [`info`, `debug`, `trace`]. Default log level is `info`. |
string | |
enableSaramaLogging | Enable Sarama logging, a Go client library used by the Kafka Exporter. |
boolean | |
template | Customization of deployment templates and pods. |
livenessProbe | Pod liveness check. |
readinessProbe | Pod readiness check. |
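A sketch of a Kafka Exporter configuration narrowed to particular groups and topics; the regular expressions are placeholders:

# ...
kafkaExporter:
  groupRegex: "my-group-.*"  # placeholder patterns
  topicRegex: "my-topic-.*"
  logging: info
  enableSaramaLogging: false
# ...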
B.74. KafkaExporterTemplate
schema reference
Used in: KafkaExporterSpec
Property | Description |
---|---|
deployment | Template for Kafka Exporter `Deployment`. |
pod | Template for Kafka Exporter `Pods`. |
service | Template for Kafka Exporter `Service`. |
container | Template for the Kafka Exporter container. |
B.75. KafkaStatus
schema reference
Used in: Kafka
Property | Description |
---|---|
conditions | List of status conditions. |
| |
observedGeneration | The generation of the CRD that was last reconciled by the operator. |
integer | |
listeners | Addresses of the internal and external listeners. |
|
B.76. Condition
schema reference
Used in: KafkaBridgeStatus
, KafkaConnectorStatus
, KafkaConnectS2IStatus
, KafkaConnectStatus
, KafkaMirrorMaker2Status
, KafkaMirrorMakerStatus
, KafkaRebalanceStatus
, KafkaStatus
, KafkaTopicStatus
, KafkaUserStatus
Property | Description |
---|---|
type | The unique identifier of a condition, used to distinguish between other conditions in the resource. |
string | |
status | The status of the condition, either True, False or Unknown. |
string | |
lastTransitionTime | Last time the condition of a type changed from one status to another. The required format is 'yyyy-MM-ddTHH:mm:ssZ', in the UTC time zone. |
string | |
reason | The reason for the condition’s last transition (a single word in CamelCase). |
string | |
message | Human-readable message indicating details about the condition’s last transition. |
string |
B.77. ListenerStatus
schema reference
Used in: KafkaStatus
Property | Description |
---|---|
type |
The type of the listener. Can be one of the following three types: `plain`, `tls`, and `external`. |
string | |
addresses | A list of the addresses for this listener. |
| |
bootstrapServers |
A comma-separated list of `host:port` pairs for connecting to the Kafka cluster using the given listener. |
string | |
certificates |
A list of TLS certificates which can be used to verify the identity of the server when connecting to the given listener. Set only for `tls` and `external` listeners. |
string array |
B.78. ListenerAddress
schema reference
Used in: ListenerStatus
Property | Description |
---|---|
host | The DNS name or IP address of the Kafka bootstrap service. |
string | |
port | The port of the Kafka bootstrap service. |
integer |
B.79. KafkaConnect
schema reference
Property | Description |
---|---|
spec | The specification of the Kafka Connect cluster. |
status | The status of the Kafka Connect cluster. |
B.80. KafkaConnectSpec
schema reference
Used in: KafkaConnect
Configures a Kafka Connect cluster.
B.80.1. config
Use the config
properties to configure Kafka options as keys.
Standard Apache Kafka Connect configuration may be provided, restricted to those properties not managed directly by AMQ Streams.
Configuration options that cannot be configured relate to:
- Kafka cluster bootstrap address
- Security (Encryption, Authentication, and Authorization)
- Listener / REST interface configuration
- Plugin path configuration
The values can be one of the following JSON types:
- String
- Number
- Boolean
You can specify and configure the options listed in the Apache Kafka documentation with the exception of those options that are managed directly by AMQ Streams. Specifically, configuration options with keys equal to or starting with one of the following strings are forbidden:
-
ssl.
-
sasl.
-
security.
-
listeners
-
plugin.path
-
rest.
-
bootstrap.servers
When a forbidden option is present in the config
property, it is ignored and a warning message is printed to the Cluster Operator log file. All other options are passed to Kafka Connect.
The Cluster Operator does not validate keys or values in the config
object provided. When an invalid configuration is provided, the Kafka Connect cluster might not start or might become unstable. In this circumstance, fix the configuration in the KafkaConnect.spec.config
or KafkaConnectS2I.spec.config
object, then the Cluster Operator can roll out the new configuration to all Kafka Connect nodes.
Certain options have default values:
-
group.id
with default valueconnect-cluster
-
offset.storage.topic
with default valueconnect-cluster-offsets
-
config.storage.topic
with default valueconnect-cluster-configs
-
status.storage.topic
with default valueconnect-cluster-status
-
key.converter
with default valueorg.apache.kafka.connect.json.JsonConverter
-
value.converter
with default valueorg.apache.kafka.connect.json.JsonConverter
These options are automatically configured in case they are not present in the KafkaConnect.spec.config
or KafkaConnectS2I.spec.config
properties.
There are exceptions to the forbidden options. You can use three allowed ssl
configuration options for client connection using a specific cipher suite for a TLS version. A cipher suite combines algorithms for secure connection and data transfer. You can also configure the ssl.endpoint.identification.algorithm
property to enable or disable hostname verification.
Example Kafka Connect configuration
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaConnect
metadata:
  name: my-connect
spec:
  # ...
  config:
    group.id: my-connect-cluster
    offset.storage.topic: my-connect-cluster-offsets
    config.storage.topic: my-connect-cluster-configs
    status.storage.topic: my-connect-cluster-status
    key.converter: org.apache.kafka.connect.json.JsonConverter
    value.converter: org.apache.kafka.connect.json.JsonConverter
    key.converter.schemas.enable: true
    value.converter.schemas.enable: true
    config.storage.replication.factor: 3
    offset.storage.replication.factor: 3
    status.storage.replication.factor: 3
    ssl.cipher.suites: "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"
    ssl.enabled.protocols: "TLSv1.2"
    ssl.protocol: "TLSv1.2"
    ssl.endpoint.identification.algorithm: HTTPS
  # ...
B.80.2. logging
Kafka Connect (and Kafka Connect with Source2Image support) has its own configurable loggers:
-
connect.root.logger.level
-
log4j.logger.org.reflections
Further loggers are added depending on the Kafka Connect plugins running.
Use a curl request from any Kafka broker pod to get a complete list of the Kafka Connect loggers that are running:
curl -s http://<connect-cluster-name>-connect-api:8083/admin/loggers/
Kafka Connect uses the Apache log4j
logger implementation.
Use the logging
property to configure loggers and logger levels.
You can set the log levels by specifying the logger and level directly (inline) or use a custom (external) ConfigMap. If a ConfigMap is used, you set the `logging.name`
property to the name of the ConfigMap containing the external logging configuration. Inside the ConfigMap, the logging configuration is described using log4j.properties
. For more information about log levels, see Apache logging services.
Here we see examples of inline
and external
logging.
Inline logging
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaConnect
spec:
  # ...
  logging:
    type: inline
    loggers:
      connect.root.logger.level: "INFO"
  # ...
External logging
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaConnect
spec:
  # ...
  logging:
    type: external
    name: customConfigMap
  # ...
Any available loggers that are not configured have their level set to OFF
.
If Kafka Connect was deployed using the Cluster Operator, changes to Kafka Connect logging levels are applied dynamically.
If you use external logging, a rolling update is triggered when logging appenders are changed.
Garbage collector (GC)
Garbage collector logging can also be enabled (or disabled) using the jvmOptions
property.
Property | Description |
---|---|
replicas | The number of pods in the Kafka Connect group. |
integer | |
version | The Kafka Connect version. Defaults to 2.6.0. Consult the user documentation to understand the process required to upgrade or downgrade the version. |
string | |
image | The docker image for the pods. |
string | |
bootstrapServers | Bootstrap servers to connect to. This should be given as a comma separated list of <hostname>:<port> pairs. |
string | |
tls | TLS configuration. |
authentication |
Authentication configuration for Kafka Connect. The type depends on the value of the `authentication.type` property within the given object, which must be one of [tls, scram-sha-512, plain, oauth]. |
| |
config | The Kafka Connect configuration. Properties with the following prefixes cannot be set: ssl., sasl., security., listeners, plugin.path, rest., bootstrap.servers, consumer.interceptor.classes, producer.interceptor.classes (with the exception of: ssl.endpoint.identification.algorithm, ssl.cipher.suites, ssl.protocol, ssl.enabled.protocols). |
map | |
resources | The maximum limits for CPU and memory resources and the requested initial resources. See external documentation of core/v1 resourcerequirements. |
livenessProbe | Pod liveness checking. |
readinessProbe | Pod readiness checking. |
jvmOptions | JVM Options for pods. |
affinity |
The property `affinity` has been deprecated. Use `template.pod.affinity` instead. The pod's affinity rules. See external documentation of core/v1 affinity. |
tolerations |
The property `tolerations` has been deprecated. Use `template.pod.tolerations` instead. The pod's tolerations. See external documentation of core/v1 toleration. |
Toleration array | |
logging |
Logging configuration for Kafka Connect. The type depends on the value of the `logging.type` property within the given object, which must be one of [inline, external]. |
metrics | The Prometheus JMX Exporter configuration. See https://github.com/prometheus/jmx_exporter for details of the structure of this configuration. |
map | |
tracing |
The configuration of tracing in Kafka Connect. The type depends on the value of the `tracing.type` property within the given object, which must be one of [jaeger]. |
template |
Template for Kafka Connect and Kafka Connect S2I resources. The template allows users to specify how the `Deployment`, `Pods` and `Service` are generated. |
externalConfiguration | Pass data from Secrets or ConfigMaps to the Kafka Connect pods and use them to configure connectors. |
clientRackInitImage |
The image of the init container used for initializing the `client.rack`. |
string | |
rack | Configuration of the node label which will be used as the client.rack consumer configuration. |
B.81. KafkaConnectTls
schema reference
Used in: KafkaConnectS2ISpec
, KafkaConnectSpec
Configures TLS trusted certificates for connecting Kafka Connect to the cluster.
B.81.1. trustedCertificates
Provide a list of secrets using the trustedCertificates
property.
Property | Description |
---|---|
trustedCertificates | Trusted certificates for TLS connection. |
|
B.82. KafkaClientAuthenticationTls
schema reference
Used in: KafkaBridgeSpec
, KafkaConnectS2ISpec
, KafkaConnectSpec
, KafkaMirrorMaker2ClusterSpec
, KafkaMirrorMakerConsumerSpec
, KafkaMirrorMakerProducerSpec
To use TLS client authentication, set the type
property to the value tls
. TLS client authentication uses a TLS certificate to authenticate.
B.82.1. certificateAndKey
The certificate is specified in the certificateAndKey
property and is always loaded from an OpenShift secret. In the secret, the certificate must be stored in X509 format under two different keys: public and private.
You can use the secrets created by the User Operator, or you can create your own TLS certificate file, with the keys used for authentication, then create a Secret
from the file:
oc create secret generic MY-SECRET \
  --from-file=MY-PUBLIC-TLS-CERTIFICATE-FILE.crt \
  --from-file=MY-PRIVATE.key
TLS client authentication can only be used with TLS connections.
Example TLS client authentication configuration
authentication:
  type: tls
  certificateAndKey:
    secretName: my-secret
    certificate: my-public-tls-certificate-file.crt
    key: private.key
The type
property is a discriminator that distinguishes the use of the type KafkaClientAuthenticationTls
from KafkaClientAuthenticationScramSha512
, KafkaClientAuthenticationPlain
, KafkaClientAuthenticationOAuth
. It must have the value tls
for the type KafkaClientAuthenticationTls
.
Property | Description |
---|---|
certificateAndKey |
Reference to the `Secret` which holds the certificate and private key pair. |
type |
Must be `tls`. |
string |
B.83. KafkaClientAuthenticationScramSha512
schema reference
Used in: KafkaBridgeSpec
, KafkaConnectS2ISpec
, KafkaConnectSpec
, KafkaMirrorMaker2ClusterSpec
, KafkaMirrorMakerConsumerSpec
, KafkaMirrorMakerProducerSpec
To configure SASL-based SCRAM-SHA-512 authentication, set the type
property to scram-sha-512
. The SCRAM-SHA-512 authentication mechanism requires a username and password.
B.83.1. username
Specify the username in the username
property.
B.83.2. passwordSecret
In the passwordSecret
property, specify a link to a Secret
containing the password.
You can use the secrets created by the User Operator.
If required, you can create a text file that contains the password, in cleartext, to use for authentication:
echo -n PASSWORD > MY-PASSWORD.txt
You can then create a Secret
from the text file, setting your own field name (key) for the password:
oc create secret generic MY-CONNECT-SECRET-NAME --from-file=MY-PASSWORD-FIELD-NAME=./MY-PASSWORD.txt
Example Secret for SCRAM-SHA-512 client authentication for Kafka Connect
apiVersion: v1
kind: Secret
metadata:
  name: my-connect-secret-name
type: Opaque
data:
  my-connect-password-field: LFTIyFRFlMmU2N2Tm
The secretName
property contains the name of the Secret
, and the password
property contains the name of the key under which the password is stored inside the Secret
.
Do not specify the actual password in the password
property.
Example SASL-based SCRAM-SHA-512 client authentication configuration for Kafka Connect
authentication:
  type: scram-sha-512
  username: my-connect-username
  passwordSecret:
    secretName: my-connect-secret-name
    password: my-connect-password-field
The type
property is a discriminator that distinguishes the use of the type KafkaClientAuthenticationScramSha512
from KafkaClientAuthenticationTls
, KafkaClientAuthenticationPlain
, KafkaClientAuthenticationOAuth
. It must have the value scram-sha-512
for the type KafkaClientAuthenticationScramSha512
.
Property | Description |
---|---|
passwordSecret |
Reference to the `Secret` which holds the password. |
type |
Must be `scram-sha-512`. |
string | |
username | Username used for the authentication. |
string |
B.84. PasswordSecretSource
schema reference
Used in: KafkaClientAuthenticationPlain
, KafkaClientAuthenticationScramSha512
Property | Description |
---|---|
password | The name of the key in the Secret under which the password is stored. |
string | |
secretName | The name of the Secret containing the password. |
string |
B.85. KafkaClientAuthenticationPlain
schema reference
Used in: KafkaBridgeSpec
, KafkaConnectS2ISpec
, KafkaConnectSpec
, KafkaMirrorMaker2ClusterSpec
, KafkaMirrorMakerConsumerSpec
, KafkaMirrorMakerProducerSpec
To configure SASL-based PLAIN authentication, set the type
property to plain
. SASL PLAIN authentication mechanism requires a username and password.
The SASL PLAIN mechanism will transfer the username and password across the network in cleartext. Only use SASL PLAIN authentication if TLS encryption is enabled.
B.85.1. username
Specify the username in the username
property.
B.85.2. passwordSecret
In the passwordSecret
property, specify a link to a Secret
containing the password.
You can use the secrets created by the User Operator.
If required, create a text file that contains the password, in cleartext, to use for authentication:
echo -n PASSWORD > MY-PASSWORD.txt
You can then create a Secret
from the text file, setting your own field name (key) for the password:
oc create secret generic MY-CONNECT-SECRET-NAME --from-file=MY-PASSWORD-FIELD-NAME=./MY-PASSWORD.txt
Example Secret for PLAIN client authentication for Kafka Connect
apiVersion: v1
kind: Secret
metadata:
  name: my-connect-secret-name
type: Opaque
data:
  my-password-field-name: LFTIyFRFlMmU2N2Tm
The secretName
property contains the name of the Secret
and the password
property contains the name of the key under which the password is stored inside the Secret
.
Do not specify the actual password in the password
property.
An example SASL based PLAIN client authentication configuration
authentication:
  type: plain
  username: my-connect-username
  passwordSecret:
    secretName: my-connect-secret-name
    password: my-password-field-name
The type
property is a discriminator that distinguishes the use of the type KafkaClientAuthenticationPlain
from KafkaClientAuthenticationTls
, KafkaClientAuthenticationScramSha512
, KafkaClientAuthenticationOAuth
. It must have the value plain
for the type KafkaClientAuthenticationPlain
.
Property | Description |
---|---|
passwordSecret |
Reference to the `Secret` which holds the password. |
type |
Must be `plain`. |
string | |
username | Username used for the authentication. |
string |
B.86. KafkaClientAuthenticationOAuth
schema reference
Used in: KafkaBridgeSpec
, KafkaConnectS2ISpec
, KafkaConnectSpec
, KafkaMirrorMaker2ClusterSpec
, KafkaMirrorMakerConsumerSpec
, KafkaMirrorMakerProducerSpec
To use OAuth client authentication, set the type
property to the value oauth
.
OAuth authentication can be configured using one of the following options:
- Client ID and secret
- Client ID and refresh token
- Access token
- TLS
Client ID and secret
You can configure the address of your authorization server in the tokenEndpointUri
property together with the client ID and client secret used in authentication. The OAuth client will connect to the OAuth server, authenticate using the client ID and secret and get an access token which it will use to authenticate with the Kafka broker. In the clientSecret
property, specify a link to a Secret
containing the client secret.
An example of OAuth client authentication using client ID and client secret
authentication:
  type: oauth
  tokenEndpointUri: https://sso.myproject.svc:8443/auth/realms/internal/protocol/openid-connect/token
  clientId: my-client-id
  clientSecret:
    secretName: my-client-oauth-secret
    key: client-secret
Client ID and refresh token
You can configure the address of your OAuth server in the tokenEndpointUri
property together with the OAuth client ID and refresh token. The OAuth client will connect to the OAuth server, authenticate using the client ID and refresh token and get an access token which it will use to authenticate with the Kafka broker. In the refreshToken
property, specify a link to a Secret
containing the refresh token.
An example of OAuth client authentication using client ID and refresh token
authentication:
  type: oauth
  tokenEndpointUri: https://sso.myproject.svc:8443/auth/realms/internal/protocol/openid-connect/token
  clientId: my-client-id
  refreshToken:
    secretName: my-refresh-token-secret
    key: refresh-token
Access token
You can configure the access token used for authentication with the Kafka broker directly. In this case, you do not specify the tokenEndpointUri
. In the accessToken
property, specify a link to a Secret
containing the access token.
An example of OAuth client authentication using only an access token
authentication:
  type: oauth
  accessToken:
    secretName: my-access-token-secret
    key: access-token
TLS
Accessing the OAuth server using the HTTPS protocol does not require any additional configuration as long as the TLS certificates used by it are signed by a trusted certification authority and its hostname is listed in the certificate.
If your OAuth server is using certificates which are self-signed or are signed by a certification authority which is not trusted, you can configure a list of trusted certificates in the custom resource. The tlsTrustedCertificates
property contains a list of secrets with key names under which the certificates are stored. The certificates must be stored in X509 format.
An example of TLS certificates provided
authentication:
  type: oauth
  tokenEndpointUri: https://sso.myproject.svc:8443/auth/realms/internal/protocol/openid-connect/token
  clientId: my-client-id
  refreshToken:
    secretName: my-refresh-token-secret
    key: refresh-token
  tlsTrustedCertificates:
    - secretName: oauth-server-ca
      certificate: tls.crt
The OAuth client will by default verify that the hostname of your OAuth server matches either the certificate subject or one of the alternative DNS names. If it is not required, you can disable the hostname verification.
An example of disabled TLS hostname verification
authentication:
  type: oauth
  tokenEndpointUri: https://sso.myproject.svc:8443/auth/realms/internal/protocol/openid-connect/token
  clientId: my-client-id
  refreshToken:
    secretName: my-refresh-token-secret
    key: refresh-token
  disableTlsHostnameVerification: true
The type
property is a discriminator that distinguishes the use of the type KafkaClientAuthenticationOAuth
from KafkaClientAuthenticationTls
, KafkaClientAuthenticationScramSha512
, KafkaClientAuthenticationPlain
. It must have the value oauth
for the type KafkaClientAuthenticationOAuth
.
Property | Description |
---|---|
accessToken | Link to OpenShift Secret containing the access token which was obtained from the authorization server. |
accessTokenIsJwt |
Configure whether access token should be treated as JWT. This should be set to false if the authorization server returns opaque tokens. Defaults to true. |
boolean | |
clientId | OAuth Client ID which the Kafka client can use to authenticate against the OAuth server and use the token endpoint URI. |
string | |
clientSecret | Link to OpenShift Secret containing the OAuth client secret which the Kafka client can use to authenticate against the OAuth server and use the token endpoint URI. |
disableTlsHostnameVerification |
Enable or disable TLS hostname verification. Default value is false. |
boolean | |
maxTokenExpirySeconds | Set or limit time-to-live of the access tokens to the specified number of seconds. This should be set if the authorization server returns opaque tokens. |
integer | |
refreshToken | Link to OpenShift Secret containing the refresh token which can be used to obtain access token from the authorization server. |
scope |
OAuth scope to use when authenticating against the authorization server. Some authorization servers require this to be set. The possible values depend on how the authorization server is configured. By default, scope is not specified when making the token endpoint request. |
string | |
tlsTrustedCertificates | Trusted certificates for TLS connection to the OAuth server. |
| |
tokenEndpointUri | Authorization server token endpoint URI. |
string | |
type |
Must be oauth. |
string |
B.87. JaegerTracing
schema reference
Used in: KafkaBridgeSpec
, KafkaConnectS2ISpec
, KafkaConnectSpec
, KafkaMirrorMaker2Spec
, KafkaMirrorMakerSpec
The type
property is a discriminator that distinguishes the use of the type JaegerTracing
from other subtypes which may be added in the future. It must have the value jaeger
for the type JaegerTracing
.
Property | Description |
---|---|
type |
Must be jaeger. |
string |
B.88. KafkaConnectTemplate
schema reference
Used in: KafkaConnectS2ISpec
, KafkaConnectSpec
, KafkaMirrorMaker2Spec
Property | Description |
---|---|
deployment |
Template for Kafka Connect Deployment. |
pod |
Template for Kafka Connect Pods. |
apiService |
Template for Kafka Connect API Service. |
connectContainer | Template for the Kafka Connect container. |
initContainer | Template for the Kafka init container. |
podDisruptionBudget |
Template for Kafka Connect PodDisruptionBudget. |
B.89. ExternalConfiguration
schema reference
Used in: KafkaConnectS2ISpec
, KafkaConnectSpec
, KafkaMirrorMaker2Spec
Configures external storage properties that define configuration options for Kafka Connect connectors.
You can mount ConfigMaps or Secrets into a Kafka Connect pod as environment variables or volumes. Volumes and environment variables are configured in the externalConfiguration
property in KafkaConnect.spec
and KafkaConnectS2I.spec
.
When applied, the environment variables and volumes are available for use when developing your connectors.
B.89.1. env
The env
property is used to specify one or more environment variables. These variables can contain a value from either a ConfigMap or a Secret.
Example Secret containing values for environment variables
apiVersion: v1
kind: Secret
metadata:
  name: aws-creds
type: Opaque
data:
  awsAccessKey: QUtJQVhYWFhYWFhYWFhYWFg=
  awsSecretAccessKey: Ylhsd1lYTnpkMjl5WkE=
The names of user-defined environment variables cannot start with KAFKA_
or STRIMZI_
.
To mount a value from a Secret to an environment variable, use the valueFrom
property and the secretKeyRef
.
Example environment variables set to values from a Secret
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaConnect
metadata:
  name: my-connect
spec:
  # ...
  externalConfiguration:
    env:
      - name: AWS_ACCESS_KEY_ID
        valueFrom:
          secretKeyRef:
            name: aws-creds
            key: awsAccessKey
      - name: AWS_SECRET_ACCESS_KEY
        valueFrom:
          secretKeyRef:
            name: aws-creds
            key: awsSecretAccessKey
A common use case for mounting Secrets to environment variables is when your connector needs to communicate with Amazon AWS and needs to read the AWS_ACCESS_KEY_ID
and AWS_SECRET_ACCESS_KEY
environment variables with credentials.
To mount a value from a ConfigMap to an environment variable, use configMapKeyRef
in the valueFrom
property as shown in the following example.
Example environment variables set to values from a ConfigMap
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaConnect
metadata:
  name: my-connect
spec:
  # ...
  externalConfiguration:
    env:
      - name: MY_ENVIRONMENT_VARIABLE
        valueFrom:
          configMapKeyRef:
            name: my-config-map
            key: my-key
B.89.2. volumes
You can also mount ConfigMaps or Secrets to a Kafka Connect pod as volumes.
Using volumes instead of environment variables is useful in the following scenarios:
- Mounting truststores or keystores with TLS certificates
- Mounting a properties file that is used to configure Kafka Connect connectors
Example Secret with properties
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
stringData:
  connector.properties: |-
    dbUsername: my-user
    dbPassword: my-password
In this example, a Secret named mysecret
is mounted to a volume named connector-config
. In the config
property, a configuration provider (FileConfigProvider
) is specified, which will load configuration values from external sources. The Kafka FileConfigProvider
is given the alias file
, and will read and extract database username and password property values from the file to use in the connector configuration.
Example external volumes set to values from a Secret
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaConnect
metadata:
  name: my-connect
spec:
  # ...
  config:
    config.providers: file 1
    config.providers.file.class: org.apache.kafka.common.config.provider.FileConfigProvider 2
  #...
  externalConfiguration:
    volumes:
      - name: connector-config 3
        secret:
          secretName: mysecret 4
- 1
- The alias for the configuration provider, which is used to define other configuration parameters. Use a comma-separated list if you want to add more than one provider.
- 2
- FileConfigProvider is the configuration provider that provides values from properties files. The parameter uses the alias from config.providers, taking the form config.providers.${alias}.class.
- 3
- The name of the volume containing the Secret. Each volume must specify a name in the name property and a reference to a ConfigMap or Secret.
- 4
- The name of the Secret.
The volumes are mounted inside the Kafka Connect containers in the path /opt/kafka/external-configuration/<volume-name>
. For example, the files from a volume named connector-config
would appear in the directory /opt/kafka/external-configuration/connector-config
.
The FileConfigProvider
is used to read the values from the mounted properties files in connector configurations.
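For illustration, a connector configuration can then reference values from the mounted properties file using the placeholder syntax supported by FileConfigProvider. This is a minimal sketch assuming the volume and property names from the example above; the connector option names (database.user, database.password) are hypothetical:

database.user: "${file:/opt/kafka/external-configuration/connector-config/connector.properties:dbUsername}"
database.password: "${file:/opt/kafka/external-configuration/connector-config/connector.properties:dbPassword}"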
Property | Description |
---|---|
env | Allows passing data from a Secret or ConfigMap to the Kafka Connect pods as environment variables. |
| |
volumes | Allows passing data from a Secret or ConfigMap to the Kafka Connect pods as volumes. |
B.90. ExternalConfigurationEnv
schema reference
Used in: ExternalConfiguration
Property | Description |
---|---|
name |
Name of the environment variable which will be passed to the Kafka Connect pods. The name of the environment variable cannot start with KAFKA_ or STRIMZI_. |
string | |
valueFrom | Value of the environment variable which will be passed to the Kafka Connect pods. It can be passed either as a reference to Secret or ConfigMap field. The field has to specify exactly one Secret or ConfigMap. |
B.91. ExternalConfigurationEnvVarSource
schema reference
Used in: ExternalConfigurationEnv
Property | Description |
---|---|
configMapKeyRef | Reference to a key in a ConfigMap. See external documentation of core/v1 configmapkeyselector. |
secretKeyRef | Reference to a key in a Secret. See external documentation of core/v1 secretkeyselector. |
B.92. ExternalConfigurationVolumeSource
schema reference
Used in: ExternalConfiguration
Property | Description |
---|---|
configMap | Reference to a key in a ConfigMap. Exactly one Secret or ConfigMap has to be specified. See external documentation of core/v1 configmapvolumesource. |
name | Name of the volume which will be added to the Kafka Connect pods. |
string | |
secret | Reference to a key in a Secret. Exactly one Secret or ConfigMap has to be specified. See external documentation of core/v1 secretvolumesource. |
B.93. KafkaConnectStatus
schema reference
Used in: KafkaConnect
Property | Description |
---|---|
conditions | List of status conditions. |
| |
observedGeneration | The generation of the CRD that was last reconciled by the operator. |
integer | |
url | The URL of the REST API endpoint for managing and monitoring Kafka Connect connectors. |
string | |
connectorPlugins | The list of connector plugins available in this Kafka Connect deployment. |
| |
labelSelector | Label selector for pods providing this resource. |
string | |
replicas | The current number of pods being used to provide this resource. |
integer |
B.94. ConnectorPlugin
schema reference
Used in: KafkaConnectS2IStatus
, KafkaConnectStatus
, KafkaMirrorMaker2Status
Property | Description |
---|---|
type |
The type of the connector plugin. The available types are sink and source. |
string | |
version | The version of the connector plugin. |
string | |
class | The class of the connector plugin. |
string |
B.95. KafkaConnectS2I
schema reference
Property | Description |
---|---|
spec | The specification of the Kafka Connect Source-to-Image (S2I) cluster. |
status | The status of the Kafka Connect Source-to-Image (S2I) cluster. |
B.96. KafkaConnectS2ISpec
schema reference
Used in: KafkaConnectS2I
Configures a Kafka Connect cluster with Source-to-Image (S2I) support.
When extending Kafka Connect with connector plugins on OpenShift (only), you can use OpenShift builds and S2I to create a container image that is used by the Kafka Connect deployment.
The configuration options are similar to Kafka Connect configuration using the KafkaConnectSpec
schema.
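For orientation, a minimal sketch of a KafkaConnectS2I resource follows; the resource name and bootstrap address are hypothetical:

apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaConnectS2I
metadata:
  name: my-connect-cluster
spec:
  replicas: 1
  bootstrapServers: my-cluster-kafka-bootstrap:9092
  # ...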
Property | Description |
---|---|
replicas | The number of pods in the Kafka Connect group. |
integer | |
image | The docker image for the pods. |
string | |
buildResources | CPU and memory resources to reserve. See external documentation of core/v1 resourcerequirements. |
livenessProbe | Pod liveness checking. |
readinessProbe | Pod readiness checking. |
jvmOptions | JVM Options for pods. |
affinity |
The property affinity is deprecated; use spec.template.pod.affinity instead. The pod's affinity rules. See external documentation of core/v1 affinity. |
logging |
Logging configuration for Kafka Connect. The type depends on the value of the logging.type property within the given object, which must be one of [inline, external]. |
metrics | The Prometheus JMX Exporter configuration. See https://github.com/prometheus/jmx_exporter for details of the structure of this configuration. |
map | |
template |
Template for Kafka Connect and Kafka Connect S2I resources. The template allows users to specify how the Deployment, Pods, and Service are generated. |
authentication |
Authentication configuration for Kafka Connect. The type depends on the value of the authentication.type property within the given object, which must be one of [tls, scram-sha-512, plain, oauth]. |
| |
bootstrapServers | Bootstrap servers to connect to. This should be given as a comma separated list of <hostname>:<port> pairs. |
string | |
clientRackInitImage |
The image of the init container used for initializing the client.rack. |
string | |
config | The Kafka Connect configuration. Properties with the following prefixes cannot be set: ssl., sasl., security., listeners, plugin.path, rest., bootstrap.servers, consumer.interceptor.classes, producer.interceptor.classes (with the exception of: ssl.endpoint.identification.algorithm, ssl.cipher.suites, ssl.protocol, ssl.enabled.protocols). |
map | |
externalConfiguration | Pass data from Secrets or ConfigMaps to the Kafka Connect pods and use them to configure connectors. |
insecureSourceRepository | When true this configures the source repository with the 'Local' reference policy and an import policy that accepts insecure source tags. |
boolean | |
rack | Configuration of the node label which will be used as the client.rack consumer configuration. |
resources | The maximum limits for CPU and memory resources and the requested initial resources. See external documentation of core/v1 resourcerequirements. |
tls | TLS configuration. |
tolerations |
The property tolerations is deprecated; use spec.template.pod.tolerations instead. The pod's tolerations. |
Toleration array | |
tracing |
The configuration of tracing in Kafka Connect. The type depends on the value of the tracing.type property within the given object, which must be one of [jaeger]. |
version | The Kafka Connect version. Defaults to 2.6.0. Consult the user documentation to understand the process required to upgrade or downgrade the version. |
string |
B.97. KafkaConnectS2IStatus
schema reference
Used in: KafkaConnectS2I
Property | Description |
---|---|
conditions | List of status conditions. |
| |
observedGeneration | The generation of the CRD that was last reconciled by the operator. |
integer | |
url | The URL of the REST API endpoint for managing and monitoring Kafka Connect connectors. |
string | |
connectorPlugins | The list of connector plugins available in this Kafka Connect deployment. |
| |
buildConfigName | The name of the build configuration. |
string | |
labelSelector | Label selector for pods providing this resource. |
string | |
replicas | The current number of pods being used to provide this resource. |
integer |
B.98. KafkaTopic
schema reference
Property | Description |
---|---|
spec | The specification of the topic. |
status | The status of the topic. |
B.99. KafkaTopicSpec
schema reference
Used in: KafkaTopic
Property | Description |
---|---|
partitions | The number of partitions the topic should have. This cannot be decreased after topic creation. It can be increased after topic creation, but it is important to understand the consequences that has, especially for topics with semantic partitioning. |
integer | |
replicas | The number of replicas the topic should have. |
integer | |
config | The topic configuration. |
map | |
topicName | The name of the topic. When absent this will default to the metadata.name of the topic. It is recommended to not set this unless the topic name is not a valid OpenShift resource name. |
string |
B.100. KafkaTopicStatus
schema reference
Used in: KafkaTopic
Property | Description |
---|---|
conditions | List of status conditions. |
| |
observedGeneration | The generation of the CRD that was last reconciled by the operator. |
integer |
B.101. KafkaUser
schema reference
Property | Description |
---|---|
spec | The specification of the user. |
status | The status of the Kafka User. |
B.102. KafkaUserSpec
schema reference
Used in: KafkaUser
Property | Description |
---|---|
authentication |
Authentication mechanism enabled for this Kafka user. The type depends on the value of the authentication.type property within the given object, which must be one of [tls, scram-sha-512]. |
| |
authorization |
Authorization rules for this Kafka user. The type depends on the value of the authorization.type property within the given object, which must be one of [simple]. |
quotas | Quotas on requests to control the broker resources used by clients. Network bandwidth and request rate quotas can be enforced. Kafka documentation for Kafka User quotas can be found at http://kafka.apache.org/documentation/#design_quotas. |
template |
Template to specify how Kafka User Secrets are generated. |
B.103. KafkaUserTlsClientAuthentication
schema reference
Used in: KafkaUserSpec
The type
property is a discriminator that distinguishes the use of the type KafkaUserTlsClientAuthentication
from KafkaUserScramSha512ClientAuthentication
. It must have the value tls
for the type KafkaUserTlsClientAuthentication
.
Property | Description |
---|---|
type |
Must be tls. |
string |
B.104. KafkaUserScramSha512ClientAuthentication
schema reference
Used in: KafkaUserSpec
The type
property is a discriminator that distinguishes the use of the type KafkaUserScramSha512ClientAuthentication
from KafkaUserTlsClientAuthentication
. It must have the value scram-sha-512
for the type KafkaUserScramSha512ClientAuthentication
.
Property | Description |
---|---|
type |
Must be scram-sha-512. |
string |
B.105. KafkaUserAuthorizationSimple
schema reference
Used in: KafkaUserSpec
The type
property is a discriminator that distinguishes the use of the type KafkaUserAuthorizationSimple
from other subtypes which may be added in the future. It must have the value simple
for the type KafkaUserAuthorizationSimple
.
Property | Description |
---|---|
type |
Must be simple. |
string | |
acls | List of ACL rules which should be applied to this user. |
|
B.106. AclRule
schema reference
Used in: KafkaUserAuthorizationSimple
Configures an access control rule for a KafkaUser
when brokers are using the AclAuthorizer
.
Example KafkaUser
configuration with authorization
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaUser
metadata:
  name: my-user
  labels:
    strimzi.io/cluster: my-cluster
spec:
  # ...
  authorization:
    type: simple
    acls:
      - resource:
          type: topic
          name: my-topic
          patternType: literal
        operation: Read
      - resource:
          type: topic
          name: my-topic
          patternType: literal
        operation: Describe
      - resource:
          type: group
          name: my-group
          patternType: prefix
        operation: Read
B.106.1. resource
Use the resource
property to specify the resource that the rule applies to.
Simple authorization supports four resource types, which are specified in the type
property:
- Topics (topic)
- Consumer Groups (group)
- Clusters (cluster)
- Transactional IDs (transactionalId)
For Topic, Group, and Transactional ID resources you can specify the name of the resource the rule applies to in the name
property.
Cluster type resources have no name.
A name is specified as a literal or a prefix using the patternType property.
- Literal names are taken exactly as they are specified in the name field.
- Prefix names use the value from the name as a prefix, and will apply the rule to all resources with names starting with the value.
B.106.2. type
The type of rule, which is to allow or deny an operation (deny is not currently supported).
The type
field is optional. If type
is unspecified, the ACL rule is treated as an allow
rule.
B.106.3. operation
Specify an operation
for the rule to allow or deny.
The following operations are supported:
- Read
- Write
- Delete
- Alter
- Describe
- All
- IdempotentWrite
- ClusterAction
- Create
- AlterConfigs
- DescribeConfigs
Only certain operations work with each resource.
For more details about AclAuthorizer
, ACLs and supported combinations of resources and operations, see Authorization and ACLs.
B.106.4. host
Use the host
property to specify a remote host from which the rule is allowed or denied.
Use an asterisk (*
) to allow or deny the operation from all hosts. The host
field is optional. If host
is unspecified, the *
value is used by default.
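For illustration, a hedged sketch of an ACL rule that allows reads on a topic from any host (the topic name is hypothetical):

acls:
  - resource:
      type: topic
      name: my-topic
      patternType: literal
    operation: Read
    host: "*"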
Property | Description |
---|---|
host | The host from which the action described in the ACL rule is allowed or denied. |
string | |
operation | Operation which will be allowed or denied. Supported operations are: Read, Write, Create, Delete, Alter, Describe, ClusterAction, AlterConfigs, DescribeConfigs, IdempotentWrite and All. |
string (one of [Read, Write, Delete, Alter, Describe, All, IdempotentWrite, ClusterAction, Create, AlterConfigs, DescribeConfigs]) | |
resource |
Indicates the resource for which the given ACL rule applies. The type depends on the value of the resource.type property within the given object, which must be one of [topic, group, cluster, transactionalId]. |
| |
type |
The type of the rule. Currently the only supported type is allow. |
string (one of [allow, deny]) |
B.107. AclRuleTopicResource
schema reference
Used in: AclRule
The type
property is a discriminator that distinguishes the use of the type AclRuleTopicResource
from AclRuleGroupResource
, AclRuleClusterResource
, AclRuleTransactionalIdResource
. It must have the value topic
for the type AclRuleTopicResource
.
Property | Description |
---|---|
type |
Must be topic. |
string | |
name |
Name of the resource for which the given ACL rule applies. Can be combined with the patternType field to use a prefix pattern. |
string | |
patternType |
Describes the pattern used in the resource field. The supported types are literal and prefix. With the literal pattern type, the resource field is used as a definition of the full name. With the prefix pattern type, the resource name is used only as a prefix. Default value is literal. |
string (one of [prefix, literal]) |
B.108. AclRuleGroupResource
schema reference
Used in: AclRule
The type
property is a discriminator that distinguishes the use of the type AclRuleGroupResource
from AclRuleTopicResource
, AclRuleClusterResource
, AclRuleTransactionalIdResource
. It must have the value group
for the type AclRuleGroupResource
.
Property | Description |
---|---|
type |
Must be group. |
string | |
name |
Name of the resource for which the given ACL rule applies. Can be combined with the patternType field to use a prefix pattern. |
string | |
patternType |
Describes the pattern used in the resource field. The supported types are literal and prefix. With the literal pattern type, the resource field is used as a definition of the full name. With the prefix pattern type, the resource name is used only as a prefix. Default value is literal. |
string (one of [prefix, literal]) |
B.109. AclRuleClusterResource
schema reference
Used in: AclRule
The type
property is a discriminator that distinguishes the use of the type AclRuleClusterResource
from AclRuleTopicResource
, AclRuleGroupResource
, AclRuleTransactionalIdResource
. It must have the value cluster
for the type AclRuleClusterResource
.
Property | Description |
---|---|
type |
Must be cluster. |
string |
B.110. AclRuleTransactionalIdResource
schema reference
Used in: AclRule
The type
property is a discriminator that distinguishes the use of the type AclRuleTransactionalIdResource
from AclRuleTopicResource
, AclRuleGroupResource
, AclRuleClusterResource
. It must have the value transactionalId
for the type AclRuleTransactionalIdResource
.
Property | Description |
---|---|
type |
Must be transactionalId. |
string | |
name |
Name of the resource for which the given ACL rule applies. Can be combined with the patternType field to use a prefix pattern. |
string | |
patternType |
Describes the pattern used in the resource field. The supported types are literal and prefix. With the literal pattern type, the resource field is used as a definition of the full name. With the prefix pattern type, the resource name is used only as a prefix. Default value is literal. |
string (one of [prefix, literal]) |
B.111. KafkaUserQuotas
schema reference
Used in: KafkaUserSpec
Kafka allows a user to set quotas
to control the use of resources by clients.
B.111.1. quotas
Quotas are split into two categories:
- Network usage quotas, which are defined as the byte rate threshold for each group of clients sharing a quota
- CPU utilization quotas, which are defined as the percentage of time a client can utilize on request handler I/O threads and network threads of each broker within a quota window
Using quotas for Kafka clients might be useful in a number of situations. Consider a wrongly configured Kafka producer which is sending requests at too high a rate. Such misconfiguration can cause a denial of service to other clients, so the problematic client ought to be blocked. By using a network limiting quota, it is possible to prevent this situation from significantly impacting other clients.
AMQ Streams supports user-level quotas, but not client-level quotas.
An example Kafka user quotas
spec:
  quotas:
    producerByteRate: 1048576
    consumerByteRate: 2097152
    requestPercentage: 55
For more information about Kafka user quotas, refer to the Apache Kafka documentation.
Property | Description |
---|---|
consumerByteRate | A quota on the maximum bytes per-second that each client group can fetch from a broker before the clients in the group are throttled. Defined on a per-broker basis. |
integer | |
producerByteRate | A quota on the maximum bytes per-second that each client group can publish to a broker before the clients in the group are throttled. Defined on a per-broker basis. |
integer | |
requestPercentage | A quota on the maximum CPU utilization of each client group as a percentage of network and I/O threads. |
integer |
B.112. KafkaUserTemplate
schema reference
Used in: KafkaUserSpec
Specify additional labels and annotations for the secret created by the User Operator.
An example showing the KafkaUserTemplate
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaUser
metadata:
  name: my-user
  labels:
    strimzi.io/cluster: my-cluster
spec:
  authentication:
    type: tls
  template:
    secret:
      metadata:
        labels:
          label1: value1
        annotations:
          anno1: value1
  # ...
Property | Description |
---|---|
secret |
Template for KafkaUser resources. The template allows users to specify how the Secret with user credentials is generated. |
B.113. KafkaUserStatus
schema reference
Used in: KafkaUser
Property | Description |
---|---|
conditions | List of status conditions. |
| |
observedGeneration | The generation of the CRD that was last reconciled by the operator. |
integer | |
username | Username. |
string | |
secret |
The name of the Secret where the credentials are stored. |
string |
B.114. KafkaMirrorMaker
schema reference
Property | Description |
---|---|
spec | The specification of Kafka MirrorMaker. |
status | The status of Kafka MirrorMaker. |
B.115. KafkaMirrorMakerSpec
schema reference
Used in: KafkaMirrorMaker
Configures Kafka MirrorMaker.
B.115.1. whitelist
Use the whitelist
property to configure a list of topics that Kafka MirrorMaker mirrors from the source to the target Kafka cluster.
The property allows any regular expression from the simplest case with a single topic name to complex patterns. For example, you can mirror topics A and B using "A|B" or all topics using "*". You can also pass multiple regular expressions separated by commas to the Kafka MirrorMaker.
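For illustration, a minimal sketch of a whitelist entry (the topic names are hypothetical):

apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaMirrorMaker
spec:
  # ...
  whitelist: "my-topic|other-topic"
  # ...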
B.115.2. KafkaMirrorMakerConsumerSpec
and KafkaMirrorMakerProducerSpec
Use the KafkaMirrorMakerConsumerSpec
and KafkaMirrorMakerProducerSpec
to configure source (consumer) and target (producer) clusters.
Kafka MirrorMaker always works together with two Kafka clusters (source and target). To establish a connection, the bootstrap servers for the source and the target Kafka clusters are specified as comma-separated lists of HOSTNAME:PORT
pairs. Each comma-separated list contains one or more Kafka brokers or a Service
pointing to Kafka brokers specified as a HOSTNAME:PORT
pair.
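For illustration, a hedged sketch of the consumer and producer bootstrap server configuration; the cluster names follow the CLUSTER-NAME-kafka-bootstrap service naming convention and are hypothetical:

apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaMirrorMaker
spec:
  # ...
  consumer:
    bootstrapServers: my-source-cluster-kafka-bootstrap:9092
  producer:
    bootstrapServers: my-target-cluster-kafka-bootstrap:9092
  # ...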
B.115.3. logging
Kafka MirrorMaker has its own configurable logger:
- mirrormaker.root.logger
MirrorMaker uses the Apache log4j
logger implementation.
Use the logging
property to configure loggers and logger levels.
You can set the log levels by specifying the logger and level directly (inline) or by using a custom (external) ConfigMap. If a ConfigMap is used, you set the logging.name property to the name of the ConfigMap containing the external logging configuration. Inside the ConfigMap, the logging configuration is described using log4j.properties. For more information about log levels, see Apache logging services.
Here we see examples of inline
and external
logging:
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaMirrorMaker
spec:
  # ...
  logging:
    type: inline
    loggers:
      mirrormaker.root.logger: "INFO"
  # ...
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaMirrorMaker
spec:
  # ...
  logging:
    type: external
    name: customConfigMap
  # ...
Garbage collector (GC)
Garbage collector logging can also be enabled (or disabled) using the jvmOptions
property.
Property | Description |
---|---|
replicas |
The number of pods in the Deployment. |
integer | |
image | The docker image for the pods. |
string | |
whitelist |
List of topics which are included for mirroring. This option allows any regular expression using Java-style regular expressions. Mirroring two topics named A and B is achieved by using the whitelist "A\|B". Or, as a special case, all topics can be mirrored by using the whitelist "*". You can also specify multiple regular expressions separated by commas. |
string | |
consumer | Configuration of source cluster. |
producer | Configuration of target cluster. |
resources | CPU and memory resources to reserve. See external documentation of core/v1 resourcerequirements. |
affinity |
The property affinity is deprecated; use spec.template.pod.affinity instead. The pod's affinity rules. See external documentation of core/v1 affinity. |
tolerations |
The property tolerations is deprecated; use spec.template.pod.tolerations instead. The pod's tolerations. |
Toleration array | |
jvmOptions | JVM Options for pods. |
logging |
Logging configuration for MirrorMaker. The type depends on the value of the logging.type property within the given object, which must be one of [inline, external]. |
metrics | The Prometheus JMX Exporter configuration. See JMX Exporter documentation for details of the structure of this configuration. |
map | |
tracing |
The configuration of tracing in Kafka MirrorMaker. The type depends on the value of the tracing.type property within the given object, which must be one of [jaeger]. |
template |
Template to specify how Kafka MirrorMaker resources, Deployments and Pods, are generated. |
livenessProbe | Pod liveness checking. |
readinessProbe | Pod readiness checking. |
version | The Kafka MirrorMaker version. Defaults to 2.6.0. Consult the documentation to understand the process required to upgrade or downgrade the version. |
string |
B.116. KafkaMirrorMakerConsumerSpec
schema reference
Used in: KafkaMirrorMakerSpec
Configures a MirrorMaker consumer.
B.116.1. numStreams
Use the consumer.numStreams
property to configure the number of streams for the consumer.
You can increase the throughput in mirroring topics by increasing the number of consumer threads. Consumer threads belong to the consumer group specified for Kafka MirrorMaker. Topic partitions are assigned across the consumer threads, which consume messages in parallel.
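A minimal sketch of this setting (the value is illustrative):

spec:
  # ...
  consumer:
    numStreams: 2
  # ...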
B.116.2. offsetCommitInterval
Use the consumer.offsetCommitInterval
property to configure an offset auto-commit interval for the consumer.
You can specify the regular time interval at which an offset is committed after Kafka MirrorMaker has consumed data from the source Kafka cluster. The time interval is set in milliseconds, with a default value of 60,000.
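A minimal sketch of this setting (the value is illustrative):

spec:
  # ...
  consumer:
    offsetCommitInterval: 75000
  # ...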
B.116.3. config
Use the consumer.config
properties to configure Kafka options for the consumer.
The config
property contains the Kafka MirrorMaker consumer configuration options as keys, with values set in one of the following JSON types:
- String
- Number
- Boolean
For client connection using a specific cipher suite for a TLS version, you can configure allowed ssl
properties. You can also configure the ssl.endpoint.identification.algorithm
property to enable or disable hostname verification.
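For illustration, a hedged sketch of consumer options in the config property; the keys shown are standard Apache Kafka consumer options chosen as examples, and the values are illustrative:

spec:
  # ...
  consumer:
    config:
      max.poll.records: 100
      receive.buffer.bytes: 32768
  # ...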
Exceptions
You can specify and configure the options listed in the Apache Kafka configuration documentation for consumers.
However, there are exceptions for options automatically configured and managed directly by AMQ Streams related to:
- Kafka cluster bootstrap address
- Security (encryption, authentication, and authorization)
- Consumer group identifier
- Interceptors
Specifically, all configuration options with keys equal to or starting with one of the following strings are forbidden:
- bootstrap.servers
- group.id
- interceptor.classes
- ssl. (not including specific exceptions)
- sasl.
- security.
When a forbidden option is present in the config
property, it is ignored and a warning message is printed to the Cluster Operator log file. All other options are passed to Kafka MirrorMaker.
The Cluster Operator does not validate keys or values in the provided config
object. When an invalid configuration is provided, the Kafka MirrorMaker might not start or might become unstable. In such cases, the configuration in the KafkaMirrorMaker.spec.consumer.config
object should be fixed and the Cluster Operator will roll out the new configuration for Kafka MirrorMaker.
B.116.4. groupId
Use the consumer.groupId
property to configure a consumer group identifier for the consumer.
Kafka MirrorMaker uses a Kafka consumer to consume messages, behaving like any other Kafka consumer client. Messages consumed from the source Kafka cluster are mirrored to a target Kafka cluster. A group identifier is required, as the consumer needs to be part of a consumer group for the assignment of partitions.
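A minimal sketch of this setting (the group name is hypothetical):

spec:
  # ...
  consumer:
    groupId: "my-mirror-maker-group"
  # ...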
Property | Description |
---|---|
numStreams | Specifies the number of consumer stream threads to create. |
integer | |
offsetCommitInterval | Specifies the offset auto-commit interval in ms. Default value is 60000. |
integer | |
groupId | A unique string that identifies the consumer group this consumer belongs to. |
string | |
bootstrapServers | A list of host:port pairs for establishing the initial connection to the Kafka cluster. |
string | |
authentication |
Authentication configuration for connecting to the cluster. The type depends on the value of the authentication.type property within the given object, which must be one of [tls, scram-sha-512, plain, oauth]. |
| |
config | The MirrorMaker consumer config. Properties with the following prefixes cannot be set: ssl., bootstrap.servers, group.id, sasl., security., interceptor.classes (with the exception of: ssl.endpoint.identification.algorithm, ssl.cipher.suites, ssl.protocol, ssl.enabled.protocols). |
map | |
tls | TLS configuration for connecting MirrorMaker to the cluster. |
B.117. KafkaMirrorMakerTls
schema reference
Used in: KafkaMirrorMakerConsumerSpec
, KafkaMirrorMakerProducerSpec
Configures TLS trusted certificates for connecting MirrorMaker to the cluster.
B.117.1. trustedCertificates
Provide a list of secrets using the trustedCertificates
property.
Property | Description |
---|---|
trustedCertificates | Trusted certificates for TLS connection. |
|
B.118. KafkaMirrorMakerProducerSpec
schema reference
Used in: KafkaMirrorMakerSpec
Configures a MirrorMaker producer.
B.118.1. abortOnSendFailure
Use the producer.abortOnSendFailure
property to configure how to handle message send failure from the producer.
By default, if an error occurs when sending a message from Kafka MirrorMaker to a Kafka cluster:
- The Kafka MirrorMaker container is terminated in OpenShift.
- The container is then recreated.
If the abortOnSendFailure
option is set to false
, message sending errors are ignored.
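A minimal sketch of this setting:

spec:
  # ...
  producer:
    abortOnSendFailure: false
  # ...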
B.118.2. config
Use the producer.config
properties to configure Kafka options for the producer.
The config
property contains the Kafka MirrorMaker producer configuration options as keys, with values set in one of the following JSON types:
- String
- Number
- Boolean
For client connection using a specific cipher suite for a TLS version, you can configure allowed ssl
properties. You can also configure the ssl.endpoint.identification.algorithm
property to enable or disable hostname verification.
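For illustration, a hedged sketch of producer options in the config property; the keys shown are standard Apache Kafka producer options chosen as examples, and the values are illustrative:

spec:
  # ...
  producer:
    config:
      compression.type: gzip
      batch.size: 8192
  # ...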
Exceptions
You can specify and configure the options listed in the Apache Kafka configuration documentation for producers.
However, there are exceptions for options automatically configured and managed directly by AMQ Streams related to:
- Kafka cluster bootstrap address
- Security (encryption, authentication, and authorization)
- Interceptors
Specifically, all configuration options with keys equal to or starting with one of the following strings are forbidden:
- bootstrap.servers
- interceptor.classes
- ssl. (not including specific exceptions)
- sasl.
- security.
When a forbidden option is present in the config
property, it is ignored and a warning message is printed to the Cluster Operator log file. All other options are passed to Kafka MirrorMaker.
The Cluster Operator does not validate keys or values in the provided config
object. When an invalid configuration is provided, the Kafka MirrorMaker might not start or might become unstable. In such cases, the configuration in the KafkaMirrorMaker.spec.producer.config
object should be fixed and the Cluster Operator will roll out the new configuration for Kafka MirrorMaker.
Property | Description |
---|---|
bootstrapServers | A list of host:port pairs for establishing the initial connection to the Kafka cluster. |
string | |
abortOnSendFailure |
Flag to set the MirrorMaker to exit on a failed send. Default value is true. |
boolean | |
authentication |
Authentication configuration for connecting to the cluster. The type depends on the value of the authentication.type property within the given object, which must be one of [tls, scram-sha-512, plain, oauth]. |
| |
config | The MirrorMaker producer config. Properties with the following prefixes cannot be set: ssl., bootstrap.servers, sasl., security., interceptor.classes (with the exception of: ssl.endpoint.identification.algorithm, ssl.cipher.suites, ssl.protocol, ssl.enabled.protocols). |
map | |
tls | TLS configuration for connecting MirrorMaker to the cluster. |
B.119. KafkaMirrorMakerTemplate
schema reference
Used in: KafkaMirrorMakerSpec
Property | Description |
---|---|
deployment |
Template for Kafka MirrorMaker Deployment. |
pod |
Template for Kafka MirrorMaker Pods. |
mirrorMakerContainer | Template for Kafka MirrorMaker container. |
podDisruptionBudget |
Template for Kafka MirrorMaker PodDisruptionBudget. |
B.120. KafkaMirrorMakerStatus
schema reference
Used in: KafkaMirrorMaker
Property | Description |
---|---|
conditions | List of status conditions. |
| |
observedGeneration | The generation of the CRD that was last reconciled by the operator. |
integer | |
labelSelector | Label selector for pods providing this resource. |
string | |
replicas | The current number of pods being used to provide this resource. |
integer |
B.121. KafkaBridge
schema reference
Property | Description |
---|---|
spec | The specification of the Kafka Bridge. |
status | The status of the Kafka Bridge. |
B.122. KafkaBridgeSpec
schema reference
Used in: KafkaBridge
Configures a Kafka Bridge cluster.
Configuration options relate to:
- Kafka cluster bootstrap address
- Security (Encryption, Authentication, and Authorization)
- Consumer configuration
- Producer configuration
- HTTP configuration
B.122.1. logging
Kafka Bridge has its own configurable loggers:
- logger.bridge
- logger.<operation-id>
You can replace <operation-id>
in the logger.<operation-id>
logger to set log levels for specific operations:
- createConsumer
- deleteConsumer
- subscribe
- unsubscribe
- poll
- assign
- commit
- send
- sendToPartition
- seekToBeginning
- seekToEnd
- seek
- healthy
- ready
- openapi
Each operation is defined according to the OpenAPI specification, and has a corresponding API endpoint through which the bridge receives requests from HTTP clients. You can change the log level on each endpoint to create fine-grained logging information about the incoming and outgoing HTTP requests.
Each logger has to be configured by assigning it a name of the form http.openapi.operation.<operation-id>. For example, configuring the logging level for the send operation logger means defining the following:
logger.send.name = http.openapi.operation.send
logger.send.level = DEBUG
Kafka Bridge uses the Apache log4j2
logger implementation. Loggers are defined in the log4j2.properties
file, which has the following default configuration for healthy
and ready
endpoints:
logger.healthy.name = http.openapi.operation.healthy
logger.healthy.level = WARN
logger.ready.name = http.openapi.operation.ready
logger.ready.level = WARN
The log level of all other operations is set to INFO
by default.
Use the logging
property to configure loggers and logger levels.
You can set the log levels by specifying the logger and level directly (inline) or by using a custom (external) ConfigMap. If a ConfigMap is used, you set the logging.name property to the name of the ConfigMap containing the external logging configuration. Inside the ConfigMap, the logging configuration is described using log4j2.properties. For more information about log levels, see Apache logging services.
Here we see examples of inline
and external
logging.
Inline logging
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaBridge
spec:
  # ...
  logging:
    type: inline
    loggers:
      logger.bridge.level: "INFO"
      # enabling DEBUG just for send operation
      logger.send.name: "http.openapi.operation.send"
      logger.send.level: "DEBUG"
  # ...
External logging
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaBridge
spec:
  # ...
  logging:
    type: external
    name: customConfigMap
  # ...
Any available loggers that are not configured have their level set to OFF
.
If the Kafka Bridge was deployed using the Cluster Operator, changes to Kafka Bridge logging levels are applied dynamically.
If you use external logging, a rolling update is triggered when logging appenders are changed.
Garbage collector (GC)
Garbage collector logging can also be enabled (or disabled) using the jvmOptions
property.
Property | Description |
---|---|
replicas |
The number of pods in the Deployment. |
integer | |
image | The docker image for the pods. |
string | |
bootstrapServers | A list of host:port pairs for establishing the initial connection to the Kafka cluster. |
string | |
tls | TLS configuration for connecting Kafka Bridge to the cluster. |
authentication |
Authentication configuration for connecting to the cluster. The type depends on the value of the authentication.type property within the given object, which must be one of [tls, scram-sha-512, plain, oauth]. |
| |
http | The HTTP related configuration. |
consumer | Kafka consumer related configuration. |
producer | Kafka producer related configuration. |
resources | CPU and memory resources to reserve. See external documentation of core/v1 resourcerequirements. |
jvmOptions | JVM Options for pods (currently not supported). |
logging |
Logging configuration for Kafka Bridge. The type depends on the value of the logging.type property within the given object, which must be one of [inline, external]. |
enableMetrics | Enable the metrics for the Kafka Bridge. Default is false. |
boolean | |
livenessProbe | Pod liveness checking. |
readinessProbe | Pod readiness checking. |
template |
Template for Kafka Bridge resources. The template allows users to specify how the Deployment and Pods are generated. |
tracing |
The configuration of tracing in Kafka Bridge. The type depends on the value of the tracing.type property within the given object, which must be one of [jaeger]. |
B.123. KafkaBridgeTls
schema reference
Used in: KafkaBridgeSpec
Property | Description |
---|---|
trustedCertificates | Trusted certificates for TLS connection. |
|
B.124. KafkaBridgeHttpConfig
schema reference
Used in: KafkaBridgeSpec
Configures HTTP access to a Kafka cluster for the Kafka Bridge.
The default HTTP configuration is for the Kafka Bridge to listen on port 8080.
B.124.1. cors
As well as enabling HTTP access to a Kafka cluster, HTTP properties provide the capability to enable and define access control for the Kafka Bridge through Cross-Origin Resource Sharing (CORS). CORS is an HTTP mechanism that allows browser access to selected resources from more than one origin. To configure CORS, you define a list of allowed resource origins and HTTP access methods. For the origins, you can use a URL or a Java regular expression.
Example Kafka Bridge HTTP configuration
apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaBridge
metadata:
  name: my-bridge
spec:
  # ...
  http:
    port: 8080
    cors:
      allowedOrigins: "https://strimzi.io"
      allowedMethods: "GET,POST,PUT,DELETE,OPTIONS,PATCH"
  # ...
Property | Description |
---|---|
port | The port on which the server is listening. |
integer | |
cors | CORS configuration for the HTTP Bridge. |
B.125. KafkaBridgeHttpCors
schema reference
Used in: KafkaBridgeHttpConfig
Property | Description |
---|---|
allowedOrigins | List of allowed origins. Java regular expressions can be used. |
string array | |
allowedMethods | List of allowed HTTP methods. |
string array |
B.126. KafkaBridgeConsumerSpec
schema reference
Used in: KafkaBridgeSpec
Configures consumer options for the Kafka Bridge as keys.
The values can be one of the following JSON types:
- String
- Number
- Boolean
You can specify and configure the options listed in the Apache Kafka configuration documentation for consumers with the exception of those options which are managed directly by AMQ Streams. Specifically, all configuration options with keys equal to or starting with one of the following strings are forbidden:
- ssl.
- sasl.
- security.
- bootstrap.servers
- group.id
When one of the forbidden options is present in the config
property, it is ignored and a warning message will be printed to the Cluster Operator log file. All other options will be passed to the Kafka Bridge.
The Cluster Operator does not validate keys or values in the config
object. If an invalid configuration is provided, the Kafka Bridge cluster might not start or might become unstable. Fix the configuration so that the Cluster Operator can roll out the new configuration to all Kafka Bridge nodes.
There are exceptions to the forbidden options. For client connection using a specific cipher suite for a TLS version, you can configure allowed ssl
properties.
Example Kafka Bridge consumer configuration
apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaBridge
metadata:
  name: my-bridge
spec:
  # ...
  consumer:
    config:
      auto.offset.reset: earliest
      enable.auto.commit: true
      ssl.cipher.suites: "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"
      ssl.enabled.protocols: "TLSv1.2"
      ssl.protocol: "TLSv1.2"
      ssl.endpoint.identification.algorithm: HTTPS
  # ...
Property | Description |
---|---|
config | The Kafka consumer configuration used for consumer instances created by the bridge. Properties with the following prefixes cannot be set: ssl., bootstrap.servers, group.id, sasl., security. (with the exception of: ssl.endpoint.identification.algorithm, ssl.cipher.suites, ssl.protocol, ssl.enabled.protocols). |
map |
B.127. KafkaBridgeProducerSpec
schema reference
Used in: KafkaBridgeSpec
Configures producer options for the Kafka Bridge as keys.
The values can be one of the following JSON types:
- String
- Number
- Boolean
You can specify and configure the options listed in the Apache Kafka configuration documentation for producers with the exception of those options which are managed directly by AMQ Streams. Specifically, all configuration options with keys equal to or starting with one of the following strings are forbidden:
- ssl.
- sasl.
- security.
- bootstrap.servers
When one of the forbidden options is present in the config
property, it is ignored and a warning message will be printed to the Cluster Operator log file. All other options will be passed to the Kafka Bridge.
The Cluster Operator does not validate keys or values in the config
object. If an invalid configuration is provided, the Kafka Bridge cluster might not start or might become unstable. Fix the configuration so that the Cluster Operator can roll out the new configuration to all Kafka Bridge nodes.
There are exceptions to the forbidden options. For client connection using a specific cipher suite for a TLS version, you can configure allowed ssl
properties.
Example Kafka Bridge producer configuration
apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaBridge
metadata:
  name: my-bridge
spec:
  # ...
  producer:
    config:
      acks: 1
      delivery.timeout.ms: 300000
      ssl.cipher.suites: "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"
      ssl.enabled.protocols: "TLSv1.2"
      ssl.protocol: "TLSv1.2"
      ssl.endpoint.identification.algorithm: HTTPS
  # ...
Property | Description |
---|---|
config | The Kafka producer configuration used for producer instances created by the bridge. Properties with the following prefixes cannot be set: ssl., bootstrap.servers, sasl., security. (with the exception of: ssl.endpoint.identification.algorithm, ssl.cipher.suites, ssl.protocol, ssl.enabled.protocols). |
map |
B.128. KafkaBridgeTemplate
schema reference
Used in: KafkaBridgeSpec
Property | Description |
---|---|
deployment |
Template for Kafka Bridge Deployment. |
pod |
Template for Kafka Bridge Pods. |
apiService |
Template for Kafka Bridge API Service. |
bridgeContainer | Template for the Kafka Bridge container. |
podDisruptionBudget |
Template for Kafka Bridge PodDisruptionBudget. |
B.129. KafkaBridgeStatus
schema reference
Used in: KafkaBridge
Property | Description |
---|---|
conditions | List of status conditions. |
| |
observedGeneration | The generation of the CRD that was last reconciled by the operator. |
integer | |
url | The URL at which external client applications can access the Kafka Bridge. |
string | |
labelSelector | Label selector for pods providing this resource. |
string | |
replicas | The current number of pods being used to provide this resource. |
integer |
B.130. KafkaConnector
schema reference
Property | Description |
---|---|
spec | The specification of the Kafka Connector. |
status | The status of the Kafka Connector. |
B.131. KafkaConnectorSpec
schema reference
Used in: KafkaConnector
Property | Description |
---|---|
class | The Class for the Kafka Connector. |
string | |
tasksMax | The maximum number of tasks for the Kafka Connector. |
integer | |
config | The Kafka Connector configuration. The following properties cannot be set: connector.class, tasks.max. |
map | |
pause | Whether the connector should be paused. Defaults to false. |
boolean |
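For orientation, a hedged sketch of a KafkaConnector resource follows. The FileStreamSourceConnector class ships with Apache Kafka; the resource name, cluster label, file path, and topic are hypothetical:

apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaConnector
metadata:
  name: my-source-connector
  labels:
    strimzi.io/cluster: my-connect-cluster
spec:
  class: org.apache.kafka.connect.file.FileStreamSourceConnector
  tasksMax: 2
  config:
    file: "/opt/kafka/LICENSE"
    topic: my-topic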
B.132. KafkaConnectorStatus
schema reference
Used in: KafkaConnector
Property | Description |
---|---|
conditions | List of status conditions. |
| |
observedGeneration | The generation of the CRD that was last reconciled by the operator. |
integer | |
connectorStatus | The connector status, as reported by the Kafka Connect REST API. |
map | |
tasksMax | The maximum number of tasks for the Kafka Connector. |
integer |
B.133. KafkaMirrorMaker2
schema reference
Property | Description |
---|---|
spec | The specification of the Kafka MirrorMaker 2.0 cluster. |
status | The status of the Kafka MirrorMaker 2.0 cluster. |
B.134. KafkaMirrorMaker2Spec
schema reference
Used in: KafkaMirrorMaker2
Property | Description |
---|---|
replicas | The number of pods in the Kafka Connect group. |
integer | |
version | The Kafka Connect version. Defaults to 2.6.0. Consult the user documentation to understand the process required to upgrade or downgrade the version. |
string | |
image | The docker image for the pods. |
string | |
connectCluster |
The cluster alias used for Kafka Connect. The alias must match a cluster in the list at spec.clusters. |
string | |
clusters | Kafka clusters for mirroring. |
mirrors | Configuration of the MirrorMaker 2.0 connectors. |
resources | The maximum limits for CPU and memory resources and the requested initial resources. See external documentation of core/v1 resourcerequirements. |
livenessProbe | Pod liveness checking. |
readinessProbe | Pod readiness checking. |
jvmOptions | JVM Options for pods. |
affinity |
The property affinity is deprecated; use spec.template.pod.affinity instead. The pod's affinity rules. See external documentation of core/v1 affinity. |
tolerations |
The property tolerations is deprecated; use spec.template.pod.tolerations instead. The pod's tolerations. |
Toleration array | |
logging |
Logging configuration for Kafka Connect. The type depends on the value of the logging.type property within the given object, which must be one of [inline, external]. |
metrics | The Prometheus JMX Exporter configuration. See https://github.com/prometheus/jmx_exporter for details of the structure of this configuration. |
map | |
tracing |
The configuration of tracing in Kafka Connect. The type depends on the value of the tracing.type property within the given object, which must be one of [jaeger]. |
template |
Template for Kafka Connect and Kafka Connect S2I resources. The template allows users to specify how the Deployment, Pods, and Service are generated. |
externalConfiguration | Pass data from Secrets or ConfigMaps to the Kafka Connect pods and use them to configure connectors. |
B.135. KafkaMirrorMaker2ClusterSpec
schema reference
Used in: KafkaMirrorMaker2Spec
Configures Kafka clusters for mirroring.
B.135.1. config
Use the config
properties to configure Kafka options.
Standard Apache Kafka configuration may be provided, restricted to those properties not managed directly by AMQ Streams.
For client connection using a specific cipher suite for a TLS version, you can configure allowed ssl
properties. You can also configure the ssl.endpoint.identification.algorithm
property to enable or disable hostname verification.
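For illustration, a hedged sketch of a clusters entry; the alias and bootstrap address are hypothetical, and the config keys shown are standard Kafka Connect worker options with illustrative values:

spec:
  # ...
  clusters:
    - alias: "my-target-cluster"
      bootstrapServers: my-target-cluster-kafka-bootstrap:9092
      config:
        config.storage.replication.factor: 1
        offset.storage.replication.factor: 1
        status.storage.replication.factor: 1
  # ...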
Property | Description |
---|---|
alias | Alias used to reference the Kafka cluster. |
string | |
bootstrapServers |
A comma-separated list of host:port pairs for establishing the connection to the Kafka cluster. |
string | |
config | The MirrorMaker 2.0 cluster config. Properties with the following prefixes cannot be set: ssl., sasl., security., listeners, plugin.path, rest., bootstrap.servers, consumer.interceptor.classes, producer.interceptor.classes (with the exception of: ssl.endpoint.identification.algorithm, ssl.cipher.suites, ssl.protocol, ssl.enabled.protocols). |
map | |
tls | TLS configuration for connecting MirrorMaker 2.0 connectors to a cluster. |
authentication |
Authentication configuration for connecting to the cluster. The type depends on the value of the authentication.type property within the given object, which must be one of [tls, scram-sha-512, plain, oauth]. |
|
B.136. KafkaMirrorMaker2Tls
schema reference
Used in: KafkaMirrorMaker2ClusterSpec
Property | Description |
---|---|
trustedCertificates | Trusted certificates for TLS connection. |
|
B.137. KafkaMirrorMaker2MirrorSpec
schema reference
Used in: KafkaMirrorMaker2Spec
Property | Description |
---|---|
sourceCluster |
The alias of the source cluster used by the Kafka MirrorMaker 2.0 connectors. The alias must match a cluster in the list at spec.clusters. |
string | |
targetCluster |
The alias of the target cluster used by the Kafka MirrorMaker 2.0 connectors. The alias must match a cluster in the list at spec.clusters. |
string | |
sourceConnector | The specification of the Kafka MirrorMaker 2.0 source connector. |
checkpointConnector | The specification of the Kafka MirrorMaker 2.0 checkpoint connector. |
heartbeatConnector | The specification of the Kafka MirrorMaker 2.0 heartbeat connector. |
topicsPattern | A regular expression matching the topics to be mirrored, for example, "topic1|topic2|topic3". Comma-separated lists are also supported. |
string | |
topicsBlacklistPattern | A regular expression matching the topics to exclude from mirroring. Comma-separated lists are also supported. |
string | |
groupsPattern | A regular expression matching the consumer groups to be mirrored. Comma-separated lists are also supported. |
string | |
groupsBlacklistPattern | A regular expression matching the consumer groups to exclude from mirroring. Comma-separated lists are also supported. |
string |
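For illustration, a hedged sketch of a mirrors entry; the aliases must match clusters defined under spec.clusters, and the patterns and replication factor shown are hypothetical:

spec:
  # ...
  mirrors:
    - sourceCluster: "my-source-cluster"
      targetCluster: "my-target-cluster"
      sourceConnector:
        config:
          replication.factor: 1
      topicsPattern: ".*"
      groupsPattern: ".*"
  # ...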
B.138. KafkaMirrorMaker2ConnectorSpec
schema reference
Used in: KafkaMirrorMaker2MirrorSpec
Property | Description |
---|---|
tasksMax | The maximum number of tasks for the Kafka Connector. |
integer | |
config | The Kafka Connector configuration. The following properties cannot be set: connector.class, tasks.max. |
map | |
pause | Whether the connector should be paused. Defaults to false. |
boolean |
B.139. KafkaMirrorMaker2Status
schema reference
Used in: KafkaMirrorMaker2
Property | Description |
---|---|
conditions | List of status conditions. |
| |
observedGeneration | The generation of the CRD that was last reconciled by the operator. |
integer | |
url | The URL of the REST API endpoint for managing and monitoring Kafka Connect connectors. |
string | |
connectorPlugins | The list of connector plugins available in this Kafka Connect deployment. |
| |
connectors | List of MirrorMaker 2.0 connector statuses, as reported by the Kafka Connect REST API. |
map array | |
labelSelector | Label selector for pods providing this resource. |
string | |
replicas | The current number of pods being used to provide this resource. |
integer |
B.140. KafkaRebalance
schema reference
Property | Description |
---|---|
spec | The specification of the Kafka rebalance. |
status | The status of the Kafka rebalance. |
B.141. KafkaRebalanceSpec
schema reference
Used in: KafkaRebalance
Property | Description |
---|---|
goals | A list of goals, ordered by decreasing priority, to use for generating and executing the rebalance proposal. The supported goals are available at https://github.com/linkedin/cruise-control#goals. If an empty goals list is provided, the goals declared in the default.goals Cruise Control configuration parameter are used. |
string array | |
skipHardGoalCheck | Whether to allow the hard goals specified in the Kafka CR to be skipped in optimization proposal generation. This can be useful when some of those hard goals are preventing a balance solution being found. Default is false. |
boolean | |
excludedTopics | A regular expression where any matching topics will be excluded from the calculation of optimization proposals. This expression will be parsed by the java.util.regex.Pattern class; for more information on the supported format consult the documentation for that class. |
string | |
concurrentPartitionMovementsPerBroker | The upper bound of ongoing partition replica movements going into/out of each broker. Default is 5. |
integer | |
concurrentIntraBrokerPartitionMovements | The upper bound of ongoing partition replica movements between disks within each broker. Default is 2. |
integer | |
concurrentLeaderMovements | The upper bound of ongoing partition leadership movements. Default is 1000. |
integer | |
replicationThrottle | The upper bound, in bytes per second, on the bandwidth used to move replicas. There is no limit by default. |
integer | |
replicaMovementStrategies | A list of strategy class names used to determine the execution order for the replica movements in the generated optimization proposal. By default BaseReplicaMovementStrategy is used, which will execute the replica movements in the order that they were generated. |
string array |
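For orientation, a hedged sketch of a KafkaRebalance resource follows; the goal names are examples of Cruise Control goals, and the resource name and cluster label are hypothetical:

apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaRebalance
metadata:
  name: my-rebalance
  labels:
    strimzi.io/cluster: my-cluster
spec:
  goals:
    - CpuCapacityGoal
    - NetworkInboundCapacityGoal
    - DiskCapacityGoal
  skipHardGoalCheck: true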
B.142. KafkaRebalanceStatus
schema reference
Used in: KafkaRebalance
Property | Description |
---|---|
conditions | List of status conditions. |
| |
observedGeneration | The generation of the CRD that was last reconciled by the operator. |
integer | |
sessionId | The session identifier for requests to Cruise Control pertaining to this KafkaRebalance resource. This is used by the Kafka Rebalance operator to track the status of ongoing rebalancing operations. |
string | |
optimizationResult | A JSON object describing the optimization result. |
map |