Chapter 2. Common configuration properties
Use Common configuration properties to configure Streams for Apache Kafka custom resources. You add common configuration properties to a custom resource like any other supported configuration for that resource.
2.1. replicas
Use the replicas property to configure replicas.
The type of replication depends on the resource.
- KafkaTopic uses a replication factor to configure the number of replicas of each partition within a Kafka cluster.
- Kafka components use replicas to configure the number of pods in a deployment to provide better availability and scalability.
When running a Kafka component on OpenShift it may not be necessary to run multiple replicas for high availability. When the node where the component is deployed crashes, OpenShift will automatically reschedule the Kafka component pod to a different node. However, running Kafka components with multiple replicas can provide faster failover times as the other nodes will be up and running.
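For example, the following sketch sets three replicas for a Kafka Connect deployment (the resource name is illustrative):

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect-cluster
spec:
  replicas: 3  # number of Kafka Connect pods in the deployment
  # ...
```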
2.2. bootstrapServers
Use the bootstrapServers property to configure a list of bootstrap servers.
The bootstrap server lists can refer to Kafka clusters that are not deployed in the same OpenShift cluster. They can also refer to a Kafka cluster not deployed by Streams for Apache Kafka.
If the Kafka cluster is deployed by Streams for Apache Kafka in the same OpenShift cluster, each list should contain the Kafka cluster bootstrap service, which is named CLUSTER-NAME-kafka-bootstrap, and a port number. If the Kafka cluster is deployed by Streams for Apache Kafka but in a different OpenShift cluster, the list content depends on the approach used for exposing the cluster (routes, ingress, nodeports, or loadbalancers).
When using a Kafka cluster that is not managed by Streams for Apache Kafka, specify the bootstrap servers list according to the configuration of that cluster.
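For example, a KafkaConnect resource connecting to a Kafka cluster named my-cluster in the same OpenShift cluster might reference the bootstrap service as follows (names and port are illustrative):

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect-cluster
spec:
  # CLUSTER-NAME-kafka-bootstrap service and client port
  bootstrapServers: my-cluster-kafka-bootstrap:9092
  # ...
```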
2.3. ssl (supported TLS versions and cipher suites)
You can incorporate SSL configuration and cipher suite specifications to further secure TLS-based communication between your client application and a Kafka cluster. In addition to the standard TLS configuration, you can specify a supported TLS version and enable cipher suites in the configuration for the Kafka broker. You can also add the configuration to your clients if you wish to limit the TLS versions and cipher suites they use. The configuration on the client must only use protocols and cipher suites that are enabled on the broker.
A cipher suite is a set of security mechanisms for secure connection and data transfer. For example, the cipher suite TLS_AES_256_GCM_SHA384 is composed of the following mechanisms, which are used in conjunction with the TLS protocol:
- AES (Advanced Encryption Standard) encryption (256-bit key)
- GCM (Galois/Counter Mode) authenticated encryption
- SHA384 (Secure Hash Algorithm) data integrity protection
The combination is encapsulated in the TLS_AES_256_GCM_SHA384 cipher suite specification.
The ssl.enabled.protocols property specifies the available TLS versions that can be used for secure communication between the cluster and its clients. The ssl.protocol property sets the default TLS version for all connections, and it must be chosen from the enabled protocols. Use the ssl.endpoint.identification.algorithm property to enable or disable hostname verification (configurable only in components based on Kafka clients - Kafka Connect, MirrorMaker 2, and Kafka Bridge).
Example SSL configuration
# ...
config:
  ssl.cipher.suites: TLS_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 # (1)
  ssl.enabled.protocols: TLSv1.3, TLSv1.2 # (2)
  ssl.protocol: TLSv1.3 # (3)
  ssl.endpoint.identification.algorithm: HTTPS # (4)
# ...

1. Cipher suite specifications enabled.
2. TLS versions supported.
3. Default TLS version is TLSv1.3. If a client only supports TLSv1.2, it can still connect to the broker and communicate using that supported version, and vice versa if the configuration is on the client and the broker only supports TLSv1.2.
4. Hostname verification is enabled by setting to HTTPS. An empty string disables the verification.
2.4. trustedCertificates
Use the tls and trustedCertificates properties to enable TLS encryption and specify secrets under which TLS certificates are stored in X.509 format. You can add this configuration to the Kafka Connect, Kafka MirrorMaker, and Kafka Bridge components for TLS connections to the Kafka cluster.
You can use the secrets created by the Cluster Operator for the Kafka cluster, or you can create your own TLS certificate file, then create a Secret from the file:
Creating a secret
oc create secret generic <my_secret> \
--from-file=<my_tls_certificate_file.crt>
- Replace <my_secret> with your secret name.
- Replace <my_tls_certificate_file.crt> with the path to your TLS certificate file.
Use the pattern property to include all files in the secret that match the pattern. Using the pattern property means that the custom resource does not need to be updated if certificate file names change. However, you can specify a specific file using the certificate property instead of the pattern property.
Example TLS encryption configuration for components
tls:
  trustedCertificates:
    - secretName: my-cluster-cluster-cert
      pattern: "*.crt"
    - secretName: my-cluster-cluster-cert
      certificate: ca2.crt
If you want to enable TLS encryption, but use the default set of public certification authorities shipped with Java, you can specify trustedCertificates as an empty array:
Example of enabling TLS with the default Java certificates
tls:
  trustedCertificates: []
Similarly, you can use the tlsTrustedCertificates property in the configuration for oauth and keycloak authentication and authorization types that integrate with authorization servers. The configuration sets up encrypted TLS connections to the authorization server.
Example TLS encryption configuration for authentication types
tlsTrustedCertificates:
  - secretName: oauth-server-ca
    pattern: "*.crt"
For information on configuring mTLS authentication, see the KafkaClientAuthenticationTls schema reference.
2.5. resources
Configure resource requests and limits to control resources for Streams for Apache Kafka containers. You can specify requests and limits for memory and cpu resources. Requests should be sufficient to ensure stable performance of Kafka.
How you configure resources in a production environment depends on a number of factors. For example, applications are likely to be sharing resources in your OpenShift cluster.
For Kafka, the following aspects of a deployment can impact the resources you need:
- Throughput and size of messages
- The number of network threads handling messages
- The number of producers and consumers
- The number of topics and partitions
The values specified for resource requests are reserved and always available to the container. Resource limits specify the maximum resources that can be consumed by a given container. The amount between the request and the limit is not reserved and might not always be available; a container can use resources up to its limit only when they are available on the node.
Resource requests and limits
If you set a limit without a request, OpenShift uses the limit value as the request. Setting equal requests and limits for resources guarantees quality of service, as OpenShift will not kill containers unless they exceed their limits.
Configure resource requests and limits for components using the resources properties in the spec of the following custom resources:

- Use the KafkaNodePool custom resource for Kafka nodes (spec.resources).
- Use the Kafka custom resource for the following components:
  - Topic Operator (spec.entityOperator.topicOperator.resources)
  - User Operator (spec.entityOperator.userOperator.resources)
  - Cruise Control (spec.cruiseControl.resources)
  - Kafka Exporter (spec.kafkaExporter.resources)

For other components, resources are configured in the corresponding custom resource. For example:

- KafkaConnect resource for Kafka Connect (spec.resources)
- KafkaMirrorMaker2 resource for MirrorMaker 2 (spec.resources)
- KafkaBridge resource for Kafka Bridge (spec.resources)
Example resource configuration for a node pool
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: pool-a
  labels:
    strimzi.io/cluster: my-cluster
spec:
  replicas: 3
  roles:
    - broker
  resources:
    requests:
      memory: 64Gi
      cpu: "8"
    limits:
      memory: 64Gi
      cpu: "12"
  # ...
Example resource configuration for the Topic Operator
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  # ...
  entityOperator:
    # ...
    topicOperator:
      # ...
      resources:
        requests:
          memory: 512Mi
          cpu: "1"
        limits:
          memory: 512Mi
          cpu: "1"
If the resource request is for more than the available free resources in the OpenShift cluster, the pod is not scheduled.
Streams for Apache Kafka uses the OpenShift syntax for specifying memory and cpu resources. For more information about managing computing resources on OpenShift, see Managing Compute Resources for Containers.
Memory resources

When configuring memory resources, consider the total requirements of the components.

Kafka runs inside a JVM and uses an operating system page cache to store message data before writing to disk. The memory request for Kafka should fit the JVM heap and page cache. You can configure the jvmOptions property to control the minimum and maximum heap size.

Other components don't rely on the page cache. You can configure memory resources without configuring the jvmOptions property to control the heap size.

Memory requests and limits are specified in megabytes, gigabytes, mebibytes, and gibibytes. Use the following suffixes in the specification:

- M for megabytes
- G for gigabytes
- Mi for mebibytes
- Gi for gibibytes

Example resources using different memory units
# ...
resources:
  requests:
    memory: 512Mi
  limits:
    memory: 2Gi
# ...

For more details about memory specification and additional supported units, see Meaning of memory.
CPU resources

A CPU request should be enough to give reliable performance at any time. CPU requests and limits are specified as cores or millicpus/millicores.

CPU cores are specified as integers (5 CPU cores) or decimals (2.5 CPU cores). 1000 millicores is the same as 1 CPU core.

Example CPU units
# ...
resources:
  requests:
    cpu: 500m
  limits:
    cpu: 2.5
# ...

The computing power of 1 CPU core may differ depending on the platform where OpenShift is deployed.

For more information on CPU specification, see Meaning of CPU.
2.6. image
Use the image property to configure the container image used by the component.
Overriding container images is recommended only in special situations where you need to use a different container registry or a custom image.
For example, if your network does not allow access to the container repository used by Streams for Apache Kafka, you can copy the Streams for Apache Kafka images or build them from the source. However, if the configured image is not compatible with Streams for Apache Kafka images, it might not work properly.
A copy of the container image might also be customized and used for debugging.
You can specify which container image to use for a component using the image property in the following resources:
- Kafka.spec.kafka
- Kafka.spec.entityOperator.topicOperator
- Kafka.spec.entityOperator.userOperator
- Kafka.spec.cruiseControl
- Kafka.spec.kafkaExporter
- KafkaConnect.spec
- KafkaMirrorMaker2.spec
- KafkaBridge.spec
Changing the Kafka image version does not automatically update the image versions for other Kafka components, such as Kafka Exporter. These components are not version dependent, so no additional configuration is necessary when updating the Kafka image version.
Setting Kafka component images
Streams for Apache Kafka supports multiple Kafka versions across Kafka, Kafka Connect, and Kafka MirrorMaker 2 components. Each component requires a specific container image, which can be configured in two places:
- Cluster Operator environment variables (Default image mappings)
- Custom resource configuration
Each environment variable maps Kafka versions to container images. These mappings are used when a custom resource does not explicitly specify an image. You can override the default image by specifying the image and the matching version in the custom resource.
| Component | Environment variable |
|---|---|
| Kafka | STRIMZI_KAFKA_IMAGES |
| Kafka Connect | STRIMZI_KAFKA_CONNECT_IMAGES |
| MirrorMaker 2 | STRIMZI_KAFKA_MIRROR_MAKER_2_IMAGES |
| Custom resource | Image property | Version property |
|---|---|---|
| Kafka | spec.kafka.image | spec.kafka.version |
| KafkaConnect | spec.image | spec.version |
| KafkaMirrorMaker2 | spec.image | spec.version |
The values of the environment variables, together with any image and version specified in the component configuration, determine the image and Kafka version used:

| image set? | version set? | Result |
|---|---|---|
| ✗ | ✗ | Uses Cluster Operator’s default image and corresponding Kafka version |
| ✓ | ✗ | Uses specified image and default Kafka version |
| ✗ | ✓ | Uses image from environment variable for specified Kafka version |
| ✓ | ✓ | Uses specified image and assumes specified Kafka version matches |
To avoid Kafka version and image mismatches, set the version property and allow the Cluster Operator to select the matching image from its mappings. If you need to change the default image mapping for a given Kafka version, configure the Cluster Operator’s environment variables.
Even if the configuration is syntactically correct, it can still be invalid if the image and version mismatch. To ensure a valid configuration:
- The specified version must match the Kafka version that the image is built for.
- The specified version must be one of the versions supported by the operator.
- If you set a custom image, always set version to the Kafka version of that image.
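For example, a sketch of a Kafka resource that keeps a custom image and the version in sync (the registry and tag are illustrative):

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    version: 4.0.0
    # Custom image built for Kafka 4.0.0; keep in sync with version
    image: my-registry.example/my-kafka:4.0.0
    # ...
```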
The following table shows what happens with versions 4.0.0 and 4.1.0 (the default).
| image set? | version set? | Image used | Kafka version | Valid? |
|---|---|---|---|---|
| ✗ | ✗ | 4.1.0 (default) | 4.1.0 | ✓ |
| ✗ | 4.0.0 | 4.0.0 (from mapping) | 4.0.0 | ✓ |
| Custom 4.0.0 | 4.0.0 | Custom 4.0.0 | 4.0.0 | ✓ |
| Custom 4.0.0 | ✗ | Custom 4.0.0 | 4.1.0 (default) | ✗ |
| Custom 4.0.0 | 4.1.0 | Custom 4.0.0 | 4.1.0 | ✗ |
Custom 4.0.0 refers to a custom user-provided container image built against Kafka 4.0.0. Setting a custom image means you set the image property to a user-provided image. The operator does not automatically change this value during upgrades.
Handling upgrades with custom images
When you set a custom image through the image property in a custom resource, you must keep the image and version in sync during upgrades. The Cluster Operator does not automatically update images defined this way. (This limitation does not apply when you use Kafka Connect Build or the default image mappings defined in the Cluster Operator environment variables.)
To avoid version mismatches when changing the Kafka version:
1. Pause reconciliation of the custom resource.
2. Upgrade the Cluster Operator to a release that supports the new Kafka version.
3. Update the custom resource:
   - Set spec.*.version to the target Kafka version.
   - Set spec.*.image to the custom image built for that version.
4. Unpause the reconciliation.
If you don’t follow these steps, the operator might attempt to upgrade the resource before the correct image is specified, leading to mismatches.
Configuring the image property in other resources
For the image property in the custom resources for other components, the given value is used during deployment. If the image property is not set, the container image specified as an environment variable in the Cluster Operator configuration is used. If an image name is not defined in the Cluster Operator configuration, then a default value is used.
For more information on image environment variables, see Configuring the Cluster Operator.
| Component | Environment variable |
|---|---|
| Topic Operator | STRIMZI_DEFAULT_TOPIC_OPERATOR_IMAGE |
| User Operator | STRIMZI_DEFAULT_USER_OPERATOR_IMAGE |
| Kafka Exporter | STRIMZI_DEFAULT_KAFKA_EXPORTER_IMAGE |
| Cruise Control | STRIMZI_DEFAULT_CRUISE_CONTROL_IMAGE |
| Kafka Bridge | STRIMZI_DEFAULT_KAFKA_BRIDGE_IMAGE |
| Kafka initializer | STRIMZI_DEFAULT_KAFKA_INIT_IMAGE |

The default image used when an environment variable is not set depends on the installed Streams for Apache Kafka release.
Example container image configuration
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    image: my-org/my-image:latest
    # ...
2.7. livenessProbe and readinessProbe healthchecks
Use the livenessProbe and readinessProbe properties to configure healthcheck probes supported in Streams for Apache Kafka.
Healthchecks are periodic tests that verify the health of an application. When a healthcheck probe fails, OpenShift assumes that the application is not healthy and attempts to fix it.
For more details about the probes, see Configure Liveness and Readiness Probes.
Both livenessProbe and readinessProbe support the following options:
- initialDelaySeconds
- timeoutSeconds
- periodSeconds
- successThreshold
- failureThreshold
Example of liveness and readiness probe configuration
# ...
readinessProbe:
  initialDelaySeconds: 15
  timeoutSeconds: 5
livenessProbe:
  initialDelaySeconds: 15
  timeoutSeconds: 5
# ...
For more information about the livenessProbe and readinessProbe options, see the Probe schema reference.
2.8. metricsConfig
Use the metricsConfig property to enable and configure Prometheus metrics. Streams for Apache Kafka provides support for Prometheus JMX Exporter and Streams for Apache Kafka Metrics Reporter. Only one of these can be selected at any given time.
When metrics are enabled, they are exposed on port 9404.
When the metricsConfig property is not defined in the resource, the Prometheus metrics are not enabled.
For more information about setting up and deploying Prometheus and Grafana, see Introducing Metrics to Kafka.
Using Prometheus JMX Exporter
The metricsConfig property contains a reference to a ConfigMap that has additional configurations for the Prometheus JMX Exporter. When configured to use Prometheus JMX Exporter, Streams for Apache Kafka converts the JMX metrics provided by Apache Kafka into a Prometheus-compatible format.
To enable Prometheus metrics export without further configuration, you can reference a ConfigMap containing an empty file under metricsConfig.valueFrom.configMapKeyRef.key. When referencing an empty file, all metrics are exposed as long as they have not been renamed.
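For example, a minimal ConfigMap referencing an empty file might look as follows (the names are illustrative):

```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: my-configmap
data:
  my-key: ""
```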
Example ConfigMap with metrics configuration for Kafka
kind: ConfigMap
apiVersion: v1
metadata:
  name: my-configmap
data:
  my-key: |
    lowercaseOutputName: true
    rules:
      # Special cases and very specific rules
      - pattern: kafka.server<type=(.+), name=(.+), clientId=(.+), topic=(.+), partition=(.*)><>Value
        name: kafka_server_$1_$2
        type: GAUGE
        labels:
          clientId: "$3"
          topic: "$4"
          partition: "$5"
    # further configuration
Example metrics configuration for Kafka
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    metricsConfig:
      type: jmxPrometheusExporter
      valueFrom:
        configMapKeyRef:
          name: my-configmap
          key: my-key
    # ...
Using Streams for Apache Kafka Metrics Reporter
The metricsConfig property contains configurations for the Streams for Apache Kafka Metrics Reporter, which offers a lightweight solution for exposing Kafka metrics in Prometheus format, avoiding complex mapping rules that can introduce latency.
To enable Streams for Apache Kafka Metrics Reporter, set the type to strimziMetricsReporter. The allowList configuration is a comma-separated list of regex patterns to filter the metrics that are collected. This defaults to .*, which allows all metrics.
Currently, strimziMetricsReporter is supported only for Kafka brokers and controllers.
Example metrics configuration for Kafka
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    metricsConfig:
      type: strimziMetricsReporter
      values:
        allowList:
          - ".*"
    # ...
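As a sketch, a more restrictive allowList can collect only a subset of metrics; the patterns shown are illustrative and depend on the metric names exposed by the reporter:

```yaml
metricsConfig:
  type: strimziMetricsReporter
  values:
    allowList:
      - "kafka_log.*"
      - "kafka_network.*"
```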
2.9. jvmOptions
The following Streams for Apache Kafka components run inside a Java Virtual Machine (JVM):
- Apache Kafka
- Apache Kafka Connect
- Apache Kafka MirrorMaker
- Kafka Bridge
To optimize their performance on different platforms and architectures, you configure the jvmOptions property in the following resources:
- Kafka.spec.kafka
- Kafka.spec.entityOperator.userOperator
- Kafka.spec.entityOperator.topicOperator
- Kafka.spec.cruiseControl
- KafkaNodePool.spec
- KafkaConnect.spec
- KafkaMirrorMaker2.spec
- KafkaBridge.spec
You can specify the following options in your configuration:
- -Xms - Minimum initial allocation heap size when the JVM starts
- -Xmx - Maximum heap size
- -XX - Advanced runtime options for the JVM
- javaSystemProperties - Additional system properties
- gcLoggingEnabled - Enables garbage collector logging
The units accepted by JVM settings, such as -Xmx and -Xms, are the same units accepted by the JDK java binary in the corresponding image. Therefore, 1g or 1G means 1,073,741,824 bytes, and Gi is not a valid unit suffix. This is different from the units used for memory requests and limits, which follow the OpenShift convention where 1G means 1,000,000,000 bytes, and 1Gi means 1,073,741,824 bytes.
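The difference between the two unit conventions can be sketched numerically; this is plain illustration, not Streams for Apache Kafka code:

```python
# JVM option units (-Xmx, -Xms): 1g or 1G means 2^30 bytes.
jvm_1g = 1024 ** 3

# OpenShift/Kubernetes memory units: 1G means 10^9 bytes, 1Gi means 2^30 bytes.
k8s_1g = 1000 ** 3
k8s_1gi = 1024 ** 3

print(jvm_1g)             # 1073741824
print(k8s_1g)             # 1000000000
print(jvm_1g == k8s_1gi)  # True
```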
-Xms and -Xmx options
In addition to setting memory request and limit values for your containers, you can use the -Xms and -Xmx JVM options to set specific heap sizes for your JVM. Use the -Xms option to set an initial heap size and the -Xmx option to set a maximum heap size.
Specify heap size to have more control over the memory allocated to your JVM. Heap sizes should make the best use of a container’s memory limit (and request) without exceeding it. Heap size and any other memory requirements need to fit within a specified memory limit. If you don’t specify heap size in your configuration, but you configure a memory resource limit (and request), the Cluster Operator imposes default heap sizes automatically. The Cluster Operator sets default maximum and minimum heap values based on a percentage of the memory resource configuration.
The following table shows the default heap values.
| Component | Percent of available memory allocated to the heap | Maximum limit |
|---|---|---|
| Kafka | 50% | 5 GB |
| Kafka Connect | 75% | None |
| MirrorMaker 2 | 75% | None |
| MirrorMaker | 75% | None |
| Cruise Control | 75% | None |
| Kafka Bridge | 50% | 31 Gi |
If a memory limit (and request) is not specified, a JVM’s minimum heap size is set to 128M. The JVM’s maximum heap size is not defined to allow the memory to increase as needed. This is ideal for single node environments in test and development.
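The default heap rule in the table above can be sketched as follows; this illustrates only the percentage and cap, not the operator's actual implementation:

```python
def default_max_heap(memory_limit_bytes, percent, cap_bytes=None):
    """Illustrative default maximum heap: a percentage of the memory limit, optionally capped."""
    heap = memory_limit_bytes * percent // 100
    if cap_bytes is not None:
        heap = min(heap, cap_bytes)
    return heap

# Kafka: 50% of a 4 GiB limit is 2 GiB, under the 5 GB cap.
print(default_max_heap(4 * 1024**3, 50, cap_bytes=5 * 10**9))   # 2147483648
# A 64 GiB limit hits the 5 GB cap.
print(default_max_heap(64 * 1024**3, 50, cap_bytes=5 * 10**9))  # 5000000000
```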
Setting an appropriate memory request can prevent the following:
- OpenShift killing a container if there is pressure on memory from other pods running on the node.
- OpenShift scheduling a container to a node with insufficient memory. If -Xms is set to -Xmx, the container will crash immediately; if not, the container will crash at a later time.
In this example, the JVM uses 2 GiB (=2,147,483,648 bytes) for its heap. Total JVM memory usage can be a lot more than the maximum heap size.
Example -Xmx and -Xms configuration
# ...
jvmOptions:
  "-Xmx": "2g"
  "-Xms": "2g"
# ...
Setting the same value for initial (-Xms) and maximum (-Xmx) heap sizes avoids the JVM having to allocate memory after startup, at the cost of possibly allocating more heap than is really needed.
Containers performing lots of disk I/O, such as Kafka broker containers, require available memory for use as an operating system page cache. For such containers, the requested memory should be significantly higher than the memory used by the JVM.
-XX option
-XX options are used to configure the KAFKA_JVM_PERFORMANCE_OPTS option of Apache Kafka.
Example -XX configuration
jvmOptions:
  "-XX":
    "UseG1GC": "true"
    "MaxGCPauseMillis": "20"
    "InitiatingHeapOccupancyPercent": "35"
    "ExplicitGCInvokesConcurrent": "true"
JVM options resulting from the -XX configuration
-XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:+ExplicitGCInvokesConcurrent -XX:-UseParNewGC
When no -XX options are specified, the default Apache Kafka configuration of KAFKA_JVM_PERFORMANCE_OPTS is used.
javaSystemProperties
javaSystemProperties are used to configure additional Java system properties, such as debugging utilities.
Example javaSystemProperties configuration
jvmOptions:
  javaSystemProperties:
    - name: javax.net.debug
      value: ssl
For more information about the jvmOptions, see the JvmOptions schema reference.
2.10. Garbage collector logging
The jvmOptions property also allows you to enable and disable garbage collector (GC) logging. GC logging is disabled by default. To enable it, set the gcLoggingEnabled property as follows:
Example GC logging configuration
# ...
jvmOptions:
  gcLoggingEnabled: true
# ...
2.11. Additional volumes
Streams for Apache Kafka supports specifying additional volumes and volume mounts in the following components:
- Kafka
- Kafka Connect
- Kafka Bridge
- Kafka MirrorMaker2
- Entity Operator
- Cruise Control
- Kafka Exporter
- User Operator
- Topic Operator
All additional mounted paths are located inside /mnt to ensure compatibility with future Kafka and Streams for Apache Kafka updates.
Supported volume types
- Secret
- ConfigMap
- EmptyDir
- PersistentVolumeClaims
- CSI Volumes
- Image Volumes
Example configuration for additional volumes
kind: Kafka
spec:
  kafka:
    # ...
    template:
      pod:
        volumes:
          - name: example-secret
            secret:
              secretName: secret-name
          - name: example-configmap
            configMap:
              name: config-map-name
          - name: temp
            emptyDir: {}
          - name: example-pvc-volume
            persistentVolumeClaim:
              claimName: myclaim
          - name: example-csi-volume
            csi:
              driver: csi.cert-manager.io
              readOnly: true
              volumeAttributes:
                csi.cert-manager.io/issuer-name: my-ca
                csi.cert-manager.io/dns-names: ${POD_NAME}.${POD_NAMESPACE}.svc.cluster.local
          - name: example-oci-plugin
            image:
              reference: my-registry.io/oci-artifacts/example-plugin:latest
      kafkaContainer:
        volumeMounts:
          - name: example-secret
            mountPath: /mnt/secret-volume
          - name: example-configmap
            mountPath: /mnt/cm-volume
          - name: temp
            mountPath: /mnt/temp
          - name: example-pvc-volume
            mountPath: /mnt/data
          - name: example-csi-volume
            mountPath: /mnt/certificate
          - name: example-oci-plugin
            mountPath: /mnt/example-plugin
You can use volumes to store files containing configuration values for a Kafka component and then load those values using a configuration provider. For more information, see Loading configuration values from external sources.
You can also use additional volumes to mount custom plugins:
- To include custom plugins in the User Operator and Topic Operator, set the JAVA_CLASSPATH environment variable to modify the Java classpath.
- To include custom plugins in the Kafka operands and Cruise Control, set the CLASSPATH environment variable to modify the Java classpath.
- To add Kafka Connect connectors, see Adding Kafka Connect connectors.
- Some plugins, such as the Tiered Storage plugins, may require their own classpath configuration.
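As a sketch, a plugin mounted through an additional volume can be added to the Topic Operator classpath with an environment variable; the volume, ConfigMap, and path names here are illustrative, and the exact classpath value depends on the image layout:

```yaml
entityOperator:
  template:
    pod:
      volumes:
        - name: my-plugin
          configMap:
            name: my-plugin-jars
    topicOperatorContainer:
      env:
        - name: JAVA_CLASSPATH
          value: /mnt/my-plugin/*
      volumeMounts:
        - name: my-plugin
          mountPath: /mnt/my-plugin
```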