Chapter 7. Kafka configuration
A deployment of Kafka components to an OpenShift cluster using AMQ Streams is highly configurable through the application of custom resources. Custom resources are created as instances of APIs added by custom resource definitions (CRDs) to extend OpenShift resources.
CRDs act as configuration instructions to describe the custom resources in an OpenShift cluster, and are provided with AMQ Streams for each Kafka component used in a deployment, as well as users and topics. CRDs and custom resources are defined as YAML files. Example YAML files are provided with the AMQ Streams distribution.
CRDs also allow AMQ Streams resources to benefit from native OpenShift features like CLI accessibility and configuration validation.
In this section we look at how Kafka components are configured through custom resources, starting with common configuration points and then important configuration considerations specific to components.
AMQ Streams provides example configuration files, which can serve as a starting point when building your own Kafka component configuration for deployment.
7.1. Custom resources
After a new custom resource type is added to your cluster by installing a CRD, you can create instances of the resource based on its specification.
The custom resources for AMQ Streams components have common configuration properties, which are defined under spec.
In this fragment from a Kafka topic custom resource, the apiVersion and kind properties identify the associated CRD. The spec property shows configuration that defines the number of partitions and replicas for the topic.
Kafka topic custom resource
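The fragment described above might look like the following sketch; the topic and cluster names (my-topic, my-cluster) are illustrative placeholders, not values taken from this document:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-topic                    # illustrative topic name
  labels:
    strimzi.io/cluster: my-cluster  # Kafka cluster the topic belongs to
spec:
  partitions: 1                     # number of partitions for the topic
  replicas: 1                       # number of replicas per partition
```

The apiVersion and kind properties identify the KafkaTopic CRD, and the strimzi.io/cluster label associates the topic with a specific Kafka cluster.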
There are many additional configuration options that can be incorporated into a YAML definition, some common and some specific to a particular component.
7.2. Common configuration
Some of the configuration options common to resources are described here. Security and metrics collection might also be adopted where applicable.
- Bootstrap servers
Bootstrap servers are used for host/port connection to a Kafka cluster for:
- Kafka Connect
- Kafka Bridge
- Kafka MirrorMaker producers and consumers
- CPU and memory resources
You request CPU and memory resources for components. Limits specify the maximum resources that can be consumed by a given container.
Resource requests and limits for the Topic Operator and User Operator are set in the Kafka resource.
- Logging
- You define the logging level for the component. Logging can be defined directly (inline) or externally using a config map.
- Healthchecks
- Healthcheck configuration introduces liveness and readiness probes to know when to restart a container (liveness) and when a container can accept traffic (readiness).
- JVM options
- JVM options provide maximum and minimum memory allocation to optimize the performance of the component according to the platform it is running on.
- Pod scheduling
- Pod scheduling uses affinity/anti-affinity rules to determine under what circumstances a pod is scheduled onto a node.
Example YAML showing common configuration
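As a sketch of how these common options fit together, the following KafkaConnect fragment combines bootstrap servers, resource requests and limits, inline logging, healthcheck probes, JVM options, and a pod-scheduling template; all names and values here are illustrative assumptions, not taken from this document:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect
spec:
  bootstrapServers: my-cluster-kafka-bootstrap:9092  # host/port connection to the Kafka cluster
  resources:
    requests:           # CPU and memory requested for the container
      cpu: "1"
      memory: 2Gi
    limits:             # maximum resources the container can consume
      cpu: "2"
      memory: 2Gi
  logging:
    type: inline        # logging defined directly rather than via a config map
    loggers:
      connect.root.logger.level: "INFO"
  readinessProbe:       # when the container can accept traffic
    initialDelaySeconds: 15
    timeoutSeconds: 5
  livenessProbe:        # when to restart the container
    initialDelaySeconds: 15
    timeoutSeconds: 5
  jvmOptions:           # minimum and maximum JVM memory allocation
    "-Xms": 2g
    "-Xmx": 2g
  template:
    pod:
      affinity: {}      # affinity/anti-affinity rules for pod scheduling go here
```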
7.3. Kafka cluster configuration
A Kafka cluster comprises one or more brokers. For producers and consumers to be able to access topics within the brokers, Kafka configuration must define how data is stored in the cluster, and how the data is accessed. You can configure a Kafka cluster to run with multiple broker nodes across racks.
- Storage
Kafka and ZooKeeper store data on disks.
AMQ Streams requires block storage provisioned through StorageClass. The file system format for storage must be XFS or EXT4. Three types of data storage are supported:
- Ephemeral (Recommended for development only)
- Ephemeral storage stores data for the lifetime of an instance. Data is lost when the instance is restarted.
- Persistent
- Persistent storage relates to long-term data storage independent of the lifecycle of the instance.
- JBOD (Just a Bunch of Disks, suitable for Kafka only)
- JBOD allows you to use multiple disks to store commit logs in each broker.
The disk capacity used by an existing Kafka cluster can be increased if supported by the infrastructure.
- Listeners
Listeners configure how clients connect to a Kafka cluster.
By specifying a unique name and port for each listener within a Kafka cluster, you can configure multiple listeners.
The following types of listener are supported:
- Internal listeners for access within OpenShift
- External listeners for access outside of OpenShift
You can enable TLS encryption for listeners, and configure authentication.
Internal listeners are specified using an internal type. External listeners expose Kafka by specifying an external type:
- route to use OpenShift routes and the default HAProxy router
- loadbalancer to use loadbalancer services
- nodeport to use ports on OpenShift nodes
- ingress to use OpenShift Ingress and the NGINX Ingress Controller for Kubernetes
If you are using OAuth 2.0 for token-based authentication, you can configure listeners to use the authorization server.
- Rack awareness
- Rack awareness is a configuration feature that distributes Kafka broker pods and topic replicas across racks, which can represent data centers, racks within data centers, or availability zones.
Example YAML showing Kafka configuration
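A minimal sketch bringing the storage, listener, and rack-awareness options together; cluster name, sizes, and the zone topology key are illustrative assumptions:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 3
    listeners:
      - name: plain         # internal listener without TLS encryption
        port: 9092
        type: internal
        tls: false
      - name: tls           # internal listener with TLS encryption
        port: 9093
        type: internal
        tls: true
      - name: external      # external listener using an OpenShift route
        port: 9094
        type: route
        tls: true
    rack:
      topologyKey: topology.kubernetes.io/zone  # distributes brokers across zones
    storage:
      type: jbod            # multiple disks store commit logs in each broker
      volumes:
        - id: 0
          type: persistent-claim
          size: 100Gi
          deleteClaim: false
  zookeeper:
    replicas: 3
    storage:
      type: persistent-claim
      size: 100Gi
      deleteClaim: false
```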
7.4. Kafka MirrorMaker configuration
To set up MirrorMaker, a source and target (destination) Kafka cluster must be running.
You can use AMQ Streams with MirrorMaker 2.0, although the earlier version of MirrorMaker continues to be supported.
MirrorMaker 2.0
MirrorMaker 2.0 is based on the Kafka Connect framework, with connectors managing the transfer of data between clusters.
MirrorMaker 2.0 uses:
- Source cluster configuration to consume data from the source cluster
- Target cluster configuration to output data to the target cluster
Cluster configuration
You can use MirrorMaker 2.0 in active/passive or active/active cluster configurations.
- In an active/active configuration, both clusters are active and provide the same data simultaneously, which is useful if you want to make the same data available locally in different geographical locations.
- In an active/passive configuration, the data from an active cluster is replicated in a passive cluster, which remains on standby, for example, for data recovery in the event of system failure.
You configure a KafkaMirrorMaker2 custom resource to define the Kafka Connect deployment, including the connection details of the source and target clusters, and then run a set of MirrorMaker 2.0 connectors to make the connection.
Topic configuration is automatically synchronized between the source and target clusters according to the topics defined in the KafkaMirrorMaker2 custom resource. Configuration changes are propagated to remote topics so that new topics and partitions are detected and created. Topic replication is defined using regular expression patterns to include or exclude topics.
The following MirrorMaker 2.0 connectors and related internal topics help manage the transfer and synchronization of data between the clusters.
- MirrorSourceConnector
- A MirrorSourceConnector creates remote topics from the source cluster.
- MirrorCheckpointConnector
- A MirrorCheckpointConnector tracks and maps offsets for specified consumer groups using an offset sync topic and checkpoint topic. The offset sync topic maps the source and target offsets for replicated topic partitions from record metadata. A checkpoint is emitted from each source cluster and replicated in the target cluster through the checkpoint topic. The checkpoint topic maps the last committed offset in the source and target cluster for replicated topic partitions in each consumer group.
- MirrorHeartbeatConnector
- A MirrorHeartbeatConnector periodically checks connectivity between clusters. A heartbeat is produced every second by the MirrorHeartbeatConnector into a heartbeat topic that is created on the local cluster. If you have MirrorMaker 2.0 at both the remote and local locations, the heartbeat emitted at the remote location by the MirrorHeartbeatConnector is treated like any remote topic and mirrored by the MirrorSourceConnector at the local cluster. The heartbeat topic makes it easy to check that the remote cluster is available and the clusters are connected. If things go wrong, the heartbeat topic offset positions and time stamps can help with recovery and diagnosis.
Figure 7.1. Replication across two clusters
Bidirectional replication across two clusters
The MirrorMaker 2.0 architecture supports bidirectional replication in an active/active cluster configuration, so both clusters are active and provide the same data simultaneously. A MirrorMaker 2.0 cluster is required at each target destination.
Remote topics are distinguished by automatic renaming that prepends the name of the source cluster to the name of the topic. This is useful if you want to make the same data available locally in different geographical locations.
However, if you want to back up or migrate data in an active/passive cluster configuration, you might want to keep the original names of the topics. If so, you can configure MirrorMaker 2.0 to turn off automatic renaming.
Figure 7.2. Bidirectional replication
Example YAML showing MirrorMaker 2.0 configuration
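A hedged sketch of a KafkaMirrorMaker2 resource defining source and target cluster connections and topic/group replication patterns; cluster aliases and addresses are placeholders:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaMirrorMaker2
metadata:
  name: my-mirror-maker-2
spec:
  version: 3.1.0
  replicas: 1
  connectCluster: "my-cluster-target"  # cluster backing the Kafka Connect deployment
  clusters:
    - alias: "my-cluster-source"       # source cluster to consume data from
      bootstrapServers: my-cluster-source-kafka-bootstrap:9092
    - alias: "my-cluster-target"       # target cluster to output data to
      bootstrapServers: my-cluster-target-kafka-bootstrap:9092
  mirrors:
    - sourceCluster: "my-cluster-source"
      targetCluster: "my-cluster-target"
      topicsPattern: ".*"              # regular expression pattern for topics to replicate
      groupsPattern: ".*"              # regular expression pattern for consumer groups to sync
```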
MirrorMaker
The earlier version of MirrorMaker uses producers and consumers to replicate data across clusters.
MirrorMaker uses:
- Consumer configuration to consume data from the source cluster
- Producer configuration to output data to the target cluster
Consumer and producer configuration includes any authentication and encryption settings.
The include field defines the topics to mirror from a source to a target cluster.
Key Consumer configuration
- Consumer group identifier
- The consumer group ID for a MirrorMaker consumer so that messages consumed are assigned to a consumer group.
- Number of consumer streams
- A value that determines the number of consumers in a consumer group that consume messages in parallel.
- Offset commit interval
- An offset commit interval to set the time between consuming and committing a message.
Key Producer configuration
- Cancel option for send failure
- You can define whether a message send failure is ignored or MirrorMaker is terminated and recreated.
Example YAML showing MirrorMaker configuration
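A sketch of a KafkaMirrorMaker resource showing the key consumer and producer options described above; cluster addresses, the group ID, and the topic pattern are illustrative:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaMirrorMaker
metadata:
  name: my-mirror-maker
spec:
  replicas: 1
  consumer:
    bootstrapServers: my-source-cluster-kafka-bootstrap:9092
    groupId: "my-group"            # consumer group identifier
    numStreams: 2                  # number of consumer streams
    offsetCommitInterval: 120000   # offset commit interval in milliseconds
  producer:
    bootstrapServers: my-target-cluster-kafka-bootstrap:9092
    abortOnSendFailure: false      # ignore send failures rather than terminating
  include: "my-topic|other-topic"  # topics to mirror from source to target
```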
7.5. Kafka Connect configuration
Use AMQ Streams’s KafkaConnect resource to quickly and easily create new Kafka Connect clusters.
When you deploy Kafka Connect using the KafkaConnect resource, you specify bootstrap server addresses (in spec.bootstrapServers) for connecting to a Kafka cluster. You can specify more than one address in case a server goes down. You also specify the authentication credentials and TLS encryption certificates to make a secure connection.
The Kafka cluster doesn’t need to be managed by AMQ Streams or deployed to an OpenShift cluster.
You can also use the KafkaConnect resource to specify the following:
- Plugin configuration to build a container image that includes the plugins to make connections
- Configuration for the worker pods that belong to the Kafka Connect cluster
- An annotation to enable use of the KafkaConnector resource to manage connectors
The Cluster Operator manages Kafka Connect clusters deployed using the KafkaConnect resource and connectors created using the KafkaConnector resource.
Plugin configuration
Plugins provide the implementation for creating connector instances. When a plugin is instantiated, configuration is provided for connection to a specific type of external data system. Plugins provide a set of one or more JAR files that define a connector and task implementation for connecting to a given kind of data source. Plugins for many external systems are available for use with Kafka Connect. You can also create your own plugins.
The configuration describes the source input data and target output data to feed into and out of Kafka Connect. For a source connector, external source data must reference specific topics that will store the messages. The plugins might also contain the libraries and files needed to transform the data.
A Kafka Connect deployment can have one or more plugins, but only one version of each plugin.
You can create a custom Kafka Connect image that includes your choice of plugins. You can create the image in two ways:
To create the container image automatically, you specify the plugins to add to your Kafka Connect cluster using the build property of the KafkaConnect resource. AMQ Streams automatically downloads and adds the plugin artifacts to a new container image.
Example plugin configuration
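A sketch of the build configuration, with comments numbered to match the callouts below; the registry, image name, secret, plugin name, and artifact URL are hypothetical placeholders:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect-cluster
spec:
  # ...
  build:                                  # (1) build a container image with plugins automatically
    output:                               # (2) container registry where the new image is pushed
      type: docker
      image: my-registry.io/my-org/my-connect-cluster:latest
      pushSecret: my-registry-credentials # secret with registry access credentials
    plugins:                              # (3) plugins and their artifacts to add to the image
      - name: my-plugin
        artifacts:
          - type: tgz
            url: https://example.com/my-plugin.tar.gz
            # optionally, a sha512sum property verifies the artifact before unpacking
```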
- 1
- Build configuration properties for building a container image with plugins automatically.
- 2
- Configuration of the container registry where new images are pushed. The output properties describe the type and name of the image, and optionally the name of the secret containing the credentials needed to access the container registry.
- 3
- List of plugins and their artifacts to add to the new container image. The plugins properties describe the type of artifact and the URL from which the artifact is downloaded. Each plugin must be configured with at least one artifact. Additionally, you can specify a SHA-512 checksum to verify the artifact before unpacking it.
If you are using a Dockerfile to build an image, you can use AMQ Streams’s latest container image as a base image to add your plugin configuration file.
Example showing manual addition of plugin configuration
FROM registry.redhat.io/amq7/amq-streams-kafka-31-rhel8:2.1.0
USER root:root
COPY ./my-plugins/ /opt/kafka/plugins/
USER 1001
Kafka Connect cluster configuration for workers
You specify the configuration for workers in the config property of the KafkaConnect resource.
A distributed Kafka Connect cluster has a group ID and a set of internal configuration topics.
- group.id
- offset.storage.topic
- config.storage.topic
- status.storage.topic
Kafka Connect clusters are configured by default with the same values for these properties. Because Kafka Connect clusters cannot share a group ID or topic names without causing errors, these settings must be unique for the workers of each Kafka Connect cluster created.
The names of the connectors used by each Kafka Connect cluster must also be unique.
In the following example worker configuration, JSON converters are specified. A replication factor is set for the internal Kafka topics used by Kafka Connect. This should be at least 3 for a production environment. Changing the replication factor after the topics have been created will have no effect.
Example worker configuration
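A sketch of the worker configuration that the callouts below describe, with comments numbered to match; the cluster and topic names are illustrative and must be unique per Kafka Connect cluster:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect-cluster
spec:
  # ...
  config:
    group.id: connect-cluster                      # (1)
    offset.storage.topic: connect-cluster-offsets  # (2)
    config.storage.topic: connect-cluster-configs  # (3)
    status.storage.topic: connect-cluster-status   # (4)
    key.converter: org.apache.kafka.connect.json.JsonConverter    # (5)
    value.converter: org.apache.kafka.connect.json.JsonConverter  # (6)
    key.converter.schemas.enable: true             # (7)
    value.converter.schemas.enable: true           # (8)
    offset.storage.replication.factor: 3           # (9)
    config.storage.replication.factor: 3           # (10)
    status.storage.replication.factor: 3           # (11)
```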
- 1
- The Kafka Connect cluster ID within Kafka. Must be unique for each Kafka Connect cluster.
- 2
- Kafka topic that stores connector offsets. Must be unique for each Kafka Connect cluster.
- 3
- Kafka topic that stores connector and task configurations. Must be unique for each Kafka Connect cluster.
- 4
- Kafka topic that stores connector and task status updates. Must be unique for each Kafka Connect cluster.
- 5
- Converter to transform message keys into JSON format for storage in Kafka.
- 6
- Converter to transform message values into JSON format for storage in Kafka.
- 7
- Schema enabled for converting message keys into structured JSON format.
- 8
- Schema enabled for converting message values into structured JSON format.
- 9
- Replication factor for the Kafka topic that stores connector offsets.
- 10
- Replication factor for the Kafka topic that stores connector and task status configurations.
- 11
- Replication factor for the Kafka topic that stores connector and task status updates.
KafkaConnector management of connectors
After plugins have been added to the container image used for the worker pods in a deployment, you can use AMQ Streams’s KafkaConnector custom resource or the Kafka Connect API to manage connector instances. You can also create new connector instances using these options.
The KafkaConnector resource offers an OpenShift-native approach to management of connectors by the Cluster Operator. To manage connectors with KafkaConnector resources, you must specify an annotation in your KafkaConnect custom resource.
Annotation to enable KafkaConnectors
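The annotation is set in the metadata of the KafkaConnect resource, for example (cluster name is a placeholder):

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect-cluster
  annotations:
    strimzi.io/use-connector-resources: "true"  # enables KafkaConnector management
spec:
  # ...
```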
Setting use-connector-resources to true enables KafkaConnectors to create, delete, and reconfigure connectors.
If use-connector-resources is enabled in your KafkaConnect configuration, you must use the KafkaConnector resource to define and manage connectors. KafkaConnector resources are configured to connect to external systems. They are deployed to the same OpenShift cluster as the Kafka Connect cluster and Kafka cluster interacting with the external data system.
Kafka components are contained in the same OpenShift cluster
The configuration specifies how connector instances connect to an external data system, including any authentication. You also need to state what data to watch. For a source connector, you might provide a database name in the configuration. You can also specify where the data should sit in Kafka by specifying a target topic name.
Use tasksMax to specify the maximum number of tasks. For example, a source connector with tasksMax: 2 might split the import of source data into two tasks.
Example KafkaConnector source connector configuration
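A sketch of a KafkaConnector resource for the FileStreamSourceConnector, with comments numbered to match the callouts below; the resource, cluster, and topic names are illustrative:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: my-source-connector      # (1)
  labels:
    strimzi.io/cluster: my-connect-cluster  # (2)
spec:
  class: org.apache.kafka.connect.file.FileStreamSourceConnector  # (3)
  tasksMax: 2                    # (4)
  config:                        # (5)
    file: "/opt/kafka/LICENSE"   # (6)
    topic: my-topic              # (7)
```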
- 1
- Name of the KafkaConnector resource, which is used as the name of the connector. Use any name that is valid for an OpenShift resource.
- 2
- Name of the Kafka Connect cluster to create the connector instance in. Connectors must be deployed to the same namespace as the Kafka Connect cluster they link to.
- 3
- Full name of the connector class. This should be present in the image being used by the Kafka Connect cluster.
- 4
- Maximum number of Kafka Connect tasks that the connector can create.
- 5
- Connector configuration as key-value pairs.
- 6
- Location of the external data file. In this example, we’re configuring the FileStreamSourceConnector to read from the /opt/kafka/LICENSE file.
- 7
- Kafka topic to publish the source data to.
You can load confidential configuration values for a connector from OpenShift Secrets or ConfigMaps.
Kafka Connect API
Use the Kafka Connect REST API as an alternative to using KafkaConnector resources to manage connectors. The Kafka Connect REST API is available as a service running on <connect_cluster_name>-connect-api:8083, where <connect_cluster_name> is the name of your Kafka Connect cluster.
You add the connector configuration as a JSON object.
Example curl request to add connector configuration
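An illustrative request against the Kafka Connect API service, assuming a cluster named my-connect-cluster; the connector name and configuration values are placeholders (note that the REST API uses the tasks.max key for the maximum number of tasks):

```shell
curl -X POST \
  http://my-connect-cluster-connect-api:8083/connectors \
  -H 'Content-Type: application/json' \
  -d '{
    "name": "my-source-connector",
    "config": {
      "connector.class": "org.apache.kafka.connect.file.FileStreamSourceConnector",
      "tasks.max": "2",
      "file": "/opt/kafka/LICENSE",
      "topic": "my-topic"
    }
  }'
```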
If KafkaConnectors are enabled, manual changes made directly using the Kafka Connect REST API are reverted by the Cluster Operator.
The operations supported by the REST API are described in the Apache Kafka documentation.
You can expose the Kafka Connect API service outside OpenShift by creating a service that uses a connection mechanism that provides the access, such as an ingress or route. Use this option advisedly, as the connection is insecure.
7.6. Kafka Bridge configuration
A Kafka Bridge configuration requires a bootstrap server specification for the Kafka cluster it connects to, as well as any encryption and authentication options required.
Kafka Bridge consumer and producer configuration is standard, as described in the Apache Kafka configuration documentation for consumers and Apache Kafka configuration documentation for producers.
HTTP-related configuration options set the port on which the server listens.
CORS
The Kafka Bridge supports the use of Cross-Origin Resource Sharing (CORS). CORS is an HTTP mechanism that allows browser access to selected resources from more than one origin, for example, resources on different domains. If you choose to use CORS, you can define a list of allowed resource origins and HTTP methods for interaction with the Kafka cluster through the Kafka Bridge. The lists are defined in the http specification of the Kafka Bridge configuration.
CORS allows for simple and preflighted requests between origin sources on different domains.
- A simple request is an HTTP request that must have an allowed origin defined in its header.
- A preflighted request sends an initial OPTIONS HTTP request before the actual request to check that the origin and the method are allowed.
Example YAML showing Kafka Bridge configuration
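A sketch of a KafkaBridge resource showing the bootstrap server, HTTP port, and CORS lists; the bridge name, origin, and consumer/producer values are illustrative assumptions:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaBridge
metadata:
  name: my-bridge
spec:
  replicas: 1
  bootstrapServers: my-cluster-kafka-bootstrap:9092
  http:
    port: 8080                      # port the bridge server listens on
    cors:
      allowedOrigins: "https://my-allowed-origin.io"        # allowed resource origins
      allowedMethods: "GET,POST,PUT,DELETE,OPTIONS,PATCH"   # allowed HTTP methods
  consumer:
    config:
      auto.offset.reset: earliest   # standard Apache Kafka consumer configuration
  producer:
    config:
      delivery.timeout.ms: 300000   # standard Apache Kafka producer configuration
```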