Chapter 2. Installing Debezium connectors on RHEL
Install Debezium connectors through AMQ Streams by extending Kafka Connect with connector plugins. Following a deployment of AMQ Streams, you can deploy Debezium as a connector configuration through Kafka Connect.
2.1. Prerequisites
A Debezium installation requires the following:
- Red Hat Enterprise Linux is running.
- Administrative privileges (sudo access).
- AMQ Streams 2.0 on Red Hat Enterprise Linux is installed on the host computer.
- Credentials for the kafka user that was created when AMQ Streams was installed.
- An AMQ Streams cluster is running.
  - For instructions on running a basic, non-production AMQ Streams cluster that contains a single ZooKeeper node and a single Kafka node, see Running a single node AMQ Streams cluster.
If you have an earlier version of AMQ Streams, you must first upgrade to AMQ Streams 2.0. For upgrade instructions, see AMQ Streams and Kafka upgrades.
Additional resources
- For information about the supported configuration for running Debezium on Red Hat Enterprise Linux, see the Debezium Supported Configurations page.
- For more information about how to install AMQ Streams, see Installing AMQ Streams.
2.2. Kafka topic creation recommendations
Debezium stores data in multiple Apache Kafka topics. The topics must either be created in advance by an administrator, or you can configure Kafka Connect to create topics automatically.
The following list describes limitations and recommendations to consider when creating topics:
- Database history topics for MySQL, SQL Server, Db2, and Oracle connectors
  - Infinite or very long retention.
  - Replication factor of at least three in production environments.
  - Single partition.
- Other topics
  - When you enable Kafka log compaction so that only the last change event for a given record is saved, set the following topic properties in Apache Kafka:
    - min.compaction.lag.ms
    - delete.retention.ms
    To ensure that topic consumers have enough time to receive all events and delete markers, specify values for the preceding properties that are larger than the maximum downtime that you expect for your sink connectors. For example, consider the downtime that might occur when you apply updates to sink connectors.
  - Replicated in production.
  - Single partition.
    You can relax the single partition rule, but your application must handle out-of-order events for different rows in the database. Events for a single row are still totally ordered. If you use multiple partitions, the default behavior is that Kafka determines the partition by hashing the key. Other partition strategies require the use of single message transformations (SMTs) to set the partition number for each record.
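The recommendations above can be sketched as a manual topic-creation command. This is an illustrative example only: the topic name and bootstrap address are placeholders, and it assumes a running Kafka broker from an AMQ Streams installation in /opt/kafka.

```shell
# Sketch: manually create a database history topic that follows the
# recommendations above -- single partition, replication factor of three,
# and infinite retention (retention.ms=-1).
# "schema-changes.inventory" and "localhost:9092" are placeholder values.
/opt/kafka/bin/kafka-topics.sh --create \
  --bootstrap-server localhost:9092 \
  --topic schema-changes.inventory \
  --partitions 1 \
  --replication-factor 3 \
  --config retention.ms=-1
```

For a non-production single-node cluster, reduce the replication factor to 1.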
2.3. Deploying Debezium with AMQ Streams on RHEL
This procedure describes how to set up connectors for Debezium on Red Hat Enterprise Linux. Connectors are deployed to an AMQ Streams cluster using Apache Kafka Connect, a framework for streaming data between Apache Kafka and external systems. Kafka Connect must be run in distributed mode rather than standalone mode.
This procedure assumes that AMQ Streams is installed and ZooKeeper and Apache Kafka are running.
Procedure
- Visit the Red Hat Integration download site on the Red Hat Customer Portal and download the Debezium connector or connectors that you want to use. For example, download the Debezium 1.7 MySQL Connector to use Debezium with a MySQL database.
- In /opt/kafka, create the connector-plugins directory if it was not already created for other Kafka Connect plugins:
  $ sudo mkdir /opt/kafka/connector-plugins
- Extract the contents of the Debezium connector archive to the /opt/kafka/connector-plugins directory. This example extracts the contents of the MySQL connector:
  $ sudo unzip debezium-connector-mysql-1.7.2.Final.zip -d /opt/kafka/connector-plugins
- Repeat the preceding steps for each connector that you want to install.
- Switch to the kafka user:
  $ su - kafka
  Password:
- Stop the Kafka Connect process if it is running.
  - Check whether Kafka Connect is running in distributed mode by entering the following command:
    $ jcmd | grep ConnectDistributed
    If the process is running, the command returns the process ID, for example:
    18514 org.apache.kafka.connect.cli.ConnectDistributed /opt/kafka/config/connect-distributed.properties
  - Stop the process by entering the kill command with the process ID, for example:
    $ kill 18514
- Edit the connect-distributed.properties file in /opt/kafka/config/ and specify the location of the Debezium connector plugins:
  plugin.path=/opt/kafka/connector-plugins
- Start Kafka Connect in distributed mode:
  $ /opt/kafka/bin/connect-distributed.sh /opt/kafka/config/connect-distributed.properties
  Kafka Connect runs. During startup, Debezium connectors are loaded from the connector-plugins directory.
- Repeat steps 6–8 for each Kafka Connect worker node.
Additional resources
Updating Kafka Connect
If you need to update your deployment, amend the Debezium connector JAR files in the /opt/kafka/connector-plugins directory, and then restart Kafka Connect.
Next Steps
The Debezium User Guide describes how to configure each connector and its source database for change data capture. After you complete the configuration, a connector will connect to the source database and produce events for each inserted, updated, and deleted row or document.
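As a preview of that configuration step, the sketch below builds a connector registration payload for the MySQL connector and submits it to the Kafka Connect REST API. All connection values (host, credentials, server name, topic names) are placeholders for illustration, not values from this guide; consult the Debezium User Guide for the full set of connector properties.

```shell
# Hypothetical example: register a Debezium MySQL connector instance with a
# Kafka Connect worker. Every value in the payload is a placeholder.
cat > /tmp/register-mysql.json <<'EOF'
{
  "name": "inventory-connector",
  "config": {
    "connector.class": "io.debezium.connector.mysql.MySqlConnector",
    "database.hostname": "mysql.example.com",
    "database.port": "3306",
    "database.user": "debezium",
    "database.password": "dbz",
    "database.server.id": "184054",
    "database.server.name": "dbserver1",
    "database.include.list": "inventory",
    "database.history.kafka.bootstrap.servers": "localhost:9092",
    "database.history.kafka.topic": "schema-changes.inventory"
  }
}
EOF

# Validate the payload locally before sending it.
python3 -m json.tool /tmp/register-mysql.json > /dev/null && echo "payload OK"

# Submit it to a running Kafka Connect worker (requires the cluster deployed
# in this chapter; commented out here because it needs a live worker):
# curl -s -X POST -H "Content-Type: application/json" \
#      --data @/tmp/register-mysql.json http://localhost:8083/connectors
```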