Chapter 2. Installing Debezium connectors


Install Debezium connectors through AMQ Streams by extending Kafka Connect with connector plug-ins. Following a deployment of AMQ Streams, you can deploy Debezium as a connector configuration through Kafka Connect.

2.1. Prerequisites

A Debezium installation requires the following:

  • An OpenShift cluster
  • A deployment of AMQ Streams with Kafka Connect
  • A user on the OpenShift cluster with cluster-admin permissions to set up the required cluster roles and API services
Note

Java 8 or later is required to run the Debezium connectors.

To install Debezium, the OpenShift Container Platform command-line interface (CLI) is required. For information about how to install the CLI for OpenShift 4.4, see the OpenShift Container Platform 4.4 documentation.
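
After you install the CLI, you can verify the installation with, for example:

    $ oc version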

2.2. Kafka topic creation recommendations

Debezium uses multiple Kafka topics for storing data. Either an administrator must create these topics, or Kafka can create them automatically if you enable topic auto-creation with the auto.create.topics.enable broker configuration property.
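
For example, with AMQ Streams you can set broker properties in the spec.kafka.config section of the Kafka custom resource. The following is a minimal sketch; the cluster name my-cluster is a placeholder:

    # Excerpt from a Kafka custom resource; "my-cluster" is a placeholder name.
    apiVersion: kafka.strimzi.io/v1beta1
    kind: Kafka
    metadata:
      name: my-cluster
    spec:
      kafka:
        config:
          # Let brokers create topics automatically on first use.
          auto.create.topics.enable: "true"
        #...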

The following list describes limitations and recommendations to consider when creating topics:

Database history topics for MySQL, SQL Server, and Db2 connectors
  • Infinite or very long retention
  • Replication factor of at least three in production
  • Single partition
Other topics
  • When you enable Kafka log compaction so that only the last change event for a given record is kept, configure the min.compaction.lag.ms and delete.retention.ms topic-level settings in Apache Kafka. To ensure that consumers have enough time to receive all events and delete markers, set these values larger than the maximum downtime that you anticipate for the sink connectors, for example, the downtime that occurs while you update them. See the example topic definitions after this list.
  • Replicated in production
  • Single partition

    You can relax the single partition rule, but your application must handle out-of-order events for different rows in the database. Events for a single row are still totally ordered. If you use multiple partitions, the default behavior is that Kafka determines the partition by hashing the key. Other partitioning strategies require using single message transformations (SMTs) to set the partition number for each record.
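
For example, with AMQ Streams you can declare topics that follow these recommendations as KafkaTopic custom resources. The following is a sketch; the topic names, cluster label, and retention values are placeholders that you would adapt to your deployment:

    # Database history topic: single partition, replicated, retained indefinitely.
    apiVersion: kafka.strimzi.io/v1beta1
    kind: KafkaTopic
    metadata:
      name: dbhistory.inventory
      labels:
        strimzi.io/cluster: my-cluster
    spec:
      partitions: 1
      replicas: 3
      config:
        retention.ms: -1       # infinite retention
        retention.bytes: -1
    ---
    # Compacted change-event topic: keeps the last event per key, but gives
    # consumers time to read all events and delete markers before cleanup.
    apiVersion: kafka.strimzi.io/v1beta1
    kind: KafkaTopic
    metadata:
      name: inventory.customers
      labels:
        strimzi.io/cluster: my-cluster
    spec:
      partitions: 1
      replicas: 3
      config:
        cleanup.policy: compact
        min.compaction.lag.ms: 86400000   # 1 day; set above your maximum sink downtime
        delete.retention.ms: 86400000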

2.3. Deploying Debezium with AMQ Streams

To set up connectors for Debezium on Red Hat OpenShift Container Platform, deploy a Kafka cluster to OpenShift, download and configure Debezium connectors, and deploy Kafka Connect with the connectors.

Prerequisites

  • You used Red Hat AMQ Streams to set up Apache Kafka and Kafka Connect on OpenShift. AMQ Streams offers operators and images that bring Kafka to OpenShift.
  • Podman is installed.

Procedure

  1. Deploy your Kafka cluster. If you already have a Kafka cluster deployed, skip the following three sub-steps.

    1. Install the AMQ Streams operator by following the steps in Installing AMQ Streams and deploying components.
    2. Select the desired configuration and deploy your Kafka cluster.
    3. Deploy Kafka Connect.

    You now have a working Kafka cluster that is running in OpenShift with Kafka Connect.
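
    For reference, minimal custom resources for these sub-steps might look as follows. This is a sketch, not a production configuration; the names, replica counts, and ephemeral storage are example values only:

    # Minimal Kafka cluster; use persistent storage in production.
    apiVersion: kafka.strimzi.io/v1beta1
    kind: Kafka
    metadata:
      name: my-cluster
    spec:
      kafka:
        replicas: 3
        listeners:
          plain: {}
        storage:
          type: ephemeral
      zookeeper:
        replicas: 3
        storage:
          type: ephemeral
      entityOperator:
        topicOperator: {}
    ---
    # Minimal Kafka Connect cluster that connects to the Kafka cluster above.
    apiVersion: kafka.strimzi.io/v1beta1
    kind: KafkaConnect
    metadata:
      name: my-connect-cluster
    spec:
      replicas: 1
      bootstrapServers: my-cluster-kafka-bootstrap:9092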

  2. Check that your pods are running. The pod names correspond with your AMQ Streams deployment.

    $ oc get pods
    
    NAME                                               READY STATUS
    <cluster-name>-entity-operator-7b6b9d4c5f-k7b92    3/3   Running
    <cluster-name>-kafka-0                             2/2   Running
    <cluster-name>-zookeeper-0                         2/2   Running
    <cluster-name>-operator-97cd5cf7b-l58bq            1/1   Running

    In addition to running pods, you should have a DeploymentConfig associated with Kafka Connect.
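
    For example, you can list it with:

    $ oc get deploymentconfigs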

  3. Go to the Red Hat Integration download site.
  4. Download the Debezium connector archive(s) for your database(s).
  5. Extract the archive(s) to create a directory structure for the connector plug-in(s). If you downloaded and extracted multiple archives, the structure looks like this:

    $ tree ./my-plugins/
    ./my-plugins/
    ├── debezium-connector-db2
    │   ├── ...
    ├── debezium-connector-mongodb
    │   ├── ...
    ├── debezium-connector-mysql
    │   ├── ...
    ├── debezium-connector-postgres
    │   ├── ...
    └── debezium-connector-sqlserver
        ├── ...
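
    For example, assuming the downloads are zip archives in the current directory, you might extract each one as follows, where <connector-archive> is a placeholder for the downloaded file name:

    $ unzip <connector-archive>.zip -d ./my-plugins/
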
  6. Create a new Dockerfile by using registry.redhat.io/amq7/amq-streams-kafka-25-rhel7:1.5.0 as the base image:

    FROM registry.redhat.io/amq7/amq-streams-kafka-25-rhel7:1.5.0
    USER root:root
    COPY ./my-plugins/ /opt/kafka/plugins/
    USER 1001
  7. Build the container image. If the Dockerfile you created in the previous step is in the current directory, run the following command:

    podman build -t my-new-container-image:latest .
  8. Push your custom image to your container registry:

    podman push my-new-container-image:latest

    To push the image to a remote registry, include the registry host name in the image tag, for example, registry.example.com/my-new-container-image:latest.
  9. Point to the new container image. Do one of the following:

    • Edit the spec.image field of the KafkaConnect custom resource.

      If set, this property overrides the STRIMZI_DEFAULT_KAFKA_CONNECT_IMAGE variable in the Cluster Operator. For example:

      apiVersion: kafka.strimzi.io/v1beta1
      kind: KafkaConnect
      metadata:
        name: my-connect-cluster
      spec:
        #...
        image: my-new-container-image
    • In the install/cluster-operator/050-Deployment-strimzi-cluster-operator.yaml file, edit the STRIMZI_DEFAULT_KAFKA_CONNECT_IMAGE variable to point to the new container image, and then reinstall the Cluster Operator. If you edit this file, you must apply it to your OpenShift cluster.
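
      For example:

      $ oc apply -f install/cluster-operator/050-Deployment-strimzi-cluster-operator.yaml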

    The Kafka Connect deployment starts to use the new image.
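
    After the deployment is running with the new image, you can register a Debezium connector by posting its configuration to the Kafka Connect REST API. The following is a minimal sketch for a MySQL connector; the service name, database coordinates, and credentials are placeholders, and you would typically issue the request from a pod that can reach the Kafka Connect service:

    $ curl -X POST -H "Content-Type: application/json" \
        http://my-connect-cluster-connect-api:8083/connectors -d '{
      "name": "inventory-connector",
      "config": {
        "connector.class": "io.debezium.connector.mysql.MySqlConnector",
        "tasks.max": "1",
        "database.hostname": "mysql",
        "database.port": "3306",
        "database.user": "debezium",
        "database.password": "dbz",
        "database.server.id": "184054",
        "database.server.name": "dbserver1",
        "database.history.kafka.bootstrap.servers": "my-cluster-kafka-bootstrap:9092",
        "database.history.kafka.topic": "dbhistory.inventory"
      }
    }'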
