Chapter 2. Starting the services

Using Debezium requires AMQ Streams and the Debezium connector service. To start the services needed for this tutorial, you must:

  • Set up a Kafka cluster
  • Deploy Kafka Connect
  • Deploy a MySQL database

2.1. Setting up a Kafka cluster

You use AMQ Streams to set up a Kafka cluster. This procedure deploys a single-node Kafka cluster.

Procedure

  1. In your OpenShift 4.x cluster, create a new project:

    $ oc new-project debezium-tutorial
  2. Change to the directory where you downloaded the AMQ Streams 1.5 OpenShift installation and example files.
  3. Deploy the AMQ Streams Cluster Operator.

    The Cluster Operator is responsible for deploying and managing Kafka clusters within an OpenShift cluster. The following commands configure the installation files and then deploy the Cluster Operator so that it watches only the project that you created:

    $ sed -i 's/namespace: .*/namespace: debezium-tutorial/' install/cluster-operator/*RoleBinding*.yaml
    
    $ oc apply -f install/cluster-operator -n debezium-tutorial
  4. Verify that the Cluster Operator is running.

    The output of the following command shows that the Cluster Operator is running and that its pod is ready:

    $ oc get pods
    NAME                                       READY   STATUS    RESTARTS   AGE
    strimzi-cluster-operator-5c6d68c54-l4gdz   1/1     Running   0          46s
  5. Deploy the Kafka cluster.

    This command applies the custom resource in kafka-ephemeral-single.yaml to create an ephemeral Kafka cluster with three ZooKeeper nodes and one Kafka node:

    $ oc apply -f examples/kafka/kafka-ephemeral-single.yaml
  6. Verify that the Kafka cluster is running.

    This command shows that the Kafka cluster is running and that all of its pods are ready (an alternative readiness check follows this procedure):

    $ oc get pods
    NAME                                          READY   STATUS    RESTARTS   AGE
    my-cluster-entity-operator-5b5d4f7c58-8gnq5   3/3     Running   0          41s
    my-cluster-kafka-0                            2/2     Running   0          70s
    my-cluster-zookeeper-0                        2/2     Running   0          107s
    my-cluster-zookeeper-1                        2/2     Running   0          107s
    my-cluster-zookeeper-2                        2/2     Running   0          107s
    strimzi-cluster-operator-5c6d68c54-l4gdz      1/1     Running   0          8m53s
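
Instead of repeatedly listing pods, you can also wait on the Kafka custom resource itself. The following is a minimal sketch, assuming that the cluster created by kafka-ephemeral-single.yaml is named my-cluster (matching the pod names above) and that your AMQ Streams version reports a Ready condition on the Kafka resource:

    $ oc wait kafka/my-cluster --for=condition=Ready --timeout=300s -n debezium-tutorial

The command blocks until the Cluster Operator marks the cluster as ready or the timeout expires.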

2.2. Deploying Kafka Connect

After setting up a Kafka cluster, you deploy Kafka Connect in a custom container image for Debezium. This service provides a framework for managing the Debezium MySQL connector.

Prerequisites

  • Podman or Docker is installed and you have sufficient rights to create and manage containers.

Procedure

  1. Download the Debezium MySQL Connector 1.2 archive from the Red Hat Integration download site.
  2. Extract the Debezium MySQL connector archive to create a directory structure for the connector plug-in, for example:

    tree ./my-plugins/
    ./my-plugins/
    ├── debezium-connector-mysql
    │   ├── ...
  3. Create and publish a custom image that runs Kafka Connect with the Debezium MySQL connector:

    1. Create a new Dockerfile by using registry.redhat.io/amq7/amq-streams-kafka-25-rhel7:1.5.0 as the base image. In the following example, you would replace my-plugins with the name of your plug-ins directory:

      FROM registry.redhat.io/amq7/amq-streams-kafka-25-rhel7:1.5.0
      USER root:root
      COPY ./my-plugins/ /opt/kafka/plugins/
      USER 1001

      Before it starts running the connector, Kafka Connect loads any third-party plug-ins that are in the /opt/kafka/plugins directory.

    2. Build the container image. For example, to build an image named debezium-container-for-mysql from the Dockerfile that you created in the previous step, run the following command from the directory that contains the Dockerfile:

      podman build -t debezium-container-for-mysql:latest .

    3. Push your custom image to your container registry, for example:

      podman push debezium-container-for-mysql:latest

    4. Point to the new container image by editing the spec.image property of the KafkaConnect custom resource. If set, this property overrides the STRIMZI_DEFAULT_KAFKA_CONNECT_IMAGE variable in the Cluster Operator. For example (a fuller sketch of this resource follows the procedure):

      apiVersion: kafka.strimzi.io/v1beta1
      kind: KafkaConnect
      metadata:
        name: my-connect-cluster
      spec:
        #...
        image: debezium-container-for-mysql
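
The fragment above shows only the image property. The following is a minimal sketch of a complete KafkaConnect resource; it assumes that the Kafka cluster from Section 2.1 is named my-cluster and is reachable on its plain listener at my-cluster-kafka-bootstrap:9092, and it uses placeholder topic and group names that you can change:

    apiVersion: kafka.strimzi.io/v1beta1
    kind: KafkaConnect
    metadata:
      name: my-connect-cluster
      annotations:
        # Assumption: you plan to manage connectors through KafkaConnector resources.
        strimzi.io/use-connector-resources: "true"
    spec:
      replicas: 1
      # The custom image that you built and pushed in this procedure; prefix it with your registry.
      image: debezium-container-for-mysql:latest
      # Plain-text bootstrap address of the Kafka cluster from Section 2.1 (assumed name).
      bootstrapServers: my-cluster-kafka-bootstrap:9092
      config:
        group.id: connect-cluster
        offset.storage.topic: connect-cluster-offsets
        config.storage.topic: connect-cluster-configs
        status.storage.topic: connect-cluster-status

Apply the resource with oc apply -f, and the Cluster Operator creates the Kafka Connect deployment from your image.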

Results

Kafka Connect is now running. The container includes a Debezium MySQL connector, but the connector is not yet configured to capture changes in a database.
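
To confirm this, you can list the pods that the Cluster Operator creates for the Connect cluster. This check assumes a KafkaConnect resource named my-connect-cluster, whose pods Strimzi labels with strimzi.io/cluster=my-connect-cluster:

    $ oc get pods -l strimzi.io/cluster=my-connect-cluster -n debezium-tutorial

If the deployment succeeded, the listed pods report a Running status.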

2.3. Deploying a MySQL database

At this point, you have deployed a Kafka cluster and the Kafka Connect service with the Debezium MySQL database connector. However, you still need a database server from which Debezium can capture changes. In this procedure, you start a MySQL server with an example database.

Procedure

  1. Start a MySQL database by running the following command, which deploys a MySQL server configured with an example inventory database:

    $ oc new-app --name=mysql debezium/example-mysql:1.2
  2. Configure credentials for the MySQL database by running the following command, which updates the deployment configuration for the MySQL database to add the user name and password:

    $ oc set env dc/mysql MYSQL_ROOT_PASSWORD=debezium MYSQL_USER=mysqluser MYSQL_PASSWORD=mysqlpw
  3. Verify that the MySQL database is running. The following command and its output show that the MySQL pod is ready:

    $ oc get pods -l app=mysql
    NAME            READY   STATUS    RESTARTS   AGE
    mysql-1-2gzx5   1/1     Running   1          23s
  4. Open a new terminal and log into the sample inventory database.

    This command opens a MySQL command line client in the pod that is running the MySQL database. The client uses the user name and password that you previously configured:

    $ oc exec mysql-1-2gzx5 -it -- mysql -u mysqluser -pmysqlpw inventory
    mysql: [Warning] Using a password on the command line interface can be insecure.
    Reading table information for completion of table and column names
    You can turn off this feature to get a quicker startup with -A
    
    Welcome to the MySQL monitor.  Commands end with ; or \g.
    Your MySQL connection id is 7
    Server version: 5.7.29-log MySQL Community Server (GPL)
    
    Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.
    
    Oracle is a registered trademark of Oracle Corporation and/or its
    affiliates. Other names may be trademarks of their respective
    owners.
    
    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
    
    mysql>
  5. List the tables in the inventory database:

    mysql> show tables;
    +---------------------+
    | Tables_in_inventory |
    +---------------------+
    | addresses           |
    | customers           |
    | geom                |
    | orders              |
    | products            |
    | products_on_hand    |
    +---------------------+
    6 rows in set (0.00 sec)
  6. Explore the database and view the data that it contains. For example, view the customers table (a check of the binlog configuration, which Debezium relies on, follows this procedure):

    mysql> select * from customers;
    +------+------------+-----------+-----------------------+
    | id   | first_name | last_name | email                 |
    +------+------------+-----------+-----------------------+
    | 1001 | Sally      | Thomas    | sally.thomas@acme.com |
    | 1002 | George     | Bailey    | gbailey@foobar.com    |
    | 1003 | Edward     | Walker    | ed@walker.com         |
    | 1004 | Anne       | Kretchmar | annek@noanswer.org    |
    +------+------------+-----------+-----------------------+
    4 rows in set (0.00 sec)
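
The Debezium MySQL connector captures changes by reading the server's binary log, and the debezium/example-mysql image is expected to come preconfigured for this. As an optional sanity check from the same mysql> prompt, you can inspect the relevant server variables (the exact values depend on the image version):

    mysql> SHOW VARIABLES LIKE 'log_bin';
    mysql> SHOW VARIABLES LIKE 'binlog_format';

For the connector to capture row-level changes, log_bin should be ON and binlog_format should be ROW.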