Chapter 2. Debezium Connector for MySQL


MySQL has a binary log (binlog) that records all operations in the order in which they are committed to the database. This includes changes to table schemas and the data within tables. MySQL uses the binlog for replication and recovery.

The MySQL connector reads the binlog and produces change events for row-level INSERT, UPDATE, and DELETE operations and records the change events in a Kafka topic. Client applications read those Kafka topics.

Because MySQL is typically set up to purge binlogs after a specified period of time, the MySQL connector performs an initial consistent snapshot of each of your databases. The MySQL connector then reads the binlog from the point at which the snapshot was made.

2.1. Overview of how the MySQL connector works

The Debezium MySQL connector tracks the structure of the tables, performs snapshots, transforms binlog events into Debezium change events, records those change events in Kafka, and keeps track of where in the binlog each event was recorded.

2.1.1. How the MySQL connector uses database schemas

When a database client queries a database, the client uses the database’s current schema. However, the database schema can be changed at any time, which means that the connector must be able to identify what the schema was at the time each insert, update, or delete operation was recorded. Also, a connector cannot just use the current schema because the connector might be processing events that are relatively old and may have been recorded before the tables' schemas were changed.

To handle this, MySQL includes in the binlog the row-level changes to the data and the DDL statements that are applied to the database. As the connector reads the binlog and comes across these DDL statements, it parses them and updates an in-memory representation of each table’s schema. The connector uses this schema representation to identify the structure of the tables at the time of each insert, update, or delete and to produce the appropriate change event. In a separate database history Kafka topic, the connector also records all DDL statements along with the position in the binlog where each DDL statement appeared.

When the connector restarts after having crashed or been stopped gracefully, the connector starts reading the binlog from a specific position, that is, from a specific point in time. The connector rebuilds the table structures that existed at this point in time by reading the database history Kafka topic and parsing all DDL statements up to the point in the binlog where the connector is starting.

This database history topic is for connector use only. The connector can optionally generate schema change events on a different topic that is intended for consumer applications. This is described in how the MySQL connector handles schema change topics.

When the MySQL connector captures changes in a table to which a schema change tool such as gh-ost or pt-online-schema-change is applied, the helper tables created during the migration process need to be included among the whitelisted tables.

If downstream systems do not need the messages generated by these temporary tables, a simple message transform can be written and applied to filter them out, as sketched below.
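For example, assuming a Kafka Connect version that supports single message transform predicates, a filter similar to the following could drop events originating from gh-ost helper tables; the topic pattern is illustrative only:

# drop change events whose topic name matches the helper-table pattern
transforms=dropGhostHelperTables
transforms.dropGhostHelperTables.type=org.apache.kafka.connect.transforms.Filter
transforms.dropGhostHelperTables.predicate=isGhostHelperTopic
predicates=isGhostHelperTopic
predicates.isGhostHelperTopic.type=org.apache.kafka.connect.transforms.predicates.TopicNameMatches
predicates.isGhostHelperTopic.pattern=.*_gho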

For information about topic naming conventions, see MySQL connector and Kafka topics.

2.1.2. How the MySQL connector performs database snapshots

When your Debezium MySQL connector is first started, it performs an initial consistent snapshot of your database. The following flow describes how this snapshot is completed.

Note

This is the default snapshot mode, which is set as initial in the snapshot.mode property. For other snapshot modes, see the MySQL connector configuration properties.
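For example, to state the default explicitly in the connector configuration:

snapshot.mode=initial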

The connector performs the following steps (a rough SQL-level sketch follows the list):

1. Grabs a global read lock that blocks writes by other database clients.

Note

The snapshot itself does not prevent other clients from applying DDL, which might interfere with the connector’s attempt to read the binlog position and table schemas. The global read lock is held only while the binlog position is read, and is released in a later step.

2. Starts a transaction with repeatable read semantics to ensure that all subsequent reads within the transaction are done against the consistent snapshot.

3. Reads the current binlog position.

4. Reads the schema of the databases and tables allowed by the connector’s configuration.

5. Releases the global read lock. This now allows other database clients to write to the database.

6. Writes the DDL changes to the schema change topic, including all necessary DROP… and CREATE… DDL statements, if applicable.

7. Scans the database tables and generates CREATE events on the relevant table-specific Kafka topics for each row.

8. Commits the transaction.

9. Records the completed snapshot in the connector offsets.
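As a rough sketch only, the steps above correspond approximately to the following SQL statements (using the inventory.customers table from later examples; the exact statements that the connector issues can differ between versions):

-- step 1: global read lock
FLUSH TABLES WITH READ LOCK;
-- step 2: consistent, repeatable-read transaction
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
START TRANSACTION WITH CONSISTENT SNAPSHOT;
-- step 3: current binlog position
SHOW MASTER STATUS;
-- step 4: schema of each whitelisted table
SHOW CREATE TABLE inventory.customers;
-- step 5: release the global read lock
UNLOCK TABLES;
-- step 7: scan each table row by row
SELECT * FROM inventory.customers;
-- step 8: commit the transaction
COMMIT;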

2.1.2.1. What happens if the connector fails?

If the connector fails, stops, or is rebalanced while making the initial snapshot, the connector creates a new snapshot once it is restarted. Once that initial snapshot is completed, the Debezium MySQL connector restarts from the same position in the binlog so it does not miss any updates.

Note

If the connector stops for long enough, MySQL could purge old binlog files and the connector’s position would be lost. If the position is lost, the connector reverts to the initial snapshot for its starting position. For more tips on troubleshooting the Debezium MySQL connector, see MySQL connector common issues.

2.1.2.2. What if Global Read Locks are not allowed?

Some environments do not allow a global read lock. If the Debezium MySQL connector detects that global read locks are not permitted, the connector uses table-level locks instead and performs a snapshot with this method.

Important

The user must have LOCK TABLES privileges.

The connector performs the following steps:

1. Starts a transaction with repeatable read semantics to ensure that all subsequent reads within the transaction are done against the consistent snapshot.

2. Reads and filters the names of the databases and tables.

3. Reads the current binlog position.

4. Reads the schema of the databases and tables allowed by the connector’s configuration.

5. Writes the DDL changes to the schema change topic, including all necessary DROP… and CREATE… DDL statements, if applicable.

6. Scans the database tables and generates CREATE events on the relevant table-specific Kafka topics for each row.

7. Commits the transaction.

8. Releases the table-level locks.

9. Records the completed snapshot in the connector offsets.

2.1.3. How the MySQL connector handles schema change topics

You can configure the Debezium MySQL connector to produce schema change events that include all DDL statements applied to databases in the MySQL server. The connector writes all of these events to a Kafka topic named <serverName>, where serverName is the logical server name that is specified in the database.server.name configuration property.

Important

If you choose to use schema change events, use the schema change topic and do not consume the database history topic.

Note

It is vital that there is a global order of the events in the database schema history. Therefore, the database history topic must not be partitioned. This means that a partition count of 1 must be specified when creating this topic. When relying on auto topic creation, make sure that Kafka’s num.partitions configuration option (the default number of partitions) is set to 1.
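For example, the database history topic from the example configuration later in this chapter could be created explicitly with a single partition (assuming a Kafka release whose kafka-topics.sh supports --bootstrap-server; older releases use --zookeeper instead):

bin/kafka-topics.sh --create \
  --bootstrap-server my-cluster-kafka-bootstrap:9092 \
  --topic schema-changes.inventory \
  --partitions 1 \
  --replication-factor 1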

2.1.3.1. Schema change topic structure

Each message that is written to the schema change topic contains a message key which includes the name of the connected database used when applying DDL statements:

{
  "schema": {
    "type": "struct",
    "name": "io.debezium.connector.mysql.SchemaChangeKey",
    "optional": false,
    "fields": [
      {
        "field": "databaseName",
        "type": "string",
        "optional": false
      }
    ]
  },
  "payload": {
    "databaseName": "inventory"
  }
}

The schema change event message value contains a structure that includes the DDL statements, the database to which the statements were applied, and the position in the binlog where the statements appeared:

{
  "schema": {
    "type": "struct",
    "name": "io.debezium.connector.mysql.SchemaChangeValue",
    "optional": false,
    "fields": [
      {
        "field": "databaseName",
        "type": "string",
        "optional": false
      },
      {
        "field": "ddl",
        "type": "string",
        "optional": false
      },
      {
        "field": "source",
        "type": "struct",
        "name": "io.debezium.connector.mysql.Source",
        "optional": false,
        "fields": [
          {
            "type": "string",
            "optional": true,
            "field": "version"
          },
          {
            "type": "string",
            "optional": false,
            "field": "name"
          },
          {
            "type": "int64",
            "optional": false,
            "field": "server_id"
          },
          {
            "type": "int64",
            "optional": false,
            "field": "ts_sec"
          },
          {
            "type": "string",
            "optional": true,
            "field": "gtid"
          },
          {
            "type": "string",
            "optional": false,
            "field": "file"
          },
          {
            "type": "int64",
            "optional": false,
            "field": "pos"
          },
          {
            "type": "int32",
            "optional": false,
            "field": "row"
          },
          {
            "type": "boolean",
            "optional": true,
            "default": false,
            "field": "snapshot"
          },
          {
            "type": "int64",
            "optional": true,
            "field": "thread"
          },
          {
            "type": "string",
            "optional": true,
            "field": "db"
          },
          {
            "type": "string",
            "optional": true,
            "field": "table"
          },
          {
            "type": "string",
            "optional": true,
            "field": "query"
          }
        ]
      }
    ]
  },
  "payload": {
    "databaseName": "inventory",
    "ddl": "CREATE TABLE products ( id INTEGER NOT NULL AUTO_INCREMENT PRIMARY KEY, name VARCHAR(255) NOT NULL, description VARCHAR(512), weight FLOAT ); ALTER TABLE products AUTO_INCREMENT = 101;",
    "source" : {
      "version": "0.10.0.Beta4",
      "name": "mysql-server-1",
      "server_id": 0,
      "ts_sec": 0,
      "gtid": null,
      "file": "mysql-bin.000003",
      "pos": 154,
      "row": 0,
      "snapshot": true,
      "thread": null,
      "db": null,
      "table": null,
      "query": null
    }
  }
}
2.1.3.1.1. Important tips regarding schema change topics

The ddl field may contain multiple DDL statements. Every statement applies to the database in the databaseName field, and the statements appear in the same order in which they were applied in the database. The source field is structured exactly as a standard data change event written to table-specific topics. This field is useful for correlating events on different topics.

    "payload": {
        "databaseName": "inventory",
        "ddl": "CREATE TABLE products ( id INTEGER NOT NULL AUTO_INCREMENT PRIMARY KEY,...
        "source" : {
            ....
        }
    }
What if a client submits DDL statements to multiple databases?
  • If MySQL applies them atomically, the connector takes the DDL statements in order, groups them by database, and creates a schema change event for each group.
  • If MySQL applies them individually, the connector creates a separate schema change event for each statement.


2.1.4. MySQL connector events

All data change events produced by the Debezium MySQL connector contain a key and a value. The change event key and the change event value each contain a schema and a payload where the schema describes the structure of the payload and the payload contains the data.

Warning

The MySQL connector ensures that all Kafka Connect schema names adhere to the Avro schema name format. This is important because any character that is not a Latin letter or underscore is replaced by an underscore, which can lead to unexpected conflicts in schema names when the logical server names, database names, and table names contain other characters that are replaced with these underscores.

2.1.4.1. Change event key

For any given table, the change event’s key has a structure that contains a field for each column in the PRIMARY KEY (or unique constraint) at the time the event was created. Let us look at an example table and then how the schema and payload would appear for the table.

example table

CREATE TABLE customers (
  id INTEGER NOT NULL AUTO_INCREMENT PRIMARY KEY,
  first_name VARCHAR(255) NOT NULL,
  last_name VARCHAR(255) NOT NULL,
  email VARCHAR(255) NOT NULL UNIQUE KEY
) AUTO_INCREMENT=1001;

example change event key

{
  "schema": { 1
    "type": "struct",
    "name": "mysql-server-1.inventory.customers.Key", 2
    "optional": false, 3
    "fields": [ 4
      {
        "field": "id",
        "type": "int32",
        "optional": false
      }
    ]
  },
  "payload": { 5
    "id": 1001
  }
}

1
The schema describes what is in the payload.
2
The mysql-server-1.inventory.customers.Key is the name of the schema which defines the structure where mysql-server-1 is the connector name, inventory is the database, and customers is the table.
3
Denotes that the payload is not optional.
4
Specifies the type of fields expected in the payload.
5
The payload itself, which in this case only contains a single id field.

This key describes the row in the inventory.customers table, as output from the connector named mysql-server-1, whose id primary key column has a value of 1001.

2.1.4.2. Change event value

The change event value contains a schema and a payload section. There are three types of change event values which have an envelope structure. The fields in this structure are explained below and marked on each of the change event value examples.

Item | Field name | Description

1

name

mysql-server-1.inventory.customers.Envelope is the name of the schema that defines the overall structure of the value, where mysql-server-1 is the connector name, inventory is the database, and customers is the table

2

op

A mandatory string that describes the type of operation.

values

  • c = create
  • u = update
  • d = delete
  • r = read (initial snapshot only)

3

before

An optional field that specifies the state of the row before the event occurred.

4

after

An optional field that specifies the state of the row after the event occurred.

5

source

A mandatory field that describes the source metadata for the event including:

  • the Debezium version
  • the connector name
  • the binlog name where the event was recorded
  • the binlog position
  • the row within the event
  • if the event was part of a snapshot
  • the name of the affected database and table
  • the id of the MySQL thread creating the event (non-snapshot only)
  • the MySQL server ID (if available)
  • timestamp
Note

If the binlog_rows_query_log_events option is enabled and the connector has the include.query option enabled, a query field is displayed which contains the original SQL statement that generated the event.

6

ts_ms

An optional field that displays the time at which the connector processed the event.

Note

The time is based on the system clock in the JVM running the Kafka Connect task.

Let us look at an example table and then how the schema and payload would appear for the table.

example table

CREATE TABLE customers (
  id INTEGER NOT NULL AUTO_INCREMENT PRIMARY KEY,
  first_name VARCHAR(255) NOT NULL,
  last_name VARCHAR(255) NOT NULL,
  email VARCHAR(255) NOT NULL UNIQUE KEY
) AUTO_INCREMENT=1001;

2.1.4.2.1. Create change event value

This example shows a create event for the customers table:

{
  "schema": { 1
    "type": "struct",
    "fields": [
      {
        "type": "struct",
        "fields": [
          {
            "type": "int32",
            "optional": false,
            "field": "id"
          },
          {
            "type": "string",
            "optional": false,
            "field": "first_name"
          },
          {
            "type": "string",
            "optional": false,
            "field": "last_name"
          },
          {
            "type": "string",
            "optional": false,
            "field": "email"
          }
        ],
        "optional": true,
        "name": "mysql-server-1.inventory.customers.Value",
        "field": "before"
      },
      {
        "type": "struct",
        "fields": [
          {
            "type": "int32",
            "optional": false,
            "field": "id"
          },
          {
            "type": "string",
            "optional": false,
            "field": "first_name"
          },
          {
            "type": "string",
            "optional": false,
            "field": "last_name"
          },
          {
            "type": "string",
            "optional": false,
            "field": "email"
          }
        ],
        "optional": true,
        "name": "mysql-server-1.inventory.customers.Value",
        "field": "after"
      },
      {
        "type": "struct",
        "fields": [
          {
            "type": "string",
            "optional": false,
            "field": "version"
          },
          {
            "type": "string",
            "optional": false,
            "field": "connector"
          },
          {
            "type": "string",
            "optional": false,
            "field": "name"
          },
          {
            "type": "int64",
            "optional": false,
            "field": "ts_ms"
          },
          {
            "type": "boolean",
            "optional": true,
            "default": false,
            "field": "snapshot"
          },
          {
            "type": "string",
            "optional": false,
            "field": "db"
          },
          {
            "type": "string",
            "optional": true,
            "field": "table"
          },
          {
            "type": "int64",
            "optional": false,
            "field": "server_id"
          },
          {
            "type": "string",
            "optional": true,
            "field": "gtid"
          },
          {
            "type": "string",
            "optional": false,
            "field": "file"
          },
          {
            "type": "int64",
            "optional": false,
            "field": "pos"
          },
          {
            "type": "int32",
            "optional": false,
            "field": "row"
          },
          {
            "type": "int64",
            "optional": true,
            "field": "thread"
          },
          {
            "type": "string",
            "optional": true,
            "field": "query"
          }
        ],
        "optional": false,
        "name": "io.product.connector.mysql.Source",
        "field": "source"
      },
      {
        "type": "string",
        "optional": false,
        "field": "op"
      },
      {
        "type": "int64",
        "optional": true,
        "field": "ts_ms"
      }
    ],
    "optional": false,
    "name": "mysql-server-1.inventory.customers.Envelope"
  },
  "payload": { 2
    "op": "c",
    "ts_ms": 1465491411815,
    "before": null,
    "after": {
      "id": 1004,
      "first_name": "Anne",
      "last_name": "Kretchmar",
      "email": "annek@noanswer.org"
    },
    "source": {
      "version": "1.1.2.Final",
      "connector": "mysql",
      "name": "mysql-server-1",
      "ts_ms": 0,
      "snapshot": false,
      "db": "inventory",
      "table": "customers",
      "server_id": 0,
      "gtid": null,
      "file": "mysql-bin.000003",
      "pos": 154,
      "row": 0,
      "thread": 7,
      "query": "INSERT INTO customers (first_name, last_name, email) VALUES ('Anne', 'Kretchmar', 'annek@noanswer.org')"
    }
  }
}
1
The schema portion of this event’s value shows the schema for the envelope, the schema for the source structure (which is specific to the MySQL connector and reused across all events), and the table-specific schemas for the before and after fields.
2
The payload portion of this event’s value shows the information in the event, namely that it is describing that the row was created (because op=c), and that the after field value contains the values of the newly inserted row’s id, first_name, last_name, and email columns.
2.1.4.2.2. Update change event value

The value of an update change event on the customers table has the exact same schema as a create event. The payload is structured the same, but holds different values. Here is an example (formatted for readability):

{
  "schema": { ... },
  "payload": {
    "before": { 1
      "id": 1004,
      "first_name": "Anne",
      "last_name": "Kretchmar",
      "email": "annek@noanswer.org"
    },
    "after": { 2
      "id": 1004,
      "first_name": "Anne Marie",
      "last_name": "Kretchmar",
      "email": "annek@noanswer.org"
    },
    "source": { 3
      "version": "1.1.2.Final",
      "name": "mysql-server-1",
      "connector": "mysql",
      "name": "mysql-server-1",
      "ts_ms": 1465581,
      "snapshot": false,
      "db": "inventory",
      "table": "customers",
      "server_id": 223344,
      "gtid": null,
      "file": "mysql-bin.000003",
      "pos": 484,
      "row": 0,
      "thread": 7,
      "query": "UPDATE customers SET first_name='Anne Marie' WHERE id=1004"
    },
    "op": "u", 4
    "ts_ms": 1465581029523 5
  }
}

Comparing this to the value in the insert event, you can see a couple of differences in the payload section:

1
The before field now has the state of the row with the values before the database commit.
2
The after field now has the updated state of the row, and the first_name value is now Anne Marie. You can compare the before and after structures to determine what actually changed in this row because of the commit.
3
The source field structure has the same fields as before, but the values are different (this event is from a different position in the binlog). The source structure shows information about MySQL’s record of this change (providing traceability). It also has information you can use to compare to other events in this and other topics to know whether this event occurred before, after, or as part of the same MySQL commit as other events.
4
The op field value is now u, signifying that this row changed because of an update.
5
The ts_ms field shows the timestamp when Debezium processed this event.
Note

When the columns for a row’s primary or unique key are updated, the value of the row’s key is changed and Debezium outputs three events: a DELETE event and tombstone event with the old key for the row, followed by an INSERT event with the new key for the row.

2.1.4.2.3. Delete change event value

The value of a delete change event on the customers table has the exact same schema as create and update events. The payload is structured the same, but holds different values. Here is an example (formatted for readability):

{
  "schema": { ... },
  "payload": {
    "before": { 1
      "id": 1004,
      "first_name": "Anne Marie",
      "last_name": "Kretchmar",
      "email": "annek@noanswer.org"
    },
    "after": null, 2
    "source": { 3
      "version": "1.1.2.Final",
      "connector": "mysql",
      "name": "mysql-server-1",
      "ts_ms": 1465581,
      "snapshot": false,
      "db": "inventory",
      "table": "customers",
      "server_id": 223344,
      "gtid": null,
      "file": "mysql-bin.000003",
      "pos": 805,
      "row": 0,
      "thread": 7,
      "query": "DELETE FROM customers WHERE id=1004"
    },
    "op": "d", 4
    "ts_ms": 1465581902461 5
  }
}

Comparing the payload portion to the payloads in the create and update events, you can see some differences:

1
The before field now has the state of the row that was deleted with the database commit.
2
The after field is null, signifying that the row no longer exists.
3
The source field structure has many of the same values as before, except the ts_ms and pos fields have changed (and the file might have changed in other scenarios).
4
The op field value is now d, signifying that this row was deleted.
5
The ts_ms shows the timestamp when Debezium processed this event.

This event provides a consumer with the information that it needs to process the removal of this row. The old values are included because some consumers might require them in order to properly handle the removal.

The MySQL connector’s events are designed to work with Kafka log compaction, which allows for the removal of some older messages as long as at least the most recent message for every key is kept. This allows Kafka to reclaim storage space while ensuring the topic contains a complete data set and can be used for reloading key-based state.

When a row is deleted, the delete event value listed above still works with log compaction, because Kafka can still remove all earlier messages with that same key. If the message value is null, Kafka knows that it can remove all messages with that same key. To make this possible, Debezium’s MySQL connector always follows a delete event with a special tombstone event that has the same key but a null value.
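As an illustration only, the pair of records that the connector emits for the delete of the example row above would look roughly like this; the second record is the tombstone, whose value is literally null:

# delete event
key   = { "id": 1004 }
value = { "op": "d", "before": { ... }, "after": null, ... }

# tombstone event that allows log compaction to drop all messages for this key
key   = { "id": 1004 }
value = null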

2.1.5. How the MySQL connector maps data types

The Debezium MySQL connector represents changes to rows with events that are structured like the table in which the row exists. The event contains a field for each column value. The MySQL data type of that column dictates how the value is represented in the event.

Columns that store strings are defined in MySQL with a character set and collation. The MySQL connector uses the column’s character set when reading the binary representation of the column values in the binlog events. The following table shows how the connector maps the MySQL data types to both literal and semantic types.

  • literal type : how the value is represented using Kafka Connect schema types
  • semantic type : how the Kafka Connect schema captures the meaning of the field (schema name)
MySQL type | Literal type | Semantic type

BOOLEAN, BOOL

BOOLEAN

n/a

BIT(1)

BOOLEAN

n/a

BIT(>1)

BYTES

io.debezium.data.Bits

Note

The length schema parameter contains an integer that represents the number of bits. The byte[] contains the bits in little-endian form and is sized to contain the specified number of bits.

example (where n is the number of bits)

numBytes = n/8 + (n % 8 == 0 ? 0 : 1)

TINYINT

INT16

n/a

SMALLINT[(M)]

INT16

n/a

MEDIUMINT[(M)]

INT32

n/a

INT, INTEGER[(M)]

INT32

n/a

BIGINT[(M)]

INT64

n/a

REAL[(M,D)]

FLOAT32

n/a

FLOAT[(M,D)]

FLOAT64

n/a

DOUBLE[(M,D)]

FLOAT64

n/a

CHAR(M)

STRING

n/a

VARCHAR(M)

STRING

n/a

BINARY(M)

BYTES

n/a

VARBINARY(M)

BYTES

n/a

TINYBLOB

BYTES

n/a

TINYTEXT

STRING

n/a

BLOB

BYTES

n/a

TEXT

STRING

n/a

MEDIUMBLOB

BYTES

n/a

MEDIUMTEXT

STRING

n/a

LONGBLOB

BYTES

n/a

LONGTEXT

STRING

n/a

JSON

STRING

io.debezium.data.Json

Note

Contains the string representation of a JSON document, array, or scalar.

ENUM

STRING

io.debezium.data.Enum

Note

The allowed schema parameter contains the comma-separated list of allowed values.

SET

STRING

io.debezium.data.EnumSet

Note

The allowed schema parameter contains the comma-separated list of allowed values.

YEAR[(2|4)]

INT32

io.debezium.time.Year

TIMESTAMP[(M)]

STRING

io.debezium.time.ZonedTimestamp

Note

In ISO 8601 format with microsecond precision. MySQL allows M to be in the range of 0-6.

2.1.5.1. Temporal values

Excluding the TIMESTAMP data type, MySQL temporal types depend on the value of the time.precision.mode configuration property. For TIMESTAMP columns whose default value is specified as CURRENT_TIMESTAMP or NOW, the value 1970-01-01 00:00:00 is used as the default value in the Kafka Connect schema.

MySQL allows zero-values for DATE, DATETIME, and TIMESTAMP columns because zero-values are sometimes preferred over null values. The MySQL connector represents zero-values as null values when the column definition allows null values, or as the epoch day when the column does not allow null values.

Temporal values without time zones

The DATETIME type represents a local date and time such as "2018-01-13 09:48:27". As you can see, there is no time zone information. Such columns are converted into epoch milliseconds or microseconds based on the column’s precision by using UTC. The TIMESTAMP type represents a timestamp without time zone information and is converted by MySQL from the server (or session’s) current time zone into UTC when writing and vice versa when reading back the value. For example:

  • DATETIME with a value of 2018-06-20 06:37:03 becomes 1529476623000.
  • TIMESTAMP with a value of 2018-06-20 06:37:03 becomes 2018-06-20T13:37:03Z.

Such TIMESTAMP columns are converted into an equivalent io.debezium.time.ZonedTimestamp in UTC based on the server (or session’s) current time zone. The time zone will be queried from the server by default. If this fails, it must be specified explicitly with the database.serverTimezone connector configuration property. For example, if the database’s time zone (either globally or configured for the connector by means of the database.serverTimezone property) is "America/Los_Angeles", the TIMESTAMP value "2018-06-20 06:37:03" is represented by a ZonedTimestamp with the value "2018-06-20T13:37:03Z".
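A minimal sketch of the relevant connector property, for the case where the time zone cannot be queried from the server:

database.serverTimezone=America/Los_Angeles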

Note that the time zone of the JVM running Kafka Connect and Debezium does not affect these conversions.

More details about properties related to temporal values are in the documentation for MySQL connector configuration properties.

time.precision.mode=adaptive_time_microseconds (default)

The MySQL connector determines the literal type and semantic type based on the column’s data type definition so that events represent exactly the values in the database. All time fields are in microseconds. Only positive TIME field values in the range of 00:00:00.000000 to 23:59:59.999999 can be captured correctly.

MySQL type | Literal type | Semantic type

DATE

INT32

io.debezium.time.Date

Note

Represents the number of days since epoch.

TIME[(M)]

INT64

io.debezium.time.MicroTime

Note

Represents the time value in microseconds and does not include time zone information. MySQL allows M to be in the range of 0-6.

DATETIME, DATETIME(0), DATETIME(1), DATETIME(2), DATETIME(3)

INT64

io.debezium.time.Timestamp

Note

Represents the number of milliseconds past epoch and does not include time zone information.

DATETIME(4), DATETIME(5), DATETIME(6)

INT64

io.debezium.time.MicroTimestamp

Note

Represents the number of microseconds past epoch and does not include time zone information.

time.precision.mode=connect

The MySQL connector uses the predefined Kafka Connect logical types. This approach is less precise than the default approach and the events could be less precise if the database column has a fractional second precision value of greater than 3. Only values in the range of 00:00:00.000 to 23:59:59.999 can be handled. Set time.precision.mode=connect only if you can ensure that the TIME values in your tables never exceed the supported ranges. The connect setting is expected to be removed in a future version of Debezium.

MySQL type | Literal type | Semantic type

DATE

INT32

org.apache.kafka.connect.data.Date

Note

Represents the number of days since epoch.

TIME[(M)]

INT64

org.apache.kafka.connect.data.Time

Note

Represents the time value in milliseconds since midnight and does not include time zone information.

DATETIME[(M)]

INT64

org.apache.kafka.connect.data.Timestamp

Note

Represents the number of milliseconds since epoch, and does not include time zone information.

Decimal values

Decimals are handled via the decimal.handling.mode property.

decimal.handling.mode=precise

MySQL type | Literal type | Semantic type

NUMERIC[(M[,D])]

BYTES

org.apache.kafka.connect.data.Decimal

Note

The scale schema parameter contains an integer that represents how many digits the decimal point shifted.

DECIMAL[(M[,D])]

BYTES

org.apache.kafka.connect.data.Decimal

Note

The scale schema parameter contains an integer that represents how many digits the decimal point shifted.

decimal.handling.mode=double
MySQL type | Literal type | Semantic type

NUMERIC[(M[,D])]

FLOAT64

n/a

DECIMAL[(M[,D])]

FLOAT64

n/a

decimal.handling.mode=string
MySQL type | Literal type | Semantic type

NUMERIC[(M[,D])]

STRING

n/a

DECIMAL[(M[,D])]

STRING

n/a

2.1.5.2. Boolean values

MySQL handles the BOOLEAN value internally in a specific way: a BOOLEAN column is internally mapped to the TINYINT(1) data type. When a table is created while the connector is streaming, Debezium receives the original DDL and uses the proper BOOLEAN mapping. During a snapshot, however, Debezium executes SHOW CREATE TABLE to obtain the table definition, which returns TINYINT(1) for both BOOLEAN and TINYINT(1) columns.

Debezium then has no way to obtain the original type mapping and maps such columns to TINYINT(1).

To map selected TINYINT(1) columns to BOOLEAN instead, you can configure the TinyIntOneToBooleanConverter custom converter shown below, applying it to all columns of specified tables or to individual columns. An example configuration is:

converters=boolean
boolean.type=io.debezium.connector.mysql.converters.TinyIntOneToBooleanConverter
boolean.selector=db1.table1.*, db1.table2.column1

2.1.5.3. Spatial data types

Currently, the Debezium MySQL connector supports the following spatial data types:

MySQL type | Literal type | Semantic type

GEOMETRY, LINESTRING, POLYGON, MULTIPOINT, MULTILINESTRING, MULTIPOLYGON, GEOMETRYCOLLECTION

STRUCT

io.debezium.data.geometry.Geometry

Note

Contains a structure with two fields:

  • srid (INT32): a spatial reference system id that defines the type of geometry object stored in the structure
  • wkb (BYTES): a binary representation of the geometry object encoded in the Well-Known-Binary (wkb) format. See the Open Geospatial Consortium for more details.

2.1.6. The MySQL connector and Kafka topics

The Debezium MySQL connector writes events for all INSERT, UPDATE, and DELETE operations from a single table to a single Kafka topic. The Kafka topic naming convention is as follows:

format

serverName.databaseName.tableName

Example 2.1. example

Let us say that fulfillment is the server name and inventory is the database, which contains three tables: orders, customers, and products. The Debezium MySQL connector produces events on three Kafka topics, one for each table in the database:

fulfillment.inventory.orders

fulfillment.inventory.customers

fulfillment.inventory.products

2.1.7. MySQL supported topologies

The Debezium MySQL connector supports the following MySQL topologies:

Standalone
When a single MySQL server is used, the server must have the binlog enabled (and optionally GTIDs enabled) so the Debezium MySQL connector can monitor the server. This is often acceptable, since the binary log can also be used as an incremental backup. In this case, the MySQL connector always connects to and follows this standalone MySQL server instance.
Master and slave

The Debezium MySQL connector can follow one of the masters or one of the slaves (if that slave has its binlog enabled), but the connector only sees changes in the cluster that are visible to that server. Generally, this is not a problem except for multi-master topologies.

The connector records its position in the server’s binlog, which is different on each server in the cluster. Therefore, the connector will need to follow just one MySQL server instance. If that server fails, it must be restarted or recovered before the connector can continue.

Highly available clusters
A variety of high availability solutions exist for MySQL, and they make it far easier to tolerate and almost immediately recover from problems and failures. Most HA MySQL clusters use GTIDs so that slaves are able to keep track of all changes on any of the masters.
Multi-master

A multi-master MySQL topology uses one or more MySQL slaves that each replicate from multiple masters. This is a powerful way to aggregate the replication of multiple MySQL clusters, and requires using GTIDs.

The Debezium MySQL connector can use these multi-master MySQL slaves as sources, and can fail over to different multi-master MySQL slaves as long as the new slave is caught up to the old slave (for example, the new slave has all of the transactions that were last seen on the first slave). This works even if the connector is only using a subset of databases and/or tables, as the connector can be configured to include or exclude specific GTID sources when attempting to reconnect to a new multi-master MySQL slave and find the correct position in the binlog.
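A hedged sketch of such GTID source filtering, using the gtid.source.includes property from the configuration reference; the UUID pattern is purely illustrative:

# follow only transactions whose GTID source UUID matches this pattern
gtid.source.includes=aaaaaaaa-bbbb-cccc-dddd-.*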

Hosted

There is support for the Debezium MySQL connector to use hosted options such as Amazon RDS and Amazon Aurora.

Important

Because these hosted options do not allow a global read lock, table-level locks are used to create the consistent snapshot.

2.2. Setting up MySQL server

2.2.1. Creating a MySQL user for Debezium

You have to define a MySQL user with appropriate permissions on all databases that the Debezium MySQL connector monitors.

Prerequisites

  • You must have a MySQL server.
  • You must know basic SQL commands.

Procedure

  1. Create the MySQL user:
mysql> CREATE USER 'user'@'localhost' IDENTIFIED BY 'password';
  2. Grant the required permissions to the user:
mysql> GRANT SELECT, RELOAD, SHOW DATABASES, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'user' IDENTIFIED BY 'password';
Tip

See permissions explained for notes on each permission.

Important

If you use a hosted option such as Amazon RDS or Amazon Aurora that does not allow a global read lock, table-level locks are used to create the consistent snapshot. In this case, you also need to grant LOCK TABLES permissions to the user that you create, for example as shown below. See Overview of how the MySQL connector works for more details.
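For example, a grant that additionally includes LOCK TABLES might look like the following sketch, based on the grant shown in step 2:

mysql> GRANT SELECT, RELOAD, SHOW DATABASES, REPLICATION SLAVE, REPLICATION CLIENT, LOCK TABLES ON *.* TO 'user' IDENTIFIED BY 'password';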

  3. Finalize the user’s permissions:
mysql> FLUSH PRIVILEGES;

2.2.1.1. Permissions explained

Permission/item | Description

SELECT

enables the connector to select rows from tables in databases

Note

This is only used when performing a snapshot.

RELOAD

enables the connector to use the FLUSH statement to clear or reload internal caches, flush tables, or acquire locks.

Note

This is only used when performing a snapshot.

SHOW DATABASES

enables the connector to see database names by issuing the SHOW DATABASES statement.

Note

This is only used when performing a snapshot.

REPLICATION SLAVE

enables the connector to connect to and read the MySQL server binlog.

REPLICATION CLIENT

enables the connector to use the following statements:

  • SHOW MASTER STATUS
  • SHOW SLAVE STATUS
  • SHOW BINARY LOGS
Important

This is always required for the connector.

ON

Identifies the database to which the permissions apply.

TO 'user'

Specifies the user to which the permissions are granted.

IDENTIFIED BY 'password'

Specifies the password for the user.

2.2.2. Enabling the MySQL binlog for Debezium

You must enable binary logging for MySQL replication. The binary logs record transaction updates for replication tools to propagate changes.

Prerequisites

  • You must have a MySQL server.
  • You should have appropriate MySQL user privileges.

Procedure

  1. Check if the log-bin option is already on or not.
mysql> SELECT variable_value as "BINARY LOGGING STATUS (log-bin) ::"
FROM information_schema.global_variables WHERE variable_name='log_bin';
  2. If OFF, configure your MySQL server configuration file with the following:
Tip

See Binlog config properties for notes on each property.

server-id         = 223344 1
log_bin           = mysql-bin 2
binlog_format     = ROW 3
binlog_row_image  = FULL 4
expire_logs_days  = 10 5
  3. Confirm your changes by checking the binlog status once more.
mysql> SELECT variable_value as "BINARY LOGGING STATUS (log-bin) ::"
FROM information_schema.global_variables WHERE variable_name='log_bin';

2.2.2.1. Binlog configuration properties

Number | Property | Description

1

server-id

The value for the server-id must be unique for each server and replication client within the MySQL cluster. When the MySQL connector is set up, the connector is assigned a unique server ID.

2

log_bin

The value of log_bin is the base name of the sequence of binlog files.

3

binlog_format

The binlog_format must be set to ROW or row.

4

binlog_row_image

The binlog_row_image must be set to FULL or full.

5

expire_logs_days

This is the number of days for automatic binlog file removal. The default is 0 which means no automatic removal.

Tip

Set the value to match the needs of your environment.

2.2.3. Enabling MySQL Global Transaction Identifiers for Debezium

Global transaction identifiers (GTIDs) uniquely identify transactions that occur on a server within a cluster. Though not required for the Debezium MySQL connector, using GTIDs simplifies replication and allows you to more easily confirm if master and slave servers are consistent.

Note

GTIDs are only available from MySQL 5.6.5 and later. See the MySQL documentation for more details.

Prerequisites

  • You must have a MySQL server.
  • You must know basic SQL commands.
  • You must have access to the MySQL configuration file.

Procedure

  1. Enable gtid_mode in the MySQL configuration file:
gtid_mode=ON
  2. Enable enforce_gtid_consistency:
enforce_gtid_consistency=ON
  3. Confirm the changes:
mysql> show global variables like '%GTID%';

response

+--------------------------+-------+
| Variable_name            | Value |
+--------------------------+-------+
| enforce_gtid_consistency | ON    |
| gtid_mode                | ON    |
+--------------------------+-------+

2.2.3.1. Options explained

Permission/item | Description

gtid_mode

Boolean which specifies whether GTID mode of the MySQL server is enabled or not.

  • ON = enabled
  • OFF = disabled

enforce_gtid_consistency

Boolean which instructs the server whether or not to enforce GTID consistency by allowing the execution of statements that can be logged in a transactionally safe manner; required when using GTIDs.

  • ON = enabled
  • OFF = disabled

2.2.4. Setting up session timeouts for Debezium

When an initial consistent snapshot is made for large databases, your established connection could time out while the tables are being read. You can prevent this behavior by configuring interactive_timeout and wait_timeout in your MySQL configuration file.

Prerequisites

  • You must have a MySQL server.
  • You must know basic SQL commands.
  • You must have access to the MySQL configuration file.

Procedure

  1. Configure interactive_timeout in the MySQL configuration file:
interactive_timeout=<duration-in-seconds>
  2. Configure wait_timeout:
wait_timeout=<duration-in-seconds>
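For example, to allow up to an hour of inactivity during a long-running snapshot (the values are illustrative):

interactive_timeout=3600
wait_timeout=3600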

2.2.4.1. Options explained

Permission/item | Description

interactive_timeout

The number of seconds the server waits for activity on an interactive connection before closing it.

Tip

See MySQL’s documentation for more details.

wait_timeout

The number of seconds the server waits for activity on a noninteractive connection before closing it.

Tip

See MySQL’s documentation for more details.

2.2.5. Enabling query log events for Debezium

You might want to see the original SQL statement for each binlog event. Enabling the binlog_rows_query_log_events option in the MySQL configuration file allows you to do this.

Note

This option is only available from MySQL 5.6 and later.

Prerequisites

  • You must have a MySQL server.
  • You must know basic SQL commands.
  • You must have access to the MySQL configuration file.

Procedure

  1. Enable binlog_rows_query_log_events in the MySQL configuration file:
binlog_rows_query_log_events=ON

2.2.5.1. Options explained

Permission/item | Description

binlog_rows_query_log_events

Boolean which enables/disables support for including the original SQL statement in the binlog entry.

  • ON = enabled
  • OFF = disabled

2.3. Deploying the MySQL connector

2.3.1. Installing the MySQL connector

Installing the Debezium MySQL connector is a simple process whereby you only need to download the JAR, extract it to your Kafka Connect environment, and ensure the plug-in’s parent directory is specified in your Kafka Connect environment.

Prerequisites

Procedure

  1. Download the Debezium MySQL connector.
  2. Extract the files into your Kafka Connect environment.
  3. Add the plug-in’s parent directory to your Kafka Connect plugin.path:
plugin.path=/kafka/connect
Note

The above example assumes you have extracted the Debezium MySQL connector to the /kafka/connect/debezium-connector-mysql path.

  4. Restart your Kafka Connect process. This ensures the new JARs are picked up.

2.3.2. Configuring the MySQL connector

Typically, you configure the Debezium MySQL connector in a .yaml file using the configuration properties available for the connector.

Prerequisites

Procedure

  1. Set the "name" of the connector in the .yaml file.
  2. Set the configuration properties that you require for your Debezium MySQL connector.
Tip

For a complete list of configuration properties, see MySQL connector configuration properties.

MySQL connector example configuration

  apiVersion: kafka.strimzi.io/v1beta1
  kind: KafkaConnector
  metadata:
    name: inventory-connector  1
    labels:
      strimzi.io/cluster: my-connect-cluster
  spec:
    class: io.debezium.connector.mysql.MySqlConnector
    tasksMax: 1  2
    config:  3
      database.hostname: mysql  4
      database.port: 3306
      database.user: debezium
      database.password: dbz
      database.server.id: 184054  5
      database.server.name: dbserver1  6
      database.whitelist: inventory  7
      database.history.kafka.bootstrap.servers: my-cluster-kafka-bootstrap:9092  8
      database.history.kafka.topic: schema-changes.inventory  9

1
The name of the connector.
2
Only one task should operate at any one time. Because the MySQL connector reads the MySQL server’s binlog, using a single connector task ensures proper order and event handling. The Kafka Connect service uses connectors to start one or more tasks that do the work, and it automatically distributes the running tasks across the cluster of Kafka Connect services. If any of the services stop or crash, those tasks will be redistributed to running services.
3
The connector’s configuration.
4
The database host, which is the name of the container running the MySQL server (mysql).
5, 6
A unique server ID and name. The server name is the logical identifier for the MySQL server or cluster of servers. This name will be used as the prefix for all Kafka topics.
7
Only changes in the inventory database will be detected.
8, 9
The connector will store the history of the database schemas in Kafka using this broker (the same broker to which you are sending events) and topic name. Upon restart, the connector will recover the schemas of the database that existed at the point in time in the binlog when the connector should begin reading.
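If you run Kafka Connect directly instead of through a KafkaConnector resource, an equivalent registration payload for the Kafka Connect REST API might look like the following sketch, built from the same values:

{
  "name": "inventory-connector",
  "config": {
    "connector.class": "io.debezium.connector.mysql.MySqlConnector",
    "tasks.max": "1",
    "database.hostname": "mysql",
    "database.port": "3306",
    "database.user": "debezium",
    "database.password": "dbz",
    "database.server.id": "184054",
    "database.server.name": "dbserver1",
    "database.whitelist": "inventory",
    "database.history.kafka.bootstrap.servers": "my-cluster-kafka-bootstrap:9092",
    "database.history.kafka.topic": "schema-changes.inventory"
  }
}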

2.3.3. MySQL connector configuration properties

The configuration properties listed here are required to run the Debezium MySQL connector. There are also advanced MySQL connector properties whose default value rarely needs to be changed and therefore, they do not need to be specified in the connector configuration.

Tip

The Debezium MySQL connector supports pass-through configuration when creating the Kafka producer and consumer. See information about pass-through properties at the end of this section, and also see the Kafka documentation for more details about pass-through properties.

Property | Default | Description

name

 

Unique name for the connector. Attempting to register again with the same name will fail. (This property is required by all Kafka Connect connectors.)

connector.class

 

The name of the Java class for the connector. Always use a value of io.debezium​.connector.mysql.MySqlConnector for the MySQL connector.

tasks.max

1

The maximum number of tasks that should be created for this connector. The MySQL connector always uses a single task and therefore does not use this value, so the default is always acceptable.

database.hostname

 

IP address or hostname of the MySQL database server.

database.port

3306

Integer port number of the MySQL database server.

database.user

 

Name of the MySQL user to use when connecting to the MySQL database server.

database.password

 

Password to use when connecting to the MySQL database server.

database.server.name

 

Logical name that identifies and provides a namespace for the particular MySQL database server/cluster being monitored. The logical name should be unique across all other connectors, since it is used as a prefix for all Kafka topic names emanating from this connector. Only alphanumeric characters and underscores should be used.

database.server.id

random

A numeric ID of this database client, which must be unique across all currently-running database processes in the MySQL cluster. This connector joins the MySQL database cluster as another server (with this unique ID) so it can read the binlog. By default, a random number is generated between 5400 and 6400, though we recommend setting an explicit value.

database.history.kafka.topic

 

The full name of the Kafka topic where the connector will store the database schema history.

database.history​.kafka.bootstrap.servers

 

A list of host/port pairs that the connector will use for establishing an initial connection to the Kafka cluster. This connection will be used for retrieving database schema history previously stored by the connector, and for writing each DDL statement read from the source database. This should point to the same Kafka cluster used by the Kafka Connect process.

database.whitelist

empty string

An optional comma-separated list of regular expressions that match database names to be monitored; any database name not included in the whitelist will be excluded from monitoring. By default all databases will be monitored. May not be used with database.blacklist.

database.blacklist

empty string

An optional comma-separated list of regular expressions that match database names to be excluded from monitoring; any database name not included in the blacklist will be monitored. May not be used with database.whitelist.

table.whitelist

empty string

An optional comma-separated list of regular expressions that match fully-qualified table identifiers for tables to be monitored; any table not included in the whitelist will be excluded from monitoring. Each identifier is of the form databaseName.tableName. By default the connector will monitor every non-system table in each monitored database. May not be used with table.blacklist.

table.blacklist

empty string

An optional comma-separated list of regular expressions that match fully-qualified table identifiers for tables to be excluded from monitoring; any table not included in the blacklist will be monitored. Each identifier is of the form databaseName.tableName. May not be used with table.whitelist.

column.blacklist

empty string

An optional comma-separated list of regular expressions that match the fully-qualified names of columns that should be excluded from change event message values. Fully-qualified names for columns are of the form databaseName.tableName.columnName, or databaseName.schemaName.tableName.columnName.

column.truncate.to.length.chars

n/a

An optional comma-separated list of regular expressions that match the fully-qualified names of character-based columns whose values should be truncated in the change event message values if the field values are longer than the specified number of characters. Multiple properties with different lengths can be used in a single configuration, although in each the length must be a positive integer. Fully-qualified names for columns are of the form databaseName.tableName.columnName.

column.mask.with.length.chars

n/a

An optional comma-separated list of regular expressions that match the fully-qualified names of character-based columns whose values should be replaced in the change event message values with a field value consisting of the specified number of asterisk (*) characters. Multiple properties with different lengths can be used in a single configuration, although in each the length must be a positive integer or zero. Fully-qualified names for columns are of the form databaseName.tableName.columnName.

column.propagate.source.type

n/a

An optional comma-separated list of regular expressions that match the fully-qualified names of columns whose original type and length should be added as a parameter to the corresponding field schemas in the emitted change messages. The schema parameters __debezium.source.column.type, __debezium.source.column.length and __debezium.source.column.scale will be used to propagate the original type name and length (for variable-width types), respectively. Useful to properly size corresponding columns in sink databases. Fully-qualified names for columns are of the form databaseName.tableName.columnName, or databaseName.schemaName.tableName.columnName.

datatype.propagate.source.type

n/a

An optional comma-separated list of regular expressions that match the database-specific data type name of columns whose original type and length should be added as a parameter to the corresponding field schemas in the emitted change messages. The schema parameters __debezium.source.column.type, __debezium.source.column.length and __debezium.source.column.scale will be used to propagate the original type name and length (for variable-width types), respectively. Useful to properly size corresponding columns in sink databases. Fully-qualified data type names are of the form databaseName.tableName.typeName, or databaseName.schemaName.tableName.typeName. See how the MySQL connector maps data types for the list of MySQL-specific data type names.

time.precision.mode

adaptive_time​_microseconds

Time, date, and timestamps can be represented with different kinds of precision, including: adaptive_time_microseconds (the default) captures the date, datetime and timestamp values exactly as in the database using either millisecond, microsecond, or nanosecond precision values based on the database column’s type, with the exception of TIME type fields, which are always captured as microseconds; or connect always represents time and timestamp values using Kafka Connect’s built-in representations for Time, Date, and Timestamp, which uses millisecond precision regardless of the database columns' precision.

decimal.handling.mode

precise

Specifies how the connector should handle values for DECIMAL and NUMERIC columns: precise (the default) represents them precisely using java.math.BigDecimal values represented in change events in a binary form; double represents them using double values, which may result in a loss of precision but will be far easier to use; the string option encodes values as formatted strings, which is easy to consume but loses the semantic information about the real type.

bigint.unsigned.handling.mode

long

Specifies how BIGINT UNSIGNED columns should be represented in change events, including: precise uses java.math.BigDecimal to represent values, which are encoded in the change events using a binary representation and Kafka Connect’s org.apache.kafka.connect.data.Decimal type; long (the default) represents values using Java’s long, which may not offer the precision but will be far easier to use in consumers. long is usually the preferable setting. Only when working with values larger than 2^63, the precise setting should be used as those values cannot be conveyed using long.

include.schema.changes

true

Boolean value that specifies whether the connector should publish changes in the database schema to a Kafka topic with the same name as the database server ID. Each schema change will be recorded using a key that contains the database name and whose value includes the DDL statement(s). This is independent of how the connector internally records database history. The default is true.

include.query

false

Boolean value that specifies whether the connector should include the original SQL query that generated the change event.
Note: This option requires MySQL be configured with the binlog_rows_query_log_events option set to ON. Query will not be present for events generated from the snapshot process.
WARNING: Enabling this option may expose tables or fields explicitly blacklisted or masked by including the original SQL statement in the change event. For this reason this option is defaulted to 'false'.

event.processing​.failure.handling.mode

fail

Specifies how the connector should react to exceptions during deserialization of binlog events. fail propagates the exception (indicating the problematic event and its binlog offset), causing the connector to stop.
warn causes the problematic event to be skipped, and the problematic event and its binlog offset to be logged.
skip causes the problematic event to be skipped.

inconsistent.schema.handling.mode

fail

Specifies how the connector should react to binlog events that relate to tables that are not present in the internal schema representation (that is, the internal representation is not consistent with the database). fail throws an exception (indicating the problematic event and its binlog offset), causing the connector to stop.
warn causes the problematic event to be skipped, and the problematic event and its binlog offset to be logged.
skip causes the problematic event to be skipped.

max.queue.size

8192

Positive integer value that specifies the maximum size of the blocking queue into which change events read from the database log are placed before they are written to Kafka. This queue can provide backpressure to the binlog reader when, for example, writes to Kafka are slow or Kafka is not available. Events that appear in the queue are not included in the offsets periodically recorded by this connector. Defaults to 8192, and should always be larger than the maximum batch size specified in the max.batch.size property.

max.batch.size

2048

Positive integer value that specifies the maximum size of each batch of events that should be processed during each iteration of this connector. Defaults to 2048.

poll.interval.ms

1000

Positive integer value that specifies the number of milliseconds the connector should wait during each iteration for new change events to appear. Defaults to 1000 milliseconds, or 1 second.

connect.timeout.ms

30000

A positive integer value that specifies the maximum time in milliseconds this connector should wait after trying to connect to the MySQL database server before timing out. Defaults to 30 seconds.

gtid.source.includes

 

A comma-separated list of regular expressions that match source UUIDs in the GTID set used to find the binlog position in the MySQL server. Only the GTID ranges that have sources matching one of these include patterns will be used. May not be used with gtid.source.excludes.
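
For example, the following setting (a sketch; the UUID prefix shown is purely illustrative) restricts the connector to GTID ranges whose source UUID starts with a specific value:

gtid.source.includes=36f2e8f5.*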

gtid.source.excludes

 

A comma-separated list of regular expressions that match source UUIDs in the GTID set used to find the binlog position in the MySQL server. Only the GTID ranges that have sources matching none of these exclude patterns will be used. May not be used with gtid.source.includes.

tombstones.on.delete

true

Controls whether a tombstone event should be generated after a delete event.
When true, delete operations are represented by a delete event and a subsequent tombstone event. When false, only a delete event is sent.
Emitting the tombstone event (the default behavior) allows Kafka to completely delete all events that pertain to the given key once the source record is deleted.

message.key.columns

empty string

A semicolon-separated list of regular expressions that match fully-qualified tables and columns to map to a custom message key.
Each item (regular expression) must match the format <fully-qualified table>:<a comma-separated list of columns> representing the custom key.
Fully-qualified tables can be defined as databaseName.tableName.
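
For example, the following setting (a sketch with hypothetical table and column names) uses columns pk1 and pk2 as the key for the inventory.customers table and columns pk3 and pk4 as the key for any purchaseorders table in any database:

message.key.columns=inventory.customers:pk1,pk2;(.*).purchaseorders:pk3,pk4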

2.3.3.1. Advanced MySQL connector properties

The following table describes advanced MySQL connector properties.

Property | Default | Description

connect.keep.alive

true

A boolean value that specifies whether a separate thread should be used to ensure the connection to the MySQL server/cluster is kept alive.

table.ignore.builtin

true

Boolean value that specifies whether built-in system tables should be ignored. This applies regardless of the table whitelist or blacklists. By default system tables are excluded from monitoring, and no events are generated when changes are made to any of the system tables.

database.history.kafka.recovery.poll.interval.ms

100

An integer value that specifies the maximum number of milliseconds the connector should wait during startup/recovery while polling for persisted data. The default is 100ms.

database.history.kafka.recovery.attempts

4

The maximum number of times that the connector should attempt to read persisted history data before the connector recovery fails with an error. The maximum amount of time to wait after receiving no data is recovery.attempts x recovery.poll.interval.ms.

database.history.skip.unparseable.ddl

false

Boolean value that specifies whether the connector should ignore malformed or unknown database statements, or stop processing so that an operator can fix the issue. The safe default is false. Skipping should be used only with care because it can lead to data loss or mangling when the binlog is processed.

database.history.store.only.monitored.tables.ddl

false

Boolean value that specifies whether the connector should record all DDL statements or (when true) only those that are relevant to tables monitored by Debezium (via the filter configuration). The safe default is false. Use this feature only with care, because the missing data might be necessary if the filters are changed.

database.ssl.mode

disabled

Specifies whether to use an encrypted connection. The default is disabled, which specifies the use of an unencrypted connection.

The preferred option establishes an encrypted connection if the server supports secure connections but falls back to an unencrypted connection otherwise.

The required option establishes an encrypted connection but will fail if one cannot be made for any reason.

The verify_ca option behaves like required but additionally it verifies the server TLS certificate against the configured Certificate Authority (CA) certificates and will fail if it doesn’t match any valid CA certificates.

The verify_identity option behaves like verify_ca but additionally verifies that the server certificate matches the host of the remote connection.

binlog.buffer.size

0

The size of a look-ahead buffer used by the binlog reader.
Under specific conditions it is possible that MySQL binlog contains uncommitted data finished by a ROLLBACK statement. Typical examples are using savepoints or mixing temporary and regular table changes in a single transaction.
When the beginning of a transaction is detected, Debezium tries to roll forward the binlog position and find either COMMIT or ROLLBACK so that it can decide whether the changes from the transaction will be streamed. The size of the buffer defines the maximum number of changes in the transaction that Debezium can buffer while searching for transaction boundaries. If a transaction is larger than the buffer, Debezium must rewind and re-read the events that did not fit into the buffer while streaming. A value of 0 disables buffering.
Disabled by default.
Note: This feature should be considered incubating. Feedback from customers is welcome, and it is not expected to be completely polished yet.

snapshot.mode

initial

Specifies the criteria for running a snapshot upon startup of the connector. The default is initial, and specifies that the connector runs a snapshot only when no offsets have been recorded for the logical server name. The when_needed option specifies that the connector runs a snapshot upon startup whenever it deems it necessary (when no offsets are available, or when a previously recorded offset specifies a binlog location or GTID that is not available in the server). The never option specifies that the connector should never use snapshots and that upon first startup with a logical server name the connector should read from the beginning of the binlog; use this with care, as it is only valid when the binlog is guaranteed to contain the entire history of the database. If you do not need the topics to contain a consistent snapshot of the data but only need them to have the changes since the connector was started, you can use the schema_only option, where the connector snapshots only the schemas (not the data).

schema_only_recovery is a recovery option for an existing connector to recover a corrupted or lost database history topic, or to periodically "clean up" a database history topic (which requires infinite retention) that may be growing unexpectedly.

snapshot.locking.mode

minimal

Controls if and how long the connector holds onto the global MySQL read lock (preventing any updates to the database) while it is performing a snapshot. There are three possible values: minimal, extended, and none.

minimal The connector holds the global read lock for just the initial portion of the snapshot while it reads the database schemas and other metadata. The remaining work in a snapshot involves selecting all rows from each table, and this can be done in a consistent fashion using a REPEATABLE READ transaction even when the global read lock is no longer held and while other MySQL clients are updating the database.

extended In some cases where clients are submitting operations that MySQL excludes from REPEATABLE READ semantics, it may be desirable to block all writes for the entire duration of the snapshot. For such cases, use this option.

none Prevents the connector from acquiring any table locks during the snapshot process. This value can be used with all snapshot modes, but it is safe to use if and only if no schema changes occur while the snapshot is taken. Note that tables defined with the MyISAM engine will still be locked despite this property being set, because MyISAM acquires a table lock. This behavior differs from the InnoDB engine, which acquires row-level locks.

snapshot.select.statement.overrides

 

Controls which rows from tables are included in a snapshot.
This property contains a comma-separated list of fully-qualified tables (DB_NAME.TABLE_NAME). SELECT statements for the individual tables are specified in further configuration properties, one for each table, identified by the id snapshot.select.statement.overrides.[DB_NAME].[TABLE_NAME]. The value of those properties is the SELECT statement to use when retrieving data from the specific table during snapshotting. A possible use case for large append-only tables is setting a specific point at which to start (resume) snapshotting, in case a previous snapshot was interrupted.
Note: This setting affects snapshots only. Events captured from the binlog are not affected by it.
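
For example, the following settings (a sketch with a hypothetical customers.orders table) snapshot only the rows of that table whose id is greater than a chosen value, which can be useful for resuming an interrupted snapshot of a large append-only table:

snapshot.select.statement.overrides=customers.orders
snapshot.select.statement.overrides.customers.orders=SELECT * FROM customers.orders WHERE id > 1000000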

min.row.count.to.stream.results

1000

During a snapshot operation, the connector will query each included table to produce a read event for all rows in that table. This parameter determines whether the MySQL connection will pull all results for a table into memory (which is fast but requires large amounts of memory), or whether the results will instead be streamed (can be slower, but will work for very large tables). The value specifies the minimum number of rows a table must contain before the connector will stream results, and defaults to 1,000. Set this parameter to '0' to skip all table size checks and always stream all results during a snapshot.

heartbeat.interval.ms

0

Controls how frequently heartbeat messages are sent.
This property contains an interval in milliseconds that defines how frequently the connector sends heartbeat messages to a heartbeat topic. Set this parameter to 0 to not send heartbeat messages at all.
Disabled by default.

heartbeat.topics.prefix

__debezium-heartbeat

Controls the naming of the topic to which heartbeat messages are sent.
The topic is named according to the pattern <heartbeat.topics.prefix>.<server.name>.
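
For example, with the following settings and a hypothetical database.server.name of fulfillment, the connector sends a heartbeat message every 10 seconds to the topic __debezium-heartbeat.fulfillment:

heartbeat.interval.ms=10000
heartbeat.topics.prefix=__debezium-heartbeat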

database.initial.statements

 

A semicolon-separated list of SQL statements to be executed when a JDBC connection (not the transaction log reading connection) to the database is established. Use a doubled semicolon (';;') to use a semicolon as a character and not as a delimiter.
Note: The connector may establish JDBC connections at its own discretion, so this property should typically be used only for configuration of session parameters, not for executing DML statements.
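
For example, the following setting (a sketch; the session variables shown are illustrative) runs two statements on each new JDBC connection:

database.initial.statements=SET SESSION wait_timeout=2000;SET SESSION innodb_lock_wait_timeout=10

A statement that must itself contain a semicolon would use ';;' so that the semicolon is not treated as a delimiter.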

snapshot.delay.ms

 

An interval in milliseconds that the connector should wait before taking a snapshot after starting up.
This can be used to avoid snapshot interruptions when starting multiple connectors in a cluster, which may cause re-balancing of connectors.

snapshot.fetch.size

 

Specifies the maximum number of rows that should be read in one go from each table while taking a snapshot. The connector will read the table contents in multiple batches of this size.

snapshot.lock.timeout.ms

10000

Positive integer value that specifies the maximum amount of time (in milliseconds) to wait to obtain table locks when performing a snapshot. If table locks cannot be acquired in this time interval, the snapshot will fail. See How the MySQL connector performs database snapshots.

enable.time.adjuster

 

MySQL allows users to insert year values with either 2 or 4 digits. Two-digit values are automatically mapped to the range 1970 - 2069; this mapping is usually done by the database.
Set to true (the default) when Debezium should do the conversion.
Set to false when the conversion is fully delegated to the database.

sanitize.field.names

true when connector configuration explicitly specifies the key.converter or value.converter parameters to use Avro, otherwise defaults to false.

Whether field names will be sanitized to adhere to Avro naming requirements.

2.3.3.2. Pass-through configuration properties

The MySQL connector also supports pass-through configuration properties that are used when creating the Kafka producer and consumer. Specifically, all connector configuration properties that begin with the database.history.producer. prefix are used (without the prefix) when creating the Kafka producer that writes to the database history. All properties that begin with the prefix database.history.consumer. are used (without the prefix) when creating the Kafka consumer that reads the database history upon connector start-up.

For example, the following connector configuration properties can be used to secure connections to the Kafka broker:

database.history.producer.security.protocol=SSL
database.history.producer.ssl.keystore.location=/var/private/ssl/kafka.server.keystore.jks
database.history.producer.ssl.keystore.password=test1234
database.history.producer.ssl.truststore.location=/var/private/ssl/kafka.server.truststore.jks
database.history.producer.ssl.truststore.password=test1234
database.history.producer.ssl.key.password=test1234
database.history.consumer.security.protocol=SSL
database.history.consumer.ssl.keystore.location=/var/private/ssl/kafka.server.keystore.jks
database.history.consumer.ssl.keystore.password=test1234
database.history.consumer.ssl.truststore.location=/var/private/ssl/kafka.server.truststore.jks
database.history.consumer.ssl.truststore.password=test1234
database.history.consumer.ssl.key.password=test1234

2.3.3.3. Pass-through properties for database drivers

In addition to the pass-through properties for the Kafka producer and consumer, there are pass-through properties for database drivers. These properties have the database. prefix. For example, database.tinyInt1isBit=false is passed to the JDBC URL.
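
For example, the following settings (a sketch; both are standard MySQL Connector/J options) are stripped of the database. prefix and passed to the JDBC driver:

database.tinyInt1isBit=false
database.useCursorFetch=true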

2.3.4. MySQL connector monitoring metrics

The Debezium MySQL connector has three metric types in addition to the built-in support for JMX metrics that Zookeeper, Kafka, and Kafka Connect have.

Refer to the monitoring documentation for details of how to expose these metrics via JMX.

2.3.4.1. Snapshot metrics

The MBean is debezium.mysql:type=connector-metrics,context=snapshot,server=<database.server.name>.

Attribute | Type | Description

TotalTableCount

int

The total number of tables that are being included in the snapshot.

RemainingTableCount

int

The number of tables that the snapshot has yet to copy.

HoldingGlobalLock

boolean

Whether the connector currently holds a global or table write lock.

SnapshotRunning

boolean

Whether the snapshot was started.

SnapshotAborted

boolean

Whether the snapshot was aborted.

SnapshotCompleted

boolean

Whether the snapshot completed.

SnapshotDurationInSeconds

long

The total number of seconds that the snapshot has taken so far, even if not complete.

RowsScanned

Map<String, Long>

Map containing the number of rows scanned for each table in the snapshot. Tables are incrementally added to the Map during processing. Updates every 10,000 rows scanned and upon completing a table.

LastEvent

string

The last snapshot event that the connector has read.

MilliSecondsSinceLastEvent

long

The number of milliseconds since the connector has read and processed the most recent event.

TotalNumberOfEventsSeen

long

The total number of events that this connector has seen since last started or reset.

NumberOfEventsFiltered

long

The number of events that have been filtered by whitelist or blacklist filtering rules configured on the connector.

MonitoredTables

string[]

The list of tables that are monitored by the connector.

QueueTotalCapacity

int

The length of the queue used to pass events between snapshot reader and the main Kafka Connect loop.

QueueRemainingCapacity

int

The free capacity of the queue used to pass events between snapshot reader and the main Kafka Connect loop.

2.3.4.2. Binlog metrics

The MBean is debezium.mysql:type=connector-metrics,context=binlog,server=<database.server.name>.

Note

The transaction-related attributes are only available if binlog event buffering is enabled. See binlog.buffer.size in the advanced connector configuration properties for more details.

Attribute | Type | Description

Connected

boolean

Flag that denotes whether the connector is currently connected to the MySQL server.

BinlogFilename

string

The name of the binlog file that the connector has most recently read.

BinlogPosition

long

The most recent position (in bytes) within the binlog that the connector has read.

IsGtidModeEnabled

boolean

Flag that denotes whether the connector is currently tracking GTIDs from MySQL server.

GtidSet

string

The string representation of the most recent GTID set seen by the connector when reading the binlog.

LastEvent

string

The last binlog event that the connector has read.

SecondsSinceLastEvent (obsolete)

long

The number of seconds since the connector has read and processed the most recent event.

SecondsBehindMaster (obsolete)

long

The number of seconds between the last event’s MySQL timestamp and the connector processing it. The values will incorporate any differences between the clocks on the machines where the MySQL server and the MySQL connector are running.

MilliSecondsBehindSource

long

The number of milliseconds between the last event’s MySQL timestamp and the connector processing it. The values will incorporate any differences between the clocks on the machines where the MySQL server and the MySQL connector are running.

TotalNumberOfEventsSeen

long

The total number of events that this connector has seen since last started or reset.

NumberOfSkippedEvents

long

The number of events that have been skipped by the MySQL connector. Typically events are skipped due to a malformed or unparseable event from MySQL’s binlog.

NumberOfEventsFiltered

long

The number of events that have been filtered by whitelist or blacklist filtering rules configured on the connector.

NumberOfDisconnects

long

The number of disconnects by the MySQL connector.

SourceEventPosition

map<string, string>

The coordinates of the last received event.

LastTransactionId

string

Transaction identifier of the last processed transaction.

LastEvent

string

The last binlog event that the connector has read.

MilliSecondsSinceLastEvent

long

The number of milliseconds since the connector has read and processed the most recent event.

MonitoredTables

string[]

The list of tables that are monitored by Debezium.

QueueTotalCapacity

int

The length of the queue used to pass events between binlog reader and the main Kafka Connect loop.

QueueRemainingCapacity

int

The free capacity of the queue used to pass events between binlog reader and the main Kafka Connect loop.

NumberOfCommittedTransactions

long

The number of processed transactions that were committed.

NumberOfRolledBackTransactions

long

The number of processed transactions that were rolled back and not streamed.

NumberOfNotWellFormedTransactions

long

The number of transactions that have not conformed to the expected protocol of BEGIN + COMMIT/ROLLBACK. This should be 0 under normal conditions.

NumberOfLargeTransactions

long

The number of transactions that have not fit into the look-ahead buffer. For optimal performance, this should be significantly smaller than NumberOfCommittedTransactions and NumberOfRolledBackTransactions.

2.3.4.3. Schema history metrics

The MBean is debezium.mysql:type=connector-metrics,context=schema-history,server=<database.server.name>.

Attribute | Type | Description

Status

string

One of STOPPED, RECOVERING (recovering history from the storage), or RUNNING, describing the state of the database history.

RecoveryStartTime

long

The time in epoch seconds at which recovery started.

ChangesRecovered

long

The number of changes that were read during the recovery phase.

ChangesApplied

long

The total number of schema changes applied during recovery and runtime.

MilliSecondsSinceLastRecoveredChange

long

The number of milliseconds that elapsed since the last change was recovered from the history store.

MilliSecondsSinceLastAppliedChange

long

The number of milliseconds that elapsed since the last change was applied.

LastRecoveredChange

string

The string representation of the last change recovered from the history store.

LastAppliedChange

string

The string representation of the last applied change.

2.4. MySQL connector common issues

2.4.1. Configuration and startup errors

The Debezium MySQL connector fails, reports an error, and stops running when the following startup errors occur:

  • The connector’s configuration is invalid.
  • The connector cannot connect to the MySQL server using the specified connectivity parameters.
  • The connector is attempting to restart at a position in the binlog where MySQL no longer has the history available.

If you receive any of these errors, the error message provides more details and, where possible, contains a suggested workaround.

2.4.2. MySQL is unavailable

If your MySQL server becomes unavailable, the Debezium MySQL connector fails with an error and the connector stops. You simply need to restart the connector when the server is available.

2.4.2.1. Using GTIDs

If you have GTIDs enabled and a highly available MySQL cluster, restart the connector immediately as the connector will simply connect to a different MySQL server in the cluster, find the location in the server’s binlog that represents the last transaction, and begin reading the new server’s binlog from that specific location.

2.4.2.2. Not Using GTIDs

If you do not have GTIDs enabled, the connector only records the binlog position of the MySQL server to which it was connected. In order to restart from the correct binlog position, you must reconnect to that specific server.

2.4.3. Kafka Connect stops

There are three scenarios that cause some issues when Kafka Connect stops:

2.4.3.1. Kafka Connect stops gracefully

When Kafka Connect stops gracefully, there is only a short delay while the Debezium MySQL connector tasks are stopped and restarted on new Kafka Connect processes.

2.4.3.2. Kafka Connect process crashes

If Kafka Connect crashes, the process stops and any Debezium MySQL connector tasks terminate without their most recently-processed offsets being recorded. In distributed mode, Kafka Connect restarts the connector tasks on other processes. However, the MySQL connector resumes from the last offset recorded by the earlier processes. This means that the replacement tasks may generate some of the same events processed prior to the crash, creating duplicate events.

Tip

Each change event message includes source-specific information about:

  • the event origin
  • the MySQL server’s event time
  • the binlog filename and position
  • GTIDs (if used)

2.4.3.3. Kafka becomes unavailable

The Kafka Connect framework records Debezium change events in Kafka using the Kafka producer API. If the Kafka brokers become unavailable, the Debezium MySQL connector pauses until the connection is reestablished and the connector resumes where it last left off.

2.4.4. MySQL purges binlog files

If the Debezium MySQL connector stops for too long, the MySQL server purges older binlog files and the connector’s last position may be lost. When the connector is restarted, the MySQL server no longer has the starting point and the connector performs another initial snapshot. If the snapshot is disabled, the connector fails with an error.

Tip

See How the MySQL connector performs database snapshots for more information on initial snapshots.
