Debezium User Guide
For use with Debezium 1.2
Abstract
Preface
Debezium is a set of distributed services that capture row-level changes in your databases so that your applications can see and respond to those changes. Debezium records all row-level changes committed to each database table. Each application reads the transaction logs of interest to view all operations in the order in which they occurred.
This guide provides details about using Debezium connectors:
- Chapter 1, High level overview of Debezium
- Chapter 2, Debezium connector for MySQL
- Chapter 3, Debezium connector for PostgreSQL
- Chapter 4, Debezium connector for MongoDB
- Chapter 5, Debezium connector for SQL Server
- Chapter 6, Debezium connector for Db2
- Chapter 7, Monitoring Debezium
- Chapter 8, Debezium logging
- Chapter 9, Configuring Debezium connectors for your application
Chapter 1. High level overview of Debezium
Debezium is a set of distributed services that capture changes in your databases. Your applications can consume and respond to those changes. Debezium captures each row-level change in each database table in a change event record and streams these records to Kafka topics. Applications read these streams, which provide the change event records in the same order in which they were generated.
More details are in the following sections:
1.1. Debezium features
Debezium is a set of source connectors for Apache Kafka Connect, ingesting changes from different databases using change data capture (CDC). Unlike other approaches such as polling or dual writes, log-based CDC as implemented by Debezium:
- Makes sure that all data changes are captured
- Produces change events with a very low delay (for example, ms range for MySQL or Postgres) while avoiding increased CPU usage of frequent polling
- Requires no changes to your data model (such as "Last Updated" column)
- Can capture deletes
- Can capture old record state and further metadata such as transaction id and causing query (depending on the database’s capabilities and configuration)
To learn more about the advantages of log-based CDC, refer to this blog post.
The actual change data capture feature of Debezium is complemented by a range of related capabilities and options:
- Snapshots: optionally, an initial snapshot of a database’s current state can be taken if a connector is started and not all logs still exist (typically the case when the database has been running for some time and has discarded any transaction logs that are no longer needed for transaction recovery or replication); different modes exist for snapshotting, so refer to the docs of the specific connector you’re using to learn more
- Filters: the set of captured schemas, tables and columns can be configured via whitelist/blacklist filters
- Masking: the values from specific columns can be masked, for example, for sensitive data
- Monitoring: most connectors can be monitored using JMX
- Different ready-to-use message transformations: for example, for message routing, extraction of new record state (relational connectors, MongoDB) and routing of events from a transactional outbox table
Refer to the connector documentation for a list of all supported databases and detailed information about the features and configuration options of each connector.
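For instance, filters and masking are plain connector configuration options. The following sketch is illustrative only; the table and column names and the mask length are hypothetical:
table.whitelist=inventory.customers
column.blacklist=inventory.customers.email
column.mask.with.12.chars=inventory.customers.credit_card_number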
1.2. Description of Debezium architecture
You deploy Debezium by means of Apache Kafka Connect. Kafka Connect is a framework and runtime for implementing and operating:
- Source connectors such as Debezium that send records into Kafka
- Sink connectors that propagate records from Kafka topics to other systems
The following image shows the architecture of a change data capture pipeline based on Debezium:
As shown in the image, the Debezium connectors for MySQL and PostgreSQL are deployed to capture changes to these two types of databases. Each Debezium connector establishes a connection to its source database:
- The MySQL connector uses a client library for accessing the binlog.
- The PostgreSQL connector reads from a logical replication stream.
Kafka Connect operates as a separate service alongside the Kafka broker.
By default, changes from one database table are written to a Kafka topic whose name corresponds to the table name. If needed, you can adjust the destination topic name by configuring Debezium’s topic routing transformation. For example, you can:
- Route records to a topic whose name is different from the table’s name
- Stream change event records for multiple tables into a single topic
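For example, here is a sketch of the topic routing SMT configured on the connector; the regular expression and replacement shown are hypothetical and would be adapted to your topic names:
transforms=Reroute
transforms.Reroute.type=io.debezium.transforms.ByLogicalTableRouter
transforms.Reroute.topic.regex=(.*)customers_shard(.*)
transforms.Reroute.topic.replacement=$1customers_all_shards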
After change event records are in Apache Kafka, different connectors in the Kafka Connect eco-system can stream the records to other systems and databases such as Elasticsearch, data warehouses and analytics systems, or caches such as Infinispan. Depending on the chosen sink connector, you might need to configure Debezium’s new record state extraction transformation. This Kafka Connect SMT propagates the after
structure from Debezium’s change event to the sink connector. This is in place of the verbose change event record that is propagated by default.
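A minimal sketch of that transformation, assuming the ExtractNewRecordState SMT class shipped with Debezium and applying it on the connector side:
transforms=unwrap
transforms.unwrap.type=io.debezium.transforms.ExtractNewRecordState
transforms.unwrap.drop.tombstones=false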
Chapter 2. Debezium connector for MySQL
MySQL has a binary log (binlog) that records all operations in the order in which they are committed to the database. This includes changes to table schemas and the data within tables. MySQL uses the binlog for replication and recovery.
The MySQL connector reads the binlog and produces change events for row-level INSERT
, UPDATE
, and DELETE
operations and records the change events in a Kafka topic. Client applications read those Kafka topics.
Because MySQL is typically set up to purge binlogs after a specified period of time, the MySQL connector performs an initial consistent snapshot of each of your databases. The MySQL connector then reads the binlog from the point at which the snapshot was made.
2.1. Overview of how the MySQL connector works
The Debezium MySQL connector tracks the structure of the tables, performs snapshots, transforms binlog events into Debezium change events, and keeps track of where those events are recorded in Kafka.
2.1.1. How the MySQL connector uses database schemas
When a database client queries a database, the client uses the database’s current schema. However, the database schema can be changed at any time, which means that the connector must be able to identify what the schema was at the time each insert, update, or delete operation was recorded. Also, a connector cannot just use the current schema because the connector might be processing events that are relatively old and may have been recorded before the tables' schemas were changed.
To handle this, MySQL includes in the binlog the row-level changes to the data and the DDL statements that are applied to the database. As the connector reads the binlog and comes across these DDL statements, it parses them and updates an in-memory representation of each table’s schema. The connector uses this schema representation to identify the structure of the tables at the time of each insert, update, or delete and to produce the appropriate change event. In a separate database history Kafka topic, the connector also records all DDL statements along with the position in the binlog where each DDL statement appeared.
When the connector restarts after having crashed or been stopped gracefully, the connector starts reading the binlog from a specific position, that is, from a specific point in time. The connector rebuilds the table structures that existed at this point in time by reading the database history Kafka topic and parsing all DDL statements up to the point in the binlog where the connector is starting.
This database history topic is for connector use only. The connector can optionally generate schema change events on a different topic that is intended for consumer applications. This is described in how the MySQL connector exposes schema changes.
When the MySQL connector captures changes in a table to which a schema change tool such as gh-ost or pt-online-schema-change is applied, the helper tables created during the migration process must be included among the whitelisted tables (see the configuration sketch that follows).
If downstream systems do not need the messages generated by these temporary tables, a simple message transform can be written and applied to filter them out.
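A sketch of such a whitelist; the helper table names follow the naming pattern that gh-ost typically uses and are hypothetical here:
table.whitelist=inventory.customers,inventory._customers_gho,inventory._customers_ghc,inventory._customers_del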
For information about topic naming conventions, see MySQL connector and Kafka topics.
2.1.2. How the MySQL connector performs database snapshots
When your Debezium MySQL connector is first started, it performs an initial consistent snapshot of your database. The following flow describes how this snapshot is completed.
This is the default snapshot mode, which is set as initial in the snapshot.mode property. For other snapshot modes, see the MySQL connector configuration properties.
- The connector…
Step | Action |
---|---|
1 | Grabs a global read lock that blocks writes by other database clients. Note The snapshot itself does not prevent other clients from applying DDL, which might interfere with the connector’s attempt to read the binlog position and table schemas. The global read lock is held only while the binlog position is read, and is released in a later step. |
2 | Starts a transaction with repeatable read semantics to ensure that all subsequent reads within the transaction are done against the consistent snapshot. |
3 | Reads the current binlog position. |
4 | Reads the schema of the databases and tables allowed by the connector’s configuration. |
5 | Releases the global read lock. This now allows other database clients to write to the database. |
6 | Writes the DDL changes to the schema change topic, including all necessary DDL statements. Note This happens if applicable. |
7 | Scans the database tables and generates events for each row on the relevant table-specific Kafka topics. |
8 | Commits the transaction. |
9 | Records the completed snapshot in the connector offsets. |
2.1.2.1. What happens if the connector fails?
If the connector fails, stops, or is rebalanced while making the initial snapshot, the connector creates a new snapshot after it restarts. After that initial snapshot is completed, the Debezium MySQL connector restarts from the same position in the binlog so it does not miss any updates.
If the connector stops for long enough, MySQL could purge old binlog files and the connector’s position would be lost. If the position is lost, the connector reverts to the initial snapshot for its starting position. For more tips on troubleshooting the Debezium MySQL connector, see MySQL connector common issues.
2.1.2.2. What if Global Read Locks are not allowed?
Some environments do not allow a global read lock. If the Debezium MySQL connector detects that global read locks are not permitted, the connector uses table-level locks instead and performs a snapshot with this method.
The user must have LOCK TABLES privileges.
- The connector…
Step | Action |
---|---|
1 | Starts a transaction with repeatable read semantics to ensure that all subsequent reads within the transaction are done against the consistent snapshot. |
2 | Reads and filters the names of the databases and tables. |
3 | Reads the current binlog position. |
4 | Reads the schema of the databases and tables allowed by the connector’s configuration. |
5 | Writes the DDL changes to the schema change topic, including all necessary DDL statements. Note This happens if applicable. |
6 | Scans the database tables and generates events for each row on the relevant table-specific Kafka topics. |
7 | Commits the transaction. |
8 | Releases the table-level locks. |
9 | Records the completed snapshot in the connector offsets. |
2.1.3. How the MySQL connector exposes schema changes
You can configure the Debezium MySQL connector to produce schema change events that include all DDL statements applied to databases in the MySQL server. The connector writes all of these events to a Kafka topic named <serverName>, where serverName is the logical server name that is specified in the database.server.name configuration property.
If you choose to use schema change events, use the schema change topic and do not consume the database history topic.
It is vital that there is a global order of the events in the database schema history. Therefore, the database history topic must not be partitioned. This means that a partition count of 1 must be specified when creating this topic. When relying on auto topic creation, make sure that Kafka’s num.partitions
configuration option (the default number of partitions) is set to 1
.
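A connector configuration sketch that enables the schema change topic; the server name, broker address, and history topic name below are placeholders:
include.schema.changes=true
database.server.name=mysql-server-1
database.history.kafka.bootstrap.servers=kafka:9092
database.history.kafka.topic=schema-changes.inventory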
2.1.3.1. Schema change topic structure
Each message that is written to the schema change topic contains a message key which includes the name of the connected database used when applying DDL statements:
{ "schema": { "type": "struct", "name": "io.debezium.connector.mysql.SchemaChangeKey", "optional": false, "fields": [ { "field": "databaseName", "type": "string", "optional": false } ] }, "payload": { "databaseName": "inventory" } }
The schema change event message value contains a structure that includes the DDL statements, the database to which the statements were applied, and the position in the binlog where the statements appeared:
{ "schema": { "type": "struct", "name": "io.debezium.connector.mysql.SchemaChangeValue", "optional": false, "fields": [ { "field": "databaseName", "type": "string", "optional": false }, { "field": "ddl", "type": "string", "optional": false }, { "field": "source", "type": "struct", "name": "io.debezium.connector.mysql.Source", "optional": false, "fields": [ { "type": "string", "optional": true, "field": "version" }, { "type": "string", "optional": false, "field": "name" }, { "type": "int64", "optional": false, "field": "server_id" }, { "type": "int64", "optional": false, "field": "ts_ms" }, { "type": "string", "optional": true, "field": "gtid" }, { "type": "string", "optional": false, "field": "file" }, { "type": "int64", "optional": false, "field": "pos" }, { "type": "int32", "optional": false, "field": "row" }, { "type": "boolean", "optional": true, "default": false, "field": "snapshot" }, { "type": "int64", "optional": true, "field": "thread" }, { "type": "string", "optional": true, "field": "db" }, { "type": "string", "optional": true, "field": "table" }, { "type": "string", "optional": true, "field": "query" } ] } ] }, "payload": { "databaseName": "inventory", "ddl": "CREATE TABLE products ( id INTEGER NOT NULL AUTO_INCREMENT PRIMARY KEY, name VARCHAR(255) NOT NULL, description VARCHAR(512), weight FLOAT ); ALTER TABLE products AUTO_INCREMENT = 101;", "source" : { "version": "1.2.4.Final", "name": "mysql-server-1", "server_id": 0, "ts_ms": 0, "gtid": null, "file": "mysql-bin.000003", "pos": 154, "row": 0, "snapshot": true, "thread": null, "db": null, "table": null, "query": null } } }
2.1.3.2. Important tips about the schema change topic
The ddl field may contain multiple DDL statements. Every statement applies to the database named in the databaseName field, and the statements appear in the same order in which they were applied in the database. The source field is structured exactly as a standard data change event written to table-specific topics. This field is useful for correlating events on different topics.
.... "payload": { "databaseName": "inventory", "ddl": "CREATE TABLE products ( id INTEGER NOT NULL AUTO_INCREMENT PRIMARY KEY,... "source" : { .... } } ....
- What if a client submits DDL statements to multiple databases?
- If MySQL applies them atomically, the connector takes the DDL statements in order, groups them by database, and creates a schema change event for each group.
- If MySQL applies them individually, the connector creates a separate schema change event for each statement.
Additional resources
- If you do not use the schema change topics detailed here, check out the database history topic.
2.1.4. MySQL connector events
The Debezium MySQL connector generates a data change event for each row-level INSERT
, UPDATE
, and DELETE
operation. Each event contains a key and a value. The structure of the key and the value depends on the table that was changed.
Debezium and Kafka Connect are designed around continuous streams of event messages. However, the structure of these events may change over time, which can be difficult for consumers to handle. To address this, each event contains the schema for its content or, if you are using a schema registry, a schema ID that a consumer can use to obtain the schema from the registry. This makes each event self-contained.
The following skeleton JSON shows the basic four parts of a change event. However, how you configure the Kafka Connect converter that you choose to use in your application determines the representation of these four parts in change events. A schema field is in a change event only when you configure the converter to produce it. Likewise, the event key and event payload are in a change event only if you configure a converter to produce them. If you use the JSON converter and you configure it to produce all four basic change event parts, change events have this structure:
{ "schema": { 1 ... }, "payload": { 2 ... }, "schema": { 3 ... }, "payload": { 4 ... }, }
Item | Field name | Description |
---|---|---|
1 |
|
The first |
2 |
|
The first |
3 |
|
The second |
4 |
|
The second |
By default, the connector streams change event records to topics with names that are the same as the event’s originating table. See MySQL connector and Kafka topics.
The MySQL connector ensures that all Kafka Connect schema names adhere to the Avro schema name format. This means that the logical server name must start with a Latin letter or an underscore, that is, a-z, A-Z, or _. Each remaining character in the logical server name and each character in the database and table names must be a Latin letter, a digit, or an underscore, that is, a-z, A-Z, 0-9, or _. If there is an invalid character it is replaced with an underscore character.
This can lead to unexpected conflicts if the logical server name, a database name, or a table name contains invalid characters, and the only characters that distinguish names from one another are invalid and thus replaced with underscores.
2.1.4.1. Change event keys
A change event’s key contains the schema for the changed table’s key and the changed row’s actual key. Both the schema and its corresponding payload contain a field for each column in the changed table’s PRIMARY KEY
(or unique constraint) at the time the connector created the event.
Consider the following customers
table, which is followed by an example of a change event key for this table.
Example table
CREATE TABLE customers ( id INTEGER NOT NULL AUTO_INCREMENT PRIMARY KEY, first_name VARCHAR(255) NOT NULL, last_name VARCHAR(255) NOT NULL, email VARCHAR(255) NOT NULL UNIQUE KEY ) AUTO_INCREMENT=1001;
Example change event key
Every change event that captures a change to the customers
table has the same event key schema. For as long as the customers
table has the previous definition, every change event that captures a change to the customers
table has the following key structure. In JSON, it looks like this:
{ "schema": { 1 "type": "struct", "name": "mysql-server-1.inventory.customers.Key", 2 "optional": false, 3 "fields": [ 4 { "field": "id", "type": "int32", "optional": false } ] }, "payload": { 5 "id": 1001 } }
Item | Field name | Description |
---|---|---|
1 |
|
The schema portion of the key specifies a Kafka Connect schema that describes what is in the key’s |
2 |
|
Name of the schema that defines the structure of the key’s payload. This schema describes the structure of the primary key for the table that was changed. Key schema names have the format connector-name.database-name.table-name.
|
3 |
|
Indicates whether the event key must contain a value in its |
4 |
|
Specifies each field that is expected in the |
5 |
|
Contains the key for the row for which this change event was generated. In this example, the key contains a single id field whose value is 1001. |
2.1.4.2. Change event values
The value in a change event is a bit more complicated than the key. Like the key, the value has a schema
section and a payload
section. The schema
section contains the schema that describes the Envelope
structure of the payload
section, including its nested fields. Change events for operations that create, update or delete data all have a value payload with an envelope structure.
Consider the same sample table that was used to show an example of a change event key:
CREATE TABLE customers ( id INTEGER NOT NULL AUTO_INCREMENT PRIMARY KEY, first_name VARCHAR(255) NOT NULL, last_name VARCHAR(255) NOT NULL, email VARCHAR(255) NOT NULL UNIQUE KEY ) AUTO_INCREMENT=1001;
The value portion of a change event for a change to this table is described for each event type:
2.1.4.2.1. create events
The following example shows the value portion of a change event that the connector generates for an operation that creates data in the customers
table:
{ "schema": { 1 "type": "struct", "fields": [ { "type": "struct", "fields": [ { "type": "int32", "optional": false, "field": "id" }, { "type": "string", "optional": false, "field": "first_name" }, { "type": "string", "optional": false, "field": "last_name" }, { "type": "string", "optional": false, "field": "email" } ], "optional": true, "name": "mysql-server-1.inventory.customers.Value", 2 "field": "before" }, { "type": "struct", "fields": [ { "type": "int32", "optional": false, "field": "id" }, { "type": "string", "optional": false, "field": "first_name" }, { "type": "string", "optional": false, "field": "last_name" }, { "type": "string", "optional": false, "field": "email" } ], "optional": true, "name": "mysql-server-1.inventory.customers.Value", "field": "after" }, { "type": "struct", "fields": [ { "type": "string", "optional": false, "field": "version" }, { "type": "string", "optional": false, "field": "connector" }, { "type": "string", "optional": false, "field": "name" }, { "type": "int64", "optional": false, "field": "ts_ms" }, { "type": "boolean", "optional": true, "default": false, "field": "snapshot" }, { "type": "string", "optional": false, "field": "db" }, { "type": "string", "optional": true, "field": "table" }, { "type": "int64", "optional": false, "field": "server_id" }, { "type": "string", "optional": true, "field": "gtid" }, { "type": "string", "optional": false, "field": "file" }, { "type": "int64", "optional": false, "field": "pos" }, { "type": "int32", "optional": false, "field": "row" }, { "type": "int64", "optional": true, "field": "thread" }, { "type": "string", "optional": true, "field": "query" } ], "optional": false, "name": "io.debezium.connector.mysql.Source", 3 "field": "source" }, { "type": "string", "optional": false, "field": "op" }, { "type": "int64", "optional": true, "field": "ts_ms" } ], "optional": false, "name": "mysql-server-1.inventory.customers.Envelope" 4 }, "payload": { 5 "op": "c", 6 "ts_ms": 1465491411815, 7 "before": null, 8 "after": { 9 "id": 1004, "first_name": "Anne", "last_name": "Kretchmar", "email": "annek@noanswer.org" }, "source": { 10 "version": "1.2.4.Final", "connector": "mysql", "name": "mysql-server-1", "ts_ms": 0, "snapshot": false, "db": "inventory", "table": "customers", "server_id": 0, "gtid": null, "file": "mysql-bin.000003", "pos": 154, "row": 0, "thread": 7, "query": "INSERT INTO customers (first_name, last_name, email) VALUES ('Anne', 'Kretchmar', 'annek@noanswer.org')" } } }
Item | Field name | Description |
---|---|---|
1 |
| The value’s schema, which describes the structure of the value’s payload. A change event’s value schema is the same in every change event that the connector generates for a particular table. |
2 |
|
In the |
3 |
|
|
4 |
|
|
5 |
|
The value’s actual data. This is the information that the change event is providing. |
6 |
|
Mandatory string that describes the type of operation that caused the connector to generate the event. In this example,
|
7 |
|
Optional field that displays the time at which the connector processed the event. The time is based on the system clock in the JVM running the Kafka Connect task. |
8 |
|
An optional field that specifies the state of the row before the event occurred. When the |
9 |
|
An optional field that specifies the state of the row after the event occurred. In this example, the |
10 |
| Mandatory field that describes the source metadata for the event. This field contains information that you can use to compare this event with other events, with regard to the origin of the events, the order in which the events occurred, and whether events were part of the same transaction. The source metadata includes:
If the |
2.1.4.2.2. update events
The value of a change event for an update in the sample customers
table has the same schema as a create event for that table. Likewise, the event value’s payload has the same structure. However, the event value payload contains different values in an update event. Here is an example of a change event value in an event that the connector generates for an update in the customers
table:
{ "schema": { ... }, "payload": { "before": { 1 "id": 1004, "first_name": "Anne", "last_name": "Kretchmar", "email": "annek@noanswer.org" }, "after": { 2 "id": 1004, "first_name": "Anne Marie", "last_name": "Kretchmar", "email": "annek@noanswer.org" }, "source": { 3 "version": "1.2.4.Final", "name": "mysql-server-1", "connector": "mysql", "name": "mysql-server-1", "ts_ms": 1465581, "snapshot": false, "db": "inventory", "table": "customers", "server_id": 223344, "gtid": null, "file": "mysql-bin.000003", "pos": 484, "row": 0, "thread": 7, "query": "UPDATE customers SET first_name='Anne Marie' WHERE id=1004" }, "op": "u", 4 "ts_ms": 1465581029523 5 } }
Item | Field name | Description |
---|---|---|
1 |
|
An optional field that specifies the state of the row before the event occurred. In an update event value, the |
2 |
|
An optional field that specifies the state of the row after the event occurred. You can compare the |
3 |
|
Mandatory field that describes the source metadata for the event. The
If the |
4 |
|
Mandatory string that describes the type of operation. In an update event value, the |
5 |
|
Optional field that displays the time at which the connector processed the event. The time is based on the system clock in the JVM running the Kafka Connect task. |
Updating the columns for a row’s primary/unique key changes the value of the row’s key. When a key changes, Debezium outputs three events: a DELETE
event and a tombstone event with the old key for the row, followed by an event with the new key for the row. Details are in the next section.
2.1.4.2.3. Primary key updates
An UPDATE
operation that changes a row’s primary key field(s) is known as a primary key change. For a primary key change, in place of an UPDATE
event record, the connector emits a DELETE
event record for the old key and a CREATE
event record for the new (updated) key. These events have the usual structure and content, and in addition, each one has a message header related to the primary key change:
-
The
DELETE
event record has__debezium.newkey
as a message header. The value of this header is the new primary key for the updated row. -
The
CREATE
event record has__debezium.oldkey
as a message header. The value of this header is the previous (old) primary key that the updated row had.
2.1.4.2.4. delete events
The value in a delete change event has the same schema
portion as create and update events for the same table. The payload
portion in a delete event for the sample customers
table looks like this:
{ "schema": { ... }, "payload": { "before": { 1 "id": 1004, "first_name": "Anne Marie", "last_name": "Kretchmar", "email": "annek@noanswer.org" }, "after": null, 2 "source": { 3 "version": "1.2.4.Final", "connector": "mysql", "name": "mysql-server-1", "ts_ms": 1465581, "snapshot": false, "db": "inventory", "table": "customers", "server_id": 223344, "gtid": null, "file": "mysql-bin.000003", "pos": 805, "row": 0, "thread": 7, "query": "DELETE FROM customers WHERE id=1004" }, "op": "d", 4 "ts_ms": 1465581902461 5 } }
Item | Field name | Description |
---|---|---|
1 |
|
Optional field that specifies the state of the row before the event occurred. In a delete event value, the |
2 |
|
Optional field that specifies the state of the row after the event occurred. In a delete event value, the |
3 |
|
Mandatory field that describes the source metadata for the event. In a delete event value, the
If the |
4 |
|
Mandatory string that describes the type of operation. The |
5 |
|
Optional field that displays the time at which the connector processed the event. The time is based on the system clock in the JVM running the Kafka Connect task. |
A delete change event record provides a consumer with the information it needs to process the removal of this row. The old values are included because some consumers might require them in order to properly handle the removal.
MySQL connector events are designed to work with Kafka log compaction. Log compaction enables removal of some older messages as long as at least the most recent message for every key is kept. This lets Kafka reclaim storage space while ensuring that the topic contains a complete data set and can be used for reloading key-based state.
Tombstone events
When a row is deleted, the delete event value still works with log compaction, because Kafka can remove all earlier messages that have that same key. However, for Kafka to remove all messages that have that same key, the message value must be null
. To make this possible, after Debezium’s MySQL connector emits a delete event, the connector emits a special tombstone event that has the same key but a null
value.
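Tombstone emission is controlled by the tombstones.on.delete connector property (enabled by default), while log compaction is enabled on the Kafka topic itself; a minimal sketch:
tombstones.on.delete=true
# Set on the Kafka topic, not in the connector configuration:
# cleanup.policy=compact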
2.1.5. How the MySQL connector maps data types
The Debezium MySQL connector represents changes to rows with events that are structured like the table in which the row exists. The event contains a field for each column value. The MySQL data type of that column dictates how the value is represented in the event.
Columns that store strings are defined in MySQL with a character set and collation. The MySQL connector uses the column’s character set when reading the binary representation of the column values in the binlog events. The following table shows how the connector maps the MySQL data types to both literal and semantic types.
- literal type : how the value is represented using Kafka Connect schema types
- semantic type : how the Kafka Connect schema captures the meaning of the field (schema name)
MySQL type | Literal type | Semantic type |
---|---|---|
|
| n/a |
|
| n/a |
|
|
numBytes = n/8 + (n%8== 0 ? 0 : 1) |
|
| n/a |
|
| n/a |
|
| n/a |
|
| n/a |
|
| n/a |
|
| n/a |
|
| n/a |
|
| n/a |
|
| n/a |
|
| n/a |
|
| n/a Either the raw bytes (the default), a base64-encoded String, or a hex-encoded String, based on the binary handling mode setting |
|
| n/a Either the raw bytes (the default), a base64-encoded String, or a hex-encoded String, based on the binary handling mode setting |
|
| n/a Either the raw bytes (the default), a base64-encoded String, or a hex-encoded String, based on the binary handling mode setting |
|
| n/a |
|
| n/a Either the raw bytes (the default), a base64-encoded String, or a hex-encoded String, based on the binary handling mode setting |
|
| n/a |
|
| n/a Either the raw bytes (the default), a base64-encoded String, or a hex-encoded String, based on the binary handling mode setting |
|
| n/a |
|
| n/a Either the raw bytes (the default), a base64-encoded String, or a hex-encoded String, based on the binary handling mode setting |
|
| n/a |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
2.1.5.1. Temporal values
Excluding the TIMESTAMP
data type, MySQL temporal types depend on the value of the time.precision.mode
configuration property. For TIMESTAMP
columns whose default value is specified as CURRENT_TIMESTAMP
or NOW
, the value 1970-01-01 00:00:00
is used as the default value in the Kafka Connect schema.
MySQL allows zero-values for DATE, DATETIME, and TIMESTAMP columns because zero-values are sometimes preferred over null values. The MySQL connector represents zero-values as null values when the column definition allows null values, or as the epoch day when the column does not allow null values.
Temporal values without time zones
The DATETIME type represents a local date and time such as "2018-01-13 09:48:27". As you can see, there is no time zone information. Such columns are converted into epoch milliseconds or microseconds, based on the column’s precision, by using UTC. The TIMESTAMP type represents a timestamp without time zone information. It is converted by MySQL from the server (or session’s) current time zone into UTC when writing, and from UTC back into the server (or session’s) current time zone when reading the value. For example:
-
DATETIME
with a value of2018-06-20 06:37:03
becomes1529476623000
. -
TIMESTAMP
with a value of2018-06-20 06:37:03
becomes2018-06-20T13:37:03Z
.
Such TIMESTAMP columns are converted into an equivalent io.debezium.time.ZonedTimestamp in UTC, based on the server (or session’s) current time zone. The time zone is queried from the server by default. If this fails, the time zone must be specified explicitly with the database.serverTimezone connector configuration property. For example, if the database’s time zone (either globally or as configured for the connector by means of the database.serverTimezone property) is "America/Los_Angeles", the TIMESTAMP value "2018-06-20 06:37:03" is represented by a ZonedTimestamp with the value "2018-06-20T13:37:03Z".
Note that the time zone of the JVM running Kafka Connect and Debezium does not affect these conversions.
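If the time zone cannot be read from the server, set it explicitly in the connector configuration; the zone shown here is only an example:
database.serverTimezone=America/Los_Angeles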
More details about properties related to temporal values are in the documentation for MySQL connector configuration properties.
- time.precision.mode=adaptive_time_microseconds (default)
The MySQL connector determines the literal type and semantic type based on the column’s data type definition so that events represent exactly the values in the database. All time fields are in microseconds. Only positive TIME field values in the range of 00:00:00.000000 to 23:59:59.999999 can be captured correctly.
MySQL type | Literal type | Semantic type |
---|---|---|
DATE | INT32 | io.debezium.time.Date Represents the number of days since the epoch. |
TIME[(M)] | INT64 | io.debezium.time.MicroTime Represents the time value in microseconds and does not include time zone information. MySQL allows M to be in the range of 0-6. |
DATETIME, DATETIME(0), DATETIME(1), DATETIME(2), DATETIME(3) | INT64 | io.debezium.time.Timestamp Represents the number of milliseconds since the epoch and does not include time zone information. |
DATETIME(4), DATETIME(5), DATETIME(6) | INT64 | io.debezium.time.MicroTimestamp Represents the number of microseconds since the epoch and does not include time zone information. |
- time.precision.mode=connect
The MySQL connector uses the predefined Kafka Connect logical types. This approach is less precise than the default approach, and the events lose precision if the database column has a fractional second precision value greater than 3. Only values in the range of 00:00:00.000 to 23:59:59.999 can be handled. Set time.precision.mode=connect only if you can ensure that the TIME values in your tables never exceed the supported ranges. The connect setting is expected to be removed in a future version of Debezium.
MySQL type | Literal type | Semantic type |
---|---|---|
DATE | INT32 | org.apache.kafka.connect.data.Date Represents the number of days since the epoch. |
TIME[(M)] | INT64 | org.apache.kafka.connect.data.Time Represents the time value in milliseconds since midnight and does not include time zone information. |
DATETIME[(M)] | INT64 | org.apache.kafka.connect.data.Timestamp Represents the number of milliseconds since the epoch and does not include time zone information. |
2.1.5.2. Decimal values
Decimals are handled via the decimal.handling.mode
property. See MySQL connector configuration properties for more details.
- decimal.handling.mode=precise
MySQL type | Literal type | Semantic type |
---|---|---|
NUMERIC[(M[,D])] | BYTES | org.apache.kafka.connect.data.Decimal The scale schema parameter contains an integer that represents how many digits the decimal point shifted. |
DECIMAL[(M[,D])] | BYTES | org.apache.kafka.connect.data.Decimal The scale schema parameter contains an integer that represents how many digits the decimal point shifted. |
- decimal.handling.mode=double
MySQL type | Literal type | Semantic type |
---|---|---|
NUMERIC[(M[,D])] | FLOAT64 | n/a |
DECIMAL[(M[,D])] | FLOAT64 | n/a |
- decimal.handling.mode=string
MySQL type | Literal type | Semantic type |
---|---|---|
NUMERIC[(M[,D])] | STRING | n/a |
DECIMAL[(M[,D])] | STRING | n/a |
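For example, if your consumers cannot decode the binary encoding used by org.apache.kafka.connect.data.Decimal, a minimal sketch that emits decimals as strings:
decimal.handling.mode=string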
2.1.5.3. Boolean values
MySQL handles the BOOLEAN value internally in a specific way. The BOOLEAN column is internally mapped to the TINYINT(1) data type. When a table is created during streaming, the connector uses the proper BOOLEAN mapping because Debezium receives the original DDL. During snapshots, however, Debezium executes SHOW CREATE TABLE to obtain the table definition, which returns TINYINT(1) for both BOOLEAN and TINYINT(1) columns. Debezium then has no way to obtain the original type mapping and maps such columns to TINYINT(1).
An example configuration that uses the TinyIntOneToBooleanConverter custom converter to map selected columns to BOOLEAN is:
converters=boolean
boolean.type=io.debezium.connector.mysql.converters.TinyIntOneToBooleanConverter
boolean.selector=db1.table1.*, db1.table2.column1
2.1.5.4. Spatial data types
Currently, the Debezium MySQL connector supports the following spatial data types:
MySQL type | Literal type | Semantic type |
---|---|---|
|
|
|
2.1.6. The MySQL connector and Kafka topics
The Debezium MySQL connector writes events for all INSERT
, UPDATE
, and DELETE
operations from a single table to a single Kafka topic. The Kafka topic naming convention is as follows:
serverName.databaseName.tableName
For example, suppose that fulfillment
is the server name and inventory
is the database that contains three tables: orders
, customers
, and products
. The Debezium MySQL connector emits events to three Kafka topics, one for each table in the database:
fulfillment.inventory.orders fulfillment.inventory.customers fulfillment.inventory.products
2.1.7. MySQL supported topologies
The Debezium MySQL connector supports the following MySQL topologies:
- Standalone
- When a single MySQL server is used, the server must have the binlog enabled (and optionally GTIDs enabled) so the Debezium MySQL connector can monitor the server. This is often acceptable, since the binary log can also be used as an incremental backup. In this case, the MySQL connector always connects to and follows this standalone MySQL server instance.
- Master and slave
The Debezium MySQL connector can follow one of the masters or one of the slaves (if that slave has its binlog enabled), but the connector only sees changes in the cluster that are visible to that server. Generally, this is not a problem except for the multi-master topologies.
The connector records its position in the server’s binlog, which is different on each server in the cluster. Therefore, the connector will need to follow just one MySQL server instance. If that server fails, it must be restarted or recovered before the connector can continue.
- Highly available clusters
- A variety of high availability solutions exist for MySQL, and they make it far easier to tolerate and almost immediately recover from problems and failures. Most HA MySQL clusters use GTIDs so that slaves are able to keep track of all changes on any of the masters.
- Multi-master
A multi-master MySQL topology uses one or more MySQL slaves that each replicate from multiple masters. This is a powerful way to aggregate the replication of multiple MySQL clusters, and requires using GTIDs.
The Debezium MySQL connector can use these multi-master MySQL slaves as sources, and can fail over to different multi-master MySQL slaves as long as the new slave is caught up to the old slave (e.g., the new slave has all of the transactions that were last seen on the first slave). This works even if the connector is only using a subset of databases and/or tables, as the connector can be configured to include or exclude specific GTID sources when attempting to reconnect to a new multi-master MySQL slave and find the correct position in the binlog.
- Hosted
The Debezium MySQL connector can use hosted database options such as Amazon RDS and Amazon Aurora.
Important Because these hosted options do not allow a global read lock, table-level locks are used to create the consistent snapshot.
2.2. Setting up MySQL server
2.2.1. Creating a MySQL user for Debezium
You have to define a MySQL user with appropriate permissions on all databases that the Debezium MySQL connector monitors.
Prerequisites
- You must have a MySQL server.
- You must know basic SQL commands.
Procedure
Create the MySQL user:
mysql> CREATE USER 'user'@'localhost' IDENTIFIED BY 'password';
Grant the required permissions to the user:
mysql> GRANT SELECT, RELOAD, SHOW DATABASES, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'user' IDENTIFIED BY 'password';
See permissions explained for notes on each permission.
Important If using a hosted option such as Amazon RDS or Amazon Aurora that does not allow a global read lock, table-level locks are used to create the consistent snapshot. In this case, you also need to grant LOCK TABLES permissions to the user that you create. See Overview of how the MySQL connector works for more details.
Finalize the user’s permissions:
mysql> FLUSH PRIVILEGES;
Permission/item | Description |
---|---|
| Enables the connector to select rows from tables in databases Note This is only used when performing a snapshot. |
|
When performing a snapshot, enables the connector to use the |
|
When performing a snapshot, enables the connector to see database names by issuing the |
| Enables the connector to connect to and read the MySQL server binlog. |
| Enables the connector to run the following commands:
Important This is always required for the connector. |
| Identifies the database to which the permissions apply. |
| Specifies the user to which the permissions are granted. |
| Specifies the password for the user. |
2.2.2. Enabling the MySQL binlog for Debezium
You must enable binary logging for MySQL replication. The binary logs record transaction updates for replication tools to propagate changes.
Prerequisites
- You must have a MySQL server.
- You should have appropriate MySQL user privileges.
Procedure
Check if the
log-bin
option is already on or not.mysql> SELECT variable_value as "BINARY LOGGING STATUS (log-bin) ::" FROM information_schema.global_variables WHERE variable_name='log_bin';
If
OFF
, configure your MySQL server configuration file with the following binlog config properties:server-id = 223344 1 log_bin = mysql-bin 2 binlog_format = ROW 3 binlog_row_image = FULL 4 expire_logs_days = 10 5
Confirm your changes by checking the binlog status:
mysql> SELECT variable_value as "BINARY LOGGING STATUS (log-bin) ::" FROM information_schema.global_variables WHERE variable_name='log_bin';
Number | Property | Description |
---|---|---|
1 |
|
The value for the |
2 |
|
The value of |
3 |
|
The |
4 |
|
The |
5 |
|
This is the number of days for automatic binlog file removal. The default is |
2.2.3. Enabling MySQL Global Transaction Identifiers for Debezium
Global transaction identifiers (GTIDs) uniquely identify transactions that occur on a server within a cluster. Though not required for the Debezium MySQL connector, using GTIDs simplifies replication and allows you to more easily confirm if master and slave servers are consistent.
GTIDs are only available from MySQL 5.6.5 and later. See the MySQL documentation for more details.
Prerequisites
- You must have a MySQL server.
- You must know basic SQL commands.
- You must have access to the MySQL configuration file.
Procedure
Enable
gtid_mode
:mysql> gtid_mode=ON
Enable
enforce_gtid_consistency
:mysql> enforce_gtid_consistency=ON
Confirm the changes:
mysql> show global variables like '%GTID%';
response
+--------------------------+-------+ | Variable_name | Value | +--------------------------+-------+ | enforce_gtid_consistency | ON | | gtid_mode | ON | +--------------------------+-------+
Permission/item | Description |
---|---|
| Boolean that specifies whether GTID mode of the MySQL server is enabled or not.
|
| Boolean that instructs the server whether to enforce GTID consistency by allowing the execution of statements that can be logged in a transactionally safe manner. Required when using GTIDs.
|
2.2.4. Setting up session timeouts for Debezium
When an initial consistent snapshot is made for large databases, your established connection could time out while the tables are being read. You can prevent this behavior by configuring interactive_timeout
and wait_timeout
in your MySQL configuration file.
Prerequisites
- You must have a MySQL server.
- You must know basic SQL commands.
- You must have access to the MySQL configuration file.
Procedure
Configure
interactive_timeout
:mysql> interactive_timeout=<duration-in-seconds>
Configure
wait_timeout
:mysql> wait_timeout= <duration-in-seconds>
Permission/item | Description |
---|---|
| The number of seconds the server waits for activity on an interactive connection before closing it. See MySQL’s documentation for more details. |
| The number of seconds the server waits for activity on a noninteractive connection before closing it. See MySQL’s documentation for more details. |
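Although the procedure above shows the options at a mysql> prompt, they are ordinarily set as key=value entries in the MySQL configuration file; a sketch with hypothetical 600-second values:
[mysqld]
interactive_timeout=600
wait_timeout=600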
2.2.5. Enabling query log events for Debezium
You might want to see the original SQL
statement for each binlog event. Enabling the binlog_rows_query_log_events
option in the MySQL configuration file allows you to do this.
This option is available for MySQL 5.6 and later.
Prerequisites
- You must have a MySQL server.
- You must know basic SQL commands.
- You must have access to the MySQL configuration file.
Procedure
Enable
binlog_rows_query_log_events
:mysql> binlog_rows_query_log_events=ON
Additional information
binlog_rows_query_log_events
is set to a Boolean value that enables/disables support for including the original SQL
statement in the binlog entry.
-
ON
= enabled -
OFF
= disabled
2.3. Deploying the MySQL connector
2.3.1. Installing the MySQL connector
Installing the Debezium MySQL connector is a simple process: download the connector archive, extract the files into your Kafka Connect environment, and ensure that the plug-in’s parent directory is specified in your Kafka Connect plugin.path.
Prerequisites
- Kafka and Kafka Connect are installed.
- MySQL Server is installed and set up to run the Debezium MySQL connector.
Procedure
- Download the Debezium MySQL connector.
- Extract the files into your Kafka Connect environment.
Add the plug-in’s parent directory to your Kafka Connect
plugin.path
:plugin.path=/kafka/connect
This example assumes you have extracted the Debezium MySQL connector to the
/kafka/connect/debezium-connector-mysql
path.
- Restart your Kafka Connect process. This ensures the new JARs are picked up.
2.3.2. Configuring the MySQL connector
Typically, you configure the Debezium MySQL connector in a .yaml
file using the configuration properties available for the connector.
Prerequisites
- You should have completed the installation process for the connector.
Procedure
-
Set the
"name"
of the connector in the.yaml
file. - Set the configuration properties that you require for your Debezium MySQL connector.
For a complete list of configuration properties, see MySQL connector configuration properties.
MySQL connector example configuration
apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaConnector metadata: name: inventory-connector 1 labels: strimzi.io/cluster: my-connect-cluster spec: class: io.debezium.connector.mysql.MySqlConnector tasksMax: 1 2 config: 3 database.hostname: mysql 4 database.port: 3306 database.user: debezium database.password: dbz database.server.id: 184054 5 database.server.name: dbserver1 6 database.whitelist: inventory 7 database.history.kafka.bootstrap.servers: my-cluster-kafka-bootstrap:9092 8 database.history.kafka.topic: schema-changes.inventory 9
Item | Description |
---|---|
1 | The name of the connector. |
2 |
Only one task should operate at any one time. Because the MySQL connector reads the MySQL server’s |
3 | The connector’s configuration. |
4 |
The database host, which is the name of the container running the MySQL server ( |
5 | A unique server ID and name. The server name is the logical identifier for the MySQL server or cluster of servers. This name will be used as the prefix for all Kafka topics. |
6 |
Only changes in the |
7 |
The connector will store the history of the database schemas in Kafka using this broker (the same broker to which you are sending events) and topic name. Upon restart, the connector will recover the schemas of the database that existed at the point in time in the |
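If you run the connector on a plain Kafka Connect cluster rather than through a KafkaConnector custom resource, the same settings can be expressed as an ordinary connector properties file; this sketch simply mirrors the example above:
name=inventory-connector
connector.class=io.debezium.connector.mysql.MySqlConnector
tasks.max=1
database.hostname=mysql
database.port=3306
database.user=debezium
database.password=dbz
database.server.id=184054
database.server.name=dbserver1
database.whitelist=inventory
database.history.kafka.bootstrap.servers=my-cluster-kafka-bootstrap:9092
database.history.kafka.topic=schema-changes.inventory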
2.3.3. Adding MySQL connector configuration to Kafka Connect
You can use a provided Debezium container to deploy a Debezium MySQL connector. In this procedure, you build a custom Kafka Connect container image for Debezium, configure the Debezium connector as needed, and then add your connector configuration to your Kafka Connect environment.
Prerequisites
- Podman or Docker is installed and you have sufficient rights to create and manage containers.
- You installed the Debezium MySQL connector archive.
Procedure
Extract the Debezium MySQL connector archive to create a directory structure for the connector plug-in, for example:
tree ./my-plugins/ ./my-plugins/ ├── debezium-connector-mysql │ ├── ...
Create and publish a custom image for running your Debezium connector:
Create a new
Dockerfile
by usingregistry.redhat.io/amq7/amq-streams-kafka-25-rhel7:1.5.0
as the base image. In the following example, you would replace my-plugins with the name of your plug-ins directory:FROM registry.redhat.io/amq7/amq-streams-kafka-25-rhel7:1.5.0 USER root:root COPY ./my-plugins/ /opt/kafka/plugins/ USER 1001
Before Kafka Connect starts running the connector, Kafka Connect loads any third-party plug-ins that are in the
/opt/kafka/plugins
directory.Build the container image. For example, if you saved the
Dockerfile
that you created in the previous step asdebezium-container-for-mysql
, and if theDockerfile
is in the current directory, then you would run the following command:podman build -t debezium-container-for-mysql:latest .
Push your custom image to your container registry, for example:
podman push debezium-container-for-mysql:latest
Point to the new container image. Do one of the following:
Edit the
spec.image
property of theKafkaConnector
custom resource. If set, this property overrides theSTRIMZI_DEFAULT_KAFKA_CONNECT_IMAGE
variable in the Cluster Operator. For example:apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaConnector metadata: name: my-connect-cluster spec: #... image: debezium-container-for-mysql
-
In the
install/cluster-operator/050-Deployment-strimzi-cluster-operator.yaml
file, edit theSTRIMZI_DEFAULT_KAFKA_CONNECT_IMAGE
variable to point to the new container image and reinstall the Cluster Operator. If you edit this file you must apply it to your OpenShift cluster.
-
Create a
KafkaConnector
custom resource that defines your Debezium MySQL connector instance. See the connector configuration example. Apply the connector instance, for example:
oc apply -f inventory-connector.yaml
This registers
inventory-connector
and the connector starts to run against theinventory
database.Verify that the connector was created and has started to capture changes in the specified database. You can verify the connector instance by watching the Kafka Connect log output as, for example,
inventory-connector
starts.Display the Kafka Connect log output:
oc logs $(oc get pods -o name -l strimzi.io/name=my-connect-cluster-connect)
Review the log output to verify that the initial snapshot has been executed. You should see something like the following lines:
... INFO Starting snapshot for ... ... INFO Snapshot is using user 'debezium' ...
Results
When the connector starts, it performs a consistent snapshot of the MySQL databases that the connector is configured for. The connector then starts generating data change events for row-level operations and streaming change event records to Kafka topics.
2.3.4. MySQL connector configuration properties
The configuration properties listed here are required to run the Debezium MySQL connector. There are also advanced MySQL connector properties whose default values rarely need to be changed and that therefore do not need to be specified in the connector configuration.
The Debezium MySQL connector supports pass-through configuration when creating the Kafka producer and consumer. See information about pass-through properties at the end of this section, and also see the Kafka documentation for more details about pass-through properties.
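For example, pass-through properties for the internal database history producer and consumer use the database.history.producer. and database.history.consumer. prefixes; a sketch, assuming an SSL-secured Kafka cluster (the keystore path is hypothetical):
database.history.producer.security.protocol=SSL
database.history.producer.ssl.keystore.location=/var/private/ssl/kafka.keystore.jks
database.history.consumer.security.protocol=SSL
database.history.consumer.ssl.keystore.location=/var/private/ssl/kafka.keystore.jks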
Property | Default | Description |
---|---|---|
Unique name for the connector. Attempting to register again with the same name will fail. (This property is required by all Kafka Connect connectors.) | ||
The name of the Java class for the connector. Always use a value of | ||
| The maximum number of tasks that should be created for this connector. The MySQL connector always uses a single task and therefore does not use this value, so the default is always acceptable. | |
IP address or hostname of the MySQL database server. | ||
| Integer port number of the MySQL database server. | |
Name of the MySQL database to use when connecting to the MySQL database server. | ||
Password to use when connecting to the MySQL database server. | ||
Logical name that identifies and provides a namespace for the particular MySQL database server/cluster being monitored. The logical name should be unique across all other connectors, since it is used as a prefix for all Kafka topic names emanating from this connector. Only alphanumeric characters and underscores should be used. | ||
random | A numeric ID of this database client, which must be unique across all currently-running database processes in the MySQL cluster. This connector joins the MySQL database cluster as another server (with this unique ID) so it can read the binlog. By default, a random number is generated between 5400 and 6400, though we recommend setting an explicit value. | |
The full name of the Kafka topic where the connector will store the database schema history. | ||
A list of host/port pairs that the connector will use for establishing an initial connection to the Kafka cluster. This connection will be used for retrieving database schema history previously stored by the connector, and for writing each DDL statement read from the source database. This should point to the same Kafka cluster used by the Kafka Connect process. | ||
empty string |
An optional comma-separated list of regular expressions that match database names to be monitored; any database name not included in the whitelist will be excluded from monitoring. By default all databases will be monitored. May not be used with | |
empty string |
An optional comma-separated list of regular expressions that match database names to be excluded from monitoring; any database name not included in the blacklist will be monitored. May not be used with | |
empty string |
An optional comma-separated list of regular expressions that match fully-qualified table identifiers for tables to be monitored; any table not included in the whitelist will be excluded from monitoring. Each identifier is of the form databaseName.tableName. By default the connector will monitor every non-system table in each monitored database. May not be used with | |
empty string |
An optional comma-separated list of regular expressions that match fully-qualified table identifiers for tables to be excluded from monitoring; any table not included in the blacklist will be monitored. Each identifier is of the form databaseName.tableName. May not be used with | |
empty string | An optional comma-separated list of regular expressions that match the fully-qualified names of columns that should be excluded from change event message values. Fully-qualified names for columns are of the form databaseName.tableName.columnName, or databaseName.schemaName.tableName.columnName. | |
n/a | An optional comma-separated list of regular expressions that match the fully-qualified names of character-based columns whose values should be truncated in the change event message values if the field values are longer than the specified number of characters. Multiple properties with different lengths can be used in a single configuration, although in each the length must be a positive integer. Fully-qualified names for columns are of the form databaseName.tableName.columnName. | |
n/a |
An optional comma-separated list of regular expressions that match the fully-qualified names of character-based columns whose values should be replaced in the change event message values with a field value consisting of the specified number of asterisk ( | |
n/a |
An optional comma-separated list of regular expressions that match the fully-qualified names of character-based columns whose values should be pseudonyms in the change event message values with a field value consisting of the hashed value using the algorithm Multiple properties with different lengths can be used in a single configuration, although in each the length must be a positive integer or zero. Fully-qualified names for columns are of the form databaseName.tableName.columnName. Example: column.mask.hash.SHA-256.with.salt.CzQMA0cB5K = inventory.orders.customerName, inventory.shipment.customerName
where
Note: Depending on the | |
n/a |
An optional comma-separated list of regular expressions that match the fully-qualified names of columns whose original type and length should be added as a parameter to the corresponding field schemas in the emitted change messages. The schema parameters | |
n/a |
An optional comma-separated list of regular expressions that match the database-specific data type name of columns whose original type and length should be added as a parameter to the corresponding field schemas in the emitted change messages. The schema parameters | |
|
Time, date, and timestamps can be represented with different kinds of precision, including: | |
|
Specifies how the connector should handle values for | |
|
Specifies how BIGINT UNSIGNED columns should be represented in change events, including: | |
|
Boolean value that specifies whether the connector should publish changes in the database schema to a Kafka topic with the same name as the database server ID. Each schema change will be recorded using a key that contains the database name and whose value includes the DDL statement(s). This is independent of how the connector internally records database history. The default is | |
|
Boolean value that specifies whether the connector should include the original SQL query that generated the change event. | |
|
Specifies how the connector should react to exceptions during deserialization of binlog events. | |
|
Specifies how the connector should react to binlog events that relate to tables that are not present in the internal schema representation (that is, the internal representation is not consistent with the database). | |
|
Positive integer value that specifies the maximum size of the blocking queue into which change events read from the database log are placed before they are written to Kafka. This queue can provide backpressure to the binlog reader when, for example, writes to Kafka are slower or if Kafka is not available. Events that appear in the queue are not included in the offsets periodically recorded by this connector. Defaults to 8192, and should always be larger than the maximum batch size specified in the | |
| Positive integer value that specifies the maximum size of each batch of events that should be processed during each iteration of this connector. Defaults to 2048. | |
| Positive integer value that specifies the number of milliseconds the connector should wait during each iteration for new change events to appear. Defaults to 1000 milliseconds, or 1 second. | |
| A positive integer value that specifies the maximum time in milliseconds this connector should wait after trying to connect to the MySQL database server before timing out. Defaults to 30 seconds. | |
A comma-separated list of regular expressions that match source UUIDs in the GTID set used to find the binlog position in the MySQL server. Only the GTID ranges that have sources matching one of these include patterns will be used. May not be used with | ||
A comma-separated list of regular expressions that match source UUIDs in the GTID set used to find the binlog position in the MySQL server. Only the GTID ranges that have sources matching none of these exclude patterns will be used. May not be used with | ||
|
Controls whether a tombstone event should be generated after a delete event. | |
empty string |
A semicolon-separated list of regular expressions that match fully-qualified tables and columns to map a primary key. | |
bytes |
Specifies how binary ( |
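For reference, the fragment below sketches how several of the properties in this table fit together in a connector registration. The host name, credentials, server ID, and filter values are illustrative placeholders only, and the example assumes the standard Debezium MySQL property names:

{
  "name": "inventory-connector",
  "config": {
    "connector.class": "io.debezium.connector.mysql.MySqlConnector",
    "database.hostname": "mysql.example.com",
    "database.port": "3306",
    "database.user": "debezium",
    "database.password": "dbz-secret",
    "database.server.id": "184054",
    "database.server.name": "dbserver1",
    "database.whitelist": "inventory",
    "database.history.kafka.bootstrap.servers": "kafka:9092",
    "database.history.kafka.topic": "schema-changes.inventory",
    "include.schema.changes": "true"
  }
}

A JSON document of this shape is typically submitted to the Kafka Connect REST API when registering the connector.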
2.3.4.1. Advanced MySQL connector properties
The following table describes advanced MySQL connector properties.
Property | Default | Description |
---|---|---|
| A boolean value that specifies whether a separate thread should be used to ensure the connection to the MySQL server/cluster is kept alive. | |
| Boolean value that specifies whether built-in system tables should be ignored. This applies regardless of the table whitelist or blacklists. By default system tables are excluded from monitoring, and no events are generated when changes are made to any of the system tables. | |
| An integer value that specifies the maximum number of milliseconds the connector should wait during startup/recovery while polling for persisted data. The default is 100ms. | |
|
The maximum number of times that the connector should attempt to read persisted history data before the connector recovery fails with an error. The maximum amount of time to wait after receiving no data is | |
|
Boolean value that specifies whether the connector should ignore malformed or unknown database statements, or stop processing so that an operator can fix the issue. The safe default is | |
|
Boolean value that specifies whether the connector should record all DDL statements or (when | |
|
Specifies whether to use an encrypted connection. The default is
The
The
The
The | |
0 |
The size of a look-ahead buffer used by the binlog reader. | |
|
Specifies the criteria for running a snapshot upon startup of the connector. The default is
| |
|
Controls if and how long the connector holds onto the global MySQL read lock (preventing any updates to the database) while it is performing a snapshot. There are three possible values
| |
Controls which rows from tables are included in the snapshot. | |
| During a snapshot operation, the connector will query each included table to produce a read event for all rows in that table. This parameter determines whether the MySQL connection will pull all results for a table into memory (which is fast but requires large amounts of memory), or whether the results will instead be streamed (can be slower, but will work for very large tables). The value specifies the minimum number of rows a table must contain before the connector will stream results, and defaults to 1,000. Set this parameter to '0' to skip all table size checks and always stream all results during a snapshot. | |
|
Controls how frequently the heartbeat messages are sent. | |
|
Controls the naming of the topic to which heartbeat messages are sent. | |
A semicolon-separated list of SQL statements to be executed when a JDBC connection (not the transaction log reading connection) to the database is established. Use a doubled semicolon (';;') to use a semicolon as a character and not as a delimiter. | |
An interval in milliseconds that the connector should wait before taking a snapshot after starting up; | |
Specifies the maximum number of rows that should be read in one go from each table while taking a snapshot. The connector will read the table contents in multiple batches of this size. | ||
| Positive integer value that specifies the maximum amount of time (in milliseconds) to wait to obtain table locks when performing a snapshot. If table locks cannot be acquired in this time interval, the snapshot will fail. See How the MySQL connector performs database snapshots. | |
MySQL allows users to insert year values as either 2-digit or 4-digit numbers. Two-digit values are automatically mapped to the range 1970 - 2069. This mapping is usually done by the database. | |
| Whether field names will be sanitized to adhere to Avro naming requirements. | |
A comma-separated list of operation types that will be skipped during streaming. The operations include: |
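To illustrate how a few of these advanced options are combined in practice, the fragment below adds snapshot- and heartbeat-related settings to a connector configuration. It is a sketch with placeholder values and assumes the standard Debezium property names snapshot.mode, snapshot.locking.mode, min.row.count.to.stream.results, and heartbeat.interval.ms:

"snapshot.mode": "when_needed",
"snapshot.locking.mode": "minimal",
"min.row.count.to.stream.results": "0",
"heartbeat.interval.ms": "10000"

With these settings the connector takes a snapshot only when one is needed, holds the global read lock for as short a time as possible, always streams table contents instead of buffering them in memory, and emits a heartbeat message every 10 seconds.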
2.3.4.2. Pass-through configuration properties
The MySQL connector also supports pass-through configuration properties that are used when creating the Kafka producer and consumer. Specifically, all connector configuration properties that begin with the database.history.producer.
prefix are used (without the prefix) when creating the Kafka producer that writes to the database history. All properties that begin with the prefix database.history.consumer.
are used (without the prefix) when creating the Kafka consumer that reads the database history upon connector start-up.
For example, the following connector configuration properties can be used to secure connections to the Kafka broker:
database.history.producer.security.protocol=SSL
database.history.producer.ssl.keystore.location=/var/private/ssl/kafka.server.keystore.jks
database.history.producer.ssl.keystore.password=test1234
database.history.producer.ssl.truststore.location=/var/private/ssl/kafka.server.truststore.jks
database.history.producer.ssl.truststore.password=test1234
database.history.producer.ssl.key.password=test1234
database.history.consumer.security.protocol=SSL
database.history.consumer.ssl.keystore.location=/var/private/ssl/kafka.server.keystore.jks
database.history.consumer.ssl.keystore.password=test1234
database.history.consumer.ssl.truststore.location=/var/private/ssl/kafka.server.truststore.jks
database.history.consumer.ssl.truststore.password=test1234
database.history.consumer.ssl.key.password=test1234
2.3.4.3. Pass-through properties for database drivers
In addition to the pass-through properties for the Kafka producer and consumer, there are pass-through properties for database drivers. These properties have the database.
prefix. For example, database.tinyInt1isBit=false
is passed to the JDBC URL.
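For example, a connector configuration fragment like the following (with an illustrative host name) results in the tinyInt1isBit=false option being passed through to the JDBC connection:

"database.hostname": "mysql.example.com",
"database.tinyInt1isBit": "false"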
2.3.5. MySQL connector monitoring metrics
The Debezium MySQL connector has three metric types in addition to the built-in support for JMX metrics that Zookeeper, Kafka, and Kafka Connect have.
- snapshot metrics; for monitoring the connector when performing snapshots
- binlog metrics; for monitoring the connector when reading the binlog
- schema history metrics; for monitoring the status of the connector’s schema history
Refer to the monitoring documentation for details of how to expose these metrics via JMX.
2.3.5.1. Snapshot metrics
The MBean is debezium.mysql:type=connector-metrics,context=snapshot,server=<database.server.name>
.
Attributes | Type | Description |
---|---|---|
|
| The last snapshot event that the connector has read. |
|
| The number of milliseconds since the connector has read and processed the most recent event. |
|
| The total number of events that this connector has seen since last started or reset. |
|
| The number of events that have been filtered by whitelist or blacklist filtering rules configured on the connector. |
|
| The list of tables that are monitored by the connector. |
|
| The length of the queue used to pass events between the snapshotter and the main Kafka Connect loop. |
|
| The free capacity of the queue used to pass events between the snapshotter and the main Kafka Connect loop. |
|
| The total number of tables that are being included in the snapshot. |
|
| The number of tables that the snapshot has yet to copy. |
|
| Whether the snapshot was started. |
|
| Whether the snapshot was aborted. |
|
| Whether the snapshot completed. |
|
| The total number of seconds that the snapshot has taken so far, even if not complete. |
|
| Map containing the number of rows scanned for each table in the snapshot. Tables are incrementally added to the Map during processing. Updates every 10,000 rows scanned and upon completing a table. |
The Debezium MySQL connector also provides the following custom snapshot metrics:
Attribute | Type | Description |
---|---|---|
|
| Whether the connector currently holds a global or table write lock. |
2.3.5.2. Binlog metrics
The MBean is debezium.mysql:type=connector-metrics,context=binlog,server=<database.server.name>
.
The transaction-related attributes are only available if binlog event buffering is enabled. See binlog.buffer.size in the advanced connector configuration properties for more details.
Attributes | Type | Description |
---|---|---|
|
| The last streaming event that the connector has read. |
|
| The number of milliseconds since the connector has read and processed the most recent event. |
|
| The total number of events that this connector has seen since last started or reset. |
|
| The number of events that have been filtered by whitelist or blacklist filtering rules configured on the connector. |
|
| The list of tables that are monitored by the connector. |
|
| The length of the queue used to pass events between the streamer and the main Kafka Connect loop. |
|
| The free capacity of the queue used to pass events between the streamer and the main Kafka Connect loop. |
|
| Flag that denotes whether the connector is currently connected to the database server. |
|
| The number of milliseconds between the last change event’s timestamp and the connector processing it. The values will incorporate any differences between the clocks on the machines where the database server and the connector are running. |
|
| The number of processed transactions that were committed. |
|
| The coordinates of the last received event. |
|
| Transaction identifier of the last processed transaction. |
The Debezium MySQL connector also provides the following custom binlog metrics:
Attribute | Type | Description |
---|---|---|
|
| The name of the binlog file that the connector has most recently read. |
|
| The most recent position (in bytes) within the binlog that the connector has read. |
|
| Flag that denotes whether the connector is currently tracking GTIDs from MySQL server. |
|
| The string representation of the most recent GTID set seen by the connector when reading the binlog. |
|
| The number of events that have been skipped by the MySQL connector. Typically events are skipped due to a malformed or unparseable event from MySQL’s binlog. |
|
| The number of disconnects by the MySQL connector. |
|
| The number of processed transactions that were rolled back and not streamed. |
|
|
The number of transactions that have not conformed to the expected protocol. |
|
|
The number of transactions that did not fit into the look-ahead buffer. Should be significantly smaller than |
2.3.5.3. Schema history metrics
The MBean is debezium.mysql:type=connector-metrics,context=schema-history,server=<database.server.name>
.
Attributes | Type | Description |
---|---|---|
|
|
One of |
|
| The time, in epoch seconds, at which recovery started. |
|
| The number of changes that were read during the recovery phase. |
|
| The total number of schema changes applied during recovery and runtime. |
|
| The number of milliseconds that elapsed since the last change was recovered from the history store. |
|
| The number of milliseconds that elapsed since the last change was applied. |
|
| The string representation of the last change recovered from the history store. |
|
| The string representation of the last applied change. |
2.4. MySQL connector common issues
2.4.1. Configuration and startup errors
The Debezium MySQL connector fails, reports an error, and stops running when the following startup errors occur:
- The connector’s configuration is invalid.
- The connector cannot connect to the MySQL server using the specified connectivity parameters.
- The connector is attempting to restart at a position in the binlog where MySQL no longer has the history available.
If you receive any of these errors, the error message provides more details about the problem and, where possible, a suggested workaround.
2.4.3. Kafka Connect stops
There are three scenarios that cause some issues when Kafka Connect stops:
2.4.3.1. Kafka Connect stops gracefully
When Kafka Connect stops gracefully, there is only a short delay while the Debezium MySQL connector tasks are stopped and restarted on new Kafka Connect processes.
2.4.3.2. Kafka Connect process crashes
If Kafka Connect crashes, the process stops and any Debezium MySQL connector tasks terminate without their most recently-processed offsets being recorded. In distributed mode, Kafka Connect restarts the connector tasks on other processes. However, the MySQL connector resumes from the last offset recorded by the earlier processes. This means that the replacement tasks may generate some of the same events processed prior to the crash, creating duplicate events.
Each change event message includes source-specific information that consumers can use to identify duplicate events, including:
- the event origin
- the MySQL server’s event time
- the binlog filename and position
- GTIDs (if used)
2.4.4. MySQL purges binlog files
If the Debezium MySQL connector stops for too long, the MySQL server purges older binlog files and the connector’s last position may be lost. When the connector is restarted, the MySQL server no longer has the starting point and the connector performs another initial snapshot. If the snapshot is disabled, the connector fails with an error.
See How the MySQL connector performs database snapshots for more information on initial snapshots.
Chapter 3. Debezium connector for PostgreSQL
Debezium’s PostgreSQL connector captures row-level changes in the schemas of a PostgreSQL database. PostgreSQL versions 10, 11, and 12 are supported.
The first time it connects to a PostgreSQL server or cluster, the connector takes a consistent snapshot of all schemas. After that snapshot is complete, the connector continuously captures row-level changes that insert, update, and delete database content and that were committed to a PostgreSQL database. The connector generates data change event records and streams them to Kafka topics. For each table, the default behavior is that the connector streams all generated events to a separate Kafka topic for that table. Applications and services consume data change event records from that topic.
Information and procedures for using a Debezium PostgreSQL connector are organized as follows:
- Section 3.1, “Overview of Debezium PostgreSQL connector”
- Section 3.2, “How Debezium PostgreSQL connectors work”
- Section 3.3, “Descriptions of Debezium PostgreSQL connector data change events”
- Section 3.4, “How Debezium PostgreSQL connectors map data types”
- Section 3.5, “Setting up PostgreSQL to run a Debezium connector”
- Section 3.6, “Deploying and managing Debezium PostgreSQL connectors”
- Section 3.7, “How Debezium PostgreSQL connectors handle faults and problems”
3.1. Overview of Debezium PostgreSQL connector
PostgreSQL’s logical decoding feature was introduced in version 9.4. It is a mechanism that allows the extraction of the changes that were committed to the transaction log and the processing of these changes in a user-friendly manner with the help of an output plug-in. The output plug-in enables clients to consume the changes.
The PostgreSQL connector contains two main parts that work together to read and process database changes:
-
pgoutput
is the standard logical decoding output plug-in in PostgreSQL 10+. This is the only supported logical decoding output plug-in in this Debezium release. This plug-in is maintained by the PostgreSQL community, and used by PostgreSQL itself for logical replication. This plug-in is always present so no additional libraries need to be installed. The Debezium connector interprets the raw replication event stream directly into change events. - Java code (the actual Kafka Connect connector) that reads the changes produced by the logical decoding output plug-in by using PostgreSQL’s streaming replication protocol and the PostgreSQL JDBC driver.
The connector produces a change event for every row-level insert, update, and delete operation that was captured and sends change event records for each table in a separate Kafka topic. Client applications read the Kafka topics that correspond to the database tables of interest, and can react to every row-level event they receive from those topics.
PostgreSQL normally purges write-ahead log (WAL) segments after some period of time. This means that the connector does not have the complete history of all changes that have been made to the database. Therefore, when the PostgreSQL connector first connects to a particular PostgreSQL database, it starts by performing a consistent snapshot of each of the database schemas. After the connector completes the snapshot, it continues streaming changes from the exact point at which the snapshot was made. This way, the connector starts with a consistent view of all of the data, and does not omit any changes that were made while the snapshot was being taken.
The connector is tolerant of failures. As the connector reads changes and produces events, it records the WAL position for each event. If the connector stops for any reason (including communication failures, network problems, or crashes), upon restart the connector continues reading the WAL where it last left off. This includes snapshots. If the connector stops during a snapshot, the connector begins a new snapshot when it restarts.
The connector relies on and reflects the PostgreSQL logical decoding feature, which has the following limitations:
- Logical decoding does not support DDL changes. This means that the connector is unable to report DDL change events back to consumers.
-
Logical decoding replication slots are supported on only
primary
servers. When there is a cluster of PostgreSQL servers, the connector can run on only the activeprimary
server. It cannot run onhot
orwarm
standby replicas. If theprimary
server fails or is demoted, the connector stops. After theprimary
server has recovered, you can restart the connector. If a different PostgreSQL server has been promoted toprimary
, adjust the connector configuration before restarting the connector.
Behavior when things go wrong describes what the connector does when there is a problem.
Debezium currently supports databases with UTF-8 character encoding only. With a single byte character encoding, it is not possible to correctly process strings that contain extended ASCII code characters.
3.2. How Debezium PostgreSQL connectors work
To optimally configure and run a Debezium PostgreSQL connector, it is helpful to understand how the connector performs snapshots, streams change events, determines Kafka topic names, and uses metadata.
Details are in the following topics:
- Section 3.2.1, “How Debezium PostgreSQL connectors perform database snapshots”
- Section 3.2.2, “How Debezium PostgreSQL connectors stream change event records”
- Section 3.2.3, “Default names of Kafka topics that receive Debezium PostgreSQL change event records”
- Section 3.2.4, “Metadata in Debezium PostgreSQL change event records”
- Section 3.2.5, “Debezium PostgreSQL connector-generated events that represent transaction boundaries”
3.2.1. How Debezium PostgreSQL connectors perform database snapshots
Most PostgreSQL servers are configured to not retain the complete history of the database in the WAL segments. This means that the PostgreSQL connector would be unable to see the entire history of the database by reading only the WAL. Consequently, the first time that the connector starts, it performs an initial consistent snapshot of the database. The default behavior for performing a snapshot consists of the following steps. You can change this behavior by setting the snapshot.mode
connector configuration property to a value other than initial
.
1. Start a transaction with a SERIALIZABLE, READ ONLY, DEFERRABLE isolation level to ensure that subsequent reads in this transaction are against a single consistent version of the data. Any changes to the data due to subsequent INSERT, UPDATE, and DELETE operations by other clients are not visible to this transaction.
2. Obtain an ACCESS SHARE MODE lock on each of the tables being tracked to ensure that no structural changes can occur to any of the tables while the snapshot is taking place. These locks do not prevent table INSERT, UPDATE, and DELETE operations from taking place during the snapshot. This step is omitted when snapshot.mode is set to exported, which allows the connector to perform a lock-free snapshot.
3. Read the current position in the server’s transaction log.
4. Scan the database tables and schemas, generate a READ event for each row, and write that event to the appropriate table-specific Kafka topic.
5. Commit the transaction.
6. Record the successful completion of the snapshot in the connector offsets.
If the connector fails, is rebalanced, or stops after Step 1 begins but before Step 6 completes, upon restart the connector begins a new snapshot. After the connector completes its initial snapshot, the PostgreSQL connector continues streaming from the position that it read in step 3. This ensures that the connector does not miss any updates. If the connector stops again for any reason, upon restart, the connector continues streaming changes from where it previously left off.
It is strongly recommended that you configure a PostgreSQL connector to set snapshot.mode
to exported
. The initial
, initial only
and always
modes can lose a few events while a connector switches from performing the snapshot to streaming change event records when a database is under heavy load. This is a known issue and the affected snapshot modes will be reworked to use exported
mode internally (DBZ-2337).
Setting | Description |
---|---|
|
The connector always performs a snapshot when it starts. After the snapshot completes, the connector continues streaming changes from step 3 in the above sequence. This mode is useful in these situations:
|
|
The connector never performs snapshots. When a connector is configured this way, its behavior when it starts is as follows. If there is a previously stored LSN in the Kafka offsets topic, the connector continues streaming changes from that position. If no LSN has been stored, the connector starts streaming changes from the point in time when the PostgreSQL logical replication slot was created on the server. The |
| The connector performs a database snapshot and stops before streaming any change event records. If the connector had started but did not complete a snapshot before stopping, the connector restarts the snapshot process and stops when the snapshot completes. |
| The connector performs a database snapshot based on the point in time when the replication slot was created. This mode is an excellent way to perform a snapshot in a lock-free way. |
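Following the recommendation above, a connector registration that uses the exported snapshot mode might look like the sketch below. The connection details, slot name, and schema filter are placeholders, and the fragment assumes the standard Debezium PostgreSQL property names:

{
  "name": "fulfillment-connector",
  "config": {
    "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
    "plugin.name": "pgoutput",
    "slot.name": "debezium_fulfillment",
    "database.hostname": "postgres.example.com",
    "database.port": "5432",
    "database.user": "debezium",
    "database.password": "dbz-secret",
    "database.dbname": "postgres",
    "database.server.name": "fulfillment",
    "schema.whitelist": "inventory",
    "snapshot.mode": "exported"
  }
}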
3.2.2. How Debezium PostgreSQL connectors stream change event records
The PostgreSQL connector typically spends the vast majority of its time streaming changes from the PostgreSQL server to which it is connected. This mechanism relies on PostgreSQL’s replication protocol. This protocol enables clients to receive changes from the server as they are committed in the server’s transaction log at certain positions, which are referred to as Log Sequence Numbers (LSNs).
Whenever the server commits a transaction, a separate server process invokes a callback function from the logical decoding plug-in. This function processes the changes from the transaction, converts them to a specific format (Protobuf or JSON in the case of a Debezium plug-in) and writes them to an output stream, which can then be consumed by clients.
The Debezium PostgreSQL connector acts as a PostgreSQL client. When the connector receives changes it transforms the events into Debezium create, update, or delete events that include the LSN of the event. The PostgreSQL connector forwards these change events in records to the Kafka Connect framework, which is running in the same process. The Kafka Connect process asynchronously writes the change event records in the same order in which they were generated to the appropriate Kafka topic.
Periodically, Kafka Connect records the most recent offset in another Kafka topic. The offset indicates source-specific position information that Debezium includes with each event. For the PostgreSQL connector, the LSN recorded in each change event is the offset.
When Kafka Connect gracefully shuts down, it stops the connectors, flushes all event records to Kafka, and records the last offset received from each connector. When Kafka Connect restarts, it reads the last recorded offset for each connector, and starts each connector at its last recorded offset. When the connector restarts, it sends a request to the PostgreSQL server to send the events starting just after that position.
The PostgreSQL connector retrieves schema information as part of the events sent by the logical decoding plug-in. However, the connector does not retrieve information about which columns compose the primary key. The connector obtains this information from the JDBC metadata (side channel). If the primary key definition of a table changes (by adding, removing or renaming primary key columns), there is a tiny period of time when the primary key information from JDBC is not synchronized with the change event that the logical decoding plug-in generates. During this tiny period, a message could be created with an inconsistent key structure. To prevent this inconsistency, update primary key structures as follows:
- Put the database or an application into a read-only mode.
- Let Debezium process all remaining events.
- Stop Debezium.
- Update the primary key definition in the relevant table.
- Put the database or the application into read/write mode.
- Restart Debezium.
PostgreSQL 10+ logical decoding support (pgoutput
)
As of PostgreSQL 10+, there is a logical replication stream mode, called pgoutput
that is natively supported by PostgreSQL. This means that a Debezium PostgreSQL connector can consume that replication stream without the need for additional plug-ins. This is particularly valuable for environments where installation of plug-ins is not supported or not allowed.
See Setting up PostgreSQL for more details.
3.2.3. Default names of Kafka topics that receive Debezium PostgreSQL change event records
The PostgreSQL connector writes events for all insert, update, and delete operations on a single table to a single Kafka topic. By default, the Kafka topic name is serverName.schemaName.tableName where:
-
serverName is the logical name of the connector as specified with the
database.server.name
connector configuration property. - schemaName is the name of the database schema where the operation occurred.
- tableName is the name of the database table in which the operation occurred.
For example, suppose that fulfillment
is the logical server name in the configuration for a connector that is capturing changes in a PostgreSQL installation that has a postgres
database and an inventory
schema that contains four tables: products
, products_on_hand
, customers
, and orders
. The connector would stream records to these four Kafka topics:
-
fulfillment.inventory.products
-
fulfillment.inventory.products_on_hand
-
fulfillment.inventory.customers
-
fulfillment.inventory.orders
Now suppose that the tables are not part of a specific schema but were created in the default public
PostgreSQL schema. The names of the Kafka topics would be:
-
fulfillment.public.products
-
fulfillment.public.products_on_hand
-
fulfillment.public.customers
-
fulfillment.public.orders
3.2.4. Metadata in Debezium PostgreSQL change event records
In addition to a database change event, each record produced by a PostgreSQL connector contains some metadata. Metadata includes where the event occurred on the server, the name of the source partition and the name of the Kafka topic and partition where the event should go, for example:
"sourcePartition": { "server": "fulfillment" }, "sourceOffset": { "lsn": "24023128", "txId": "555", "ts_ms": "1482918357011" }, "kafkaPartition": null
-
sourcePartition
always defaults to the setting of thedatabase.server.name
connector configuration property. sourceOffset
contains information about the location of the server where the event occurred:-
lsn
represents the PostgreSQL Log Sequence Number oroffset
in the transaction log. -
txId
represents the identifier of the server transaction that caused the event. -
ts_ms
represents the server time at which the transaction was committed in the form of the number of milliseconds since the epoch.
-
-
kafkaPartition
with a setting ofnull
means that the connector does not use a specific Kafka partition. The PostgreSQL connector uses only one Kafka Connect partition and it places the generated events into one Kafka partition.
3.2.5. Debezium PostgreSQL connector-generated events that represent transaction boundaries
Debezium can generate events that represent transaction boundaries and that enrich data change event messages. For every transaction BEGIN
and END
, Debezium generates an event that contains the following fields:
-
status
-BEGIN
orEND
-
id
- string representation of unique transaction identifier -
event_count
(forEND
events) - total number of events emitted by the transaction -
data_collections
(forEND
events) - an array of pairs ofdata_collection
andevent_count
that provides the number of events emitted by changes originating from given data collection
Example
{ "status": "BEGIN", "id": "571", "event_count": null, "data_collections": null } { "status": "END", "id": "571", "event_count": 2, "data_collections": [ { "data_collection": "s1.a", "event_count": 1 }, { "data_collection": "s2.a", "event_count": 1 } ] }
Transaction events are written to the topic named database.server.name.transaction
.
Change data event enrichment
When transaction metadata is enabled the data message Envelope
is enriched with a new transaction
field. This field provides information about every event in the form of a composite of fields:
-
id
- string representation of unique transaction identifier -
total_order
- absolute position of the event among all events generated by the transaction -
data_collection_order
- the per-data collection position of the event among all events that were emitted by the transaction
Following is an example of a message:
{ "before": null, "after": { "pk": "2", "aa": "1" }, "source": { ... }, "op": "c", "ts_ms": "1580390884335", "transaction": { "id": "571", "total_order": "1", "data_collection_order": "1" } }
3.3. Descriptions of Debezium PostgreSQL connector data change events
The Debezium PostgreSQL connector generates a data change event for each row-level INSERT
, UPDATE
, and DELETE
operation. Each event contains a key and a value. The structure of the key and the value depends on the table that was changed.
Debezium and Kafka Connect are designed around continuous streams of event messages. However, the structure of these events may change over time, which can be difficult for consumers to handle. To address this, each event contains the schema for its content or, if you are using a schema registry, a schema ID that a consumer can use to obtain the schema from the registry. This makes each event self-contained.
The following skeleton JSON shows the basic four parts of a change event. However, how you configure the Kafka Connect converter that you choose to use in your application determines the representation of these four parts in change events. A schema
field is in a change event only when you configure the converter to produce it. Likewise, the event key and event payload are in a change event only if you configure a converter to produce it. If you use the JSON converter and you configure it to produce all four basic change event parts, change events have this structure:
{ "schema": { 1 ... }, "payload": { 2 ... }, "schema": { 3 ... }, "payload": { 4 ... }, }
Item | Field name | Description |
---|---|---|
1 |
|
The first |
2 |
|
The first |
3 |
|
The second |
4 |
|
The second |
The default behavior is that the connector streams change event records to topics with names that are the same as the event’s originating table.
Starting with Kafka 0.10, Kafka can optionally record the event key and value with the timestamp at which the message was created (recorded by the producer) or written to the log by Kafka.
The PostgreSQL connector ensures that all Kafka Connect schema names adhere to the Avro schema name format. This means that the logical server name must start with a Latin letter or an underscore, that is, a-z, A-Z, or _. Each remaining character in the logical server name and each character in the schema and table names must be a Latin letter, a digit, or an underscore, that is, a-z, A-Z, 0-9, or _. If there is an invalid character it is replaced with an underscore character.
This can lead to unexpected conflicts if the logical server name, a schema name, or a table name contains invalid characters, and the only characters that distinguish names from one another are invalid and thus replaced with underscores.
Details are in the following topics:
3.3.1. About keys in Debezium PostgreSQL change events
For a given table, the change event’s key has a structure that contains a field for each column in the primary key of the table at the time the event was created. Alternatively, if the table has REPLICA IDENTITY
set to FULL
or USING INDEX
there is a field for each unique key constraint.
Consider a customers
table defined in the public
database schema and the example of a change event key for that table.
Example table
CREATE TABLE customers ( id SERIAL, first_name VARCHAR(255) NOT NULL, last_name VARCHAR(255) NOT NULL, email VARCHAR(255) NOT NULL, PRIMARY KEY(id) );
Example change event key
If the database.server.name
connector configuration property has the value PostgreSQL_server
, every change event for the customers
table while it has this definition has the same key structure, which in JSON looks like this:
{ "schema": { 1 "type": "struct", "name": "PostgreSQL_server.public.customers.Key", 2 "optional": false, 3 "fields": [ 4 { "name": "id", "index": "0", "schema": { "type": "INT32", "optional": "false" } } ] }, "payload": { 5 "id": "1" }, }
Item | Field name | Description |
---|---|---|
1 |
|
The schema portion of the key specifies a Kafka Connect schema that describes what is in the key’s |
2 |
|
Name of the schema that defines the structure of the key’s payload. This schema describes the structure of the primary key for the table that was changed. Key schema names have the format connector-name.database-name.table-name.
|
3 |
|
Indicates whether the event key must contain a value in its |
4 |
|
Specifies each field that is expected in the |
5 |
|
| Contains the key for the row for which this change event was generated. In this example, the key contains a single |
Although the column.blacklist
and column.whitelist
connector configuration properties allow you to capture only a subset of table columns, all columns in a primary or unique key are always included in the event’s key.
If the table does not have a primary or unique key, then the change event’s key is null. The rows in a table without a primary or unique key constraint cannot be uniquely identified.
3.3.2. About values in Debezium PostgreSQL change events
The value in a change event is a bit more complicated than the key. Like the key, the value has a schema
section and a payload
section. The schema
section contains the schema that describes the Envelope
structure of the payload
section, including its nested fields. Change events for operations that create, update or delete data all have a value payload with an envelope structure.
Consider the same sample table that was used to show an example of a change event key:
CREATE TABLE customers ( id SERIAL, first_name VARCHAR(255) NOT NULL, last_name VARCHAR(255) NOT NULL, email VARCHAR(255) NOT NULL, PRIMARY KEY(id) );
The value portion of a change event for a change to this table varies according to the REPLICA IDENTITY
setting and the operation that the event is for.
Details follow in these sections:
Replica identity
REPLICA IDENTITY is a PostgreSQL-specific table-level setting that determines the amount of information that is available to the logical decoding plug-in for UPDATE
and DELETE
events. More specifically, the setting of REPLICA IDENTITY
controls what (if any) information is available for the previous values of the table columns involved, whenever an UPDATE
or DELETE
event occurs.
There are 4 possible values for REPLICA IDENTITY
:
DEFAULT
- The default behavior is thatUPDATE
andDELETE
events contain the previous values for the primary key columns of a table if that table has a primary key. For anUPDATE
event, only the primary key columns with changed values are present.If a table does not have a primary key, the connector does not emit
UPDATE
orDELETE
events for that table. For a table without a primary key, the connector emits only create events. Typically, a table without a primary key is used for appending messages to the end of the table, which means thatUPDATE
andDELETE
events are not useful.-
NOTHING
- Emitted events forUPDATE
andDELETE
operations do not contain any information about the previous value of any table column. -
FULL
- Emitted events forUPDATE
andDELETE
operations contain the previous values of all columns in the table. -
INDEX
index-name - Emitted events forUPDATE
andDELETE
operations contain the previous values of the columns contained in the specified index.UPDATE
events also contain the indexed columns with the updated values.
create events
The following example shows the value portion of a change event that the connector generates for an operation that creates data in the customers
table:
{ "schema": { 1 "type": "struct", "fields": [ { "type": "struct", "fields": [ { "type": "int32", "optional": false, "field": "id" }, { "type": "string", "optional": false, "field": "first_name" }, { "type": "string", "optional": false, "field": "last_name" }, { "type": "string", "optional": false, "field": "email" } ], "optional": true, "name": "PostgreSQL_server.inventory.customers.Value", 2 "field": "before" }, { "type": "struct", "fields": [ { "type": "int32", "optional": false, "field": "id" }, { "type": "string", "optional": false, "field": "first_name" }, { "type": "string", "optional": false, "field": "last_name" }, { "type": "string", "optional": false, "field": "email" } ], "optional": true, "name": "PostgreSQL_server.inventory.customers.Value", "field": "after" }, { "type": "struct", "fields": [ { "type": "string", "optional": false, "field": "version" }, { "type": "string", "optional": false, "field": "connector" }, { "type": "string", "optional": false, "field": "name" }, { "type": "int64", "optional": false, "field": "ts_ms" }, { "type": "boolean", "optional": true, "default": false, "field": "snapshot" }, { "type": "string", "optional": false, "field": "db" }, { "type": "string", "optional": false, "field": "schema" }, { "type": "string", "optional": false, "field": "table" }, { "type": "int64", "optional": true, "field": "txId" }, { "type": "int64", "optional": true, "field": "lsn" }, { "type": "int64", "optional": true, "field": "xmin" } ], "optional": false, "name": "io.debezium.connector.postgresql.Source", 3 "field": "source" }, { "type": "string", "optional": false, "field": "op" }, { "type": "int64", "optional": true, "field": "ts_ms" } ], "optional": false, "name": "PostgreSQL_server.inventory.customers.Envelope" 4 }, "payload": { 5 "before": null, 6 "after": { 7 "id": 1, "first_name": "Anne", "last_name": "Kretchmar", "email": "annek@noanswer.org" }, "source": { 8 "version": "1.2.4.Final", "connector": "postgresql", "name": "PostgreSQL_server", "ts_ms": 1559033904863, "snapshot": true, "db": "postgres", "schema": "public", "table": "customers", "txId": 555, "lsn": 24023128, "xmin": null }, "op": "c", 9 "ts_ms": 1559033904863 10 } }
Item | Field name | Description |
---|---|---|
1 |
| The value’s schema, which describes the structure of the value’s payload. A change event’s value schema is the same in every change event that the connector generates for a particular table. |
2 |
|
In the |
3 |
|
|
4 |
|
|
5 |
|
The value’s actual data. This is the information that the change event is providing. |
6 |
|
An optional field that specifies the state of the row before the event occurred. When the Note
Whether or not this field is available is dependent on the |
7 |
|
An optional field that specifies the state of the row after the event occurred. In this example, the |
8 |
| Mandatory field that describes the source metadata for the event. This field contains information that you can use to compare this event with other events, with regard to the origin of the events, the order in which the events occurred, and whether events were part of the same transaction. The source metadata includes:
|
9 |
|
Mandatory string that describes the type of operation that caused the connector to generate the event. In this example,
|
10 |
|
Optional field that displays the time at which the connector processed the event. The time is based on the system clock in the JVM running the Kafka Connect task. |
update events
The value of a change event for an update in the sample customers
table has the same schema as a create event for that table. Likewise, the event value’s payload has the same structure. However, the event value payload contains different values in an update event. Here is an example of a change event value in an event that the connector generates for an update in the customers
table:
{ "schema": { ... }, "payload": { "before": { 1 "id": 1 }, "after": { 2 "id": 1, "first_name": "Anne Marie", "last_name": "Kretchmar", "email": "annek@noanswer.org" }, "source": { 3 "version": "1.2.4.Final", "connector": "postgresql", "name": "PostgreSQL_server", "ts_ms": 1559033904863, "snapshot": null, "db": "postgres", "schema": "public", "table": "customers", "txId": 556, "lsn": 24023128, "xmin": null }, "op": "u", 4 "ts_ms": 1465584025523 5 } }
Item | Field name | Description |
---|---|---|
1 |
|
An optional field that contains values that were in the row before the database commit. In this example, only the primary key column, |
2 |
|
An optional field that specifies the state of the row after the event occurred. In this example, the |
3 |
|
Mandatory field that describes the source metadata for the event. The
|
4 |
|
Mandatory string that describes the type of operation. In an update event value, the |
5 |
|
Optional field that displays the time at which the connector processed the event. The time is based on the system clock in the JVM running the Kafka Connect task. |
Updating the columns for a row’s primary/unique key changes the value of the row’s key. When a key changes, Debezium outputs three events: a DELETE
event and a tombstone event with the old key for the row, followed by an event with the new key for the row. Details are in the next section.
Primary key updates
An UPDATE
operation that changes a row’s primary key field(s) is known as a primary key change. For a primary key change, in place of sending an UPDATE
event record, the connector sends a DELETE
event record for the old key and a CREATE
event record for the new (updated) key. These events have the usual structure and content, and in addition, each one has a message header related to the primary key change:
-
The
DELETE
event record has__debezium.newkey
as a message header. The value of this header is the new primary key for the updated row. -
The
CREATE
event record has__debezium.oldkey
as a message header. The value of this header is the previous (old) primary key that the updated row had.
delete events
The value in a delete change event has the same schema
portion as create and update events for the same table. The payload
portion in a delete event for the sample customers
table looks like this:
{ "schema": { ... }, "payload": { "before": { 1 "id": 1 }, "after": null, 2 "source": { 3 "version": "1.2.4.Final", "connector": "postgresql", "name": "PostgreSQL_server", "ts_ms": 1559033904863, "snapshot": null, "db": "postgres", "schema": "public", "table": "customers", "txId": 556, "lsn": 46523128, "xmin": null }, "op": "d", 4 "ts_ms": 1465581902461 5 } }
Item | Field name | Description |
---|---|---|
1 |
|
Optional field that specifies the state of the row before the event occurred. In a delete event value, the |
2 |
|
Optional field that specifies the state of the row after the event occurred. In a delete event value, the |
3 |
|
Mandatory field that describes the source metadata for the event. In a delete event value, the
|
4 |
|
Mandatory string that describes the type of operation. The |
5 |
|
Optional field that displays the time at which the connector processed the event. The time is based on the system clock in the JVM running the Kafka Connect task. |
A delete change event record provides a consumer with the information it needs to process the removal of this row.
For a consumer to be able to process a delete event generated for a table that does not have a primary key, set the table’s REPLICA IDENTITY
to FULL
. When a table does not have a primary key and the table’s REPLICA IDENTITY
is set to DEFAULT
or NOTHING
, a delete event has no before
field.
PostgreSQL connector events are designed to work with Kafka log compaction. Log compaction enables removal of some older messages as long as at least the most recent message for every key is kept. This lets Kafka reclaim storage space while ensuring that the topic contains a complete data set and can be used for reloading key-based state.
Tombstone events
When a row is deleted, the delete event value still works with log compaction, because Kafka can remove all earlier messages that have that same key. However, for Kafka to remove all messages that have that same key, the message value must be null
. To make this possible, the PostgreSQL connector follows a delete event with a special tombstone event that has the same key but a null
value.
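If a downstream consumer cannot process null-valued tombstone records, tombstone emission can usually be turned off. The fragment below is a sketch that assumes the tombstones.on.delete connector property; keep in mind that without tombstones, Kafka log compaction cannot fully remove the messages for deleted keys:

"tombstones.on.delete": "false"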
3.4. How Debezium PostgreSQL connectors map data types
The PostgreSQL connector represents changes to rows with events that are structured like the table in which the row exists. The event contains a field for each column value. How that value is represented in the event depends on the PostgreSQL data type of the column. This section describes these mappings.
Details are in the following sections:
Basic types
The following table describes how the connector maps basic PostgreSQL data types to a literal type and a semantic type in event fields.
-
literal type describes how the value is literally represented using Kafka Connect schema types:
INT8
,INT16
,INT32
,INT64
,FLOAT32
,FLOAT64
,BOOLEAN
,STRING
,BYTES
,ARRAY
,MAP
, andSTRUCT
. - semantic type describes how the Kafka Connect schema captures the meaning of the field using the name of the Kafka Connect schema for the field.
PostgreSQL data type | Literal type (schema type) | Semantic type (schema name) |
---|---|---|
|
| n/a |
|
| n/a |
|
|
|
|
|
|
|
| n/a |
|
| n/a |
|
| n/a |
|
| n/a |
|
| n/a |
|
| n/a |
|
| n/a |
|
| n/a |
|
| n/a |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
n/a |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| n/a |
|
| n/a |
|
|
n/a |
|
|
n/a |
|
|
n/a |
|
|
n/a |
|
|
n/a |
|
|
n/a |
|
|
|
Temporal types
Other than PostgreSQL’s TIMESTAMPTZ
and TIMETZ
data types, which contain time zone information, how temporal types are mapped depends on the value of the time.precision.mode
connector configuration property. The following sections describe these mappings:
time.precision.mode=adaptive
When the time.precision.mode
property is set to adaptive
, the default, the connector determines the literal type and semantic type based on the column’s data type definition. This ensures that events exactly represent the values in the database.
PostgreSQL data type | Literal type (schema type) | Semantic type (schema name) |
---|---|---|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
time.precision.mode=adaptive_time_microseconds
When the time.precision.mode
configuration property is set to adaptive_time_microseconds
, the connector determines the literal type and semantic type for temporal types based on the column’s data type definition. This ensures that events exactly represent the values in the database, except all TIME
fields are captured as microseconds.
PostgreSQL data type | Literal type (schema type) | Semantic type (schema name) |
---|---|---|
|
|
|
|
|
|
|
|
|
|
|
|
time.precision.mode=connect
When the time.precision.mode
configuration property is set to connect
, the connector uses Kafka Connect logical types. This may be useful when consumers can handle only the built-in Kafka Connect logical types and are unable to handle variable-precision time values. However, since PostgreSQL supports microsecond precision, the events generated by a connector with the connect
time precision mode result in a loss of precision when the database column has a fractional second precision value that is greater than 3.
PostgreSQL data type | Literal type (schema type) | Semantic type (schema name) |
---|---|---|
|
|
|
|
|
|
|
|
|
TIMESTAMP type
The TIMESTAMP
type represents a timestamp without time zone information. Such columns are converted into an equivalent Kafka Connect value based on UTC. For example, the TIMESTAMP
value "2018-06-20 15:13:16.945104" is represented by an io.debezium.time.MicroTimestamp
with the value "1529507596945104" when time.precision.mode
is not set to connect
.
The timezone of the JVM running Kafka Connect and Debezium does not affect this conversion.
Decimal types
The setting of the PostgreSQL connector configuration property, decimal.handling.mode
determines how the connector maps decimal types.
When the decimal.handling.mode
property is set to precise
, the connector uses the Kafka Connect org.apache.kafka.connect.data.Decimal
logical type for all DECIMAL
and NUMERIC
columns. This is the default mode.
PostgreSQL data type | Literal type (schema type) | Semantic type (schema name) |
---|---|---|
|
|
|
|
|
|
There is an exception to this rule. When the NUMERIC
or DECIMAL
types are used without scale constraints, the values coming from the database have a different (variable) scale for each value. In this case, the connector uses io.debezium.data.VariableScaleDecimal
, which contains both the value and the scale of the transferred value.
PostgreSQL data type | Literal type (schema type) | Semantic type (schema name) |
---|---|---|
|
|
|
|
|
|
When the decimal.handling.mode
property is set to double
, the connector represents all DECIMAL
and NUMERIC
values as Java double values and encodes them as shown in the following table.
PostgreSQL data type | Literal type (schema type) | Semantic type (schema name) |
---|---|---|
|
| |
|
|
The last possible setting for the decimal.handling.mode
configuration property is string
. In this case, the connector represents DECIMAL
and NUMERIC
values as their formatted string representation, and encodes them as shown in the following table.
PostgreSQL data type | Literal type (schema type) | Semantic type (schema name) |
---|---|---|
|
| |
|
|
PostgreSQL supports NaN
(not a number) as a special value to be stored in DECIMAL
/NUMERIC
values when the setting of decimal.handling.mode
is string
or double
. In this case, the connector encodes NaN
as either Double.NaN
or the string constant NAN
.
HSTORE type
When the hstore.handling.mode
connector configuration property is set to json
(the default), the connector represents HSTORE
values as string representations of JSON values and encodes them as shown in the following table. When the hstore.handling.mode
property is set to map
, the connector uses the MAP
schema type for HSTORE
values.
PostgreSQL data type | Literal type (schema type) | Semantic type (schema name) |
---|---|---|
|
|
|
|
|
n/a |
Domain types
PostgreSQL supports user-defined types that are based on other underlying types. When such column types are used, Debezium exposes the column’s representation based on the full type hierarchy.
Capturing changes in columns that use PostgreSQL domain types requires special consideration. When a column is defined to contain a domain type that extends one of the default database types and the domain type defines a custom length or scale, the generated schema inherits that defined length or scale.
When a column is defined to contain a domain type that extends another domain type that defines a custom length or scale, the generated schema does not inherit the defined length or scale because that information is not available in the PostgreSQL driver’s column metadata.
Network address types
PostgreSQL has data types that can store IPv4, IPv6, and MAC addresses. It is better to use these types instead of plain text types to store network addresses. Network address types offer input error checking and specialized operators and functions.
PostgreSQL data type | Literal type (schema type) | Semantic type (schema name) |
---|---|---|
|
|
n/a |
|
|
n/a |
|
|
n/a |
|
|
n/a |
PostGIS types
The PostgreSQL connector supports all PostGIS data types.
PostGIS data type | Literal type (schema type) | Semantic type (schema name) |
---|---|---|
|
|
For format details, see Open Geospatial Consortium Simple Features Access specification. |
|
|
For format details, see Open Geospatial Consortium Simple Features Access specification. |
Toasted values
PostgreSQL has a hard limit on the page size. This means that values that are larger than around 8 KB need to be stored by using TOAST storage (https://www.postgresql.org/docs/current/storage-toast.html). This impacts replication messages that are coming from the database. Values that were stored by using the TOAST mechanism and that have not been changed are not included in the message, unless they are part of the table’s replica identity. There is no safe way for Debezium to read the missing value out-of-band directly from the database, as this would potentially lead to race conditions. Consequently, Debezium follows these rules to handle toasted values:
-
Tables with REPLICA IDENTITY FULL
- TOAST column values are part of the before and after fields in change events just like any other column. See the example that follows this list.
-
Tables with REPLICA IDENTITY DEFAULT
- When receiving an UPDATE event from the database, any unchanged TOAST column value that is not part of the replica identity is not contained in the event. Similarly, when receiving a DELETE event, no TOAST columns, if any, are in the before field. As Debezium cannot safely provide the column value in this case, the connector returns a placeholder value as defined by the connector configuration property, toasted.value.placeholder.
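For example, to make TOAST column values always appear in the before and after fields for a particular table, you can change that table’s replica identity. This is a minimal sketch; the customers table name is illustrative:
-- Log all column values, including TOASTed ones, for UPDATE and DELETE operations
ALTER TABLE customers REPLICA IDENTITY FULL;

-- Verify the setting: 'f' means full, 'd' means default
SELECT relname, relreplident FROM pg_class WHERE relname = 'customers';
Note that REPLICA IDENTITY FULL causes PostgreSQL to write the complete old row to the WAL for every update and delete, which increases WAL volume.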
3.5. Setting up PostgreSQL to run a Debezium connector
This release of Debezium supports only the native pgoutput
logical replication stream. To set up PostgreSQL so that it uses the pgoutput
plug-in, you must enable a replication slot, and configure a user with sufficient privileges to perform the replication.
Details are in the following topics:
3.5.1. Configuring a replication slot for the Debezium pgoutput
plug-in
PostgreSQL’s logical decoding uses replication slots. To configure a replication slot, specify the following in the postgresql.conf
file:
wal_level=logical
max_wal_senders=1
max_replication_slots=1
These settings instruct the PostgreSQL server as follows:
-
wal_level
- Use logical decoding with the write-ahead log. -
max_wal_senders
- Use a maximum of one separate process for processing WAL changes. -
max_replication_slots
- Allow a maximum of one replication slot to be created for streaming WAL changes.
Replication slots are guaranteed to retain all WAL entries that are required for Debezium even during Debezium outages. Consequently, it is important to closely monitor replication slots to avoid:
- Too much disk consumption
- Any conditions, such as catalog bloat, that can happen if a replication slot stays unused for too long
For more information, see the PostgreSQL documentation for replication slots.
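For example, a monitoring query along the following lines shows whether the Debezium slot is active and roughly how much WAL it is retaining. This is a sketch; the slot name debezium is the connector’s default but might differ in your configuration, and pg_current_wal_lsn and pg_wal_lsn_diff require PostgreSQL 10 or later:
-- Check slot status and an estimate of the WAL retained for the slot
SELECT slot_name,
       active,
       restart_lsn,
       confirmed_flush_lsn,
       pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn) AS retained_wal_bytes
FROM pg_replication_slots
WHERE slot_name = 'debezium';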
Familiarity with the mechanics and configuration of the PostgreSQL write-ahead log is helpful for using the Debezium PostgreSQL connector.
3.5.2. Setting up PostgreSQL permissions required by Debezium connectors
Setting up a PostgreSQL server to run a Debezium connector requires a database user who can perform replications. Replication can be performed only by a database user who has appropriate permissions and only for a configured number of hosts. Also, you must configure the PostgreSQL server to allow replication to take place between the server machine and the host on which the PostgreSQL connector is running.
Prerequisites
- PostgreSQL administrative permissions.
Procedure
To give replication permissions to a user, define a PostgreSQL role that has at least the
REPLICATION
and LOGIN
permissions. For example:
CREATE ROLE name REPLICATION LOGIN;
By default, superusers have both of the above roles.
Configure the PostgreSQL server to allow replication to take place between the server machine and the host on which the PostgreSQL connector is running.
pg_hba.conf
file example:
...
local   replication   <youruser>                  trust   1
host    replication   <youruser>   127.0.0.1/32   trust   2
host    replication   <youruser>   ::1/128        trust   3
...
Table 3.18. Description of entries
Item | Description |
---|---|
1 | Instructs the server to allow replication for <youruser> locally, that is, on the server machine. |
2 | Instructs the server to allow <youruser> on localhost to receive replication changes using IPv4. |
3 | Instructs the server to allow <youruser> on localhost to receive replication changes using IPv6. |
For more information about network masks, see the PostgreSQL documentation.
3.5.3. Configuring PostgreSQL to manage Debezium WAL disk space consumption
In certain cases, it is possible for PostgreSQL disk space consumed by WAL files to spike or increase out of usual proportions. There are several possible reasons for this situation, which are described in the list that follows this note:
The LSN up to which the connector has received data is available in the
confirmed_flush_lsn
column of the server’s pg_replication_slots
view. Data that is older than this LSN is no longer available, and the database is responsible for reclaiming the disk space.
Also in the
pg_replication_slots
view, the restart_lsn
column contains the LSN of the oldest WAL that the connector might require. If the value for confirmed_flush_lsn
is regularly increasing and the value of restart_lsn
lags, then the database needs to reclaim the space.
The database typically reclaims disk space in batch blocks. This is expected behavior and no action by a user is necessary.
-
There are many updates in a database that is being tracked but only a tiny number of updates are related to the table(s) and schema(s) for which the connector is capturing changes. This situation can be easily solved with periodic heartbeat events. Set the
heartbeat.interval.ms
connector configuration property.
-
The PostgreSQL instance contains multiple databases and one of them is a high-traffic database. Debezium captures changes in another database that is low-traffic in comparison to the other database. Debezium then cannot confirm the LSN as replication slots work per-database and Debezium is not invoked. As WAL is shared by all databases, the amount used tends to grow until an event is emitted by the database for which Debezium is capturing changes. To overcome this, it is necessary to:
-
Enable periodic heartbeat record generation with the
heartbeat.interval.ms
connector configuration property.
- Regularly emit change events from the database for which Debezium is capturing changes.
A separate process would then periodically update the table by either inserting a new row or repeatedly updating the same row. PostgreSQL then invokes Debezium, which confirms the latest LSN and allows the database to reclaim the WAL space. This task can be automated by means of the
heartbeat.action.query
connector configuration property. An example follows.
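The following sketch shows one possible setup. The debezium_heartbeat table is an arbitrary choice and is not required by Debezium; only the heartbeat.interval.ms and heartbeat.action.query property names come from this documentation:
-- In the database that Debezium captures, create a single-row heartbeat table
CREATE TABLE debezium_heartbeat (id INTEGER PRIMARY KEY, last_heartbeat TIMESTAMPTZ);
INSERT INTO debezium_heartbeat (id, last_heartbeat) VALUES (1, now());

-- Statement that could be supplied as the value of heartbeat.action.query;
-- each execution writes a change to this database, which lets the connector
-- confirm the latest LSN so that the database can reclaim WAL space
UPDATE debezium_heartbeat SET last_heartbeat = now() WHERE id = 1;
Remember to also set heartbeat.interval.ms so that the connector emits heartbeat messages and runs the query.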
3.6. Deploying and managing Debezium PostgreSQL connectors
To deploy a Debezium PostgreSQL connector, add the connector files to Kafka Connect, create a custom container to run the connector, and add connector configuration to your container. Details are in the following topics:
3.6.1. Deploying Debezium PostgreSQL connectors
To deploy a Debezium PostgreSQL connector, you need to build a custom Kafka Connect container image that contains the Debezium connector archive and push this container image to a container registry. You then need to create two custom resources (CRs):
-
A
KafkaConnect
CR that configures your Kafka Connect instance and that specifies the name of the image that you created to run your Debezium connector. You apply this CR to the OpenShift Kafka instance.
A
KafkaConnector
CR that configures your Debezium PostgreSQL connector. You apply this CR to the OpenShift instance where Red Hat AMQ Streams is deployed.
Prerequisites
- PostgreSQL is running and you performed the steps to set up PostgreSQL to run a Debezium connector.
- Red Hat AMQ Streams was used to set up and start running Apache Kafka and Kafka Connect on OpenShift. AMQ Streams offers operators and images that bring Kafka to OpenShift.
- Podman or Docker is installed.
-
You have an account and permissions to create and manage containers in the container registry (such as
quay.io
or docker.io
) to which you plan to add the container that will run your Debezium connector.
Procedure
Create the Debezium PostgreSQL container for Kafka Connect:
- Download the Debezium PostgreSQL connector archive.
Extract the Debezium PostgreSQL connector archive to create a directory structure for the connector plug-in, for example:
./my-plugins/
├── debezium-connector-postgresql
│   ├── ...
Create a Docker file that uses
registry.redhat.io/amq7/amq-streams-kafka-25-rhel7:1.5.0
as the base image. For example, from a terminal window, enter the following:
cat <<EOF >debezium-container-for-postgresql.yaml 1
FROM registry.redhat.io/amq7/amq-streams-kafka-25-rhel7:1.5.0
USER root:root
COPY ./my-plugins/ /opt/kafka/plugins/ 2
USER 1001
EOF
(1) - You can specify any file name that you want.
(2) - Replace
my-plugins
with the name of your plug-ins directory.
The command creates a Docker file with the name
debezium-container-for-postgresql.yaml
in the current directory.
Build the container image from the
debezium-container-for-postgresql.yaml
Docker file that you created in the previous step. From the directory that contains the file, run the following command:
podman build -t debezium-container-for-postgresql:latest .
This command builds a container image with the name
debezium-container-for-postgresql
.
Push your custom image to a container registry such as
quay.io
or any internal container registry. Ensure that this registry is reachable from your OpenShift instance. For example:
podman push debezium-container-for-postgresql:latest
Create a new Debezium PostgreSQL
KafkaConnect
custom resource (CR). For example, create a KafkaConnect
CR with the name dbz-connect.yaml that specifies annotations and image properties as shown in the following example:
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaConnect
metadata:
  name: my-connect-cluster
  annotations:
    strimzi.io/use-connector-resources: "true" 1
spec:
  image: debezium-container-for-postgresql 2
(1) -
metadata.annotations
indicates to the Cluster Operator that KafkaConnector
resources are used to configure connectors in this Kafka Connect cluster.
(2) -
spec.image
specifies the name of the image that you created to run your Debezium connector. This property overrides the STRIMZI_DEFAULT_KAFKA_CONNECT_IMAGE
variable in the Cluster Operator.
Apply your
KafkaConnect
CR to the OpenShift Kafka instance by running the following command:
oc create -f dbz-connect.yaml
This updates your Kafka Connect environment in OpenShift to add a Kafka Connector instance that specifies the name of the image that you created to run your Debezium connector.
Create a
KafkaConnector
custom resource that configures your Debezium PostgreSQL connector instance.
You configure a Debezium PostgreSQL connector in a
.yaml
file that sets connector configuration properties. A connector configuration might instruct Debezium to produce events for a subset of the schemas and tables, or it might set properties so that Debezium ignores, masks, or truncates values in specified columns that are sensitive, too large, or not needed. See the complete list of PostgreSQL connector properties that can be specified in these configurations.
The following example configures a Debezium connector that connects to a PostgreSQL server host,
192.168.99.100
, on port 5432
. This host has a database named sampledb, a schema named public, and fulfillment is the server’s logical name.
fulfillment-connector.yaml
apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaConnector
metadata:
  name: fulfillment-connector 1
  labels:
    strimzi.io/cluster: my-connect-cluster
spec:
  class: io.debezium.connector.postgresql.PostgresConnector
  tasksMax: 1 2
  config: 3
    database.hostname: 192.168.99.100 4
    database.port: 5432
    database.user: debezium
    database.password: dbz
    database.dbname: sampledb
    database.server.name: fulfillment 5
    schema.include.list: public 6
    plugin.name: pgoutput 7
(1) - The name of the connector.
(2) - Only one task should operate at any one time. Because the PostgreSQL connector reads the PostgreSQL server’s
write-ahead log (WAL)
, using a single connector task ensures proper order and event handling. The Kafka Connect service uses connectors to start one or more tasks that do the work, and it automatically distributes the running tasks across the cluster of Kafka Connect services. If any of the services stop or crash, those tasks will be redistributed to running services.
(3) - The connector’s configuration.
(4) - The name of the database host that is running the PostgreSQL server. In this example, the database host name is
192.168.99.100
.
(5) - A unique server name. The server name is the logical identifier for the PostgreSQL server or cluster of servers. This name is used as the prefix for all Kafka topics that receive change event records.
(6) - The connector captures changes in only the
public
schema. It is possible to configure the connector to capture changes in only the tables that you choose. See the table.include.list
connector configuration property.
(7) - The name of the PostgreSQL logical decoding plug-in installed on the PostgreSQL server. While the only supported value for PostgreSQL 10 and later is
pgoutput
, you must explicitly set plugin.name
to pgoutput
.
Create your connector instance with Kafka Connect. For example, if you saved your
KafkaConnector
resource in the fulfillment-connector.yaml
file, you would run the following command:
oc apply -f fulfillment-connector.yaml
This registers
fulfillment-connector
and the connector starts to run against the sampledb
database as defined in the KafkaConnector
CR.
Verify that the connector was created and has started:
Display the Kafka Connect log output to verify that the connector was created and has started to capture changes in the specified database:
oc logs $(oc get pods -o name -l strimzi.io/cluster=my-connect-cluster)
Review the log output to verify that the initial snapshot has been executed. You should see something like this:
...
INFO Starting snapshot for ...
...
INFO Snapshot is using user 'debezium' ...
If the connector starts correctly without errors, it creates a topic for each table whose changes the connector is capturing. For the example CR, there would be a topic for each table in the
public
schema. Downstream applications can subscribe to these topics.
Verify that the connector created the topics by running the following command:
oc get kafkatopics
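Topic names for this connector take the form serverName.schemaName.tableName. For example, if the sampledb database contained a hypothetical customers table in the public schema, you could filter the output for the connector’s topics and would expect to see an entry such as fulfillment.public.customers:
oc get kafkatopics | grep fulfillment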
Results
When the connector starts, it performs a consistent snapshot of the PostgreSQL server databases that the connector is configured for. The connector then starts generating data change events for row-level operations and streaming change event records to Kafka topics.
3.6.2. Monitoring Debezium PostgreSQL connector performance
The Debezium PostgreSQL connector provides two types of metrics that are in addition to the built-in support for JMX metrics that Zookeeper, Kafka, and Kafka Connect provide.
- Snapshot metrics provide information about connector operation while performing a snapshot.
- Streaming metrics provide information about connector operation when the connector is capturing changes and streaming change event records.
Debezium monitoring documentation provides details for how to expose these metrics by using JMX.
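For example, once JMX is exposed on the Kafka Connect JVM (for instance, by setting the JMX_PORT environment variable before starting Kafka Connect), you might read the snapshot metrics described below with the JmxTool class that ships with Kafka. This is only a sketch; the port 9999 and the server name fulfillment are illustrative:
bin/kafka-run-class.sh kafka.tools.JmxTool \
  --jmx-url service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi \
  --object-name "debezium.postgres:type=connector-metrics,context=snapshot,server=fulfillment"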
3.6.2.1. Monitoring Debezium during snapshots of PostgreSQL databases
The MBean is debezium.postgres:type=connector-metrics,context=snapshot,server=<database.server.name>
.
Attributes | Type | Description |
---|---|---|
|
| The last snapshot event that the connector has read. |
|
| The number of milliseconds since the connector has read and processed the most recent event. |
|
| The total number of events that this connector has seen since last started or reset. |
|
| The number of events that have been filtered by whitelist or blacklist filtering rules configured on the connector. |
|
| The list of tables that are monitored by the connector. |
|
| The length of the queue used to pass events between the snapshotter and the main Kafka Connect loop. |
|
| The free capacity of the queue used to pass events between the snapshotter and the main Kafka Connect loop. |
|
| The total number of tables that are being included in the snapshot. |
|
| The number of tables that the snapshot has yet to copy. |
|
| Whether the snapshot was started. |
|
| Whether the snapshot was aborted. |
|
| Whether the snapshot completed. |
|
| The total number of seconds that the snapshot has taken so far, even if not complete. |
|
| Map containing the number of rows scanned for each table in the snapshot. Tables are incrementally added to the Map during processing. Updates every 10,000 rows scanned and upon completing a table. |
3.6.2.2. Monitoring Debezium PostgreSQL connector record streaming
The MBean is debezium.postgres:type=connector-metrics,context=streaming,server=<database.server.name>
.
Attributes | Type | Description |
---|---|---|
|
| The last streaming event that the connector has read. |
|
| The number of milliseconds since the connector has read and processed the most recent event. |
|
| The total number of events that this connector has seen since last started or reset. |
|
| The number of events that have been filtered by whitelist or blacklist filtering rules configured on the connector. |
|
| The list of tables that are monitored by the connector. |
|
| The length of the queue used to pass events between the streamer and the main Kafka Connect loop. |
|
| The free capacity of the queue used to pass events between the streamer and the main Kafka Connect loop. |
|
| Flag that denotes whether the connector is currently connected to the database server. |
|
| The number of milliseconds between the last change event’s timestamp and the connector processing it. The values will incorporate any differences between the clocks on the machines where the database server and the connector are running. |
|
| The number of processed transactions that were committed. |
|
| The coordinates of the last received event. |
|
| Transaction identifier of the last processed transaction. |
3.6.3. Description of Debezium PostgreSQL connector configuration properties
The Debezium PostgreSQL connector has many configuration properties that you can use to achieve the right connector behavior for your application. Many properties have default values. Information about the properties is organized as follows:
The following configuration properties are required unless a default value is available.
Property | Default | Description |
---|---|---|
Unique name for the connector. Attempting to register again with the same name will fail. This property is required by all Kafka Connect connectors. | ||
The name of the Java class for the connector. Always use a value of | ||
| The maximum number of tasks that should be created for this connector. The PostgreSQL connector always uses a single task and therefore does not use this value, so the default is always acceptable. | |
| The name of the PostgreSQL logical decoding plug-in installed on the PostgreSQL server.
The only supported value is | |
| The name of the PostgreSQL logical decoding slot that was created for streaming changes from a particular plug-in for a particular database/schema. The server uses this slot to stream events to the Debezium connector that you are configuring. Slot names must conform to PostgreSQL replication slot naming rules, which state: "Each replication slot has a name, which can contain lower-case letters, numbers, and the underscore character." | |
| Whether or not to delete the logical replication slot when the connector stops in a graceful, expected way. The default behavior is that the replication slot remains configured for the connector when the connector stops. When the connector restarts, having the same replication slot enables the connector to start processing where it left off.
Set to | |
|
The name of the PostgreSQL publication created for streaming changes when using pgoutput. This publication is created at start-up if it does not already exist and it includes all tables. Debezium then applies its own whitelist/blacklist filtering, if configured, to limit the publication to change events for the specific tables of interest. The connector user must have superuser permissions to create this publication, so it is usually preferable to create the publication before starting the connector for the first time. If the publication already exists, either for all tables or configured with a subset of tables, Debezium uses the publication as it is defined. |
IP address or hostname of the PostgreSQL database server. | ||
| Integer port number of the PostgreSQL database server. | |
Name of the PostgreSQL database user for connecting to the PostgreSQL database server. | ||
Password to use when connecting to the PostgreSQL database server. | ||
The name of the PostgreSQL database from which to stream the changes. | ||
Logical name that identifies and provides a namespace for the particular PostgreSQL database server or cluster in which Debezium is capturing changes. Only alphanumeric characters and underscores should be used in the database server logical name. The logical name should be unique across all other connectors, since it is used as a topic name prefix for all Kafka topics that receive records from this connector. | ||
An optional, comma-separated list of regular expressions that match names of schemas for which you want to capture changes. Any schema name not included in the whitelist is excluded from having its changes captured. By default, all non-system schemas have their changes captured. Do not also set the | ||
An optional, comma-separated list of regular expressions that match names of schemas for which you do not want to capture changes. Any schema whose name is not included in the blacklist has its changes captured, with the exception of system schemas. Do not also set the | ||
An optional, comma-separated list of regular expressions that match fully-qualified table identifiers for tables whose changes you want to capture. Any table not included in the whitelist does not have its changes captured. Each identifier is of the form schemaName.tableName. By default, the connector captures changes in every non-system table in each schema whose changes are being captured. Do not also set the | ||
An optional, comma-separated list of regular expressions that match fully-qualified table identifiers for tables whose changes you do not want to capture. Any table not included in the blacklist has its changes captured. Each identifier is of the form schemaName.tableName. Do not also set the | |
An optional, comma-separated list of regular expressions that match the fully-qualified names of columns that should be included in change event record values. Fully-qualified names for columns are of the form schemaName.tableName.columnName. Do not also set the | ||
An optional, comma-separated list of regular expressions that match the fully-qualified names of columns that should be excluded from change event record values. Fully-qualified names for columns are of the form schemaName.tableName.columnName. Do not also set the | ||
|
Time, date, and timestamps can be represented with different kinds of precision: | |
|
Specifies how the connector should handle values for | |
|
Specifies how the connector should handle values for | |
|
Specifies how the connector should handle values for | |
|
Whether to use an encrypted connection to the PostgreSQL server. Options include: | |
The path to the file that contains the SSL certificate for the client. See the PostgreSQL documentation for more information. | ||
The path to the file that contains the SSL private key of the client. See the PostgreSQL documentation for more information. | ||
The password to access the client private key from the file specified by | ||
The path to the file that contains the root certificate(s) against which the server is validated. See the PostgreSQL documentation for more information. | ||
| Enable TCP keep-alive probe to verify that the database connection is still alive. See the PostgreSQL documentation for more information. | |
|
Controls whether a tombstone event should be generated after a delete event. | |
n/a |
An optional, comma-separated list of regular expressions that match the fully-qualified names of character-based columns. Fully-qualified names for columns are of the form schemaName.tableName.columnName. In change event records, values in these columns are truncated if they are longer than the number of characters specified by length in the property name. You can specify multiple properties with different lengths in a single configuration. Length must be a positive integer, for example, | |
n/a |
An optional, comma-separated list of regular expressions that match the fully-qualified names of character-based columns. Fully-qualified names for columns are of the form schemaName.tableName.columnName. In change event values, the values in the specified table columns are replaced with length number of asterisk ( | |
n/a |
An optional, comma-separated list of regular expressions that match the fully-qualified names of character-based columns. Fully-qualified names for columns are of the form schemaName.tableName.columnName. In change event values, the values in the specified columns are replaced with pseudonyms. | |
n/a |
An optional, comma-separated list of regular expressions that match the fully-qualified names of columns. Fully-qualified names for columns are of the form databaseName.tableName.columnName, or databaseName.schemaName.tableName.columnName. | |
n/a |
An optional, comma-separated list of regular expressions that match the database-specific data type name for some columns. Fully-qualified data type names are of the form databaseName.tableName.typeName, or databaseName.schemaName.tableName.typeName. | |
empty string |
A semicolon separated list of tables with regular expressions that match table column names. The connector maps values in matching columns to key fields in change event records that it sends to Kafka topics. This is useful when a table does not have a primary key, or when you want to order change event records in a Kafka topic according to a field that is not a primary key. | |
all_tables |
Applies only when streaming changes by using the | |
bytes |
Specifies how binary ( |
The following advanced configuration properties have defaults that work in most situations and therefore rarely need to be specified in the connector’s configuration.
Property | Default | Description |
---|---|---|
|
Specifies the criteria for performing a snapshot when the connector starts: | |
| Positive integer value that specifies the maximum amount of time (in milliseconds) to wait to obtain table locks when performing a snapshot. If the connector cannot acquire table locks in this time interval, the snapshot fails. How the connector performs snapshots provides details. | |
Controls which table rows are included in snapshots. This property affects snapshots only. It does not affect events that are generated by the logical decoding plug-in. Specify a comma-separated list of fully-qualified table names in the form databaseName.tableName. | ||
|
Specifies how the connector should react to exceptions during processing of events: | |
| Positive integer value for the maximum size of the blocking queue. The connector places change events received from streaming replication in the blocking queue before writing them to Kafka. This queue can provide backpressure when, for example, writing records to Kafka is slower than it should be or Kafka is not available. | |
| Positive integer value that specifies the maximum size of each batch of events that the connector processes. | |
| Positive integer value that specifies the number of milliseconds the connector should wait for new change events to appear before it starts processing a batch of events. Defaults to 1000 milliseconds, or 1 second. | |
|
Specifies connector behavior when the connector encounters a field whose data type is unknown. The default behavior is that the connector omits the field from the change event and logs a warning. | |
A semicolon separated list of SQL statements that the connector executes when it establishes a JDBC connection to the database. To use a semicolon as a character and not as a delimiter, specify two consecutive semicolons, | ||
|
Controls how frequently the connector sends heartbeat messages to a Kafka topic. The default behavior is that the connector does not send heartbeat messages. | |
|
Controls the name of the topic to which the connector sends heartbeat messages. The topic name has this pattern: | |
Specifies a query that the connector executes on the source database when the connector sends a heartbeat message. | ||
|
Specify the conditions that trigger a refresh of the in-memory schema for a table. | |
An interval in milliseconds that the connector should wait before performing a snapshot when the connector starts. If you are starting multiple connectors in a cluster, this property is useful for avoiding snapshot interruptions, which might cause re-balancing of connectors. | ||
| During a snapshot, the connector reads table content in batches of rows. This property specifies the maximum number of rows in a batch. | |
Semicolon separated list of parameters to pass to the configured logical decoding plug-in. For example, | ||
| Indicates whether field names are sanitized to adhere to Avro naming requirements. | |
| If connecting to a replication slot fails, this is the maximum number of consecutive attempts to connect. | |
| The number of milliseconds to wait between retry attempts when the connector fails to connect to a replication slot. | |
|
Specifies the constant that the connector provides to indicate that the original value is a toasted value that is not provided by the database. If the setting of | |
|
Determines whether the connector generates events with transaction boundaries and enriches change event envelopes with transaction metadata. Specify |
Pass-through connector configuration properties
The connector also supports pass-through configuration properties that are used when creating the Kafka producer and consumer.
Be sure to consult the Kafka documentation for all of the configuration properties for Kafka producers and consumers. The PostgreSQL connector does use the new consumer configuration properties.
3.7. How Debezium PostgreSQL connectors handle faults and problems
Debezium is a distributed system that captures all changes in multiple upstream databases; it never misses or loses an event. When the system is operating normally or being managed carefully, Debezium provides exactly once delivery of every change event record.
If a fault does happen then the system does not lose any events. However, while it is recovering from the fault, it might repeat some change events. In these abnormal situations, Debezium, like Kafka, provides at least once delivery of change events.
Details are in the following sections:
Configuration and startup errors
In the following situations, the connector fails when trying to start, reports an error/exception in the log, and stops running:
- The connector’s configuration is invalid.
- The connector cannot successfully connect to PostgreSQL by using the specified connection parameters.
- The connector is restarting from a previously-recorded position in the PostgreSQL WAL (by using the LSN) and PostgreSQL no longer has that history available.
In these cases, the error message has details about the problem and possibly a suggested workaround. After you correct the configuration or address the PostgreSQL problem, restart the connector.
The PostgreSQL connector externally stores the last processed offset in the form of a PostgreSQL LSN. After a connector restarts and connects to a server instance, the connector communicates with the server to continue streaming from that particular offset. This offset is available as long as the Debezium replication slot remains intact. Never drop a replication slot on the primary server or you will lose data. See the next section for failure cases in which a slot has been removed.
Cluster failures
As of release 12, PostgreSQL allows logical replication slots only on primary servers. This means that you can point a Debezium PostgreSQL connector to only the active primary server of a database cluster. Also, replication slots themselves are not propagated to replicas. If the primary server goes down, a new primary must be promoted.
The new primary must have a replication slot that is configured for use by the pgoutput
plug-in and the database in which you want to capture changes. Only then can you point the connector to the new server and restart the connector.
There are important caveats when failovers occur and you should pause Debezium until you can verify that you have an intact replication slot that has not lost data. After a failover:
- There must be a process that re-creates the Debezium replication slot before allowing the application to write to the new primary. This is crucial. Without this process, your application can miss change events.
- You might need to verify that Debezium was able to read all changes in the slot before the old primary failed.
One reliable method of recovering and verifying whether any changes were lost is to recover a backup of the failed primary to the point immediately before it failed. While this can be administratively difficult, it allows you to inspect the replication slot for any unconsumed changes.
Kafka Connect process stops gracefully
Suppose that Kafka Connect is being run in distributed mode and a Kafka Connect process is stopped gracefully. Prior to shutting down that process, Kafka Connect migrates the process’s connector tasks to another Kafka Connect process in that group. The new connector tasks start processing exactly where the prior tasks stopped. There is a short delay in processing while the connector tasks are stopped gracefully and restarted on the new processes.
Kafka Connect process crashes
If the Kafka Connect process stops unexpectedly, any connector tasks it was running terminate without recording their most recently processed offsets. When Kafka Connect is being run in distributed mode, Kafka Connect restarts those connector tasks on other processes. However, PostgreSQL connectors resume from the last offset that was recorded by the earlier processes. This means that the new replacement tasks might generate some of the same change events that were processed just prior to the crash. The number of duplicate events depends on the offset flush period and the volume of data changes just before the crash.
Because there is a chance that some events might be duplicated during a recovery from failure, consumers should always anticipate some duplicate events. Debezium changes are idempotent, so a sequence of events always results in the same state.
In each change event record, Debezium connectors insert source-specific information about the origin of the event, including the PostgreSQL server’s time of the event, the ID of the server transaction, and the position in the write-ahead log where the transaction changes were written. Consumers can keep track of this information, especially the LSN, to determine whether an event is a duplicate.
Connector is stopped for a duration
If the connector is gracefully stopped, the database can continue to be used. Any changes are recorded in the PostgreSQL WAL. When the connector restarts, it resumes streaming changes where it left off. That is, it generates change event records for all database changes that were made while the connector was stopped.
A properly configured Kafka cluster is able to handle massive throughput. Kafka Connect is written according to Kafka best practices, and given enough resources a Kafka Connect connector can also handle very large numbers of database change events. Because of this, after being stopped for a while, when a Debezium connector restarts, it is very likely to catch up with the database changes that were made while it was stopped. How quickly this happens depends on the capabilities and performance of Kafka and the volume of changes being made to the data in PostgreSQL.
Chapter 4. Debezium connector for MongoDB
Debezium’s MongoDB connector tracks a MongoDB replica set or a MongoDB sharded cluster for document changes in databases and collections, recording those changes as events in Kafka topics. The connector automatically handles the addition or removal of shards in a sharded cluster, changes in membership of each replica set, elections within each replica set, and awaiting the resolution of communications problems.
4.1. Overview
MongoDB’s replication mechanism provides redundancy and high availability, and is the preferred way to run MongoDB in production. The MongoDB connector captures the changes in a replica set or sharded cluster.
A MongoDB replica set consists of a set of servers that all have copies of the same data, and replication ensures that all changes made by clients to documents on the replica set’s primary are correctly applied to the replica set’s other servers, called secondaries. MongoDB replication works by having the primary record the changes in its oplog (or operation log), and then each of the secondaries reads the primary’s oplog and applies in order all of the operations to their own documents. When a new server is added to a replica set, that server first performs a snapshot of all of the databases and collections on the primary, and then reads the primary’s oplog to apply all changes that might have been made since it began the snapshot. This new server becomes a secondary (and able to handle queries) when it catches up to the tail of the primary’s oplog.
The MongoDB connector uses this same replication mechanism, though it does not actually become a member of the replica set. Just like MongoDB secondaries, however, the connector always reads the oplog of the replica set’s primary. And, when the connector sees a replica set for the first time, it looks at the oplog to get the last recorded transaction and then performs a snapshot of the primary’s databases and collections. When all the data is copied, the connector then starts streaming changes from the position it read earlier from the oplog. Operations in the MongoDB oplog are idempotent, so no matter how many times the operations are applied, they result in the same end state.
As the MongoDB connector processes changes, it periodically records the position in the oplog where the event originated. When the MongoDB connector stops, it records the last oplog position that it processed, so that upon restart it simply begins streaming from that position. In other words, the connector can be stopped, upgraded or maintained, and restarted some time later, and it will pick up exactly where it left off without losing a single event. Of course, MongoDB’s oplogs are usually capped at a maximum size, which means that the connector should not be stopped for too long, or else some of the operations in the oplog might be purged before the connector has a chance to read them. In this case, upon restart the connector will detect the missing oplog operations, perform a snapshot, and then proceed with streaming the changes.
The MongoDB connector is also quite tolerant of changes in membership and leadership of the replica sets, of additions or removals of shards within a sharded cluster, and network problems that might cause communication failures. The connector always uses the replica set’s primary node to stream changes, so when the replica set undergoes an election and a different node becomes primary, the connector will immediately stop streaming changes, connect to the new primary, and start streaming changes using the new primary node. Likewise, if the connector experiences any problems communicating with the replica set primary, it will try to reconnect (using exponential backoff so as to not overwhelm the network or replica set) and continue streaming changes from where it last left off. In this way the connector is able to dynamically adjust to changes in replica set membership and to automatically handle communication failures.
Additional resources
4.2. Setting up MongoDB
The MongoDB connector uses MongoDB’s oplog to capture the changes, so the connector works only with MongoDB replica sets or with sharded clusters where each shard is a separate replica set. See the MongoDB documentation for setting up a replica set or sharded cluster. Also, be sure to understand how to enable access control and authentication with replica sets.
You must also have a MongoDB user that has the appropriate roles to read the admin
database where the oplog can be read. Additionally, the user must also be able to read the config
database in the configuration server of a sharded cluster and must have the listDatabases
privilege action.
4.3. Supported MongoDB topologies
The MongoDB connector can be used with a variety of MongoDB topologies.
4.3.1. MongoDB replica set
The MongoDB connector can capture changes from a single MongoDB replica set. Production replica sets require a minimum of three members.
To use the MongoDB connector with a replica set, provide the addresses of one or more replica set servers as seed addresses through the connector’s mongodb.hosts
property. The connector will use these seeds to connect to the replica set, and then once connected will get from the replica set the complete set of members and which member is primary. The connector will start a task to connect to the primary and capture the changes from the primary’s oplog. When the replica set elects a new primary, the task will automatically switch over to the new primary.
When MongoDB is fronted by a proxy (such as with Docker on OS X or Windows), then when a client connects to the replica set and discovers the members, the MongoDB client will exclude the proxy as a valid member and will attempt and fail to connect directly to the members rather than go through the proxy.
In such a case, set the connector’s optional mongodb.members.auto.discover
configuration property to false
to instruct the connector to forgo membership discovery and instead simply use the first seed address (specified via the mongodb.hosts
property) as the primary node. This might work, but it can still cause issues when an election occurs.
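For example, a connector configuration fragment for such a setup might look like the following. The replica set name rs0 and the address are illustrative:
config:
  mongodb.hosts: rs0/192.168.99.100:27017
  mongodb.members.auto.discover: "false"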
4.3.2. MongoDB sharded cluster
A MongoDB sharded cluster consists of:
- One or more shards, each deployed as a replica set;
- A separate replica set that acts as the cluster’s configuration server
-
One or more routers (also called
mongos
) to which clients connect and that route requests to the appropriate shards
To use the MongoDB connector with a sharded cluster, configure the connector with the host addresses of the configuration server replica set. When the connector connects to this replica set, it discovers that it is acting as the configuration server for a sharded cluster, discovers the information about each replica set used as a shard in the cluster, and will then start up a separate task to capture the changes from each replica set. If new shards are added to the cluster or existing shards are removed, the connector will automatically adjust its tasks accordingly.
4.3.3. MongoDB standalone server
The MongoDB connector is not capable of monitoring the changes of a standalone MongoDB server, since standalone servers do not have an oplog. The connector will work if the standalone server is converted to a replica set with one member.
MongoDB does not recommend running a standalone server in production.
4.4. How the MongoDB connector works
When a MongoDB connector is configured and deployed, it starts by connecting to the MongoDB servers at the seed addresses, and determines the details about each of the available replica sets. Since each replica set has its own independent oplog, the connector will try to use a separate task for each replica set. The connector can limit the maximum number of tasks it will use, and if not enough tasks are available the connector will assign multiple replica sets to each task, although the task will still use a separate thread for each replica set.
When running the connector against a sharded cluster, use a value of tasks.max
that is greater than the number of replica sets. This will allow the connector to create one task for each replica set, and will let Kafka Connect coordinate, distribute, and manage the tasks across all of the available worker processes.
4.4.1. Logical connector name
The connector configuration property mongodb.name
serves as a logical name for the MongoDB replica set or sharded cluster. The connector uses the logical name in a number of ways: as the prefix for all topic names, and as a unique identifier when recording the oplog position of each replica set.
You should give each MongoDB connector a unique logical name that meaningfully describes the source MongoDB system. We recommend that logical names begin with an alphabetic or underscore character, and that the remaining characters be alphanumeric or underscores.
4.4.2. Performing a snapshot
When a task starts up using a replica set, it uses the connector’s logical name and the replica set name to find an offset that describes the position where the connector previously stopped reading changes. If an offset can be found and it still exists in the oplog, then the task immediately proceeds with streaming changes, starting at the recorded offset position.
However, if no offset is found or if the oplog no longer contains that position, the task must first obtain the current state of the replica set contents by performing a snapshot. This process starts by recording the current position of the oplog and recording that as the offset (along with a flag that denotes a snapshot has been started). The task will then proceed to copy each collection, spawning as many threads as possible (up to the value of the initial.sync.max.threads
configuration property) to perform this work in parallel. The connector will record a separate read event for each document it sees, and that read event will contain the object’s identifier, the complete state of the object, and source information about the MongoDB replica set where the object was found. The source information will also include a flag that denotes the event was produced during a snapshot.
This snapshot will continue until it has copied all collections that match the connector’s filters. If the connector is stopped before the tasks' snapshots are completed, upon restart the connector begins the snapshot again.
Try to avoid task reassignment and reconfiguration while the connector is performing a snapshot of any replica sets. The connector does log messages with the progress of the snapshot. For utmost control, run a separate cluster of Kafka Connect for each connector.
4.4.3. Streaming changes
Once the connector task for a replica set has an offset, it uses the offset to determine the position in the oplog where it should start streaming changes. The task will then connect to the replica set’s primary node and start streaming changes from that position, processing all of the create, insert, and delete operations and converting them into Debezium change events. Each change event includes the position in the oplog where the operation was found, and the connector periodically records this as its most recent offset. The interval at which the offset is recorded is governed by offset.flush.interval.ms
, which is a Kafka Connect worker configuration property.
When the connector is stopped gracefully, the last offset processed is recorded so that, upon restart, the connector will continue exactly where it left off. If the connector’s tasks terminate unexpectedly, however, the tasks might have processed and generated events after the last recorded offset; upon restart, the connector begins at the last recorded offset, possibly generating some of the same events that were previously generated just prior to the crash.
When everything is operating nominally, Kafka consumers will actually see every message exactly once. However, when things go wrong Kafka can only guarantee consumers will see every message at least once. Therefore, your consumers need to anticipate seeing messages more than once.
As mentioned above, the connector tasks always use the replica set’s primary node to stream changes from the oplog, ensuring that the connector sees the most up-to-date operations possible and can capture the changes with lower latency than if secondaries were used instead. When the replica set elects a new primary, the connector immediately stops streaming changes, connects to the new primary, and starts streaming changes from the new primary node at the same position. Likewise, if the connector experiences any problems communicating with the replica set members, it tries to reconnect, by using exponential backoff so as not to overwhelm the replica set, and once connected it continues streaming changes from where it last left off. In this way, the connector is able to dynamically adjust to changes in replica set membership and automatically handle communication failures.
To summarize, the MongoDB connector continues running in most situations. Communication problems might cause the connector to wait until the problems are resolved.
4.4.4. Topic names
The MongoDB connector writes events for all insert, update, and delete operations on documents in each collection to a single Kafka topic. The name of each Kafka topic always takes the form logicalName.databaseName.collectionName, where logicalName is the logical name of the connector as specified with the mongodb.name
configuration property, databaseName is the name of the database where the operation occurred, and collectionName is the name of the MongoDB collection in which the affected document existed.
For example, consider a MongoDB replica set with an inventory
database that contains four collections: products
, products_on_hand
, customers
, and orders
. If the connector monitoring this database were given a logical name of fulfillment
, then the connector would produce events on these four Kafka topics:
-
fulfillment.inventory.products
-
fulfillment.inventory.products_on_hand
-
fulfillment.inventory.customers
-
fulfillment.inventory.orders
Notice that the topic names do not incorporate the replica set name or shard name. As a result, all changes to a sharded collection (where each shard contains a subset of the collection’s documents) go to the same Kafka topic.
You can set up Kafka to auto-create the topics as they are needed. If not, then you must use Kafka administration tools to create the topics before starting the connector.
4.4.5. Partitions
The MongoDB connector does not make any explicit determination of the topic partitions for events. Instead, it allows Kafka to determine the partition based on the key. You can change Kafka’s partitioning logic by defining the name of the Partitioner
implementation in the Kafka Connect worker configuration.
Kafka maintains total order only for events written to a single topic partition. Partitioning the events by key does mean that all events with the same key always go to the same partition. This ensures that all events for a specific document are always totally ordered.
4.4.6. Data change events
The Debezium MongoDB connector generates a data change event for each document-level operation that inserts, updates, or deletes data. Each event contains a key and a value. The structure of the key and the value depends on the collection that was changed.
Debezium and Kafka Connect are designed around continuous streams of event messages. However, the structure of these events may change over time, which can be difficult for consumers to handle. To address this, each event contains the schema for its content or, if you are using a schema registry, a schema ID that a consumer can use to obtain the schema from the registry. This makes each event self-contained.
The following skeleton JSON shows the basic four parts of a change event. However, how you configure the Kafka Connect converter that you choose to use in your application determines the representation of these four parts in change events. A schema
field is in a change event only when you configure the converter to produce it. Likewise, the event key and event payload are in a change event only if you configure a converter to produce them. If you use the JSON converter and you configure it to produce all four basic change event parts, change events have this structure:
{
  "schema": { 1
    ...
  },
  "payload": { 2
    ...
  },
  "schema": { 3
    ...
  },
  "payload": { 4
    ...
  },
}
Item | Field name | Description |
---|---|---|
1 |
|
The first |
2 |
|
The first |
3 |
|
The second |
4 |
|
The second |
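Whether the schema parts appear depends on how you configure the Kafka Connect converters, as described above. For example, a Kafka Connect worker configuration that makes the JSON converter emit both the schema and the payload might include properties such as the following (a minimal sketch; adjust the settings to your deployment):
key.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=true
value.converter=org.apache.kafka.connect.json.JsonConverter
value.converter.schemas.enable=true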
By default, the connector streams change event records to topics with names that are the same as the event’s originating collection. See topic names.
The MongoDB connector ensures that all Kafka Connect schema names adhere to the Avro schema name format. This means that the logical server name must start with a Latin letter or an underscore, that is, a-z, A-Z, or _. Each remaining character in the logical server name and each character in the database and collection names must be a Latin letter, a digit, or an underscore, that is, a-z, A-Z, 0-9, or _. If there is an invalid character it is replaced with an underscore character.
This can lead to unexpected conflicts if the logical server name, a database name, or a collection name contains invalid characters, and the only characters that distinguish names from one another are invalid and thus replaced with underscores.
4.4.6.1. Change event keys
A change event’s key contains the schema for the changed document’s key and the changed document’s actual key. For a given collection, both the schema and its corresponding payload contain a single id
field. The value of this field is the document’s identifier represented as a string that is derived from MongoDB extended JSON serialization strict mode.
Consider a connector with a logical name of fulfillment
, a replica set containing an inventory
database, and a customers
collection that contains documents such as the following.
Example document
{ "_id": 1004, "first_name": "Anne", "last_name": "Kretchmar", "email": "annek@noanswer.org" }
Example change event key
Every change event that captures a change to the customers
collection has the same event key schema. For as long as the customers
collection has the previous definition, every change event that captures a change to the customers
collection has the following key structure. In JSON, it looks like this:
{
  "schema": { 1
    "type": "struct",
    "name": "fulfillment.inventory.customers.Key", 2
    "optional": false, 3
    "fields": [ 4
      {
        "field": "id",
        "type": "string",
        "optional": false
      }
    ]
  },
  "payload": { 5
    "id": "1004"
  }
}
Item | Field name | Description |
---|---|---|
1 |
|
The schema portion of the key specifies a Kafka Connect schema that describes what is in the key’s |
2 |
|
Name of the schema that defines the structure of the key’s payload. This schema describes the structure of the key for the document that was changed. Key schema names have the format connector-name.database-name.collection-name.
|
3 |
|
Indicates whether the event key must contain a value in its |
4 |
|
Specifies each field that is expected in the |
5 |
|
Contains the key for the document for which this change event was generated. In this example, the key contains a single |
This example uses a document with an integer identifier, but any valid MongoDB document identifier works the same way, including a document identifier. For a document identifier, an event key’s payload.id
value is a string that represents the updated document’s original _id
field as a MongoDB extended JSON serialization that uses strict mode. The following table provides examples of how different types of _id
fields are represented.
Type | MongoDB _id Value | Key’s payload |
---|---|---|
Integer | 1234 |
|
Float | 12.34 |
|
String | "1234" |
|
Document |
|
|
ObjectId |
|
|
Binary |
|
|
4.4.6.2. Change event values
The value in a change event is a bit more complicated than the key. Like the key, the value has a schema
section and a payload
section. The schema
section contains the schema that describes the Envelope
structure of the payload
section, including its nested fields. Change events for operations that create, update or delete data all have a value payload with an envelope structure.
Consider the same sample document that was used to show an example of a change event key:
Example document
{ "_id": 1004, "first_name": "Anne", "last_name": "Kretchmar", "email": "annek@noanswer.org" }
The value portion of a change event for a change to this document is described for each event type:
4.4.6.3. create events
The following example shows the value portion of a change event that the connector generates for an operation that creates data in the customers
collection:
{ "schema": { 1 "type": "struct", "fields": [ { "type": "string", "optional": true, "name": "io.debezium.data.Json", 2 "version": 1, "field": "after" }, { "type": "string", "optional": true, "name": "io.debezium.data.Json", "version": 1, "field": "patch" }, { "type": "string", "optional": true, "name": "io.debezium.data.Json", "version": 1, "field": "filter" }, { "type": "struct", "fields": [ { "type": "string", "optional": false, "field": "version" }, { "type": "string", "optional": false, "field": "connector" }, { "type": "string", "optional": false, "field": "name" }, { "type": "int64", "optional": false, "field": "ts_ms" }, { "type": "boolean", "optional": true, "default": false, "field": "snapshot" }, { "type": "string", "optional": false, "field": "db" }, { "type": "string", "optional": false, "field": "rs" }, { "type": "string", "optional": false, "field": "collection" }, { "type": "int32", "optional": false, "field": "ord" }, { "type": "int64", "optional": true, "field": "h" } ], "optional": false, "name": "io.debezium.connector.mongo.Source", 3 "field": "source" }, { "type": "string", "optional": true, "field": "op" }, { "type": "int64", "optional": true, "field": "ts_ms" } ], "optional": false, "name": "dbserver1.inventory.customers.Envelope" 4 }, "payload": { 5 "after": "{\"_id\" : {\"$numberLong\" : \"1004\"},\"first_name\" : \"Anne\",\"last_name\" : \"Kretchmar\",\"email\" : \"annek@noanswer.org\"}", 6 "patch": null, "source": { 7 "version": "1.2.4.Final", "connector": "mongodb", "name": "fulfillment", "ts_ms": 1558965508000, "snapshot": false, "db": "inventory", "rs": "rs0", "collection": "customers", "ord": 31, "h": 1546547425148721999 }, "op": "c", 8 "ts_ms": 1558965515240 9 } }
Item | Field name | Description |
---|---|---|
1 |
| The value’s schema, which describes the structure of the value’s payload. A change event’s value schema is the same in every change event that the connector generates for a particular collection. |
2 |
|
In the |
3 |
|
|
4 |
|
|
5 |
|
The value’s actual data. This is the information that the change event is providing. |
6 |
|
An optional field that specifies the state of the document after the event occurred. In this example, the |
7 |
| Mandatory field that describes the source metadata for the event. This field contains information that you can use to compare this event with other events, with regard to the origin of the events, the order in which the events occurred, and whether events were part of the same transaction. The source metadata includes:
|
8 |
|
Mandatory string that describes the type of operation that caused the connector to generate the event. In this example,
|
9 |
|
Optional field that displays the time at which the connector processed the event. The time is based on the system clock in the JVM running the Kafka Connect task. |
4.4.6.4. update events
The value of a change event for an update in the sample customers
collection has the same schema as a create event for that collection. Likewise, the event value’s payload has the same structure. However, the event value payload contains different values in an update event. An update event does not have an after
value. Instead, it has these two fields:
-
patch
is a string field that contains the JSON representation of the idempotent update operation -
filter
is a string field that contains the JSON representation of the selection criteria for the update. Thefilter
string can include multiple shard key fields for sharded collections.
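For example, an update such as the following, issued in the mongo shell against the sample document (the collection and field names follow the earlier examples), would be captured with a patch string that contains the $set document and a filter string that matches on _id:
db.customers.updateOne(
    { _id: 1004 },
    { $set: { first_name: "Anne Marie" } }
)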
Here is an example of a change event value in an event that the connector generates for an update in the customers
collection:
{ "schema": { ... }, "payload": { "op": "u", 1 "ts_ms": 1465491461815, 2 "patch": "{\"$set\":{\"first_name\":\"Anne Marie\"}}", 3 "filter": "{\"_id\" : {\"$numberLong\" : \"1004\"}}", 4 "source": { 5 "version": "1.2.4.Final", "connector": "mongodb", "name": "fulfillment", "ts_ms": 1558965508000, "snapshot": true, "db": "inventory", "rs": "rs0", "collection": "customers", "ord": 6, "h": 1546547425148721999 } } }
Item | Field name | Description |
---|---|---|
1 |
|
Mandatory string that describes the type of operation that caused the connector to generate the event. In this example, |
2 |
|
Optional field that displays the time at which the connector processed the event. The time is based on the system clock in the JVM running the Kafka Connect task. |
3 |
|
Contains the JSON string representation of the actual MongoDB idempotent change to the document. In this example, the update changed the |
4 |
| Contains the JSON string representation of the MongoDB selection criteria that was used to identify the document to be updated. |
5 |
| Mandatory field that describes the source metadata for the event. This field contains the same information as a create event for the same collection, but the values are different since this event is from a different position in the oplog. The source metadata includes:
|
In a Debezium change event, MongoDB provides the content of the patch
field. The format of this field depends on the version of the MongoDB database. Consequently, be prepared for potential changes to the format when you upgrade to a newer MongoDB database version. Examples in this document were obtained from MongoDB 3.4. In your application, event formats might be different.
In MongoDB’s oplog, update events do not contain the before or after states of the changed document. Consequently, it is not possible for a Debezium connector to provide this information. However, a Debezium connector provides a document’s starting state in create and read events. Downstream consumers of the stream can reconstruct document state by keeping the latest state for each document and comparing the state in a new event with the saved state. Debezium connectors are not able to keep this state.
4.4.6.5. delete events
The value in a delete change event has the same schema
portion as create and update events for the same collection. The payload
portion in a delete event contains values that are different from create and update events for the same collection. In particular, a delete event contains neither an after
value nor a patch
value. Here is an example of a delete event for a document in the customers
collection:
{ "schema": { ... }, "payload": { "op": "d", 1 "ts_ms": 1465495462115, 2 "filter": "{\"_id\" : {\"$numberLong\" : \"1004\"}}", 3 "source": { 4 "version": "1.2.4.Final", "connector": "mongodb", "name": "fulfillment", "ts_ms": 1558965508000, "snapshot": true, "db": "inventory", "rs": "rs0", "collection": "customers", "ord": 6, "h": 1546547425148721999 } } }
Item | Field name | Description |
---|---|---|
1 |
|
Mandatory string that describes the type of operation. The |
2 |
|
Optional field that displays the time at which the connector processed the event. The time is based on the system clock in the JVM running the Kafka Connect task. |
3 |
| Contains the JSON string representation of the MongoDB selection criteria that was used to identify the document to be deleted. |
4 |
| Mandatory field that describes the source metadata for the event. This field contains the same information as a create or update event for the same collection, but the values are different since this event is from a different position in the oplog. The source metadata includes:
|
MongoDB connector events are designed to work with Kafka log compaction. Log compaction enables removal of some older messages as long as at least the most recent message for every key is kept. This lets Kafka reclaim storage space while ensuring that the topic contains a complete data set and can be used for reloading key-based state.
Tombstone events
All MongoDB connector events for a uniquely identified document have exactly the same key. When a document is deleted, the delete event value still works with log compaction because Kafka can remove all earlier messages that have that same key. However, for Kafka to remove all messages that have that key, the message value must be null
. To make this possible, after Debezium’s MongoDB connector emits a delete event, the connector emits a special tombstone event that has the same key but a null
value. A tombstone event informs Kafka that all messages with that same key can be removed.
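If a downstream topic does not use log compaction and the extra null-valued records are unwanted, emission of tombstone events can be disabled with the tombstones.on.delete connector property, which is listed in the connector properties table. The following is a minimal, illustrative registration fragment; only the tombstones.on.delete line is the point of the sketch:
{
  "name": "inventory-connector",
  "config": {
    "connector.class": "io.debezium.connector.mongodb.MongoDbConnector",
    "tombstones.on.delete": "false"
  }
}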
4.4.7. Transaction Metadata
Debezium can generate events that represent transaction metadata boundaries and enrich data messages.
4.4.7.1. Transaction boundaries
Debezium generates events for every transaction BEGIN
and END
. Every event contains
-
status
-BEGIN
orEND
-
id
- string representation of unique transaction identifier -
event_count
(forEND
events) - total number of events emitted by the transaction -
data_collections
(forEND
events) - an array of pairs ofdata_collection
andevent_count
that provides the number of events emitted by changes originating from a given data collection
Following is an example of what a message looks like:
{ "status": "BEGIN", "id": "1462833718356672513", "event_count": null, "data_collections": null } { "status": "END", "id": "1462833718356672513", "event_count": 2, "data_collections": [ { "data_collection": "rs0.testDB.tablea", "event_count": 1 }, { "data_collection": "rs0.testDB.tableb", "event_count": 1 } ] }
The transaction events are written to the topic named <database.server.name>.transaction
.
4.4.7.2. Data events enrichment
When transaction metadata is enabled the data message Envelope
is enriched with a new transaction
field. This field provides information about every event in the form of a composite of fields:
-
id
- string representation of unique transaction identifier -
total_order
- the absolute position of the event among all events generated by the transaction -
data_collection_order
- the per-data collection position of the event among all events that were emitted by the transaction
Following is an example of what a message looks like:
{ "before": null, "after": { "pk": "2", "aa": "1" }, "source": { ... }, "op": "c", "ts_ms": "1580390884335", "transaction": { "id": "1462833718356672513", "total_order": "1", "data_collection_order": "1" } }
4.5. Deploying the MongoDB connector
To deploy a Debezium MongoDB connector, install the Debezium MongoDB connector archive, configure the connector, and start the connector by adding its configuration to Kafka Connect.
To install the MongoDB connector, follow the procedures in Installing Debezium on OpenShift. The main steps are:
- Use Red Hat AMQ Streams to set up Apache Kafka and Kafka Connect on OpenShift. AMQ Streams offers operators and images that bring Kafka to OpenShift.
- Download the Debezium MongoDB connector.
- Extract the connector files into your Kafka Connect environment.
Add the connector plug-in’s parent directory to your Kafka Connect
plugin.path
, for example:plugin.path=/kafka/connect
The above example assumes that you extracted the Debezium MongoDB connector to the
/kafka/connect/debezium-connector-mongodb
path.- Restart your Kafka Connect process to ensure that the new JAR files are picked up.
You also need to set up MongoDB to run a Debezium connector.
Additional resources
For more information about the deployment process, and deploying connectors with AMQ Streams, see the Debezium installation guides.
4.5.1. Example configuration
To use the connector to produce change events for a particular MongoDB replica set or sharded cluster, create a configuration file in JSON. When the connector starts, it will perform a snapshot of the collections in your MongoDB replica sets and start reading the replica sets' oplogs, producing events for every inserted, updated, and deleted document. Optionally filter out collections that are not needed.
Following is an example of the configuration for a MongoDB connector that monitors a MongoDB replica set rs0
at port 27017 on 192.168.99.100, which we logically name fulfillment
. Typically, you configure the Debezium MongoDB connector in a .yaml
file using the configuration properties available for the connector.
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaConnector
metadata:
  name: inventory-connector 1
  labels:
    strimzi.io/cluster: my-connect-cluster
spec:
  class: io.debezium.connector.mongodb.MongoDbConnector 2
  config:
    mongodb.hosts: rs0/192.168.99.100:27017 3
    mongodb.name: fulfillment 4
    collection.whitelist: inventory[.]* 5
Item | Description |
---|---|
1 | The name of our connector when we register it with a Kafka Connect service. |
2 | The name of the MongoDB connector class. |
3 | The host addresses to use to connect to the MongoDB replica set. |
4 | The logical name of the MongoDB replica set, which forms a namespace for generated events and is used in all the names of the Kafka topics to which the connector writes, the Kafka Connect schema names, and the namespaces of the corresponding Avro schema when the Avro converter is used. |
5 | A list of regular expressions that match the collection namespaces (for example, <dbName>.<collectionName>) of all collections to be monitored. This is optional. |
See the complete list of connector properties that can be specified in these configurations.
This configuration can be sent via POST to a running Kafka Connect service, which will then record the configuration and start up the one connector task that will connect to the MongoDB replica set or sharded cluster, assign tasks for each replica set, perform a snapshot if necessary, read the oplog, and record events to Kafka topics.
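For example, a registration request might look like the following; the Kafka Connect REST endpoint (localhost:8083) is an assumption for this sketch, and the connector configuration mirrors the example above:
curl -i -X POST -H "Accept:application/json" -H "Content-Type:application/json" \
    http://localhost:8083/connectors/ \
    -d '{
      "name": "inventory-connector",
      "config": {
        "connector.class": "io.debezium.connector.mongodb.MongoDbConnector",
        "mongodb.hosts": "rs0/192.168.99.100:27017",
        "mongodb.name": "fulfillment",
        "collection.whitelist": "inventory[.]*"
      }
    }'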
4.5.2. Adding connector configuration
You can use a provided Debezium container to deploy a Debezium MongoDB connector. In this procedure, you build a custom Kafka Connect container image for Debezium, configure the Debezium connector as needed, and then add your connector configuration to your Kafka Connect environment.
Prerequisites
- Podman or Docker is installed and you have sufficient rights to create and manage containers.
- You installed the Debezium MongoDB connector archive.
Procedure
Extract the Debezium MongoDB connector archive to create a directory structure for the connector plug-in, for example:
tree ./my-plugins/
./my-plugins/
├── debezium-connector-mongodb
│   ├── ...
Create and publish a custom image for running your Debezium connector:
Create a new
Dockerfile
by usingregistry.redhat.io/amq7/amq-streams-kafka-25-rhel7:1.5.0
as the base image. In the following example, you would replace my-plugins with the name of your plug-ins directory:
FROM registry.redhat.io/amq7/amq-streams-kafka-25-rhel7:1.5.0
USER root:root
COPY ./my-plugins/ /opt/kafka/plugins/
USER 1001
Before Kafka Connect starts running the connector, Kafka Connect loads any third-party plug-ins that are in the
/opt/kafka/plugins
directory.Build the container image. For example, if you saved the
Dockerfile
that you created in the previous step asdebezium-container-for-mongodb
, and if theDockerfile
is in the current directory, then you would run the following command:podman build -t debezium-container-for-mongodb:latest .
Push your custom image to your container registry, for example:
podman push debezium-container-for-mongodb:latest
Point to the new container image. Do one of the following:
Edit the
spec.image
property of the KafkaConnect
custom resource. If set, this property overrides the STRIMZI_DEFAULT_KAFKA_CONNECT_IMAGE
variable in the Cluster Operator. For example:
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaConnect
metadata:
  name: my-connect-cluster
spec:
  #...
  image: debezium-container-for-mongodb
-
In the
install/cluster-operator/050-Deployment-strimzi-cluster-operator.yaml
file, edit theSTRIMZI_DEFAULT_KAFKA_CONNECT_IMAGE
variable to point to the new container image and reinstall the Cluster Operator. If you edit this file you must apply it to your OpenShift cluster.
-
Create a
KafkaConnector
custom resource that defines your Debezium MongoDB connector instance. See the connector configuration example. Apply the connector instance, for example:
oc apply -f inventory-connector.yaml
This registers
inventory-connector
and the connector starts to run against theinventory
database.Verify that the connector was created and has started to capture changes in the specified database. You can verify the connector instance by watching the Kafka Connect log output as, for example,
inventory-connector
starts.Display the Kafka Connect log output:
oc logs $(oc get pods -o name -l strimzi.io/name=my-connect-cluster-connect)
Review the log output to verify that the initial snapshot has been executed. You should see something like the following lines:
... INFO Starting snapshot for ...
... INFO Snapshot is using user 'debezium' ...
Results
When the connector starts, it performs a consistent snapshot of the MongoDB databases that the connector is configured for. The connector then starts generating data change events for document-level operations and streaming change event records to Kafka topics.
4.5.3. Monitoring
The Debezium MongoDB connector has two metric types in addition to the built-in support for JMX metrics that Zookeeper, Kafka, and Kafka Connect have.
- snapshot metrics; for monitoring the connector when performing snapshots
- streaming metrics; for monitoring the connector when processing oplog events
Please refer to the monitoring documentation for details of how to expose these metrics via JMX.
4.5.3.1. Snapshot Metrics
The MBean is debezium.mongodb:type=connector-metrics,context=snapshot,server=<mongodb.name>
.
Attributes | Type | Description |
---|---|---|
|
| The last snapshot event that the connector has read. |
|
| The number of milliseconds since the connector has read and processed the most recent event. |
|
| The total number of events that this connector has seen since last started or reset. |
|
| The number of events that have been filtered by whitelist or blacklist filtering rules configured on the connector. |
|
| The list of tables that are monitored by the connector. |
|
| The length of the queue used to pass events between the snapshotter and the main Kafka Connect loop. |
|
| The free capacity of the queue used to pass events between the snapshotter and the main Kafka Connect loop. |
|
| The total number of tables that are being included in the snapshot. |
|
| The number of tables that the snapshot has yet to copy. |
|
| Whether the snapshot was started. |
|
| Whether the snapshot was aborted. |
|
| Whether the snapshot completed. |
|
| The total number of seconds that the snapshot has taken so far, even if not complete. |
|
| Map containing the number of rows scanned for each table in the snapshot. Tables are incrementally added to the Map during processing. Updates every 10,000 rows scanned and upon completing a table. |
The Debezium MongoDB connector also provides the following custom snapshot metrics:
Attribute | Type | Description |
---|---|---|
|
| Number of database disconnects. |
4.5.3.2. Streaming Metrics
The MBean is debezium.mongodb:type=connector-metrics,context=streaming,server=<mongodb.name>
.
Attributes | Type | Description |
---|---|---|
|
| The last streaming event that the connector has read. |
|
| The number of milliseconds since the connector has read and processed the most recent event. |
|
| The total number of events that this connector has seen since last started or reset. |
|
| The number of events that have been filtered by whitelist or blacklist filtering rules configured on the connector. |
|
| The list of tables that are monitored by the connector. |
|
| The length of the queue used to pass events between the streamer and the main Kafka Connect loop. |
|
| The free capacity of the queue used to pass events between the streamer and the main Kafka Connect loop. |
|
| Flag that denotes whether the connector is currently connected to the database server. |
|
| The number of milliseconds between the last change event’s timestamp and the connector processing it. The values will incorporate any differences between the clocks on the machines where the database server and the connector are running. |
|
| The number of processed transactions that were committed. |
|
| The coordinates of the last received event. |
|
| Transaction identifier of the last processed transaction. |
The Debezium MongoDB connector also provides the following custom streaming metrics:
Attribute | Type | Description |
---|---|---|
|
| Number of database disconnects. |
|
| Number of primary node elections. |
4.5.4. Connector properties
The following configuration properties are required unless a default value is available.
Property | Default | Description |
---|---|---|
Unique name for the connector. Attempting to register again with the same name will fail. (This property is required by all Kafka Connect connectors.) | ||
The name of the Java class for the connector. Always use a value of | ||
The comma-separated list of hostname and port pairs (in the form 'host' or 'host:port') of the MongoDB servers in the replica set. The list can contain a single hostname and port pair. If | ||
A unique name that identifies the connector and/or MongoDB replica set or sharded cluster that this connector monitors. Each server should be monitored by at most one Debezium connector, since this server name prefixes all persisted Kafka topics emanating from the MongoDB replica set or cluster. Only alphanumeric characters and underscores should be used. | ||
Name of the database user to be used when connecting to MongoDB. This is required only when MongoDB is configured to use authentication. | ||
Password to be used when connecting to MongoDB. This is required only when MongoDB is configured to use authentication. | ||
|
Database (authentication source) containing MongoDB credentials. This is required only when MongoDB is configured to use authentication with another authentication database than | |
| Connector will use SSL to connect to MongoDB instances. | |
|
When SSL is enabled this setting controls whether strict hostname checking is disabled during connection phase. If | |
empty string |
An optional comma-separated list of regular expressions that match database names to be monitored; any database name not included in the whitelist is excluded from monitoring. By default, all databases are monitored. May not be used with |
empty string |
An optional comma-separated list of regular expressions that match database names to be excluded from monitoring; any database name not included in the blacklist is monitored. May not be used with | |
empty string |
An optional comma-separated list of regular expressions that match fully-qualified namespaces for MongoDB collections to be monitored; any collection not included in the whitelist is excluded from monitoring. Each identifier is of the form databaseName.collectionName. By default the connector will monitor all collections except those in the | |
empty string |
An optional comma-separated list of regular expressions that match fully-qualified namespaces for MongoDB collections to be excluded from monitoring; any collection not included in the blacklist is monitored. Each identifier is of the form databaseName.collectionName. May not be used with | |
| Specifies the criteria for running a snapshot upon startup of the connector. The default is initial, and specifies that the connector reads a snapshot when either no offset is found or the oplog no longer contains the previous offset. The never option specifies that the connector should never use snapshots; instead, the connector should proceed to tail the log. |
empty string | An optional comma-separated list of the fully-qualified names of fields that should be excluded from change event message values. Fully-qualified names for fields are of the form databaseName.collectionName.fieldName.nestedFieldName, where databaseName and collectionName may contain the wildcard (*) which matches any characters. | |
empty string | An optional comma-separated list of the fully-qualified replacements of fields that should be used to rename fields in change event message values. Fully-qualified replacements for fields are of the form databaseName.collectionName.fieldName.nestedFieldName:newNestedFieldName, where databaseName and collectionName may contain the wildcard (*), which matches any characters; the colon character (:) is used to determine the rename mapping of the field. The next field replacement is applied to the result of the previous field replacement in the list, so keep this in mind when renaming multiple fields that are in the same path. |
| The maximum number of tasks that should be created for this connector. The MongoDB connector will attempt to use a separate task for each replica set, so the default is acceptable when using the connector with a single MongoDB replica set. When using the connector with a MongoDB sharded cluster, we recommend specifying a value that is equal to or more than the number of shards in the cluster, so that the work for each replica set can be distributed by Kafka Connect. | |
| Positive integer value that specifies the maximum number of threads used to perform an initial sync of the collections in a replica set. Defaults to 1. |
|
Controls whether a tombstone event should be generated after a delete event. | |
An interval in milliseconds that the connector should wait before taking a snapshot after starting up; | |
|
Specifies the maximum number of documents that should be read in one go from each collection while taking a snapshot. The connector will read the collection contents in multiple batches of this size. |
The following advanced configuration properties have good defaults that will work in most situations and therefore rarely need to be specified in the connector’s configuration.
Property | Default | Description |
---|---|---|
|
Positive integer value that specifies the maximum size of the blocking queue into which change events read from the database log are placed before they are written to Kafka. This queue can provide backpressure to the oplog reader when, for example, writes to Kafka are slower or if Kafka is not available. Events that appear in the queue are not included in the offsets periodically recorded by this connector. Defaults to 8192, and should always be larger than the maximum batch size specified in the | |
| Positive integer value that specifies the maximum size of each batch of events that should be processed during each iteration of this connector. Defaults to 2048. | |
| Positive integer value that specifies the number of milliseconds the connector should wait during each iteration for new change events to appear. Defaults to 1000 milliseconds, or 1 second. | |
| Positive integer value that specifies the initial delay when trying to reconnect to a primary after the first failed connection attempt or when no primary is available. Defaults to 1 second (1000 ms). | |
| Positive integer value that specifies the maximum delay when trying to reconnect to a primary after repeated failed connection attempts or when no primary is available. Defaults to 120 seconds (120,000 ms). | |
|
Positive integer value that specifies the maximum number of failed connection attempts to a replica set primary before an exception occurs and the task is aborted. Defaults to 16, which with the defaults for |
|
Boolean value that specifies whether the addresses in 'mongodb.hosts' are seeds that should be used to discover all members of the cluster or replica set ( | |
|
Controls how frequently heartbeat messages are sent.
Set this parameter to 0 to not send heartbeat messages at all. |
|
Controls the naming of the topic to which heartbeat messages are sent. | |
| Whether field names are sanitized to adhere to Avro naming requirements. | |
A comma-separated list of oplog operations that will be skipped during streaming. The operations include: c for inserts, u for updates, and d for deletes. | |
|
When set to See Transaction Metadata for additional details. |
4.6. MongoDB connector common issues
Debezium is a distributed system that captures all changes in multiple upstream databases, and will never miss or lose an event. Of course, when the system is operating nominally or being administered carefully, then Debezium provides exactly once delivery of every change event. However, if a fault does happen then the system will still not lose any events, although while it is recovering from the fault it may repeat some change events. Thus, in these abnormal situations Debezium (like Kafka) provides at least once delivery of change events.
The rest of this section describes how Debezium handles various kinds of faults and problems.
4.6.1. Configuration and startup errors
The connector will fail upon startup, report an error/exception in the log, and stop running when the connector’s configuration is invalid, or when the connector repeatedly fails to connect to MongoDB using the specified connectivity parameters. Reconnection is done using exponential backoff, and the maximum number of attempts is configurable.
In these cases, the error will have more details about the problem and possibly a suggested work around. The connector can be restarted when the configuration has been corrected or the MongoDB problem has been addressed.
4.6.3. Kafka Connect process stops gracefully
If Kafka Connect is being run in distributed mode, and a Kafka Connect process is stopped gracefully, then prior to shutdown of that process Kafka Connect will migrate all of the process' connector tasks to another Kafka Connect process in that group, and the new connector tasks will pick up exactly where the prior tasks left off. There is a short delay in processing while the connector tasks are stopped gracefully and restarted on the new processes.
If the group contains only one process and that process is stopped gracefully, then Kafka Connect will stop the connector and record the last offset for each replica set. Upon restart, the replica set tasks will continue exactly where they left off.
4.6.4. Kafka Connect process crashes
If the Kafka Connect process stops unexpectedly, then any connector tasks it was running will terminate without recording their most recently-processed offsets. When Kafka Connect is being run in distributed mode, it will restart those connector tasks on other processes. However, the MongoDB connectors will resume from the last offset recorded by the earlier processes, which means that the new replacement tasks may generate some of the same change events that were processed just prior to the crash. The number of duplicate events depends on the offset flush period and the volume of data changes just before the crash.
Because there is a chance that some events may be duplicated during a recovery from failure, consumers should always anticipate some events may be duplicated. Debezium changes are idempotent, so a sequence of events always results in the same state.
Debezium also includes with each change event message the source-specific information about the origin of the event, including the MongoDB event’s unique transaction identifier (h
) and timestamp (sec
and ord
). Consumers can keep track of these values to know whether they have already seen a particular event.
4.6.6. Connector is stopped for a duration
If the connector is gracefully stopped, the replica sets can continue to be used and any new changes are recorded in MongoDB’s oplog. When the connector is restarted, it will resume streaming changes for each replica set where it last left off, recording change events for all of the changes that were made while the connector was stopped. If the connector is stopped long enough such that MongoDB purges from its oplog some operations that the connector has not read, then upon startup the connector will perform a snapshot.
A properly configured Kafka cluster is capable of massive throughput. Kafka Connect is written with Kafka best practices, and given enough resources will also be able to handle very large numbers of database change events. Because of this, when a connector has been restarted after a while, it is very likely to catch up with the database, though how quickly will depend upon the capabilities and performance of Kafka and the volume of changes being made to the data in MongoDB.
If the connector remains stopped for long enough, MongoDB might purge older oplog files and the connector’s last position may be lost. In this case, when the connector configured with initial snapshot mode (the default) is finally restarted, the MongoDB server will no longer have the starting point and the connector will fail with an error.
4.6.7. MongoDB loses writes
It is possible for MongoDB to lose commits in specific failure situations. For example, if the primary applies a change and records it in its oplog before it then crashes unexpectedly, the secondary nodes may not have had a chance to read those changes from the primary’s oplog before the primary crashed. If one such secondary is then elected as primary, its oplog is missing the last changes that the old primary had recorded and no longer has those changes.
In these cases where MongoDB loses changes recorded in a primary’s oplog, it is possible that the MongoDB connector may or may not capture these lost changes. At this time, there is no way to prevent this side effect of MongoDB.
Chapter 5. Debezium connector for SQL Server
Debezium’s SQL Server Connector can monitor and record the row-level changes in the schemas of a SQL Server database.
The first time it connects to a SQL Server database/cluster, it reads a consistent snapshot of all of the schemas. When that snapshot is complete, the connector continuously streams the changes that were committed to SQL Server and generates corresponding insert, update and delete events. All of the events for each table are recorded in a separate Kafka topic, where they can be easily consumed by applications and services.
5.1. Overview
The functionality of the connector is based upon the change data capture feature provided by SQL Server Standard (since SQL Server 2016 SP1) or Enterprise edition. Using this mechanism, a SQL Server capture process monitors all databases and tables the user is interested in and stores the changes into specifically created CDC tables that have a stored procedure facade.
The database operator must enable CDC for the table(s) that should be captured by the connector. The connector then produces a change event for every row-level insert, update, and delete operation that was published via the CDC API, recording all the change events for each table in a separate Kafka topic. The client applications read the Kafka topics that correspond to the database tables they’re interested in following, and react to every row-level event they see in those topics.
The database operator normally enables CDC in the mid-life of a database and/or table. This means that the connector does not have the complete history of all changes that have been made to the database. Therefore, when the SQL Server connector first connects to a particular SQL Server database, it starts by performing a consistent snapshot of each of the database schemas. After the connector completes the snapshot, it continues streaming changes from the exact point at which the snapshot was made. This way, we start with a consistent view of all of the data, yet continue reading without having lost any of the changes made while the snapshot was taking place.
The connector is also tolerant of failures. As the connector reads changes and produces events, it records with each event the position in the database log (the LSN, or Log Sequence Number) that is associated with the CDC record. If the connector stops for any reason (including communication failures, network problems, or crashes), upon restart it simply continues reading the CDC tables where it last left off. This includes snapshots: if the snapshot was not completed when the connector is stopped, upon restart it begins a new snapshot.
5.2. Setting up SQL Server
Before using the SQL Server connector to monitor the changes committed on SQL Server, first enable CDC on a monitored database. Please bear in mind that CDC cannot be enabled for the master
database.
-- ====
-- Enable Database for CDC template
-- ====
USE MyDB
GO
EXEC sys.sp_cdc_enable_db
GO
Then enable CDC for each table that you plan to monitor.
-- ====
-- Enable a Table Specifying Filegroup Option Template
-- ====
USE MyDB
GO
EXEC sys.sp_cdc_enable_table
@source_schema = N'dbo',
@source_name = N'MyTable',
@role_name = N'MyRole',
@filegroup_name = N'MyDB_CT',
@supports_net_changes = 0
GO
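As an optional convenience check (not part of the required setup), you can confirm that CDC is active for the database and the table by querying the catalog views; the database and table names below match the templates above:
-- ====
-- Confirm that CDC is enabled for the database and the table
-- ====
USE MyDB
GO
SELECT name, is_cdc_enabled FROM sys.databases WHERE name = 'MyDB';
SELECT name, is_tracked_by_cdc FROM sys.tables WHERE name = 'MyTable';
GO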
Verify that the user has access to the CDC table.
-- ====
-- Verify that the connector user has access; this query should not return an empty result
-- ====
EXEC sys.sp_cdc_help_change_data_capture
GO
If the result is empty then please make sure that the user has privileges to access both the capture instance and CDC tables.
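One common remedy is to add the connector's database user to the gating role that was specified as @role_name when CDC was enabled for the table. The user name debezium_user below is a hypothetical example; MyRole matches the role from the earlier sys.sp_cdc_enable_table call:
-- ====
-- Grant access to the CDC data by adding the connector user to the gating role
-- ====
USE MyDB
GO
ALTER ROLE MyRole ADD MEMBER debezium_user
GO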
5.2.1. SQL Server on Azure
The SQL Server plug-in has not been tested with SQL Server on Azure. We welcome feedback from users who try the plug-in with databases in managed environments.
5.3. How the SQL Server connector works
5.3.1. Snapshots
SQL Server CDC is not designed to store the complete history of database changes. It is thus necessary that Debezium establishes a baseline of the current database content and streams it to Kafka. This is achieved via a process called snapshotting.
By default (snapshotting mode initial), the connector will, upon first startup, perform an initial consistent snapshot of the database (meaning the structure and data within any tables to be captured, as per the connector’s filter configuration).
Each snapshot consists of the following steps:
- Determine the tables to be captured
-
Obtain a lock on each of the monitored tables to ensure that no structural changes can occur to any of the tables. The level of the lock is determined by the
snapshot.isolation.mode
configuration option; a configuration sketch follows this list. - Read the maximum LSN ("log sequence number") position in the server’s transaction log.
- Capture the structure of all relevant tables.
- Optionally release the locks obtained in step 2, i.e. the locks are held usually only for a short period of time.
-
Scan all of the relevant database tables and schemas as valid at the LSN position read in step 3, and generate a
READ
event for each row and write that event to the appropriate table-specific Kafka topic. - Record the successful completion of the snapshot in the connector offsets.
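The level of locking applied in step 2 is controlled by the snapshot.isolation.mode property mentioned above. The following is a minimal, illustrative configuration sketch; a real configuration also needs connection properties such as database.hostname, database.user, and database.dbname, and repeatable_read is shown only as one of the accepted values:
{
  "name": "inventory-connector",
  "config": {
    "connector.class": "io.debezium.connector.sqlserver.SqlServerConnector",
    "database.server.name": "server1",
    "snapshot.mode": "initial",
    "snapshot.isolation.mode": "repeatable_read"
  }
}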
5.3.2. Reading the change data tables
Upon first start-up, the connector takes a structural snapshot of the captured tables and persists this information in its internal database history topic. The connector then identifies a change table for each source table and executes the following main loop:
- For each change table, read all changes that were created between the last stored maximum LSN and the current maximum LSN
- Order the read changes incrementally according to commit LSN and change LSN. This ensures that the changes are replayed by Debezium in the same order as they were made to the database.
- Pass commit and change LSNs as offsets to Kafka Connect.
- Store the maximum LSN and repeat the loop.
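For illustration only, the following hedged T-SQL sketch shows the kind of query that reads one window of changes between two LSNs; the capture instance name dbo_MyTable follows the earlier setup example, and the connector's actual queries may differ:
DECLARE @from_lsn binary(10), @to_lsn binary(10);
-- lower bound of the window: the minimum LSN available for the capture instance
SET @from_lsn = sys.fn_cdc_get_min_lsn('dbo_MyTable');
-- upper bound of the window: the current maximum LSN in the transaction log
SET @to_lsn = sys.fn_cdc_get_max_lsn();
-- read all changes in the window, ordered by commit LSN and change LSN
SELECT * FROM cdc.fn_cdc_get_all_changes_dbo_MyTable(@from_lsn, @to_lsn, N'all')
ORDER BY __$start_lsn, __$seqval;
GO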
After a restart, the connector will resume from the offset (commit and change LSNs) where it left off before.
The connector is able to detect whether CDC is enabled or disabled for whitelisted source tables and adjust its behavior.
5.3.3. Topic names
The SQL Server connector writes events for all insert, update, and delete operations on a single table to a single Kafka topic. The names of the Kafka topics always take the form serverName.schemaName.tableName, where serverName is the logical name of the connector as specified with the database.server.name
configuration property, schemaName is the name of the schema where the operation occurred, and tableName is the name of the database table on which the operation occurred.
For example, consider a SQL Server installation with an inventory
database that contains four tables: products
, products_on_hand
, customers
, and orders
in schema dbo
. If the connector monitoring this database were given a logical server name of fulfillment
, then the connector would produce events on these four Kafka topics:
-
fulfillment.dbo.products
-
fulfillment.dbo.products_on_hand
-
fulfillment.dbo.customers
-
fulfillment.dbo.orders
5.3.4. Schema change topic
For a table for which CDC is enabled, the Debezium SQL Server connector stores the history of schema changes to that table in a database history topic. This topic reflects an internal connector state and you should not use it. If your application needs to track schema changes, there is a public schema change topic. The name of the schema change topic is the same as the logical server name specified in the connector configuration.
The format of messages that a connector emits to its schema change topic is in an incubating state and can change without notice.
Debezium emits a message to the schema change topic when:
- You enable CDC for a table.
- You disable CDC for a table.
- You alter the structure of a table for which CDC is enabled by following the schema evolution procedure.
A message to the schema change topic contains a logical representation of the table schema, for example:
{ "schema": { ... }, "payload": { "source": { "version": "1.2.4.Final", "connector": "sqlserver", "name": "server1", "ts_ms": 1588252618953, "snapshot": "true", "db": "testDB", "schema": "dbo", "table": "customers", "change_lsn": null, "commit_lsn": "00000025:00000d98:00a2", "event_serial_no": null }, "databaseName": "testDB", 1 "schemaName": "dbo", "ddl": null, 2 "tableChanges": [ 3 { "type": "CREATE", 4 "id": "\"testDB\".\"dbo\".\"customers\"", 5 "table": { 6 "defaultCharsetName": null, "primaryKeyColumnNames": [ 7 "id" ], "columns": [ 8 { "name": "id", "jdbcType": 4, "nativeType": null, "typeName": "int identity", "typeExpression": "int identity", "charsetName": null, "length": 10, "scale": 0, "position": 1, "optional": false, "autoIncremented": false, "generated": false }, { "name": "first_name", "jdbcType": 12, "nativeType": null, "typeName": "varchar", "typeExpression": "varchar", "charsetName": null, "length": 255, "scale": null, "position": 2, "optional": false, "autoIncremented": false, "generated": false }, { "name": "last_name", "jdbcType": 12, "nativeType": null, "typeName": "varchar", "typeExpression": "varchar", "charsetName": null, "length": 255, "scale": null, "position": 3, "optional": false, "autoIncremented": false, "generated": false }, { "name": "email", "jdbcType": 12, "nativeType": null, "typeName": "varchar", "typeExpression": "varchar", "charsetName": null, "length": 255, "scale": null, "position": 4, "optional": false, "autoIncremented": false, "generated": false } ] } } ] } }
Item | Field name(s) | Description |
---|---|---|
1 |
| Identifies the database and the schema that contain the change. |
2 |
|
Always |
3 |
| An array of one or more items that contain the schema changes generated by a DDL command. |
4 |
| Describes the kind of change. The value is one of the following:
|
5 |
| Full identifier of the table that was created, altered, or dropped. |
6 |
| Represents table metadata after the applied change. |
7 |
| List of columns that compose the table’s primary key. |
8 |
| Metadata for each column in the changed table. |
In messages to the schema change topic, the key is the name of the database that contains the schema change. In the following example, the payload
field contains the key:
{ "schema": { "type": "struct", "fields": [ { "type": "string", "optional": false, "field": "databaseName" } ], "optional": false, "name": "io.debezium.connector.sqlserver.SchemaChangeKey" }, "payload": { "databaseName": "testDB" } }
5.3.5. Change data events
The Debezium SQL Server connector generates a data change event for each row-level INSERT
, UPDATE
, and DELETE
operation. Each event contains a key and a value. The structure of the key and the value depends on the table that was changed.
Debezium and Kafka Connect are designed around continuous streams of event messages. However, the structure of these events may change over time, which can be difficult for consumers to handle. To address this, each event contains the schema for its content or, if you are using a schema registry, a schema ID that a consumer can use to obtain the schema from the registry. This makes each event self-contained.
The following skeleton JSON shows the basic four parts of a change event. However, how you configure the Kafka Connect converter that you choose to use in your application determines the representation of these four parts in change events. A schema
field is in a change event only when you configure the converter to produce it. Likewise, the event key and event payload are in a change event only if you configure a converter to produce them. If you use the JSON converter and you configure it to produce all four basic change event parts, change events have this structure:
{ "schema": { 1 ... }, "payload": { 2 ... }, "schema": { 3 ... }, "payload": { 4 ... }, }
Item | Field name | Description |
---|---|---|
1 |
|
The first |
2 |
|
The first |
3 |
|
The second |
4 |
|
The second |
By default, the connector streams change event records to topics with names that are the same as the event’s originating table. See topic names.
The SQL Server connector ensures that all Kafka Connect schema names adhere to the Avro schema name format. This means that the logical server name must start with a Latin letter or an underscore, that is, a-z, A-Z, or _. Each remaining character in the logical server name and each character in the database and table names must be a Latin letter, a digit, or an underscore, that is, a-z, A-Z, 0-9, or _. If there is an invalid character it is replaced with an underscore character.
This can lead to unexpected conflicts if the logical server name, a database name, or a table name contains invalid characters, and the only characters that distinguish names from one another are invalid and thus replaced with underscores.
5.3.5.1. Change Event Keys
A change event’s key contains the schema for the changed table’s key and the changed row’s actual key. Both the schema and its corresponding payload contain a field for each column in the changed table’s primary key (or unique key constraint) at the time the connector created the event.
Consider the following customers
table, which is followed by an example of a change event key for this table.
Example table
CREATE TABLE customers ( id INTEGER IDENTITY(1001,1) NOT NULL PRIMARY KEY, first_name VARCHAR(255) NOT NULL, last_name VARCHAR(255) NOT NULL, email VARCHAR(255) NOT NULL UNIQUE );
Example change event key
Every change event that captures a change to the customers
table has the same event key schema. For as long as the customers
table has the previous definition, every change event that captures a change to the customers
table has the following key structure, which in JSON, looks like this:
{ "schema": { 1 "type": "struct", "fields": [ 2 { "type": "int32", "optional": false, "field": "id" } ], "optional": false, 3 "name": "server1.dbo.customers.Key" 4 }, "payload": { 5 "id": 1004 } }
Item | Field name | Description |
---|---|---|
1 |
|
The schema portion of the key specifies a Kafka Connect schema that describes what is in the key’s |
2 |
|
Specifies each field that is expected in the |
3 |
|
Indicates whether the event key must contain a value in its |
4 |
|
Name of the schema that defines the structure of the key’s payload. This schema describes the structure of the primary key for the table that was changed. Key schema names have the format connector-name.database-schema-name.table-name.
|
5 |
|
Contains the key for the row for which this change event was generated. In this example, the key contains a single
5.3.5.2. Change event values
The value in a change event is a bit more complicated than the key. Like the key, the value has a schema
section and a payload
section. The schema
section contains the schema that describes the Envelope
structure of the payload
section, including its nested fields. Change events for operations that create, update or delete data all have a value payload with an envelope structure.
Consider the same sample table that was used to show an example of a change event key:
CREATE TABLE customers ( id INTEGER IDENTITY(1001,1) NOT NULL PRIMARY KEY, first_name VARCHAR(255) NOT NULL, last_name VARCHAR(255) NOT NULL, email VARCHAR(255) NOT NULL UNIQUE );
The value portion of a change event for a change to this table is described for each event type.
5.3.5.2.1. create events
The following example shows the value portion of a change event that the connector generates for an operation that creates data in the customers
table:
{ "schema": { 1 "type": "struct", "fields": [ { "type": "struct", "fields": [ { "type": "int32", "optional": false, "field": "id" }, { "type": "string", "optional": false, "field": "first_name" }, { "type": "string", "optional": false, "field": "last_name" }, { "type": "string", "optional": false, "field": "email" } ], "optional": true, "name": "server1.dbo.customers.Value", 2 "field": "before" }, { "type": "struct", "fields": [ { "type": "int32", "optional": false, "field": "id" }, { "type": "string", "optional": false, "field": "first_name" }, { "type": "string", "optional": false, "field": "last_name" }, { "type": "string", "optional": false, "field": "email" } ], "optional": true, "name": "server1.dbo.customers.Value", "field": "after" }, { "type": "struct", "fields": [ { "type": "string", "optional": false, "field": "version" }, { "type": "string", "optional": false, "field": "connector" }, { "type": "string", "optional": false, "field": "name" }, { "type": "int64", "optional": false, "field": "ts_ms" }, { "type": "boolean", "optional": true, "default": false, "field": "snapshot" }, { "type": "string", "optional": false, "field": "db" }, { "type": "string", "optional": false, "field": "schema" }, { "type": "string", "optional": false, "field": "table" }, { "type": "string", "optional": true, "field": "change_lsn" }, { "type": "string", "optional": true, "field": "commit_lsn" }, { "type": "int64", "optional": true, "field": "event_serial_no" } ], "optional": false, "name": "io.debezium.connector.sqlserver.Source", 3 "field": "source" }, { "type": "string", "optional": false, "field": "op" }, { "type": "int64", "optional": true, "field": "ts_ms" } ], "optional": false, "name": "server1.dbo.customers.Envelope" 4 }, "payload": { 5 "before": null, 6 "after": { 7 "id": 1005, "first_name": "john", "last_name": "doe", "email": "john.doe@example.org" }, "source": { 8 "version": "1.2.4.Final", "connector": "sqlserver", "name": "server1", "ts_ms": 1559729468470, "snapshot": false, "db": "testDB", "schema": "dbo", "table": "customers", "change_lsn": "00000027:00000758:0003", "commit_lsn": "00000027:00000758:0005", "event_serial_no": "1" }, "op": "c", 9 "ts_ms": 1559729471739 10 } }
Item | Field name | Description |
---|---|---|
1 |
| The value’s schema, which describes the structure of the value’s payload. A change event’s value schema is the same in every change event that the connector generates for a particular table. |
2 |
|
In the |
3 |
|
|
4 |
|
|
5 |
|
The value’s actual data. This is the information that the change event is providing. |
6 |
|
An optional field that specifies the state of the row before the event occurred. When the |
7 |
|
An optional field that specifies the state of the row after the event occurred. In this example, the |
8 |
| Mandatory field that describes the source metadata for the event. This field contains information that you can use to compare this event with other events, with regard to the origin of the events, the order in which the events occurred, and whether events were part of the same transaction. The source metadata includes:
|
9 |
|
Mandatory string that describes the type of operation that caused the connector to generate the event. In this example,
|
10 |
|
Optional field that displays the time at which the connector processed the event. The time is based on the system clock in the JVM running the Kafka Connect task. |
5.3.5.2.2. update events
The value of a change event for an update in the sample customers
table has the same schema as a create event for that table. Likewise, the event value’s payload has the same structure. However, the event value payload contains different values in an update event. Here is an example of a change event value in an event that the connector generates for an update in the customers
table:
{ "schema": { ... }, "payload": { "before": { 1 "id": 1005, "first_name": "john", "last_name": "doe", "email": "john.doe@example.org" }, "after": { 2 "id": 1005, "first_name": "john", "last_name": "doe", "email": "noreply@example.org" }, "source": { 3 "version": "1.2.4.Final", "connector": "sqlserver", "name": "server1", "ts_ms": 1559729995937, "snapshot": false, "db": "testDB", "schema": "dbo", "table": "customers", "change_lsn": "00000027:00000ac0:0002", "commit_lsn": "00000027:00000ac0:0007", "event_serial_no": "2" }, "op": "u", 4 "ts_ms": 1559729998706 5 } }
Item | Field name | Description |
---|---|---|
1 |
|
An optional field that specifies the state of the row before the event occurred. In an update event value, the |
2 |
|
An optional field that specifies the state of the row after the event occurred. You can compare the |
3 |
|
Mandatory field that describes the source metadata for the event. The
The
|
4 |
|
Mandatory string that describes the type of operation. In an update event value, the |
5 |
|
Optional field that displays the time at which the connector processed the event. The time is based on the system clock in the JVM running the Kafka Connect task. |
Updating the columns for a row’s primary/unique key changes the value of the row’s key. When a key changes, Debezium outputs three events: a delete event and a tombstone event with the old key for the row, followed by a create event with the new key for the row.
5.3.5.2.3. delete events
The value in a delete change event has the same schema
portion as create and update events for the same table. The payload
portion in a delete event for the sample customers
table looks like this:
{ "schema": { ... }, }, "payload": { "before": { 1 "id": 1005, "first_name": "john", "last_name": "doe", "email": "noreply@example.org" }, "after": null, 2 "source": { 3 "version": "1.2.4.Final", "connector": "sqlserver", "name": "server1", "ts_ms": 1559730445243, "snapshot": false, "db": "testDB", "schema": "dbo", "table": "customers", "change_lsn": "00000027:00000db0:0005", "commit_lsn": "00000027:00000db0:0007", "event_serial_no": "1" }, "op": "d", 4 "ts_ms": 1559730450205 5 } }
Item | Field name | Description |
---|---|---|
1 |
|
Optional field that specifies the state of the row before the event occurred. In a delete event value, the |
2 |
|
Optional field that specifies the state of the row after the event occurred. In a delete event value, the |
3 |
|
Mandatory field that describes the source metadata for the event. In a delete event value, the
|
4 |
|
Mandatory string that describes the type of operation. The |
5 |
|
Optional field that displays the time at which the connector processed the event. The time is based on the system clock in the JVM running the Kafka Connect task. |
SQL Server connector events are designed to work with Kafka log compaction. Log compaction enables removal of some older messages as long as at least the most recent message for every key is kept. This lets Kafka reclaim storage space while ensuring that the topic contains a complete data set and can be used for reloading key-based state.
Tombstone events
When a row is deleted, the delete event value still works with log compaction, because Kafka can remove all earlier messages that have that same key. However, for Kafka to remove all messages that have that same key, the message value must be null
. To make this possible, after Debezium’s SQL Server connector emits a delete event, the connector emits a special tombstone event that has the same key but a null
value.
5.3.6. Transaction Metadata
Debezium can generate events that represent transaction metadata boundaries and enrich data messages.
5.3.6.1. Transaction boundaries
Debezium generates events for every transaction BEGIN
and END
. Every event contains
-
status
-BEGIN
orEND
-
id
- string representation of unique transaction identifier -
event_count
(forEND
events) - total number of events emitted by the transaction -
data_collections
(forEND
events) - an array of pairs ofdata_collection
andevent_count
that provides the number of events emitted by changes originating from a given data collection
Following is an example of what a message looks like:
{ "status": "BEGIN", "id": "00000025:00000d08:0025", "event_count": null, "data_collections": null } { "status": "END", "id": "00000025:00000d08:0025", "event_count": 2, "data_collections": [ { "data_collection": "testDB.dbo.tablea", "event_count": 1 }, { "data_collection": "testDB.dbo.tableb", "event_count": 1 } ] }
The transaction events are written to the topic named <database.server.name>.transaction
.
5.3.6.2. Data events enrichment
When transaction metadata is enabled the data message Envelope
is enriched with a new transaction
field. This field provides information about every event in the form of a composite of fields:
-
id
- string representation of unique transaction identifier -
total_order
- the absolute position of the event among all events generated by the transaction -
data_collection_order
- the per-data collection position of the event among all events that were emitted by the transaction
Following is an example of what a message looks like:
{ "before": null, "after": { "pk": "2", "aa": "1" }, "source": { ... }, "op": "c", "ts_ms": "1580390884335", "transaction": { "id": "00000025:00000d08:0025", "total_order": "1", "data_collection_order": "1" } }
5.3.7. Database schema evolution
Debezium is able to capture schema changes over time. Due to the way CDC is implemented in SQL Server, it is necessary to work in co-operation with a database operator in order to ensure the connector continues to produce data change events when the schema is updated.
As was already mentioned before, Debezium uses SQL Server’s change data capture functionality. This means that SQL Server creates a capture table that contains all changes executed on the source table. Unfortunately, the capture table is static and needs to be updated when the source table structure changes. This update is not done by the connector itself but must be executed by an operator with elevated privileges.
There are generally two procedures for executing the schema change:
- cold - this is executed when Debezium is stopped
- hot - executed while Debezium is running
Both approaches have their own advantages and disadvantages.
In both cases, it is critically important to execute the procedure completely before a new schema update on the same source table is made. It is thus recommended to execute all DDLs in a single batch so the procedure is done only once.
Not all schema changes are supported when CDC is enabled for a source table. One identified exception is renaming a column or changing its type; SQL Server will not allow the operation to be executed.
Although not required by SQL Server’s CDC mechanism itself, a new capture instance must be created when altering a column from NULL
to NOT NULL
or vice versa. This is required so that the SQL Server connector can pick up that changed information. Otherwise, emitted change events will have the optional
value for the corresponding field (true
or false
) set to match the original value.
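For example, after altering a captured column from NULL to NOT NULL, a new capture instance can be created alongside the existing one by calling sys.sp_cdc_enable_table again with a distinct @capture_instance value. The instance name dbo_MyTable_v2 below is a hypothetical example, and the other parameters follow the earlier setup template:
-- ====
-- Create a second capture instance that reflects the new source table structure
-- ====
USE MyDB
GO
EXEC sys.sp_cdc_enable_table
@source_schema = N'dbo',
@source_name = N'MyTable',
@role_name = N'MyRole',
@capture_instance = N'dbo_MyTable_v2',
@supports_net_changes = 0
GO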
5.3.7.1. Cold schema update
This is the safest procedure but might not be feasible for applications with high-availability requirements. The operator should follow this sequence of steps:
- Suspend the application that generates the database records
- Wait for Debezium to stream all unstreamed changes
- Stop the connector
- Apply all changes to the source table schema
-
Create a new capture table for the updated source table using
sys.sp_cdc_enable_table
procedure