Chapter 2. Source connectors
Debezium provides a library of source connectors that capture changes from a variety of database management systems. Each connector produces change events with very similar structures, making it easy for your applications to consume and respond to events, regardless of their origin.
Currently, Debezium provides source connectors for the following databases:
2.1. Debezium connector for Db2
Debezium’s Db2 connector can capture row-level changes in the tables of a Db2 database. For information about the Db2 Database versions that are compatible with this connector, see the Debezium Supported Configurations page.
This connector is strongly inspired by the Debezium implementation of SQL Server, which uses a SQL-based polling model that puts tables into "capture mode". When a table is in capture mode, the Debezium Db2 connector generates and streams a change event for each row-level update to that table.
A table that is in capture mode has an associated change-data table, which Db2 creates. For each change to a table that is in capture mode, Db2 adds data about that change to the table’s associated change-data table. A change-data table contains an entry for each state of a row. It also has special entries for deletions. The Debezium Db2 connector reads change events from change-data tables and emits the events to Kafka topics.
The first time a Debezium Db2 connector connects to a Db2 database, the connector reads a consistent snapshot of the tables for which the connector is configured to capture changes. By default, this is all non-system tables. There are connector configuration properties that let you specify which tables to put into capture mode, or which tables to exclude from capture mode.
When the snapshot is complete the connector begins emitting change events for committed updates to tables that are in capture mode. By default, change events for a particular table go to a Kafka topic that has the same name as the table. Applications and services consume change events from these topics.
The connector requires the use of the abstract syntax notation (ASN) libraries, which are available as a standard part of Db2 for Linux. To use the ASN libraries, you must have a license for IBM InfoSphere Data Replication (IIDR). You do not have to install IIDR to use the ASN libraries.
Information and procedures for using a Debezium Db2 connector are organized as follows:
- Section 2.1.1, “Overview of Debezium Db2 connector”
- Section 2.1.2, “How Debezium Db2 connectors work”
- Section 2.1.3, “Descriptions of Debezium Db2 connector data change events”
- Section 2.1.4, “How Debezium Db2 connectors map data types”
- Section 2.1.5, “Setting up Db2 to run a Debezium connector”
- Section 2.1.6, “Deployment of Debezium Db2 connectors”
- Section 2.1.7, “Monitoring Debezium Db2 connector performance”
- Section 2.1.8, “Managing Debezium Db2 connectors”
- Section 2.1.9, “Updating schemas for Db2 tables in capture mode for Debezium connectors”
2.1.1. Overview of Debezium Db2 connector
The Debezium Db2 connector is based on the ASN Capture/Apply agents that enable SQL Replication in Db2. A capture agent:
- Generates change-data tables for tables that are in capture mode.
- Monitors tables in capture mode and stores change events for updates to those tables in their corresponding change-data tables.
The Debezium connector uses a SQL interface to query change-data tables for change events.
The database administrator must put the tables for which you want to capture changes into capture mode. For convenience and for automating testing, there are Debezium management user-defined functions (UDFs) in C that you can compile and then use to do the following management tasks:
- Start, stop, and reinitialize the ASN agent
- Put tables into capture mode
- Create the replication (ASN) schemas and change-data tables
- Remove tables from capture mode
Alternatively, you can use Db2 control commands to accomplish these tasks.
After the tables of interest are in capture mode, the connector reads their corresponding change-data tables to obtain change events for table updates. The connector emits a change event for each row-level insert, update, and delete operation to a Kafka topic that has the same name as the changed table. This is default behavior that you can modify. Client applications read the Kafka topics that correspond to the database tables of interest and can react to each row-level change event.
Typically, the database administrator puts a table into capture mode in the middle of the life of a table. This means that the connector does not have the complete history of all changes that have been made to the table. Therefore, when the Db2 connector first connects to a particular Db2 database, it starts by performing a consistent snapshot of each table that is in capture mode. After the connector completes the snapshot, the connector streams change events from the point at which the snapshot was made. In this way, the connector starts with a consistent view of the tables that are in capture mode, and does not drop any changes that were made while it was performing the snapshot.
Debezium connectors are tolerant of failures. As the connector reads and produces change events, it records the log sequence number (LSN) of the change-data table entry. The LSN is the position of the change event in the database log. If the connector stops for any reason, including communication failures, network problems, or crashes, upon restarting it continues reading the change-data tables where it left off. This includes snapshots. That is, if the snapshot was not complete when the connector stopped, upon restart the connector begins a new snapshot.
2.1.2. How Debezium Db2 connectors work
To optimally configure and run a Debezium Db2 connector, it is helpful to understand how the connector performs snapshots, streams change events, determines Kafka topic names, and handles schema changes.
Details are in the following topics:
- Section 2.1.2.1, “How Debezium Db2 connectors perform database snapshots”
- Section 2.1.2.2, “Ad hoc snapshots”
- Section 2.1.2.3, “Incremental snapshots”
- Section 2.1.2.4, “Blocking snapshots”
- Section 2.1.2.5, “How Debezium Db2 connectors read change-data tables”
- Section 2.1.2.6, “Default names of Kafka topics that receive Debezium Db2 change event records”
- Section 2.1.2.7, “How Debezium Db2 connectors handle database schema changes”
- Section 2.1.2.8, “About the Debezium Db2 connector schema change topic”
- Section 2.1.2.9, “Debezium Db2 connector-generated events that represent transaction boundaries”
2.1.2.1. How Debezium Db2 connectors perform database snapshots
Db2's replication feature is not designed to store the complete history of database changes. As a result, the Debezium Db2 connector cannot retrieve the entire history of the database from the logs. To enable the connector to establish a baseline for the current state of the database, the first time that the connector starts, it performs an initial consistent snapshot of the tables that are in capture mode. For each change that the snapshot captures, the connector emits a `read` event to the Kafka topic for the captured table.
You can find more information about snapshots in the following sections:
2.1.2.1.1. Default workflow that the Debezium Db2 connector uses to perform an initial snapshot
The following workflow lists the steps that Debezium takes to create a snapshot. These steps describe the process for a snapshot when the `snapshot.mode` configuration property is set to its default value, which is `initial`. You can customize the way that the connector creates snapshots by changing the value of the `snapshot.mode` property. If you configure a different snapshot mode, the connector completes the snapshot by using a modified version of this workflow.
1. Establish a connection to the database.
2. Determine which tables are in capture mode and should be included in the snapshot. By default, the connector captures the data for all non-system tables. After the snapshot completes, the connector continues to stream data for the specified tables. If you want the connector to capture data only from specific tables, you can direct the connector to capture the data for only a subset of tables or table elements by setting properties such as `table.include.list` or `table.exclude.list`.
3. Obtain a lock on each of the tables in capture mode. This lock ensures that no schema changes can occur in those tables until the snapshot completes. The level of the lock is determined by the `snapshot.isolation.mode` connector configuration property.
4. Read the highest (most recent) LSN position in the server's transaction log.
5. Capture the schema of all tables or all tables that are designated for capture. The connector persists schema information in its internal database schema history topic. The schema history provides information about the structure that is in effect when a change event occurs.
   Note: By default, the connector captures the schema of every table in the database that is in capture mode, including tables that are not configured for capture. If tables are not configured for capture, the initial snapshot captures only their structure; it does not capture any table data.
   For more information about why snapshots persist schema information for tables that you did not include in the initial snapshot, see Understanding why initial snapshots capture the schema for all tables.
6. Release any locks obtained in Step 3. Other database clients can now write to any previously locked tables.
7. At the LSN position read in Step 4, the connector scans the tables that are designated for capture. During the scan, the connector completes the following tasks:
   - Confirms that the table was created before the snapshot began. If the table was created after the snapshot began, the connector skips the table. After the snapshot is complete, and the connector transitions to streaming, it emits change events for any tables that were created after the snapshot began.
   - Produces a `read` event for each row that is captured from a table. All `read` events contain the same LSN position, which is the LSN position that was obtained in Step 4.
   - Emits each `read` event to the Kafka topic for the source table.
   - Releases data table locks, if applicable.
8. Record the successful completion of the snapshot in the connector offsets.
The resulting initial snapshot captures the current state of each row in the captured tables. From this baseline state, the connector captures subsequent changes as they occur.
After the snapshot process begins, if the process is interrupted due to connector failure, rebalancing, or other reasons, the process restarts after the connector restarts.
After the connector completes the initial snapshot, it continues streaming from the position that it read in Step 4 so that it does not miss any updates.
If the connector stops again for any reason, after it restarts, it resumes streaming changes from where it previously left off.
The following table describes the settings for the `snapshot.mode` connector configuration property:

Setting | Description
---|---
`always` | The connector performs a snapshot every time that it starts. After the snapshot completes, the connector begins to stream event records for subsequent database changes.
`initial` | The connector performs a database snapshot as described in the default workflow for creating an initial snapshot. After the snapshot completes, the connector begins to stream event records for subsequent database changes.
`initial_only` | The connector performs a database snapshot. After the snapshot completes, the connector stops, and does not stream event records for subsequent database changes.
`schema_only` | Deprecated, see `no_data`.
`no_data` | The connector captures the structure of all relevant tables, performing all the steps described in the default snapshot workflow, except that it does not create `READ` events to represent the data set at the point of the connector's start-up.
`recovery` | Set this option to restore a database schema history topic that is lost or corrupted. After a restart, the connector runs a snapshot that rebuilds the topic from the source tables. You can also set the property to periodically prune a database schema history topic that experiences unexpected growth. Warning: Do not use this mode to perform a snapshot if schema changes were committed to the database after the last connector shutdown.
`when_needed` | After the connector starts, it performs a snapshot only if it detects one of the following circumstances: it cannot detect any topic offsets, or a previously recorded offset specifies a log position that is not available on the server.
For more information, see `snapshot.mode` in the table of connector configuration properties.
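For orientation, the following registration request sketch shows where `snapshot.mode` fits in a Db2 connector configuration. The connector name, host, credentials, database, and topic prefix are placeholder values, not values taken from this chapter:

{
  "name": "db2-connector",
  "config": {
    "connector.class": "io.debezium.connector.db2.Db2Connector",
    "database.hostname": "db2server.example.com",
    "database.port": "50000",
    "database.user": "db2inst1",
    "database.password": "********",
    "database.dbname": "TESTDB",
    "topic.prefix": "mydatabase",
    "table.include.list": "MYSCHEMA.CUSTOMERS",
    "snapshot.mode": "initial",
    "schema.history.internal.kafka.bootstrap.servers": "kafka:9092",
    "schema.history.internal.kafka.topic": "schemahistory.mydatabase"
  }
}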
2.1.2.1.2. Description of why initial snapshots capture the schema history for all tables
The initial snapshot that a connector runs captures two types of information:
- Table data: Information about `INSERT`, `UPDATE`, and `DELETE` operations in tables that are named in the connector's `table.include.list` property.
- Schema data: DDL statements that describe the structural changes that are applied to tables. Schema data is persisted to both the internal schema history topic, and to the connector's schema change topic, if one is configured.
After you run an initial snapshot, you might notice that the snapshot captures schema information for tables that are not designated for capture. By default, initial snapshots are designed to capture schema information for every table that is present in the database, not only from tables that are designated for capture. Connectors require that the table’s schema is present in the schema history topic before they can capture a table. By enabling the initial snapshot to capture schema data for tables that are not part of the original capture set, Debezium prepares the connector to readily capture event data from these tables should that later become necessary. If the initial snapshot does not capture a table’s schema, you must add the schema to the history topic before the connector can capture data from the table.
In some cases, you might want to limit schema capture in the initial snapshot. This can be useful when you want to reduce the time required to complete a snapshot, or when Debezium connects to the database instance through a user account that has access to multiple logical databases, but you want the connector to capture changes only from tables in a specific logical database.
Additional information
- Capturing data from tables not captured by the initial snapshot (no schema change)
- Capturing data from tables not captured by the initial snapshot (schema change)
- Setting the `schema.history.internal.store.only.captured.tables.ddl` property to specify the tables from which to capture schema information.
- Setting the `schema.history.internal.store.only.captured.databases.ddl` property to specify the logical databases from which to capture schema changes.
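As a minimal illustration (the values shown are placeholders, not recommendations), these properties appear in the connector configuration as key-value pairs:

"schema.history.internal.store.only.captured.tables.ddl": "true",
"schema.history.internal.store.only.captured.databases.ddl": "true"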
2.1.2.1.3. Capturing data from tables not captured by the initial snapshot (no schema change)
In some cases, you might want the connector to capture data from a table whose schema was not captured by the initial snapshot. Depending on the connector configuration, the initial snapshot might capture the table schema only for specific tables in the database. If the table schema is not present in the history topic, the connector fails to capture the table, and reports a missing schema error.
You might still be able to capture data from the table, but you must perform additional steps to add the table schema.
Prerequisites
- You want to capture data from a table with a schema that the connector did not capture during the initial snapshot.
- No schema changes were applied to the table between the LSNs of the earliest and latest change table entry that the connector reads. For information about capturing data from a new table that has undergone structural changes, see Section 2.1.2.1.4, “Capturing data from tables not captured by the initial snapshot (schema change)”.
Procedure
1. Stop the connector.
2. Remove the internal database schema history topic that is specified by the `schema.history.internal.kafka.topic` property.
3. Clear the offsets in the configured Kafka Connect `offset.storage.topic`. For more information about how to remove offsets, see the Debezium community FAQ.
   Warning: Removing offsets should be performed only by advanced users who have experience in manipulating internal Kafka Connect data. This operation is potentially destructive, and should be performed only as a last resort.
4. Apply the following changes to the connector configuration:
   - (Optional) Set the value of `schema.history.internal.store.only.captured.tables.ddl` to `false`. This setting causes the snapshot to capture the schema for all tables, and guarantees that, in the future, the connector can reconstruct the schema history for all tables.
     Note: Snapshots that capture the schema for all tables require more time to complete.
   - Add the tables that you want the connector to capture to `table.include.list`.
   - Set the `snapshot.mode` to one of the following values:
     - `initial`: When you restart the connector, it takes a full snapshot of the database that captures the table data and table structures. If you select this option, consider setting the value of the `schema.history.internal.store.only.captured.tables.ddl` property to `false` to enable the connector to capture the schema of all tables.
     - `schema_only`: When you restart the connector, it takes a snapshot that captures only the table schema. Unlike a full data snapshot, this option does not capture any table data. Use this option if you want to restart the connector more quickly than with a full snapshot.
5. Restart the connector. The connector completes the type of snapshot specified by the `snapshot.mode`.
6. (Optional) If the connector performed a `schema_only` snapshot, after the snapshot completes, initiate an incremental snapshot to capture data from the tables that you added. The connector runs the snapshot while it continues to stream real-time changes from the tables. Running an incremental snapshot captures the following data changes:
   - For tables that the connector previously captured, the incremental snapshot captures changes that occur while the connector was down, that is, in the interval between the time that the connector was stopped, and the current restart.
   - For newly added tables, the incremental snapshot captures all existing table rows.
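For reference, a sketch of the configuration changes from this procedure, using hypothetical table names, might look like the following fragment:

"table.include.list": "MYSCHEMA.CUSTOMERS,MYSCHEMA.ORDERS",
"snapshot.mode": "schema_only",
"schema.history.internal.store.only.captured.tables.ddl": "false"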
2.1.2.1.4. Capturing data from tables not captured by the initial snapshot (schema change)
If a schema change is applied to a table, records that are committed before the schema change have different structures than those that were committed after the change. When Debezium captures data from a table, it reads the schema history to ensure that it applies the correct schema to each event. If the schema is not present in the schema history topic, the connector is unable to capture the table, and an error results.
If you want to capture data from a table that was not captured by the initial snapshot, and the schema of the table was modified, you must add the schema to the history topic, if it is not already available. You can add the schema by running a new schema snapshot, or by running an initial snapshot for the table.
Prerequisites
- You want to capture data from a table with a schema that the connector did not capture during the initial snapshot.
- A schema change was applied to the table so that the records to be captured do not have a uniform structure.
Procedure
- Initial snapshot captured the schema for all tables (`store.only.captured.tables.ddl` was set to `false`)
  1. Edit the `table.include.list` property to specify the tables that you want to capture.
  2. Restart the connector.
  3. Initiate an incremental snapshot if you want to capture existing data from the newly added tables.
- Initial snapshot did not capture the schema for all tables (`store.only.captured.tables.ddl` was set to `true`)
  If the initial snapshot did not save the schema of the table that you want to capture, complete one of the following procedures:
- Procedure 1: Schema snapshot, followed by incremental snapshot
In this procedure, the connector first performs a schema snapshot. You can then initiate an incremental snapshot to enable the connector to synchronize data.
1. Stop the connector.
2. Remove the internal database schema history topic that is specified by the `schema.history.internal.kafka.topic` property.
3. Clear the offsets in the configured Kafka Connect `offset.storage.topic`. For more information about how to remove offsets, see the Debezium community FAQ.
   Warning: Removing offsets should be performed only by advanced users who have experience in manipulating internal Kafka Connect data. This operation is potentially destructive, and should be performed only as a last resort.
4. Set values for properties in the connector configuration as described in the following steps:
   - Set the value of the `snapshot.mode` property to `schema_only`.
   - Edit the `table.include.list` to add the tables that you want to capture.
5. Restart the connector.
6. Wait for Debezium to capture the schema of the new and existing tables. Data changes that occurred in any tables after the connector stopped are not captured.
7. To ensure that no data is lost, initiate an incremental snapshot.
- Procedure 2: Initial snapshot, followed by optional incremental snapshot
In this procedure the connector performs a full initial snapshot of the database. As with any initial snapshot, in a database with many large tables, running an initial snapshot can be a time-consuming operation. After the snapshot completes, you can optionally trigger an incremental snapshot to capture any changes that occur while the connector is off-line.
1. Stop the connector.
2. Remove the internal database schema history topic that is specified by the `schema.history.internal.kafka.topic` property.
3. Clear the offsets in the configured Kafka Connect `offset.storage.topic`. For more information about how to remove offsets, see the Debezium community FAQ.
   Warning: Removing offsets should be performed only by advanced users who have experience in manipulating internal Kafka Connect data. This operation is potentially destructive, and should be performed only as a last resort.
4. Edit the `table.include.list` to add the tables that you want to capture.
5. Set values for properties in the connector configuration as described in the following steps:
   - Set the value of the `snapshot.mode` property to `initial`.
   - (Optional) Set `schema.history.internal.store.only.captured.tables.ddl` to `false`.
6. Restart the connector. The connector takes a full database snapshot. After the snapshot completes, the connector transitions to streaming.
7. (Optional) To capture any data that changed while the connector was off-line, initiate an incremental snapshot.
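A configuration sketch for this procedure, again using hypothetical table names, might look like the following fragment:

"table.include.list": "MYSCHEMA.CUSTOMERS,MYSCHEMA.NEW_TABLE",
"snapshot.mode": "initial",
"schema.history.internal.store.only.captured.tables.ddl": "false"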
2.1.2.2. Ad hoc snapshots
By default, a connector runs an initial snapshot operation only after it starts for the first time. Following this initial snapshot, under normal circumstances, the connector does not repeat the snapshot process. Any future change event data that the connector captures comes in through the streaming process only.
However, in some situations the data that the connector obtained during the initial snapshot might become stale, lost, or incomplete. To provide a mechanism for recapturing table data, Debezium includes an option to perform ad hoc snapshots. You might want to perform an ad hoc snapshot after any of the following changes occur in your Debezium environment:
- The connector configuration is modified to capture a different set of tables.
- Kafka topics are deleted and must be rebuilt.
- Data corruption occurs due to a configuration error or some other problem.
You can re-run a snapshot for a table for which you previously captured a snapshot by initiating a so-called ad-hoc snapshot. Ad hoc snapshots require the use of signaling tables. You initiate an ad hoc snapshot by sending a signal request to the Debezium signaling table.
When you initiate an ad hoc snapshot of an existing table, the connector appends content to the topic that already exists for the table. If a previously existing topic was removed, Debezium can create a topic automatically if automatic topic creation is enabled.
Ad hoc snapshot signals specify the tables to include in the snapshot. The snapshot can capture the entire contents of the database, or capture only a subset of the tables in the database. Also, the snapshot can capture a subset of the contents of the table(s) in the database.
You specify the tables to capture by sending an `execute-snapshot` message to the signaling table. Set the type of the `execute-snapshot` signal to `incremental` or `blocking`, and provide the names of the tables to include in the snapshot, as described in the following table:
Field | Default | Value
---|---|---
`type` | `incremental` | Specifies the type of snapshot that you want to run.
`data-collections` | N/A | An array that contains regular expressions matching the fully-qualified names of the tables to include in the snapshot.
`additional-conditions` | N/A | An optional array that specifies a set of additional conditions that the connector evaluates to determine the subset of records to include in a snapshot.
`surrogate-key` | N/A | An optional string that specifies the column name that the connector uses as the primary key of a table during the snapshot process.
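For illustration only, with a hypothetical table and column name, a `data` payload that combines these fields might look like this:

{"type": "incremental", "data-collections": ["MYSCHEMA.ORDERS"], "surrogate-key": "ORDER_ID"}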
Triggering an ad hoc incremental snapshot
You initiate an ad hoc incremental snapshot by adding an entry with the `execute-snapshot` signal type to the signaling table, or by sending a signal message to a Kafka signaling topic. After the connector processes the message, it begins the snapshot operation. The snapshot process reads the first and last primary key values and uses those values as the start and end point for each table. Based on the number of entries in the table, and the configured chunk size, Debezium divides the table into chunks, and proceeds to snapshot each chunk, in succession, one at a time.
For more information, see Incremental snapshots.
Triggering an ad hoc blocking snapshot
You initiate an ad hoc blocking snapshot by adding an entry with the `execute-snapshot` signal type to the signaling table or signaling topic. After the connector processes the message, it begins the snapshot operation. The connector temporarily stops streaming, and then initiates a snapshot of the specified table, following the same process that it uses during an initial snapshot. After the snapshot completes, the connector resumes streaming.
For more information, see Blocking snapshots.
2.1.2.3. Incremental snapshots
To provide flexibility in managing snapshots, Debezium includes a supplementary snapshot mechanism, known as incremental snapshotting. Incremental snapshots rely on the Debezium mechanism for sending signals to a Debezium connector.
In an incremental snapshot, instead of capturing the full state of a database all at once, as in an initial snapshot, Debezium captures each table in phases, in a series of configurable chunks. You can specify the tables that you want the snapshot to capture and the size of each chunk. The chunk size determines the number of rows that the snapshot collects during each fetch operation on the database. The default chunk size for incremental snapshots is 1024 rows.
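The chunk size is typically controlled by the `incremental.snapshot.chunk.size` connector configuration property (not shown in this chapter). As a sketch, doubling the default looks like this in the connector configuration:

"incremental.snapshot.chunk.size": "2048"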
As an incremental snapshot proceeds, Debezium uses watermarks to track its progress, maintaining a record of each table row that it captures. This phased approach to capturing data provides the following advantages over the standard initial snapshot process:
- You can run incremental snapshots in parallel with streamed data capture, instead of postponing streaming until the snapshot completes. The connector continues to capture near real-time events from the change log throughout the snapshot process, and neither operation blocks the other.
- If the progress of an incremental snapshot is interrupted, you can resume it without losing any data. After the process resumes, the snapshot begins at the point where it stopped, rather than recapturing the table from the beginning.
- You can run an incremental snapshot on demand at any time, and repeat the process as needed to adapt to database updates. For example, you might re-run a snapshot after you modify the connector configuration to add a table to its `table.include.list` property.
Incremental snapshot process
When you run an incremental snapshot, Debezium sorts each table by primary key and then splits the table into chunks based on the configured chunk size. Working chunk by chunk, it then captures each table row in a chunk. For each row that it captures, the snapshot emits a `READ` event. That event represents the value of the row when the snapshot for the chunk began.

As a snapshot proceeds, it’s likely that other processes continue to access the database, potentially modifying table records. To reflect such changes, `INSERT`, `UPDATE`, or `DELETE` operations are committed to the transaction log as per usual. Similarly, the ongoing Debezium streaming process continues to detect these change events and emits corresponding change event records to Kafka.
How Debezium resolves collisions among records with the same primary key
In some cases, the `UPDATE` or `DELETE` events that the streaming process emits are received out of sequence. That is, the streaming process might emit an event that modifies a table row before the snapshot captures the chunk that contains the `READ` event for that row. When the snapshot eventually emits the corresponding `READ` event for the row, its value is already superseded. To ensure that incremental snapshot events that arrive out of sequence are processed in the correct logical order, Debezium employs a buffering scheme for resolving collisions. Only after collisions between the snapshot events and the streamed events are resolved does Debezium emit an event record to Kafka.
Snapshot window
To assist in resolving collisions between late-arriving `READ` events and streamed events that modify the same table row, Debezium employs a so-called snapshot window. The snapshot window demarcates the interval during which an incremental snapshot captures data for a specified table chunk. Before the snapshot window for a chunk opens, Debezium follows its usual behavior and emits events from the transaction log directly downstream to the target Kafka topic. But from the moment that the snapshot for a particular chunk opens, until it closes, Debezium performs a de-duplication step to resolve collisions between events that have the same primary key.

For each data collection, Debezium emits two types of events, and stores the records for them both in a single destination Kafka topic. The snapshot records that it captures directly from a table are emitted as `READ` operations. Meanwhile, as users continue to update records in the data collection, and the transaction log is updated to reflect each commit, Debezium emits `UPDATE` or `DELETE` operations for each change.

As the snapshot window opens, and Debezium begins processing a snapshot chunk, it delivers snapshot records to a memory buffer. During the snapshot window, the primary keys of the `READ` events in the buffer are compared to the primary keys of the incoming streamed events. If no match is found, the streamed event record is sent directly to Kafka. If Debezium detects a match, it discards the buffered `READ` event, and writes the streamed record to the destination topic, because the streamed event logically supersedes the static snapshot event. After the snapshot window for the chunk closes, the buffer contains only `READ` events for which no related transaction log events exist. Debezium emits these remaining `READ` events to the table’s Kafka topic.
The connector repeats the process for each snapshot chunk.
Currently, you can use either of the following methods to initiate an incremental snapshot:

- Send an ad hoc snapshot signal to the signaling table on the source database.
- Send a message to the configured Kafka signaling topic.
The Debezium connector for Db2 does not support schema changes while an incremental snapshot is running.
2.1.2.3.1. Triggering an incremental snapshot
To initiate an incremental snapshot, you can send an ad hoc snapshot signal to the signaling table on the source database. You submit snapshot signals as SQL `INSERT` queries.
After Debezium detects the change in the signaling table, it reads the signal, and runs the requested snapshot operation.
The query that you submit specifies the tables to include in the snapshot, and, optionally, specifies the type of snapshot operation. Debezium currently supports the `incremental` and `blocking` snapshot types.
To specify the tables to include in the snapshot, provide a `data-collections` array that lists the tables, or an array of regular expressions used to match tables, for example:

{"data-collections": ["public.MyFirstTable", "public.MySecondTable"]}
The `data-collections` array for an incremental snapshot signal has no default value. If the `data-collections` array is empty, Debezium interprets the empty array to mean that no action is required, and it does not perform a snapshot.
If the name of a table that you want to include in a snapshot contains a dot (`.`), a space, or some other non-alphanumeric character, you must escape the table name in double quotes. For example, to include a table that exists in the `public` schema and that has the name `My.Table`, use the following format: `"public.\"My.Table\""`.
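For example, a `data-collections` array that includes such a table might look like the following (shown here only to illustrate the escaping):

{"data-collections": ["public.\"My.Table\""], "type": "incremental"}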
Prerequisites
- A signaling data collection exists on the source database.
- The signaling data collection is specified in the `signal.data.collection` property.
Using a source signaling channel to trigger an incremental snapshot
Send a SQL query to add the ad hoc incremental snapshot request to the signaling table:
INSERT INTO <signalTable> (id, type, data) VALUES ('<id>', '<snapshotType>', '{"data-collections": ["<fullyQualfiedTableName>","<fullyQualfiedTableName>"],"type":"<snapshotType>","additional-conditions":[{"data-collection": "<fullyQualfiedTableName>", "filter": "<additional-condition>"}]}');
For example,
INSERT INTO myschema.debezium_signal (id, type, data) 1
values ('ad-hoc-1', 2
    'execute-snapshot', 3
    '{"data-collections": ["schema1.table1", "schema1.table2"], 4
    "type":"incremental", 5
    "additional-conditions":[{"data-collection": "schema1.table1" ,"filter":"color=\'blue\'"}]}'); 6
The values of the `id`, `type`, and `data` parameters in the command correspond to the fields of the signaling table. The following table describes the parameters in the example:

Table 2.3. Descriptions of fields in a SQL command for sending an incremental snapshot signal to the signaling table

Item | Value | Description
---|---|---
1 | `myschema.debezium_signal` | Specifies the fully-qualified name of the signaling table on the source database.
2 | `ad-hoc-1` | The `id` parameter specifies an arbitrary string that is assigned as the `id` identifier for the signal request. Use this string to identify logging messages to entries in the signaling table. Debezium does not use this string. Rather, during the snapshot, Debezium generates its own `id` string as a watermarking signal.
3 | `execute-snapshot` | The `type` parameter specifies the operation that the signal is intended to trigger.
4 | `data-collections` | A required component of the `data` field of a signal that specifies an array of table names or regular expressions to match table names to include in the snapshot. The array lists regular expressions that use the format `schema.table` to match the fully-qualified names of the tables. This format is the same as the one that you use to specify the name of the connector’s signaling table.
5 | `incremental` | An optional `type` component of the `data` field of a signal that specifies the type of snapshot operation to run. Valid values are `incremental` and `blocking`. If you do not specify a value, the connector defaults to performing an incremental snapshot.
6 | `additional-conditions` | An optional array that specifies a set of additional conditions that the connector evaluates to determine the subset of records to include in a snapshot. Each additional condition is an object with `data-collection` and `filter` properties. You can specify different filters for each data collection. The `data-collection` property is the fully-qualified name of the data collection that the filter applies to. For more information about the `additional-conditions` parameter, see Section 2.1.2.3.2, “Running ad hoc incremental snapshots with additional-conditions”.
2.1.2.3.2. Running ad hoc incremental snapshots with additional-conditions
If you want a snapshot to include only a subset of the content in a table, you can modify the signal request by appending an `additional-conditions` parameter to the snapshot signal.
The SQL query for a typical snapshot takes the following form:
SELECT * FROM <tableName> ....
By adding an `additional-conditions` parameter, you append a `WHERE` condition to the SQL query, as in the following example:
SELECT * FROM <data-collection> WHERE <filter> ....
The following example shows a SQL query to send an ad hoc incremental snapshot request with an additional condition to the signaling table:
INSERT INTO <signalTable> (id, type, data) VALUES ('<id>', '<snapshotType>', '{"data-collections": ["<fullyQualfiedTableName>","<fullyQualfiedTableName>"],"type":"<snapshotType>","additional-conditions":[{"data-collection": "<fullyQualfiedTableName>", "filter": "<additional-condition>"}]}');
For example, suppose you have a `products` table that contains the following columns:

- `id` (primary key)
- `color`
- `quantity`

If you want an incremental snapshot of the `products` table to include only the data items where `color=blue`, you can use the following SQL statement to trigger the snapshot:
INSERT INTO myschema.debezium_signal (id, type, data) VALUES('ad-hoc-1', 'execute-snapshot', '{"data-collections": ["schema1.products"],"type":"incremental", "additional-conditions":[{"data-collection": "schema1.products", "filter": "color=blue"}]}');
The `additional-conditions` parameter also enables you to pass conditions that are based on more than one column. For example, using the `products` table from the previous example, you can submit a query that triggers an incremental snapshot that includes the data of only those items for which `color=blue` and `quantity>10`:
INSERT INTO myschema.debezium_signal (id, type, data) VALUES('ad-hoc-1', 'execute-snapshot', '{"data-collections": ["schema1.products"],"type":"incremental", "additional-conditions":[{"data-collection": "schema1.products", "filter": "color=blue AND quantity>10"}]}');
The following example shows the JSON for an incremental snapshot event that is captured by a connector.
Example 2.1. Incremental snapshot event message
{ "before":null, "after": { "pk":"1", "value":"New data" }, "source": { ... "snapshot":"incremental" 1 }, "op":"r", 2 "ts_ms":"1620393591654", "ts_us":"1620393591654547", "ts_ns":"1620393591654547920", "transaction":null }
Item | Field name | Description
---|---|---
1 | `snapshot` | Specifies the type of snapshot operation to run.
2 | `op` | Specifies the event type.
2.1.2.3.3. Using the Kafka signaling channel to trigger an incremental snapshot
You can send a message to the configured Kafka topic to request the connector to run an ad hoc incremental snapshot.
The key of the Kafka message must match the value of the `topic.prefix` connector configuration option.
The value of the message is a JSON object with `type` and `data` fields. The signal type is `execute-snapshot`, and the `data` field must have the following fields:
Field | Default | Value
---|---|---
`type` | `incremental` | The type of the snapshot to be executed. Currently Debezium supports the `incremental` and `blocking` types.
`data-collections` | N/A | An array of comma-separated regular expressions that match the fully-qualified names of tables to include in the snapshot.
`additional-conditions` | N/A | An optional array of additional conditions that specifies criteria that the connector evaluates to designate a subset of records to include in a snapshot.
Example 2.2. An `execute-snapshot` Kafka message
Key = `test_connector` Value = `{"type":"execute-snapshot","data": {"data-collections": ["{collection-container}.table1", "{collection-container}.table2"], "type": "INCREMENTAL"}}`
Ad hoc incremental snapshots with additional-conditions
Debezium uses the `additional-conditions` field to select a subset of a table’s content.
Typically, when Debezium runs a snapshot, it runs a SQL query such as:
SELECT * FROM <tableName> ….
When the snapshot request includes an `additional-conditions` property, the `data-collection` and `filter` parameters of the property are appended to the SQL query, for example:
SELECT * FROM <data-collection> WHERE <filter> ….
For example, given a `products` table with the columns `id` (primary key), `color`, and `brand`, if you want a snapshot to include only content for which `color='blue'`, when you request the snapshot, you could add the `additional-conditions` property to filter the content:
Key = `test_connector` Value = `{"type":"execute-snapshot","data": {"data-collections": ["schema1.products"], "type": "INCREMENTAL", "additional-conditions": [{"data-collection": "schema1.products" ,"filter":"color='blue'"}]}}`
You can also use the `additional-conditions` property to pass conditions based on multiple columns. For example, using the same `products` table as in the previous example, if you want a snapshot to include only the content from the `products` table for which `color='blue'` and `brand='MyBrand'`, you could send the following request:
Key = `test_connector` Value = `{"type":"execute-snapshot","data": {"data-collections": ["schema1.products"], "type": "INCREMENTAL", "additional-conditions": [{"data-collection": "schema1.products" ,"filter":"color='blue' AND brand='MyBrand'"}]}}`
2.1.2.3.4. Stopping an incremental snapshot
In some situations, it might be necessary to stop an incremental snapshot. For example, you might realize that the snapshot was not configured correctly, or you might want to ensure that resources are available for other database operations. You can stop a snapshot that is already running by sending a signal to the signaling table on the source database.
You submit a stop snapshot signal to the signaling table by sending it in a SQL `INSERT` query. The stop-snapshot signal specifies the `type` of the snapshot operation as `incremental`, and optionally specifies the tables that you want to omit from the currently running snapshot. After Debezium detects the change in the signaling table, it reads the signal, and stops the incremental snapshot operation if it’s in progress.
Additional resources
You can also stop an incremental snapshot by sending a JSON message to the Kafka signaling topic.
Prerequisites
- A signaling data collection exists on the source database.
- The signaling data collection is specified in the `signal.data.collection` property.
Using a source signaling channel to stop an incremental snapshot
Send a SQL query to stop the ad hoc incremental snapshot to the signaling table:
INSERT INTO <signalTable> (id, type, data) values ('<id>', 'stop-snapshot', '{"data-collections": ["<fullyQualfiedTableName>","<fullyQualfiedTableName>"],"type":"incremental"}');
For example,
INSERT INTO myschema.debezium_signal (id, type, data) 1
values ('ad-hoc-1', 2
    'stop-snapshot', 3
    '{"data-collections": ["schema1.table1", "schema1.table2"], 4
    "type":"incremental"}'); 5
The values of the `id`, `type`, and `data` parameters in the signal command correspond to the fields of the signaling table. The following table describes the parameters in the example:

Table 2.6. Descriptions of fields in a SQL command for sending a stop incremental snapshot signal to the signaling table

Item | Value | Description
---|---|---
1 | `myschema.debezium_signal` | Specifies the fully-qualified name of the signaling table on the source database.
2 | `ad-hoc-1` | The `id` parameter specifies an arbitrary string that is assigned as the `id` identifier for the signal request. Use this string to identify logging messages to entries in the signaling table. Debezium does not use this string.
3 | `stop-snapshot` | The `type` parameter specifies the operation that the signal is intended to trigger.
4 | `data-collections` | An optional component of the `data` field of a signal that specifies an array of table names or regular expressions to match table names to remove from the snapshot. The array lists regular expressions which match tables by their fully-qualified names in the format `schema.table`. If you omit this component from the `data` field, the signal stops the entire incremental snapshot that is in progress.
5 | `incremental` | A required component of the `data` field of a signal that specifies the type of snapshot operation that is to be stopped. Currently, the only valid option is `incremental`. If you do not specify a `type` value, the signal fails to stop the incremental snapshot.
2.1.2.3.5. Using the Kafka signaling channel to stop an incremental snapshot
You can send a signal message to the configured Kafka signaling topic to stop an ad hoc incremental snapshot.
The key of the Kafka message must match the value of the `topic.prefix` connector configuration option.
The value of the message is a JSON object with `type` and `data` fields. The signal type is `stop-snapshot`, and the `data` field must have the following fields:
Field | Default | Value
---|---|---
`type` | `incremental` | The type of the snapshot to be executed. Currently Debezium supports only the `incremental` type.
`data-collections` | N/A | An optional array of comma-separated regular expressions that match the fully-qualified names of the tables to remove from the snapshot.
The following example shows a typical `stop-snapshot` Kafka message:
Key = `test_connector` Value = `{"type":"stop-snapshot","data": {"data-collections": ["schema1.table1", "schema1.table2"], "type": "INCREMENTAL"}}`
2.1.2.4. Blocking snapshots
To provide more flexibility in managing snapshots, Debezium includes a supplementary ad hoc snapshot mechanism, known as a blocking snapshot. Blocking snapshots rely on the Debezium mechanism for sending signals to a Debezium connector.
A blocking snapshot behaves just like an initial snapshot, except that you can trigger it at run time.
You might want to run a blocking snapshot rather than use the standard initial snapshot process in the following situations:
- You add a new table and you want to complete the snapshot while the connector is running.
- You add a large table, and you want the snapshot to complete in less time than is possible with an incremental snapshot.
Blocking snapshot process
When you run a blocking snapshot, Debezium stops streaming, and then initiates a snapshot of the specified table, following the same process that it uses during an initial snapshot. After the snapshot completes, the streaming is resumed.
Configure snapshot
You can set the following properties in the `data` component of a signal:

- `data-collections`: Specifies the tables that must be included in the snapshot.
- `additional-conditions`: Specifies filters for the snapshot. You can specify different filters for different tables.
  - The `data-collection` property is the fully-qualified name of the table to which the filter applies.
  - The `filter` property has the same value that is used in the `snapshot.select.statement.overrides` property.

For example:
{"type": "blocking", "data-collections": ["schema1.table1", "schema1.table2"], "additional-conditions": [{"data-collection": "schema1.table1", "filter": "SELECT * FROM [schema1].[table1] WHERE column1 = 0 ORDER BY column2 DESC"}, {"data-collection": "schema1.table2", "filter": "SELECT * FROM [schema1].[table2] WHERE column2 > 0"}]}
Possible duplicates
A delay might exist between the time that you send the signal to trigger the snapshot, and the time when streaming stops and the snapshot starts. As a result of this delay, after the snapshot completes, the connector might emit some event records that duplicate records captured by the snapshot.
2.1.2.5. How Debezium Db2 connectors read change-data tables
After a complete snapshot, when a Debezium Db2 connector starts for the first time, the connector identifies the change-data table for each source table that is in capture mode. The connector does the following for each change-data table:
- Reads change events that were created between the last stored, highest LSN and the current, highest LSN.
- Orders the change events according to the commit LSN and the change LSN for each event. This ensures that the connector emits the change events in the order in which the table changes occurred.
- Passes commit and change LSNs as offsets to Kafka Connect.
- Stores the highest LSN that the connector passed to Kafka Connect.
After a restart, the connector resumes emitting change events from the offset (commit and change LSNs) where it left off. While the connector is running and emitting change events, if you remove a table from capture mode or add a table to capture mode, the connector detects the change, and modifies its behavior accordingly.
2.1.2.6. Default names of Kafka topics that receive Debezium Db2 change event records
By default, the Db2 connector writes change events for all of the `INSERT`, `UPDATE`, and `DELETE` operations that occur in a table to a single Apache Kafka topic that is specific to that table. The connector uses the following convention to name change event topics:

topicPrefix.schemaName.tableName
The following list provides definitions for the components of the default name:
- topicPrefix: The topic prefix as specified by the `topic.prefix` connector configuration property.
- schemaName: The name of the schema in which the operation occurred.
- tableName: The name of the table in which the operation occurred.
For example, consider a Db2 installation with the `mydatabase` database, which contains four tables: `PRODUCTS`, `PRODUCTS_ON_HAND`, `CUSTOMERS`, and `ORDERS` that are in the `MYSCHEMA` schema. The connector would emit events to these four Kafka topics:

- `mydatabase.MYSCHEMA.PRODUCTS`
- `mydatabase.MYSCHEMA.PRODUCTS_ON_HAND`
- `mydatabase.MYSCHEMA.CUSTOMERS`
- `mydatabase.MYSCHEMA.ORDERS`
The connector applies similar naming conventions to label its internal database schema history topics, schema change topics, and transaction metadata topics.
If the default topic names do not meet your requirements, you can configure custom topic names. To configure custom topic names, you specify regular expressions in the logical topic routing SMT. For more information about using the logical topic routing SMT to customize topic naming, see Topic routing.
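As a sketch only (the regular expression and replacement values are illustrative), a routing configuration that uses the Debezium `io.debezium.transforms.ByLogicalTableRouter` SMT might look like the following fragment of the connector configuration:

"transforms": "Reroute",
"transforms.Reroute.type": "io.debezium.transforms.ByLogicalTableRouter",
"transforms.Reroute.topic.regex": "mydatabase\\.MYSCHEMA\\.(.*)",
"transforms.Reroute.topic.replacement": "mydatabase.all_tables"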
2.1.2.7. How Debezium Db2 connectors handle database schema changes
When a database client queries a database, the client uses the database’s current schema. However, the database schema can be changed at any time, which means that the connector must be able to identify what the schema was at the time each insert, update, or delete operation was recorded. Also, a connector cannot necessarily apply the current schema to every event. If an event is relatively old, it’s possible that it was recorded before the current schema was applied.
To ensure correct processing of events that occur after a schema change, the Debezium Db2 connector stores a snapshot of the new schema based on the structures of the Db2 change data tables, which mirror the structures of their associated data tables. The connector stores the table schema information, together with the LSN of operations that result in schema changes, in the database schema history Kafka topic. The connector uses the stored schema representation to produce change events that correctly mirror the structure of tables at the time of each insert, update, or delete operation.
When the connector restarts after either a crash or a graceful stop, it resumes reading entries in the Db2 change data tables from the last position that it read. Based on the schema information that the connector reads from the database schema history topic, the connector applies the table structures that existed at the position where the connector restarts.
If you update the schema of a Db2 table that is in capture mode, it’s important that you also update the schema of the corresponding change table. You must be a Db2 database administrator with elevated privileges to update the database schema. For more information about how to update Db2 database schema in Debezium environments, see Schema history evolution.
The database schema history topic is for internal connector use only. Optionally, the connector can also emit schema change events to a different topic that is intended for consumer applications.
Additional resources
- Default names for topics that receive Debezium event records.
2.1.2.8. About the Debezium Db2 connector schema change topic
You can configure a Debezium Db2 connector to produce schema change events that describe schema changes that are applied to tables in the database.
Debezium emits a message to the schema change topic when:
- A new table goes into capture mode.
- A table is removed from capture mode.
- During a database schema update, there is a change in the schema for a table that is in capture mode.
The connector writes schema change events to a Kafka schema change topic that has the name `<topicPrefix>`, where `<topicPrefix>` is the topic prefix that is specified in the `topic.prefix` connector configuration property.
The schema for the schema change event has the following elements:
- `name`: The name of the schema change event message.
- `type`: The type of the change event message.
- `version`: The version of the schema. The version is an integer that is incremented each time the schema is changed.
- `fields`: The fields that are included in the change event message.
Example: Schema of the Db2 connector schema change topic
The following example shows a typical schema in JSON format.
{ "schema": { "type": "struct", "fields": [ { "type": "string", "optional": false, "field": "databaseName" } ], "optional": false, "name": "io.debezium.connector.db2.SchemaChangeKey", "version": 1 }, "payload": { "databaseName": "inventory" } }
Messages that the connector sends to the schema change topic contain a payload that includes the following elements:
- `databaseName`: The name of the database to which the statements are applied. The value of `databaseName` serves as the message key.
- `pos`: The position in the transaction log where the statements appear.
- `tableChanges`: A structured representation of the entire table schema after the schema change. The `tableChanges` field contains an array that includes entries for each column of the table. Because the structured representation presents data in JSON or Avro format, consumers can easily read messages without first processing them through a DDL parser.
For a table that is in capture mode, the connector not only stores the history of schema changes in the schema change topic, but also in an internal database schema history topic. The internal database schema history topic is for connector use only and it is not intended for direct use by consuming applications. Ensure that applications that require notifications about schema changes consume that information only from the schema change topic.
Never partition the database schema history topic. For the database schema history topic to function correctly, it must maintain a consistent, global order of the event records that the connector emits to it.
To ensure that the topic is not split among partitions, set the partition count for the topic by using one of the following methods:

- If you create the database schema history topic manually, specify a partition count of 1.
- If you use the Apache Kafka broker to create the database schema history topic automatically, set the value of the Kafka `num.partitions` configuration option to 1.
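For example, if you create the topic manually, a command along the following lines sets the partition count to 1 (the broker address and topic name are placeholders):

bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --topic schemahistory.mydatabase --partitions 1 --replication-factor 1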
The format of messages that a connector emits to its schema change topic is in an incubating state and can change without notice.
Example: Message emitted to the Db2 connector schema change topic
The following example shows a message in the schema change topic. The message contains a logical representation of the table schema.
{ "schema": { ... }, "payload": { "source": { "version": "2.7.3.Final", "connector": "db2", "name": "db2", "ts_ms": 0, "snapshot": "true", "db": "testdb", "schema": "DB2INST1", "table": "CUSTOMERS", "change_lsn": null, "commit_lsn": "00000025:00000d98:00a2", "event_serial_no": null }, "ts_ms": 1588252618953, 1 "databaseName": "TESTDB", 2 "schemaName": "DB2INST1", "ddl": null, 3 "tableChanges": [ 4 { "type": "CREATE", 5 "id": "\"DB2INST1\".\"CUSTOMERS\"", 6 "table": { 7 "defaultCharsetName": null, "primaryKeyColumnNames": [ 8 "ID" ], "columns": [ 9 { "name": "ID", "jdbcType": 4, "nativeType": null, "typeName": "int identity", "typeExpression": "int identity", "charsetName": null, "length": 10, "scale": 0, "position": 1, "optional": false, "autoIncremented": false, "generated": false }, { "name": "FIRST_NAME", "jdbcType": 12, "nativeType": null, "typeName": "varchar", "typeExpression": "varchar", "charsetName": null, "length": 255, "scale": null, "position": 2, "optional": false, "autoIncremented": false, "generated": false }, { "name": "LAST_NAME", "jdbcType": 12, "nativeType": null, "typeName": "varchar", "typeExpression": "varchar", "charsetName": null, "length": 255, "scale": null, "position": 3, "optional": false, "autoIncremented": false, "generated": false }, { "name": "EMAIL", "jdbcType": 12, "nativeType": null, "typeName": "varchar", "typeExpression": "varchar", "charsetName": null, "length": 255, "scale": null, "position": 4, "optional": false, "autoIncremented": false, "generated": false } ], "attributes": [ 10 { "customAttribute": "attributeValue" } ] } } ] } }
Item | Field name | Description |
---|---|---|
1 | ts_ms | Optional field that displays the time at which the connector processed the event. The time is based on the system clock in the JVM running the Kafka Connect task. In the source object, ts_ms indicates the time that the change was made in the database. By comparing the value for payload.source.ts_ms with the value for payload.ts_ms, you can determine the lag between the source database update and Debezium. |
2 | databaseName, schemaName | Identifies the database and the schema that contain the change. |
3 | ddl | Always null for the Db2 connector, as shown in the example. |
4 | tableChanges | An array of one or more items that contain the schema changes generated by a DDL command. |
5 | type | Describes the kind of change. In this example, the value is CREATE, which indicates that the table was created. |
6 | id | Full identifier of the table that was created, altered, or dropped. |
7 | table | Represents table metadata after the applied change. |
8 | primaryKeyColumnNames | List of columns that compose the table's primary key. |
9 | columns | Metadata for each column in the changed table. |
10 | attributes | Custom attribute metadata for each table change. |
In messages that the connector sends to the schema change topic, the message key is the name of the database that contains the schema change. In the following example, the payload
field contains the key:
{ "schema": { "type": "struct", "fields": [ { "type": "string", "optional": false, "field": "databaseName" } ], "optional": false, "name": "io.debezium.connector.db2.SchemaChangeKey", "version": 1 }, "payload": { "databaseName": "TESTDB" } }
2.1.2.9. Debezium Db2 connector-generated events that represent transaction boundaries
Debezium can generate events that represent transaction boundaries and that enrich change data event messages.
Debezium registers and receives metadata only for transactions that occur after you deploy the connector. Metadata for transactions that occur before you deploy the connector is not available.
Debezium generates transaction boundary events for the BEGIN and END delimiters in every transaction. Transaction boundary events contain the following fields:
status
- BEGIN or END.
id
- String representation of the unique transaction identifier.
ts_ms
- The time of a transaction boundary event (BEGIN or END event) at the data source. If the data source does not provide Debezium with the event time, then the field instead represents the time at which Debezium processes the event.
event_count (for END events)
- Total number of events emitted by the transaction.
data_collections (for END events)
- An array of pairs of data_collection and event_count elements that indicates the number of events that the connector emits for changes that originate from a data collection.
Example
{ "status": "BEGIN", "id": "00000025:00000d08:0025", "ts_ms": 1486500577125, "event_count": null, "data_collections": null } { "status": "END", "id": "00000025:00000d08:0025", "ts_ms": 1486500577691, "event_count": 2, "data_collections": [ { "data_collection": "testDB.dbo.tablea", "event_count": 1 }, { "data_collection": "testDB.dbo.tableb", "event_count": 1 } ] }
Unless overridden via the topic.transaction option, the connector emits transaction events to the <topic.prefix>.transaction topic.
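For example, assuming a topic.prefix of inventory-connector-db2 and that transaction metadata is enabled in the connector configuration (provide.transaction.metadata: true), a sketch of reading the transaction boundary events looks like this; the prefix and broker address are illustrative assumptions.
bin/kafka-console-consumer.sh \
  --bootstrap-server localhost:9092 \
  --topic inventory-connector-db2.transaction \
  --from-beginning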
Data change event enrichment
When transaction metadata is enabled, the connector enriches the change event Envelope with a new transaction field. This field provides information about every event in the form of a composite of fields:
id
- String representation of the unique transaction identifier.
total_order
- The absolute position of the event among all events generated by the transaction.
data_collection_order
- The per-data collection position of the event among all events that were emitted by the transaction.
Following is an example of a message:
{ "before": null, "after": { "pk": "2", "aa": "1" }, "source": { ... }, "op": "c", "ts_ms": "1580390884335", "ts_us": "1580390884335875", "ts_ns": "1580390884335875412", "transaction": { "id": "00000025:00000d08:0025", "total_order": "1", "data_collection_order": "1" } }
2.1.3. Descriptions of Debezium Db2 connector data change events
The Debezium Db2 connector generates a data change event for each row-level INSERT
, UPDATE
, and DELETE
operation. Each event contains a key and a value. The structure of the key and the value depends on the table that was changed.
Debezium and Kafka Connect are designed around continuous streams of event messages. However, the structure of these events may change over time, which can be difficult for consumers to handle. To address this, each event contains the schema for its content or, if you are using a schema registry, a schema ID that a consumer can use to obtain the schema from the registry. This makes each event self-contained.
The following skeleton JSON shows the basic four parts of a change event. However, how you configure the Kafka Connect converter that you choose to use in your application determines the representation of these four parts in change events. A schema
field is in a change event only when you configure the converter to produce it. Likewise, the event key and event payload are in a change event only if you configure a converter to produce it. If you use the JSON converter and you configure it to produce all four basic change event parts, change events have this structure:
{ "schema": { 1 ... }, "payload": { 2 ... }, "schema": { 3 ... }, "payload": { 4 ... }, }
Item | Field name | Description |
---|---|---|
1 | schema | The first schema field is part of the event key. It specifies a Kafka Connect schema that describes what is in the event key's payload portion. In other words, it describes the structure of the key for the row that was changed. |
2 | payload | The first payload field is part of the event key. It has the structure described by the previous schema field and it contains the key for the row that was changed. |
3 | schema | The second schema field is part of the event value. It specifies the Kafka Connect schema that describes what is in the event value's payload portion. In other words, it describes the structure of the row that was changed. |
4 | payload | The second payload field is part of the event value. It has the structure described by the previous schema field and it contains the actual data for the row that was changed. |
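Which of these parts appears in practice depends on the converter settings in your Kafka Connect configuration. As a minimal sketch only, the following hypothetical worker properties enable the JSON converter with schemas; setting the schemas.enable options to false suppresses the schema parts and leaves only the payloads. The file name and location are assumptions for illustration.
cat <<EOF >>connect-distributed.properties
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=true
value.converter.schemas.enable=true
EOF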
By default, the connector streams change event records to topics with names that are the same as the event’s originating table. For more information, see topic names.
The Debezium Db2 connector ensures that all Kafka Connect schema names adhere to the Avro schema name format. This means that the logical server name must start with a Latin letter or an underscore, that is, a-z, A-Z, or _. Each remaining character in the logical server name and each character in the database and table names must be a Latin letter, a digit, or an underscore, that is, a-z, A-Z, 0-9, or _. If there is an invalid character, it is replaced with an underscore character.
This can lead to unexpected conflicts if the logical server name, a database name, or a table name contains invalid characters, and the only characters that distinguish names from one another are invalid and thus replaced with underscores.
Also, Db2 names for databases, schemas, and tables can be case sensitive. This means that the connector could emit event records for more than one table to the same Kafka topic.
Details are in the following topics:
2.1.3.1. About keys in Debezium Db2 change events
A change event’s key contains the schema for the changed table’s key and the changed row’s actual key. Both the schema and its corresponding payload contain a field for each column in the changed table’s PRIMARY KEY
(or unique constraint) at the time the connector created the event.
Consider the following customers
table, which is followed by an example of a change event key for this table.
Example table
CREATE TABLE customers ( ID INTEGER IDENTITY(1001,1) NOT NULL PRIMARY KEY, FIRST_NAME VARCHAR(255) NOT NULL, LAST_NAME VARCHAR(255) NOT NULL, EMAIL VARCHAR(255) NOT NULL UNIQUE );
Example change event key
Every change event that captures a change to the customers
table has the same event key schema. For as long as the customers
table has the previous definition, every change event that captures a change to the customers
table has the following key structure. In JSON, it looks like this:
{ "schema": { 1 "type": "struct", "fields": [ 2 { "type": "int32", "optional": false, "field": "ID" } ], "optional": false, 3 "name": "mydatabase.MYSCHEMA.CUSTOMERS.Key" 4 }, "payload": { 5 "ID": 1004 } }
Item | Field name | Description |
---|---|---|
1 | schema | The schema portion of the key specifies a Kafka Connect schema that describes what is in the key's payload portion. |
2 | fields | Specifies each field that is expected in the payload, including each field's name, type, and whether it is required. |
3 | optional | Indicates whether the event key must contain a value in its payload field. In this example, a value in the key's payload is required. |
4 | mydatabase.MYSCHEMA.CUSTOMERS.Key | Name of the schema that defines the structure of the key's payload. This schema describes the structure of the primary key for the table that was changed. Key schema names have the format connector-name.database-name.table-name.Key. |
5 | payload | Contains the key for the row for which this change event was generated. In this example, the key contains a single ID field whose value is 1004. |
2.1.3.2. About values in Debezium Db2 change events
The value in a change event is a bit more complicated than the key. Like the key, the value has a schema
section and a payload
section. The schema
section contains the schema that describes the Envelope
structure of the payload
section, including its nested fields. Change events for operations that create, update or delete data all have a value payload with an envelope structure.
Consider the same sample table that was used to show an example of a change event key:
Example table
CREATE TABLE customers ( ID INTEGER IDENTITY(1001,1) NOT NULL PRIMARY KEY, FIRST_NAME VARCHAR(255) NOT NULL, LAST_NAME VARCHAR(255) NOT NULL, EMAIL VARCHAR(255) NOT NULL UNIQUE );
The event value portion of every change event for the customers
table specifies the same schema. The event value’s payload varies according to the event type:
create events
The following example shows the value portion of a change event that the connector generates for an operation that creates data in the customers
table:
{ "schema": { 1 "type": "struct", "fields": [ { "type": "struct", "fields": [ { "type": "int32", "optional": false, "field": "ID" }, { "type": "string", "optional": false, "field": "FIRST_NAME" }, { "type": "string", "optional": false, "field": "LAST_NAME" }, { "type": "string", "optional": false, "field": "EMAIL" } ], "optional": true, "name": "mydatabase.MYSCHEMA.CUSTOMERS.Value", 2 "field": "before" }, { "type": "struct", "fields": [ { "type": "int32", "optional": false, "field": "ID" }, { "type": "string", "optional": false, "field": "FIRST_NAME" }, { "type": "string", "optional": false, "field": "LAST_NAME" }, { "type": "string", "optional": false, "field": "EMAIL" } ], "optional": true, "name": "mydatabase.MYSCHEMA.CUSTOMERS.Value", "field": "after" }, { "type": "struct", "fields": [ { "type": "string", "optional": false, "field": "version" }, { "type": "string", "optional": false, "field": "connector" }, { "type": "string", "optional": false, "field": "name" }, { "type": "int64", "optional": false, "field": "ts_ms" }, { "type": "int64", "optional": false, "field": "ts_us" }, { "type": "int64", "optional": false, "field": "ts_ns" }, { "type": "boolean", "optional": true, "default": false, "field": "snapshot" }, { "type": "string", "optional": false, "field": "db" }, { "type": "string", "optional": false, "field": "schema" }, { "type": "string", "optional": false, "field": "table" }, { "type": "string", "optional": true, "field": "change_lsn" }, { "type": "string", "optional": true, "field": "commit_lsn" }, ], "optional": false, "name": "io.debezium.connector.db2.Source", 3 "field": "source" }, { "type": "string", "optional": false, "field": "op" }, { "type": "int64", "optional": true, "field": "ts_ms" }, { "type": "int64", "optional": true, "field": "ts_us" }, { "type": "int64", "optional": true, "field": "ts_ns" } ], "optional": false, "name": "mydatabase.MYSCHEMA.CUSTOMERS.Envelope" 4 }, "payload": { 5 "before": null, 6 "after": { 7 "ID": 1005, "FIRST_NAME": "john", "LAST_NAME": "doe", "EMAIL": "john.doe@example.org" }, "source": { 8 "version": "2.7.3.Final", "connector": "db2", "name": "myconnector", "ts_ms": 1559729468470, "ts_us": 1559729468470476, "ts_ns": 1559729468470476000, "snapshot": false, "db": "mydatabase", "schema": "MYSCHEMA", "table": "CUSTOMERS", "change_lsn": "00000027:00000758:0003", "commit_lsn": "00000027:00000758:0005", }, "op": "c", 9 "ts_ms": 1559729471739, 10 "ts_us": 1559729471739762, 11 "ts_ns": 1559729471739762314 12 } }
Item | Field name | Description |
---|---|---|
1 | schema | The value's schema, which describes the structure of the value's payload. A change event's value schema is the same in every change event that the connector generates for a particular table. |
2 | name | In the schema section, each name field specifies the schema for a field in the value's payload. mydatabase.MYSCHEMA.CUSTOMERS.Value is the schema for the payload's before and after fields. This schema is specific to the customers table. |
3 | name | io.debezium.connector.db2.Source is the schema for the payload's source field. This schema is specific to the Db2 connector. The connector uses it for all events that it generates. |
4 | name | mydatabase.MYSCHEMA.CUSTOMERS.Envelope is the schema for the overall structure of the payload, where mydatabase is the database, MYSCHEMA is the schema, and CUSTOMERS is the table. |
5 | payload | The value's actual data. This is the information that the change event is providing. |
6 | before | An optional field that specifies the state of the row before the event occurred. When the op field is c for create, as it is in this example, the before field is null because this change event is for new content. |
7 | after | An optional field that specifies the state of the row after the event occurred. In this example, the after field contains the values of the new row's ID, FIRST_NAME, LAST_NAME, and EMAIL columns. |
8 | source | Mandatory field that describes the source metadata for the event. The source structure provides Db2 information about this change, such as the Debezium version, the connector name, the database, schema, and table, the timestamp of the change, and the change and commit LSNs. You can use it to compare this event with other events to determine whether this event occurred before, after, or as part of the same commit as other events. |
9 | op | Mandatory string that describes the type of operation that caused the connector to generate the event. In this example, c indicates that the operation created a row. Valid values are: c = create, u = update, d = delete, r = read (applies to only snapshots). |
10 | ts_ms | Optional field that displays the time at which the connector processed the event. The time is based on the system clock in the JVM running the Kafka Connect task. |
update events
The value of a change event for an update in the sample customers
table has the same schema as a create event for that table. Likewise, the update event value’s payload has the same structure. However, the event value payload contains different values in an update event. Here is an example of a change event value in an event that the connector generates for an update in the customers
table:
{ "schema": { ... }, "payload": { "before": { 1 "ID": 1005, "FIRST_NAME": "john", "LAST_NAME": "doe", "EMAIL": "john.doe@example.org" }, "after": { 2 "ID": 1005, "FIRST_NAME": "john", "LAST_NAME": "doe", "EMAIL": "noreply@example.org" }, "source": { 3 "version": "2.7.3.Final", "connector": "db2", "name": "myconnector", "ts_ms": 1559729995937, "ts_us": 1559729995937497, "ts_ns": 1559729995937497000, "snapshot": false, "db": "mydatabase", "schema": "MYSCHEMA", "table": "CUSTOMERS", "change_lsn": "00000027:00000ac0:0002", "commit_lsn": "00000027:00000ac0:0007", }, "op": "u", 4 "ts_ms": 1559729998706, 5 "ts_us": 1559729998706647, 6 "ts_ns": 1559729998706647825 7 } }
Item | Field name | Description |
---|---|---|
1 | before | An optional field that specifies the state of the row before the event occurred. In an update event value, the before field contains a field for each table column and the value that was in that column before the database commit. In this example, the EMAIL value is john.doe@example.org. |
2 | after | An optional field that specifies the state of the row after the event occurred. You can compare the before and after structures to determine what the update to this row was. In this example, the EMAIL value is now noreply@example.org. |
3 | source | Mandatory field that describes the source metadata for the event. The source field structure contains the same fields as in a create event, but some values are different, for example, the LSN values. |
4 | op | Mandatory string that describes the type of operation. In an update event value, the op field value is u, signifying that this row changed because of an update. |
5 | ts_ms | Optional field that displays the time at which the connector processed the event. The time is based on the system clock in the JVM running the Kafka Connect task. |
Updating the columns for a row’s primary/unique key changes the value of the row’s key. When a key changes, Debezium outputs three events: a DELETE
event and a tombstone event with the old key for the row, followed by an event with the new key for the row.
delete events
The value in a delete change event has the same schema
portion as create and update events for the same table. The event value payload
in a delete event for the sample customers
table looks like this:
{ "schema": { ... }, }, "payload": { "before": { 1 "ID": 1005, "FIRST_NAME": "john", "LAST_NAME": "doe", "EMAIL": "noreply@example.org" }, "after": null, 2 "source": { 3 "version": "2.7.3.Final", "connector": "db2", "name": "myconnector", "ts_ms": 1559730445243, "ts_us": 1559730445243482, "ts_ns": 1559730445243482000, "snapshot": false, "db": "mydatabase", "schema": "MYSCHEMA", "table": "CUSTOMERS", "change_lsn": "00000027:00000db0:0005", "commit_lsn": "00000027:00000db0:0007" }, "op": "d", 4 "ts_ms": 1559730450205, 5 "ts_us": 1559730450205521, 6 "ts_ns": 1559730450205521475 7 } }
Item | Field name | Description |
---|---|---|
1 | before | Optional field that specifies the state of the row before the event occurred. In a delete event value, the before field contains the values that were in the row before it was deleted with the database commit. |
2 | after | Optional field that specifies the state of the row after the event occurred. In a delete event value, the after field is null, signifying that the row no longer exists. |
3 | source | Mandatory field that describes the source metadata for the event. In a delete event value, the source field structure is the same as in create and update events for the same table, but some values are different, for example, the LSN values. |
4 | op | Mandatory string that describes the type of operation. The op field value is d, signifying that this row was deleted. |
5 | ts_ms | Optional field that displays the time at which the connector processed the event. The time is based on the system clock in the JVM running the Kafka Connect task. |
A delete change event record provides a consumer with the information it needs to process the removal of this row. The old values are included because some consumers might require them in order to properly handle the removal.
Db2 connector events are designed to work with Kafka log compaction. Log compaction enables removal of some older messages as long as at least the most recent message for every key is kept. This lets Kafka reclaim storage space while ensuring that the topic contains a complete data set and can be used for reloading key-based state.
When a row is deleted, the delete event value still works with log compaction, because Kafka can remove all earlier messages that have that same key. However, for Kafka to remove all messages that have that same key, the message value must be null
. To make this possible, after Debezium’s Db2 connector emits a delete event, the connector emits a special tombstone event that has the same key but a null
value.
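For tombstones and log compaction to work as described, the topic must use the compact cleanup policy. As a sketch only, the following command enables compaction on one of the connector's table topics; the topic name and broker address are assumptions, and deployments that manage topics through Streams for Apache Kafka typically set this policy in a KafkaTopic resource instead.
bin/kafka-configs.sh --alter \
  --bootstrap-server localhost:9092 \
  --entity-type topics \
  --entity-name inventory-connector-db2.inventory.customers \
  --add-config cleanup.policy=compact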
2.1.4. How Debezium Db2 connectors map data types
For a complete description of the data types that Db2 supports, see Data Types in the Db2 documentation.
The Db2 connector represents changes to rows with events that are structured like the table in which the row exists. The event contains a field for each column value. How that value is represented in the event depends on the Db2 data type of the column. This section describes these mappings. If the default data type conversions do not meet your needs, you can create a custom converter for the connector.
Details are in the following sections:
Basic types
The following table describes how the connector maps each Db2 data type to a literal type and a semantic type in event fields.
- literal type describes how the value is represented using Kafka Connect schema types: INT8, INT16, INT32, INT64, FLOAT32, FLOAT64, BOOLEAN, STRING, BYTES, ARRAY, MAP, and STRUCT.
- semantic type describes how the Kafka Connect schema captures the meaning of the field using the name of the Kafka Connect schema for the field.
Db2 data type | Literal type (schema type) | Semantic type (schema name) and Notes |
---|---|---|
|
| Only snapshots can be taken from tables with BOOLEAN type columns. Currently SQL Replication on Db2 does not support BOOLEAN, so Debezium can not perform CDC on those tables. Consider using a different type. |
|
| n/a |
|
| n/a |
|
| n/a |
|
| n/a |
|
| n/a |
|
|
|
|
|
|
|
|
|
|
| n/a |
|
| n/a |
|
| n/a |
|
| n/a |
|
| n/a |
|
|
|
|
|
|
|
| n/a |
|
| n/a |
|
| n/a |
|
|
|
If present, a column’s default value is propagated to the corresponding field’s Kafka Connect schema. Change events contain the field’s default value unless an explicit column value had been given. Consequently, there is rarely a need to obtain the default value from the schema.
Temporal types
Except for the DATETIMEOFFSET
data type, which contains time zone information, Db2 maps temporal types based on the value of the time.precision.mode
connector configuration property. The following sections describe these mappings:
time.precision.mode=adaptive
When the time.precision.mode
configuration property is set to adaptive
, the default, the connector determines the literal type and semantic type based on the column’s data type definition. This ensures that events exactly represent the values in the database.
Db2 data type | Literal type (schema type) | Semantic type (schema name) and Notes |
---|---|---|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
time.precision.mode=connect
When the time.precision.mode
configuration property is set to connect
, the connector uses Kafka Connect logical types. This may be useful when consumers can handle only the built-in Kafka Connect logical types and are unable to handle variable-precision time values. However, since Db2 supports tenth of a microsecond precision, the events generated by a connector with the connect
time precision result in a loss of precision when the database column has a fractional second precision value that is greater than 3.
Db2 data type | Literal type (schema type) | Semantic type (schema name) and Notes |
---|---|---|
|
|
|
|
|
|
|
|
|
Timestamp types
The DATETIME
type represents a timestamp without time zone information. Such columns are converted into an equivalent Kafka Connect value based on UTC. For example, the DATETIME
value "2018-06-20 15:13:16.945104" is represented by an io.debezium.time.Timestamp
with the value "1529507596000".
The timezone of the JVM running Kafka Connect and Debezium does not affect this conversion.
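You can check the arithmetic for the example value yourself. The following sketch assumes GNU date; it prints the number of seconds since the Unix epoch for the UTC wall-clock time, and multiplying by 1000 gives the millisecond value shown in the example.
date -u -d '2018-06-20 15:13:16' +%s   # prints 1529507596
echo $(( 1529507596 * 1000 ))          # prints 1529507596000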
Db2 data type | Literal type (schema type) | Semantic type (schema name) and Notes |
---|---|---|
|
|
|
|
|
|
2.1.5. Setting up Db2 to run a Debezium connector
For Debezium to capture change events that are committed to Db2 tables, a Db2 database administrator with the necessary privileges must configure tables in the database for change data capture. After you begin to run Debezium you can adjust the configuration of the capture agent to optimize performance.
For details about setting up Db2 for use with the Debezium connector, see the following sections:
2.1.5.1. Configuring Db2 tables for change data capture
To put tables into capture mode, Debezium provides a set of user-defined functions (UDFs) for your convenience. The procedure here shows how to install and run these management UDFs. Alternatively, you can run Db2 control commands to put tables into capture mode. The administrator must then enable CDC for each table that you want Debezium to capture.
Prerequisites
- You are logged in to Db2 as the db2inst1 user.
- On the Db2 host, the Debezium management UDFs are available in the $HOME/asncdctools/src directory. UDFs are available from the Debezium examples repository.
- The Db2 command bldrtn is on PATH, for example, by running export PATH=$PATH:/opt/ibm/db2/V11.5.0.0/samples/c/ with Db2 11.5.
Procedure
Compile the Debezium management UDFs on the Db2 server host by using the
bldrtn
command provided with Db2:
cd $HOME/asncdctools/src
bldrtn asncdc
Start the database if it is not already running. Replace
DB_NAME
with the name of the database that you want Debezium to connect to.
db2 start db DB_NAME
Ensure that JDBC can read the Db2 metadata catalog:
cd $HOME/sqllib/bnd
db2 connect to DB_NAME db2 bind db2schema.bnd blocking all grant public sqlerror continue
Ensure that the database was recently backed-up. The ASN agents must have a recent starting point to read from. If you need to perform a backup, run the following commands, which prune the data so that only the most recent version is available. If you do not need to retain the older versions of the data, specify
/dev/null
for the backup location.
Back up the database. Replace
DB_NAME
and BACK_UP_LOCATION
with appropriate values:
db2 backup db DB_NAME to BACK_UP_LOCATION
Restart the database:
db2 restart db DB_NAME
Connect to the database to install the Debezium management UDFs. It is assumed that you are logged in as the db2inst1 user, so the UDFs are installed under the db2inst1 user ID.
db2 connect to DB_NAME
Copy the Debezium management UDFs and set permissions for them:
cp $HOME/asncdctools/src/asncdc $HOME/sqllib/function
chmod 777 $HOME/sqllib/function
Enable the Debezium UDF that starts and stops the ASN capture agent:
db2 -tvmf $HOME/asncdctools/src/asncdc_UDF.sql
Create the ASN control tables:
db2 -tvmf $HOME/asncdctools/src/asncdctables.sql
Enable the Debezium UDF that adds tables to capture mode and removes tables from capture mode:
db2 -tvmf $HOME/asncdctools/src/asncdcaddremove.sql
After you set up the Db2 server, use the UDFs to control Db2 replication (ASN) with SQL commands. Some of the UDFs expect a return value, in which case you use the SQL VALUES statement to invoke them. For other UDFs, use the SQL CALL statement.
Start the ASN agent from an SQL client:
VALUES ASNCDC.ASNCDCSERVICES('start','asncdc');
or from the shell:
db2 "VALUES ASNCDC.ASNCDCSERVICES('start','asncdc');"
The preceding statement returns one of the following results:
-
asncap is already running
start -->
<COMMAND>
In this case, enter the specified
<COMMAND>
in the terminal window as shown in the following example:/database/config/db2inst1/sqllib/bin/asncap capture_schema=asncdc capture_server=SAMPLE &
-
Put tables into capture mode. Invoke the following statement for each table that you want to put into capture mode. Replace MYSCHEMA with the name of the schema that contains the table you want to put into capture mode. Likewise, replace MYTABLE with the name of the table to put into capture mode:
CALL ASNCDC.ADDTABLE('MYSCHEMA', 'MYTABLE');
Reinitialize the ASN service:
VALUES ASNCDC.ASNCDCSERVICES('reinit','asncdc');
Additional resource
2.1.5.2. Effect of Db2 capture agent configuration on server load and latency
When a database administrator enables change data capture for a source table, the capture agent begins to run. The agent reads new change event records from the transaction log and replicates the event records to a capture table. Between the time that a change is committed in the source table, and the time that the change appears in the corresponding change table, there is always a small latency interval. This latency interval represents a gap between when changes occur in the source table and when they become available for Debezium to stream to Apache Kafka.
Ideally, for applications that must respond quickly to changes in data, you want to maintain close synchronization between the source and capture tables. You might imagine that running the capture agent to continuously process change events as rapidly as possible might result in increased throughput and reduced latency — populating change tables with new event records as soon as possible after the events occur, in near real time. However, this is not necessarily the case. There is a performance penalty to pay in the pursuit of more immediate synchronization. Each time that the change agent queries the database for new event records, it increases the CPU load on the database host. The additional load on the server can have a negative effect on overall database performance, and potentially reduce transaction efficiency, especially during times of peak database use.
It’s important to monitor database metrics so that you know if the database reaches the point where the server can no longer support the capture agent’s level of activity. If you experience performance issues while running the capture agent, adjust capture agent settings to reduce CPU load.
2.1.5.3. Db2 capture agent configuration parameters
On Db2, the IBMSNAP_CAPPARMS
table contains parameters that control the behavior of the capture agent. You can adjust the values for these parameters to balance the configuration of the capture process to reduce CPU load and still maintain acceptable levels of latency.
Specific guidance about how to configure Db2 capture agent parameters is beyond the scope of this documentation.
In the IBMSNAP_CAPPARMS
table, the following parameters have the greatest effect on reducing CPU load:
COMMIT_INTERVAL
- Specifies the number of seconds that the capture agent waits to commit data to the change data tables.
- A higher value reduces the load on the database host and increases latency.
- The default value is 30.
SLEEP_INTERVAL
- Specifies the number of seconds that the capture agent waits to start a new commit cycle after it reaches the end of the active transaction log.
- A higher value reduces the load on the server, and increases latency.
- The default value is 5.
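As a sketch only, you can change these values from the db2 shell and then reinitialize the ASN service so that the capture agent picks up the new settings. The ASNCDC schema name below matches the capture schema used in the setup procedure earlier in this chapter; confirm the schema and the values that are appropriate for your environment before applying them.
db2 "UPDATE ASNCDC.IBMSNAP_CAPPARMS SET COMMIT_INTERVAL = 60, SLEEP_INTERVAL = 10"
db2 "VALUES ASNCDC.ASNCDCSERVICES('reinit','asncdc');"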
Additional resources
- For more information about capture agent parameters, see the Db2 documentation.
2.1.6. Deployment of Debezium Db2 connectors
You can use either of the following methods to deploy a Debezium Db2 connector:
Due to licensing requirements, the Debezium Db2 connector archive does not include the Db2 JDBC driver that Debezium requires to connect to a Db2 database. To enable the connector to access the database, you must add the driver to your connector environment. For information about how to obtain the driver, see Obtaining the Db2 JDBC driver.
Additional resources
2.1.6.1. Obtaining the Db2 JDBC driver
Due to licensing requirements, the Db2 JDBC driver file that Debezium requires to connect to a Db2 database is not included in the Debezium Db2 connector archive. The driver is available for download from Maven Central. Depending on the deployment method that you use, you retrieve the driver by adding a command to the Kafka Connect custom resource or to the Dockerfile that you use to build the connector image.
-
If you use Streams for Apache Kafka to add the connector to your Kafka Connect image, add the Maven Central location for the driver to
builds.plugins.artifact.url
in theKafkaConnect
custom resource as shown in Section 2.1.6.3, “Using Streams for Apache Kafka to deploy a Debezium Db2 connector”. -
If you use a Dockerfile to build a container image for the connector, insert a
curl
command in the Dockerfile to specify the URL for downloading the required driver file from Maven Central. For more information, see Section 2.1.6.4, “Deploying a Debezium Db2 connector by building a custom Kafka Connect container image from a Dockerfile”.
2.1.6.2. Db2 connector deployment using Streams for Apache Kafka
Beginning with Debezium 1.7, the preferred method for deploying a Debezium connector is to use Streams for Apache Kafka to build a Kafka Connect container image that includes the connector plug-in.
During the deployment process, you create and use the following custom resources (CRs):
-
A
KafkaConnect
CR that defines your Kafka Connect instance and includes information about the connector artifacts needs to include in the image. -
A
KafkaConnector
CR that provides details that include information the connector uses to access the source database. After Streams for Apache Kafka starts the Kafka Connect pod, you start the connector by applying theKafkaConnector
CR.
In the build specification for the Kafka Connect image, you can specify the connectors that are available to deploy. For each connector plug-in, you can also specify other components that you want to make available for deployment. For example, you can add Apicurio Registry artifacts, or the Debezium scripting component. When Streams for Apache Kafka builds the Kafka Connect image, it downloads the specified artifacts, and incorporates them into the image.
The spec.build.output
parameter in the KafkaConnect
CR specifies where to store the resulting Kafka Connect container image. Container images can be stored in a Docker registry, or in an OpenShift ImageStream. To store images in an ImageStream, you must create the ImageStream before you deploy Kafka Connect. ImageStreams are not created automatically.
If you use a KafkaConnect
resource to create a cluster, afterwards you cannot use the Kafka Connect REST API to create or update connectors. You can still use the REST API to retrieve information.
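For example, you can still query the Kafka Connect REST API from inside the Kafka Connect pod to list connectors or to check a connector's status. The pod name and namespace in the following sketch are assumptions based on the examples in this chapter.
oc exec -n debezium -it debezium-kafka-connect-cluster-connect-0 -- \
  curl -s http://localhost:8083/connectors
oc exec -n debezium -it debezium-kafka-connect-cluster-connect-0 -- \
  curl -s http://localhost:8083/connectors/inventory-connector-db2/status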
Additional resources
- Configuring Kafka Connect in Deploying and Managing Streams for Apache Kafka on OpenShift.
- Building a new container image automatically in Deploying and Managing Streams for Apache Kafka on OpenShift.
2.1.6.3. Using Streams for Apache Kafka to deploy a Debezium Db2 connector
With earlier versions of Streams for Apache Kafka, to deploy Debezium connectors on OpenShift, you were required to first build a Kafka Connect image for the connector. The current preferred method for deploying connectors on OpenShift is to use a build configuration in Streams for Apache Kafka to automatically build a Kafka Connect container image that includes the Debezium connector plug-ins that you want to use.
During the build process, the Streams for Apache Kafka Operator transforms input parameters in a KafkaConnect
custom resource, including Debezium connector definitions, into a Kafka Connect container image. The build downloads the necessary artifacts from the Red Hat Maven repository or another configured HTTP server.
The newly created container is pushed to the container registry that is specified in .spec.build.output
, and is used to deploy a Kafka Connect cluster. After Streams for Apache Kafka builds the Kafka Connect image, you create KafkaConnector
custom resources to start the connectors that are included in the build.
Prerequisites
- You have access to an OpenShift cluster on which the cluster Operator is installed.
- The Streams for Apache Kafka Operator is running.
- An Apache Kafka cluster is deployed as documented in Deploying and Managing Streams for Apache Kafka on OpenShift.
- Kafka Connect is deployed on Streams for Apache Kafka
- You have a Red Hat build of Debezium license.
-
The OpenShift
oc
CLI client is installed or you have access to the OpenShift Container Platform web console. Depending on how you intend to store the Kafka Connect build image, you need registry permissions or you must create an ImageStream resource:
- To store the build image in an image registry, such as Red Hat Quay.io or Docker Hub
- An account and permissions to create and manage images in the registry.
- To store the build image as a native OpenShift ImageStream
- An ImageStream resource is deployed to the cluster for storing new container images. You must explicitly create an ImageStream for the cluster. ImageStreams are not available by default. For more information about ImageStreams, see Managing image streams on OpenShift Container Platform.
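For example, as a sketch, you can create the ImageStream ahead of time with a name that matches the image name that you later specify in spec.build.output of the KafkaConnect custom resource (debezium-streams-connect in the example that follows); the namespace is an assumption.
oc create imagestream debezium-streams-connect -n debezium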
Procedure
- Log in to the OpenShift cluster.
Create a Debezium
KafkaConnect
custom resource (CR) for the connector, or modify an existing one. For example, create aKafkaConnect
CR with the namedbz-connect.yaml
that specifies themetadata.annotations
andspec.build
properties. The following example shows an excerpt from adbz-connect.yaml
file that describes aKafkaConnect
custom resource.
Example 2.3. A
dbz-connect.yaml
file that defines aKafkaConnect
custom resource that includes a Debezium connector
In the example that follows, the custom resource is configured to download the following artifacts:
- The Debezium Db2 connector archive.
- The Red Hat build of Apicurio Registry archive. The Apicurio Registry is an optional component. Add the Apicurio Registry component only if you intend to use Avro serialization with the connector.
- The Debezium scripting SMT archive and the associated language dependencies that you want to use with the Debezium connector. The SMT archive and language dependencies are optional components. Add these components only if you intend to use the Debezium content-based routing SMT or filter SMT.
- The Db2 JDBC driver, which is required to connect to Db2 databases, but is not included in the connector archive.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: debezium-kafka-connect-cluster
  annotations:
    strimzi.io/use-connector-resources: "true" 1
spec:
  version: 3.6.0
  build: 2
    output: 3
      type: imagestream 4
      image: debezium-streams-connect:latest
    plugins: 5
      - name: debezium-connector-db2
        artifacts:
          - type: zip 6
            url: https://maven.repository.redhat.com/ga/io/debezium/debezium-connector-db2/2.7.3.Final-redhat-00001/debezium-connector-db2-2.7.3.Final-redhat-00001-plugin.zip 7
          - type: zip
            url: https://maven.repository.redhat.com/ga/io/apicurio/apicurio-registry-distro-connect-converter/2.4.4.Final-redhat-<build-number>/apicurio-registry-distro-connect-converter-2.4.4.Final-redhat-<build-number>.zip 8
          - type: zip
            url: https://maven.repository.redhat.com/ga/io/debezium/debezium-scripting/2.7.3.Final-redhat-00001/debezium-scripting-2.7.3.Final-redhat-00001.zip 9
          - type: jar
            url: https://repo1.maven.org/maven2/org/apache/groovy/groovy/3.0.11/groovy-3.0.11.jar 10
          - type: jar
            url: https://repo1.maven.org/maven2/org/apache/groovy/groovy-jsr223/3.0.11/groovy-jsr223-3.0.11.jar
          - type: jar
            url: https://repo1.maven.org/maven2/org/apache/groovy/groovy-json/3.0.11/groovy-json-3.0.11.jar
          - type: jar 11
            url: https://repo1.maven.org/maven2/com/ibm/db2/jcc/11.5.0.0/jcc-11.5.0.0.jar
  bootstrapServers: debezium-kafka-cluster-kafka-bootstrap:9093
  ...
Table 2.18. Descriptions of Kafka Connect configuration settings Item Description 1
Sets the
strimzi.io/use-connector-resources
annotation to"true"
to enable the Cluster Operator to useKafkaConnector
resources to configure connectors in this Kafka Connect cluster.2
The
spec.build
configuration specifies where to store the build image and lists the plug-ins to include in the image, along with the location of the plug-in artifacts.3
The
build.output
specifies the registry in which the newly built image is stored.4
Specifies the name and image name for the image output. Valid values for
output.type
aredocker
to push into a container registry such as Docker Hub or Quay, orimagestream
to push the image to an internal OpenShift ImageStream. To use an ImageStream, an ImageStream resource must be deployed to the cluster. For more information about specifying thebuild.output
in the KafkaConnect configuration, see the Streams for Apache Kafka Build schema reference in {NameConfiguringStreamsOpenShift}.5
The
plugins
configuration lists all of the connectors that you want to include in the Kafka Connect image. For each entry in the list, specify a plug-inname
, and information for about the artifacts that are required to build the connector. Optionally, for each connector plug-in, you can include other components that you want to be available for use with the connector. For example, you can add Service Registry artifacts, or the Debezium scripting component.6
The value of
artifacts.type
specifies the file type of the artifact specified in theartifacts.url
. Valid types arezip
,tgz
, orjar
. Debezium connector archives are provided in.zip
file format. JDBC driver files are in.jar
format. Thetype
value must match the type of the file that is referenced in theurl
field.7
The value of
artifacts.url
specifies the address of an HTTP server, such as a Maven repository, that stores the file for the connector artifact. The OpenShift cluster must have access to the specified server.8
(Optional) Specifies the artifact
type
andurl
for downloading the Apicurio Registry component. Include the Apicurio Registry artifact, only if you want the connector to use Apache Avro to serialize event keys and values with the Red Hat build of Apicurio Registry, instead of using the default JSON converter.9
(Optional) Specifies the artifact
type
andurl
for the Debezium scripting SMT archive to use with the Debezium connector. Include the scripting SMT only if you intend to use the Debezium content-based routing SMT or filter SMT To use the scripting SMT, you must also deploy a JSR 223-compliant scripting implementation, such as groovy.10
(Optional) Specifies the artifact
type
andurl
for the JAR files of a JSR 223-compliant scripting implementation, which is required by the Debezium scripting SMT.ImportantIf you use Streams for Apache Kafka to incorporate the connector plug-in into your Kafka Connect image, for each of the required scripting language components,
artifacts.url
must specify the location of a JAR file, and the value ofartifacts.type
must also be set tojar
. Invalid values cause the connector fails at runtime.To enable use of the Apache Groovy language with the scripting SMT, the custom resource in the example retrieves JAR files for the following libraries:
-
groovy
-
groovy-jsr223
(scripting agent) -
groovy-json
(module for parsing JSON strings)
The Debezium scripting SMT also supports the use of the JSR 223 implementation of GraalVM JavaScript.
11
Specifies the location of the Db2 JDBC driver in Maven Central. The required driver is not included in the Debezium Db2 connector archive.
Apply the
KafkaConnect
build specification to the OpenShift cluster by entering the following command:oc create -f dbz-connect.yaml
Based on the configuration specified in the custom resource, the Streams Operator prepares a Kafka Connect image to deploy.
After the build completes, the Operator pushes the image to the specified registry or ImageStream, and starts the Kafka Connect cluster. The connector artifacts that you listed in the configuration are available in the cluster.Create a
KafkaConnector
resource to define an instance of each connector that you want to deploy.
For example, create the followingKafkaConnector
CR, and save it asdb2-inventory-connector.yaml
Example 2.4.
db2-inventory-connector.yaml
file that defines theKafkaConnector
custom resource for a Debezium connectorapiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: labels: strimzi.io/cluster: debezium-kafka-connect-cluster name: inventory-connector-db2 1 spec: class: io.debezium.connector.db2.Db2ConnectorConnector 2 tasksMax: 1 3 config: 4 schema.history.internal.kafka.bootstrap.servers: debezium-kafka-cluster-kafka-bootstrap.debezium.svc.cluster.local:9092 schema.history.internal.kafka.topic: schema-changes.inventory database.hostname: db2.debezium-db2.svc.cluster.local 5 database.port: 50000 6 database.user: debezium 7 database.password: dbz 8 database.dbname: mydatabase 9 topic.prefix: inventory-connector-db2 10 table.include.list: public.inventory 11 ...
Table 2.19. Descriptions of connector configuration settings Item Description 1
The name of the connector to register with the Kafka Connect cluster.
2
The name of the connector class.
3
The number of tasks that can operate concurrently.
4
The connector’s configuration.
5
The address of the host database instance.
6
The port number of the database instance.
7
The name of the account that Debezium uses to connect to the database.
8
The password that Debezium uses to connect to the database user account.
9
The name of the database to capture changes from.
10
The topic prefix for the database instance or cluster.
The specified name must be formed only from alphanumeric characters or underscores.
Because the topic prefix is used as the prefix for any Kafka topics that receive change events from this connector, the name must be unique among the connectors in the cluster.
This namespace is also used in the names of related Kafka Connect schemas, and the namespaces of a corresponding Avro schema if you integrate the connector with the Avro connector.11
The list of tables from which the connector captures change events.
Create the connector resource by running the following command:
oc create -n <namespace> -f <kafkaConnector>.yaml
For example,
oc create -n debezium -f db2-inventory-connector.yaml
The connector is registered to the Kafka Connect cluster and starts to run against the database that is specified by
spec.config.database.dbname
in theKafkaConnector
CR. After the connector pod is ready, Debezium is running.
You are now ready to verify the Debezium Db2 deployment.
2.1.6.4. Deploying a Debezium Db2 connector by building a custom Kafka Connect container image from a Dockerfile
To deploy a Debezium Db2 connector, you must build a custom Kafka Connect container image that contains the Debezium connector archive, and then push this container image to a container registry. You then need to create the following custom resources (CRs):
-
A
KafkaConnect
CR that defines your Kafka Connect instance. Theimage
property in the CR specifies the name of the container image that you create to run your Debezium connector. You apply this CR to the OpenShift instance where Red Hat Streams for Apache Kafka is deployed. Streams for Apache Kafka offers operators and images that bring Apache Kafka to OpenShift. -
A
KafkaConnector
CR that defines your Debezium Db2 connector. Apply this CR to the same OpenShift instance where you applied theKafkaConnect
CR.
Prerequisites
- Db2 is running and you completed the steps to set up Db2 to work with a Debezium connector.
- Streams for Apache Kafka is deployed on OpenShift and is running Apache Kafka and Kafka Connect. For more information, see Deploying and Managing Streams for Apache Kafka on OpenShift.
- Podman or Docker is installed.
- The Kafka Connect server has access to Maven Central to download the required JDBC driver for Db2. You can also use a local copy of the driver, or one that is available from a local Maven repository or other HTTP server.
-
You have an account and permissions to create and manage containers in the container registry (such as
quay.io
ordocker.io
) to which you plan to add the container that will run your Debezium connector.
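For example, before you push the image, log in to the registry with Podman or Docker; the registry name and account below are placeholders.
podman login quay.io -u <username>
docker login quay.io -u <username>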
Procedure
Create the Debezium Db2 container for Kafka Connect:
Create a Dockerfile that uses
registry.redhat.io/amq-streams-kafka-35-rhel8:2.5.0
as the base image. For example, from a terminal window, enter the following command:cat <<EOF >debezium-container-for-db2.yaml 1 FROM registry.redhat.io/amq-streams-kafka-35-rhel8:2.5.0 USER root:root RUN mkdir -p /opt/kafka/plugins/debezium 2 RUN cd /opt/kafka/plugins/debezium/ \ && curl -O https://maven.repository.redhat.com/ga/io/debezium/debezium-connector-db2/2.7.3.Final-redhat-00001/debezium-connector-db2-2.7.3.Final-redhat-00001-plugin.zip \ && unzip debezium-connector-db2-2.7.3.Final-redhat-00001-plugin.zip \ && rm debezium-connector-db2-2.7.3.Final-redhat-00001-plugin.zip RUN cd /opt/kafka/plugins/debezium/ \ && curl -O https://repo1.maven.org/maven2/com/ibm/db2/jcc/11.5.0.0/jcc-11.5.0.0.jar USER 1001 EOF
Item Description 1
You can specify any file name that you want.
2
Specifies the path to your Kafka Connect plug-ins directory. If your Kafka Connect plug-ins directory is in a different location, replace this path with the actual path of your directory.
The command creates a Dockerfile with the name
debezium-container-for-db2.yaml
in the current directory.Build the container image from the
debezium-container-for-db2.yaml
Docker file that you created in the previous step. From the directory that contains the file, open a terminal window and enter one of the following commands:podman build -t debezium-container-for-db2:latest .
docker build -t debezium-container-for-db2:latest .
The preceding commands build a container image with the name
debezium-container-for-db2
.Push your custom image to a container registry, such as quay.io or an internal container registry. The container registry must be available to the OpenShift instance where you want to deploy the image. Enter one of the following commands:
podman push <myregistry.io>/debezium-container-for-db2:latest
docker push <myregistry.io>/debezium-container-for-db2:latest
Create a new Debezium Db2
KafkaConnect
custom resource (CR). For example, create aKafkaConnect
CR with the namedbz-connect.yaml
that specifiesannotations
andimage
properties. The following example shows an excerpt from adbz-connect.yaml
file that describes aKafkaConnect
custom resource.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect-cluster
  annotations:
    strimzi.io/use-connector-resources: "true" 1
spec:
  #...
  image: debezium-container-for-db2 2
  ...
Item Description 1
metadata.annotations
indicates to the Cluster Operator thatKafkaConnector
resources are used to configure connectors in this Kafka Connect cluster.2
spec.image
specifies the name of the image that you created to run your Debezium connector. This property overrides theSTRIMZI_DEFAULT_KAFKA_CONNECT_IMAGE
variable in the Cluster Operator.Apply the
KafkaConnect
CR to the OpenShift Kafka Connect environment by entering the following command:oc create -f dbz-connect.yaml
The command adds a Kafka Connect instance that specifies the name of the image that you created to run your Debezium connector.
Create a
KafkaConnector
custom resource that configures your Debezium Db2 connector instance.You configure a Debezium Db2 connector in a
.yaml
file that specifies the configuration properties for the connector. The connector configuration might instruct Debezium to produce events for a subset of the schemas and tables, or it might set properties so that Debezium ignores, masks, or truncates values in specified columns that are sensitive, too large, or not needed.The following example configures a Debezium connector that connects to a Db2 server host,
192.168.99.100
, on port50000
. This host has a database namedmydatabase
, a table with the nameinventory
, andinventory-connector-db2
is the server’s logical name.Db2
inventory-connector.yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: inventory-connector-db2 1
  labels:
    strimzi.io/cluster: my-connect-cluster
  annotations:
    strimzi.io/use-connector-resources: 'true'
spec:
  class: io.debezium.connector.db2.Db2Connector 2
  tasksMax: 1 3
  config: 4
    database.hostname: 192.168.99.100 5
    database.port: 50000 6
    database.user: db2inst1 7
    database.password: Password! 8
    database.dbname: mydatabase 9
    topic.prefix: inventory-connector-db2 10
    table.include.list: public.inventory 11
    ...
Table 2.20. Descriptions of connector configuration settings Item Description 1
The name of the connector when we register it with a Kafka Connect cluster.
2
The name of this Db2 connector class.
3
Only one task should operate at any one time.
4
The connector’s configuration.
5
The database host, which is the address of the Db2 instance.
6
The port number of the Db2 instance.
7
The name of the Db2 user.
8
The password for the Db2 user.
9
The name of the database to capture changes from.
10
The logical name of the Db2 instance/cluster, which forms a namespace and is used in the names of the Kafka topics to which the connector writes, the names of Kafka Connect schemas, and the namespaces of the corresponding Avro schema when the Avro Connector is used.
11
The connector captures changes from the
public.inventory
table only.Create your connector instance with Kafka Connect. For example, if you saved your
KafkaConnector
resource in theinventory-connector.yaml
file, you would run the following command:oc apply -f inventory-connector.yaml
The preceding command registers
inventory-connector
and the connector starts to run against themydatabase
database as defined in theKafkaConnector
CR.
For the complete list of the configuration properties that you can set for the Debezium Db2 connector, see Db2 connector properties.
Results
After the connector starts, it performs a consistent snapshot of the Db2 database tables that the connector is configured to capture changes for. The connector then starts generating data change events for row-level operations and streaming change event records to Kafka topics.
2.1.6.5. Verifying that the Debezium Db2 connector is running
If the connector starts correctly without errors, it creates a topic for each table that the connector is configured to capture. Downstream applications can subscribe to these topics to retrieve information about events that occur in the source database.
To verify that the connector is running, you perform the following operations from the OpenShift Container Platform web console, or through the OpenShift CLI tool (oc):
- Verify the connector status.
- Verify that the connector generates topics.
- Verify that topics are populated with events for read operations ("op":"r") that the connector generates during the initial snapshot of each table.
Prerequisites
- A Debezium connector is deployed to Streams for Apache Kafka on OpenShift.
-
The OpenShift
oc
CLI client is installed. - You have access to the OpenShift Container Platform web console.
Procedure
Check the status of the
KafkaConnector
resource by using one of the following methods:From the OpenShift Container Platform web console:
-
Navigate to Home
Search. -
On the Search page, click Resources to open the Select Resource box, and then type
KafkaConnector
. - From the KafkaConnectors list, click the name of the connector that you want to check, for example inventory-connector-db2.
- In the Conditions section, verify that the values in the Type and Status columns are set to Ready and True.
-
Navigate to Home
From a terminal window:
Enter the following command:
oc describe KafkaConnector <connector-name> -n <project>
For example,
oc describe KafkaConnector inventory-connector-db2 -n debezium
The command returns status information that is similar to the following output:
Example 2.5.
KafkaConnector
resource statusName: inventory-connector-db2 Namespace: debezium Labels: strimzi.io/cluster=debezium-kafka-connect-cluster Annotations: <none> API Version: kafka.strimzi.io/v1beta2 Kind: KafkaConnector ... Status: Conditions: Last Transition Time: 2021-12-08T17:41:34.897153Z Status: True Type: Ready Connector Status: Connector: State: RUNNING worker_id: 10.131.1.124:8083 Name: inventory-connector-db2 Tasks: Id: 0 State: RUNNING worker_id: 10.131.1.124:8083 Type: source Observed Generation: 1 Tasks Max: 1 Topics: inventory-connector-db2.inventory inventory-connector-db2.inventory.addresses inventory-connector-db2.inventory.customers inventory-connector-db2.inventory.geom inventory-connector-db2.inventory.orders inventory-connector-db2.inventory.products inventory-connector-db2.inventory.products_on_hand Events: <none>
Verify that the connector created Kafka topics:
From the OpenShift Container Platform web console.
-
Navigate to Home
Search. -
On the Search page, click Resources to open the Select Resource box, and then type
KafkaTopic
. -
From the KafkaTopics list, click the name of the topic that you want to check, for example,
inventory-connector-db2.inventory.orders---ac5e98ac6a5d91e04d8ec0dc9078a1ece439081d
. - In the Conditions section, verify that the values in the Type and Status columns are set to Ready and True.
-
Navigate to Home
From a terminal window:
Enter the following command:
oc get kafkatopics
The command returns status information that is similar to the following output:
Example 2.6.
KafkaTopic
resource statusNAME CLUSTER PARTITIONS REPLICATION FACTOR READY connect-cluster-configs debezium-kafka-cluster 1 1 True connect-cluster-offsets debezium-kafka-cluster 25 1 True connect-cluster-status debezium-kafka-cluster 5 1 True consumer-offsets---84e7a678d08f4bd226872e5cdd4eb527fadc1c6a debezium-kafka-cluster 50 1 True inventory-connector-db2--a96f69b23d6118ff415f772679da623fbbb99421 debezium-kafka-cluster 1 1 True inventory-connector-db2.inventory.addresses---1b6beaf7b2eb57d177d92be90ca2b210c9a56480 debezium-kafka-cluster 1 1 True inventory-connector-db2.inventory.customers---9931e04ec92ecc0924f4406af3fdace7545c483b debezium-kafka-cluster 1 1 True inventory-connector-db2.inventory.geom---9f7e136091f071bf49ca59bf99e86c713ee58dd5 debezium-kafka-cluster 1 1 True inventory-connector-db2.inventory.orders---ac5e98ac6a5d91e04d8ec0dc9078a1ece439081d debezium-kafka-cluster 1 1 True inventory-connector-db2.inventory.products---df0746db116844cee2297fab611c21b56f82dcef debezium-kafka-cluster 1 1 True inventory-connector-db2.inventory.products_on_hand---8649e0f17ffcc9212e266e31a7aeea4585e5c6b5 debezium-kafka-cluster 1 1 True schema-changes.inventory debezium-kafka-cluster 1 1 True strimzi-store-topic---effb8e3e057afce1ecf67c3f5d8e4e3ff177fc55 debezium-kafka-cluster 1 1 True strimzi-topic-operator-kstreams-topic-store-changelog---b75e702040b99be8a9263134de3507fc0cc4017b debezium-kafka-cluster 1 1 True
Check topic content.
- From a terminal window, enter the following command:
oc exec -n <project> -it <kafka-cluster> -- /opt/kafka/bin/kafka-console-consumer.sh \ > --bootstrap-server localhost:9092 \ > --from-beginning \ > --property print.key=true \ > --topic=<topic-name>
For example,
oc exec -n debezium -it debezium-kafka-cluster-kafka-0 -- /opt/kafka/bin/kafka-console-consumer.sh \ > --bootstrap-server localhost:9092 \ > --from-beginning \ > --property print.key=true \ > --topic=inventory-connector-db2.inventory.products_on_hand
The format for specifying the topic name is the same as the
oc describe
command returns in Step 1, for example,inventory-connector-db2.inventory.addresses
.For each event in the topic, the command returns information that is similar to the following output:
Example 2.7. Content of a Debezium change event
{"schema":{"type":"struct","fields":[{"type":"int32","optional":false,"field":"product_id"}],"optional":false,"name":"inventory-connector-db2.inventory.products_on_hand.Key"},"payload":{"product_id":101}} {"schema":{"type":"struct","fields":[{"type":"struct","fields":[{"type":"int32","optional":false,"field":"product_id"},{"type":"int32","optional":false,"field":"quantity"}],"optional":true,"name":"inventory-connector-db2.inventory.products_on_hand.Value","field":"before"},{"type":"struct","fields":[{"type":"int32","optional":false,"field":"product_id"},{"type":"int32","optional":false,"field":"quantity"}],"optional":true,"name":"inventory-connector-db2.inventory.products_on_hand.Value","field":"after"},{"type":"struct","fields":[{"type":"string","optional":false,"field":"version"},{"type":"string","optional":false,"field":"connector"},{"type":"string","optional":false,"field":"name"},{"type":"int64","optional":false,"field":"ts_ms"},{"type":"int64","optional":false,"field":"ts_us"},{"type":"int64","optional":false,"field":"ts_ns"},{"type":"string","optional":true,"name":"io.debezium.data.Enum","version":1,"parameters":{"allowed":"true,last,false"},"default":"false","field":"snapshot"},{"type":"string","optional":false,"field":"db"},{"type":"string","optional":true,"field":"sequence"},{"type":"string","optional":true,"field":"table"},{"type":"int64","optional":false,"field":"server_id"},{"type":"string","optional":true,"field":"gtid"},{"type":"string","optional":false,"field":"file"},{"type":"int64","optional":false,"field":"pos"},{"type":"int32","optional":false,"field":"row"},{"type":"int64","optional":true,"field":"thread"},{"type":"string","optional":true,"field":"query"}],"optional":false,"name":"io.debezium.connector.db2.Source","field":"source"},{"type":"string","optional":false,"field":"op"},{"type":"int64","optional":true,"field":"ts_ms"},{"type":"int64","optional":true,"field":"ts_us"},{"type":"int64","optional":true,"field":"ts_ns"},{"type":"struct","fields":[{"type":"string","optional":false,"field":"id"},{"type":"int64","optional":false,"field":"total_order"},{"type":"int64","optional":false,"field":"data_collection_order"}],"optional":true,"field":"transaction"}],"optional":false,"name":"inventory-connector-db2.inventory.products_on_hand.Envelope"},"payload":{"before":null,"after":{"product_id":101,"quantity":3},"source":{"version":"2.7.3.Final-redhat-00001","connector":"db2","name":"inventory-connector-db2","ts_ms":1638985247805,"ts_us":1638985247805000000,"ts_ns":1638985247805000000,"snapshot":"true","db":"inventory","sequence":null,"table":"products_on_hand","server_id":0,"gtid":null,"file":"db2-bin.000003","pos":156,"row":0,"thread":null,"query":null},"op":"r","ts_ms":1638985247805,"ts_us":1638985247805102,"ts_ns":1638985247805102588,"transaction":null}}
In the preceding example, the payload value shows that the connector snapshot generated a read ("op" = "r") event from the table inventory.products_on_hand. The "before" state of the product_id record is null, indicating that no previous value exists for the record. The "after" state shows a quantity of 3 for the item with product_id 101.
2.1.6.6. Descriptions of Debezium Db2 connector configuration properties
The Debezium Db2 connector has numerous configuration properties that you can use to achieve the right connector behavior for your application. Many properties have default values. Information about the properties is organized as follows:
- Required configuration properties
- Advanced configuration properties
- Database schema history connector configuration properties that control how Debezium processes events that it reads from the database schema history topic.
- Pass-through Db2 connector configuration properties
- Pass-through database schema history properties for configuring producer and consumer clients
- Pass-through Kafka signals configuration properties
- Pass-through Kafka signals consumer client configuration properties
- Pass-through sink notification configuration properties
- Pass-through database driver configuration properties
Required Debezium Db2 connector configuration properties
The following configuration properties are required unless a default value is available.
Property | Default | Description |
---|---|---|
No default | Unique name for the connector. Attempting to register again with the same name will fail. This property is required by all Kafka Connect connectors. | |
No default |
The name of the Java class for the connector. Always use a value of | |
| The maximum number of tasks that should be created for this connector. The Db2 connector always uses a single task and therefore does not use this value, so the default is always acceptable. | |
No default | IP address or hostname of the Db2 database server. | |
| Integer port number of the Db2 database server. | |
No default | Name of the Db2 database user for connecting to the Db2 database server. | |
No default | Password to use when connecting to the Db2 database server. | |
No default | The name of the Db2 database from which to stream the changes. | |
No default |
Topic prefix which provides a namespace for the particular Db2 database server that hosts the database for which Debezium is capturing changes. Only alphanumeric characters, hyphens, dots and underscores must be used in the topic prefix name. The topic prefix should be unique across all other connectors, since this topic prefix is used for all Kafka topics that receive records from this connector. Warning Do not change the value of this property. If you change the name value, after a restart, instead of continuing to emit events to the original topics, the connector emits subsequent events to topics whose names are based on the new value. The connector is also unable to recover its database schema history topic. | |
No default |
An optional, comma-separated list of regular expressions that match fully-qualified table identifiers for tables whose changes you want the connector to capture. When this property is set, the connector captures changes only from the specified tables. Each identifier is of the form schemaName.tableName. By default, the connector captures changes in every non-system table.
To match the name of a table, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the table; it does not match substrings that might be present in a table name. | |
No default |
An optional, comma-separated list of regular expressions that match fully-qualified table identifiers for tables whose changes you do not want the connector to capture. The connector captures changes in each non-system table that is not included in the exclude list. Each identifier is of the form schemaName.tableName.
To match the name of a table, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the table; it does not match substrings that might be present in a table name. | |
empty string |
An optional, comma-separated list of regular expressions that match the fully-qualified names of columns to include in change event record values. Fully-qualified names for columns are of the form schemaName.tableName.columnName.
To match the name of a column, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the column; it does not match substrings that might be present in a column name. If you include this property in the configuration, do not also set the | |
empty string |
An optional, comma-separated list of regular expressions that match the fully-qualified names of columns to exclude from change event values. Fully-qualified names for columns are of the form schemaName.tableName.columnName.
To match the name of a column, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the column; it does not match substrings that might be present in a column name. Primary key columns are always included in the event’s key, even if they are excluded from the value. If you include this property in the configuration, do not set the | |
n/a |
An optional, comma-separated list of regular expressions that match the fully-qualified names of character-based columns. Fully-qualified names for columns are of the form schemaName.tableName.columnName.
A pseudonym consists of the hashed value that results from applying the specified hashAlgorithm and salt. Based on the hash function that is used, referential integrity is maintained, while column values are replaced with pseudonyms. Supported hash functions are described in the MessageDigest section of the Java Cryptography Architecture Standard Algorithm Name Documentation. For example:
column.mask.hash.SHA-256.with.salt.CzQMA0cB5K = inventory.orders.customerName, inventory.shipment.customerName
If necessary, the pseudonym is automatically shortened to the length of the column. The connector configuration can include multiple properties that specify different hash algorithms and salts. | |
|
Time, date, and timestamps can be represented with different kinds of precision: | |
|
Controls whether a delete event is followed by a tombstone event. | |
| Boolean value that specifies whether the connector should publish changes in the database schema to a Kafka topic with the same name as the database server ID. Each schema change is recorded with a key that contains the database name and a value that is a JSON structure that describes the schema update. This is independent of how the connector internally records database schema history. | |
n/a |
An optional, comma-separated list of regular expressions that match the fully-qualified names of character-based columns. Set this property if you want to truncate the data in a set of columns when it exceeds the number of characters specified by the length in the property name. The fully-qualified name of a column observes the following format: schemaName.tableName.columnName. To match the name of a column, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the column; the expression does not match substrings that might be present in a column name. You can specify multiple properties with different lengths in a single configuration. | |
n/a |
An optional, comma-separated list of regular expressions that match the fully-qualified names of character-based columns. Set this property if you want the connector to mask the values for a set of columns, for example, if they contain sensitive data.
The fully-qualified name of a column observes the following format: schemaName.tableName.columnName. You can specify multiple properties with different lengths in a single configuration. | |
n/a | An optional, comma-separated list of regular expressions that match the fully-qualified names of columns for which you want the connector to emit extra parameters that represent column metadata. When this property is set, the connector adds the following fields to the schema of event records:
These parameters propagate a column’s original type name and length (for variable-width types), respectively.
The fully-qualified name of a column observes one of the following formats: databaseName.tableName.columnName, or databaseName.schemaName.tableName.columnName. | |
n/a | An optional, comma-separated list of regular expressions that specify the fully-qualified names of data types that are defined for columns in a database. When this property is set, for columns with matching data types, the connector emits event records that include the following extra fields in their schema:
These parameters propagate a column’s original type name and length (for variable-width types), respectively.
The fully-qualified name of a column observes one of the following formats: databaseName.tableName.typeName, or databaseName.schemaName.tableName.typeName. For the list of Db2-specific data type names, see the Db2 data type mappings . | |
empty string | A list of expressions that specify the columns that the connector uses to form custom message keys for change event records that it publishes to the Kafka topics for specified tables.
By default, Debezium uses the primary key column of a table as the message key for records that it emits. In place of the default, or to specify a key for tables that lack a primary key, you can configure custom message keys based on one or more columns.
The property can list entries for multiple tables. Use a semicolon to separate entries for different tables in the list. | |
none |
Specifies how schema names should be adjusted for compatibility with the message converter used by the connector. Possible settings:
| |
none |
Specifies how field names should be adjusted for compatibility with the message converter used by the connector. Possible settings:
See Avro naming for more details. |
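For reference, the following sketch shows how the required properties might be combined in a minimal Db2 connector configuration. The property names are the standard Debezium options that the preceding table describes; the host, credentials, database, table list, and topic names shown here are placeholder values, and the schema history settings are included because a streaming deployment normally requires them.
connector.class=io.debezium.connector.db2.Db2Connector
database.hostname=db2server.example.com
database.port=50000
database.user=db2inst1
database.password=<password>
database.dbname=TESTDB
topic.prefix=fulfillment
table.include.list=MYSCHEMA.CUSTOMERS
schema.history.internal.kafka.bootstrap.servers=kafka:9092
schema.history.internal.kafka.topic=schemahistory.fulfillment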
Advanced connector configuration properties
The following advanced configuration properties have defaults that work in most situations and therefore rarely need to be specified in the connector’s configuration.
Property | Default | Description |
---|---|---|
No default |
Enumerates a comma-separated list of the symbolic names of the custom converter instances that the connector can use. For example,
You must set the
For each converter that you configure for a connector, you must also add a
For example, isbn.type: io.debezium.test.IsbnConverter
If you want to further control the behavior of a configured converter, you can add one or more configuration parameters to pass values to the converter. To associate any additional configuration parameter with a converter, prefix the parameter names with the symbolic name of the converter. isbn.schema.name: io.debezium.db2.type.Isbn | |
initial |
Specifies the criteria for performing a snapshot when the connector starts:
| |
exclusive | Controls whether and for how long the connector holds a table lock. Table locks prevent other database clients from performing certain table operations during a snapshot. You can set the following values:
| |
|
Specifies how the connector queries data while performing a snapshot.
This setting enables you to manage snapshot content in a more flexible manner compared to using the | |
|
During a snapshot, controls the transaction isolation level and how long the connector locks the tables that are in capture mode. The possible values are: | |
|
Specifies how the connector handles exceptions during processing of events. The possible values are: | |
| Positive integer value that specifies the number of milliseconds the connector should wait for new change events to appear before it starts processing a batch of events. Defaults to 500 milliseconds, or 0.5 second. | |
| Positive integer value that specifies the maximum size of each batch of events that the connector processes. | |
|
Positive integer value that specifies the maximum number of records that the blocking queue can hold. When Debezium reads events streamed from the database, it places the events in the blocking queue before it writes them to Kafka. The blocking queue can provide backpressure for reading change events from the database in cases where the connector ingests messages faster than it can write them to Kafka, or when Kafka becomes unavailable. Events that are held in the queue are disregarded when the connector periodically records offsets. Always set the value of | |
|
A long integer value that specifies the maximum volume of the blocking queue in bytes. By default, volume limits are not specified for the blocking queue. To specify the number of bytes that the queue can consume, set this property to a positive long value. | |
|
Controls how frequently the connector sends heartbeat messages to a Kafka topic. The default behavior is that the connector does not send heartbeat messages. | |
No default | An interval in milliseconds that the connector should wait before performing a snapshot when the connector starts. If you are starting multiple connectors in a cluster, this property is useful for avoiding snapshot interruptions, which might cause re-balancing of connectors. | |
0 |
Specifies the time, in milliseconds, that the connector delays the start of the streaming process after it completes a snapshot. Setting a delay interval helps to prevent the connector from restarting snapshots in the event that a failure occurs immediately after the snapshot completes, but before the streaming process begins. Set a delay value that is higher than the value of the | |
All tables specified in |
An optional, comma-separated list of regular expressions that match the fully-qualified names ( To match the name of a table, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the table; it does not match substrings that might be present in a table name. | |
| During a snapshot, the connector reads table content in batches of rows. This property specifies the maximum number of rows in a batch. | |
|
Positive integer value that specifies the maximum amount of time (in milliseconds) to wait to obtain table locks when performing a snapshot. If the connector cannot acquire table locks in this interval, the snapshot fails. How the connector performs snapshots provides details. Other possible settings are: | |
No default | Specifies the table rows to include in a snapshot. Use the property if you want a snapshot to include only a subset of the rows in a table. This property affects snapshots only. It does not apply to events that the connector reads from the log.
The property contains a comma-separated list of fully-qualified table names in the form <schemaName>.<tableName>.
For example, to include only a subset of rows from the customer.orders table in the snapshot, you might add the following properties to the connector configuration:
"snapshot.select.statement.overrides": "customer.orders", "snapshot.select.statement.overrides.customer.orders": "SELECT * FROM customers.orders WHERE delete_flag = 0 ORDER BY id DESC"
In the resulting snapshot, the connector includes only the records for which delete_flag = 0. | |
|
Determines whether the connector generates events with transaction boundaries and enriches change event envelopes with transaction metadata. Specify | |
|
A comma-separated list of operation types that will be skipped during streaming. The operations include: | |
No default |
Fully-qualified name of the data collection that is used to send signals to the connector. Use the following format to specify the collection name: | |
source | List of the signaling channel names that are enabled for the connector. By default, the following channels are available:
| |
No default | List of the notification channel names that are enabled for the connector. By default, the following channels are available:
| |
| The maximum number of rows that the connector fetches and reads into memory during an incremental snapshot chunk. Increasing the chunk size provides greater efficiency, because the snapshot runs fewer snapshot queries of a greater size. However, larger chunk sizes also require more memory to buffer the snapshot data. Adjust the chunk size to a value that provides the best performance in your environment. | |
|
Specifies the watermarking mechanism that the connector uses during an incremental snapshot to deduplicate events that might be captured by an incremental snapshot and then recaptured after streaming resumes.
| |
|
The name of the TopicNamingStrategy class that should be used to determine the topic name for data change, schema change, transaction, heartbeat event etc., defaults to | |
|
Specify the delimiter for topic name, defaults to | |
| The size used for holding the topic names in a bounded concurrent hash map. This cache helps to determine the topic name that corresponds to a given data collection. | |
|
Controls the name of the topic to which the connector sends heartbeat messages. The topic name has this pattern: | |
|
Controls the name of the topic to which the connector sends transaction metadata messages. The topic name has this pattern: | |
| Specifies the number of threads that the connector uses when performing an initial snapshot. To enable parallel initial snapshots, set the property to a value greater than 1. In a parallel initial snapshot, the connector processes multiple tables concurrently. Important Parallel initial snapshots are a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope. | |
|
Defines tags that customize MBean object names by adding metadata that provides contextual information. Specify a comma-separated list of key-value pairs. Each key represents a tag for the MBean object name, and the corresponding value represents a value for the key, for example, The connector appends the specified tags to the base MBean object name. Tags can help you to organize and categorize metrics data. You can define tags to identify particular application instances, environments, regions, versions, and so forth. For more information, see Customized MBean names. | |
|
Specifies how the connector responds after an operation that results in a retriable error, such as a connection error.
| |
|
Specifies the time, in milliseconds, that the connector waits for a query to complete. Set the value to |
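As an illustration only, the following sketch shows how some of the advanced options described in the preceding table might appear in a connector configuration. The property names (snapshot.mode, poll.interval.ms, max.batch.size, max.queue.size, heartbeat.interval.ms) are standard Debezium options; the values are examples, not tuning recommendations.
snapshot.mode=initial
poll.interval.ms=500
max.batch.size=2048
max.queue.size=8192
heartbeat.interval.ms=30000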
Debezium Db2 connector database schema history configuration properties
Debezium provides a set of schema.history.internal.*
properties that control how the connector interacts with the schema history topic.
The following table describes the schema.history.internal
properties for configuring the Debezium connector.
Property | Default | Description |
---|---|---|
No default | The full name of the Kafka topic where the connector stores the database schema history. | |
No default | A list of host/port pairs that the connector uses for establishing an initial connection to the Kafka cluster. This connection is used for retrieving the database schema history previously stored by the connector, and for writing each DDL statement read from the source database. Each pair should point to the same Kafka cluster used by the Kafka Connect process. | |
| An integer value that specifies the maximum number of milliseconds the connector should wait during startup/recovery while polling for persisted data. The default is 100ms. | |
| An integer value that specifies the maximum number of milliseconds the connector should wait while fetching cluster information using the Kafka admin client. | |
| An integer value that specifies the maximum number of milliseconds the connector should wait while creating the Kafka history topic using the Kafka admin client. | |
|
The maximum number of times that the connector should try to read persisted history data before the connector recovery fails with an error. The maximum amount of time to wait after receiving no data is | |
|
A Boolean value that specifies whether the connector should ignore malformed or unknown database statements or stop processing so a human can fix the issue. The safe default is | |
|
A Boolean value that specifies whether the connector records schema structures from all tables in a schema or database, or only from tables that are designated for capture.
| |
|
A Boolean value that specifies whether the connector records schema structures from all logical databases in the database instance.
|
Pass-through Db2 connector configuration properties
The connector supports pass-through properties that enable Debezium to specify custom configuration options for fine-tuning the behavior of the Apache Kafka producer and consumer. For information about the full range of configuration properties for Kafka producers and consumers, see the Kafka documentation.
Pass-through properties for configuring how producer and consumer clients interact with schema history topics
Debezium relies on an Apache Kafka producer to write schema changes to database schema history topics. Similarly, it relies on a Kafka consumer to read from database schema history topics when a connector starts. You define the configuration for the Kafka producer and consumer clients by assigning values to a set of pass-through configuration properties that begin with the schema.history.internal.producer.*
and schema.history.internal.consumer.*
prefixes. The pass-through producer and consumer database schema history properties control a range of behaviors, such as how these clients secure connections with the Kafka broker, as shown in the following example:
schema.history.internal.producer.security.protocol=SSL
schema.history.internal.producer.ssl.keystore.location=/var/private/ssl/kafka.server.keystore.jks
schema.history.internal.producer.ssl.keystore.password=test1234
schema.history.internal.producer.ssl.truststore.location=/var/private/ssl/kafka.server.truststore.jks
schema.history.internal.producer.ssl.truststore.password=test1234
schema.history.internal.producer.ssl.key.password=test1234
schema.history.internal.consumer.security.protocol=SSL
schema.history.internal.consumer.ssl.keystore.location=/var/private/ssl/kafka.server.keystore.jks
schema.history.internal.consumer.ssl.keystore.password=test1234
schema.history.internal.consumer.ssl.truststore.location=/var/private/ssl/kafka.server.truststore.jks
schema.history.internal.consumer.ssl.truststore.password=test1234
schema.history.internal.consumer.ssl.key.password=test1234
Debezium strips the prefix from the property name before it passes the property to the Kafka client.
For more information about Kafka producer configuration properties and Kafka consumer configuration properties, see the Apache Kafka documentation .
Pass-through properties for configuring how the Db2 connector interacts with the Kafka signaling topic
Debezium provides a set of signal.*
properties that control how the connector interacts with the Kafka signals topic.
The following table describes the Kafka signal
properties.
Property | Default | Description |
---|---|---|
<topic.prefix>-signal | The name of the Kafka topic that the connector monitors for ad hoc signals. Note If automatic topic creation is disabled, you must manually create the required signaling topic. A signaling topic is required to preserve signal ordering. The signaling topic must have a single partition. | |
kafka-signal | The name of the group ID that is used by Kafka consumers. | |
No default | A list of the host and port pairs that the connector uses to establish its initial connection to the Kafka cluster. Each pair references the Kafka cluster that is used by the Debezium Kafka Connect process. | |
| An integer value that specifies the maximum number of milliseconds that the connector waits when polling signals. | |
| Specifies whether the Kafka consumer writes an offset commit after it reads a message from the signaling topic. The value that you assign to this property determines whether the connector can process requests that the signaling topic receives while the connector is offline. Choose one of the following settings:
|
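For example, to enable the Kafka signaling channel in addition to the default source channel, a configuration might resemble the following sketch. It assumes the standard signal.* property names; the topic name and broker address are placeholders.
signal.enabled.channels=source,kafka
signal.kafka.topic=fulfillment-signal
signal.kafka.bootstrap.servers=kafka:9092
signal.kafka.poll.timeout.ms=100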
Pass-through properties for configuring the Kafka consumer client for the signaling channel
The Debezium connector provides for pass-through configuration of the signals Kafka consumer. Pass-through signals properties begin with the prefix signal.consumer.*
. For example, the connector passes properties such as signal.consumer.security.protocol=SSL
to the Kafka consumer.
Debezium strips the prefixes from the properties before it passes the properties to the Kafka signals consumer.
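For example, to secure the connection that the signaling consumer uses, you might set pass-through properties similar to the following sketch, mirroring the schema history SSL example shown earlier. The keystore and truststore paths and passwords are placeholders.
signal.consumer.security.protocol=SSL
signal.consumer.ssl.truststore.location=/var/private/ssl/kafka.client.truststore.jks
signal.consumer.ssl.truststore.password=<password>
signal.consumer.ssl.keystore.location=/var/private/ssl/kafka.client.keystore.jks
signal.consumer.ssl.keystore.password=<password>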
Pass-through properties for configuring the Db2 connector sink notification channel
The following table describes properties that you can use to configure the Debezium sink notification
channel.
Property | Default | Description |
---|---|---|
No default |
The name of the topic that receives notifications from Debezium. This property is required when you configure the |
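For example, enabling the sink notification channel typically pairs the channel selection with the notification topic name, as in the following sketch. The notification.enabled.channels and notification.sink.topic.name property names are given here as the commonly documented Debezium notification options; verify them against the release that you deploy, and replace the topic value with your own.
notification.enabled.channels=sink
notification.sink.topic.name=debezium-notifications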
Debezium connector pass-through database driver configuration properties
The Debezium connector provides for pass-through configuration of the database driver. Pass-through database properties begin with the prefix driver.*
. For example, the connector passes properties such as driver.foobar=false
to the JDBC URL.
Debezium strips the prefixes from the properties before it passes the properties to the database driver.
2.1.7. Monitoring Debezium Db2 connector performance
The Debezium Db2 connector provides three types of metrics that are in addition to the built-in support for JMX metrics that Apache ZooKeeper, Apache Kafka, and Kafka Connect provide.
- Snapshot metrics provide information about connector operation while performing a snapshot.
- Streaming metrics provide information about connector operation when the connector is capturing changes and streaming change event records.
- Schema history metrics provide information about the status of the connector’s schema history.
Debezium monitoring documentation provides details for how to expose these metrics by using JMX.
2.1.7.1. Customized names for Db2 connector snapshot and streaming MBean objects
Debezium connectors expose metrics via the MBean name for the connector. These metrics, which are specific to each connector instance, provide data about the behavior of the connector’s snapshot, streaming, and schema history processes.
By default, when you deploy a correctly configured connector, Debezium generates a unique MBean name for each of the different connector metrics. To view the metrics for a connector process, you configure your observability stack to monitor its MBean. But these default MBean names depend on the connector configuration; configuration changes can result in changes to the MBean names. A change to the MBean name breaks the linkage between the connector instance and the MBean, disrupting monitoring activity. In this scenario, you must reconfigure the observability stack to use the new MBean name if you want to resume monitoring.
To prevent monitoring disruptions that result from MBean name changes, you can configure custom metrics tags. You configure custom metrics by adding the custom.metric.tags
property to the connector configuration. The property accepts key-value pairs in which each key represents a tag for the MBean object name, and the corresponding value represents the value of that tag. For example: k1=v1,k2=v2
. Debezium appends the specified tags to the MBean name of the connector.
After you configure the custom.metric.tags
property for a connector, you can configure the observability stack to retrieve metrics associated with the specified tags. The observability stack then uses the specified tags, rather than the mutable MBean names to uniquely identify connectors. Later, if Debezium redefines how it constructs MBean names, or if the topic.prefix
in the connector configuration changes, metrics collection is uninterrupted, because the metrics scrape task uses the specified tag patterns to identify the connector.
A further benefit of using custom tags is that you can use tags that reflect the architecture of your data pipeline, so that metrics are organized in a way that suits your operational needs. For example, you might specify tags with values that declare the type of connector activity, the application context, or the data source, for example, db1-streaming-for-application-abc
. If you specify multiple key-value pairs, all of the specified pairs are appended to the connector’s MBean name.
The following example illustrates how tags modify the default MBean name.
Example 2.8. How custom tags modify the connector MBean name
By default, the Db2 connector uses the following MBean name for streaming metrics:
debezium.db2:type=connector-metrics,context=streaming,server=<topic.prefix>
If you set the value of custom.metric.tags
to database=salesdb-streaming,table=inventory
, Debezium generates the following custom MBean name:
debezium.db2:type=connector-metrics,context=streaming,server=<topic.prefix>,database=salesdb-streaming,table=inventory
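Expressed as a connector configuration property, the tags in the preceding example correspond to the following setting:
custom.metric.tags=database=salesdb-streaming,table=inventory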
2.1.7.2. Monitoring Debezium during snapshots of Db2 databases
The MBean is debezium.db2:type=connector-metrics,context=snapshot,server=<topic.prefix>
.
Snapshot metrics are not exposed unless a snapshot operation is active, or if a snapshot has occurred since the last connector start.
The following table lists the snapshot metrics that are available.
Attributes | Type | Description |
---|---|---|
| The last snapshot event that the connector has read. | |
| The number of milliseconds since the connector has read and processed the most recent event. | |
| The total number of events that this connector has seen since last started or reset. | |
| The number of events that have been filtered by include/exclude list filtering rules configured on the connector. | |
| The list of tables that are captured by the connector. | |
| The length of the queue that is used to pass events between the snapshotter and the main Kafka Connect loop. | |
| The free capacity of the queue used to pass events between the snapshotter and the main Kafka Connect loop. | |
| The total number of tables that are being included in the snapshot. | |
| The number of tables that the snapshot has yet to copy. | |
| Whether the snapshot was started. | |
| Whether the snapshot was paused. | |
| Whether the snapshot was aborted. | |
| Whether the snapshot completed. | |
| The total number of seconds that the snapshot has taken so far, even if not complete. Includes also time when snapshot was paused. | |
| The total number of seconds that the snapshot was paused. If the snapshot was paused several times, the paused time adds up. | |
| Map containing the number of rows scanned for each table in the snapshot. Tables are incrementally added to the Map during processing. Updates every 10,000 rows scanned and upon completing a table. | |
|
The maximum buffer of the queue in bytes. This metric is available if | |
| The current volume, in bytes, of records in the queue. |
The connector also provides the following additional snapshot metrics when an incremental snapshot is executed:
Attributes | Type | Description |
---|---|---|
| The identifier of the current snapshot chunk. | |
| The lower bound of the primary key set defining the current chunk. | |
| The upper bound of the primary key set defining the current chunk. | |
| The lower bound of the primary key set of the currently snapshotted table. | |
| The upper bound of the primary key set of the currently snapshotted table. |
2.1.7.3. Monitoring Debezium Db2 connector record streaming
The MBean is debezium.db2:type=connector-metrics,context=streaming,server=<topic.prefix>
.
The following table lists the streaming metrics that are available.
Attributes | Type | Description |
---|---|---|
| The last streaming event that the connector has read. | |
| The number of milliseconds since the connector has read and processed the most recent event. | |
| The total number of data change events reported by the source database since the last connector start, or since a metrics reset. Represents the data change workload for Debezium to process. | |
| The total number of create events processed by the connector since its last start or metrics reset. | |
| The total number of update events processed by the connector since its last start or metrics reset. | |
| The total number of delete events processed by the connector since its last start or metrics reset. | |
| The number of events that have been filtered by include/exclude list filtering rules configured on the connector. | |
| The list of tables that are captured by the connector. | |
| The length of the queue that is used to pass events between the streamer and the main Kafka Connect loop. | |
| The free capacity of the queue used to pass events between the streamer and the main Kafka Connect loop. | |
| Flag that denotes whether the connector is currently connected to the database server. | |
| The number of milliseconds between the last change event’s timestamp and the connector processing it. The values will incorporate any differences between the clocks on the machines where the database server and the connector are running. | |
| The number of processed transactions that were committed. | |
| The coordinates of the last received event. | |
| Transaction identifier of the last processed transaction. | |
|
The maximum buffer of the queue in bytes. This metric is available if | |
| The current volume, in bytes, of records in the queue. |
2.1.7.4. Monitoring Debezium Db2 connector schema history
The MBean is debezium.db2:type=connector-metrics,context=schema-history,server=<topic.prefix>
.
The following table lists the schema history metrics that are available.
Attributes | Type | Description |
---|---|---|
|
One of | |
| The time, in epoch seconds, at which recovery started. | |
| The number of changes that were read during recovery phase. | |
| The total number of schema changes that were applied during recovery and runtime. | |
| The number of milliseconds that elapsed since the last change was recovered from the history store. | |
| The number of milliseconds that elapsed since the last change was applied. | |
| The string representation of the last change recovered from the history store. | |
| The string representation of the last applied change. |
2.1.8. Managing Debezium Db2 connectors
After you deploy a Debezium Db2 connector, use the Debezium management UDFs to control Db2 replication (ASN) with SQL commands. Some of the UDFs expect a return value, in which case you use the SQL VALUE
statement to invoke them. For other UDFs, use the SQL CALL
statement.
Task | Command and notes |
---|---|
| |
| |
| |
| |
| |
|
2.1.9. Updating schemas for Db2 tables in capture mode for Debezium connectors
While a Debezium Db2 connector can capture schema changes, to update a schema, you must collaborate with a database administrator to ensure that the connector continues to produce change events. This is required by the way that Db2 implements replication.
For each table in capture mode, the replication feature in Db2 creates a change-data table that contains all changes to that source table. However, change-data table schemas are static. If you update the schema for a table in capture mode then you must also update the schema of its corresponding change-data table. A Debezium Db2 connector cannot do this. A database administrator with elevated privileges must update schemas for tables that are in capture mode.
It is vital to execute a schema update procedure completely before there is a new schema update on the same table. Consequently, the recommendation is to execute all DDLs in a single batch so the schema update procedure is done only once.
There are generally two procedures for updating table schemas:
Each approach has advantages and disadvantages.
2.1.9.1. Performing offline schema updates for Debezium Db2 connectors
You stop the Debezium Db2 connector before you perform an offline schema update. While this is the safer schema update procedure, it might not be feasible for applications with high-availability requirements.
Prerequisites
- One or more tables that are in capture mode require schema updates.
Procedure
- Suspend the application that updates the database.
- Wait for the Debezium connector to stream all unstreamed change event records.
- Stop the Debezium connector.
- Apply all changes to the source table schema.
-
In the ASN register table, mark the tables with updated schemas as
INACTIVE
. - Reinitialize the ASN capture service.
- Remove the source table with the old schema from capture mode by running the Debezium UDF for removing tables from capture mode.
- Add the source table with the new schema to capture mode by running the Debezium UDF for adding tables to capture mode.
-
In the ASN register table, mark the updated source tables as
ACTIVE
. - Reinitialize the ASN capture service.
- Resume the application that updates the database.
- Restart the Debezium connector.
2.1.9.2. Performing online schema updates for Debezium Db2 connectors
An online schema update does not require application and data processing downtime. That is, you do not stop the Debezium Db2 connector before you perform an online schema update. Also, an online schema update procedure is simpler than the procedure for an offline schema update.
However, when a table is in capture mode, after a change to a column name, the Db2 replication feature continues to use the old column name. The new column name does not appear in Debezium change events. You must restart the connector to see the new column name in change events.
Prerequisites
- One or more tables that are in capture mode require schema updates.
Procedure when adding a column to the end of a table
- Lock the source tables whose schema you want to change.
-
In the ASN register table, mark the locked tables as
INACTIVE
. - Reinitialize the ASN capture service.
- Apply all changes to the schemas for the source tables.
- Apply all changes to the schemas for the corresponding change-data tables.
-
In the ASN register table, mark the source tables as
ACTIVE
. - Reinitialize the ASN capture service.
- Optional. Restart the connector to see updated column names in change events.
Procedure when adding a column to the middle of a table
- Lock the source table(s) to be changed.
-
In the ASN register table, mark the locked tables as
INACTIVE
. - Reinitialize the ASN capture service.
For each source table to be changed:
- Export the data in the source table.
- Truncate the source table.
- Alter the source table and add the column.
- Load the exported data into the altered source table.
- Export the data in the source table’s corresponding change-data table.
- Truncate the change-data table.
- Alter the change-data table and add the column.
- Load the exported data into the altered change-data table.
-
In the ASN register table, mark the tables as
INACTIVE
. This marks the old change-data tables as inactive; the data in them remains, but the tables are no longer updated. - Reinitialize the ASN capture service.
- Optional. Restart the connector to see updated column names in change events.
2.2. Debezium connector for MariaDB (Technology Preview)
The Debezium connector for MariaDB is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
MariaDB has a binary log (binlog) that records all operations in the order in which they are committed to the database. This includes changes to table schemas as well as changes to the data in tables. MariaDB uses the binlog for replication and recovery.
The Debezium MariaDB connector reads the binlog, produces change events for row-level INSERT
, UPDATE
, and DELETE
operations, and emits the change events to Kafka topics. Client applications read those Kafka topics.
Because MariaDB is typically set up to purge binlogs after a specified period of time, the MariaDB connector performs an initial consistent snapshot of each of your databases. The MariaDB connector reads the binlog from the point at which the snapshot was made.
For information about the MariaDB Database versions that are compatible with this connector, see the Debezium Supported Configurations page.
Information and procedures for using a Debezium MariaDB connector are organized as follows:
- Section 2.2.1, “How Debezium MariaDB connectors work”
- Section 2.2.2, “Descriptions of Debezium MariaDB connector data change events”
- Section 2.2.3, “How Debezium MariaDB connectors map data types”
- Section 2.2.4, “Custom converters for mapping MariaDB data to alternative data types”
- Section 2.2.5, “Setting up MariaDB to run a Debezium connector”
- Section 2.2.6, “Deployment of Debezium MariaDB connectors”
- Section 2.2.7, “Monitoring Debezium MariaDB connector performance”
- Section 2.2.8, “How Debezium MariaDB connectors handle faults and problems”
2.2.1. How Debezium MariaDB connectors work
An overview of the MariaDB topologies that the connector supports is useful for planning your application. To optimally configure and run a Debezium MariaDB connector, it is helpful to understand how the connector tracks the structure of tables, exposes schema changes, performs snapshots, and determines Kafka topic names.
Details are in the following topics:
- Section 2.2.1.1, “MariaDB topologies supported by Debezium connectors”
- Section 2.2.1.2, “How Debezium MariaDB connectors handle database schema changes”
- Section 2.2.1.3, “How Debezium MariaDB connectors expose database schema changes”
- Section 2.2.1.4, “How Debezium MariaDB connectors perform database snapshots”
- Section 2.2.1.5, “Ad hoc snapshots”
- Section 2.2.1.6, “Incremental snapshots”
- Section 2.2.1.8, “Default names of Kafka topics that receive Debezium MariaDB change event records”
2.2.1.1. MariaDB topologies supported by Debezium connectors
The Debezium MariaDB connector supports the following MariaDB topologies:
- Standalone
- When a single MariaDB server is used, the server must have the binlog enabled so the Debezium MariaDB connector can monitor the server. This is often acceptable, since the binary log can also be used as an incremental backup. In this case, the MariaDB connector always connects to and follows this standalone MariaDB server instance.
- Primary and replica
The Debezium MariaDB connector can follow one of the primary servers, or one of the replicas (if that replica has its binlog enabled), but the connector detects changes only in the cluster that is visible to that server. Generally, this is not a problem except for the multi-primary topologies.
The connector records its position in the server’s binlog, which is different on each server in the cluster. Therefore, the connector must follow just one MariaDB server instance. If that server fails, that server must be restarted or recovered before the connector can continue.
- Highly available clusters
- A variety of high availability solutions exist for MariaDB, and they make it significantly easier to tolerate and almost immediately recover from problems and failures. Because HA MariaDB clusters use GTIDs, replicas are able to track all of the changes that occur on any primary server.
- Multi-primary
A multi-primary topology uses one or more MariaDB replica nodes that each replicate from multiple primary servers. Cluster replication provides a powerful way to aggregate the replication of multiple MariaDB clusters.
A Debezium MariaDB connector can use these multi-primary MariaDB replicas as sources, and can fail over to different multi-primary MariaDB replicas as long as the new replica is caught up to the old replica. That is, the new replica has all transactions that were seen on the first replica. This works even if the connector is using only a subset of databases and/or tables, because the connector can be configured to include or exclude specific GTID sources when attempting to reconnect to a new multi-primary MariaDB replica and find the correct position in the binlog.
- Hosted
The Debezium MariaDB connector can use hosted database options such as Amazon RDS and Amazon Aurora.
Because these hosted options do not permit the use of global read locks, the connector uses table-level locks when it creates a consistent snapshot.
2.2.1.2. How Debezium MariaDB connectors handle database schema changes
When a database client queries a database, the client uses the database’s current schema. However, the database schema can be changed at any time, which means that the connector must be able to identify what the schema was at the time each insert, update, or delete operation was recorded. Also, a connector cannot necessarily apply the current schema to every event. If an event is relatively old, it’s possible that it was recorded before the current schema was applied.
To ensure correct processing of events that occur after a schema change, MariaDB includes in the transaction log not only the row-level changes that affect the data, but also the DDL statements that are applied to the database. As the connector encounters these DDL statements in the binlog, it parses them and updates an in-memory representation of each table’s schema. The connector uses this schema representation to identify the structure of the tables at the time of each insert, update, or delete operation and to produce the appropriate change event. In a separate database schema history Kafka topic, the connector records all DDL statements along with the position in the binlog where each DDL statement appeared.
When the connector restarts after either a crash or a graceful stop, it starts reading the binlog from a specific position, that is, from a specific point in time. The connector rebuilds the table structures that existed at this point in time by reading the database schema history Kafka topic and parsing all DDL statements up to the point in the binlog where the connector is starting.
This database schema history topic is for internal connector use only. Optionally, the connector can also emit schema change events to a different topic that is intended for consumer applications.
When the MariaDB connector captures changes in a table to which a schema change tool such as gh-ost
or pt-online-schema-change
is applied, there are helper tables created during the migration process. You must configure the connector to capture changes that occur in these helper tables. If consumers do not need the records the connector generates for helper tables, configure a single message transform (SMT) to remove these records from the messages that the connector emits.
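One possible way to drop such records, sketched below, uses the Filter transformation and the TopicNameMatches predicate that ship with Apache Kafka Connect to discard events whose topic names match a helper-table pattern. The regular expression shown assumes gh-ost style helper table names (for example, _customers_gho); adjust it for your migration tool.
transforms=dropHelperTables
transforms.dropHelperTables.type=org.apache.kafka.connect.transforms.Filter
transforms.dropHelperTables.predicate=isHelperTable
predicates=isHelperTable
predicates.isHelperTable.type=org.apache.kafka.connect.transforms.predicates.TopicNameMatches
predicates.isHelperTable.pattern=.*_(gho|ghc|del)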
Additional resources
- Default names for topics that receive Debezium event records.
2.2.1.3. How Debezium MariaDB connectors expose database schema changes
You can configure a Debezium MariaDB connector to produce schema change events that describe schema changes that are applied to tables in the database. The connector writes schema change events to a Kafka topic named <topicPrefix>
, where topicPrefix
is the namespace specified in the topic.prefix
connector configuration property. Messages that the connector sends to the schema change topic contain a payload, and, optionally, also contain the schema of the change event message.
The schema for the schema change event has the following elements:
name
- The name of the schema change event message.
type
- The type of the change event message.
version
- The version of the schema. The version is an integer that is incremented each time the schema is changed.
fields
- The fields that are included in the change event message.
Example: Schema of the MariaDB connector schema change topic
The following example shows a typical schema in JSON format.
{ "schema": { "type": "struct", "fields": [ { "type": "string", "optional": false, "field": "databaseName" } ], "optional": false, "name": "io.debezium.connector.mariadb.SchemaChangeKey", "version": 1 }, "payload": { "databaseName": "inventory" } }
The payload of a schema change event message includes the following elements:
ddl
-
Provides the SQL
CREATE
,ALTER
, orDROP
statement that results in the schema change. databaseName
-
The name of the database to which the DDL statements are applied. The value of
databaseName
serves as the message key. pos
- The position in the binlog where the statements appear.
tableChanges
-
A structured representation of the entire table schema after the schema change. The
tableChanges
field contains an array that includes entries for each column of the table. Because the structured representation presents data in JSON or Avro format, consumers can easily read messages without first processing them through a DDL parser.
For a table that is in capture mode, the connector not only stores the history of schema changes in the schema change topic, but also in an internal database schema history topic. The internal database schema history topic is for connector use only, and it is not intended for direct use by consuming applications. Ensure that applications that require notifications about schema changes consume that information only from the schema change topic.
Never partition the database schema history topic. For the database schema history topic to function correctly, it must maintain a consistent, global order of the event records that the connector emits to it.
To ensure that the topic is not split among partitions, set the partition count for the topic by using one of the following methods:
-
If you create the database schema history topic manually, specify a partition count of
1
. -
If you use the Apache Kafka broker to create the database schema history topic automatically, set the value of the Kafka
num.partitions
configuration option to1
.
The format of the messages that a connector emits to its schema change topic is in an incubating state and is subject to change without notice.
Example: Message emitted to the MariaDB connector schema change topic
The following example shows a typical schema change message in JSON format. The message contains a logical representation of the table schema.
{ "schema": { }, "payload": { "source": { 1 "version": "2.7.3.Final", "connector": "mariadb", "name": "mariadb", "ts_ms": 1651535750218, 2 "ts_us": 1651535750218000, 3 "ts_ns": 1651535750218000000, 4 "snapshot": "false", "db": "inventory", "sequence": null, "table": "customers", "server_id": 223344, "gtid": null, "file": "mariadb-bin.000003", "pos": 570, "row": 0, "thread": null, "query": null }, "databaseName": "inventory", 5 "schemaName": null, "ddl": "ALTER TABLE customers ADD middle_name varchar(255) AFTER first_name", 6 "tableChanges": [ 7 { "type": "ALTER", 8 "id": "\"inventory\".\"customers\"", 9 "table": { 10 "defaultCharsetName": "utf8mb4", "primaryKeyColumnNames": [ 11 "id" ], "columns": [ 12 { "name": "id", "jdbcType": 4, "nativeType": null, "typeName": "INT", "typeExpression": "INT", "charsetName": null, "length": null, "scale": null, "position": 1, "optional": false, "autoIncremented": true, "generated": true }, { "name": "first_name", "jdbcType": 12, "nativeType": null, "typeName": "VARCHAR", "typeExpression": "VARCHAR", "charsetName": "utf8mb4", "length": 255, "scale": null, "position": 2, "optional": false, "autoIncremented": false, "generated": false }, { "name": "middle_name", "jdbcType": 12, "nativeType": null, "typeName": "VARCHAR", "typeExpression": "VARCHAR", "charsetName": "utf8mb4", "length": 255, "scale": null, "position": 3, "optional": true, "autoIncremented": false, "generated": false }, { "name": "last_name", "jdbcType": 12, "nativeType": null, "typeName": "VARCHAR", "typeExpression": "VARCHAR", "charsetName": "utf8mb4", "length": 255, "scale": null, "position": 4, "optional": false, "autoIncremented": false, "generated": false }, { "name": "email", "jdbcType": 12, "nativeType": null, "typeName": "VARCHAR", "typeExpression": "VARCHAR", "charsetName": "utf8mb4", "length": 255, "scale": null, "position": 5, "optional": false, "autoIncremented": false, "generated": false } ], "attributes": [ 13 { "customAttribute": "attributeValue" } ] } } ] } }
Item | Field name | Description |
---|---|---|
1 |
|
The |
2 |
|
Optional field that displays the time at which the connector processed the event. The time is based on the system clock in the JVM running the Kafka Connect task. |
3 |
|
Identifies the database and the schema that contains the change. The value of the |
4 |
|
This field contains the DDL that is responsible for the schema change. The |
5 |
| An array of one or more items that contain the schema changes generated by a DDL command. |
6 |
| Describes the kind of change. The value is one of the following:
|
7 |
|
Full identifier of the table that was created, altered, or dropped. In the case of a table rename, this identifier is a concatenation of |
8 |
| Represents table metadata after the applied change. |
9 |
| List of columns that compose the table’s primary key. |
10 |
| Metadata for each column in the changed table. |
11 |
| Custom attribute metadata for each table change. |
For more information, see schema history topic.
2.2.1.4. How Debezium MariaDB connectors perform database snapshots
When a Debezium MariaDB connector is first started, it performs an initial consistent snapshot of your database. This snapshot enables the connector to establish a baseline for the current state of the database.
Debezium can use different modes when it runs a snapshot. The snapshot mode is determined by the snapshot.mode
configuration property. The default value of the property is initial
. You can customize the way that the connector creates snapshots by changing the value of the snapshot.mode
property.
You can find more information about snapshots in the following sections:
The connector completes a series of tasks when it performs the snapshot. The exact steps vary with the snapshot mode and with the table locking policy that is in effect for the database. The Debezium MariaDB connector completes different steps when it performs an initial snapshot that uses a global read lock or table-level locks.
2.2.1.4.1. Initial snapshots that use a global read lock
You can customize the way that the connector creates snapshots by changing the value of the snapshot.mode
property. If you configure a different snapshot mode, the connector completes the snapshot by using a modified version of this workflow. For information about the snapshot process in environments that do not permit global read locks, see the snapshot workflow for table-level locks.
Default workflow that the Debezium MariaDB connector uses to perform an initial snapshot with a global read lock
The following table shows the steps in the workflow that Debezium follows to create a snapshot with a global read lock.
Step | Action |
---|---|
1 | Establish a connection to the database. |
2 |
Determine the tables to be captured. By default, the connector captures the data for all non-system tables. After the snapshot completes, the connector continues to stream data for the specified tables. If you want the connector to capture data from only a subset of tables or table elements, you can direct it to do so by setting the table and column include or exclude list properties. |
3 |
Obtain a global read lock on the tables to be captured to block writes by other database clients. |
4 | Note The use of these isolation semantics can slow the progress of the snapshot. If the snapshot takes too long to complete, consider using a different isolation configuration, or skip the initial snapshot and run an incremental snapshot instead. |
5 | Read the current binlog position. |
6 | Capture the structure of all tables in the database, or all tables that are designated for capture. The connector persists schema information in its internal database schema history topic, including all necessary DDL statements. Note By default, the connector captures the schema of every table in the database, including tables that are not configured for capture. If tables are not configured for capture, the initial snapshot captures only their structure; it does not capture any table data. |
7 | Release the global read lock obtained in Step 3. Other database clients can now write to the database. |
8 | At the binlog position that the connector read in Step 5, the connector begins to scan the tables that are designated for capture. During the scan, the connector reads each row and produces a read event that records the row's current state. |
9 | Commit the transaction. |
10 | Record the successful completion of the snapshot in the connector offsets. |
The resulting initial snapshot captures the current state of each row in the captured tables. From this baseline state, the connector captures subsequent changes as they occur.
After the snapshot process begins, if the process is interrupted due to connector failure, rebalancing, or other reasons, the process restarts after the connector restarts.
After the connector completes the initial snapshot, it continues streaming from the position that it read in Step 5 so that it does not miss any updates.
If the connector stops again for any reason, after it restarts, it resumes streaming changes from where it previously left off.
After the connector restarts, if the logs have been pruned, the connector’s position in the logs might no longer be available. The connector then fails, and returns an error that indicates that a new snapshot is required. To configure the connector to automatically initiate a snapshot in this situation, set the value of the snapshot.mode
property to when_needed
. For more tips on troubleshooting the Debezium MariaDB connector, see behavior when things go wrong.
2.2.1.4.2. Initial snapshots that use table-level locks
In some database environments, administrators do not permit global read locks. If the Debezium MariaDB connector detects that global read locks are not permitted, the connector uses table-level locks when it performs snapshots. For the connector to perform a snapshot that uses table-level locks, the database account that the Debezium connector uses to connect to MariaDB must have LOCK TABLES
privileges.
Default workflow that the Debezium MariaDB connector uses to perform an initial snapshot with table-level locks
The following table shows the steps in the workflow that Debezium follows to create a snapshot with table-level read locks. For information about the snapshot process in environments that do not permit global read locks, see the snapshot workflow for global read locks.
Step | Action |
---|---|
1 | Establish a connection to the database. |
2 |
Determine the tables to be captured. By default, the connector captures all non-system tables. To have the connector capture a subset of tables or table elements, you can set a number of |
3 | Obtain table-level locks. |
4 | |
5 |
Read the current binlog position. |
6 |
Read the schema of the databases and tables for which the connector is configured to capture changes. The connector persists schema information in its internal database schema history topic, including all necessary DDL statements. Note
By default, the connector captures the schema of every table in the database, including tables that are not configured for capture. If tables are not configured for capture, the initial snapshot captures only their structure; it does not capture any table data. For more information about why snapshots persist schema information for tables that you did not include in the initial snapshot, see Understanding why initial snapshots capture the schema for all tables. |
7 |
At the binlog position that the connector read in Step 5, the connector begins to scan the tables that are designated for capture. During the scan, the connector completes the following tasks:
|
8 |
Commit the transaction. |
9 |
Release the table-level locks. Other database clients can now write to any previously locked tables. |
10 |
The following table describes the settings that are available for the snapshot.mode configuration property:
Setting | Description |
---|---|
| The connector performs a snapshot every time that it starts. The snapshot includes the structure and data of the captured tables. Specify this value to populate topics with a complete representation of the data from the captured tables every time that the connector starts. After the snapshot completes, the connector begins to stream event records for subsequent database changes. |
| The connector performs a database snapshot as described in the default workflow for creating an initial snapshot. After the snapshot completes, the connector begins to stream event records for subsequent database changes. |
| The connector performs a database snapshot. After the snapshot completes, the connector stops, and does not stream event records for subsequent database changes. |
|
Deprecated, see |
|
The connector captures the structure of all relevant tables, performing all the steps described in the default workflow for creating an initial snapshot, except that it does not create |
|
When the connector starts, rather than performing a snapshot, it immediately begins to stream event records for subsequent database changes. This option is under consideration for future deprecation, in favor of the |
|
Deprecated, see |
|
Set this option to restore a database schema history topic that is lost or corrupted. After a restart, the connector runs a snapshot that rebuilds the topic from the source tables. You can also set the property to periodically prune a database schema history topic that experiences unexpected growth. |
| After the connector starts, it performs a snapshot only if it detects one of the following circumstances:
|
For more information, see snapshot.mode
in the table of connector configuration properties.
2.2.1.4.3. Description of why initial snapshots capture the schema history for all tables
The initial snapshot that a connector runs captures two types of information:
- Table data
-
Information about
INSERT
,UPDATE
, andDELETE
operations in tables that are named in the connector’stable.include.list
property. - Schema data
- DDL statements that describe the structural changes that are applied to tables. Schema data is persisted to both the internal schema history topic, and to the connector’s schema change topic, if one is configured.
After you run an initial snapshot, you might notice that the snapshot captures schema information for tables that are not designated for capture. By default, initial snapshots are designed to capture schema information for every table that is present in the database, not only from tables that are designated for capture. Connectors require that the table’s schema is present in the schema history topic before they can capture a table. By enabling the initial snapshot to capture schema data for tables that are not part of the original capture set, Debezium prepares the connector to readily capture event data from these tables should that later become necessary. If the initial snapshot does not capture a table’s schema, you must add the schema to the history topic before the connector can capture data from the table.
In some cases, you might want to limit schema capture in the initial snapshot. This can be useful when you want to reduce the time required to complete a snapshot, or when Debezium connects to the database instance through a user account that has access to multiple logical databases, but you want the connector to capture changes only from tables in a specific logical database.
Additional information
- Capturing data from tables not captured by the initial snapshot (no schema change)
- Capturing data from tables not captured by the initial snapshot (schema change)
-
Setting the
schema.history.internal.store.only.captured.tables.ddl
property to specify the tables from which to capture schema information. -
Setting the
schema.history.internal.store.only.captured.databases.ddl
property to specify the logical databases from which to capture schema changes.
2.2.1.4.4. Capturing data from tables not captured by the initial snapshot (no schema change)
In some cases, you might want the connector to capture data from a table whose schema was not captured by the initial snapshot. Depending on the connector configuration, the initial snapshot might capture the table schema only for specific tables in the database. If the table schema is not present in the history topic, the connector fails to capture the table, and reports a missing schema error.
You might still be able to capture data from the table, but you must perform additional steps to add the table schema.
Prerequisites
- You want to capture data from a table with a schema that the connector did not capture during the initial snapshot.
- In the transaction log, all entries for the table use the same schema. For information about capturing data from a new table that has undergone structural changes, see Capturing data from tables not captured by the initial snapshot (schema change).
Procedure
- Stop the connector.
-
Remove the internal database schema history topic that is specified by the
schema.history.internal.kafka.topic property
. Apply the following changes to the connector configuration:
-
Set the
snapshot.mode
toschema_only_recovery
. -
Set the value of
schema.history.internal.store.only.captured.tables.ddl
tofalse
. -
Add the tables that you want the connector to capture to
table.include.list
. This guarantees that in the future, the connector can reconstruct the schema history for all tables.
- Restart the connector. The snapshot recovery process rebuilds the schema history based on the current structure of the tables.
- (Optional) After the snapshot completes, initiate an incremental snapshot to capture existing data for newly added tables along with changes to other tables that occurred while the connector was off-line.
-
(Optional) Reset the
snapshot.mode
back toschema_only
to prevent the connector from initiating recovery after a future restart.
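For reference, a hedged sketch of the configuration fragment that this procedure produces before you restart the connector (the table names are placeholders):
{ "snapshot.mode": "schema_only_recovery", "schema.history.internal.store.only.captured.tables.ddl": "false", "table.include.list": "inventory.customers,inventory.orders" }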
2.2.1.4.5. Capturing data from tables not captured by the initial snapshot (schema change)
If a schema change is applied to a table, records that are committed before the schema change have different structures than those that were committed after the change. When Debezium captures data from a table, it reads the schema history to ensure that it applies the correct schema to each event. If the schema is not present in the schema history topic, the connector is unable to capture the table, and an error results.
If you want to capture data from a table that was not captured by the initial snapshot, and the schema of the table was modified, you must add the schema to the history topic, if it is not already available. You can add the schema by running a new schema snapshot, or by running an initial snapshot for the table.
Prerequisites
- You want to capture data from a table with a schema that the connector did not capture during the initial snapshot.
- A schema change was applied to the table so that the records to be captured do not have a uniform structure.
Procedure
- Initial snapshot captured the schema for all tables (
store.only.captured.tables.ddl
was set tofalse
) -
Edit the
table.include.list
property to specify the tables that you want to capture. - Restart the connector.
- Initiate an incremental snapshot if you want to capture existing data from the newly added tables.
- Initial snapshot did not capture the schema for all tables (
store.only.captured.tables.ddl
was set totrue
) If the initial snapshot did not save the schema of the table that you want to capture, complete one of the following procedures:
- Procedure 1: Schema snapshot, followed by incremental snapshot
In this procedure, the connector first performs a schema snapshot. You can then initiate an incremental snapshot to enable the connector to synchronize data.
- Stop the connector.
-
Remove the internal database schema history topic that is specified by the
schema.history.internal.kafka.topic property
. Clear the offsets in the configured Kafka Connect
offset.storage.topic
. For more information about how to remove offsets, see the Debezium community FAQ.WarningRemoving offsets should be performed only by advanced users who have experience in manipulating internal Kafka Connect data. This operation is potentially destructive, and should be performed only as a last resort.
Set values for properties in the connector configuration as described in the following steps:
-
Set the value of the
snapshot.mode
property toschema_only
. -
Edit the
table.include.list
to add the tables that you want to capture.
- Restart the connector.
- Wait for Debezium to capture the schema of the new and existing tables. Data changes that occurred in any tables after the connector stopped are not captured.
- To ensure that no data is lost, initiate an incremental snapshot.
- Procedure 2: Initial snapshot, followed by optional incremental snapshot
In this procedure the connector performs a full initial snapshot of the database. As with any initial snapshot, in a database with many large tables, running an initial snapshot can be a time-consuming operation. After the snapshot completes, you can optionally trigger an incremental snapshot to capture any changes that occur while the connector is off-line.
- Stop the connector.
-
Remove the internal database schema history topic that is specified by the
schema.history.internal.kafka.topic property
. Clear the offsets in the configured Kafka Connect
offset.storage.topic
. For more information about how to remove offsets, see the Debezium community FAQ.WarningRemoving offsets should be performed only by advanced users who have experience in manipulating internal Kafka Connect data. This operation is potentially destructive, and should be performed only as a last resort.
-
Edit the
table.include.list
to add the tables that you want to capture. Set values for properties in the connector configuration as described in the following steps:
-
Set the value of the
snapshot.mode
property toinitial
. -
(Optional) Set
schema.history.internal.store.only.captured.tables.ddl
tofalse
.
- Restart the connector. The connector takes a full database snapshot. After the snapshot completes, the connector transitions to streaming.
- (Optional) To capture any data that changed while the connector was off-line, initiate an incremental snapshot.
2.2.1.5. Ad hoc snapshots
By default, a connector runs an initial snapshot operation only after it starts for the first time. Following this initial snapshot, under normal circumstances, the connector does not repeat the snapshot process. Any future change event data that the connector captures comes in through the streaming process only.
However, in some situations the data that the connector obtained during the initial snapshot might become stale, lost, or incomplete. To provide a mechanism for recapturing table data, Debezium includes an option to perform ad hoc snapshots. You might want to perform an ad hoc snapshot after any of the following changes occur in your Debezium environment:
- The connector configuration is modified to capture a different set of tables.
- Kafka topics are deleted and must be rebuilt.
- Data corruption occurs due to a configuration error or some other problem.
You can re-run a snapshot for a table for which you previously captured a snapshot by initiating a so-called ad hoc snapshot. Ad hoc snapshots require the use of signaling tables. You initiate an ad hoc snapshot by sending a signal request to the Debezium signaling table.
When you initiate an ad hoc snapshot of an existing table, the connector appends content to the topic that already exists for the table. If a previously existing topic was removed, Debezium can create a topic automatically if automatic topic creation is enabled.
Ad hoc snapshot signals specify the tables to include in the snapshot. The snapshot can capture the entire contents of the database, or capture only a subset of the tables in the database. Also, the snapshot can capture a subset of the contents of the table(s) in the database.
You specify the tables to capture by sending an execute-snapshot
message to the signaling table. Set the type of the execute-snapshot
signal to incremental
or blocking
, and provide the names of the tables to include in the snapshot, as described in the following table:
Field | Default | Value |
---|---|---|
|
|
Specifies the type of snapshot that you want to run. |
| N/A |
An array that contains regular expressions matching the fully-qualified names of the tables to include in the snapshot. |
| N/A |
An optional array that specifies a set of additional conditions that the connector evaluates to determine the subset of records to include in a snapshot.
|
| N/A | An optional string that specifies the column name that the connector uses as the primary key of a table during the snapshot process. |
Triggering an ad hoc incremental snapshot
You initiate an ad hoc incremental snapshot by adding an entry with the execute-snapshot
signal type to the signaling table, or by sending a signal message to a Kafka signaling topic. After the connector processes the message, it begins the snapshot operation. The snapshot process reads the first and last primary key values and uses those values as the start and end point for each table. Based on the number of entries in the table, and the configured chunk size, Debezium divides the table into chunks, and proceeds to snapshot each chunk, in succession, one at a time.
For more information, see Incremental snapshots.
Triggering an ad hoc blocking snapshot
You initiate an ad hoc blocking snapshot by adding an entry with the execute-snapshot
signal type to the signaling table or signaling topic. After the connector processes the message, it begins the snapshot operation. The connector temporarily stops streaming, and then initiates a snapshot of the specified table, following the same process that it uses during an initial snapshot. After the snapshot completes, the connector resumes streaming.
For more information, see Blocking snapshots.
2.2.1.6. Incremental snapshots
To provide flexibility in managing snapshots, Debezium includes a supplementary snapshot mechanism, known as incremental snapshotting. Incremental snapshots rely on the Debezium mechanism for sending signals to a Debezium connector.
In an incremental snapshot, instead of capturing the full state of a database all at once, as in an initial snapshot, Debezium captures each table in phases, in a series of configurable chunks. You can specify the tables that you want the snapshot to capture and the size of each chunk. The chunk size determines the number of rows that the snapshot collects during each fetch operation on the database. The default chunk size for incremental snapshots is 1024 rows.
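For example, a minimal sketch of adjusting the chunk size, assuming the standard incremental.snapshot.chunk.size connector property (the value shown is illustrative):
{ "incremental.snapshot.chunk.size": "2048" }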
As an incremental snapshot proceeds, Debezium uses watermarks to track its progress, maintaining a record of each table row that it captures. This phased approach to capturing data provides the following advantages over the standard initial snapshot process:
- You can run incremental snapshots in parallel with streamed data capture, instead of postponing streaming until the snapshot completes. The connector continues to capture near real-time events from the change log throughout the snapshot process, and neither operation blocks the other.
- If the progress of an incremental snapshot is interrupted, you can resume it without losing any data. After the process resumes, the snapshot begins at the point where it stopped, rather than recapturing the table from the beginning.
-
You can run an incremental snapshot on demand at any time, and repeat the process as needed to adapt to database updates. For example, you might re-run a snapshot after you modify the connector configuration to add a table to its
table.include.list
property.
Incremental snapshot process
When you run an incremental snapshot, Debezium sorts each table by primary key and then splits the table into chunks based on the configured chunk size. Working chunk by chunk, it then captures each table row in a chunk. For each row that it captures, the snapshot emits a READ
event. That event represents the value of the row when the snapshot for the chunk began.
As a snapshot proceeds, it’s likely that other processes continue to access the database, potentially modifying table records. To reflect such changes, INSERT
, UPDATE
, or DELETE
operations are committed to the transaction log as usual. Similarly, the ongoing Debezium streaming process continues to detect these change events and emits corresponding change event records to Kafka.
How Debezium resolves collisions among records with the same primary key
In some cases, the UPDATE
or DELETE
events that the streaming process emits are received out of sequence. That is, the streaming process might emit an event that modifies a table row before the snapshot captures the chunk that contains the READ
event for that row. When the snapshot eventually emits the corresponding READ
event for the row, its value is already superseded. To ensure that incremental snapshot events that arrive out of sequence are processed in the correct logical order, Debezium employs a buffering scheme for resolving collisions. Only after collisions between the snapshot events and the streamed events are resolved does Debezium emit an event record to Kafka.
Snapshot window
To assist in resolving collisions between late-arriving READ
events and streamed events that modify the same table row, Debezium employs a so-called snapshot window. The snapshot window demarcates the interval during which an incremental snapshot captures data for a specified table chunk. Before the snapshot window for a chunk opens, Debezium follows its usual behavior and emits events from the transaction log directly downstream to the target Kafka topic. But from the moment that the snapshot for a particular chunk opens, until it closes, Debezium performs a de-duplication step to resolve collisions between events that have the same primary key.
For each data collection, Debezium emits two types of events, and stores the records for both in a single destination Kafka topic. The snapshot records that it captures directly from a table are emitted as READ
operations. Meanwhile, as users continue to update records in the data collection, and the transaction log is updated to reflect each commit, Debezium emits UPDATE
or DELETE
operations for each change.
As the snapshot window opens, and Debezium begins processing a snapshot chunk, it delivers snapshot records to a memory buffer. During the snapshot window, the primary keys of the READ
events in the buffer are compared to the primary keys of the incoming streamed events. If no match is found, the streamed event record is sent directly to Kafka. If Debezium detects a match, it discards the buffered READ
event, and writes the streamed record to the destination topic, because the streamed event logically supersedes the static snapshot event. After the snapshot window for the chunk closes, the buffer contains only READ
events for which no related transaction log events exist. Debezium emits these remaining READ
events to the table’s Kafka topic.
The connector repeats the process for each snapshot chunk.
Currently, you can use either of the following methods to initiate an incremental snapshot:
2.2.1.6.1. Triggering an incremental snapshot
To initiate an incremental snapshot, you can send an ad hoc snapshot signal to the signaling table on the source database. You submit snapshot signals as SQL INSERT
queries.
After Debezium detects the change in the signaling table, it reads the signal, and runs the requested snapshot operation.
The query that you submit specifies the tables to include in the snapshot, and, optionally, specifies the type of snapshot operation. Debezium currently supports the incremental
and blocking
snapshot types.
To specify the tables to include in the snapshot, provide a data-collections
array that lists the tables, or an array of regular expressions used to match tables, for example,
{"data-collections": ["public.MyFirstTable", "public.MySecondTable"]}
The data-collections
array for an incremental snapshot signal has no default value. If the data-collections
array is empty, Debezium interprets the empty array to mean that no action is required, and it does not perform a snapshot.
If the name of a table that you want to include in a snapshot contains a dot (.
), a space, or some other non-alphanumeric character, you must escape the table name in double quotes.
For example, to include a table that exists in the db1
database, and that has the name My.Table
, use the following format: "db1.\"My.Table\""
.
Prerequisites
- A signaling data collection exists on the source database.
-
The signaling data collection is specified in the
signal.data.collection
property.
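For example, a hedged sketch of the corresponding connector configuration fragment (the signaling table name is a placeholder):
{ "signal.data.collection": "db1.debezium_signal" }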
Using a source signaling channel to trigger an incremental snapshot
Send a SQL query to add the ad hoc incremental snapshot request to the signaling table:
INSERT INTO <signalTable> (id, type, data) VALUES ('<id>', '<snapshotType>', '{"data-collections": ["<fullyQualifiedTableName>","<fullyQualifiedTableName>"],"type":"<snapshotType>","additional-conditions":[{"data-collection": "<fullyQualifiedTableName>", "filter": "<additional-condition>"}]}');
For example,
INSERT INTO db1.debezium_signal (id, type, data) 1 values ('ad-hoc-1', 2 'execute-snapshot', 3 '{"data-collections": ["db1.table1", "db1.table2"], 4 "type":"incremental", 5 "additional-conditions":[{"data-collection": "db1.table1" ,"filter":"color=\'blue\'"}]}'); 6
The values of the
id
,type
, anddata
parameters in the command correspond to the fields of the signaling table.
The following table describes the parameters in the example:Table 2.28. Descriptions of fields in a SQL command for sending an incremental snapshot signal to the signaling table Item Value Description 1
database.debezium_signal
Specifies the fully-qualified name of the signaling table on the source database.
2
ad-hoc-1
The
id
parameter specifies an arbitrary string that is assigned as theid
identifier for the signal request.
Use this string to identify logging messages to entries in the signaling table. Debezium does not use this string. Rather, during the snapshot, Debezium generates its ownid
string as a watermarking signal.
3
execute-snapshot
The
type
parameter specifies the operation that the signal is intended to trigger.
4
data-collections
A required component of the
data
field of a signal that specifies an array of table names or regular expressions to match table names to include in the snapshot.
The array lists regular expressions that use the formatdatabase.table
to match the fully-qualified names of the tables. This format is the same as the one that you use to specify the name of the connector’s signaling table.
5
incremental
An optional
type
component of thedata
field of a signal that specifies the type of snapshot operation to run.
Valid values areincremental
andblocking
.
If you do not specify a value, the connector defaults to performing an incremental snapshot.
6
additional-conditions
An optional array that specifies a set of additional conditions that the connector evaluates to determine the subset of records to include in a snapshot.
Each additional condition is an object withdata-collection
andfilter
properties. You can specify different filters for each data collection.
* Thedata-collection
property is the fully-qualified name of the data collection that the filter applies to. For more information about theadditional-conditions
parameter, see Running an ad hoc incremental snapshot with additional-conditions
.
Running an ad hoc incremental snapshot with additional-conditions
If you want a snapshot to include only a subset of the content in a table, you can modify the signal request by appending an additional-conditions
parameter to the snapshot signal.
The SQL query for a typical snapshot takes the following form:
SELECT * FROM <tableName> ....
By adding an additional-conditions
parameter, you append a WHERE
condition to the SQL query, as in the following example:
SELECT * FROM <data-collection> WHERE <filter> ....
The following example shows a SQL query to send an ad hoc incremental snapshot request with an additional condition to the signaling table:
INSERT INTO <signalTable> (id, type, data) VALUES ('<id>', '<snapshotType>', '{"data-collections": ["<fullyQualifiedTableName>","<fullyQualifiedTableName>"],"type":"<snapshotType>","additional-conditions":[{"data-collection": "<fullyQualifiedTableName>", "filter": "<additional-condition>"}]}');
For example, suppose you have a products
table that contains the following columns:
-
id
(primary key) -
color
-
quantity
If you want an incremental snapshot of the products
table to include only the data items where color=blue
, you can use the following SQL statement to trigger the snapshot:
INSERT INTO db1.debezium_signal (id, type, data) VALUES('ad-hoc-1', 'execute-snapshot', '{"data-collections": ["db1.products"],"type":"incremental", "additional-conditions":[{"data-collection": "db1.products", "filter": "color=blue"}]}');
The additional-conditions
parameter also enables you to pass conditions that are based on more than one column. For example, using the products
table from the previous example, you can submit a query that triggers an incremental snapshot that includes the data of only those items for which color=blue
and quantity>10
:
INSERT INTO db1.debezium_signal (id, type, data) VALUES('ad-hoc-1', 'execute-snapshot', '{"data-collections": ["db1.products"],"type":"incremental", "additional-conditions":[{"data-collection": "db1.products", "filter": "color=blue AND quantity>10"}]}');
The following example shows the JSON for an incremental snapshot event that is captured by a connector.
Example 2.9. Incremental snapshot event message
{ "before":null, "after": { "pk":"1", "value":"New data" }, "source": { ... "snapshot":"incremental" 1 }, "op":"r", 2 "ts_ms":"1620393591654", "ts_us":"1620393591654547", "ts_ns":"1620393591654547920", "transaction":null }
Item | Field name | Description |
---|---|---|
1 |
|
Specifies the type of snapshot operation to run. |
2 |
|
Specifies the event type. |
2.2.1.6.2. Using the Kafka signaling channel to trigger an incremental snapshot
You can send a message to the configured Kafka topic to request the connector to run an ad hoc incremental snapshot.
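Using the Kafka signaling channel requires that it is enabled for the connector. The following is a minimal sketch, assuming the standard signaling channel properties; the topic name and broker address are placeholders:
{ "signal.enabled.channels": "source,kafka", "signal.kafka.topic": "dbz-signals", "signal.kafka.bootstrap.servers": "kafka-broker:9092" }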
The key of the Kafka message must match the value of the topic.prefix
connector configuration option.
The value of the message is a JSON object with type
and data
fields.
The signal type is execute-snapshot
, and the data
field must have the following fields:
Field | Default | Value |
---|---|---|
|
|
The type of the snapshot to be executed. Currently Debezium supports the |
| N/A |
An array of comma-separated regular expressions that match the fully-qualified names of tables to include in the snapshot. |
| N/A |
An optional array of additional conditions that specifies criteria that the connector evaluates to designate a subset of records to include in a snapshot. |
Example 2.10. An execute-snapshot
Kafka message
Key = `test_connector` Value = `{"type":"execute-snapshot","data": {"data-collections": ["{collection-container}.table1", "{collection-container}.table2"], "type": "INCREMENTAL"}}`
Ad hoc incremental snapshots with additional-conditions
Debezium uses the additional-conditions
field to select a subset of a table’s content.
Typically, when Debezium runs a snapshot, it runs a SQL query such as:
SELECT * FROM <tableName> ….
When the snapshot request includes an additional-conditions
property, the data-collection
and filter
parameters of the property are appended to the SQL query, for example:
SELECT * FROM <data-collection> WHERE <filter> ….
For example, given a products
table with the columns id
(primary key), color
, and brand
, if you want a snapshot to include only content for which color='blue'
, when you request the snapshot, you could add the additional-conditions
property to filter the content:
Key = `test_connector` Value = `{"type":"execute-snapshot","data": {"data-collections": ["db1.products"], "type": "INCREMENTAL", "additional-conditions": [{"data-collection": "db1.products" ,"filter":"color='blue'"}]}}`
You can also use the additional-conditions
property to pass conditions based on multiple columns. For example, using the same products
table as in the previous example, if you want a snapshot to include only the content from the products
table for which color='blue'
, and brand='MyBrand'
, you could send the following request:
Key = `test_connector` Value = `{"type":"execute-snapshot","data": {"data-collections": ["db1.products"], "type": "INCREMENTAL", "additional-conditions": [{"data-collection": "db1.products" ,"filter":"color='blue' AND brand='MyBrand'"}]}}`
2.2.1.6.3. Stopping an incremental snapshot
In some situations, it might be necessary to stop an incremental snapshot. For example, you might realize that the snapshot was not configured correctly, or you might want to ensure that resources are available for other database operations. You can stop a snapshot that is already running by sending a signal to the signaling table on the source database.
You submit a stop snapshot signal to the signaling table by sending it in a SQL INSERT
query. The stop-snapshot signal specifies the type
of the snapshot operation as incremental
, and optionally specifies the tables that you want to omit from the currently running snapshot. After Debezium detects the change in the signaling table, it reads the signal, and stops the incremental snapshot operation if it’s in progress.
Additional resources
You can also stop an incremental snapshot by sending a JSON message to the Kafka signaling topic.
Prerequisites
- A signaling data collection exists on the source database.
-
The signaling data collection is specified in the
signal.data.collection
property.
Using a source signaling channel to stop an incremental snapshot
Send a SQL query to stop the ad hoc incremental snapshot to the signaling table:
INSERT INTO <signalTable> (id, type, data) values ('<id>', 'stop-snapshot', '{"data-collections": ["<fullyQualifiedTableName>","<fullyQualifiedTableName>"],"type":"incremental"}');
For example,
INSERT INTO db1.debezium_signal (id, type, data) 1 values ('ad-hoc-1', 2 'stop-snapshot', 3 '{"data-collections": ["db1.table1", "db1.table2"], 4 "type":"incremental"}'); 5
The values of the
id
,type
, anddata
parameters in the signal command correspond to the fields of the signaling table.
The following table describes the parameters in the example:Table 2.31. Descriptions of fields in a SQL command for sending a stop incremental snapshot signal to the signaling table Item Value Description 1
database.debezium_signal
Specifies the fully-qualified name of the signaling table on the source database.
2
ad-hoc-1
The
id
parameter specifies an arbitrary string that is assigned as theid
identifier for the signal request.
Use this string to identify logging messages to entries in the signaling table. Debezium does not use this string.
3
stop-snapshot
The
type
parameter specifies the operation that the signal is intended to trigger.
4
data-collections
An optional component of the
data
field of a signal that specifies an array of table names or regular expressions to match table names to remove from the snapshot.
The array lists regular expressions that match tables by their fully-qualified names in the format database.table.
If you omit this component from the
data
field, the signal stops the entire incremental snapshot that is in progress.
5
incremental
A required component of the
data
field of a signal that specifies the type of snapshot operation that is to be stopped.
Currently, the only valid option isincremental
.
If you do not specify atype
value, the signal fails to stop the incremental snapshot.
2.2.1.6.4. Using the Kafka signaling channel to stop an incremental snapshot
You can send a signal message to the configured Kafka signaling topic to stop an ad hoc incremental snapshot.
The key of the Kafka message must match the value of the topic.prefix
connector configuration option.
The value of the message is a JSON object with type
and data
fields.
The signal type is stop-snapshot
, and the data
field must have the following fields:
Field | Default | Value |
---|---|---|
|
|
The type of the snapshot to be executed. Currently Debezium supports only the |
| N/A |
An optional array of comma-separated regular expressions that match the fully-qualified names of the tables to remove from the snapshot. |
The following example shows a typical stop-snapshot
Kafka message:
Key = `test_connector` Value = `{"type":"stop-snapshot","data": {"data-collections": ["db1.table1", "db1.table2"], "type": "INCREMENTAL"}}`
2.2.1.7. Blocking snapshots
To provide more flexibility in managing snapshots, Debezium includes a supplementary ad hoc snapshot mechanism, known as a blocking snapshot. Blocking snapshots rely on the Debezium mechanism for sending signals to a Debezium connector.
A blocking snapshot behaves just like an initial snapshot, except that you can trigger it at run time.
You might want to run a blocking snapshot rather than use the standard initial snapshot process in the following situations:
- You add a new table and you want to complete the snapshot while the connector is running.
- You add a large table, and you want the snapshot to complete in less time than is possible with an incremental snapshot.
Blocking snapshot process
When you run a blocking snapshot, Debezium stops streaming, and then initiates a snapshot of the specified table, following the same process that it uses during an initial snapshot. After the snapshot completes, streaming resumes.
Configure snapshot
You can set the following properties in the data
component of a signal:
- data-collections: to specify which tables to snapshot
- additional-conditions: You can specify different filters for different tables.
-
The
data-collection
property is the fully-qualified name of the table to which the filter is applied.
The
filter
property will have the same value used in the snapshot.select.statement.overrides property.
For example:
{"type": "blocking", "data-collections": ["schema1.table1", "schema1.table2"], "additional-conditions": [{"data-collection": "schema1.table1", "filter": "SELECT * FROM [schema1].[table1] WHERE column1 = 0 ORDER BY column2 DESC"}, {"data-collection": "schema1.table2", "filter": "SELECT * FROM [schema1].[table2] WHERE column2 > 0"}]}
Possible duplicates
A delay might exist between the time that you send the signal to trigger the snapshot, and the time when streaming stops and the snapshot starts. As a result of this delay, after the snapshot completes, the connector might emit some event records that duplicate records captured by the snapshot.
2.2.1.8. Default names of Kafka topics that receive Debezium MariaDB change event records
By default, the MariaDB connector writes change events for all of the INSERT
, UPDATE
, and DELETE
operations that occur in a table to a single Apache Kafka topic that is specific to that table.
The connector uses the following convention to name change event topics:
topicPrefix.databaseName.tableName
Suppose that fulfillment
is the topic prefix, inventory
is the database name, and the database contains tables named orders
, customers
, and products
. The Debezium MariaDB connector emits events to three Kafka topics, one for each table in the database:
fulfillment.inventory.orders fulfillment.inventory.customers fulfillment.inventory.products
The following list provides definitions for the components of the default name:
- topicPrefix
-
The topic prefix as specified by the
topic.prefix
connector configuration property. - schemaName
- The name of the schema in which the operation occurred.
- tableName
- The name of the table in which the operation occurred.
The connector applies similar naming conventions to label its internal database schema history topics, schema change topics, and transaction metadata topics.
If the default topic names do not meet your requirements, you can configure custom topic names. To configure custom topic names, you specify regular expressions in the logical topic routing SMT. For more information about using the logical topic routing SMT to customize topic naming, see Topic routing.
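For example, the following is a hedged sketch of a logical topic routing SMT configuration that reroutes matching topics to a single topic; the regular expression and replacement are illustrative, and the SMT class and options are assumed to be the standard io.debezium.transforms.ByLogicalTableRouter:
{ "transforms": "Reroute", "transforms.Reroute.type": "io.debezium.transforms.ByLogicalTableRouter", "transforms.Reroute.topic.regex": "fulfillment.inventory.(.*)", "transforms.Reroute.topic.replacement": "fulfillment.inventory.all_tables" }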
Transaction metadata
Debezium can generate events that represent transaction boundaries and that enrich data change event messages.
Debezium registers and receives metadata only for transactions that occur after you deploy the connector. Metadata for transactions that occur before you deploy the connector is not available.
Debezium generates transaction boundary events for the BEGIN
and END
delimiters in every transaction. Transaction boundary events contain the following fields:
status
-
BEGIN
orEND
. id
- String representation of the unique transaction identifier.
ts_ms
-
The time of a transaction boundary event (
BEGIN
orEND
event) at the data source. If the data source does not provide Debezium with the event time, then the field instead represents the time at which Debezium processes the event. event_count
(forEND
events)- Total number of events emitted by the transaction.
data_collections
(forEND
events)-
An array of pairs of
data_collection
andevent_count
elements that indicates the number of events that the connector emits for changes that originate from a data collection.
Example
{ "status": "BEGIN", "id": "0e4d5dcd-a33b-11ea-80f1-02010a22a99e:10", "ts_ms": 1486500577125, "event_count": null, "data_collections": null } { "status": "END", "id": "0e4d5dcd-a33b-11ea-80f1-02010a22a99e:10", "ts_ms": 1486500577691, "event_count": 2, "data_collections": [ { "data_collection": "s1.a", "event_count": 1 }, { "data_collection": "s2.a", "event_count": 1 } ] }
Unless overridden via the topic.transaction
option, the connector emits transaction events to the <topic.prefix>
.transaction
topic.
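Transaction metadata events are not emitted by default. The following is a minimal sketch of enabling them, assuming the standard provide.transaction.metadata connector property:
{ "provide.transaction.metadata": "true" }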
Change data event enrichment
When transaction metadata is enabled, the data message Envelope
is enriched with a new transaction
field. This field provides information about every event in the form of a composite of fields:
id
- String representation of the unique transaction identifier.
total_order
- The absolute position of the event among all events generated by the transaction.
data_collection_order
- The per-data collection position of the event among all events that were emitted by the transaction.
Following is an example of a message:
{ "before": null, "after": { "pk": "2", "aa": "1" }, "source": { ... }, "op": "c", "ts_ms": "1580390884335", "ts_us": "1580390884335472", "ts_ns": "1580390884335472987", "transaction": { "id": "0e4d5dcd-a33b-11ea-80f1-02010a22a99e:10", "total_order": "1", "data_collection_order": "1" } }
2.2.2. Descriptions of Debezium MariaDB connector data change events
The Debezium MariaDB connector generates a data change event for each row-level INSERT
, UPDATE
, and DELETE
operation. Each event contains a key and a value. The structure of the key and the value depends on the table that was changed.
Debezium and Kafka Connect are designed around continuous streams of event messages. However, the structure of these events may change over time, which can be difficult for consumers to handle. To address this, each event contains the schema for its content or, if you are using a schema registry, a schema ID that a consumer can use to obtain the schema from the registry. This makes each event self-contained.
The following skeleton JSON shows the basic four parts of a change event. However, how you configure the Kafka Connect converter that you choose to use in your application determines the representation of these four parts in change events. A schema
field is in a change event only when you configure the converter to produce it. Likewise, the event key and event payload are in a change event only if you configure a converter to produce it. If you use the JSON converter and you configure it to produce all four basic change event parts, change events have this structure:
{ "schema": { 1 ... }, "payload": { 2 ... }, "schema": { 3 ... }, "payload": { 4 ... }, }
Item | Field name | Description |
---|---|---|
1 |
|
The first |
2 |
|
The first |
3 |
|
The second |
4 |
|
The second |
By default, the connector streams change event records to topics with names that are the same as the event’s originating table. See topic names.
The MariaDB connector ensures that all Kafka Connect schema names adhere to the Avro schema name format. This means that the logical server name must start with a Latin letter or an underscore, that is, a-z, A-Z, or _. Each remaining character in the logical server name and each character in the database and table names must be a Latin letter, a digit, or an underscore, that is, a-z, A-Z, 0-9, or _. If there is an invalid character it is replaced with an underscore character.
This can lead to unexpected conflicts if the logical server name, a database name, or a table name contains invalid characters, and the only characters that distinguish names from one another are invalid and thus replaced with underscores.
More details are in the following topics:
2.2.2.1. About keys in Debezium MariaDB change events
A change event’s key contains the schema for the changed table’s key and the changed row’s actual key. Both the schema and its corresponding payload contain a field for each column in the changed table’s PRIMARY KEY
(or unique constraint) at the time the connector created the event.
Consider the following customers
table, which is followed by an example of a change event key for this table.
CREATE TABLE customers ( id INTEGER NOT NULL AUTO_INCREMENT PRIMARY KEY, first_name VARCHAR(255) NOT NULL, last_name VARCHAR(255) NOT NULL, email VARCHAR(255) NOT NULL UNIQUE KEY ) AUTO_INCREMENT=1001;
Every change event that captures a change to the customers
table has the same event key schema. For as long as the customers
table has the previous definition, every change event that captures a change to the customers
table has the following key structure. In JSON, it looks like this:
{ "schema": { 1 "type": "struct", "name": "mariadb-server-1.inventory.customers.Key", 2 "optional": false, 3 "fields": [ 4 { "field": "id", "type": "int32", "optional": false } ] }, "payload": { 5 "id": 1001 } }
Item | Field name | Description |
---|---|---|
1 |
|
The schema portion of the key specifies a Kafka Connect schema that describes what is in the key’s |
2 |
|
Name of the schema that defines the structure of the key’s payload. This schema describes the structure of the primary key for the table that was changed. Key schema names have the format connector-name.database-name.table-name.
|
3 |
|
Indicates whether the event key must contain a value in its |
4 |
|
Specifies each field that is expected in the |
5 |
|
Contains the key for the row for which this change event was generated. In this example, the key contains a single |
2.2.2.2. About values in Debezium MariaDB change events
The value in a change event is a bit more complicated than the key. Like the key, the value has a schema
section and a payload
section. The schema
section contains the schema that describes the Envelope
structure of the payload
section, including its nested fields. Change events for operations that create, update or delete data all have a value payload with an envelope structure.
Consider the same sample table that was used to show an example of a change event key:
CREATE TABLE customers ( id INTEGER NOT NULL AUTO_INCREMENT PRIMARY KEY, first_name VARCHAR(255) NOT NULL, last_name VARCHAR(255) NOT NULL, email VARCHAR(255) NOT NULL UNIQUE KEY ) AUTO_INCREMENT=1001;
The value portion of a change event for a change to this table is described for:
create events
The following example shows the value portion of a change event that the connector generates for an operation that creates data in the customers
table:
{ "schema": { 1 "type": "struct", "fields": [ { "type": "struct", "fields": [ { "type": "int32", "optional": false, "field": "id" }, { "type": "string", "optional": false, "field": "first_name" }, { "type": "string", "optional": false, "field": "last_name" }, { "type": "string", "optional": false, "field": "email" } ], "optional": true, "name": "mariadb-server-1.inventory.customers.Value", 2 "field": "before" }, { "type": "struct", "fields": [ { "type": "int32", "optional": false, "field": "id" }, { "type": "string", "optional": false, "field": "first_name" }, { "type": "string", "optional": false, "field": "last_name" }, { "type": "string", "optional": false, "field": "email" } ], "optional": true, "name": "mariadb-server-1.inventory.customers.Value", "field": "after" }, { "type": "struct", "fields": [ { "type": "string", "optional": false, "field": "version" }, { "type": "string", "optional": false, "field": "connector" }, { "type": "string", "optional": false, "field": "name" }, { "type": "int64", "optional": false, "field": "ts_ms" }, { "type": "int64", "optional": false, "field": "ts_us" }, { "type": "int64", "optional": false, "field": "ts_ns" }, { "type": "boolean", "optional": true, "default": false, "field": "snapshot" }, { "type": "string", "optional": false, "field": "db" }, { "type": "string", "optional": true, "field": "table" }, { "type": "int64", "optional": false, "field": "server_id" }, { "type": "string", "optional": true, "field": "gtid" }, { "type": "string", "optional": false, "field": "file" }, { "type": "int64", "optional": false, "field": "pos" }, { "type": "int32", "optional": false, "field": "row" }, { "type": "int64", "optional": true, "field": "thread" }, { "type": "string", "optional": true, "field": "query" } ], "optional": false, "name": "io.debezium.connector.mariadb.Source", 3 "field": "source" }, { "type": "string", "optional": false, "field": "op" }, { "type": "int64", "optional": true, "field": "ts_ms" }, { "type": "int64", "optional": true, "field": "ts_us" }, { "type": "int64", "optional": true, "field": "ts_ns" } ], "optional": false, "name": "mariadb-server-1.inventory.customers.Envelope" 4 }, "payload": { 5 "op": "c", 6 "ts_ms": 1465491411815, 7 "ts_us": 1465491411815437, 8 "ts_ns": 1465491411815437158, 9 "before": null, 10 "after": { 11 "id": 1004, "first_name": "Anne", "last_name": "Kretchmar", "email": "annek@noanswer.org" }, "source": { 12 "version": "2.7.3.Final", "connector": "mariadb", "name": "mariadb-server-1", "ts_ms": 0, "ts_us": 0, "ts_ns": 0, "snapshot": false, "db": "inventory", "table": "customers", "server_id": 0, "gtid": null, "file": "mariadb-bin.000003", "pos": 154, "row": 0, "thread": 7, "query": "INSERT INTO customers (first_name, last_name, email) VALUES ('Anne', 'Kretchmar', 'annek@noanswer.org')" } } }
Item | Field name | Description |
---|---|---|
1 |
| The value’s schema, which describes the structure of the value’s payload. A change event’s value schema is the same in every change event that the connector generates for a particular table. |
2 |
|
In the |
3 |
|
|
4 |
|
|
5 |
|
The value’s actual data. This is the information that the change event is providing. |
6 |
|
Mandatory string that describes the type of operation that caused the connector to generate the event. In this example,
|
7 |
|
Optional field that displays the time at which the connector processed the event. The time is based on the system clock in the JVM running the Kafka Connect task. |
8 |
|
An optional field that specifies the state of the row before the event occurred. When the |
9 |
|
An optional field that specifies the state of the row after the event occurred. In this example, the |
10 |
| Mandatory field that describes the source metadata for the event. This field contains information that you can use to compare this event with other events, with regard to the origin of the events, the order in which the events occurred, and whether events were part of the same transaction. The source metadata includes:
|
update events
The value of a change event for an update in the sample customers
table has the same schema as a create event for that table. Likewise, the event value’s payload has the same structure. However, the event value payload contains different values in an update event. Here is an example of a change event value in an event that the connector generates for an update in the customers
table:
{ "schema": { ... }, "payload": { "before": { 1 "id": 1004, "first_name": "Anne", "last_name": "Kretchmar", "email": "annek@noanswer.org" }, "after": { 2 "id": 1004, "first_name": "Anne Marie", "last_name": "Kretchmar", "email": "annek@noanswer.org" }, "source": { 3 "version": "2.7.3.Final", "name": "mariadb-server-1", "connector": "mariadb", "name": "mariadb-server-1", "ts_ms": 1465581029100, "ts_ms": 1465581029100000, "ts_ms": 1465581029100000000, "snapshot": false, "db": "inventory", "table": "customers", "server_id": 223344, "gtid": null, "file": "mariadb-bin.000003", "pos": 484, "row": 0, "thread": 7, "query": "UPDATE customers SET first_name='Anne Marie' WHERE id=1004" }, "op": "u", 4 "ts_ms": 1465581029523, 5 "ts_ms": 1465581029523758, 6 "ts_ms": 1465581029523758914 7 } }
Item | Field name | Description |
---|---|---|
1 |
|
An optional field that specifies the state of the row before the event occurred. In an update event value, the |
2 |
|
An optional field that specifies the state of the row after the event occurred. You can compare the |
3 |
|
Mandatory field that describes the source metadata for the event. The
|
4 |
|
Mandatory string that describes the type of operation. In an update event value, the |
5 |
|
Optional field that displays the time at which the connector processed the event. The time is based on the system clock in the JVM running the Kafka Connect task. |
6 |
| Optional field that displays the time at which the connector processed the event, in microseconds. The time is based on the system clock in the JVM running the Kafka Connect task. |
7 |
| Optional field that displays the time at which the connector processed the event, in nanoseconds. The time is based on the system clock in the JVM running the Kafka Connect task. |
Updating the columns for a row’s primary/unique key changes the value of the row’s key. When a key changes, Debezium outputs three events: a DELETE
event and a tombstone event with the old key for the row, followed by an event with the new key for the row. Details are in the next section.
Primary key updates
An UPDATE
operation that changes a row’s primary key field(s) is known as a primary key change. For a primary key change, in place of an UPDATE
event record, the connector emits a DELETE
event record for the old key and a CREATE
event record for the new (updated) key. These events have the usual structure and content, and in addition, each one has a message header related to the primary key change:
-
The
DELETE
event record has__debezium.newkey
as a message header. The value of this header is the new primary key for the updated row. -
The
CREATE
event record has__debezium.oldkey
as a message header. The value of this header is the previous (old) primary key that the updated row had.
delete events
The value in a delete change event has the same schema
portion as create and update events for the same table. The payload
portion in a delete event for the sample customers
table looks like this:
{ "schema": { ... }, "payload": { "before": { 1 "id": 1004, "first_name": "Anne Marie", "last_name": "Kretchmar", "email": "annek@noanswer.org" }, "after": null, 2 "source": { 3 "version": "2.7.3.Final", "connector": "mariadb", "name": "mariadb-server-1", "ts_ms": 1465581902300, "ts_us": 1465581902300000, "ts_ns": 1465581902300000000, "snapshot": false, "db": "inventory", "table": "customers", "server_id": 223344, "gtid": null, "file": "mariadb-bin.000003", "pos": 805, "row": 0, "thread": 7, "query": "DELETE FROM customers WHERE id=1004" }, "op": "d", 4 "ts_ms": 1465581902461, 5 "ts_us": 1465581902461842, 6 "ts_ns": 1465581902461842579 7 } }
Item | Field name | Description |
---|---|---|
1 | before | Optional field that specifies the state of the row before the event occurred. In a delete event value, the before field contains the values that were in the row before it was deleted with the database commit. |
2 | after | Optional field that specifies the state of the row after the event occurred. In a delete event value, the after field is null, signifying that the row no longer exists. |
3 | source | Mandatory field that describes the source metadata for the event. In a delete event value, the source field structure is the same as for create and update events for the same table, but values such as the binlog position and timestamp reflect the delete operation. |
4 | op | Mandatory string that describes the type of operation. The op field value is d, signifying that this row was deleted. |
5 | ts_ms | Optional field that displays the time at which the connector processed the event. The time is based on the system clock in the JVM running the Kafka Connect task. |
6 | ts_us | Optional field that displays the time at which the connector processed the event, in microseconds. The time is based on the system clock in the JVM running the Kafka Connect task. |
7 | ts_ns | Optional field that displays the time at which the connector processed the event, in nanoseconds. The time is based on the system clock in the JVM running the Kafka Connect task. |
A delete change event record provides a consumer with the information it needs to process the removal of this row. The old values are included because some consumers might require them in order to properly handle the removal.
MariaDB connector events are designed to work with Kafka log compaction. Log compaction enables removal of some older messages as long as at least the most recent message for every key is kept. This lets Kafka reclaim storage space while ensuring that the topic contains a complete data set and can be used for reloading key-based state.
Tombstone events
When a row is deleted, the delete event value still works with log compaction, because Kafka can remove all earlier messages that have that same key. However, for Kafka to remove all messages that have that same key, the message value must be null
. To make this possible, after the Debezium MariaDB connector emits a delete event, the connector emits a special tombstone event that has the same key but a null
value.
Truncate events
A truncate change event signals that a table has been truncated. The message key of a truncate event is null
. The message value resembles the following example:
{ "schema": { ... }, "payload": { "source": { 1 "version": "2.7.3.Final", "name": "mariadb-server-1", "connector": "mariadb", "name": "mariadb-server-1", "ts_ms": 1465581029100, "ts_us": 1465581029100000, "ts_ns": 1465581029100000000, "snapshot": false, "db": "inventory", "table": "customers", "server_id": 223344, "gtid": null, "file": "mariadb-bin.000003", "pos": 484, "row": 0, "thread": 7, "query": "UPDATE customers SET first_name='Anne Marie' WHERE id=1004" }, "op": "t", 2 "ts_ms": 1465581029523, 3 "ts_us": 1465581029523468, 4 "ts_ns": 1465581029523468471 5 } }
Item | Field name | Description |
---|---|---|
1 | source | Mandatory field that describes the source metadata for the event. In a truncate event value, the source field structure is the same as for create, update, and delete events for the same table and provides the same metadata, such as the binlog position. |
2 | op | Mandatory string that describes the type of operation. The op field value is t, signifying that this table was truncated. |
3 | ts_ms | Optional field that displays the time at which the connector processed the event. The time is based on the system clock in the JVM running the Kafka Connect task. |
4 | ts_us | Optional field that displays the time at which the connector processed the event, in microseconds. The time is based on the system clock in the JVM running the Kafka Connect task. |
5 | ts_ns | Optional field that displays the time at which the connector processed the event, in nanoseconds. The time is based on the system clock in the JVM running the Kafka Connect task. |
If a single TRUNCATE statement applies to multiple tables, the connector emits one truncate change event record for each truncated table.
A truncate event represents a change that applies to an entire table, and it does not have a message key. In topics that span multiple partitions, there is therefore no ordering guarantee between the change events for a table (create, update, and so on) and the truncate events for that table. For example, when a consumer reads events from different partitions, it might read an update event for a table from one partition only after it reads a truncate event for the same table from a second partition.
2.2.3. How Debezium MariaDB connectors map data types
The Debezium MariaDB connector represents changes to rows with events that are structured like the table in which the row exists. The event contains a field for each column value. The MariaDB data type of that column dictates how Debezium represents the value in the event.
Columns that store strings are defined in MariaDB with a character set and collation. The MariaDB connector uses the column’s character set when reading the binary representation of the column values in the binlog events.
The connector can map MariaDB data types to both literal and semantic types.
- Literal type: how the value is represented using Kafka Connect schema types.
- Semantic type: how the Kafka Connect schema captures the meaning of the field (schema name).
If the default data type conversions do not meet your needs, you can create a custom converter for the connector.
Details are in the following sections:
Basic types
The following table shows how the connector maps basic MariaDB data types.
MariaDB type | Literal type | Semantic type |
---|---|---|
|
| n/a |
|
| n/a |
|
|
|
|
| n/a |
|
| n/a |
|
| n/a |
|
| n/a |
|
| n/a |
|
| n/a |
|
|
The precision is used only to determine storage size. A precision |
|
| n/a |
|
| n/a |
|
| n/a |
|
|
n/a |
|
|
n/a |
|
|
n/a |
|
| n/a |
|
|
n/a |
|
|
n/a |
|
|
n/a |
|
| n/a |
|
|
n/a |
|
|
n/a |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Temporal types
Excluding the TIMESTAMP
data type, MariaDB temporal types depend on the value of the time.precision.mode
connector configuration property. For TIMESTAMP
columns whose default value is specified as CURRENT_TIMESTAMP
or NOW
, the value 1970-01-01 00:00:00
is used as the default value in the Kafka Connect schema.
MariaDB allows zero-values for DATE
, DATETIME
, and TIMESTAMP
columns because zero-values are sometimes preferred over null values. The MariaDB connector represents zero-values as null values when the column definition allows null values, or as the epoch day when the column does not allow null values.
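As a minimal sketch (the table name and column definitions are assumptions made for illustration, and the INSERT requires a sql_mode that permits zero dates), the connector would emit the epoch for the NOT NULL column and null for the nullable column:
-- Illustrative only: zero-value handling depends on whether the column allows nulls.
CREATE TABLE zero_date_example (
  id INT PRIMARY KEY,
  created DATE NOT NULL,   -- a zero-value is emitted as the epoch day (1970-01-01)
  updated DATETIME NULL    -- a zero-value is emitted as null
);
INSERT INTO zero_date_example VALUES (1, '0000-00-00', '0000-00-00 00:00:00');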
Temporal values without time zones
The DATETIME
type represents a local date and time such as "2018-01-13 09:48:27". As you can see, there is no time zone information. Such columns are converted into epoch milliseconds or microseconds based on the column’s precision by using UTC. The TIMESTAMP
type represents a timestamp without time zone information. It is converted by MariaDB from the server (or session’s) current time zone into UTC when writing and from UTC into the server (or session’s) current time zone when reading back the value. For example:
-
DATETIME
with a value of2018-06-20 06:37:03
becomes1529476623000
. -
TIMESTAMP
with a value of2018-06-20 06:37:03
becomes2018-06-20T13:37:03Z
.
Such columns are converted into an equivalent io.debezium.time.ZonedTimestamp
in UTC based on the server (or session’s) current time zone. The time zone will be queried from the server by default.
The time zone of the JVM running Kafka Connect and Debezium does not affect these conversions.
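The values in the preceding list can be reproduced with a sketch like the following; the table name is hypothetical, and the example assumes a server (or session) time zone of UTC-7 at the time of the write, which is consistent with the TIMESTAMP value shown above:
-- Assumes the server or session time zone is UTC-7 when the row is written.
CREATE TABLE temporal_example (
  id INT PRIMARY KEY,
  dt DATETIME,    -- emitted as 1529476623000 (epoch milliseconds, no time zone information)
  ts TIMESTAMP    -- emitted as the io.debezium.time.ZonedTimestamp value 2018-06-20T13:37:03Z
);
INSERT INTO temporal_example VALUES (1, '2018-06-20 06:37:03', '2018-06-20 06:37:03');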
More details about properties related to temporal values are in the documentation for MariaDB connector configuration properties.
- time.precision.mode=adaptive_time_microseconds (default)
The MariaDB connector determines the literal type and semantic type based on the column’s data type definition so that events represent exactly the values in the database. All time fields are in microseconds. Only positive TIME field values in the range of 00:00:00.000000 to 23:59:59.999999 can be captured correctly.
Table 2.40. Mappings when time.precision.mode=adaptive_time_microseconds
MariaDB type | Literal type | Semantic type |
---|---|---|
DATE | INT32 | io.debezium.time.Date Represents the number of days since the epoch. |
TIME[(M)] | INT64 | io.debezium.time.MicroTime Represents the time value in microseconds and does not include time zone information. MariaDB allows M to be in the range of 0-6. |
DATETIME, DATETIME(0), DATETIME(1), DATETIME(2), DATETIME(3) | INT64 | io.debezium.time.Timestamp Represents the number of milliseconds past the epoch and does not include time zone information. |
DATETIME(4), DATETIME(5), DATETIME(6) | INT64 | io.debezium.time.MicroTimestamp Represents the number of microseconds past the epoch and does not include time zone information. |
- time.precision.mode=connect
The MariaDB connector uses defined Kafka Connect logical types. This approach is less precise than the default approach and the events could be less precise if the database column has a fractional second precision value of greater than 3. Values in only the range of 00:00:00.000 to 23:59:59.999 can be handled. Set time.precision.mode=connect only if you can ensure that the TIME values in your tables never exceed the supported ranges. The connect setting is expected to be removed in a future version of Debezium.
Table 2.41. Mappings when time.precision.mode=connect
MariaDB type | Literal type | Semantic type |
---|---|---|
DATE | INT32 | org.apache.kafka.connect.data.Date Represents the number of days since the epoch. |
TIME[(M)] | INT64 | org.apache.kafka.connect.data.Time Represents the time value in microseconds since midnight and does not include time zone information. |
DATETIME[(M)] | INT64 | org.apache.kafka.connect.data.Timestamp Represents the number of milliseconds since the epoch, and does not include time zone information. |
Decimal types
Debezium connectors handle decimals according to the setting of the decimal.handling.mode
connector configuration property.
- decimal.handling.mode=precise
Table 2.42. Mappings when decimal.handling.mode=precise
MariaDB type | Literal type | Semantic type |
---|---|---|
NUMERIC[(M[,D])] | BYTES | org.apache.kafka.connect.data.Decimal The scale schema parameter contains an integer that represents how many digits the decimal point shifted. |
DECIMAL[(M[,D])] | BYTES | org.apache.kafka.connect.data.Decimal The scale schema parameter contains an integer that represents how many digits the decimal point shifted. |
- decimal.handling.mode=double
Table 2.43. Mappings when decimal.handling.mode=double
MariaDB type | Literal type | Semantic type |
---|---|---|
NUMERIC[(M[,D])] | FLOAT64 | n/a |
DECIMAL[(M[,D])] | FLOAT64 | n/a |
- decimal.handling.mode=string
Table 2.44. Mappings when decimal.handling.mode=string
MariaDB type | Literal type | Semantic type |
---|---|---|
NUMERIC[(M[,D])] | STRING | n/a |
DECIMAL[(M[,D])] | STRING | n/a |
Boolean values
MariaDB handles the BOOLEAN value in a specific way internally: the BOOLEAN column is mapped to the TINYINT(1) data type. When a table is created during streaming, Debezium receives the original DDL and applies the proper BOOLEAN mapping. During snapshots, Debezium executes SHOW CREATE TABLE to obtain table definitions, which return TINYINT(1) for both BOOLEAN and TINYINT(1) columns. Debezium then has no way to obtain the original type mapping, so it maps the column to TINYINT(1).
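The following sketch, which uses a hypothetical table, shows why the original intent is lost during a snapshot:
-- The column is declared as BOOLEAN ...
CREATE TABLE boolean_example (
  id INT PRIMARY KEY,
  active BOOLEAN
);
-- ... but SHOW CREATE TABLE reports it as tinyint(1), which is all that the
-- connector can see when it reads table definitions during a snapshot.
SHOW CREATE TABLE boolean_example;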
To enable you to convert source columns to Boolean data types, Debezium provides a TinyIntOneToBooleanConverter
custom converter that you can use in one of the following ways:
-
Map all TINYINT(1) or TINYINT(1) UNSIGNED columns to BOOLEAN types.
-
Enumerate a subset of columns by using a comma-separated list of regular expressions.
To use this type of conversion, you must set the converters configuration property with the selector parameter, as shown in the following example:
converters=boolean
boolean.type=io.debezium.connector.binlog.converters.TinyIntOneToBooleanConverter
boolean.selector=db1.table1.*, db1.table2.column1
NOTE: In some cases, the database might not show the length of tinyint unsigned when the snapshot executes SHOW CREATE TABLE, which means that this converter does not work. The length.checker option can resolve this issue; its default value is true. In that case, set length.checker to false and use the selector property to specify the columns to convert, instead of converting all columns based on type, as shown in the following example:
converters=boolean
boolean.type=io.debezium.connector.binlog.converters.TinyIntOneToBooleanConverter
boolean.length.checker=false
boolean.selector=db1.table1.*, db1.table2.column1
Spatial types
Currently, the Debezium MariaDB connector supports the following spatial data types.
MariaDB type | Literal type | Semantic type |
---|---|---|
|
|
|
2.2.4. Custom converters for mapping MariaDB data to alternative data types
By default, the Debezium MariaDB connector provides several CustomConverter
implementations for MariaDB data types. These custom converters provide alternative mappings for specific data types based on the connector configuration. To add a CustomConverter
to the connector, follow the instructions in the Custom Converters documentation.
TINYINT(1)
to Boolean
By default, during a connector snapshot, the Debezium MariaDB connector obtains column types from the JDBC driver, which assigns the TINYINT(1)
type to BOOLEAN
columns. Debezium then uses these JDBC column types to define the schema for the snapshot events. After the connector transitions from the snapshot to the streaming phase, the change event schema that results from the default mapping can lead to inconsistent mappings for BOOLEAN
columns. To help ensure that MariaDB emits BOOLEAN
columns uniformly, you can apply the custom TinyIntOneToBooleanConverter
, as shown in the following configuration example.
Example: TinyIntOneToBooleanConverter
configuration
converters=tinyint-one-to-boolean tinyint-one-to-boolean.type=io.debezium.connector.binlog.converters.TinyIntOneToBooleanConverter tinyint-one-to-boolean.selector=.*.MY_TABLE.DATA tinyint-one-to-boolean.length.checker=false
In the preceding example, the selector
and length.checker
properties are optional. By default, the converter checks that TINYINT
data types conform to a length of 1
. If you set length.checker
to false
, the converter does not explicitly confirm that the TINYINT
data type conforms to a length of 1
. The selector
designates the tables or columns to convert, based on the supplied regular expression. If you omit the selector
property, the converter maps all TINYINT
columns to logical BOOL
field types. If you do not configure a selector
option, and you want to map TINYINT
columns to TINYINT(1)
, omit the length.checker
property, or set its value to true
.
JDBC sink data types
If you integrate the Debezium JDBC sink connector with a Debezium MariaDB source connector, the MariaDB connector emits some column attributes differently during the snapshot and streaming phases. For the JDBC sink connector to consistently consume changes from both the snapshot and streaming phase, you must include the JdbcSinkDataTypesConverter
converter as part of the MariaDB source connector configuration, as shown in the following example:
Example: JdbcSinkDataTypesConverter
configuration
converters=jdbc-sink jdbc-sink.type=io.debezium.connector.binlog.converters.JdbcSinkDataTypesConverter jdbc-sink.selector.boolean=.*.MY_TABLE.BOOL_COL jdbc-sink.selector.real=.*.MY_TABLE.REAL_COL jdbc-sink.selector.string=.*.MY_TABLE.STRING_COL jdbc-sink.treat.real.as.double=true
In the preceding example, the selector.*
and treat.real.as.double
configuration properties are optional.
The selector.*
properties specify comma-separated lists of regular expressions that specify the tables and columns to which the converter applies. By default, the converter applies the following rules to all Boolean, real, and string-based column data types, across all tables:
-
BOOLEAN
data types are always emitted asINT16
logical types, with1
representingtrue
and0
representingfalse
-
REAL
data types are always emitted asFLOAT64
logical types. -
String-based columns always include the
__debezium.source.column.character_set
schema parameter that contains the column’s character set.
For each data type, you can configure a selector rule to override the default scope and apply the selector to specific tables and columns only. For example, to set the scope of the Boolean converter, add the following rule to the connector configuration, as in the preceding example: converters.jdbc-sink.selector.boolean=.*.MY_TABLE.BOOL_COL
2.2.5. Setting up MariaDB to run a Debezium connector
Some MariaDB setup tasks are required before you can install and run a Debezium connector.
Details are in the following sections:
- Section 2.2.5.1, “Creating a MariaDB user for a Debezium connector”
- Section 2.2.5.2, “Enabling the MariaDB binlog for Debezium”
- Section 2.2.5.3, “Enabling MariaDB Global Transaction Identifiers for Debezium”
- Section 2.2.5.4, “Configuring MariaDB session timeouts for Debezium”
- Section 2.2.5.5, “Enabling query log events for Debezium MariaDB connectors”
2.2.5.1. Creating a MariaDB user for a Debezium connector
A Debezium MariaDB connector requires a MariaDB user account. This MariaDB user must have appropriate permissions on all databases for which the Debezium MariaDB connector captures changes.
Prerequisites
- A MariaDB server.
- Basic knowledge of SQL commands.
Procedure
Create the MariaDB user:
mariadb> CREATE USER 'user'@'localhost' IDENTIFIED BY 'password';
Grant the required permissions to the user:
mariadb> GRANT SELECT, RELOAD, SHOW DATABASES, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'user' IDENTIFIED BY 'password';
For a description of the required permissions, see Table 2.46, “Descriptions of user permissions”.
Important: If you use a hosted option such as Amazon RDS or Amazon Aurora that does not allow a global read lock, table-level locks are used to create the consistent snapshot. In this case, you also need to grant LOCK TABLES permissions to the user that you create. See snapshots for more details.
Finalize the user’s permissions:
mariadb> FLUSH PRIVILEGES;
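Optionally, you can confirm that the account has the expected permissions with a statement such as the following; this check is a suggestion rather than part of the required procedure, and the user and host are placeholders that you adjust to your environment:
mariadb> SHOW GRANTS FOR 'user'@'localhost';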
Table 2.46. Descriptions of user permissions Keyword Description SELECT
Enables the connector to select rows from tables in databases. This is used only when performing a snapshot.
RELOAD
Enables the connector to use the
FLUSH
statement to clear or reload internal caches, flush tables, or acquire locks. This is used only when performing a snapshot.SHOW DATABASES
Enables the connector to see database names by issuing the
SHOW DATABASES
statement. This is used only when performing a snapshot.REPLICATION SLAVE
Enables the connector to connect to and read the MariaDB server binlog.
REPLICATION CLIENT
Enables the connector to use the following statements:
-
SHOW MASTER STATUS
-
SHOW SLAVE STATUS
-
SHOW BINARY LOGS
The connector always requires this.
ON
Identifies the database to which the permissions apply.
TO 'user'
Specifies the user to grant the permissions to.
IDENTIFIED BY 'password'
Specifies the user’s MariaDB password.
-
2.2.5.2. Enabling the MariaDB binlog for Debezium
You must enable binary logging for MariaDB replication. The binary logs record transaction updates in a way that enables replicas to propagate those changes.
Prerequisites
- A MariaDB server.
- Appropriate MariaDB user privileges.
Procedure
-
Check whether the log-bin option is enabled.
-
If the binlog is OFF, add the properties in the following table to the configuration file for the MariaDB server:
server-id = 223344 # The corresponding system variable is server_id, e.g. SELECT variable_value FROM information_schema.global_variables WHERE variable_name='server_id';
log_bin = mariadb-bin
binlog_format = ROW
binlog_row_image = FULL
binlog_expire_logs_seconds = 864000
- Confirm your changes by checking the binlog status once more.
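As an illustrative sketch, you can check the binlog status with a query such as the following; after you apply the configuration and restart the server, the reported value should be ON:
mariadb> SHOW GLOBAL VARIABLES LIKE 'log_bin';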
If you run MariaDB on Amazon RDS, you must enable automated backups for your database instance for binary logging to occur. If the database instance is not configured to perform automated backups, the binlog is disabled, even if you apply the settings described in the previous steps.
Table 2.47. Descriptions of MariaDB binlog configuration properties Property Description server-id
The value for the
server-id
must be unique for each server and replication client in the MariaDB cluster.log_bin
The value of
log_bin
is the base name of the sequence of binlog files.binlog_format
The
binlog_format
must be set toROW
orrow
.binlog_row_image
The
binlog_row_image
must be set toFULL
orfull
.binlog_expire_logs_seconds
The
binlog_expire_logs_seconds
corresponds to deprecated system variableexpire_logs_days
. This is the number of seconds for automatic binlog file removal. The default value is2592000
, which equals 30 days. Set the value to match the needs of your environment. For more information, see MariaDB purges binlog files.
2.2.5.3. Enabling MariaDB Global Transaction Identifiers for Debezium
Global transaction identifiers (GTIDs) uniquely identify transactions that occur on a server within a cluster. Although not required for a Debezium MariaDB connector, using GTIDs simplifies replication and enables you to more easily confirm if primary and replica servers are consistent.
For MariaDB, GTIDs are enabled by default and no additional configuration is necessary.
2.2.5.4. Configuring MariaDB session timeouts for Debezium
When an initial consistent snapshot is made for large databases, your established connection could timeout while the tables are being read. You can prevent this behavior by configuring interactive_timeout
and wait_timeout
in your MariaDB configuration file.
Prerequisites
- A MariaDB server.
- Basic knowledge of SQL commands.
- Access to the MariaDB configuration file.
Procedure
Configure
interactive_timeout
:mariadb> interactive_timeout=<duration-in-seconds>
Configure
wait_timeout
:mariadb> wait_timeout=<duration-in-seconds>
Table 2.48. Descriptions of MariaDB session timeout options Option Description interactive_timeout
The number of seconds the server waits for activity on an interactive connection before closing it. For more information, see the MariaDB documentation.
wait_timeout
The number of seconds that the server waits for activity on a non-interactive connection before closing it.
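Because interactive_timeout and wait_timeout are dynamic system variables, you can also set them for the running server without a restart. The following statements are an illustrative sketch; the 6000-second value is a placeholder, and settings applied this way do not persist across a server restart:
mariadb> SET GLOBAL interactive_timeout=6000;
mariadb> SET GLOBAL wait_timeout=6000;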
2.2.5.5. Enabling query log events for Debezium MariaDB connectors
You might want to see the original SQL
statement for each binlog event. Enabling the binlog_annotate_row_events
option in the MariaDB configuration allows you to do this.
Prerequisites
- A MariaDB server.
- Basic knowledge of SQL commands.
- Access to the MariaDB configuration file.
Procedure
Enable
binlog_annotate_row_events
in MariaDB:mariadb> binlog_annotate_row_events=ON
binlog_annotate_row_events is set to a value that enables or disables support for including the original SQL statement in the binlog entry.
- ON = enabled
- OFF = disabled
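binlog_annotate_row_events is also a dynamic system variable, so as an alternative sketch you can enable it at runtime and verify the setting; a change made this way applies to new connections and does not persist across a server restart:
mariadb> SET GLOBAL binlog_annotate_row_events=ON;
mariadb> SHOW GLOBAL VARIABLES LIKE 'binlog_annotate_row_events';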
2.2.5.6. Validating binlog row value options for Debezium MariaDB connectors
Verify the setting of the binlog_row_value_options
variable in the database. To enable the connector to consume UPDATE events, this variable must be set to a value other than PARTIAL_JSON
.
Prerequisites
- A MariaDB server.
- Basic knowledge of SQL commands.
- Access to the MariaDB configuration file.
Procedure
Check the current value of the variable:
mariadb> show global variables where variable_name = 'binlog_row_value_options';
Result
+--------------------------+-------+
| Variable_name            | Value |
+--------------------------+-------+
| binlog_row_value_options |       |
+--------------------------+-------+
If the value of the variable is set to
PARTIAL_JSON
, run the following command to unset it:mariadb> set @@global.binlog_row_value_options="" ;
2.2.6. Deployment of Debezium MariaDB connectors
You can use either of the following methods to deploy a Debezium MariaDB connector:
Additional resources
2.2.6.1. MariaDB connector deployment using Streams for Apache Kafka
Beginning with Debezium 1.7, the preferred method for deploying a Debezium connector is to use Streams for Apache Kafka to build a Kafka Connect container image that includes the connector plug-in.
During the deployment process, you create and use the following custom resources (CRs):
-
A KafkaConnect CR that defines your Kafka Connect instance and includes information about the connector artifacts that need to be included in the image.
-
A KafkaConnector CR that provides details that include the information that the connector uses to access the source database. After Streams for Apache Kafka starts the Kafka Connect pod, you start the connector by applying the KafkaConnector CR.
In the build specification for the Kafka Connect image, you can specify the connectors that are available to deploy. For each connector plug-in, you can also specify other components that you want to make available for deployment. For example, you can add Apicurio Registry artifacts, or the Debezium scripting component. When Streams for Apache Kafka builds the Kafka Connect image, it downloads the specified artifacts, and incorporates them into the image.
The spec.build.output
parameter in the KafkaConnect
CR specifies where to store the resulting Kafka Connect container image. Container images can be stored in a Docker registry, or in an OpenShift ImageStream. To store images in an ImageStream, you must create the ImageStream before you deploy Kafka Connect. ImageStreams are not created automatically.
If you use a KafkaConnect
resource to create a cluster, afterwards you cannot use the Kafka Connect REST API to create or update connectors. You can still use the REST API to retrieve information.
Additional resources
- Configuring Kafka Connect in Deploying and Managing Streams for Apache Kafka on OpenShift.
- Building a new container image automatically in Deploying and Managing Streams for Apache Kafka on OpenShift.
2.2.6.2. Using Streams for Apache Kafka to deploy a Debezium MariaDB connector
With earlier versions of Streams for Apache Kafka, to deploy Debezium connectors on OpenShift, you were required to first build a Kafka Connect image for the connector. The current preferred method for deploying connectors on OpenShift is to use a build configuration in Streams for Apache Kafka to automatically build a Kafka Connect container image that includes the Debezium connector plug-ins that you want to use.
During the build process, the Streams for Apache Kafka Operator transforms input parameters in a KafkaConnect
custom resource, including Debezium connector definitions, into a Kafka Connect container image. The build downloads the necessary artifacts from the Red Hat Maven repository or another configured HTTP server.
The newly created container is pushed to the container registry that is specified in .spec.build.output
, and is used to deploy a Kafka Connect cluster. After Streams for Apache Kafka builds the Kafka Connect image, you create KafkaConnector
custom resources to start the connectors that are included in the build.
Prerequisites
- You have access to an OpenShift cluster on which the cluster Operator is installed.
- The Streams for Apache Kafka Operator is running.
- An Apache Kafka cluster is deployed as documented in Deploying and Managing Streams for Apache Kafka on OpenShift.
- Kafka Connect is deployed on Streams for Apache Kafka
- You have a Red Hat build of Debezium license.
-
The OpenShift
oc
CLI client is installed or you have access to the OpenShift Container Platform web console. Depending on how you intend to store the Kafka Connect build image, you need registry permissions or you must create an ImageStream resource:
- To store the build image in an image registry, such as Red Hat Quay.io or Docker Hub
- An account and permissions to create and manage images in the registry.
- To store the build image as a native OpenShift ImageStream
- An ImageStream resource is deployed to the cluster for storing new container images. You must explicitly create an ImageStream for the cluster. ImageStreams are not available by default. For more information about ImageStreams, see Managing image streams in the OpenShift Container Platform documentation.
Procedure
- Log in to the OpenShift cluster.
Create a Debezium
KafkaConnect
custom resource (CR) for the connector, or modify an existing one. For example, create aKafkaConnect
CR with the namedbz-connect.yaml
that specifies themetadata.annotations
andspec.build
properties. The following example shows an excerpt from adbz-connect.yaml
file that describes aKafkaConnect
custom resource.
Example 2.11. A
dbz-connect.yaml
file that defines aKafkaConnect
custom resource that includes a Debezium connectorIn the example that follows, the custom resource is configured to download the following artifacts:
- The Debezium MariaDB connector archive.
- The Red Hat build of Apicurio Registry archive. The Apicurio Registry is an optional component. Add the Apicurio Registry component only if you intend to use Avro serialization with the connector.
- The Debezium scripting SMT archive and the associated scripting engine that you want to use with the Debezium connector. The SMT archive and scripting language dependencies are optional components. Add these components only if you intend to use the Debezium content-based routing SMT or filter SMT.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: debezium-kafka-connect-cluster
  annotations:
    strimzi.io/use-connector-resources: "true" 1
spec:
  version: 3.6.0
  build: 2
    output: 3
      type: imagestream 4
      image: debezium-streams-connect:latest
    plugins: 5
      - name: debezium-connector-mariadb
        artifacts:
          - type: zip 6
            url: https://maven.repository.redhat.com/ga/io/debezium/debezium-connector-mariadb/2.7.3.Final-redhat-00001/debezium-connector-mariadb-2.7.3.Final-redhat-00001-plugin.zip 7
          - type: zip
            url: https://maven.repository.redhat.com/ga/io/apicurio/apicurio-registry-distro-connect-converter/2.4.4.Final-redhat-<build-number>/apicurio-registry-distro-connect-converter-2.4.4.Final-redhat-<build-number>.zip 8
          - type: zip
            url: https://maven.repository.redhat.com/ga/io/debezium/debezium-scripting/2.7.3.Final-redhat-00001/debezium-scripting-2.7.3.Final-redhat-00001.zip 9
          - type: jar
            url: https://repo1.maven.org/maven2/org/apache/groovy/groovy/3.0.11/groovy-3.0.11.jar 10
          - type: jar
            url: https://repo1.maven.org/maven2/org/apache/groovy/groovy-jsr223/3.0.11/groovy-jsr223-3.0.11.jar
          - type: jar
            url: https://repo1.maven.org/maven2/org/apache/groovy/groovy-json/3.0.11/groovy-json-3.0.11.jar
  bootstrapServers: debezium-kafka-cluster-kafka-bootstrap:9093
  ...
Table 2.49. Descriptions of Kafka Connect configuration settings Item Description 1
Sets the
strimzi.io/use-connector-resources
annotation to"true"
to enable the Cluster Operator to useKafkaConnector
resources to configure connectors in this Kafka Connect cluster.2
The
spec.build
configuration specifies where to store the build image and lists the plug-ins to include in the image, along with the location of the plug-in artifacts.3
The
build.output
specifies the registry in which the newly built image is stored.4
Specifies the name and image name for the image output. Valid values for
output.type
aredocker
to push into a container registry such as Docker Hub or Quay, orimagestream
to push the image to an internal OpenShift ImageStream. To use an ImageStream, an ImageStream resource must be deployed to the cluster. For more information about specifying thebuild.output
in the KafkaConnect configuration, see the Streams for Apache Kafka Build schema reference in Deploying and Managing Streams for Apache Kafka on OpenShift.5
The
plugins
configuration lists all of the connectors that you want to include in the Kafka Connect image. For each entry in the list, specify a plug-inname
, and information about the artifacts that are required to build the connector. Optionally, for each connector plug-in, you can include other components that you want to be available for use with the connector. For example, you can add Service Registry artifacts, or the Debezium scripting component.6
The value of
artifacts.type
specifies the file type of the artifact specified in theartifacts.url
. Valid types arezip
,tgz
, orjar
. Debezium connector archives are provided in.zip
file format. Thetype
value must match the type of the file that is referenced in theurl
field.7
The value of
artifacts.url
specifies the address of an HTTP server, such as a Maven repository, that stores the file for the connector artifact. Debezium connector artifacts are available in the Red Hat Maven repository. The OpenShift cluster must have access to the specified server.8
(Optional) Specifies the artifact
type
andurl
for downloading the Apicurio Registry component. Include the Apicurio Registry artifact, only if you want the connector to use Apache Avro to serialize event keys and values with the Red Hat build of Apicurio Registry, instead of using the default JSON converter.9
(Optional) Specifies the artifact
type
andurl
for the Debezium scripting SMT archive to use with the Debezium connector. Include the scripting SMT only if you intend to use the Debezium content-based routing SMT or filter SMT. To use the scripting SMT, you must also deploy a JSR 223-compliant scripting implementation, such as groovy.10
(Optional) Specifies the artifact
type
andurl
for the JAR files of a JSR 223-compliant scripting implementation, which is required by the Debezium scripting SMT.ImportantIf you use Streams for Apache Kafka to incorporate the connector plug-in into your Kafka Connect image, for each of the required scripting language components
artifacts.url
must specify the location of a JAR file, and the value ofartifacts.type
must also be set tojar
. Invalid values cause the connector to fail at runtime. To enable use of the Apache Groovy language with the scripting SMT, the custom resource in the example retrieves JAR files for the following libraries:
-
groovy
-
groovy-jsr223
(scripting agent) -
groovy-json
(module for parsing JSON strings)
As an alternative, the Debezium scripting SMT also supports the use of the JSR 223 implementation of GraalVM JavaScript.
Apply the
KafkaConnect
build specification to the OpenShift cluster by entering the following command:oc create -f dbz-connect.yaml
Based on the configuration specified in the custom resource, the Streams Operator prepares a Kafka Connect image to deploy.
After the build completes, the Operator pushes the image to the specified registry or ImageStream, and starts the Kafka Connect cluster. The connector artifacts that you listed in the configuration are available in the cluster.Create a
KafkaConnector
resource to define an instance of each connector that you want to deploy.
For example, create the followingKafkaConnector
CR, and save it asmariadb-inventory-connector.yaml
Example 2.12.
mariadb-inventory-connector.yaml
file that defines theKafkaConnector
custom resource for a Debezium connectorapiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: labels: strimzi.io/cluster: debezium-kafka-connect-cluster name: inventory-connector-mariadb 1 spec: class: io.debezium.connector.mariadb.MariaDbConnector 2 tasksMax: 1 3 config: 4 schema.history.internal.kafka.bootstrap.servers: debezium-kafka-cluster-kafka-bootstrap.debezium.svc.cluster.local:9092 schema.history.internal.kafka.topic: schema-changes.inventory database.hostname: mariadb.debezium-mariadb.svc.cluster.local 5 database.port: 3306 6 database.user: debezium 7 database.password: dbz 8 database.server.id: 184054 9 topic.prefix: inventory-connector-mariadb 10 table.include.list: inventory.* 11 ...
Table 2.50. Descriptions of connector configuration settings Item Description 1
The name of the connector to register with the Kafka Connect cluster.
2
The name of the connector class.
3
The number of tasks that can operate concurrently.
4
The connector’s configuration.
5
The address of the host database instance.
6
The port number of the database instance.
7
The name of the account that Debezium uses to connect to the database.
8
The password that Debezium uses to connect to the database user account.
9
Unique numeric ID of the connector.
10
The topic prefix for the database instance or cluster.
The specified name must be formed only from alphanumeric characters or underscores.
Because the topic prefix is used as the prefix for any Kafka topics that receive change events from this connector, the name must be unique among the connectors in the cluster.
This namespace is also used in the names of related Kafka Connect schemas, and the namespaces of a corresponding Avro schema if you integrate the connector with the Avro connector.11
The list of tables from which the connector captures change events.
Create the connector resource by running the following command:
oc create -n <namespace> -f <kafkaConnector>.yaml
For example,
oc create -n debezium -f mariadb-inventory-connector.yaml
The connector is registered to the Kafka Connect cluster and starts to run against the database that is specified by
spec.config.database.dbname
in theKafkaConnector
CR. After the connector pod is ready, Debezium is running.
You are now ready to verify the Debezium MariaDB deployment.
2.2.6.3. Deploying Debezium MariaDB connectors by building a custom Kafka Connect container image from a Dockerfile
To deploy a Debezium MariaDB connector, you must build a custom Kafka Connect container image that contains the Debezium connector archive, and then push this container image to a container registry. You then need to create the following custom resources (CRs):
-
A
KafkaConnect
CR that defines your Kafka Connect instance. Theimage
property in the CR specifies the name of the container image that you create to run your Debezium connector. You apply this CR to the OpenShift instance where Red Hat Streams for Apache Kafka is deployed. Streams for Apache Kafka offers operators and images that bring Apache Kafka to OpenShift. -
A
KafkaConnector
CR that defines your Debezium MariaDB connector. Apply this CR to the same OpenShift instance where you apply theKafkaConnect
CR.
Prerequisites
- MariaDB is running and you completed the steps to set up MariaDB to work with a Debezium connector.
- Streams for Apache Kafka is deployed on OpenShift and is running Apache Kafka and Kafka Connect. For more information, see Deploying and Managing Streams for Apache Kafka on OpenShift.
- Podman or Docker is installed.
-
You have an account and permissions to create and manage containers in the container registry (such as
quay.io
ordocker.io
) to which you plan to add the container that will run your Debezium connector.
Procedure
Create the Debezium MariaDB container for Kafka Connect:
Create a Dockerfile that uses
registry.redhat.io/amq-streams-kafka-35-rhel8:2.5.0
as the base image. For example, from a terminal window, enter the following command:cat <<EOF >debezium-container-for-mariadb.yaml 1 FROM registry.redhat.io/amq-streams-kafka-35-rhel8:2.5.0 USER root:root RUN mkdir -p /opt/kafka/plugins/debezium 2 RUN cd /opt/kafka/plugins/debezium/ \ && curl -O https://maven.repository.redhat.com/ga/io/debezium/debezium-connector-mariadb/2.7.3.Final-redhat-00001/debezium-connector-mariadb-2.7.3.Final-redhat-00001-plugin.zip \ && unzip debezium-connector-mariadb-2.7.3.Final-redhat-00001-plugin.zip \ && rm debezium-connector-mariadb-2.7.3.Final-redhat-00001-plugin.zip RUN cd /opt/kafka/plugins/debezium/ USER 1001 EOF
Item Description 1
You can specify any file name that you want.
2
Specifies the path to your Kafka Connect plug-ins directory. If your Kafka Connect plug-ins directory is in a different location, replace this path with the actual path of your directory.
The command creates a Dockerfile with the name
debezium-container-for-mariadb.yaml
in the current directory.Build the container image from the
debezium-container-for-mariadb.yaml
Docker file that you created in the previous step. From the directory that contains the file, open a terminal window and enter one of the following commands:podman build -t debezium-container-for-mariadb:latest .
docker build -t debezium-container-for-mariadb:latest .
The preceding commands build a container image with the name
debezium-container-for-mariadb
.Push your custom image to a container registry, such as
quay.io
or an internal container registry. The container registry must be available to the OpenShift instance where you want to deploy the image. Enter one of the following commands:podman push <myregistry.io>/debezium-container-for-mariadb:latest
docker push <myregistry.io>/debezium-container-for-mariadb:latest
Create a new Debezium MariaDB
KafkaConnect
custom resource (CR). For example, create aKafkaConnect
CR with the namedbz-connect.yaml
that specifiesannotations
andimage
properties. The following example shows an excerpt from adbz-connect.yaml
file that describes aKafkaConnect
custom resource.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect-cluster
  annotations:
    strimzi.io/use-connector-resources: "true" 1
spec:
  #...
  image: debezium-container-for-mariadb 2
...
Item Description 1
metadata.annotations
indicates to the Cluster Operator thatKafkaConnector
resources are used to configure connectors in this Kafka Connect cluster.2
spec.image
specifies the name of the image that you created to run your Debezium connector. This property overrides theSTRIMZI_DEFAULT_KAFKA_CONNECT_IMAGE
variable in the Cluster Operator.Apply the
KafkaConnect
CR to the OpenShift Kafka Connect environment by entering the following command:oc create -f dbz-connect.yaml
The command adds a Kafka Connect instance that specifies the name of the image that you created to run your Debezium connector.
Create a
KafkaConnector
custom resource that configures your Debezium MariaDB connector instance.You configure a Debezium MariaDB connector in a
.yaml
file that specifies the configuration properties for the connector. The connector configuration might instruct Debezium to produce events for a subset of the schemas and tables, or it might set properties so that Debezium ignores, masks, or truncates values in specified columns that are sensitive, too large, or not needed.The following example configures a Debezium connector that connects to a MariaDB host,
192.168.99.100
, on port3306
, and captures changes to theinventory
database.dbserver1
is the server’s logical name.MariaDB
inventory-connector.yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: inventory-connector-mariadb 1
  labels:
    strimzi.io/cluster: my-connect-cluster
spec:
  class: io.debezium.connector.mariadb.MariaDbConnector
  tasksMax: 1 2
  config: 3
    database.hostname: mariadb 4
    database.port: 3306
    database.user: debezium
    database.password: dbz
    database.server.id: 184054 5
    topic.prefix: inventory-connector-mariadb 6
    table.include.list: inventory 7
    schema.history.internal.kafka.bootstrap.servers: my-cluster-kafka-bootstrap:9092 8
    schema.history.internal.kafka.topic: schema-changes.inventory 9
Table 2.51. Descriptions of connector configuration settings Item Description 1
The name of the connector.
2
Only one task should operate at any one time. Because the MariaDB connector reads the MariaDB server’s
binlog
, using a single connector task ensures proper order and event handling. The Kafka Connect service uses connectors to start one or more tasks that do the work, and it automatically distributes the running tasks across the cluster of Kafka Connect services. If any of the services stop or crash, those tasks will be redistributed to running services.3
The connector’s configuration.
4
The database host, which is the name of the container running the MariaDB server (
mariadb
).5
Unique ID of the connector.
6
Topic prefix for the MariaDB server or cluster. This name is used as the prefix for all Kafka topics that receive change event records.
7
The connector captures changes from the
inventory
table only.8
The list of Kafka brokers that this connector will use to write and recover DDL statements to the database schema history topic. Upon restart, the connector recovers the schemas of the database that existed at the point in time in the binlog when the connector should begin reading.
9
The name of the database schema history topic. This topic is for internal use only and should not be used by consumers.
Create your connector instance with Kafka Connect. For example, if you saved your
KafkaConnector
resource in theinventory-connector.yaml
file, you would run the following command:oc apply -f inventory-connector.yaml
The preceding command registers
inventory-connector
and the connector starts to run against theinventory
database as defined in theKafkaConnector
CR.
For the complete list of the configuration properties that you can set for the Debezium MariaDB connector, see MariaDB connector configuration properties.
Results
After the connector starts, it performs a consistent snapshot of the MariaDB databases that the connector is configured for. The connector then starts generating data change events for row-level operations and streaming change event records to Kafka topics.
2.2.6.4. Verifying that the Debezium MariaDB connector is running
If the connector starts correctly without errors, it creates a topic for each table that the connector is configured to capture. Downstream applications can subscribe to these topics to retrieve information about events that occur in the source database.
To verify that the connector is running, you perform the following operations from the OpenShift Container Platform web console, or through the OpenShift CLI tool (oc):
- Verify the connector status.
- Verify that the connector generates topics.
- Verify that topics are populated with events for read operations ("op":"r") that the connector generates during the initial snapshot of each table.
Prerequisites
- A Debezium connector is deployed to Streams for Apache Kafka on OpenShift.
-
The OpenShift
oc
CLI client is installed. - You have access to the OpenShift Container Platform web console.
Procedure
Check the status of the
KafkaConnector
resource by using one of the following methods:From the OpenShift Container Platform web console:
-
Navigate to Home → Search.
On the Search page, click Resources to open the Select Resource box, and then type
KafkaConnector
. - From the KafkaConnectors list, click the name of the connector that you want to check, for example inventory-connector-mariadb.
- In the Conditions section, verify that the values in the Type and Status columns are set to Ready and True.
From a terminal window:
Enter the following command:
oc describe KafkaConnector <connector-name> -n <project>
For example,
oc describe KafkaConnector inventory-connector-mariadb -n debezium
The command returns status information that is similar to the following output:
Example 2.13.
KafkaConnector resource status
Name:         inventory-connector-mariadb
Namespace:    debezium
Labels:       strimzi.io/cluster=debezium-kafka-connect-cluster
Annotations:  <none>
API Version:  kafka.strimzi.io/v1beta2
Kind:         KafkaConnector
...
Status:
  Conditions:
    Last Transition Time:  2021-12-08T17:41:34.897153Z
    Status:                True
    Type:                  Ready
  Connector Status:
    Connector:
      State:      RUNNING
      worker_id:  10.131.1.124:8083
    Name:         inventory-connector-mariadb
    Tasks:
      Id:         0
      State:      RUNNING
      worker_id:  10.131.1.124:8083
    Type:         source
  Observed Generation:  1
  Tasks Max:            1
  Topics:
    inventory-connector-mariadb.inventory
    inventory-connector-mariadb.inventory.addresses
    inventory-connector-mariadb.inventory.customers
    inventory-connector-mariadb.inventory.geom
    inventory-connector-mariadb.inventory.orders
    inventory-connector-mariadb.inventory.products
    inventory-connector-mariadb.inventory.products_on_hand
Events:  <none>
Verify that the connector created Kafka topics:
From the OpenShift Container Platform web console.
-
Navigate to Home → Search.
On the Search page, click Resources to open the Select Resource box, and then type
KafkaTopic
. -
From the KafkaTopics list, click the name of the topic that you want to check, for example,
inventory-connector-mariadb.inventory.orders---ac5e98ac6a5d91e04d8ec0dc9078a1ece439081d
. - In the Conditions section, verify that the values in the Type and Status columns are set to Ready and True.
From a terminal window:
Enter the following command:
oc get kafkatopics
The command returns status information that is similar to the following output:
Example 2.14.
KafkaTopic
resource statusNAME CLUSTER PARTITIONS REPLICATION FACTOR READY connect-cluster-configs debezium-kafka-cluster 1 1 True connect-cluster-offsets debezium-kafka-cluster 25 1 True connect-cluster-status debezium-kafka-cluster 5 1 True consumer-offsets---84e7a678d08f4bd226872e5cdd4eb527fadc1c6a debezium-kafka-cluster 50 1 True inventory-connector-mariadb--a96f69b23d6118ff415f772679da623fbbb99421 debezium-kafka-cluster 1 1 True inventory-connector-mariadb.inventory.addresses---1b6beaf7b2eb57d177d92be90ca2b210c9a56480 debezium-kafka-cluster 1 1 True inventory-connector-mariadb.inventory.customers---9931e04ec92ecc0924f4406af3fdace7545c483b debezium-kafka-cluster 1 1 True inventory-connector-mariadb.inventory.geom---9f7e136091f071bf49ca59bf99e86c713ee58dd5 debezium-kafka-cluster 1 1 True inventory-connector-mariadb.inventory.orders---ac5e98ac6a5d91e04d8ec0dc9078a1ece439081d debezium-kafka-cluster 1 1 True inventory-connector-mariadb.inventory.products---df0746db116844cee2297fab611c21b56f82dcef debezium-kafka-cluster 1 1 True inventory-connector-mariadb.inventory.products_on_hand---8649e0f17ffcc9212e266e31a7aeea4585e5c6b5 debezium-kafka-cluster 1 1 True schema-changes.inventory debezium-kafka-cluster 1 1 True strimzi-store-topic---effb8e3e057afce1ecf67c3f5d8e4e3ff177fc55 debezium-kafka-cluster 1 1 True strimzi-topic-operator-kstreams-topic-store-changelog---b75e702040b99be8a9263134de3507fc0cc4017b debezium-kafka-cluster 1 1 True
Check topic content.
- From a terminal window, enter the following command:
oc exec -n <project> -it <kafka-cluster> -- /opt/kafka/bin/kafka-console-consumer.sh \
>     --bootstrap-server localhost:9092 \
>     --from-beginning \
>     --property print.key=true \
>     --topic=<topic-name>
For example,
oc exec -n debezium -it debezium-kafka-cluster-kafka-0 -- /opt/kafka/bin/kafka-console-consumer.sh \
>     --bootstrap-server localhost:9092 \
>     --from-beginning \
>     --property print.key=true \
>     --topic=inventory-connector-mariadb.inventory.products_on_hand
The format for specifying the topic name is the same as the
oc describe
command returns in Step 1, for example,inventory-connector-mariadb.inventory.addresses
.For each event in the topic, the command returns information that is similar to the following output:
Example 2.15. Content of a Debezium change event
{"schema":{"type":"struct","fields":[{"type":"int32","optional":false,"field":"product_id"}],"optional":false,"name":"inventory-connector-mariadb.inventory.products_on_hand.Key"},"payload":{"product_id":101}} {"schema":{"type":"struct","fields":[{"type":"struct","fields":[{"type":"int32","optional":false,"field":"product_id"},{"type":"int32","optional":false,"field":"quantity"}],"optional":true,"name":"inventory-connector-mariadb.inventory.products_on_hand.Value","field":"before"},{"type":"struct","fields":[{"type":"int32","optional":false,"field":"product_id"},{"type":"int32","optional":false,"field":"quantity"}],"optional":true,"name":"inventory-connector-mariadb.inventory.products_on_hand.Value","field":"after"},{"type":"struct","fields":[{"type":"string","optional":false,"field":"version"},{"type":"string","optional":false,"field":"connector"},{"type":"string","optional":false,"field":"name"},{"type":"int64","optional":false,"field":"ts_ms"},{"type":"int64","optional":false,"field":"ts_us"},{"type":"int64","optional":false,"field":"ts_ns"},{"type":"string","optional":true,"name":"io.debezium.data.Enum","version":1,"parameters":{"allowed":"true,last,false"},"default":"false","field":"snapshot"},{"type":"string","optional":false,"field":"db"},{"type":"string","optional":true,"field":"sequence"},{"type":"string","optional":true,"field":"table"},{"type":"int64","optional":false,"field":"server_id"},{"type":"string","optional":true,"field":"gtid"},{"type":"string","optional":false,"field":"file"},{"type":"int64","optional":false,"field":"pos"},{"type":"int32","optional":false,"field":"row"},{"type":"int64","optional":true,"field":"thread"},{"type":"string","optional":true,"field":"query"}],"optional":false,"name":"io.debezium.connector.mariadb.Source","field":"source"},{"type":"string","optional":false,"field":"op"},{"type":"int64","optional":true,"field":"ts_ms"},{"type":"int64","optional":true,"field":"ts_us"},{"type":"int64","optional":true,"field":"ts_ns"},{"type":"struct","fields":[{"type":"string","optional":false,"field":"id"},{"type":"int64","optional":false,"field":"total_order"},{"type":"int64","optional":false,"field":"data_collection_order"}],"optional":true,"field":"transaction"}],"optional":false,"name":"inventory-connector-mariadb.inventory.products_on_hand.Envelope"},"payload":{"before":null,"after":{"product_id":101,"quantity":3},"source":{"version":"2.7.3.Final-redhat-00001","connector":"mariadb","name":"inventory-connector-mariadb","ts_ms":1638985247805,"ts_us":1638985247805000000,"ts_ns":1638985247805000000,"snapshot":"true","db":"inventory","sequence":null,"table":"products_on_hand","server_id":0,"gtid":null,"file":"mariadb-bin.000003","pos":156,"row":0,"thread":null,"query":null},"op":"r","ts_ms":1638985247805,"ts_us":1638985247805102,"ts_ns":1638985247805102588,"transaction":null}}
In the preceding example, the
payload
value shows that the connector snapshot generated a read ("op" ="r"
) event from the tableinventory.products_on_hand
. The"before"
state of theproduct_id
record isnull
, indicating that no previous value exists for the record. The"after"
state shows aquantity
of3
for the item withproduct_id
101
.
2.2.6.5. Descriptions of Debezium MariaDB connector configuration properties
The Debezium MariaDB connector has numerous configuration properties that you can use to achieve the right connector behavior for your application. Many properties have default values. Information about the properties is organized as follows:
- Required connector configuration properties
- Advanced connector configuration properties
- Database schema history connector configuration properties that control how Debezium processes events that it reads from the database schema history topic.
- Pass-through MariaDB connector configuration properties
- Pass-through database schema history properties for configuring producer and consumer clients
- Pass-through Kafka signals configuration properties
- Pass-through Kafka signals consumer client configuration properties
- Pass-through sink notification configuration properties
- Pass-through database driver configuration properties
Required Debezium MariaDB connector configuration properties
The following configuration properties are required unless a default value is available.
bigint.unsigned.handling.mode
Default value:long
Specifies how the connector represents BIGINT UNSIGNED columns in change events. Set one of the following options:long
-
Uses Java
long
data types to represent BIGINT UNSIGNED column values. Although the long
type does not offer the greatest precision, it is easy to implement in most consumers. In most environments, this is the preferred setting. precise
-
Uses
java.math.BigDecimal
data types to represent values. The connector uses the Kafka Connectorg.apache.kafka.connect.data.Decimal
data type to represent values in encoded binary format. Set this option if the connector typically works with values larger than 2^63. The long
data type cannot convey values of that size.
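For example (an illustrative sketch with a hypothetical table), the maximum BIGINT UNSIGNED value exceeds the range of a signed 64-bit long, so capturing it without loss requires bigint.unsigned.handling.mode=precise:
-- 18446744073709551615 is the maximum BIGINT UNSIGNED value (2^64 - 1) and cannot be
-- represented by a Java long, which tops out at 2^63 - 1.
CREATE TABLE bigint_example (
  id INT PRIMARY KEY,
  counter BIGINT UNSIGNED
);
INSERT INTO bigint_example VALUES (1, 18446744073709551615);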
binary.handling.mode
Default value:
bytes
Specifies how the connector represents values for binary columns, such as blob, binary, and varbinary, in change events.
Set one of the following options:bytes
- Represents binary data as a byte array.
base64
- Represents binary data as a base64-encoded String.
base64-url-safe
- Represents binary data as a base64-url-safe-encoded String.
hex
- Represents binary data as a hex-encoded (base16) String.
column.exclude.list
Default value: empty string
An optional, comma-separated list of regular expressions that match the fully-qualified names of columns to exclude from change event record values. Other columns in the source record are captured as usual. Fully-qualified names for columns are of the form databaseName.tableName.columnName.
To match the name of a column, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the column; it does not match substrings that might be present in a column name. If you include this property in the configuration, do not also set the
column.include.list
property.
column.include.list
Default value: empty string
An optional, comma-separated list of regular expressions that match the fully-qualified names of columns to include in change event record values. Other columns are omitted from the event record. Fully-qualified names for columns are of the form databaseName.tableName.columnName.
To match the name of a column, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the column; it does not match substrings that might be present in a column name.
If you include this property in the configuration, do not set thecolumn.exclude.list
property.
column.mask.hash.v2.hashAlgorithm.with.salt.salt
Default value: No default
An optional, comma-separated list of regular expressions that match the fully-qualified names of character-based columns. Fully-qualified names for columns are of the form <databaseName>.<tableName>.<columnName>.
To match the name of a column, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the column; the expression does not match substrings that might be present in a column name. In the resulting change event record, the values for the specified columns are replaced with pseudonyms.
A pseudonym consists of the hashed value that results from applying the specified hashAlgorithm and salt. Based on the hash function that is used, referential integrity is maintained, while column values are replaced with pseudonyms. Supported hash functions are described in the MessageDigest section of the Java Cryptography Architecture Standard Algorithm Name Documentation.
In the following example, CzQMA0cB5K is a randomly selected salt:
column.mask.hash.SHA-256.with.salt.CzQMA0cB5K = inventory.orders.customerName, inventory.shipment.customerName
If necessary, the pseudonym is automatically shortened to the length of the column. The connector configuration can include multiple properties that specify different hash algorithms and salts.
Depending on the hashAlgorithm used, the salt selected, and the actual data set, the resulting data set might not be completely masked.
Hashing strategy version 2 ensures fidelity of values that are hashed in different places or systems.
column.mask.with.length.chars
Default value: No default
An optional, comma-separated list of regular expressions that match the fully-qualified names of character-based columns. Set this property if you want the connector to mask the values for a set of columns, for example, if they contain sensitive data. Set length to a positive integer to replace data in the specified columns with the number of asterisk (*) characters specified by the length in the property name. Set length to 0 (zero) to replace data in the specified columns with an empty string.
The fully-qualified name of a column observes the following format: databaseName.tableName.columnName. To match the name of a column, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the column; the expression does not match substrings that might be present in a column name.
You can specify multiple properties with different lengths in a single configuration.
column.propagate.source.type
Default value: No default
An optional, comma-separated list of regular expressions that match the fully-qualified names of columns for which you want the connector to emit extra parameters that represent column metadata. When this property is set, the connector adds the following fields to the schema of event records:
- __debezium.source.column.type
- __debezium.source.column.length
- __debezium.source.column.scale
These parameters propagate a column’s original type name and length (for variable-width types), respectively.
Enabling the connector to emit this extra data can assist in properly sizing specific numeric or character-based columns in sink databases.
The fully-qualified name of a column observes one of the following formats: databaseName.tableName.columnName, or databaseName.schemaName.tableName.columnName.
To match the name of a column, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the column; the expression does not match substrings that might be present in a column name.
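For example, assuming a hypothetical inventory.products.price column, the following setting causes the connector to add the __debezium.source.column.* parameters to the event schema for that column:
column.propagate.source.type=inventory.products.price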
-
column.truncate.to.length.chars
Default value: No default
An optional, comma-separated list of regular expressions that match the fully-qualified names of character-based columns. Set this property if you want to truncate the data in a set of columns when it exceeds the number of characters specified by the length in the property name. Set length to a positive integer value, for example, column.truncate.to.20.chars.
The fully-qualified name of a column observes the following format: databaseName.tableName.columnName. To match the name of a column, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the column; the expression does not match substrings that might be present in a column name.
You can specify multiple properties with different lengths in a single configuration.
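For example, the following hypothetical setting truncates the values of a free-text comments column to 20 characters; the table and column names are illustrative only:
column.truncate.to.20.chars=inventory.orders.comments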
connect.timeout.ms
-
Default value:
30000
(30 seconds)
A positive integer value that specifies the maximum time in milliseconds that the connector waits to establish a connection to the MariaDB database server before the connection request times out.
connector.class
-
Default value: No default
The name of the Java class for the connector. Always specify io.debezium.connector.mariadb.MariaDbConnector for the MariaDB connector.
database.exclude.list
Default value: empty string
An optional, comma-separated list of regular expressions that match the names of databases from which you do not want the connector to capture changes. The connector captures changes in any database that is not named in the database.exclude.list.
To match the name of a database, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the database; it does not match substrings that might be present in a database name.
If you include this property in the configuration, do not also set the database.include.list property.
database.hostname
-
Default value: No default
The IP address or hostname of the MariaDB database server.
database.include.list
Default value: empty string
An optional, comma-separated list of regular expressions that match the names of the databases from which the connector captures changes. The connector does not capture changes in any database whose name is not in database.include.list. By default, the connector captures changes in all databases.
To match the name of a database, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the database; it does not match substrings that might be present in a database name.
If you include this property in the configuration, do not also set the database.exclude.list property.
database.password
-
Default value: No default
The password of the MariaDB user that the connector uses to connect to the MariaDB database server.
database.port
-
Default value:
3306
Integer port number of the MariaDB database server.
database.server.id
-
Default value: No default
The numeric ID of this database client. The specified ID must be unique across all currently running database processes in the MariaDB cluster. To enable it to read the binlog, the connector uses this unique ID to join the MariaDB database cluster as another server.
database.user
-
Default value: No default
The name of the MariaDB user that the connector uses to connect to the MariaDB database server.
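Taken together, the connection-related properties might look like the following minimal sketch; the host name, credentials, server ID, topic prefix, and table list are placeholder values only:
connector.class=io.debezium.connector.mariadb.MariaDbConnector
database.hostname=mariadb.example.com
database.port=3306
database.user=debezium
database.password=dbz-secret
database.server.id=184054
topic.prefix=fulfillment
table.include.list=inventory.customers,inventory.orders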
decimal.handling.mode
Default value:
precise
Specifies how the connector handles values for DECIMAL and NUMERIC columns in change events.
Set one of the following options:
precise (default)
- Uses java.math.BigDecimal values in binary form to represent values precisely.
double
- Uses the double data type to represent values. This option can result in a loss of precision, but it is easier for most consumers to use.
string
- Encodes values as formatted strings. This option is easy to consume, but can result in the loss of semantic information about the real type.
event.deserialization.failure.handling.mode
Default value:
fail
Specifies how the connector reacts after an exception occurs during deserialization of binlog events. This option is deprecated; use the event.processing.failure.handling.mode option instead.
fail
- Propagates the exception, which indicates the problematic event and its binlog offset, and causes the connector to stop.
warn
- Logs the problematic event and its binlog offset and then skips the event.
ignore
- Passes over the problematic event and does not log anything.
field.name.adjustment.mode
Default value: No default
Specifies how field names should be adjusted for compatibility with the message converter used by the connector. Set one of the following options:
none
- No adjustment.
avro
- Replaces characters that are not valid in Avro names with underscore characters.
avro_unicode
- Replaces underscore characters or characters that cannot be used in Avro names with corresponding unicode, such as _uxxxx.
Note: _ is an escape sequence, similar to a backslash in Java.
For more information, see: Avro naming.
gtid.source.excludes
-
Default value: No default
A comma-separated list of regular expressions that match source domain IDs in the GTID set that the connector uses to find the binlog position on the MariaDB server. When this property is set, the connector uses only the GTID ranges that have source UUIDs that do not match any of the specified exclude patterns.
To match the value of a GTID, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the GTID’s domain identifier.
If you include this property in the configuration, do not also set the gtid.source.includes property.
gtid.source.includes
-
Default value: No default
A comma-separated list of regular expressions that match source domain IDs in the GTID set that the connector uses to find the binlog position on the MariaDB server. When this property is set, the connector uses only the GTID ranges that have source UUIDs that match one of the specified include patterns.
To match the value of a GTID, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the GTID’s domain identifier.
If you include this property in the configuration, do not also set the gtid.source.excludes property.
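For example, the following hypothetical setting restricts the connector to GTID ranges whose domain ID is 0 or 1; the domain IDs are illustrative only:
gtid.source.includes=0,1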
include.query
-
Default value:
false
Boolean value that specifies whether the connector should include the original SQL query that generated the change event.
If you set this option to true, then you must also configure MariaDB with the binlog_annotate_row_events option set to ON. When include.query is true, the query is not present for events that the snapshot process generates.
Setting include.query to true might expose tables or fields that are explicitly excluded or masked by including the original SQL statement in the change event. For this reason, the default setting is false.
For more information about configuring the database to return the original SQL statement for each log event, see Enabling query log events.
include.schema.changes
-
Default value:
true
Boolean value that specifies whether the connector publishes changes that occur to the database schema to a Kafka topic with the name of the database server ID. Each schema change event that the connector captures uses a key that contains the database name and a value that includes the DDL statements that describe the change. This setting does not affect how the connector records schema changes in its internal database schema history.
include.schema.comments
Default value:
false
Boolean value that specifies whether the connector parses and publishes table and column comments on metadata objects.
Note: When you set this option to true, the schema comments that the connector includes can add a significant amount of string data to each schema object. Increasing the number and size of logical schema objects increases the amount of memory that the connector uses.
inconsistent.schema.handling.mode
Default value:
fail
Specifies how the connector responds to binlog events that refer to tables that are not present in the internal schema representation. That is, the internal representation is not consistent with the database.
Set one of the following options:
fail
- The connector throws an exception that reports the problematic event and its binlog offset. The connector then stops.
warn
- The connector logs the problematic event and its binlog offset, and then skips the event.
skip
- The connector skips the problematic event and does not report it in the log.
message.key.columns
-
Default value: No default
A list of expressions that specify the columns that the connector uses to form custom message keys for change event records that it publishes to the Kafka topics for specified tables.
By default, Debezium uses the primary key column of a table as the message key for records that it emits. In place of the default, or to specify a key for tables that lack a primary key, you can configure custom message keys based on one or more columns.
To establish a custom message key for a table, list the table, followed by the columns to use as the message key. Each list entry takes the following format:
<fully-qualified_tableName>:<keyColumn>,<keyColumn>
To base a table key on multiple column names, insert commas between the column names.
Each fully-qualified table name is a regular expression in the following format:
<databaseName>.<tableName>
The property can include entries for multiple tables. Use a semicolon to separate table entries in the list.
The following example sets the message key for the tables inventory.customers and purchase.orders:
inventory.customers:pk1,pk2;(.*).purchaseorders:pk3,pk4
For the table inventory.customers, the columns pk1 and pk2 are specified as the message key. For the purchaseorders tables in any database, the columns pk3 and pk4 serve as the message key.
There is no limit to the number of columns that you use to create custom message keys. However, it’s best to use the minimum number that are required to specify a unique key.
name
-
Default value: No default
Unique name for the connector. If you attempt to use the same name to register another connector, registration fails. This property is required by all Kafka Connect connectors.
schema.name.adjustment.mode
Default value: No default
Specifies how the connector adjusts schema names for compatibility with the message converter used by the connector. Set one of the following options:
none
- No adjustment.
avro
- Replaces characters that are not valid in Avro names with underscore characters.
avro_unicode
- Replaces underscore characters or characters that cannot be used in Avro names with corresponding unicode, such as _uxxxx.
Note: _ is an escape sequence, similar to a backslash in Java.
skip.messages.without.change
-
Default value:
false
Specifies whether the connector emits messages for records when it does not detect a change in the included columns. Columns are considered to be included if they are listed in the column.include.list, or are not listed in the column.exclude.list. Set the value to true to prevent the connector from capturing records when no changes are present in the included columns.
table.exclude.list
Default value: empty string
An optional, comma-separated list of regular expressions that match fully-qualified table identifiers of tables from which you do not want the connector to capture changes. The connector captures changes in any table that is not included in table.exclude.list. Each identifier is of the form databaseName.tableName.
To match the name of a table, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the table; it does not match substrings that might be present in a table name.
If you include this property in the configuration, do not also set the table.include.list property.
table.include.list
Default value: empty string
An optional, comma-separated list of regular expressions that match fully-qualified table identifiers of tables whose changes you want to capture. The connector does not capture changes in any table that is not included in table.include.list. Each identifier is of the form databaseName.tableName. By default, the connector captures changes in all non-system tables in every database from which it is configured to capture changes.
To match the name of a table, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the table; it does not match substrings that might be present in a table name.
If you include this property in the configuration, do not also set the table.exclude.list property.
tasks.max
-
Default value:
1
The maximum number of tasks to create for this connector. Because the MariaDB connector always uses a single task, changing the default value has no effect.
time.precision.mode
Default value:
adaptive_time_microseconds
Specifies the type of precision that the connector uses to represent time, date, and timestamps values. Set one of the following options:
adaptive_time_microseconds (default)
- The connector captures the date, datetime and timestamp values exactly as in the database, using either millisecond, microsecond, or nanosecond precision values based on the database column’s type, with the exception of TIME type fields, which are always captured as microseconds.
connect
- The connector always represents time and timestamp values using Kafka Connect’s built-in representations for Time, Date, and Timestamp, which use millisecond precision regardless of the database columns' precision.
tombstones.on.delete
Default value:
true
Specifies whether a delete event is followed by a tombstone event. After a source record is deleted, the connector can emit a tombstone event (the default behavior) to enable Kafka to completely delete all events that pertain to the key of the deleted row in case log compaction is enabled for the topic. Set one of the following options:
true (default)
- The connector represents delete operations by emitting a delete event and a subsequent tombstone event.
false
-
The connector emits only delete events.
topic.prefix
Default value: No default
Topic prefix that provides a namespace for the particular MariaDB database server or cluster in which Debezium is capturing changes. Because the topic prefix is used to name all of the Kafka topics that receive events that this connector emits, it’s important that the topic prefix is unique across all connectors. Values must contain only alphanumeric characters, hyphens, dots, and underscores.
Warning: After you set this property, do not change its value. If you change the value, after the connector restarts, instead of continuing to emit events to the original topics, the connector emits subsequent events to topics whose names are based on the new value. The connector is also unable to recover its database schema history topic.
Advanced Debezium MariaDB connector configuration properties
The following list describes advanced MariaDB connector configuration properties. The default values for these properties rarely require changes. Therefore, you do not need to specify them in the connector configuration.
connect.keep.alive
-
Default value:
true
A Boolean value that specifies whether a separate thread should be used to ensure that the connection to the MariaDB server or cluster is kept alive.
converters
Default value: No default
Enumerates a comma-separated list of the symbolic names of the custom converter instances that the connector can use.
For example, boolean.
This property is required to enable the connector to use a custom converter.
For each converter that you configure for a connector, you must also add a .type property, which specifies the fully-qualified name of the class that implements the converter interface. The .type property uses the following format:
<converterSymbolicName>.type
For example,
boolean.type: io.debezium.connector.binlog.converters.TinyIntOneToBooleanConverter
If you want to further control the behavior of a configured converter, you can add one or more configuration parameters to pass values to the converter. To associate these additional configuration parameters with a converter, prefix the parameter name with the symbolic name of the converter.
For example, to define a selector parameter that specifies the subset of columns that the boolean converter processes, add the following property:
boolean.selector=db1.table1.*, db1.table2.column1
custom.metric.tags
-
Default value: No default
Defines tags that customize MBean object names by adding metadata that provides contextual information. Specify a comma-separated list of key-value pairs. Each key represents a tag for the MBean object name, and the corresponding value represents a value for the key, for example, k1=v1,k2=v2.
The connector appends the specified tags to the base MBean object name. Tags can help you to organize and categorize metrics data. You can define tags to identify particular application instances, environments, regions, versions, and so forth. For more information, see Customized MBean names.
database.initial.statements
Default value: No default
A semicolon-separated list of SQL statements that the connector executes when it establishes a JDBC connection to the database (not the connection that reads the transaction log). To specify a semicolon as a character in a SQL statement and not as a delimiter, use two semicolons (;;).
The connector might establish JDBC connections at its own discretion, so this property is only for configuring session parameters. It is not for executing DML statements.
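For example, the following hypothetical setting adjusts two session variables on each JDBC connection that the connector opens; individual statements are separated by a single semicolon:
database.initial.statements=SET SESSION wait_timeout=2000;SET SESSION net_write_timeout=360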
database.query.timeout.ms
-
Default value:
600000
(10 minutes)
Specifies the time, in milliseconds, that the connector waits for a query to complete. Set the value to 0 (zero) to remove the timeout limit.
database.ssl.keystore
-
Default value: No default
An optional setting that specifies the location of the key store file. A key store file can be used for two-way authentication between the client and the MariaDB server.
database.ssl.keystore.password
-
Default value: No default
The password for the key store file. Specify a password only if the database.ssl.keystore property is configured.
database.ssl.mode
Default value:
preferred
Specifies whether the connector uses an encrypted connection. The following settings are available:
disabled
- Specifies the use of an unencrypted connection.
preferred (default)
- The connector establishes an encrypted connection if the server supports secure connections. If the server does not support secure connections, the connector falls back to using an unencrypted connection.
required
- The connector establishes an encrypted connection. If it is unable to establish an encrypted connection, the connector fails.
verify_ca
-
The connector behaves as when you set the
required
option, but it also verifies the server TLS certificate against the configured Certificate Authority (CA) certificates. If the server TLS certificate does not match any valid CA certificates, the connector fails.
verify_identity
-
The connector behaves as when you set the
verify_ca
option, but it also verifies that the server certificate matches the host of the remote connection.
database.ssl.truststore
-
Default value: No default
The location of the trust store file for the server certificate verification.
database.ssl.truststore.password
-
Default value: No default
The password for the trust store file. Used to check the integrity of the truststore, and unlock the truststore.
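For example, a hypothetical configuration that requires an encrypted connection and verifies the server certificate might combine the SSL properties as follows; the file paths and passwords are placeholders only:
database.ssl.mode=verify_ca
database.ssl.truststore=/var/private/ssl/mariadb.truststore.jks
database.ssl.truststore.password=truststore-secret
database.ssl.keystore=/var/private/ssl/mariadb.keystore.jks
database.ssl.keystore.password=keystore-secret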
enable.time.adjuster
Default value:
true
Boolean value that indicates whether the connector converts a 2-digit year specification to 4 digits. Set the value to false when conversion is fully delegated to the database.
MariaDB users can insert year values with either 2-digits or 4-digits. 2-digit values are mapped to a year in the range 1970 - 2069. By default, the connector performs the conversion.
errors.max.retries
Default value:
-1
Specifies how the connector responds after an operation that results in a retriable error, such as a connection error.
Set one of the following options:
-1
- No limit. The connector always restarts automatically, and retries the operation, regardless of the number of previous failures.
0
- Disabled. The connector fails immediately, and never retries the operation. User intervention is required to restart the connector.
> 0
- The connector restarts automatically until it reaches the specified maximum number of retries. After the next failure, the connector stops, and user intervention is required to restart it.
event.converting.failure.handling.mode
Default value:
warn
Specifies how the connector responds when it cannot convert a table record due to a mismatch between the data type of a column and the type specified by the Debezium internal schema.
Set one of the following options:
fail
- An exception reports that conversion failed because the data type of the field did not match the schema type, and indicates that it might be necessary to restart the connector in schema_only_recovery mode to enable a successful conversion.
warn
- The connector writes a null value to the event field for the column that failed conversion, and writes a message to the warning log.
skip
- The connector writes a null value to the event field for the column that failed conversion, and writes a message to the debug log.
event.processing.failure.handling.mode
Default value:
fail
Specifies how the connector handles failures that occur when processing events, for example, if it encounters a corrupted event. The following settings are available:
fail
- The connector raises an exception that reports the problematic event and its position. The connector then stops.
warn
- The connector does not raise an exception. Instead, it logs the problematic event and its position, and then skips the event.
ignore
- The connector ignores the problematic event, and does not generate a log entry.
heartbeat.action.query
Default value: No default
Specifies a query that the connector executes on the source database when the connector sends a heartbeat message.
For example, the following query periodically captures the state of the executed GTID set in the source database.
INSERT INTO gtid_history_table (select @gtid_executed)
heartbeat.interval.ms
Default value:
0
Specifies how frequently the connector sends heartbeat messages to a Kafka topic. By default, the connector does not send heartbeat messages.
Heartbeat messages are useful for monitoring whether the connector is receiving change events from the database. Heartbeat messages might help decrease the number of change events that need to be re-sent when a connector restarts. To send heartbeat messages, set this property to a positive integer, which indicates the number of milliseconds between heartbeat messages.
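For example, the following hypothetical settings emit a heartbeat message every 10 seconds and run the query from the preceding heartbeat.action.query example each time a heartbeat is sent:
heartbeat.interval.ms=10000
heartbeat.action.query=INSERT INTO gtid_history_table (select @gtid_executed)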
incremental.snapshot.allow.schema.changes
-
Default value:
false
Specifies whether the connector allows schema changes during an incremental snapshot. When the value is set to true, the connector detects schema changes during an incremental snapshot, and re-selects the current chunk to avoid locking DDLs.
Changes to a primary key are not supported. Changing the primary key during an incremental snapshot can lead to incorrect results. A further limitation is that if a schema change affects only the default values of columns, then the change is not detected until the DDL is processed from the binlog stream. This does not affect the values of snapshot events, but the schema of these snapshot events may have outdated defaults.
incremental.snapshot.chunk.size
-
Default value:
1024
The maximum number of rows that the connector fetches and reads into memory when it retrieves an incremental snapshot chunk. Increasing the chunk size provides greater efficiency, because the snapshot runs fewer snapshot queries of a greater size. However, larger chunk sizes also require more memory to buffer the snapshot data. Adjust the chunk size to a value that provides the best performance in your environment.
incremental.snapshot.watermarking.strategy
Default value:
insert_insert
Specifies the watermarking mechanism that the connector uses during an incremental snapshot to deduplicate events that might be captured by an incremental snapshot and then recaptured after streaming resumes.
You can specify one of the following options:
insert_insert (default)
- When you send a signal to initiate an incremental snapshot, for every chunk that Debezium reads during the snapshot, it writes an entry to the signaling data collection to record the signal to open the snapshot window. After the snapshot completes, Debezium inserts a second entry that records the signal to close the window.
insert_delete
- When you send a signal to initiate an incremental snapshot, for every chunk that Debezium reads, it writes a single entry to the signaling data collection to record the signal to open the snapshot window. After the snapshot completes, this entry is removed. No entry is created for the signal to close the snapshot window. Set this option to prevent rapid growth of the signaling data collection.
max.batch.size
-
Default value:
2048
Positive integer value that specifies the maximum size of each batch of events that should be processed during each iteration of this connector.
max.queue.size
-
Default value:
8192
A positive integer value that specifies the maximum number of records that the blocking queue can hold. When Debezium reads events streamed from the database, it places the events in the blocking queue before it writes them to Kafka. The blocking queue can provide backpressure for reading change events from the database in cases where the connector ingests messages faster than it can write them to Kafka, or when Kafka becomes unavailable. Events that are held in the queue are disregarded when the connector periodically records offsets. Always set max.queue.size to a value that is larger than the value of max.batch.size.
max.queue.size.in.bytes
-
Default value:
0
A long integer value that specifies the maximum volume of the blocking queue in bytes. By default, volume limits are not specified for the blocking queue. To specify the number of bytes that the queue can consume, set this property to a positive long value.
If max.queue.size is also set, writing to the queue is blocked when the size of the queue reaches the limit specified by either property. For example, if you set max.queue.size=1000, and max.queue.size.in.bytes=5000, writing to the queue is blocked after the queue contains 1000 records, or after the volume of the records in the queue reaches 5000 bytes.
min.row.count.to.stream.results
Default value:
1000
During a snapshot, the connector queries each table for which the connector is configured to capture changes. The connector uses each query result to produce a read event that contains data for all rows in that table. This property determines whether the MariaDB connector puts results for a table into memory, which is fast but requires large amounts of memory, or streams the results, which can be slower but works for very large tables. The setting of this property specifies the minimum number of rows a table must contain before the connector streams results.
To skip all table size checks and always stream all results during a snapshot, set this property to
0
.
notification.enabled.channels
Default value: No default
List of notification channel names that are enabled for the connector. By default, the following channels are available:
- sink
- log
- jmx
poll.interval.ms
-
Default value:
500
(0.5 seconds)
Positive integer value that specifies the number of milliseconds the connector waits for new change events to appear before it starts processing a batch of events.
provide.transaction.metadata
-
Default value:
false
Determines whether the connector generates events with transaction boundaries and enriches change event envelopes with transaction metadata. Specify true if you want the connector to do this. For more information, see Transaction metadata.
signal.data.collection
-
Default value: No default
Fully-qualified name of the data collection that is used to send signals to the connector.
Use the following format to specify the collection name:
<databaseName>.<tableName>
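For example, if you created a hypothetical signaling table named debezium_signal in the inventory database, you would set:
signal.data.collection=inventory.debezium_signal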
signal.enabled.channels
Default value: No default
List of the signaling channel names that are enabled for the connector. By default, the following channels are available:
- source
- kafka
- file
- jmx
skipped.operations
-
Default value:
t
A comma-separated list of operation types that will be skipped during streaming. The operations include: c for inserts/create, u for updates, d for deletes, t for truncates, and none to not skip any operations. By default, truncate operations are skipped.
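For example, to skip delete and truncate events while continuing to stream inserts and updates, you might set:
skipped.operations=d,t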
snapshot.delay.ms
-
Default value: No default
An interval in milliseconds that the connector should wait before performing a snapshot when the connector starts. If you are starting multiple connectors in a cluster, this property is useful for avoiding snapshot interruptions, which might cause re-balancing of connectors.
snapshot.fetch.size
-
Default value: Unset
By default, during a snapshot, the connector reads table content in batches of rows. Set this property to specify the maximum number of rows in a batch.
snapshot.include.collection.list
-
Default value: All tables specified in the
table.include.list
.
An optional, comma-separated list of regular expressions that match the fully-qualified names (<databaseName>.<tableName>) of the tables to include in a snapshot. The specified items must be named in the connector’s table.include.list property. This property takes effect only if the connector’s snapshot.mode property is set to a value other than never.
This property does not affect the behavior of incremental snapshots.
To match the name of a table, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the table; it does not match substrings that might be present in a table name.
snapshot.lock.timeout.ms
-
Default value:
10000
Positive integer that specifies the maximum amount of time (in milliseconds) to wait to obtain table locks when performing a snapshot. If the connector cannot acquire table locks in this time interval, the snapshot fails. For more information, see
snapshot.locking.mode
Default value:
minimal
Specifies whether and for how long the connector holds the global MariaDB read lock, which prevents any updates to the database while the connector is performing a snapshot. The following settings are available:
minimal
-
The connector holds the global read lock for only the initial phase of the snapshot during which it reads the database schemas and other metadata. During the next phase of the snapshot, the connector releases the lock as it selects all rows from each table. To perform the SELECT operation in a consistent fashion, the connector uses a REPEATABLE READ transaction. Although the release of the global read lock permits other MariaDB clients to update the database, use of REPEATABLE READ isolation ensures a consistent snapshot, because the connector continues to read the same data for the duration of the transaction.
extended
-
Blocks all write operations for the duration of the snapshot. Use this setting if clients submit concurrent operations that are incompatible with the REPEATABLE READ isolation level in MariaDB.
none
- Prevents the connector from acquiring any table locks during the snapshot. Although this option is allowed with all snapshot modes, it is safe to use only if no schema changes occur while the snapshot is running. Tables that are defined with the MyISAM engine always acquire a table lock. As a result, such tables are locked even if you set this option. This behavior differs from tables that are defined by the InnoDB engine, which acquire row-level locks.
snapshot.max.threads
Default value:
1
Specifies the number of threads that the connector uses when performing an initial snapshot. To enable parallel initial snapshots, set the property to a value greater than 1. In a parallel initial snapshot, the connector processes multiple tables concurrently.
Important: Parallel initial snapshots are a Developer Preview feature only. Developer Preview software is not supported by Red Hat in any way and is not functionally complete or production-ready. Do not use Developer Preview software for production or business-critical workloads. Developer Preview software provides early access to upcoming product software in advance of its possible inclusion in a Red Hat product offering. Customers can use this software to test functionality and provide feedback during the development process. This software is subject to change or removal at any time, and has received limited testing. Red Hat might provide ways to submit feedback on Developer Preview software without an associated SLA.
For more information about the support scope of Red Hat Developer Preview software, see Developer Preview Support Scope.
snapshot.mode
Default value:
initial
Specifies the criteria for running a snapshot when the connector starts. The following settings are available:
always
- The connector performs a snapshot every time that it starts. The snapshot includes the structure and data of the captured tables. Specify this value to populate topics with a complete representation of the data from the captured tables every time that the connector starts.
initial (default)
- The connector runs a snapshot only when no offsets have been recorded for the logical server name, or if it detects that an earlier snapshot failed to complete. After the snapshot completes, the connector begins to stream event records for subsequent database changes.
initial_only
- The connector runs a snapshot only when no offsets have been recorded for the logical server name. After the snapshot completes, the connector stops. It does not transition to streaming to read change events from the binlog.
schema_only
- Deprecated, see no_data.
no_data
- The connector runs a snapshot that captures only the schema, but not any table data. Set this option if you do not need the topics to contain a consistent snapshot of the data, but you want to capture any schema changes that were applied after the last connector restart.
schema_only_recovery
- Deprecated, see recovery.
recovery
- Set this option to restore a database schema history topic that is lost or corrupted. After a restart, the connector runs a snapshot that rebuilds the topic from the source tables. You can also set the property to periodically prune a database schema history topic that experiences unexpected growth.
Warning: Do not use this mode to perform a snapshot if schema changes were committed to the database after the last connector shutdown.
never
- When the connector starts, rather than performing a snapshot, it immediately begins to stream event records for subsequent database changes. This option is under consideration for future deprecation, in favor of the no_data option.
when_needed
- After the connector starts, it performs a snapshot only if it detects one of the following circumstances:
- It cannot detect any topic offsets.
- A previously recorded offset specifies a binlog position or GTID that is not available on the server.
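As an illustration of setting this property, the following hypothetical configuration line skips the initial data snapshot and captures only the schema:
snapshot.mode=no_data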
snapshot.query.mode
Default value:
select_all
Specifies how the connector queries data while performing a snapshot.
Set one of the following options:
select_all (default)
- The connector uses a select all query to retrieve rows from captured tables, optionally adjusting the columns selected based on the column include and exclude list configurations.
This setting enables you to manage snapshot content in a more flexible manner compared to using the snapshot.select.statement.overrides property.
snapshot.select.statement.overrides
Default value: No default
Specifies the table rows to include in a snapshot. Use the property if you want a snapshot to include only a subset of the rows in a table. This property affects snapshots only. It does not apply to events that the connector reads from the log.
The property contains a comma-separated list of fully-qualified table names in the form <databaseName>.<tableName>. For example,
"snapshot.select.statement.overrides": "inventory.products,customers.orders"
For each table in the list, add a further configuration property that specifies the SELECT statement for the connector to run on the table when it takes a snapshot. The specified SELECT statement determines the subset of table rows to include in the snapshot. Use the following format to specify the name of this SELECT statement property:
snapshot.select.statement.overrides.<databaseName>.<tableName>
For example, snapshot.select.statement.overrides.customers.orders
From a customers.orders table that includes the soft-delete column delete_flag, add the following properties if you want a snapshot to include only those records that are not soft-deleted:
"snapshot.select.statement.overrides": "customers.orders",
"snapshot.select.statement.overrides.customers.orders": "SELECT * FROM [customers].[orders] WHERE delete_flag = 0 ORDER BY id DESC"
In the resulting snapshot, the connector includes only the records for which delete_flag = 0.
snapshot.tables.order.by.row.count
Default value:
disabled
Specifies the order in which the connector processes tables when it performs an initial snapshot. Set one of the following options:
descending
- The connector snapshots tables in order, based on the number of rows, from highest to lowest.
ascending
- The connector snapshots tables in order, based on the number of rows, from lowest to highest.
disabled
- The connector disregards row count when performing an initial snapshot.
streaming.delay.ms
-
Default value:
0
Specifies the time, in milliseconds, that the connector delays the start of the streaming process after it completes a snapshot. Setting a delay interval helps to prevent the connector from restarting snapshots in the event that a failure occurs immediately after the snapshot completes, but before the streaming process begins. Set a delay value that is higher than the value of the offset.flush.interval.ms property that is set for the Kafka Connect worker.
table.ignore.builtin
-
Default value:
true
A Boolean value that specifies whether built-in system tables should be ignored. This applies regardless of the table include and exclude lists. By default, changes that occur to the values in system tables are excluded from capture, and Debezium does not generate events for system table changes.
topic.cache.size
-
Default value:
10000
Specifies the number of topic names that can be stored in memory in a bounded concurrent hash map. The connector uses the cache to help determine the topic name that corresponds to a data collection.
topic.delimiter
-
Default value:
.
Specifies the delimiter that the connector inserts between components of the topic name.
topic.heartbeat.prefix
Default value:
__debezium-heartbeat
Specifies the name of the topic to which the connector sends heartbeat messages. The topic name takes the following format:
topic.heartbeat.prefix.topic.prefix
For example, if the topic prefix is fulfillment, the default topic name is __debezium-heartbeat.fulfillment.
topic.naming.strategy
-
Default value:
io.debezium.schema.DefaultTopicNamingStrategy
The name of the TopicNamingStrategy class that the connector uses. The specified strategy determines how the connector names the topics that store event records for data changes, schema changes, transactions, heartbeats, and so forth.
topic.transaction
Default value:
transaction
Specifies the name of the topic to which the connector sends transaction metadata messages. The topic name takes the following pattern:
topic.prefix.topic.transaction
For example, if the topic prefix is fulfillment, the default topic name is fulfillment.transaction.
use.nongraceful.disconnect
-
Default value: false
A Boolean value that specifies whether the binary log client’s keepalive thread sets the SO_LINGER socket option to 0 to immediately close stale TCP connections.
Set the value to true if the connector experiences deadlocks in SSLSocketImpl.close.
Debezium connector database schema history configuration properties
Debezium provides a set of schema.history.internal.*
properties that control how the connector interacts with the schema history topic.
The following table describes the schema.history.internal
properties for configuring the Debezium connector.
Property | Default | Description |
---|---|---|
No default | The full name of the Kafka topic where the connector stores the database schema history. | |
No default | A list of host/port pairs that the connector uses for establishing an initial connection to the Kafka cluster. This connection is used for retrieving the database schema history previously stored by the connector, and for writing each DDL statement read from the source database. Each pair should point to the same Kafka cluster used by the Kafka Connect process. | |
| An integer value that specifies the maximum number of milliseconds the connector should wait during startup/recovery while polling for persisted data. The default is 100ms. | |
| An integer value that specifies the maximum number of milliseconds the connector should wait while fetching cluster information using Kafka admin client. | |
| An integer value that specifies the maximum number of milliseconds the connector should wait while creating the Kafka history topic using the Kafka admin client. | |
|
The maximum number of times that the connector should try to read persisted history data before the connector recovery fails with an error. The maximum amount of time to wait after receiving no data is | |
|
A Boolean value that specifies whether the connector should ignore malformed or unknown database statements or stop processing so a human can fix the issue. The safe default is | |
|
A Boolean value that specifies whether the connector records schema structures from all tables in a schema or database, or only from tables that are designated for capture.
| |
|
A Boolean value that specifies whether the connector records schema structures from all logical databases in the database instance.
|
Pass-through MariaDB connector configuration properties
You can set pass-through properties in the connector configuration to customize the behavior of the Apache Kafka producer and consumer. For information about the full range of configuration properties for Kafka producers and consumers, see the Kafka documentation.
Pass-through properties for configuring how producer and consumer clients interact with schema history topics
Debezium relies on an Apache Kafka producer to write schema changes to database schema history topics. Similarly, it relies on a Kafka consumer to read from database schema history topics when a connector starts. You define the configuration for the Kafka producer and consumer clients by assigning values to a set of pass-through configuration properties that begin with the schema.history.internal.producer.*
and schema.history.internal.consumer.*
prefixes. The pass-through producer and consumer database schema history properties control a range of behaviors, such as how these clients secure connections with the Kafka broker, as shown in the following example:
schema.history.internal.producer.security.protocol=SSL
schema.history.internal.producer.ssl.keystore.location=/var/private/ssl/kafka.server.keystore.jks
schema.history.internal.producer.ssl.keystore.password=test1234
schema.history.internal.producer.ssl.truststore.location=/var/private/ssl/kafka.server.truststore.jks
schema.history.internal.producer.ssl.truststore.password=test1234
schema.history.internal.producer.ssl.key.password=test1234
schema.history.internal.consumer.security.protocol=SSL
schema.history.internal.consumer.ssl.keystore.location=/var/private/ssl/kafka.server.keystore.jks
schema.history.internal.consumer.ssl.keystore.password=test1234
schema.history.internal.consumer.ssl.truststore.location=/var/private/ssl/kafka.server.truststore.jks
schema.history.internal.consumer.ssl.truststore.password=test1234
schema.history.internal.consumer.ssl.key.password=test1234
Debezium strips the prefix from the property name before it passes the property to the Kafka client.
For more information about Kafka producer configuration properties and Kafka consumer configuration properties, see the Apache Kafka documentation.
Pass-through properties for configuring how the MariaDB connector interacts with the Kafka signaling topic
Debezium provides a set of signal.*
properties that control how the connector interacts with the Kafka signals topic.
The following table describes the Kafka signal
properties.
Property | Default | Description |
---|---|---|
<topic.prefix>-signal | The name of the Kafka topic that the connector monitors for ad hoc signals. Note If automatic topic creation is disabled, you must manually create the required signaling topic. A signaling topic is required to preserve signal ordering. The signaling topic must have a single partition. | |
kafka-signal | The name of the group ID that is used by Kafka consumers. | |
No default | A list of the host and port pairs that the connector uses to establish its initial connection to the Kafka cluster. Each pair references the Kafka cluster that is used by the Debezium Kafka Connect process. | |
| An integer value that specifies the maximum number of milliseconds that the connector waits when polling signals. | |
| Specifies whether the Kafka consumer writes an offset commit after it reads a message from the signaling topic. The value that you assign to this property determines whether the connector can process requests that the signaling topic receives while the connector is offline. Choose one of the following settings:
|
Pass-through properties for configuring the Kafka consumer client for the signaling channel
The Debezium connector provides for pass-through configuration of the signals Kafka consumer. Pass-through signals properties begin with the prefix signals.consumer.*
. For example, the connector passes properties such as signal.consumer.security.protocol=SSL
to the Kafka consumer.
Debezium strips the prefixes from the properties before it passes the properties to the Kafka signals consumer.
Pass-through properties for configuring the MariaDB connector sink notification channel
The following table describes properties that you can use to configure the Debezium sink notification
channel.
Property | Default | Description |
---|---|---|
No default |
The name of the topic that receives notifications from Debezium. This property is required when you configure the |
Debezium connector pass-through database driver configuration properties
The Debezium connector provides for pass-through configuration of the database driver. Pass-through database properties begin with the prefix driver.*
. For example, the connector passes properties such as driver.foobar=false
to the JDBC URL.
Debezium strips the prefixes from the properties before it passes the properties to the database driver.
2.2.7. Monitoring Debezium MariaDB connector performance
The Debezium MariaDB connector provides three types of metrics that are in addition to the built-in support for JMX metrics that Zookeeper, Kafka, and Kafka Connect provide.
- Snapshot metrics provide information about connector operation while performing a snapshot.
- Streaming metrics provide information about connector operation when the connector is reading the binlog.
- Schema history metrics provide information about the status of the connector’s schema history.
Debezium monitoring documentation provides details for how to expose these metrics by using JMX.
2.2.7.1. Customized names for MariaDB connector snapshot and streaming MBean objects
Debezium connectors expose metrics via the MBean name for the connector. These metrics, which are specific to each connector instance, provide data about the behavior of the connector’s snapshot, streaming, and schema history processes.
By default, when you deploy a correctly configured connector, Debezium generates a unique MBean name for each of the different connector metrics. To view the metrics for a connector process, you configure your observability stack to monitor its MBean. But these default MBean names depend on the connector configuration; configuration changes can result in changes to the MBean names. A change to the MBean name breaks the linkage between the connector instance and the MBean, disrupting monitoring activity. In this scenario, you must reconfigure the observability stack to use the new MBean name if you want to resume monitoring.
To prevent monitoring disruptions that result from MBean name changes, you can configure custom metrics tags. You configure custom metrics by adding the custom.metric.tags
property to the connector configuration. The property accepts key-value pairs in which each key represents a tag for the MBean object name, and the corresponding value represents the value of that tag. For example: k1=v1,k2=v2
. Debezium appends the specified tags to the MBean name of the connector.
After you configure the custom.metric.tags property for a connector, you can configure the observability stack to retrieve metrics associated with the specified tags. The observability stack then uses the specified tags, rather than the mutable MBean names, to uniquely identify connectors. Later, if Debezium redefines how it constructs MBean names, or if the topic.prefix in the connector configuration changes, metrics collection is uninterrupted, because the metrics scrape task uses the specified tag patterns to identify the connector.
A further benefit of using custom tags is that you can use tags that reflect the architecture of your data pipeline, so that metrics are organized in a way that suits your operational needs. For example, you might specify tags with values that declare the type of connector activity, the application context, or the data source, for example, db1-streaming-for-application-abc. If you specify multiple key-value pairs, all of the specified pairs are appended to the connector’s MBean name.
The following example illustrates how tags modify the default MBean name.
Example 2.16. How custom tags modify the connector MBean name
By default, the MariaDB connector uses the following MBean name for streaming metrics:
debezium.mariadb:type=connector-metrics,context=streaming,server=<topic.prefix>
If you set the value of custom.metric.tags
to database=salesdb-streaming,table=inventory
, Debezium generates the following custom MBean name:
debezium.mariadb:type=connector-metrics,context=streaming,server=<topic.prefix>,database=salesdb-streaming,table=inventory
2.2.7.2. Monitoring Debezium during snapshots of MariaDB databases
The MBean is debezium.mariadb:type=connector-metrics,context=snapshot,server=<topic.prefix>
.
Snapshot metrics are not exposed unless a snapshot operation is active, or if a snapshot has occurred since the last connector start.
The following table lists the snapshot metrics that are available.
Attributes | Type | Description |
---|---|---|
| The last snapshot event that the connector has read. | |
| The number of milliseconds since the connector has read and processed the most recent event. | |
| The total number of events that this connector has seen since last started or reset. | |
| The number of events that have been filtered by include/exclude list filtering rules configured on the connector. | |
| The list of tables that are captured by the connector. | |
| The length of the queue used to pass events between the snapshotter and the main Kafka Connect loop. | |
| The free capacity of the queue used to pass events between the snapshotter and the main Kafka Connect loop. | |
| The total number of tables that are being included in the snapshot. | |
| The number of tables that the snapshot has yet to copy. | |
| Whether the snapshot was started. | |
| Whether the snapshot was paused. | |
| Whether the snapshot was aborted. | |
| Whether the snapshot completed. | |
| The total number of seconds that the snapshot has taken so far, even if not complete. Also includes the time when the snapshot was paused. | |
| The total number of seconds that the snapshot was paused. If the snapshot was paused several times, the paused time adds up. | |
| Map containing the number of rows scanned for each table in the snapshot. Tables are incrementally added to the Map during processing. Updates every 10,000 rows scanned and upon completing a table. | |
|
The maximum buffer of the queue in bytes. This metric is available if | |
| The current volume, in bytes, of records in the queue. |
The connector also provides the following additional snapshot metrics when an incremental snapshot is executed:
Attributes | Type | Description |
---|---|---|
| The identifier of the current snapshot chunk. | |
| The lower bound of the primary key set defining the current chunk. | |
| The upper bound of the primary key set defining the current chunk. | |
| The lower bound of the primary key set of the currently snapshotted table. | |
| The upper bound of the primary key set of the currently snapshotted table. |
2.2.7.3. Monitoring Debezium MariaDB connector record streaming
The Debezium MariaDB connector provides three types of metrics that are in addition to the built-in support for JMX metrics that Zookeeper, Kafka, and Kafka Connect provide.
- Snapshot metrics provide information about connector operation while performing a snapshot.
- Streaming metrics provide information about connector operation when the connector is reading the binlog.
- Schema history metrics provide information about the status of the connector’s schema history.
Debezium monitoring documentation provides details for how to expose these metrics by using JMX.
The MBean is debezium.mariadb:type=connector-metrics,context=streaming,server=<topic.prefix>
.
The following table lists the streaming metrics that are available.
Attributes | Type | Description |
---|---|---|
| The last streaming event that the connector has read. | |
| The number of milliseconds since the connector has read and processed the most recent event. | |
| The total number of data change events reported by the source database since the last connector start, or since a metrics reset. Represents the data change workload for Debezium to process. | |
| The total number of create events processed by the connector since its last start or metrics reset. | |
| The total number of update events processed by the connector since its last start or metrics reset. | |
| The total number of delete events processed by the connector since its last start or metrics reset. | |
| The number of events that have been filtered by include/exclude list filtering rules configured on the connector. | |
| The list of tables that are captured by the connector. | |
| The length of the queue used to pass events between the streamer and the main Kafka Connect loop. | |
| The free capacity of the queue used to pass events between the streamer and the main Kafka Connect loop. | |
| Flag that denotes whether the connector is currently connected to the database server. | |
| The number of milliseconds between the last change event’s timestamp and the connector processing it. The values will incorporate any differences between the clocks on the machines where the database server and the connector are running. | |
| The number of processed transactions that were committed. | |
| The coordinates of the last received event. | |
| Transaction identifier of the last processed transaction. | |
|
The maximum buffer of the queue in bytes. This metric is available if max.queue.size.in.bytes is set to a positive long value. | |
| The current volume, in bytes, of records in the queue. |
2.2.7.4. Monitoring Debezium MariaDB connector schema history
The MBean is debezium.mariadb:type=connector-metrics,context=schema-history,server=<topic.prefix>
.
The following table lists the schema history metrics that are available.
Attributes | Type | Description |
---|---|---|
|
One of | |
| The time in epoch seconds at which recovery started. | |
| The number of changes that were read during recovery phase. | |
| The total number of schema changes applied during recovery and runtime. | |
| The number of milliseconds that elapsed since the last change was recovered from the history store. | |
| The number of milliseconds that elapsed since the last change was applied. | |
| The string representation of the last change recovered from the history store. | |
| The string representation of the last applied change. |
2.2.8. How Debezium MariaDB connectors handle faults and problems
Debezium is a distributed system that captures all changes in multiple upstream databases; it never misses or loses an event. When the system is operating normally or being managed carefully, Debezium provides exactly once delivery of every change event record.
If a fault does occur, the system does not lose any events. However, while Debezium is recovering from a fault, it might repeat some change events. In these abnormal situations, Debezium, like Kafka, provides at least once delivery of change events.
Details are in the following sections:
- Configuration and startup errors
In the following situations, the connector fails when trying to start, reports an error or exception in the log, and stops running:
- The connector’s configuration is invalid.
- The connector cannot successfully connect to the MariaDB server by using the specified connection parameters.
- The connector is attempting to restart at a position in the binlog for which MariaDB no longer has the history available.
In these cases, the error message has details about the problem and possibly a suggested workaround. After you correct the configuration or address the MariaDB problem, restart the connector.
However, if you are connecting to a highly available MariaDB cluster, you can restart the connector immediately. It will connect to a different MariaDB server in the cluster, find the location in the server’s binlog that represents the last transaction, and begin reading the new server’s binlog from that specific location.
- Kafka Connect stops gracefully
- When Kafka Connect stops gracefully, there is a short delay while the Debezium MariaDB connector tasks are stopped and restarted on new Kafka Connect processes.
- Kafka Connect process crashes
- If Kafka Connect crashes, the process stops and any Debezium MariaDB connector tasks terminate without their most recently-processed offsets being recorded. In distributed mode, Kafka Connect restarts the connector tasks on other processes. However, the MariaDB connector resumes from the last offset recorded by the earlier processes. As a result, the replacement tasks might regenerate some events that were processed before the crash, creating duplicate events.
Each change event message includes source-specific information that you can use to identify duplicate events, for example:
- Event origin
- MariaDB server’s event time
- The binlog file name and position
- GTIDs.
- MariaDB purges binlog files
- If the Debezium MariaDB connector stops for too long, the MariaDB server purges older binlog files and the connector’s last position may be lost. When the connector is restarted, the MariaDB server no longer has the starting point and the connector performs another initial snapshot. If the snapshot is disabled, the connector fails with an error.
See snapshots for details about how MariaDB connectors perform initial snapshots.
2.3. Debezium connector for MongoDB
Debezium’s MongoDB connector tracks a MongoDB replica set or a MongoDB sharded cluster for document changes in databases and collections, recording those changes as events in Kafka topics. The connector automatically handles the addition or removal of shards in a sharded cluster, changes in membership of each replica set, elections within each replica set, and awaiting the resolution of communications problems.
For information about the MongoDB versions that are compatible with this connector, see the Debezium Supported Configurations page.
Information and procedures for using a Debezium MongoDB connector is organized as follows:
- Section 2.3.1, “Overview of Debezium MongoDB connector”
- Section 2.3.2, “How Debezium MongoDB connectors work”
- Section 2.3.3, “Descriptions of Debezium MongoDB connector data change events”
- Section 2.3.4, “Setting up MongoDB to work with a Debezium connector”
- Section 2.3.5, “Deployment of Debezium MongoDB connectors”
- Section 2.3.6, “Monitoring Debezium MongoDB connector performance”
- Section 2.3.7, “How Debezium MongoDB connectors handle faults and problems”
2.3.1. Overview of Debezium MongoDB connector
MongoDB’s replication mechanism provides redundancy and high availability, and is the preferred way to run MongoDB in production. The MongoDB connector captures changes in a replica set or a sharded cluster.
A MongoDB replica set consists of a set of servers that all have copies of the same data, and replication ensures that all changes made by clients to documents on the replica set’s primary are correctly applied to the replica set’s other servers, called secondaries. MongoDB replication works by having the primary record the changes in its oplog (or operation log), and then each of the secondaries reads the primary’s oplog and applies all of the operations, in order, to their own documents. When a new server is added to a replica set, that server first performs a snapshot of all of the databases and collections on the primary, and then reads the primary’s oplog to apply all changes that might have been made since it began the snapshot. This new server becomes a secondary (and able to handle queries) when it catches up to the tail of the primary’s oplog.
2.3.1.1. Description of how the MongoDB connector uses change streams to capture event records
Although the Debezium MongoDB connector does not become part of a replica set, it uses a similar replication mechanism to obtain oplog data. The main difference is that the connector does not read the oplog directly. Instead, it delegates the capture and decoding of oplog data to the MongoDB change streams feature. With change streams, the MongoDB server exposes the changes that occur in a collection as an event stream. The Debezium connector monitors the stream and then delivers the changes downstream. The first time that the connector detects a replica set, it examines the oplog to obtain the last recorded transaction, and then performs a snapshot of the primary’s databases and collections. After the connector finishes copying the data, it creates a change stream beginning from the oplog position that it read earlier.
As the MongoDB connector processes changes, it periodically records the position at which the event originated in the oplog stream. When the connector stops, it records the last oplog stream position that it processed, so that after a restart it can resume streaming from that position. In other words, the connector can be stopped, upgraded or maintained, and restarted some time later, and always pick up exactly where it left off without losing a single event. Of course, MongoDB oplogs are usually capped at a maximum size, so if the connector is stopped for long periods, operations in the oplog might be purged before the connector has a chance to read them. In this case, after a restart the connector detects the missing oplog operations, performs a snapshot, and then proceeds to stream changes.
The MongoDB connector is also quite tolerant of changes in membership and leadership of the replica sets, of additions or removals of shards within a sharded cluster, and of network problems that might cause communication failures. The connector always uses the replica set’s primary node to stream changes, so when the replica set undergoes an election and a different node becomes primary, the connector immediately stops streaming changes, connects to the new primary, and starts streaming changes using the new primary node. Similarly, if the connector is unable to communicate with the replica set primary, it attempts to reconnect (using exponential backoff so as to not overwhelm the network or replica set). After the connection is reestablished, the connector continues to stream changes from the last event that it captured. In this way the connector dynamically adjusts to changes in replica set membership, and automatically handles communication disruptions.
2.3.1.2. Description of how the MongoDB connector uses the MongoDB read preference
You specify read preferences for a MongoDB connection by setting the readPreference
parameter in the mongodb.connection.string
.
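For example, a connector configuration might set the connection string as in the following sketch. The host names and replica set name are placeholders, and secondaryPreferred is only one of the read preference modes that MongoDB supports.
"mongodb.connection.string": "mongodb://mongo1.example.com:27017,mongo2.example.com:27017/?replicaSet=rs0&readPreference=secondaryPreferred"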
2.3.2. How Debezium MongoDB connectors work
An overview of the MongoDB topologies that the connector supports is useful for planning your application.
The following topics provide details about how the Debezium MongoDB connector works:
- Section 2.3.2.1, “MongoDB topologies supported by Debezium connectors”
- Section 2.3.2.3, “How Debezium MongoDB connectors use logical names for replica sets and sharded clusters”
- Section 2.3.2.5, “How Debezium MongoDB connectors perform snapshots”
- Section 2.3.2.6, “Ad hoc snapshots”
- Section 2.3.2.7, “Incremental snapshots”
- Section 2.3.2.9, “How the Debezium MongoDB connector streams change event records”
- Section 2.3.2.11, “Default names of Kafka topics that receive Debezium MongoDB change event records”
- Section 2.3.2.12, “How event keys control topic partitioning for the Debezium MongoDB connector”
- Section 2.3.2.13, “Debezium MongoDB connector-generated events that represent transaction boundaries”
2.3.2.1. MongoDB topologies supported by Debezium connectors
The MongoDB connector supports the following MongoDB topologies:
- MongoDB replica set
The Debezium MongoDB connector can capture changes from a single MongoDB replica set. Production replica sets require a minimum of three members.
To use the MongoDB connector with a replica set, you must set the value of the
mongodb.connection.string
property in the connector configuration to the replica set connection string. When the connector is ready to begin capturing changes from a MongoDB change stream, it starts a connection task. The connection task then uses the specified connection string to establish a connection to an available replica set member.
- MongoDB sharded cluster
A MongoDB sharded cluster consists of:
- One or more shards, each deployed as a replica set;
- A separate replica set that acts as the cluster’s configuration server
- One or more routers (also called mongos) to which clients connect and that route requests to the appropriate shards.
To use the MongoDB connector with a sharded cluster, in the connector configuration, set the value of the mongodb.connection.string property to the sharded cluster connection string.
- MongoDB standalone server
- The MongoDB connector is not capable of monitoring the changes of a standalone MongoDB server, since standalone servers do not have an oplog. The connector will work if the standalone server is converted to a replica set with one member.
MongoDB does not recommend running a standalone server in production. For more information, see the MongoDB documentation.
2.3.2.2. User permissions required by Debezium connectors
To capture data from MongoDB, Debezium attaches to the database as a MongoDB user. The MongoDB user account that you create for Debezium requires specific database permissions to read from the database. The connector user requires the following permissions:
- Read from the database.
-
Run the
hello
command.
The connector user might also require the following permission:
-
Read from the
config.shards
system collection.
Database read permissions
The connector user must be able to read from all databases, or to read from a specific database, depending on the value of the connector’s capture.scope
property. Assign one of the following permissions to the user, depending on the capture.scope
setting:
capture.scope
is set todeployment
- Grant the user permission to read any database.
capture.scope
is set todatabase
-
Grant the user permission to read the database specified by the connector’s
capture.target
property. capture.scope
is set tocollection
-
Grant the user permission to read the collection specified by the connector’s
capture.target
property.
The use of the Debezium collection
option for the capture.scope
property is a Developer Preview feature. Developer Preview features are not supported by Red Hat in any way and are not functionally complete or production-ready. Do not use Developer Preview software for production or business-critical workloads. Developer Preview software provides early access to upcoming product software in advance of its possible inclusion in a Red Hat product offering. Customers can use this software to test functionality and provide feedback during the development process. Red Hat might provide ways to submit feedback on Developer Preview software without an associated SLA.
For more information about the support scope of Red Hat Developer Preview software, see Developer Preview Support Scope.
Permission to use the MongoDB hello
command
Regardless of the capture.scope
setting, the user requires permission to run the MongoDB hello command.
Permission to read the config.shards
collection
Depending on your Debezium environment, to enable the connector to perform offset consolidation, you must grant the connector user explicit permission to read the config.shards
collection. Permission to read the config.shards
collection is required for the following connector environments:
- Connectors upgraded from Debezium 2.5 or earlier.
- Connectors configured to capture changes from a sharded MongoDB cluster.
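The following mongosh sketch shows one way that a database administrator might grant such permissions. The user name, password placeholder, and role choices are assumptions; adjust them to match your capture.scope setting and your security policies.
db.getSiblingDB("admin").createUser({
  user: "debezium",
  pwd: "<password>",
  roles: [
    { role: "readAnyDatabase", db: "admin" },  // covers capture.scope set to deployment
    { role: "read", db: "config" }             // allows reading the config.shards collection
  ]
});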
2.3.2.3. How Debezium MongoDB connectors use logical names for replica sets and sharded clusters
The connector configuration property topic.prefix
serves as a logical name for the MongoDB replica set or sharded cluster. The connector uses the logical name in a number of ways: as the prefix for all topic names, and as a unique identifier when recording the change stream position of each replica set.
You should give each MongoDB connector a unique logical name that meaningfully describes the source MongoDB system. We recommend that logical names begin with an alphabetic or underscore character, and that the remaining characters be alphanumeric or underscore characters.
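For example, the connector configuration might assign the logical name as follows; fulfillment is the sample name used elsewhere in this chapter:
"topic.prefix": "fulfillment"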
2.3.2.4. How Debezium MongoDB connectors perform offset consolidation
The Debezium MongoDB connector no longer supports replica_set
connections to sharded MongoDB deployments. As a result, the offsets recorded by connector versions that used the replica_set
connection mode are incompatible with the current version.
To minimize the effects of the connection mode change, and to prevent the connector from running unnecessary snapshots, when the connector restarts after the upgrade, it runs a procedure to consolidate offsets. During this offset consolidation procedure, the connector completes the following steps to reconcile offsets that were recorded by the earlier connector version:
- Offsets that were recorded by connector versions later than 2.5 are used as-is.
-
Offsets for events that were captured in
sharded
connection mode from sharded MongoDB deployments, or from MongoDB replica set deployments, are used as-is. Shard-specific offsets that were recorded by connector versions 2.5.x and earlier are used as-is, if both of the following conditions apply:
- The offsets exist for all current database shards.
-
Offset invalidation is enabled.
If offset invalidation is disabled, the connector fails to start.
-
After the connector processes existing offsets in the preceding steps, it resumes streaming changes, and then commits offsets for new events that it captures.
If the offset consolidation procedure does not detect any existing offsets, the connector performs an initial snapshot.
2.3.2.5. How Debezium MongoDB connectors perform snapshots
When a Debezium task starts to use a replica set, it uses the connector’s logical name and the replica set name to find an offset that describes the position where the connector previously stopped reading changes. If an offset can be found and it still exists in the oplog, then the task immediately proceeds with streaming changes, starting at the recorded offset position.
However, if no offset is found, or if the oplog no longer contains that position, the task must first obtain the current state of the replica set contents by performing a snapshot. This process starts by recording the current position of the oplog and recording that as the offset (along with a flag that denotes a snapshot has been started). The task then proceeds to copy each collection, spawning as many threads as possible (up to the value of the snapshot.max.threads
configuration property) to perform this work in parallel. The connector records a separate read event for each document it sees. Each read event contains the object’s identifier, the complete state of the object, and source information about the MongoDB replica set where the object was found. The source information also includes a flag that denotes that the event was produced during a snapshot.
This snapshot will continue until it has copied all collections that match the connector’s filters. If the connector is stopped before the tasks' snapshots are completed, upon restart the connector begins the snapshot again.
Try to avoid task reassignment and reconfiguration while the connector performs snapshots of any replica sets. The connector generates log messages to report on the progress of the snapshot. To provide for the greatest control, run a separate Kafka Connect cluster for each connector.
You can find more information about snapshots in the following sections:
Setting | Description |
---|---|
| The connector performs a snapshot every time that it starts. After the snapshot completes, the connector begins to stream event records for subsequent database changes. |
| After the connector starts, it performs an initial database snapshot. |
| The connector performs a database snapshot. After the snapshot completes, the connector stops, and does not stream event records for subsequent database changes. |
|
Deprecated, see |
|
The connector captures the structure of all relevant tables, but it does not create |
| After the connector starts, it performs a snapshot only if it detects one of the following circumstances:
|
For more information, see snapshot.mode
in the table of connector configuration properties.
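For example, a connector configuration might set the mode explicitly, as in the following minimal fragment; initial is shown here only as an illustration of the property:
"snapshot.mode": "initial"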
2.3.2.6. Ad hoc snapshots
By default, a connector runs an initial snapshot operation only after it starts for the first time. Following this initial snapshot, under normal circumstances, the connector does not repeat the snapshot process. Any future change event data that the connector captures comes in through the streaming process only.
However, in some situations the data that the connector obtained during the initial snapshot might become stale, lost, or incomplete. To provide a mechanism for recapturing collection data, Debezium includes an option to perform ad hoc snapshots. You might want to perform an ad hoc snapshot after any of the following changes occur in your Debezium environment:
- The connector configuration is modified to capture a different set of collections.
- Kafka topics are deleted and must be rebuilt.
- Data corruption occurs due to a configuration error or some other problem.
You can re-run a snapshot for a collection for which you previously captured a snapshot by initiating a so-called ad-hoc snapshot. Ad hoc snapshots require the use of signaling collections. You initiate an ad hoc snapshot by sending a signal request to the Debezium signaling collection.
When you initiate an ad hoc snapshot of an existing collection, the connector appends content to the topic that already exists for the collection. If a previously existing topic was removed, Debezium can create a topic automatically if automatic topic creation is enabled.
Ad hoc snapshot signals specify the collections to include in the snapshot. The snapshot can capture the entire contents of the database, or capture only a subset of the collections in the database. Also, the snapshot can capture a subset of the contents of the collection(s) in the database.
You specify the collections to capture by sending an execute-snapshot
message to the signaling collection. Set the type of the execute-snapshot
signal to incremental
or blocking
, and provide the names of the collections to include in the snapshot, as described in the following table:
Field | Default | Value |
---|---|---|
|
|
Specifies the type of snapshot that you want to run. |
| N/A |
An array that contains regular expressions matching the fully-qualified names of the collections to include in the snapshot. |
| N/A |
An optional array that specifies a set of additional conditions that the connector evaluates to determine the subset of records to include in a snapshot.
|
| N/A | An optional string that specifies the column name that the connector uses as the primary key of a collection during the snapshot process. |
Triggering an ad hoc incremental snapshot
You initiate an ad hoc incremental snapshot by adding an entry with the execute-snapshot
signal type to the signaling collection, or by sending a signal message to a Kafka signaling topic. After the connector processes the message, it begins the snapshot operation. The snapshot process reads the first and last primary key values and uses those values as the start and end point for each collection. Based on the number of entries in the collection, and the configured chunk size, Debezium divides the collection into chunks, and proceeds to snapshot each chunk, in succession, one at a time.
For more information, see Incremental snapshots.
Triggering an ad hoc blocking snapshot
You initiate an ad hoc blocking snapshot by adding an entry with the execute-snapshot
signal type to the signaling collection or signaling topic. After the connector processes the message, it begins the snapshot operation. The connector temporarily stops streaming, and then initiates a snapshot of the specified collection, following the same process that it uses during an initial snapshot. After the snapshot completes, the connector resumes streaming.
For more information, see Blocking snapshots.
2.3.2.7. Incremental snapshots
To provide flexibility in managing snapshots, Debezium includes a supplementary snapshot mechanism, known as incremental snapshotting. Incremental snapshots rely on the Debezium mechanism for sending signals to a Debezium connector.
In an incremental snapshot, instead of capturing the full state of a database all at once, as in an initial snapshot, Debezium captures each collection in phases, in a series of configurable chunks. You can specify the collections that you want the snapshot to capture and the size of each chunk. The chunk size determines the number of rows that the snapshot collects during each fetch operation on the database. The default chunk size for incremental snapshots is 1024 rows.
As an incremental snapshot proceeds, Debezium uses watermarks to track its progress, maintaining a record of each collection row that it captures. This phased approach to capturing data provides the following advantages over the standard initial snapshot process:
- You can run incremental snapshots in parallel with streamed data capture, instead of postponing streaming until the snapshot completes. The connector continues to capture near real-time events from the change log throughout the snapshot process, and neither operation blocks the other.
- If the progress of an incremental snapshot is interrupted, you can resume it without losing any data. After the process resumes, the snapshot begins at the point where it stopped, rather than recapturing the collection from the beginning.
-
You can run an incremental snapshot on demand at any time, and repeat the process as needed to adapt to database updates. For example, you might re-run a snapshot after you modify the connector configuration to add a collection to its
collection.include.list
property.
Incremental snapshot process
When you run an incremental snapshot, Debezium sorts each collection by primary key and then splits the collection into chunks based on the configured chunk size. Working chunk by chunk, it then captures each collection row in a chunk. For each row that it captures, the snapshot emits a READ
event. That event represents the value of the row when the snapshot for the chunk began.
As a snapshot proceeds, it’s likely that other processes continue to access the database, potentially modifying collection records. To reflect such changes, INSERT
, UPDATE
, or DELETE
operations are committed to the transaction log as per usual. Similarly, the ongoing Debezium streaming process continues to detect these change events and emits corresponding change event records to Kafka.
How Debezium resolves collisions among records with the same primary key
In some cases, the UPDATE
or DELETE
events that the streaming process emits are received out of sequence. That is, the streaming process might emit an event that modifies a collection row before the snapshot captures the chunk that contains the READ
event for that row. When the snapshot eventually emits the corresponding READ
event for the row, its value is already superseded. To ensure that incremental snapshot events that arrive out of sequence are processed in the correct logical order, Debezium employs a buffering scheme for resolving collisions. Only after collisions between the snapshot events and the streamed events are resolved does Debezium emit an event record to Kafka.
Snapshot window
To assist in resolving collisions between late-arriving READ
events and streamed events that modify the same collection row, Debezium employs a so-called snapshot window. The snapshot window demarcates the interval during which an incremental snapshot captures data for a specified collection chunk. Before the snapshot window for a chunk opens, Debezium follows its usual behavior and emits events from the transaction log directly downstream to the target Kafka topic. But from the moment that the snapshot for a particular chunk opens, until it closes, Debezium performs a de-duplication step to resolve collisions between events that have the same primary key.
For each data collection, Debezium emits two types of events, and stores the records for them both in a single destination Kafka topic. The snapshot records that it captures directly from a table are emitted as READ
operations. Meanwhile, as users continue to update records in the data collection, and the transaction log is updated to reflect each commit, Debezium emits UPDATE
or DELETE
operations for each change.
As the snapshot window opens, and Debezium begins processing a snapshot chunk, it delivers snapshot records to a memory buffer. During the snapshot windows, the primary keys of the READ
events in the buffer are compared to the primary keys of the incoming streamed events. If no match is found, the streamed event record is sent directly to Kafka. If Debezium detects a match, it discards the buffered READ
event, and writes the streamed record to the destination topic, because the streamed event logically supersedes the static snapshot event. After the snapshot window for the chunk closes, the buffer contains only READ
events for which no related transaction log events exist. Debezium emits these remaining READ
events to the collection’s Kafka topic.
The connector repeats the process for each snapshot chunk.
Currently, you can use either of the following methods to initiate an incremental snapshot:
- Send an ad hoc snapshot signal to the signaling collection on the source database.
- Send a message to the configured Kafka signaling topic.
Incremental snapshots require that the primary key for each table is stably ordered. Because String
fields can include special characters, and are subject to different encodings, string-based primary keys do not lend themselves to sorting in a consistent and predictable order. When performing incremental snapshots, it’s best to set the primary key to a data type other than String
.
For more information about BSON string types in MongoDB, see the MongoDB documentation.
Incremental snapshots for sharded clusters
To use incremental snapshots with sharded MongoDB clusters, you must set incremental.snapshot.chunk.size
to a value that is high enough to compensate for the increased complexity of change stream pipelines.
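For example, the following minimal configuration fragment raises the chunk size; the value shown is an assumption for illustration, not a tuning recommendation:
"incremental.snapshot.chunk.size": "4096"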
2.3.2.7.1. Triggering an incremental snapshot
To initiate an incremental snapshot, you can send an ad hoc snapshot signal to the signaling collection on the source database.
You submit a signal to the signaling collection by using the MongoDB insert()
method.
After Debezium detects the change in the signaling collection, it reads the signal, and runs the requested snapshot operation.
The query that you submit specifies the collections to include in the snapshot, and, optionally, specifies the type of snapshot operation. Currently, the only valid options for snapshots operations are incremental
and blocking
.
To specify the collections to include in the snapshot, provide a data-collections
array that lists the collections or an array of regular expressions used to match collections, for example,{"data-collections": ["public.Collection1", "public.Collection2"]}
The data-collections
array for an incremental snapshot signal has no default value. If the data-collections
array is empty, Debezium detects that no action is required and does not perform a snapshot.
If the name of a collection that you want to include in a snapshot contains a dot (.
) in the name of the database, schema, or table, to add the collection to the data-collections
array, you must escape each part of the name in double quotes.
For example, to include a data collection that exists in the public
database, and that has the name My.Collection
, use the following format: "public"."My.Collection"
.
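Expressed as the data-collections entry of a signal, that escaped name would look like the following sketch:
{"data-collections": ["\"public\".\"My.Collection\""]}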
Prerequisites
- A signaling data collection exists on the source database.
-
The signaling data collection is specified in the
signal.data.collection
property.
Using a source signaling channel to trigger an incremental snapshot
Insert a snapshot signal document into the signaling collection:
<signalDataCollection>.insert({"id" : _<idNumber>, "type" : <snapshotType>, "data" : {"data-collections" : ["<collectionName>", "<collectionName>"], "type" : <snapshotType>, "additional-conditions" : [{"data-collection" : "<collectionName>", "filter" : "<additional-condition>"}] }});
For example,
db.debeziumSignal.insert({ 1 "type" : "execute-snapshot", 2 3 "data" : { "data-collections" : ["\"public\".\"Collection1\"", "\"public\".\"Collection2\""], 4 "type" : "incremental", 5 "additional-conditions" : [{"data-collection" : "schema1.table1", "filter" : "color='blue'"}] 6 } });
The values of the
id
,type
, anddata
parameters in the command correspond to the fields of the signaling collection.The following table describes the parameters in the example:
Table 2.57. Descriptions of fields in a MongoDB insert() command for sending an incremental snapshot signal to the signaling collection Item Value Description 1
db.debeziumSignal
Specifies the fully-qualified name of the signaling collection on the source database.
2
null
The
_id
parameter specifies an arbitrary string that is assigned as theid
identifier for the signal request.
The insert method in the preceding example omits use of the optional_id
parameter. Because the document does not explicitly assign a value for the parameter, the arbitrary id that MongoDB automatically assigns to the document becomes theid
identifier for the signal request.
Use this string to identify logging messages to entries in the signaling collection. Debezium does not use this identifier string. Rather, during the snapshot, Debezium generates its ownid
string as a watermarking signal.3
execute-snapshot
The type parameter specifies the operation that the signal is intended to trigger.
4
data-collections
A required component of the
data
field of a signal that specifies an array of collection names or regular expressions to match collection names to include in the snapshot.
The array lists regular expressions which match collections by their fully-qualified names, using the same format as you use to specify the name of the connector’s signaling collection in thesignal.data.collection
configuration property.5
incremental
An optional
type
component of thedata
field of a signal that specifies the type of snapshot operation to run.
Currently supports theincremental
andblocking
types.
If you do not specify a value, the connector runs an incremental snapshot.6
additional-conditions
An optional array that specifies a set of additional conditions that the connector evaluates to determine the subset of records to include in a snapshot.
Each element in theadditional-conditions
array is an object that includes the following keys:data-collection
:: The fully-qualified name of the data collection for which the filter will be applied.filter
:: Specifies the column values that must be present in a data collection record for the snapshot to include it, for example,"color='blue'"
.
The following example shows the JSON for an incremental snapshot event that is captured by a connector.
Example: Incremental snapshot event message
{ "before":null, "after": { "pk":"1", "value":"New data" }, "source": { ... "snapshot":"incremental" 1 }, "op":"r", 2 "ts_ms":"1620393591654", "ts_us":"1620393591654962", "ts_ns":"1620393591654962147", "transaction":null }
Item | Field name | Description |
---|---|---|
1 |
|
Specifies the type of snapshot operation to run. |
2 |
|
Specifies the event type. |
2.3.2.7.2. Using the Kafka signaling channel to trigger an incremental snapshot
You can send a message to the configured Kafka topic to request the connector to run an ad hoc incremental snapshot.
The key of the Kafka message must match the value of the topic.prefix
connector configuration option.
The value of the message is a JSON object with type
and data
fields.
The signal type is execute-snapshot
, and the data
field must have the following fields:
Field | Default | Value |
---|---|---|
|
|
The type of the snapshot to be executed. Currently Debezium supports the |
| N/A |
An array of comma-separated regular expressions that match the fully-qualified names of tables to include in the snapshot. |
| N/A |
An optional array of additional conditions that specifies criteria that the connector evaluates to designate a subset of records to include in a snapshot. |
Example 2.17. An execute-snapshot
Kafka message
Key = `test_connector` Value = `{"type":"execute-snapshot","data": {"data-collections": ["{collection-container}.table1", "{collection-container}.table2"], "type": "INCREMENTAL"}}`
Ad hoc incremental snapshots with additional-conditions
Debezium uses the additional-conditions
field to select a subset of a collection’s content.
Typically, when Debezium runs a snapshot, it runs a SQL query such as:
SELECT * FROM <tableName> ….
When the snapshot request includes an additional-conditions
property, the data-collection
and filter
parameters of the property are appended to the SQL query, for example:
SELECT * FROM <data-collection> WHERE <filter> ….
For example, given a products
collection with the columns id
(primary key), color
, and brand
, if you want a snapshot to include only content for which color='blue'
, when you request the snapshot, you could add the additional-conditions
property to filter the content:
Key = `test_connector` Value = `{"type":"execute-snapshot","data": {"data-collections": ["db1.products"], "type": "INCREMENTAL", "additional-conditions": [{"data-collection": "db1.products" ,"filter":"color='blue'"}]}}`
You can also use the additional-conditions
property to pass conditions based on multiple columns. For example, using the same products
collection as in the previous example, if you want a snapshot to include only the content from the products
collection for which color='blue'
, and brand='MyBrand'
, you could send the following request:
Key = `test_connector` Value = `{"type":"execute-snapshot","data": {"data-collections": ["db1.products"], "type": "INCREMENTAL", "additional-conditions": [{"data-collection": "db1.products" ,"filter":"color='blue' AND brand='MyBrand'"}]}}`
2.3.2.7.3. Stopping an incremental snapshot
In some situations, it might be necessary to stop an incremental snapshot. For example, you might realize that snapshot was not configured correctly, or maybe you want to ensure that resources are available for other database operations. You can stop a snapshot that is already running by sending a signal to the collection on the source database.
You submit a stop snapshot signal to the signaling collection by inserting a stop snapshot signal document into it. The stop snapshot signal that you submit specifies the type
of the snapshot operation as incremental
, and, optionally specifies the collections that you want to omit from the currently running snapshot. After Debezium detects the change in the signaling collection, it reads the signal, and stops the incremental snapshot operation if it’s in progress.
Additional resources
You can also stop an incremental snapshot by sending a JSON message to the Kafka signaling topic.
Prerequisites
- A signaling data collection exists on the source database.
-
The signaling data collection is specified in the
signal.data.collection
property.
Using a source signaling channel to stop an incremental snapshot
Insert a stop snapshot signal document into the signaling collection:
<signalDataCollection>.insert({"id" : _<idNumber>, "type" : "stop-snapshot", "data" : {"data-collections" : ["<collectionName>", "<collectionName>"], "type" : "incremental"}});
For example,
db.debeziumSignal.insert({ 1 "type" : "stop-snapshot", 2 3 "data" : { "data-collections" : ["\"public\".\"Collection1\"", "\"public\".\"Collection2\""], 4 "type" : "incremental" } 5 });
The values of the
id
,type
, anddata
parameters in the signal command correspond to the fields of the signaling collection.The following table describes the parameters in the example:
Table 2.59. Descriptions of fields in an insert command for sending a stop incremental snapshot document to the signaling collection Item Value Description 1
db.debeziumSignal
Specifies the fully-qualified name of the signaling collection on the source database.
2
null
The insert method in the preceding example omits use of the optional
_id
parameter. Because the document does not explicitly assign a value for the parameter, the arbitrary id that MongoDB automatically assigns to the document becomes theid
identifier for the signal request.
Use this string to identify logging messages to entries in the signaling collection. Debezium does not use this identifier string.3
stop-snapshot
The
type
parameter specifies the operation that the signal is intended to trigger.
4
data-collections
An optional component of the
data
field of a signal that specifies an array of collection names or regular expressions to match collection names to remove from the snapshot.
The array lists regular expressions that match collections by their fully-qualified names in the formatdatabase.collection
.If you omit the
data-collections
array from thedata
field, the signal stops the entire incremental snapshot that is in progress.5
incremental
A required component of the
data
field of a signal that specifies the type of snapshot operation that is to be stopped.
Currently, the only valid option isincremental
.
If you do not specify atype
value, the signal fails to stop the incremental snapshot.
2.3.2.7.4. Using the Kafka signaling channel to stop an incremental snapshot
You can send a signal message to the configured Kafka signaling topic to stop an ad hoc incremental snapshot.
The key of the Kafka message must match the value of the topic.prefix
connector configuration option.
The value of the message is a JSON object with type
and data
fields.
The signal type is stop-snapshot
, and the data
field must have the following fields:
Field | Default | Value |
---|---|---|
|
|
The type of the snapshot to be executed. Currently Debezium supports only the |
| N/A |
| An optional array of comma-separated regular expressions that match the fully-qualified names of the collections to remove from the snapshot. |
The following example shows a typical stop-snapshot
Kafka message:
Key = `test_connector` Value = `{"type":"stop-snapshot","data": {"data-collections": ["db1.table1", "db1.table2"], "type": "INCREMENTAL"}}`
2.3.2.8. Blocking snapshots
To provide more flexibility in managing snapshots, Debezium includes a supplementary ad hoc snapshot mechanism, known as a blocking snapshot. Blocking snapshots rely on the Debezium mechanism for sending signals to a Debezium connector.
A blocking snapshot behaves just like an initial snapshot, except that you can trigger it at run time.
You might want to run a blocking snapshot rather than use the standard initial snapshot process in the following situations:
- You add a new collection and you want to complete the snapshot while the connector is running.
- You add a large collection, and you want the snapshot to complete in less time than is possible with an incremental snapshot.
Blocking snapshot process
When you run a blocking snapshot, Debezium stops streaming, and then initiates a snapshot of the specified collection, following the same process that it uses during an initial snapshot. After the snapshot completes, the streaming is resumed.
Configure snapshot
You can set the following properties in the data
component of a signal:
- data-collections: to specify which collections must be included in the snapshot.
- additional-conditions: to specify different filters for different collections.
- The data-collection property is the fully-qualified name of the collection for which the filter will be applied.
- The filter property will have the same value used in the snapshot.select.statement.overrides property.
For example:
{"type": "blocking", "data-collections": ["schema1.table1", "schema1.table2"], "additional-conditions": [{"data-collection": "schema1.table1", "filter": "SELECT * FROM [schema1].[table1] WHERE column1 = 0 ORDER BY column2 DESC"}, {"data-collection": "schema1.table2", "filter": "SELECT * FROM [schema1].[table2] WHERE column2 > 0"}]}
Possible duplicates
A delay might exist between the time that you send the signal to trigger the snapshot, and the time when streaming stops and the snapshot starts. As a result of this delay, after the snapshot completes, the connector might emit some event records that duplicate records captured by the snapshot.
2.3.2.9. How the Debezium MongoDB connector streams change event records
After the connector task for a replica set records an offset, it uses the offset to determine the position in the oplog where it should start streaming changes. The task then (depending on the configuration) either connects to the replica set’s primary node or connects to a replica-set-wide change stream and starts streaming changes from that position. It processes all insert, update, and delete operations, and converts them into Debezium change events. Each change event includes the position in the oplog where the operation was found, and the connector periodically records this as its most recent offset. The interval at which the offset is recorded is governed by offset.flush.interval.ms
, which is a Kafka Connect worker configuration property.
When the connector is stopped gracefully, the last offset processed is recorded so that, upon restart, the connector will continue exactly where it left off. If the connector’s tasks terminate unexpectedly, however, then the tasks might have processed and generated events after the last offset was recorded; upon restart, the connector begins at the last recorded offset, possibly generating some of the same events that were previously generated just prior to the crash.
When all components in a Kafka pipeline operate nominally, Kafka consumers receive every message exactly once. However, when things go wrong, Kafka can only guarantee that consumers receive every message at least once. To avoid unexpected results, consumers must be able to handle duplicate messages.
As mentioned earlier, the connector tasks always use the replica set’s primary node to stream changes from the oplog, ensuring that the connector sees the most up-to-date operations possible and can capture the changes with lower latency than if secondaries were used instead. When the replica set elects a new primary, the connector immediately stops streaming changes, connects to the new primary, and starts streaming changes from the new primary node at the same position. Likewise, if the connector experiences any problems communicating with the replica set members, it tries to reconnect, by using exponential backoff so as to not overwhelm the replica set, and once connected it continues streaming changes from where it last left off. In this way, the connector is able to dynamically adjust to changes in replica set membership and automatically handle communication failures.
To summarize, the MongoDB connector continues running in most situations. Communication problems might cause the connector to wait until the problems are resolved.
2.3.2.10. MongoDB support for populating the before
field in Debezium change event
In MongoDB 6.0 and later, you can configure change streams to emit the pre-image state of a document to populate the before
field for MongoDB change events. To enable the use of pre-images in MongoDB, you must set the changeStreamPreAndPostImages
for a collection by using db.createCollection()
, create
, or collMod
. To enable the Debezium MongoDB connector to include pre-images in change events, set the capture.mode
for the connector to one of the *_with_pre_image
options.
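As an illustration, the following mongosh command enables pre-images on a hypothetical customers collection; you would then also set the connector’s capture.mode to one of the *_with_pre_image values:
db.runCommand({ collMod: "customers", changeStreamPreAndPostImages: { enabled: true } });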
The size of a MongoDB change stream event is limited to 16 megabytes. The use of pre-images thus increases the likelihood of exceeding this threshold, which can lead to failures. For information about how to avoid exceeding the change stream limit, see the MongoDB documentation.
2.3.2.11. Default names of Kafka topics that receive Debezium MongoDB change event records
The MongoDB connector writes events for all insert, update, and delete operations on documents in each collection to a single Kafka topic. The names of the Kafka topics always take the form logicalName.databaseName.collectionName, where logicalName is the logical name of the connector as specified with the topic.prefix
configuration property, databaseName is the name of the database where the operation occurred, and collectionName is the name of the MongoDB collection in which the affected document existed.
For example, consider a MongoDB replica set with an inventory
database that contains four collections: products
, products_on_hand
, customers
, and orders
. If the connector monitoring this database were given a logical name of fulfillment
, then the connector would produce events on these four Kafka topics:
-
fulfillment.inventory.products
-
fulfillment.inventory.products_on_hand
-
fulfillment.inventory.customers
-
fulfillment.inventory.orders
Notice that the topic names do not incorporate the replica set name or shard name. As a result, all changes to a sharded collection (where each shard contains a subset of the collection’s documents) go to the same Kafka topic.
You can set up Kafka to auto-create the topics as they are needed. If not, then you must use Kafka administration tools to create the topics before starting the connector.
2.3.2.12. How event keys control topic partitioning for the Debezium MongoDB connector
The MongoDB connector does not make any explicit determination about how to partition topics for events. Instead, it allows Kafka to determine how to partition topics based on event keys. You can change Kafka’s partitioning logic by defining the name of the Partitioner
implementation in the Kafka Connect worker configuration.
Kafka maintains total order only for events written to a single topic partition. Partitioning the events by key does mean that all events with the same key always go to the same partition. This ensures that all events for a specific document are always totally ordered.
2.3.2.13. Debezium MongoDB connector-generated events that represent transaction boundaries
Debezium can generate events that represent transaction metadata boundaries and that enrich change data event messages.
Debezium registers and receives metadata only for transactions that occur after you deploy the connector. Metadata for transactions that occur before you deploy the connector is not available.
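Transaction metadata events are not emitted by default; as a sketch, the following configuration fragment shows the property that enables them:
"provide.transaction.metadata": "true"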
For every transaction BEGIN
and END
, Debezium generates an event that contains the following fields:
status
-
BEGIN
orEND
id
- String representation of unique transaction identifier.
event_count
(forEND
events)- Total number of events emitted by the transaction.
data_collections
(forEND
events)-
An array of pairs of
data_collection
andevent_count
that provides number of events emitted by changes originating from given data collection.
The following example shows a typical message:
{ "status": "BEGIN", "id": "1462833718356672513", "event_count": null, "data_collections": null } { "status": "END", "id": "1462833718356672513", "event_count": 2, "data_collections": [ { "data_collection": "rs0.testDB.collectiona", "event_count": 1 }, { "data_collection": "rs0.testDB.collectionb", "event_count": 1 } ] }
Unless overridden via the topic.transaction
option, transaction events are written to the topic named <topic.prefix>
.transaction
.
Change data event enrichment
When transaction metadata is enabled, the data message Envelope
is enriched with a new transaction
field. This field provides information about every event in the form of a composite of fields:
id
- String representation of unique transaction identifier.
total_order
- The absolute position of the event among all events generated by the transaction.
data_collection_order
- The per-data collection position of the event among all events that were emitted by the transaction.
Following is an example of what a message looks like:
{ "after": "{\"_id\" : {\"$numberLong\" : \"1004\"},\"first_name\" : \"Anne\",\"last_name\" : \"Kretchmar\",\"email\" : \"annek@noanswer.org\"}", "source": { ... }, "op": "c", "ts_ms": "1580390884335", "ts_us": "1580390884335486", "ts_ns": "1580390884335486281", "transaction": { "id": "1462833718356672513", "total_order": "1", "data_collection_order": "1" } }
2.3.3. Descriptions of Debezium MongoDB connector data change events
The Debezium MongoDB connector generates a data change event for each document-level operation that inserts, updates, or deletes data. Each event contains a key and a value. The structure of the key and the value depends on the collection that was changed.
Debezium and Kafka Connect are designed around continuous streams of event messages. However, the structure of these events may change over time, which can be difficult for consumers to handle. To address this, each event contains the schema for its content or, if you are using a schema registry, a schema ID that a consumer can use to obtain the schema from the registry. This makes each event self-contained.
The following skeleton JSON shows the basic four parts of a change event. However, how you configure the Kafka Connect converter that you choose to use in your application determines the representation of these four parts in change events. A schema
field is in a change event only when you configure the converter to produce it. Likewise, the event key and event payload are in a change event only if you configure a converter to produce it. If you use the JSON converter and you configure it to produce all four basic change event parts, change events have this structure:
{ "schema": { 1 ... }, "payload": { 2 ... }, "schema": { 3 ... }, "payload": { 4 ... }, }
Item | Field name | Description |
---|---|---|
1 |
|
The first |
2 |
|
The first |
3 |
|
The second |
4 |
|
The second |
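As a sketch, the following Kafka Connect converter settings produce all four parts shown above by enabling the JSON converter with schemas for both keys and values; other converters or a schema registry change the representation accordingly:
"key.converter": "org.apache.kafka.connect.json.JsonConverter",
"key.converter.schemas.enable": "true",
"value.converter": "org.apache.kafka.connect.json.JsonConverter",
"value.converter.schemas.enable": "true"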
By default, the connector streams change event records to topics with names that are the same as the event’s originating collection. See topic names.
The MongoDB connector ensures that all Kafka Connect schema names adhere to the Avro schema name format. This means that the logical server name must start with a Latin letter or an underscore, that is, a-z, A-Z, or _. Each remaining character in the logical server name and each character in the database and collection names must be a Latin letter, a digit, or an underscore, that is, a-z, A-Z, 0-9, or _. If there is an invalid character, it is replaced with an underscore character.
This can lead to unexpected conflicts if the logical server name, a database name, or a collection name contains invalid characters, and the only characters that distinguish names from one another are invalid and thus replaced with underscores.
For more information, see the following topics:
2.3.3.1. About keys in Debezium MongoDB change events
A change event’s key contains the schema for the changed document’s key and the changed document’s actual key. For a given collection, both the schema and its corresponding payload contain a single id
field. The value of this field is the document’s identifier represented as a string that is derived from MongoDB extended JSON serialization strict mode.
Consider a connector with a logical name of fulfillment
, a replica set containing an inventory
database, and a customers
collection that contains documents such as the following.
Example document
{ "_id": 1004, "first_name": "Anne", "last_name": "Kretchmar", "email": "annek@noanswer.org" }
Example change event key
Every change event that captures a change to the customers
collection has the same event key schema. For as long as the customers
collection has the previous definition, every change event that captures a change to the customers
collection has the following key structure. In JSON, it looks like this:
{ "schema": { 1 "type": "struct", "name": "fulfillment.inventory.customers.Key", 2 "optional": false, 3 "fields": [ 4 { "field": "id", "type": "string", "optional": false } ] }, "payload": { 5 "id": "1004" } }
Item | Field name | Description |
---|---|---|
1 | schema | The schema portion of the key specifies a Kafka Connect schema that describes what is in the key's payload portion. |
2 | fulfillment.inventory.customers.Key | Name of the schema that defines the structure of the key's payload. This schema describes the structure of the key for the document that was changed. Key schema names have the format connector-name.database-name.collection-name.Key. In this example, fulfillment is the name of the connector that generated this event, inventory is the database, and customers is the collection that contains the document that was changed. |
3 | optional | Indicates whether the event key must contain a value in its payload field. In this example, a value in the key's payload is required. |
4 | fields | Specifies each field that is expected in the payload, including each field's name, type, and whether it is required. |
5 | payload | Contains the key for the document for which this change event was generated. In this example, the key contains a single id field of type string whose value is 1004. |
This example uses a document with an integer identifier, but any valid MongoDB document identifier works the same way, including a document identifier. For a document identifier, an event key’s payload.id
value is a string that represents the updated document’s original _id
field as a MongoDB extended JSON serialization that uses strict mode. The following table provides examples of how different types of _id
fields are represented.
Type | MongoDB _id Value | Key’s payload |
---|---|---|
Integer | 1234 | { "id" : "1234" } |
Float | 12.34 | { "id" : "12.34" } |
String | "1234" | { "id" : "\"1234\"" } |
Document | an embedded document used as the identifier | { "id" : "<the document rendered as a strict-mode extended JSON string>" } |
ObjectId | an ObjectId value | { "id" : "{\"$oid\" : \"<hex string>\"}" } |
Binary | a binary value | { "id" : "{\"$binary\" : \"<base64 data>\", \"$type\" : \"<subtype>\"}" } |
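As an illustration of consuming such keys, the following sketch parses the id string back into a JSON value by using the Jackson library (an assumption on the consumer side, not a Debezium API); the ObjectId hex string is an invented example.

```java
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class ChangeEventKeyIds {

    private static final ObjectMapper MAPPER = new ObjectMapper();

    // The "id" field of a key payload is always a string; its content is the
    // strict-mode extended JSON form of the original _id, so parsing it
    // recovers a scalar or a document, depending on the identifier type.
    static JsonNode parseId(String id) throws Exception {
        return MAPPER.readTree(id);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(parseId("1004"));                                      // integer _id
        System.out.println(parseId("\"1004\""));                                  // string _id
        System.out.println(parseId("{\"$oid\" : \"596e275826f08b2730779e1f\"}")); // ObjectId _id (invented value)
    }
}
```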
2.3.3.2. About values in Debezium MongoDB change events
The value in a change event is a bit more complicated than the key. Like the key, the value has a schema
section and a payload
section. The schema
section contains the schema that describes the Envelope
structure of the payload
section, including its nested fields. Change events for operations that create, update or delete data all have a value payload with an envelope structure.
Consider the same sample document that was used to show an example of a change event key:
Example document
{ "_id": 1004, "first_name": "Anne", "last_name": "Kretchmar", "email": "annek@noanswer.org" }
The value portion of a change event for a change to this document is described for each event type:
create events
The following example shows the value portion of a change event that the connector generates for an operation that creates data in the customers
collection:
{ "schema": { 1 "type": "struct", "fields": [ { "type": "string", "optional": true, "name": "io.debezium.data.Json", 2 "version": 1, "field": "after" }, { "type": "string", "optional": true, "name": "io.debezium.data.Json", "version": 1, "field": "patch" }, { "type": "struct", "fields": [ { "type": "string", "optional": false, "field": "version" }, { "type": "string", "optional": false, "field": "connector" }, { "type": "string", "optional": false, "field": "name" }, { "type": "int64", "optional": false, "field": "ts_ms" }, { "type": "int64", "optional": false, "field": "ts_us" }, { "type": "int64", "optional": false, "field": "ts_ns" }, { "type": "boolean", "optional": true, "default": false, "field": "snapshot" }, { "type": "string", "optional": false, "field": "db" }, { "type": "string", "optional": false, "field": "rs" }, { "type": "string", "optional": false, "field": "collection" }, { "type": "int32", "optional": false, "field": "ord" }, { "type": "int64", "optional": true, "field": "h" } ], "optional": false, "name": "io.debezium.connector.mongo.Source", 3 "field": "source" }, { "type": "string", "optional": true, "field": "op" }, { "type": "int64", "optional": true, "field": "ts_ms" }, { "type": "int64", "optional": true, "field": "ts_us" }, { "type": "int64", "optional": true, "field": "ts_ns" } ], "optional": false, "name": "dbserver1.inventory.customers.Envelope" 4 }, "payload": { 5 "after": "{\"_id\" : {\"$numberLong\" : \"1004\"},\"first_name\" : \"Anne\",\"last_name\" : \"Kretchmar\",\"email\" : \"annek@noanswer.org\"}", 6 "source": { 7 "version": "2.7.3.Final", "connector": "mongodb", "name": "fulfillment", "ts_ms": 1558965508000, "ts_ms": 1558965508000000, "ts_ms": 1558965508000000000, "snapshot": false, "db": "inventory", "rs": "rs0", "collection": "customers", "ord": 31, "h": 1546547425148721999 }, "op": "c", 8 "ts_ms": 1558965515240, 9 "ts_us": 1558965515240142, 10 "ts_ns": 1558965515240142879, 11 } }
Item | Field name | Description |
---|---|---|
1 |
| The value’s schema, which describes the structure of the value’s payload. A change event’s value schema is the same in every change event that the connector generates for a particular collection. |
2 |
|
In the |
3 |
|
|
4 |
|
|
5 |
|
The value’s actual data. This is the information that the change event is providing. |
6 |
|
An optional field that specifies the state of the document after the event occurred. In this example, the |
7 |
| Mandatory field that describes the source metadata for the event. This field contains information that you can use to compare this event with other events, with regard to the origin of the events, the order in which the events occurred, and whether events were part of the same transaction. The source metadata includes:
|
8 |
|
Mandatory string that describes the type of operation that caused the connector to generate the event. In this example, c indicates that the operation created a document. |
9 |
|
Optional field that displays the time at which the connector processed the event. The time is based on the system clock in the JVM running the Kafka Connect task. |
10 |
| Optional field that displays the time at which the connector processed the event, in microseconds. The time is based on the system clock in the JVM running the Kafka Connect task. |
11 |
| Optional field that displays the time at which the connector processed the event, in nanoseconds. The time is based on the system clock in the JVM running the Kafka Connect task. |
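Because the after field is itself a JSON string, a consumer parses the event value twice. The following sketch assumes the Jackson library and uses an abbreviated event value for illustration; it is not part of Debezium.

```java
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class CreateEventReader {

    private static final ObjectMapper MAPPER = new ObjectMapper();

    public static void main(String[] args) throws Exception {
        // Abbreviated event value; in practice this is the Kafka record value.
        String value = "{\"payload\": {\"op\": \"c\", "
                + "\"after\": \"{\\\"_id\\\" : {\\\"$numberLong\\\" : \\\"1004\\\"}, \\\"first_name\\\" : \\\"Anne\\\"}\"}}";

        JsonNode payload = MAPPER.readTree(value).get("payload");
        String op = payload.get("op").asText();                             // "c" = create

        // The "after" field holds the document as a JSON string, so parse it a second time.
        JsonNode after = MAPPER.readTree(payload.get("after").asText());
        System.out.println(op + " -> " + after.get("first_name").asText()); // c -> Anne
    }
}
```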
update events
The value of a change event for an update in the sample customers
collection has the same schema as a create event for that collection. Likewise, the event value’s payload has the same structure. However, the event value payload contains different values in an update event. An update event includes an after
value only if the capture.mode
option is set to change_streams_update_full
. A before
value is provided if the capture.mode
option is set to one of the *_with_pre_image
options. There is a new structured field updateDescription
with a few additional fields in this case:
-
updatedFields
is a string field that contains the JSON representation of the updated document fields with their values -
removedFields
is a list of field names that were removed from the document -
truncatedArrays
is a list of arrays in the document that were truncated
Here is an example of a change event value in an event that the connector generates for an update in the customers
collection:
{ "schema": { ... }, "payload": { "op": "u", 1 "ts_ms": 1465491461815, 2 "ts_us": 1465491461815698, 3 "ts_ns": 1465491461815698142, 4 "before":"{\"_id\": {\"$numberLong\": \"1004\"},\"first_name\": \"unknown\",\"last_name\": \"Kretchmar\",\"email\": \"annek@noanswer.org\"}", 5 "after":"{\"_id\": {\"$numberLong\": \"1004\"},\"first_name\": \"Anne Marie\",\"last_name\": \"Kretchmar\",\"email\": \"annek@noanswer.org\"}", 6 "updateDescription": { "removedFields": null, "updatedFields": "{\"first_name\": \"Anne Marie\"}", 7 "truncatedArrays": null }, "source": { 8 "version": "2.7.3.Final", "connector": "mongodb", "name": "fulfillment", "ts_ms": 1558965508000, "ts_us": 1558965508000000, "ts_ns": 1558965508000000000, "snapshot": false, "db": "inventory", "rs": "rs0", "collection": "customers", "ord": 1, "h": null, "tord": null, "stxnid": null, "lsid":"{\"id\": {\"$binary\": \"FA7YEzXgQXSX9OxmzllH2w==\",\"$type\": \"04\"},\"uid\": {\"$binary\": \"47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU=\",\"$type\": \"00\"}}", "txnNumber":1 } } }
Item | Field name | Description |
---|---|---|
1 |
|
Mandatory string that describes the type of operation that caused the connector to generate the event. In this example, u indicates that the operation updated a document. |
2 |
|
Optional field that displays the time at which the connector processed the event. The time is based on the system clock in the JVM running the Kafka Connect task. |
3 |
| Contains the JSON string representation of the actual MongoDB document before the change. An update event value contains a before field only if the capture.mode option is set to one of the *_with_pre_image options. |
4 |
|
Contains the JSON string representation of the actual MongoDB document. |
5 |
|
Contains the JSON string representation of the updated field values of the document. In this example, the update changed the first_name field to the value Anne Marie. |
6 |
| Mandatory field that describes the source metadata for the event. This field contains the same information as a create event for the same collection, but the values are different since this event is from a different position in the oplog. The source metadata includes:
|
The after value in the event should be treated as the point-in-time value of the document. The value is not calculated dynamically but is read from the collection. If multiple updates closely follow one another, it is therefore possible that all of the resulting update events contain the same after value, which represents the last value stored in the document.
If your application depends on gradual change evolution, rely on updateDescription only.
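The following sketch shows one way a consumer could apply updateDescription incrementally, using the Jackson library (an assumption; Debezium does not provide this helper). It handles only top-level field names; nested field paths such as address.city would need additional handling.

```java
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.node.ObjectNode;

import java.util.Iterator;
import java.util.List;
import java.util.Map;

public class UpdateDescriptionApplier {

    private static final ObjectMapper MAPPER = new ObjectMapper();

    // Applies an update event's updateDescription to a locally kept document.
    // updatedFieldsJson is the JSON string from updateDescription.updatedFields;
    // removedFields is the list from updateDescription.removedFields (may be null).
    static void apply(ObjectNode localDocument, String updatedFieldsJson, List<String> removedFields) throws Exception {
        if (updatedFieldsJson != null) {
            JsonNode updated = MAPPER.readTree(updatedFieldsJson);
            Iterator<Map.Entry<String, JsonNode>> fields = updated.fields();
            while (fields.hasNext()) {
                Map.Entry<String, JsonNode> field = fields.next();
                localDocument.set(field.getKey(), field.getValue());
            }
        }
        if (removedFields != null) {
            removedFields.forEach(localDocument::remove);
        }
    }

    public static void main(String[] args) throws Exception {
        ObjectNode doc = (ObjectNode) MAPPER.readTree(
                "{\"_id\": 1004, \"first_name\": \"Anne\", \"last_name\": \"Kretchmar\"}");
        apply(doc, "{\"first_name\": \"Anne Marie\"}", null);
        System.out.println(doc); // {"_id":1004,"first_name":"Anne Marie","last_name":"Kretchmar"}
    }
}
```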
delete events
The value in a delete change event has the same schema
portion as create and update events for the same collection. The payload
portion in a delete event contains values that are different from create and update events for the same collection. In particular, a delete event contains neither an after
value nor an updateDescription
value. Here is an example of a delete event for a document in the customers
collection:
{ "schema": { ... }, "payload": { "op": "d", 1 "ts_ms": 1465495462115, 2 "ts_us": 1465495462115748, 3 "ts_ns": 1465495462115748263, 4 "before":"{\"_id\": {\"$numberLong\": \"1004\"},\"first_name\": \"Anne Marie\",\"last_name\": \"Kretchmar\",\"email\": \"annek@noanswer.org\"}",5 "source": { 6 "version": "2.7.3.Final", "connector": "mongodb", "name": "fulfillment", "ts_ms": 1558965508000, "ts_us": 1558965508000000, "ts_ns": 1558965508000000000, "snapshot": true, "db": "inventory", "rs": "rs0", "collection": "customers", "ord": 6, "h": 1546547425148721999 } } }
Item | Field name | Description |
---|---|---|
1 |
|
Mandatory string that describes the type of operation. The op field value is d, which signifies that this document was deleted. |
2 |
|
Optional field that displays the time at which the connector processed the event. The time is based on the system clock in the JVM running the Kafka Connect task. |
3 |
| Contains the JSON string representation of the actual MongoDB document before the change. A delete event value does not contain an after field. |
4 |
| Mandatory field that describes the source metadata for the event. This field contains the same information as a create or update event for the same collection, but the values are different since this event is from a different position in the oplog. The source metadata includes:
|
MongoDB connector events are designed to work with Kafka log compaction. Log compaction enables removal of some older messages as long as at least the most recent message for every key is kept. This lets Kafka reclaim storage space while ensuring that the topic contains a complete data set and can be used for reloading key-based state.
Tombstone events
All MongoDB connector events for a uniquely identified document have exactly the same key. When a document is deleted, the delete event value still works with log compaction because Kafka can remove all earlier messages that have that same key. However, for Kafka to remove all messages that have that key, the message value must be null
. To make this possible, after Debezium’s MongoDB connector emits a delete event, the connector emits a special tombstone event that has the same key but a null
value. A tombstone event informs Kafka that all messages with that same key can be removed.
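The following consumer sketch, written against the standard Apache Kafka client API (not a Debezium API), shows how a downstream application can recognize tombstones by their null value; the broker address, group ID, and topic name are placeholders.

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.time.Duration;
import java.util.List;
import java.util.Properties;

public class TombstoneAwareConsumer {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder
        props.put("group.id", "customers-consumer");        // placeholder
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("fulfillment.inventory.customers"));   // placeholder topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    if (record.value() == null) {
                        // Tombstone: same key as the preceding delete event, null value.
                        // Kafka log compaction uses it to drop older messages for this key.
                        continue;
                    }
                    // Regular create/update/delete event value (JSON envelope).
                    System.out.println(record.key() + " -> " + record.value());
                }
            }
        }
    }
}
```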
2.3.4. Setting up MongoDB to work with a Debezium connector
The MongoDB connector uses MongoDB’s change streams to capture the changes, so the connector works only with MongoDB replica sets or with sharded clusters where each shard is a separate replica set. See the MongoDB documentation for setting up a replica set or sharded cluster. Also, be sure to understand how to enable access control and authentication with replica sets.
You must also have a MongoDB user that has the appropriate roles to read the admin
database where the oplog can be read. Additionally, the user must also be able to read the config
database in the configuration server of a sharded cluster and must have listDatabases
privilege action. When change streams are used (the default) the user also must have cluster-wide privilege actions find
and changeStream
.
When you intend to use pre-images to populate the before
field, you must first enable changeStreamPreAndPostImages
for a collection using db.createCollection()
, create
, or collMod
.
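For example, a setup step using the MongoDB Java driver can enable this option on an existing collection by running a collMod command; the connection string, database, and collection names below are placeholders.

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoDatabase;
import org.bson.Document;

public class EnablePreAndPostImages {

    public static void main(String[] args) {
        // Placeholder connection string; use your replica set or sharded cluster address.
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017/?replicaSet=rs0")) {
            MongoDatabase inventory = client.getDatabase("inventory");

            // Equivalent to running collMod with changeStreamPreAndPostImages enabled,
            // so that *_with_pre_image capture modes can populate the "before" field.
            Document result = inventory.runCommand(
                    new Document("collMod", "customers")
                            .append("changeStreamPreAndPostImages", new Document("enabled", true)));
            System.out.println(result.toJson());
        }
    }
}
```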
Optimal Oplog Config
The Debezium MongoDB connector reads change streams to obtain oplog data for a replica set. Because the oplog is a fixed-size, capped collection, if it exceeds its maximum configured size, it begins to overwrite its oldest entries. If the connector is stopped for any reason, when it restarts, it attempts to resume streaming from the last oplog stream position. However, if the last stream position was removed from the oplog, depending on the value specified in the connector’s snapshot.mode
property, the connector might fail to start, reporting an invalid resume token error. In the event of a failure, you must create a new connector to enable Debezium to continue capturing records from the database. For more information, see Connector fails after it is stopped for a long interval if snapshot.mode is set to initial.
To ensure that the oplog retains the offset values that Debezium requires to resume streaming, you can use either of the following approaches:
- Increase the size of the oplog. Based on your typical workloads, set the oplog size to a value that is greater than the peak number of oplog entries per hour.
- Increase the minimum number of hours that an oplog entry is retained (MongoDB 4.4 and greater). This setting is time-based, such that entries in the last n hours are guaranteed to be available even if the oplog reaches its maximum configured size. Although this is generally the preferred option, for clusters with high workloads that are nearing capacity, specify the maximum oplog size.
To help prevent failures that are related to missing oplog entries, it’s important to track metrics that report replication behavior, and to optimize the oplog size to support Debezium. In particular, you should monitor the values of Oplog GB/Hour and Replication Oplog Window. If Debezium is offline for an interval that exceeds the value of the replication oplog window, and the primary oplog grows faster than Debezium can consume entries, a connector failure can result.
For information about how to monitor these metrics, see the MongoDB documentation.
It’s best to set the maximum oplog size to a value that is based on the anticipated hourly growth of the oplog (Oplog GB/Hour), multiplied by the time that might be required to address a Debezium failure.
That is, Oplog GB/Hour multiplied by the average reaction time to a Debezium failure.
For example, if the oplog size limit is set to 1GB, and the oplog grows by 3GB per hour, oplog entries are cleared three times per hour. If Debezium were to fail during this time, its last oplog position is likely to be removed.
If the oplog grows at the rate of 3GB/hour, and Debezium is offline for two hours, you would thus set the oplog size to 3GB/hour X 2 hours, or 6GB.
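As a minimal illustration of this sizing rule (the numbers simply mirror the example above and are not recommendations):

```java
public class OplogSizing {

    // Recommended minimum oplog size = observed growth (GB/hour) multiplied by
    // the longest interval (hours) that you expect Debezium to be offline.
    static double recommendedOplogSizeGb(double oplogGrowthGbPerHour, double expectedDowntimeHours) {
        return oplogGrowthGbPerHour * expectedDowntimeHours;
    }

    public static void main(String[] args) {
        // 3 GB/hour growth and up to 2 hours of downtime -> 6 GB, as in the example above.
        System.out.println(recommendedOplogSizeGb(3.0, 2.0) + " GB");
    }
}
```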
2.3.5. Deployment of Debezium MongoDB connectors
You can use either of the following methods to deploy a Debezium MongoDB connector:
Additional resources
2.3.5.1. MongoDB connector deployment using Streams for Apache Kafka
Beginning with Debezium 1.7, the preferred method for deploying a Debezium connector is to use Streams for Apache Kafka to build a Kafka Connect container image that includes the connector plug-in.
During the deployment process, you create and use the following custom resources (CRs):
-
A
KafkaConnect
CR that defines your Kafka Connect instance and includes information about the connector artifacts that the image needs to include. -
A
KafkaConnector
CR that provides details that include the information that the connector uses to access the source database. After Streams for Apache Kafka starts the Kafka Connect pod, you start the connector by applying the KafkaConnector
CR.
In the build specification for the Kafka Connect image, you can specify the connectors that are available to deploy. For each connector plug-in, you can also specify other components that you want to make available for deployment. For example, you can add Apicurio Registry artifacts, or the Debezium scripting component. When Streams for Apache Kafka builds the Kafka Connect image, it downloads the specified artifacts, and incorporates them into the image.
The spec.build.output
parameter in the KafkaConnect
CR specifies where to store the resulting Kafka Connect container image. Container images can be stored in a Docker registry, or in an OpenShift ImageStream. To store images in an ImageStream, you must create the ImageStream before you deploy Kafka Connect. ImageStreams are not created automatically.
If you use a KafkaConnect
resource to create a cluster, afterwards you cannot use the Kafka Connect REST API to create or update connectors. You can still use the REST API to retrieve information.
Additional resources
- Configuring Kafka Connect in Deploying and Managing Streams for Apache Kafka on OpenShift.
- Building a new container image automatically in Deploying and Managing Streams for Apache Kafka on OpenShift.
2.3.5.2. Using Streams for Apache Kafka to deploy a Debezium MongoDB connector
With earlier versions of Streams for Apache Kafka, to deploy Debezium connectors on OpenShift, you were required to first build a Kafka Connect image for the connector. The current preferred method for deploying connectors on OpenShift is to use a build configuration in Streams for Apache Kafka to automatically build a Kafka Connect container image that includes the Debezium connector plug-ins that you want to use.
During the build process, the Streams for Apache Kafka Operator transforms input parameters in a KafkaConnect
custom resource, including Debezium connector definitions, into a Kafka Connect container image. The build downloads the necessary artifacts from the Red Hat Maven repository or another configured HTTP server.
The newly created container is pushed to the container registry that is specified in .spec.build.output
, and is used to deploy a Kafka Connect cluster. After Streams for Apache Kafka builds the Kafka Connect image, you create KafkaConnector
custom resources to start the connectors that are included in the build.
Prerequisites
- You have access to an OpenShift cluster on which the cluster Operator is installed.
- The Streams for Apache Kafka Operator is running.
- An Apache Kafka cluster is deployed as documented in Deploying and Managing Streams for Apache Kafka on OpenShift.
- Kafka Connect is deployed on Streams for Apache Kafka
- You have a Red Hat build of Debezium license.
-
The OpenShift
oc
CLI client is installed or you have access to the OpenShift Container Platform web console. Depending on how you intend to store the Kafka Connect build image, you need registry permissions or you must create an ImageStream resource:
- To store the build image in an image registry, such as Red Hat Quay.io or Docker Hub
- An account and permissions to create and manage images in the registry.
- To store the build image as a native OpenShift ImageStream
- An ImageStream resource is deployed to the cluster for storing new container images. You must explicitly create an ImageStream for the cluster. ImageStreams are not available by default. For more information about ImageStreams, see Managing image streams on OpenShift Container Platform.
Procedure
- Log in to the OpenShift cluster.
Create a Debezium
KafkaConnect
custom resource (CR) for the connector, or modify an existing one. For example, create a KafkaConnect CR with the name dbz-connect.yaml that specifies the metadata.annotations and spec.build properties. The following example shows an excerpt from a dbz-connect.yaml file that describes a KafkaConnect custom resource.
Example 2.18. A dbz-connect.yaml file that defines a KafkaConnect custom resource that includes a Debezium connector
In the example that follows, the custom resource is configured to download the following artifacts:
- The Debezium MongoDB connector archive.
- The Red Hat build of Apicurio Registry archive. The Apicurio Registry is an optional component. Add the Apicurio Registry component only if you intend to use Avro serialization with the connector.
- The Debezium scripting SMT archive and the associated scripting engine that you want to use with the Debezium connector. The SMT archive and scripting language dependencies are optional components. Add these components only if you intend to use the Debezium content-based routing SMT or filter SMT.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: debezium-kafka-connect-cluster
  annotations:
    strimzi.io/use-connector-resources: "true" 1
spec:
  version: 3.6.0
  build: 2
    output: 3
      type: imagestream 4
      image: debezium-streams-connect:latest
    plugins: 5
      - name: debezium-connector-mongodb
        artifacts:
          - type: zip 6
            url: https://maven.repository.redhat.com/ga/io/debezium/debezium-connector-mongodb/2.7.3.Final-redhat-00001/debezium-connector-mongodb-2.7.3.Final-redhat-00001-plugin.zip 7
          - type: zip
            url: https://maven.repository.redhat.com/ga/io/apicurio/apicurio-registry-distro-connect-converter/2.4.4.Final-redhat-<build-number>/apicurio-registry-distro-connect-converter-2.4.4.Final-redhat-<build-number>.zip 8
          - type: zip
            url: https://maven.repository.redhat.com/ga/io/debezium/debezium-scripting/2.7.3.Final-redhat-00001/debezium-scripting-2.7.3.Final-redhat-00001.zip 9
          - type: jar
            url: https://repo1.maven.org/maven2/org/apache/groovy/groovy/3.0.11/groovy-3.0.11.jar 10
          - type: jar
            url: https://repo1.maven.org/maven2/org/apache/groovy/groovy-jsr223/3.0.11/groovy-jsr223-3.0.11.jar
          - type: jar
            url: https://repo1.maven.org/maven2/org/apache/groovy/groovy-json/3.0.11/groovy-json-3.0.11.jar
  bootstrapServers: debezium-kafka-cluster-kafka-bootstrap:9093
  ...
Table 2.67. Descriptions of Kafka Connect configuration settings Item Description 1
Sets the
strimzi.io/use-connector-resources
annotation to"true"
to enable the Cluster Operator to useKafkaConnector
resources to configure connectors in this Kafka Connect cluster.2
The
spec.build
configuration specifies where to store the build image and lists the plug-ins to include in the image, along with the location of the plug-in artifacts.3
The
build.output
specifies the registry in which the newly built image is stored.4
Specifies the name and image name for the image output. Valid values for
output.type
aredocker
to push into a container registry such as Docker Hub or Quay, orimagestream
to push the image to an internal OpenShift ImageStream. To use an ImageStream, an ImageStream resource must be deployed to the cluster. For more information about specifying thebuild.output
in the KafkaConnect configuration, see the Streams for Apache Kafka Build schema reference in {NameConfiguringStreamsOpenShift}.5
The
plugins
configuration lists all of the connectors that you want to include in the Kafka Connect image. For each entry in the list, specify a plug-inname
, and information about the artifacts that are required to build the connector. Optionally, for each connector plug-in, you can include other components that you want to be available for use with the connector. For example, you can add Service Registry artifacts, or the Debezium scripting component.6
The value of
artifacts.type
specifies the file type of the artifact specified in theartifacts.url
. Valid types arezip
,tgz
, orjar
. Debezium connector archives are provided in.zip
file format. Thetype
value must match the type of the file that is referenced in theurl
field.7
The value of
artifacts.url
specifies the address of an HTTP server, such as a Maven repository, that stores the file for the connector artifact. Debezium connector artifacts are available in the Red Hat Maven repository. The OpenShift cluster must have access to the specified server.8
(Optional) Specifies the artifact
type
andurl
for downloading the Apicurio Registry component. Include the Apicurio Registry artifact, only if you want the connector to use Apache Avro to serialize event keys and values with the Red Hat build of Apicurio Registry, instead of using the default JSON converter.9
(Optional) Specifies the artifact
type
andurl
for the Debezium scripting SMT archive to use with the Debezium connector. Include the scripting SMT only if you intend to use the Debezium content-based routing SMT or filter SMT To use the scripting SMT, you must also deploy a JSR 223-compliant scripting implementation, such as groovy.10
(Optional) Specifies the artifact
type
andurl
for the JAR files of a JSR 223-compliant scripting implementation, which is required by the Debezium scripting SMT.ImportantIf you use Streams for Apache Kafka to incorporate the connector plug-in into your Kafka Connect image, for each of the required scripting language components
artifacts.url
must specify the location of a JAR file, and the value ofartifacts.type
must also be set tojar
. Invalid values cause the connector to fail at runtime. To enable use of the Apache Groovy language with the scripting SMT, the custom resource in the example retrieves JAR files for the following libraries:
-
groovy
-
groovy-jsr223
(scripting agent) -
groovy-json
(module for parsing JSON strings)
As an alternative, the Debezium scripting SMT also supports the use of the JSR 223 implementation of GraalVM JavaScript.
Apply the
KafkaConnect
build specification to the OpenShift cluster by entering the following command:
oc create -f dbz-connect.yaml
Based on the configuration specified in the custom resource, the Streams Operator prepares a Kafka Connect image to deploy.
After the build completes, the Operator pushes the image to the specified registry or ImageStream, and starts the Kafka Connect cluster. The connector artifacts that you listed in the configuration are available in the cluster.
Create a
KafkaConnector
resource to define an instance of each connector that you want to deploy.
For example, create the following KafkaConnector
CR, and save it as mongodb-inventory-connector.yaml.
Example 2.19.
mongodb-inventory-connector.yaml
file that defines the KafkaConnector
custom resource for a Debezium connector
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  labels:
    strimzi.io/cluster: debezium-kafka-connect-cluster
  name: inventory-connector-mongodb 1
spec:
  class: io.debezium.connector.mongodb.MongoDbConnector 2
  tasksMax: 1 3
  config: 4
    mongodb.hosts: rs0/192.168.99.100:27017 5
    mongodb.user: debezium 6
    mongodb.password: dbz 7
    topic.prefix: inventory-connector-mongodb 8
    collection.include.list: inventory[.]* 9
Table 2.68. Descriptions of connector configuration settings Item Description 1
The name of the connector to register with the Kafka Connect cluster.
2
The name of the connector class.
3
The number of tasks that can operate concurrently.
4
The connector’s configuration.
5
The address and port number of the host database instance.
6
The name of the account that Debezium uses to connect to the database.
7
The password that Debezium uses to connect to the database user account.
8
The topic prefix for the database instance or cluster.
The specified name must be formed only from alphanumeric characters or underscores.
Because the topic prefix is used as the prefix for any Kafka topics that receive change events from this connector, the name must be unique among the connectors in the cluster.
This namespace is also used in the names of related Kafka Connect schemas, and the namespaces of a corresponding Avro schema if you integrate the connector with the Avro converter.9
The names of the collections that the connector captures changes from.
Create the connector resource by running the following command:
oc create -n <namespace> -f <kafkaConnector>.yaml
For example,
oc create -n debezium -f mongodb-inventory-connector.yaml
The connector is registered to the Kafka Connect cluster and starts to run against the database that is specified by
spec.config.database.dbname
in the KafkaConnector
CR. After the connector pod is ready, Debezium is running.
You are now ready to verify the Debezium MongoDB deployment.
2.3.5.3. Deploying a Debezium MongoDB connector by building a custom Kafka Connect container image from a Dockerfile
To deploy a Debezium MongoDB connector, you must build a custom Kafka Connect container image that contains the Debezium connector archive and then push this container image to a container registry. You then create two custom resources (CRs):
-
A
KafkaConnect
CR that defines your Kafka Connect instance. The image
property in the CR specifies the name of the container image that you create to run your Debezium connector. You apply this CR to the OpenShift instance where Red Hat Streams for Apache Kafka is deployed. Streams for Apache Kafka offers operators and images that bring Apache Kafka to OpenShift. -
A
KafkaConnector
CR that defines your Debezium MongoDB connector. Apply this CR to the same OpenShift instance where you apply the KafkaConnect
CR.
Prerequisites
- MongoDB is running and you completed the steps to set up MongoDB to work with a Debezium connector.
- Streams for Apache Kafka is deployed on OpenShift and is running Apache Kafka and Kafka Connect. For more information, see Deploying and Managing Streams for Apache Kafka on OpenShift.
- Podman or Docker is installed.
-
You have an account and permissions to create and manage containers in the container registry (such as
quay.io
or docker.io
) to which you plan to add the container that will run your Debezium connector.
Procedure
Create the Debezium MongoDB container for Kafka Connect:
Create a Dockerfile that uses
registry.redhat.io/amq-streams-kafka-35-rhel8:2.5.0
as the base image. For example, from a terminal window, enter the following command:
cat <<EOF >debezium-container-for-mongodb.yaml 1
FROM registry.redhat.io/amq-streams-kafka-35-rhel8:2.5.0
USER root:root
RUN mkdir -p /opt/kafka/plugins/debezium 2
RUN cd /opt/kafka/plugins/debezium/ \
 && curl -O https://maven.repository.redhat.com/ga/io/debezium/debezium-connector-mongodb/2.7.3.Final-redhat-00001/debezium-connector-mongodb-2.7.3.Final-redhat-00001-plugin.zip \
 && unzip debezium-connector-mongodb-2.7.3.Final-redhat-00001-plugin.zip \
 && rm debezium-connector-mongodb-2.7.3.Final-redhat-00001-plugin.zip
RUN cd /opt/kafka/plugins/debezium/
USER 1001
EOF
Item Description 1
You can specify any file name that you want.
2
Specifies the path to your Kafka Connect plug-ins directory. If your Kafka Connect plug-ins directory is in a different location, replace this path with the actual path of your directory.
The command creates a Dockerfile with the name
debezium-container-for-mongodb.yaml
in the current directory.
Build the container image from the
debezium-container-for-mongodb.yaml
Docker file that you created in the previous step. From the directory that contains the file, open a terminal window and enter one of the following commands:
podman build -t debezium-container-for-mongodb:latest .
docker build -t debezium-container-for-mongodb:latest .
The preceding commands build a container image with the name
debezium-container-for-mongodb
.
Push your custom image to a container registry, such as
quay.io
or an internal container registry. The container registry must be available to the OpenShift instance where you want to deploy the image. Enter one of the following commands:
podman push <myregistry.io>/debezium-container-for-mongodb:latest
docker push <myregistry.io>/debezium-container-for-mongodb:latest
Create a new Debezium MongoDB
KafkaConnect
custom resource (CR). For example, create a KafkaConnect CR with the name dbz-connect.yaml that specifies annotations and image properties. The following example shows an excerpt from a dbz-connect.yaml file that describes a KafkaConnect custom resource.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect-cluster
  annotations:
    strimzi.io/use-connector-resources: "true" 1
spec:
  #...
  image: debezium-container-for-mongodb 2
  ...
Item Description 1
metadata.annotations
indicates to the Cluster Operator that KafkaConnector
resources are used to configure connectors in this Kafka Connect cluster.2
spec.image
specifies the name of the image that you created to run your Debezium connector. This property overrides the STRIMZI_DEFAULT_KAFKA_CONNECT_IMAGE
variable in the Cluster Operator.
Apply the
KafkaConnect
CR to the OpenShift Kafka Connect environment by entering the following command:
oc create -f dbz-connect.yaml
The command adds a Kafka Connect instance that specifies the name of the image that you created to run your Debezium connector.
Create a
KafkaConnector
custom resource that configures your Debezium MongoDB connector instance.
You configure a Debezium MongoDB connector in a
.yaml
file that specifies the configuration properties for the connector. The connector configuration might instruct Debezium to produce change events for a subset of MongoDB replica sets or sharded clusters. Optionally, you can set properties that filter out collections that are not needed.
The following example configures a Debezium connector that connects to a MongoDB replica set
rs0
at port 27017
on 192.168.99.100
, and captures changes that occur in the inventory
collection. inventory-connector-mongodb
is the logical name of the replica set.
MongoDB
inventory-connector.yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: inventory-connector-mongodb 1
  labels:
    strimzi.io/cluster: my-connect-cluster
spec:
  class: io.debezium.connector.mongodb.MongoDbConnector 2
  config:
    mongodb.connection.string: mongodb://192.168.99.100:27017/?replicaSet=rs0 3
    topic.prefix: inventory-connector-mongodb 4
    collection.include.list: inventory[.]* 5
Table 2.69. Descriptions of settings in the MongoDB inventory-connector.yaml example Item Description 1
The name that is used to register the connector with Kafka Connect.
2
The name of the MongoDB connector class.
3
The host addresses to use to connect to the MongoDB replica set.
4
The logical name of the MongoDB replica set. The logical name forms a namespace for generated events, and is used in the names of the Kafka topics to which the connector writes, the Kafka Connect schema names, and the namespaces of the corresponding Avro schema when the Avro converter is used.
5
An optional list of regular expressions that match the collection namespaces (for example, <dbName>.<collectionName>) of all collections to be monitored.
Create your connector instance with Kafka Connect. For example, if you saved your
KafkaConnector
resource in the inventory-connector.yaml
file, you would run the following command:
oc apply -f inventory-connector.yaml
The preceding command registers
inventory-connector
and the connector starts to run against the inventory
collection as defined in the KafkaConnector
CR.
For the complete list of the configuration properties that you can set for the Debezium MongoDB connector, see MongoDB connector configuration properties.
Results
After the connector starts, it completes the following actions:
- Performs a consistent snapshot of the collections in your MongoDB replica sets.
- Reads the change streams for the replica sets.
- Produces change events for every inserted, updated, and deleted document.
- Streams change event records to Kafka topics.
2.3.5.4. Verifying that the Debezium MongoDB connector is running
If the connector starts correctly without errors, it creates a topic for each collection that the connector is configured to capture. Downstream applications can subscribe to these topics to retrieve information about events that occur in the source database.
To verify that the connector is running, you perform the following operations from the OpenShift Container Platform web console, or through the OpenShift CLI tool (oc):
- Verify the connector status.
- Verify that the connector generates topics.
- Verify that topics are populated with events for read operations ("op":"r") that the connector generates during the initial snapshot of each collection.
Prerequisites
- A Debezium connector is deployed to Streams for Apache Kafka on OpenShift.
-
The OpenShift
oc
CLI client is installed. - You have access to the OpenShift Container Platform web console.
Procedure
Check the status of the
KafkaConnector
resource by using one of the following methods:
From the OpenShift Container Platform web console:
-
Navigate to Home
Search. -
On the Search page, click Resources to open the Select Resource box, and then type
KafkaConnector
. - From the KafkaConnectors list, click the name of the connector that you want to check, for example inventory-connector-mongodb.
- In the Conditions section, verify that the values in the Type and Status columns are set to Ready and True.
-
Navigate to Home
From a terminal window:
Enter the following command:
oc describe KafkaConnector <connector-name> -n <project>
For example,
oc describe KafkaConnector inventory-connector-mongodb -n debezium
The command returns status information that is similar to the following output:
Example 2.20.
KafkaConnector
resource status
Name:         inventory-connector-mongodb
Namespace:    debezium
Labels:       strimzi.io/cluster=debezium-kafka-connect-cluster
Annotations:  <none>
API Version:  kafka.strimzi.io/v1beta2
Kind:         KafkaConnector
...
Status:
  Conditions:
    Last Transition Time:  2021-12-08T17:41:34.897153Z
    Status:                True
    Type:                  Ready
  Connector Status:
    Connector:
      State:      RUNNING
      worker_id:  10.131.1.124:8083
    Name:         inventory-connector-mongodb
    Tasks:
      Id:         0
      State:      RUNNING
      worker_id:  10.131.1.124:8083
    Type:         source
  Observed Generation:  1
  Tasks Max:            1
  Topics:
    inventory-connector-mongodb.inventory
    inventory-connector-mongodb.inventory.addresses
    inventory-connector-mongodb.inventory.customers
    inventory-connector-mongodb.inventory.geom
    inventory-connector-mongodb.inventory.orders
    inventory-connector-mongodb.inventory.products
    inventory-connector-mongodb.inventory.products_on_hand
Events:  <none>
Verify that the connector created Kafka topics:
From the OpenShift Container Platform web console.
-
Navigate to Home
Search. -
On the Search page, click Resources to open the Select Resource box, and then type
KafkaTopic
. -
From the KafkaTopics list, click the name of the topic that you want to check, for example,
inventory-connector-mongodb.inventory.orders---ac5e98ac6a5d91e04d8ec0dc9078a1ece439081d
. - In the Conditions section, verify that the values in the Type and Status columns are set to Ready and True.
-
Navigate to Home
From a terminal window:
Enter the following command:
oc get kafkatopics
The command returns status information that is similar to the following output:
Example 2.21.
KafkaTopic
resource statusNAME CLUSTER PARTITIONS REPLICATION FACTOR READY connect-cluster-configs debezium-kafka-cluster 1 1 True connect-cluster-offsets debezium-kafka-cluster 25 1 True connect-cluster-status debezium-kafka-cluster 5 1 True consumer-offsets---84e7a678d08f4bd226872e5cdd4eb527fadc1c6a debezium-kafka-cluster 50 1 True inventory-connector-mongodb--a96f69b23d6118ff415f772679da623fbbb99421 debezium-kafka-cluster 1 1 True inventory-connector-mongodb.inventory.addresses---1b6beaf7b2eb57d177d92be90ca2b210c9a56480 debezium-kafka-cluster 1 1 True inventory-connector-mongodb.inventory.customers---9931e04ec92ecc0924f4406af3fdace7545c483b debezium-kafka-cluster 1 1 True inventory-connector-mongodb.inventory.geom---9f7e136091f071bf49ca59bf99e86c713ee58dd5 debezium-kafka-cluster 1 1 True inventory-connector-mongodb.inventory.orders---ac5e98ac6a5d91e04d8ec0dc9078a1ece439081d debezium-kafka-cluster 1 1 True inventory-connector-mongodb.inventory.products---df0746db116844cee2297fab611c21b56f82dcef debezium-kafka-cluster 1 1 True inventory-connector-mongodb.inventory.products_on_hand---8649e0f17ffcc9212e266e31a7aeea4585e5c6b5 debezium-kafka-cluster 1 1 True schema-changes.inventory debezium-kafka-cluster 1 1 True strimzi-store-topic---effb8e3e057afce1ecf67c3f5d8e4e3ff177fc55 debezium-kafka-cluster 1 1 True strimzi-topic-operator-kstreams-topic-store-changelog---b75e702040b99be8a9263134de3507fc0cc4017b debezium-kafka-cluster 1 1 True
Check topic content.
- From a terminal window, enter the following command:
oc exec -n <project> -it <kafka-cluster> -- /opt/kafka/bin/kafka-console-consumer.sh \ > --bootstrap-server localhost:9092 \ > --from-beginning \ > --property print.key=true \ > --topic=<topic-name>
For example,
oc exec -n debezium -it debezium-kafka-cluster-kafka-0 -- /opt/kafka/bin/kafka-console-consumer.sh \ > --bootstrap-server localhost:9092 \ > --from-beginning \ > --property print.key=true \ > --topic=inventory-connector-mongodb.inventory.products_on_hand
The format for specifying the topic name is the same as the
oc describe
command returns in Step 1, for example, inventory-connector-mongodb.inventory.addresses.
For each event in the topic, the command returns information that is similar to the following output:
Example 2.22. Content of a Debezium change event
{"schema":{"type":"struct","fields":[{"type":"int32","optional":false,"field":"product_id"}],"optional":false,"name":"inventory-connector-mongodb.inventory.products_on_hand.Key"},"payload":{"product_id":101}} {"schema":{"type":"struct","fields":[{"type":"struct","fields":[{"type":"int32","optional":false,"field":"product_id"},{"type":"int32","optional":false,"field":"quantity"}],"optional":true,"name":"inventory-connector-mongodb.inventory.products_on_hand.Value","field":"before"},{"type":"struct","fields":[{"type":"int32","optional":false,"field":"product_id"},{"type":"int32","optional":false,"field":"quantity"}],"optional":true,"name":"inventory-connector-mongodb.inventory.products_on_hand.Value","field":"after"},{"type":"struct","fields":[{"type":"string","optional":false,"field":"version"},{"type":"string","optional":false,"field":"connector"},{"type":"string","optional":false,"field":"name"},{"type":"int64","optional":false,"field":"ts_ms"},{"type":"int64","optional":false,"field":"ts_us"},{"type":"int64","optional":false,"field":"ts_ns"},{"type":"string","optional":true,"name":"io.debezium.data.Enum","version":1,"parameters":{"allowed":"true,last,false"},"default":"false","field":"snapshot"},{"type":"string","optional":false,"field":"db"},{"type":"string","optional":true,"field":"sequence"},{"type":"string","optional":true,"field":"table"},{"type":"int64","optional":false,"field":"server_id"},{"type":"string","optional":true,"field":"gtid"},{"type":"string","optional":false,"field":"file"},{"type":"int64","optional":false,"field":"pos"},{"type":"int32","optional":false,"field":"row"},{"type":"int64","optional":true,"field":"thread"},{"type":"string","optional":true,"field":"query"}],"optional":false,"name":"io.debezium.connector.mongodb.Source","field":"source"},{"type":"string","optional":false,"field":"op"},{"type":"int64","optional":true,"field":"ts_ms"},{"type":"int64","optional":true,"field":"ts_us"},{"type":"int64","optional":true,"field":"ts_ns"},{"type":"struct","fields":[{"type":"string","optional":false,"field":"id"},{"type":"int64","optional":false,"field":"total_order"},{"type":"int64","optional":false,"field":"data_collection_order"}],"optional":true,"field":"transaction"}],"optional":false,"name":"inventory-connector-mongodb.inventory.products_on_hand.Envelope"},"payload":{"before":null,"after":{"product_id":101,"quantity":3},"source":{"version":"2.7.3.Final-redhat-00001","connector":"mongodb","name":"inventory-connector-mongodb","ts_ms":1638985247805,"ts_us":1638985247805000000,"ts_ns":1638985247805000000,"snapshot":"true","db":"inventory","sequence":null,"table":"products_on_hand","server_id":0,"gtid":null,"file":"mongodb-bin.000003","pos":156,"row":0,"thread":null,"query":null},"op":"r","ts_ms":1638985247805,"ts_us":1638985247805102,"ts_ns":1638985247805102588,"transaction":null}}
In the preceding example, the
payload
value shows that the connector snapshot generated a read ("op" ="r"
) event from the tableinventory.products_on_hand
. The"before"
state of theproduct_id
record isnull
, indicating that no previous value exists for the record. The"after"
state shows aquantity
of3
for the item withproduct_id
101
.
2.3.5.5. Descriptions of Debezium MongoDB connector configuration properties
The Debezium MongoDB connector has numerous configuration properties that you can use to achieve the right connector behavior for your application. Many properties have default values. Information about the properties is organized as follows:
The following configuration properties are required unless a default value is available.
Property | Default | Description |
---|---|---|
false |
Set this property to Warning This property permits you to modify the current default behavior. The property is subject to removal in a future release if the default behavior changes to permit the connector to automatically invalidate and consolidate offsets that are recorded by earlier connector versions. | |
No default | Unique name for the connector. Attempting to register again with the same name will fail. (This property is required by all Kafka Connect connectors.) | |
No default |
The name of the Java class for the connector. Always use a value of | |
No default |
Specifies a connection string that the connector uses to connect to a MongoDB replica set. This property replaces the | |
No default |
A unique name that identifies the connector and/or MongoDB replica set or sharded cluster that this connector monitors. Each server should be monitored by at most one Debezium connector, since this server name prefixes all persisted Kafka topics emanating from the MongoDB replica set or cluster. Use only alphanumeric characters, hyphens, dots and underscores to form the name. The logical name should be unique across all other connectors, because the name is used as the prefix in naming the Kafka topics that receive records from this connector. Warning Do not change the value of this property. If you change the name value, after a restart, instead of continuing to emit events to the original topics, the connector emits subsequent events to topics whose names are based on the new value. | |
DefaultMongoDbAuthProvider |
A full Java class name that is an implementation of the io.debezium.connector.mongodb.connection.MongoDbAuthProvider interface. This class handles setting the credentials on the MongoDB connection (called on each app boot). Default behavior uses the | |
No default |
When using default | |
No default |
When using default | |
|
When using default | |
| Connector will use SSL to connect to MongoDB instances. | |
|
When SSL is enabled this setting controls whether strict hostname checking is disabled during connection phase. If | |
regex | The mode used to match events based on included/excluded database and collection names. Set the property to one of the following values:
| |
empty string |
An optional comma-separated list of regular expressions or literals that match database names to be monitored. By default, all databases are monitored.
To match the name of a database, Debezium performs one of the following actions based on the value of
If you include this property in the configuration, do not also set the | |
empty string |
An optional comma-separated list of regular expressions or literals that match database names to be excluded from monitoring. When
To match the name of a database, Debezium performs one of the following actions based on the value of
If you include this property in the configuration, do not set the | |
empty string |
An optional comma-separated list of regular expressions or literals that match fully-qualified namespaces for MongoDB collections to be monitored. By default, the connector monitors all collections except those in the
To match the name of a namespace, Debezium performs one of the following actions based on the value of
If you include this property in the configuration, do not also set the | |
empty string |
An optional comma-separated list of regular expressions or literals that match fully-qualified namespaces for MongoDB collections to be excluded from monitoring. When
To match the name of a namespace, Debezium performs one of the following actions based on the value of
If you include this property in the configuration, do not set the | |
|
Specifies the method that the connector uses to capture
| |
| Specifies the scope of the change streams that the connector opens. Set this property to one of the following values:
| |
Specifies the database that the connector monitors for changes. This property applies only if the | ||
empty string | An optional comma-separated list of the fully-qualified names of fields that should be excluded from change event message values. Fully-qualified names for fields are of the form databaseName.collectionName.fieldName.nestedFieldName, where databaseName and collectionName may contain the wildcard (*) which matches any characters. | |
empty string | An optional comma-separated list of the fully-qualified replacements of fields that should be used to rename fields in change event message values. Fully-qualified replacements for fields are of the form databaseName.collectionName.fieldName.nestedFieldName:newNestedFieldName, where databaseName and collectionName may contain the wildcard (*) which matches any characters, the colon character (:) is used to determine rename mapping of field. The next field replacement is applied to the result of the previous field replacement in the list, so keep this in mind when renaming multiple fields that are in the same path. | |
|
Controls whether a delete event is followed by a tombstone event. | |
none |
Specifies how schema names should be adjusted for compatibility with the message converter used by the connector. Possible settings:
| |
none |
Specifies how field names should be adjusted for compatibility with the message converter used by the connector. Possible settings:
See Avro naming for more details. |
The following advanced configuration properties have good defaults that will work in most situations and therefore rarely need to be specified in the connector’s configuration.
Property | Default | Description |
---|---|---|
|
Specifies how the connector looks up the full value of an updated document when the
To use this option with a MongoDB change streams collection, you must configure the collection to return document pre- and post-images. Pre- and post-images for an operation are available only if the required configuration is in place before the operation occurs. Set this property to one of the following values:
Warning
If the lookup process fails to retrieve a document, it cannot populate the full document to the Failed lookups can occur because a delete operation removed the document immediately after it was created, or because a change to the sharding key results in the document being moved to a different location. Sharding key changes can result when you modify any of the properties that make up the key.
| |
| Positive integer value that specifies the maximum size of each batch of events that should be processed during each iteration of this connector. Defaults to 2048. | |
|
Positive integer value that specifies the maximum number of records that the blocking queue can hold. When Debezium reads events streamed from the database, it places the events in the blocking queue before it writes them to Kafka. The blocking queue can provide backpressure for reading change events from the database in cases where the connector ingests messages faster than it can write them to Kafka, or when Kafka becomes unavailable. Events that are held in the queue are disregarded when the connector periodically records offsets. Always set the value of | |
|
A long integer value that specifies the maximum volume of the blocking queue in bytes. By default, volume limits are not specified for the blocking queue. To specify the number of bytes that the queue can consume, set this property to a positive long value. | |
| Positive integer value that specifies the number of milliseconds the connector should wait during each iteration for new change events to appear. Defaults to 500 milliseconds, or 0.5 second. | |
| Positive integer value that specifies the initial delay when trying to reconnect to a primary after the first failed connection attempt or when no primary is available. Defaults to 1 second (1000 ms). | |
| Positive integer value that specifies the maximum delay when trying to reconnect to a primary after repeated failed connection attempts or when no primary is available. Defaults to 120 seconds (120,000 ms). | |
|
Positive integer value that specifies the maximum number of failed connection attempts to a replica set primary before an exception occurs and task is aborted. Defaults to 16, which with the defaults for | |
|
Controls how frequently heartbeat messages are sent.
Set this parameter to | |
|
A comma-separated list of operation types that will be skipped during streaming. The operations include: | |
No default | Controls which collection items are included in snapshot. This property affects snapshots only. Specify a comma-separated list of collection names in the form databaseName.collectionName.
For each collection that you specify, also specify another configuration property: | |
No default |
An interval in milliseconds that the connector should wait before taking a snapshot after starting up; | |
0 |
Specifies the time, in milliseconds, that the connector delays the start of the streaming process after it completes a snapshot. Setting a delay interval helps to prevent the connector from restarting snapshots in the event that a failure occurs immediately after the snapshot completes, but before the streaming process begins. Set a delay value that is higher than the value of the | |
|
Specifies the maximum number of documents that should be read in one go from each collection while taking a snapshot. The connector will read the collection contents in multiple batches of this size. | |
All collections specified in |
An optional, comma-separated list of regular expressions that match the fully-qualified names ( To match the name of a schema, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the schema; it does not match substrings that might be present in a schema name. | |
| Positive integer value that specifies the maximum number of threads used to perform an initial sync of the collections in a replica set. Defaults to 1. | |
initial | Specifies the criteria for performing a snapshot when the connector starts. Set the property to one of the following values:
| |
|
When set to See Transaction Metadata for additional details. | |
10000 (10 seconds) | The number of milliseconds to wait before restarting a connector after a retriable error occurs. | |
| The interval in which the connector polls for new, removed, or changed replica sets. | |
10000 (10 seconds) | The number of milliseconds the driver will wait before a new connection attempt is aborted. | |
10000 (10 seconds) | The frequency that the cluster monitor attempts to reach each server. | |
0 |
The number of milliseconds before a send/receive on the socket can take before a timeout occurs. A value of | |
30000 (30 seconds) | The number of milliseconds the driver will wait to select a server before it times out and throws an error. | |
No default | When streaming changes, this setting applies processing to change stream events as part of the standard MongoDB aggregation stream pipeline. A pipeline is a MongoDB aggregation pipeline composed of instructions to the database to filter or transform data. This can be used to customize the data that the connector consumes. The value of this property must be an array of permitted aggregation pipeline stages in JSON format. Note that this is appended after the internal pipeline used to support the connector (e.g. filtering operation types, database names, collection names, etc.). | |
internal_first | The order used to construct the effective MongoDB aggregation stream pipeline. Set the property to one of the following values:
| |
fail | The strategy used to handle change events for documents exceeding specified BSON size. Set the property to one of the following values:
| |
0 | The maximum allowed size, in bytes, of the stored document for which change events are processed. This includes both the size before and after the database operation; more specifically, it limits the size of the fullDocument and fullDocumentBeforeChange fields of MongoDB change events. | |
|
Specifies the maximum number of milliseconds the oplog/change stream cursor will wait for the server to produce a result before causing an execution timeout exception. A value of | |
No default |
Fully-qualified name of the data collection that is used to send signals to the connector. Use the following format to specify the collection name: | |
source | List of the signaling channel names that are enabled for the connector. By default, the following channels are available:
| |
No default | List of notification channel names that are enabled for the connector. By default, the following channels are available:
| |
|
The maximum number of documents that the connector fetches and reads into memory during an incremental snapshot chunk. Increasing the chunk size provides greater efficiency, because the snapshot runs fewer snapshot queries of a greater size. However, larger chunk sizes also require more memory to buffer the snapshot data. Adjust the chunk size to a value that provides the best performance in your environment. | |
|
Specifies the watermarking mechanism that the connector uses during an incremental snapshot to deduplicate events that might be captured by an incremental snapshot and then recaptured after streaming resumes.
| |
|
The name of the TopicNamingStrategy class that should be used to determine the topic name for data change, schema change, transaction, heartbeat event etc., defaults to | |
|
Specify the delimiter for topic name, defaults to | |
| The size used for holding the topic names in a bounded concurrent hash map. This cache will help to determine the topic name corresponding to a given data collection. | |
|
Controls the name of the topic to which the connector sends heartbeat messages. The topic name has this pattern: | |
|
Controls the name of the topic to which the connector sends transaction metadata messages. The topic name has this pattern: | |
|
Defines tags that customize MBean object names by adding metadata that provides contextual information. Specify a comma-separated list of key-value pairs. Each key represents a tag for the MBean object name, and the corresponding value represents a value for the key, for example, The connector appends the specified tags to the base MBean object name. Tags can help you to organize and categorize metrics data. You can define tags to identify particular application instances, environments, regions, versions, and so forth. For more information, see Customized MBean names. | |
|
Specifies how the connector responds after an operation that results in a retriable error, such as a connection error.
|
Pass-through properties for configuring how the MongoDB connector interacts with the Kafka signaling topic
Debezium provides a set of signal.* properties that control how the connector interacts with the Kafka signaling topic. The following table describes the Kafka signal properties.
Property | Default | Description |
---|---|---|
<topic.prefix>-signal | The name of the Kafka topic that the connector monitors for ad hoc signals. Note If automatic topic creation is disabled, you must manually create the required signaling topic. A signaling topic is required to preserve signal ordering. The signaling topic must have a single partition. | |
kafka-signal | The name of the group ID that is used by Kafka consumers. | |
No default | A list of the host and port pairs that the connector uses to establish its initial connection to the Kafka cluster. Each pair references the Kafka cluster that is used by the Debezium Kafka Connect process. | |
| An integer value that specifies the maximum number of milliseconds that the connector waits when polling signals. | |
| Specifies whether the Kafka consumer writes an offset commit after it reads a message from the signaling topic. The value that you assign to this property determines whether the connector can process requests that the signaling topic receives while the connector is offline. Choose one of the following settings:
|
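For example, a minimal configuration fragment for the Kafka signaling channel might look like the following sketch. The topic name, broker address, and timeout value are placeholders, and the exact property keys (signal.enabled.channels, signal.kafka.topic, signal.kafka.bootstrap.servers, signal.kafka.poll.timeout.ms) are stated here as assumptions based on the signal.* prefix that the preceding table describes:

    {
      "signal.enabled.channels": "source,kafka",
      "signal.kafka.topic": "dbserver1-signal",
      "signal.kafka.bootstrap.servers": "kafka:9092",
      "signal.kafka.poll.timeout.ms": "100"
    }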
Pass-through properties for configuring the MongoDB connector sink notification channel
The following table describes properties that you can use to configure the Debezium sink notification channel.
Property | Default | Description |
---|---|---|
No default |
The name of the topic that receives notifications from Debezium. This property is required when you configure the |
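As a rough sketch, a configuration fragment that enables the sink notification channel might look like the following. The topic name is a placeholder, and the property keys (notification.enabled.channels, notification.sink.topic.name) are assumptions that follow the naming pattern described above:

    {
      "notification.enabled.channels": "sink",
      "notification.sink.topic.name": "debezium-notifications"
    }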
2.3.6. Monitoring Debezium MongoDB connector performance
The Debezium MongoDB connector has two metric types in addition to the built-in support for JMX metrics that Zookeeper, Kafka, and Kafka Connect have.
- Snapshot metrics provide information about connector operation while performing a snapshot.
- Streaming metrics provide information about connector operation when the connector is capturing changes and streaming change event records.
The Debezium monitoring documentation provides details about how to expose these metrics by using JMX.
2.3.6.1. Customized names for MongoDB connector snapshot and streaming MBean objects
Debezium connectors expose metrics via the MBean name for the connector. These metrics, which are specific to each connector instance, provide data about the behavior of the connector’s snapshot, streaming, and schema history processes.
By default, when you deploy a correctly configured connector, Debezium generates a unique MBean name for each of the different connector metrics. To view the metrics for a connector process, you configure your observability stack to monitor its MBean. But these default MBean names depend on the connector configuration; configuration changes can result in changes to the MBean names. A change to the MBean name breaks the linkage between the connector instance and the MBean, disrupting monitoring activity. In this scenario, you must reconfigure the observability stack to use the new MBean name if you want to resume monitoring.
To prevent monitoring disruptions that result from MBean name changes, you can configure custom metrics tags. You configure custom metrics by adding the custom.metric.tags
property to the connector configuration. The property accepts key-value pairs in which each key represents a tag for the MBean object name, and the corresponding value represents the value of that tag. For example: k1=v1,k2=v2
. Debezium appends the specified tags to the MBean name of the connector.
After you configure the custom.metric.tags
property for a connector, you can configure the observability stack to retrieve metrics associated with the specified tags. The observability stack then uses the specified tags, rather than the mutable MBean names, to uniquely identify connectors. Later, if Debezium redefines how it constructs MBean names, or if the topic.prefix
in the connector configuration changes, metrics collection is uninterrupted, because the metrics scrape task uses the specified tag patterns to identify the connector.
A further benefit of using custom tags is that you can use tags that reflect the architecture of your data pipeline, so that metrics are organized in a way that suits your operational needs. For example, you might specify tags with values that declare the type of connector activity, the application context, or the data source, for example, db1-streaming-for-application-abc
. If you specify multiple key-value pairs, all of the specified pairs are appended to the connector’s MBean name.
The following example illustrates how tags modify the default MBean name.
Example 2.23. How custom tags modify the connector MBean name
By default, the MongoDB connector uses the following MBean name for streaming metrics:
debezium.mongodb:type=connector-metrics,context=streaming,server=<topic.prefix>
If you set the value of custom.metric.tags
to database=salesdb-streaming,table=inventory
, Debezium generates the following custom MBean name:
debezium.mongodb:type=connector-metrics,context=streaming,server=<topic.prefix>,database=salesdb-streaming,table=inventory
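For reference, the tags in the preceding example could be supplied through the connector configuration with a fragment such as the following (a minimal sketch; all other connector properties are omitted):

    {
      "custom.metric.tags": "database=salesdb-streaming,table=inventory"
    }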
2.3.6.2. Monitoring Debezium during MongoDB snapshots
The MBean is debezium.mongodb:type=connector-metrics,context=snapshot,server=<topic.prefix>,task=<task.id>
.
Snapshot metrics are not exposed unless a snapshot operation is active, or unless a snapshot has occurred since the last connector start.
The following table lists the snapshot metrics that are available.
Attributes | Type | Description |
---|---|---|
| The last snapshot event that the connector has read. | |
| The number of milliseconds since the connector has read and processed the most recent event. | |
| The total number of events that this connector has seen since last started or reset. | |
| The number of events that have been filtered by include/exclude list filtering rules configured on the connector. | |
| The list of tables that are captured by the connector. | |
| The length of the queue used to pass events between the snapshotter and the main Kafka Connect loop. | |
| The free capacity of the queue used to pass events between the snapshotter and the main Kafka Connect loop. | |
| The total number of tables that are being included in the snapshot. | |
| The number of tables that the snapshot has yet to copy. | |
| Whether the snapshot was started. | |
| Whether the snapshot was paused. | |
| Whether the snapshot was aborted. | |
| Whether the snapshot completed. | |
| The total number of seconds that the snapshot has taken so far, even if not complete. This total also includes the time during which the snapshot was paused. | |
| The total number of seconds that the snapshot was paused. If the snapshot was paused several times, the paused time adds up. | |
| Map containing the number of rows scanned for each table in the snapshot. Tables are incrementally added to the Map during processing. Updates every 10,000 rows scanned and upon completing a table. | |
|
The maximum buffer of the queue in bytes. This metric is available if | |
| The current volume, in bytes, of records in the queue. |
The Debezium MongoDB connector also provides the following custom snapshot metrics:
Attribute | Type | Description |
---|---|---|
|
| Number of database disconnects. |
2.3.6.3. Monitoring Debezium MongoDB connector record streaming
The MBean is debezium.mongodb:type=connector-metrics,context=streaming,server=<topic.prefix>,task=<task.id>
.
The following table lists the streaming metrics that are available.
Attributes | Type | Description |
---|---|---|
| The last streaming event that the connector has read. | |
| The number of milliseconds since the connector has read and processed the most recent event. | |
| The total number of data change events reported by the source database since the last connector start, or since a metrics reset. Represents the data change workload for Debezium to process. | |
| The total number of create events processed by the connector since its last start or metrics reset. | |
| The total number of update events processed by the connector since its last start or metrics reset. | |
| The total number of delete events processed by the connector since its last start or metrics reset. | |
| The number of events that have been filtered by include/exclude list filtering rules configured on the connector. | |
| The list of tables that are captured by the connector. | |
| The length of the queue used to pass events between the streamer and the main Kafka Connect loop. | |
| The free capacity of the queue used to pass events between the streamer and the main Kafka Connect loop. | |
| Flag that denotes whether the connector is currently connected to the database server. | |
| The number of milliseconds between the last change event’s timestamp and the connector processing it. The values will incorporate any differences between the clocks on the machines where the database server and the connector are running. | |
| The number of processed transactions that were committed. | |
| The coordinates of the last received event. | |
| Transaction identifier of the last processed transaction. | |
|
The maximum buffer of the queue in bytes. This metric is available if | |
| The current volume, in bytes, of records in the queue. |
The Debezium MongoDB connector also provides the following custom streaming metrics:
Attribute | Type | Description |
---|---|---|
|
| Number of database disconnects. |
|
| Number of primary node elections. |
2.3.7. How Debezium MongoDB connectors handle faults and problems
Debezium is a distributed system that captures all changes in multiple upstream databases, and will never miss or lose an event. When the system is operating normally and is managed carefully, then Debezium provides exactly once delivery of every change event.
If a fault occurs, the system does not lose any events. However, while it is recovering from the fault, it might repeat some change events. In such situations, Debezium, like Kafka, provides at least once delivery of change events.
The following topics provide details about how the Debezium MongoDB connector handles various kinds of faults and problems.
Configuration and startup errors
In the following situations, the connector fails when trying to start, reports an error or exception in the log, and stops running:
- The connector’s configuration is invalid.
- The connector cannot successfully connect to MongoDB by using the specified connection parameters.
After a failure, the connector attempts to reconnect by using exponential backoff. You can configure the maximum number of reconnection attempts.
In these cases, the error will have more details about the problem and possibly a suggested workaround. The connector can be restarted when the configuration has been corrected or the MongoDB problem has been addressed.
The attempts to reconnect are controlled by three properties:
- connect.backoff.initial.delay.ms - The delay before attempting to reconnect for the first time, with a default of 1 second (1000 milliseconds).
- connect.backoff.max.delay.ms - The maximum delay before attempting to reconnect, with a default of 120 seconds (120,000 milliseconds).
- connect.max.attempts - The maximum number of attempts before an error is produced, with a default of 16.
Each delay is double that of the prior delay, up to the maximum delay. Given the default values, the following table shows the delay for each failed connection attempt and the total accumulated time before failure.
Reconnection attempt number | Delay before attempt, in seconds | Total delay before attempt, in minutes and seconds |
---|---|---|
1 | 1 | 00:01 |
2 | 2 | 00:03 |
3 | 4 | 00:07 |
4 | 8 | 00:15 |
5 | 16 | 00:31 |
6 | 32 | 01:03 |
7 | 64 | 02:07 |
8 | 120 | 04:07 |
9 | 120 | 06:07 |
10 | 120 | 08:07 |
11 | 120 | 10:07 |
12 | 120 | 12:07 |
13 | 120 | 14:07 |
14 | 120 | 16:07 |
15 | 120 | 18:07 |
16 | 120 | 20:07 |
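As a rough sketch, these reconnection properties can be tuned in the MongoDB connector configuration, for example to give up sooner than the defaults shown in the preceding table. The values below are illustrative only, not recommendations:

    {
      "connect.backoff.initial.delay.ms": "1000",
      "connect.backoff.max.delay.ms": "60000",
      "connect.max.attempts": "8"
    }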
Connector Unable to Start - InvalidResumeToken or ChangeStreamHistoryLost
A connector that is stopped for a long period fails to start, and reports the following exception:
Command failed with error 286 (ChangeStreamHistoryLost): 'PlanExecutor error during aggregation :: caused by :: Resume of change stream was not possible, as the resume point may no longer be in the oplog
The preceding exception indicates that the entry that corresponds to the connector’s resume token is no longer present in the oplog. Because the oplog no longer contains the last offset that the connector processed, the connector cannot resume streaming.
You can use either of the following options to recover from the failure:
- Delete the failed connector, and create a new connector with the same configuration but with a different connector name.
- Pause the connector and then remove offsets, or change the offset topic.
To help prevent failures related to missing resume tokens, optimize configuration of the oplog.
Kafka Connect process stops gracefully
If Kafka Connect is being run in distributed mode, and a Kafka Connect process is stopped gracefully, then prior to shutdown of that process, Kafka Connect migrates all of the process’s connector tasks to another Kafka Connect process in that group, and the new connector tasks pick up exactly where the prior tasks left off. There is a short delay in processing while the connector tasks are stopped gracefully and restarted on the new processes.
If the group contains only one process and that process is stopped gracefully, then Kafka Connect will stop the connector and record the last offset for each replica set. Upon restart, the replica set tasks will continue exactly where they left off.
Kafka Connect process crashes
If the Kafka Connect process stops unexpectedly, then any connector tasks it was running will terminate without recording their most recently processed offsets. When Kafka Connect is being run in distributed mode, it will restart those connector tasks on other processes. However, the MongoDB connectors will resume from the last offset recorded by the earlier processes, which means that the new replacement tasks may generate some of the same change events that were processed just prior to the crash. The number of duplicate events depends on the offset flush period and the volume of data changes just before the crash.
Because some events might be duplicated during a recovery from failure, consumers should always anticipate receiving duplicate events. Debezium changes are idempotent, so a sequence of events always results in the same state.
Debezium also includes with each change event message the source-specific information about the origin of the event, including the MongoDB event’s unique transaction identifier (h
) and timestamp (sec
and ord
). Consumers can keep track of these values to determine whether they have already seen a particular event.
Connector fails after it is stopped for a long interval if snapshot.mode
is set to initial
If the connector is gracefully stopped, users might continue to perform operations on replica set members. Changes that occur while the connector is offline continue to be recorded in MongoDB’s oplog. In most cases, after the connector is restarted, it reads the offset value in the oplog to determine the last operation that it streamed for each replica set, and then resumes streaming changes from that point. After the restart, database operations that occurred while the connector was stopped are emitted to Kafka as usual, and after some time, the connector catches up with the database. The amount of time required for the connector to catch up depends on the capabilities and performance of Kafka and the volume of changes that occurred in the database.
However, if the connector remains stopped for a long enough interval, MongoDB might purge the oplog while the connector is inactive, resulting in the loss of information about the connector’s last position. After the connector restarts, it cannot resume streaming, because the oplog no longer contains the previous offset value that marks the last operation that the connector processed. The connector also cannot perform a snapshot, as it typically would when the snapshot.mode
property is set to initial
, and no offset value is present. In this case, a mismatch exists, because the oplog does not contain the value of the previous offset, but the offset value is present in the connector’s internal Kafka offsets topic. An error results and the connector fails.
To recover from the failure, delete the failed connector, and create a new connector with the same configuration but with a different connector name. When you start the new connector, it performs a snapshot to ingest the state of the database, and then resumes streaming.
MongoDB loses writes
In certain failure situations, MongoDB can lose commits, which results in the MongoDB connector being unable to capture the lost changes. For example, if the primary crashes suddenly after it applies a change and records the change to its oplog, the oplog might become unavailable before secondary nodes can read its contents. As a result, the secondary node that is elected as the new primary node might be missing the most recent changes from its oplog.
At this time, there is no way to prevent this side effect in MongoDB.
2.4. Debezium connector for MySQL
MySQL has a binary log (binlog) that records all operations in the order in which they are committed to the database. This includes changes to table schemas as well as changes to the data in tables. MySQL uses the binlog for replication and recovery.
The Debezium MySQL connector reads the binlog, produces change events for row-level INSERT
, UPDATE
, and DELETE
operations, and emits the change events to Kafka topics. Client applications read those Kafka topics.
Because MySQL is typically set up to purge binlogs after a specified period of time, the MySQL connector performs an initial consistent snapshot of each of your databases. The MySQL connector reads the binlog from the point at which the snapshot was made.
For information about the MySQL Database versions that are compatible with this connector, see the Debezium Supported Configurations page.
Information and procedures for using a Debezium MySQL connector are organized as follows:
- Section 2.4.1, “How Debezium MySQL connectors work”
- Section 2.4.2, “Descriptions of Debezium MySQL connector data change events”
- Section 2.4.3, “How Debezium MySQL connectors map data types”
- Section 2.4.4, “Custom converters for mapping MySQL data to alternative data types”
- Section 2.4.5, “Setting up MySQL to run a Debezium connector”
- Section 2.4.6, “Deployment of Debezium MySQL connectors”
- Section 2.4.7, “Monitoring Debezium MySQL connector performance”
- Section 2.4.8, “How Debezium MySQL connectors handle faults and problems”
2.4.1. How Debezium MySQL connectors work
An overview of the MySQL topologies that the connector supports is useful for planning your application. To optimally configure and run a Debezium MySQL connector, it is helpful to understand how the connector tracks the structure of tables, exposes schema changes, performs snapshots, and determines Kafka topic names.
Details are in the following topics:
- Section 2.4.1.1, “MySQL topologies supported by Debezium connectors”
- Section 2.4.1.2, “How Debezium MySQL connectors handle database schema changes”
- Section 2.4.1.3, “How Debezium MySQL connectors expose database schema changes”
- Section 2.4.1.4, “How Debezium MySQL connectors perform database snapshots”
- Section 2.4.1.5, “Ad hoc snapshots”
- Section 2.4.1.6, “Incremental snapshots”
- Section 2.4.1.8, “Default names of Kafka topics that receive Debezium MySQL change event records”
2.4.1.1. MySQL topologies supported by Debezium connectors
The Debezium MySQL connector supports the following MySQL topologies:
- Standalone
- When a single MySQL server is used, the server must have the binlog enabled so the Debezium MySQL connector can monitor the server. This is often acceptable, since the binary log can also be used as an incremental backup. In this case, the MySQL connector always connects to and follows this standalone MySQL server instance.
- Primary and replica
The Debezium MySQL connector can follow one of the primary servers, or one of the replicas (if that replica has its binlog enabled), but the connector detects changes only in the cluster that is visible to that server. Generally, this is not a problem except for the multi-primary topologies.
The connector records its position in the server’s binlog, which is different on each server in the cluster. Therefore, the connector must follow just one MySQL server instance. If that server fails, that server must be restarted or recovered before the connector can continue.
- Highly available clusters
- A variety of high availability solutions exist for MySQL, and they make it significantly easier to tolerate and almost immediately recover from problems and failures. Because HA MySQL clusters use GTIDs, replicas are able to track all of the changes that occur on any primary server.
- Multi-primary
A multi-primary topology uses one or more MySQL replica nodes that each replicate from multiple primary servers. Cluster replication provides a powerful way to aggregate the replication of multiple MySQL clusters.
A Debezium MySQL connector can use these multi-primary MySQL replicas as sources, and can fail over to different multi-primary MySQL replicas as long as the new replica is caught up to the old replica. That is, the new replica has all transactions that were seen on the first replica. This works even if the connector is using only a subset of databases and/or tables, because the connector can be configured to include or exclude specific GTID sources when attempting to reconnect to a new multi-primary MySQL replica and find the correct position in the binlog.
- Hosted
The Debezium MySQL connector can use hosted database options such as Amazon RDS and Amazon Aurora.
Because these hosted options do not permit the use of global read locks, the connector uses table-level locks when it creates a consistent snapshot.
2.4.1.2. How Debezium MySQL connectors handle database schema changes
When a database client queries a database, the client uses the database’s current schema. However, the database schema can be changed at any time, which means that the connector must be able to identify what the schema was at the time each insert, update, or delete operation was recorded. Also, a connector cannot necessarily apply the current schema to every event. If an event is relatively old, it’s possible that it was recorded before the current schema was applied.
To ensure correct processing of events that occur after a schema change, MySQL includes in the transaction log not only the row-level changes that affect the data, but also the DDL statements that are applied to the database. As the connector encounters these DDL statements in the binlog, it parses them and updates an in-memory representation of each table’s schema. The connector uses this schema representation to identify the structure of the tables at the time of each insert, update, or delete operation and to produce the appropriate change event. In a separate database schema history Kafka topic, the connector records all DDL statements along with the position in the binlog where each DDL statement appeared.
When the connector restarts after either a crash or a graceful stop, it starts reading the binlog from a specific position, that is, from a specific point in time. The connector rebuilds the table structures that existed at this point in time by reading the database schema history Kafka topic and parsing all DDL statements up to the point in the binlog where the connector is starting.
This database schema history topic is for internal connector use only. Optionally, the connector can also emit schema change events to a different topic that is intended for consumer applications.
When the MySQL connector captures changes in a table to which a schema change tool such as gh-ost
or pt-online-schema-change
is applied, helper tables are created during the migration process. You must configure the connector to capture changes that occur in these helper tables. If consumers do not need the records that the connector generates for helper tables, configure a single message transform (SMT) to remove these records from the messages that the connector emits, as shown in the example that follows.
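One possible approach is sketched below using the Filter transformation and TopicNameMatches predicate that are built into Kafka Connect. The transform and predicate aliases are arbitrary, and the topic pattern is a hypothetical example that assumes gh-ost-style helper-table names (for example, _customers_gho); adjust the pattern to match the helper tables that your migration tool creates:

    {
      "transforms": "dropHelperTables",
      "transforms.dropHelperTables.type": "org.apache.kafka.connect.transforms.Filter",
      "transforms.dropHelperTables.predicate": "isHelperTableTopic",
      "predicates": "isHelperTableTopic",
      "predicates.isHelperTableTopic.type": "org.apache.kafka.connect.transforms.predicates.TopicNameMatches",
      "predicates.isHelperTableTopic.pattern": ".*\\._.*_(gho|ghc|del)"
    }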
Additional resources
- Default names for topics that receive Debezium event records.
2.4.1.3. How Debezium MySQL connectors expose database schema changes
You can configure a Debezium MySQL connector to produce schema change events that describe schema changes that are applied to tables in the database. The connector writes schema change events to a Kafka topic named <topicPrefix>
, where topicPrefix
is the namespace specified in the topic.prefix
connector configuration property. Messages that the connector sends to the schema change topic contain a payload, and, optionally, also contain the schema of the change event message.
The schema for the schema change event has the following elements:
name
- The name of the schema change event message.
type
- The type of the change event message.
version
- The version of the schema. The version is an integer that is incremented each time the schema is changed.
fields
- The fields that are included in the change event message.
Example: Schema of the MySQL connector schema change topic
The following example shows a typical schema in JSON format.
{ "schema": { "type": "struct", "fields": [ { "type": "string", "optional": false, "field": "databaseName" } ], "optional": false, "name": "io.debezium.connector.mysql.SchemaChangeKey", "version": 1 }, "payload": { "databaseName": "inventory" } }
The payload of a schema change event message includes the following elements:
ddl
-
Provides the SQL
CREATE
,ALTER
, orDROP
statement that results in the schema change. databaseName
-
The name of the database to which the DDL statements are applied. The value of
databaseName
serves as the message key. pos
- The position in the binlog where the statements appear.
tableChanges
-
A structured representation of the entire table schema after the schema change. The
tableChanges
field contains an array that includes entries for each column of the table. Because the structured representation presents data in JSON or Avro format, consumers can easily read messages without first processing them through a DDL parser.
For a table that is in capture mode, the connector not only stores the history of schema changes in the schema change topic, but also in an internal database schema history topic. The internal database schema history topic is for connector use only, and it is not intended for direct use by consuming applications. Ensure that applications that require notifications about schema changes consume that information only from the schema change topic.
Never partition the database schema history topic. For the database schema history topic to function correctly, it must maintain a consistent, global order of the event records that the connector emits to it.
To ensure that the topic is not split among partitions, set the partition count for the topic by using one of the following methods:
-
If you create the database schema history topic manually, specify a partition count of
1
. -
If you use the Apache Kafka broker to create the database schema history topic automatically, set the value of the Kafka
num.partitions
configuration option to1
.
The format of the messages that a connector emits to its schema change topic is in an incubating state and is subject to change without notice.
Example: Message emitted to the MySQL connector schema change topic
The following example shows a typical schema change message in JSON format. The message contains a logical representation of the table schema.
{ "schema": { }, "payload": { "source": { 1 "version": "2.7.3.Final", "connector": "mysql", "name": "mysql", "ts_ms": 1651535750218, 2 "ts_us": 1651535750218000, 3 "ts_ns": 1651535750218000000, 4 "snapshot": "false", "db": "inventory", "sequence": null, "table": "customers", "server_id": 223344, "gtid": null, "file": "mysql-bin.000003", "pos": 570, "row": 0, "thread": null, "query": null }, "databaseName": "inventory", 5 "schemaName": null, "ddl": "ALTER TABLE customers ADD middle_name varchar(255) AFTER first_name", 6 "tableChanges": [ 7 { "type": "ALTER", 8 "id": "\"inventory\".\"customers\"", 9 "table": { 10 "defaultCharsetName": "utf8mb4", "primaryKeyColumnNames": [ 11 "id" ], "columns": [ 12 { "name": "id", "jdbcType": 4, "nativeType": null, "typeName": "INT", "typeExpression": "INT", "charsetName": null, "length": null, "scale": null, "position": 1, "optional": false, "autoIncremented": true, "generated": true }, { "name": "first_name", "jdbcType": 12, "nativeType": null, "typeName": "VARCHAR", "typeExpression": "VARCHAR", "charsetName": "utf8mb4", "length": 255, "scale": null, "position": 2, "optional": false, "autoIncremented": false, "generated": false }, { "name": "middle_name", "jdbcType": 12, "nativeType": null, "typeName": "VARCHAR", "typeExpression": "VARCHAR", "charsetName": "utf8mb4", "length": 255, "scale": null, "position": 3, "optional": true, "autoIncremented": false, "generated": false }, { "name": "last_name", "jdbcType": 12, "nativeType": null, "typeName": "VARCHAR", "typeExpression": "VARCHAR", "charsetName": "utf8mb4", "length": 255, "scale": null, "position": 4, "optional": false, "autoIncremented": false, "generated": false }, { "name": "email", "jdbcType": 12, "nativeType": null, "typeName": "VARCHAR", "typeExpression": "VARCHAR", "charsetName": "utf8mb4", "length": 255, "scale": null, "position": 5, "optional": false, "autoIncremented": false, "generated": false } ], "attributes": [ 13 { "customAttribute": "attributeValue" } ] } } ] } }
Item | Field name | Description |
---|---|---|
1 |
|
The |
2 |
|
Optional field that displays the time at which the connector processed the event. The time is based on the system clock in the JVM running the Kafka Connect task. |
3 |
|
Identifies the database and the schema that contains the change. The value of the |
4 |
|
This field contains the DDL that is responsible for the schema change. The |
5 |
| An array of one or more items that contain the schema changes generated by a DDL command. |
6 |
| Describes the kind of change. The value is one of the following:
|
7 |
|
Full identifier of the table that was created, altered, or dropped. In the case of a table rename, this identifier is a concatenation of |
8 |
| Represents table metadata after the applied change. |
9 |
| List of columns that compose the table’s primary key. |
10 |
| Metadata for each column in the changed table. |
11 |
| Custom attribute metadata for each table change. |
For more information, see schema history topic.
2.4.1.4. How Debezium MySQL connectors perform database snapshots
When a Debezium MySQL connector is first started, it performs an initial consistent snapshot of your database. This snapshot enables the connector to establish a baseline for the current state of the database.
Debezium can use different modes when it runs a snapshot. The snapshot mode is determined by the snapshot.mode
configuration property. The default value of the property is initial
. You can customize the way that the connector creates snapshots by changing the value of the snapshot.mode
property.
You can find more information about snapshots in the following sections:
The connector completes a series of tasks when it performs the snapshot. The exact steps vary with the snapshot mode and with the table locking policy that is in effect for the database. The Debezium MySQL connector completes different steps when it performs an initial snapshot that uses a global read lock or table-level locks.
2.4.1.4.1. Initial snapshots that use a global read lock
You can customize the way that the connector creates snapshots by changing the value of the snapshot.mode
property. If you configure a different snapshot mode, the connector completes the snapshot by using a modified version of this workflow. For information about the snapshot process in environments that do not permit global read locks, see the snapshot workflow for table-level locks.
Default workflow that the Debezium MySQL connector uses to perform an initial snapshot with a global read lock
The following table shows the steps in the workflow that Debezium follows to create a snapshot with a global read lock.
Step | Action |
---|---|
1 | Establish a connection to the database. |
2 |
Determine the tables to be captured. By default, the connector captures the data for all non-system tables. After the snapshot completes, the connector continues to stream data for the specified tables. If you want the connector to capture data only from specific tables, you can direct the connector to capture the data for only a subset of tables or table elements by setting properties such as |
3 |
Obtain a global read lock on the tables to be captured to block writes by other database clients. |
4 Note The use of these isolation semantics can slow the progress of the snapshot. If the snapshot takes too long to complete, consider using a different isolation configuration, or skip the initial snapshot and run an incremental snapshot instead. | 5 |
Read the current binlog position. | 6 |
Capture the structure of all tables in the database, or all tables that are designated for capture. The connector persists schema information in its internal database schema history topic, including all necessary Note
By default, the connector captures the schema of every table in the database, including tables that are not configured for capture. If tables are not configured for capture, the initial snapshot captures only their structure; it does not capture any table data. | 7 |
Release the global read lock obtained in Step 3. Other database clients can now write to the database. | 8 |
At the binlog position that the connector read in Step 5, the connector begins to scan the tables that are designated for capture. During the scan, the connector completes the following tasks:
| 9 |
Commit the transaction. | 10 |
The resulting initial snapshot captures the current state of each row in the captured tables. From this baseline state, the connector captures subsequent changes as they occur.
After the snapshot process begins, if the process is interrupted due to connector failure, rebalancing, or other reasons, the process restarts after the connector restarts.
After the connector completes the initial snapshot, it continues streaming from the position that it read in Step 5 so that it does not miss any updates.
If the connector stops again for any reason, after it restarts, it resumes streaming changes from where it previously left off.
After the connector restarts, if the logs have been pruned, the connector’s position in the logs might no longer be available. The connector then fails, and returns an error that indicates that a new snapshot is required. To configure the connector to automatically initiate a snapshot in this situation, set the value of the snapshot.mode
property to when_needed
. For more tips on troubleshooting the Debezium MySQL connector, see behavior when things go wrong.
2.4.1.4.2. Initial snapshots that use table-level locks
In some database environments, administrators do not permit global read locks. If the Debezium MySQL connector detects that global read locks are not permitted, the connector uses table-level locks when it performs snapshots. For the connector to perform a snapshot that uses table-level locks, the database account that the Debezium connector uses to connect to MySQL must have LOCK TABLES
privileges.
Default workflow that the Debezium MySQL connector uses to perform an initial snapshot with table-level locks
The following table shows the steps in the workflow that Debezium follows to create a snapshot with table-level read locks. For information about the snapshot process in environments that do not permit global read locks, see the snapshot workflow for global read locks.
Step | Action |
---|---|
1 | Establish a connection to the database. |
2 |
Determine the tables to be captured. By default, the connector captures all non-system tables. To have the connector capture a subset of tables or table elements, you can set a number of |
3 | Obtain table-level locks. |
4 | 5 |
Read the current binlog position. | 6 |
Read the schema of the databases and tables for which the connector is configured to capture changes. The connector persists schema information in its internal database schema history topic, including all necessary Note By default, the connector captures the schema of every table in the database, including tables that are not configured for capture. If tables are not configured for capture, the initial snapshot captures only their structure; it does not capture any table data. For more information about why snapshots persist schema information for tables that you did not include in the initial snapshot, see Understanding why initial snapshots capture the schema for all tables. | 7 |
At the binlog position that the connector read in Step 5, the connector begins to scan the tables that are designated for capture. During the scan, the connector completes the following tasks:
| 8 |
Commit the transaction. | 9 |
Release the table-level locks. Other database clients can now write to any previously locked tables. | 10 |
Setting | Description |
---|---|
| The connector performs a snapshot every time that it starts. The snapshot includes the structure and data of the captured tables. Specify this value to populate topics with a complete representation of the data from the captured tables every time that the connector starts. After the snapshot completes, the connector begins to stream event records for subsequent database changes. |
| The connector performs a database snapshot as described in the default workflow for creating an initial snapshot. After the snapshot completes, the connector begins to stream event records for subsequent database changes. |
| The connector performs a database snapshot. After the snapshot completes, the connector stops, and does not stream event records for subsequent database changes. |
|
Deprecated, see |
|
The connector captures the structure of all relevant tables, performing all the steps described in the default workflow for creating an initial snapshot, except that it does not create |
|
When the connector starts, rather than performing a snapshot, it immediately begins to stream event records for subsequent database changes. This option is under consideration for future deprecation, in favor of the |
|
Deprecated, see |
|
Set this option to restore a database schema history topic that is lost or corrupted. After a restart, the connector runs a snapshot that rebuilds the topic from the source tables. You can also set the property to periodically prune a database schema history topic that experiences unexpected growth. |
| After the connector starts, it performs a snapshot only if it detects one of the following circumstances:
|
For more information, see snapshot.mode
in the table of connector configuration properties.
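For example, a minimal configuration fragment that overrides the default snapshot mode might look like the following sketch; when_needed is just one of the values described in the preceding table:

    {
      "snapshot.mode": "when_needed"
    }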
2.4.1.4.3. Description of why initial snapshots capture the schema history for all tables
The initial snapshot that a connector runs captures two types of information:
- Table data
-
Information about
INSERT
,UPDATE
, andDELETE
operations in tables that are named in the connector’stable.include.list
property. - Schema data
- DDL statements that describe the structural changes that are applied to tables. Schema data is persisted to both the internal schema history topic, and to the connector’s schema change topic, if one is configured.
After you run an initial snapshot, you might notice that the snapshot captures schema information for tables that are not designated for capture. By default, initial snapshots are designed to capture schema information for every table that is present in the database, not only from tables that are designated for capture. Connectors require that the table’s schema is present in the schema history topic before they can capture a table. By enabling the initial snapshot to capture schema data for tables that are not part of the original capture set, Debezium prepares the connector to readily capture event data from these tables should that later become necessary. If the initial snapshot does not capture a table’s schema, you must add the schema to the history topic before the connector can capture data from the table.
In some cases, you might want to limit schema capture in the initial snapshot. This can be useful when you want to reduce the time required to complete a snapshot, or when Debezium connects to the database instance through a user account that has access to multiple logical databases, but you want the connector to capture changes only from tables in a specific logical database.
Additional information
- Capturing data from tables not captured by the initial snapshot (no schema change)
- Capturing data from tables not captured by the initial snapshot (schema change)
-
Setting the
schema.history.internal.store.only.captured.tables.ddl
property to specify the tables from which to capture schema information. -
Setting the
schema.history.internal.store.only.captured.databases.ddl
property to specify the logical databases from which to capture schema changes.
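The following fragment is a minimal sketch of how these two properties might be set to limit schema capture to the captured tables and logical databases only. The values shown are illustrative choices, not defaults:

    {
      "schema.history.internal.store.only.captured.tables.ddl": "true",
      "schema.history.internal.store.only.captured.databases.ddl": "true"
    }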
2.4.1.4.4. Capturing data from tables not captured by the initial snapshot (no schema change)
In some cases, you might want the connector to capture data from a table whose schema was not captured by the initial snapshot. Depending on the connector configuration, the initial snapshot might capture the table schema only for specific tables in the database. If the table schema is not present in the history topic, the connector fails to capture the table, and reports a missing schema error.
You might still be able to capture data from the table, but you must perform additional steps to add the table schema.
Prerequisites
- You want to capture data from a table with a schema that the connector did not capture during the initial snapshot.
- In the transaction log, all entries for the table use the same schema. For information about capturing data from a new table that has undergone structural changes, see Capturing data from tables not captured by the initial snapshot (schema change).
Procedure
- Stop the connector.
-
Remove the internal database schema history topic that is specified by the
schema.history.internal.kafka.topic property
. Apply the following changes to the connector configuration:
-
Set the
snapshot.mode
toschema_only_recovery
. -
Set the value of
schema.history.internal.store.only.captured.tables.ddl
tofalse
. -
Add the tables that you want the connector to capture to
table.include.list
. This guarantees that in the future, the connector can reconstruct the schema history for all tables.
-
Set the
- Restart the connector. The snapshot recovery process rebuilds the schema history based on the current structure of the tables.
- (Optional) After the snapshot completes, initiate an incremental snapshot to capture existing data for newly added tables along with changes to other tables that occurred while the connector was offline.
-
(Optional) Reset the
snapshot.mode
back toschema_only
to prevent the connector from initiating recovery after a future restart.
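To make the preceding procedure concrete, the configuration changes that it describes might look like the following fragment. This is a sketch only; the table names are placeholders:

    {
      "snapshot.mode": "schema_only_recovery",
      "schema.history.internal.store.only.captured.tables.ddl": "false",
      "table.include.list": "inventory.customers,inventory.orders"
    }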
2.4.1.4.5. Capturing data from tables not captured by the initial snapshot (schema change)
If a schema change is applied to a table, records that are committed before the schema change have different structures than those that were committed after the change. When Debezium captures data from a table, it reads the schema history to ensure that it applies the correct schema to each event. If the schema is not present in the schema history topic, the connector is unable to capture the table, and an error results.
If you want to capture data from a table that was not captured by the initial snapshot, and the schema of the table was modified, you must add the schema to the history topic, if it is not already available. You can add the schema by running a new schema snapshot, or by running an initial snapshot for the table.
Prerequisites
- You want to capture data from a table with a schema that the connector did not capture during the initial snapshot.
- A schema change was applied to the table so that the records to be captured do not have a uniform structure.
Procedure
- Initial snapshot captured the schema for all tables (
store.only.captured.tables.ddl
was set tofalse
) -
Edit the
table.include.list
property to specify the tables that you want to capture. - Restart the connector.
- Initiate an incremental snapshot if you want to capture existing data from the newly added tables.
-
Edit the
- Initial snapshot did not capture the schema for all tables (
store.only.captured.tables.ddl
was set totrue
) If the initial snapshot did not save the schema of the table that you want to capture, complete one of the following procedures:
- Procedure 1: Schema snapshot, followed by incremental snapshot
In this procedure, the connector first performs a schema snapshot. You can then initiate an incremental snapshot to enable the connector to synchronize data.
- Stop the connector.
-
Remove the internal database schema history topic that is specified by the
schema.history.internal.kafka.topic property
. Clear the offsets in the configured Kafka Connect
offset.storage.topic
. For more information about how to remove offsets, see the Debezium community FAQ.WarningRemoving offsets should be performed only by advanced users who have experience in manipulating internal Kafka Connect data. This operation is potentially destructive, and should be performed only as a last resort.
Set values for properties in the connector configuration as described in the following steps:
-
Set the value of the
snapshot.mode
property toschema_only
. -
Edit the
table.include.list
to add the tables that you want to capture.
-
Set the value of the
- Restart the connector.
- Wait for Debezium to capture the schema of the new and existing tables. Data changes that occurred in any tables after the connector stopped are not captured.
- To ensure that no data is lost, initiate an incremental snapshot.
- Procedure 2: Initial snapshot, followed by optional incremental snapshot
In this procedure, the connector performs a full initial snapshot of the database. As with any initial snapshot, in a database with many large tables, running an initial snapshot can be a time-consuming operation. After the snapshot completes, you can optionally trigger an incremental snapshot to capture any changes that occur while the connector is offline.
- Stop the connector.
-
Remove the internal database schema history topic that is specified by the
schema.history.internal.kafka.topic property
. Clear the offsets in the configured Kafka Connect
offset.storage.topic
. For more information about how to remove offsets, see the Debezium community FAQ.WarningRemoving offsets should be performed only by advanced users who have experience in manipulating internal Kafka Connect data. This operation is potentially destructive, and should be performed only as a last resort.
-
Edit the
table.include.list
to add the tables that you want to capture. Set values for properties in the connector configuration as described in the following steps:
-
Set the value of the
snapshot.mode
property toinitial
. -
(Optional) Set
schema.history.internal.store.only.captured.tables.ddl
tofalse
.
-
Set the value of the
- Restart the connector. The connector takes a full database snapshot. After the snapshot completes, the connector transitions to streaming.
- (Optional) To capture any data that changed while the connector was off-line, initiate an incremental snapshot.
2.4.1.5. Ad hoc snapshots
By default, a connector runs an initial snapshot operation only after it starts for the first time. Following this initial snapshot, under normal circumstances, the connector does not repeat the snapshot process. Any future change event data that the connector captures comes in through the streaming process only.
However, in some situations the data that the connector obtained during the initial snapshot might become stale, lost, or incomplete. To provide a mechanism for recapturing table data, Debezium includes an option to perform ad hoc snapshots. You might want to perform an ad hoc snapshot after any of the following changes occur in your Debezium environment:
- The connector configuration is modified to capture a different set of tables.
- Kafka topics are deleted and must be rebuilt.
- Data corruption occurs due to a configuration error or some other problem.
You can re-run a snapshot for a table for which you previously captured a snapshot by initiating a so-called ad hoc snapshot. Ad hoc snapshots require the use of signaling tables. You initiate an ad hoc snapshot by sending a signal request to the Debezium signaling table.
When you initiate an ad hoc snapshot of an existing table, the connector appends content to the topic that already exists for the table. If a previously existing topic was removed, Debezium can create a topic automatically if automatic topic creation is enabled.
Ad hoc snapshot signals specify the tables to include in the snapshot. The snapshot can capture the entire contents of the database, or capture only a subset of the tables in the database. Also, the snapshot can capture a subset of the contents of the table(s) in the database.
You specify the tables to capture by sending an execute-snapshot
message to the signaling table. Set the type of the execute-snapshot
signal to incremental
or blocking
, and provide the names of the tables to include in the snapshot, as described in the following table:
Field | Default | Value |
---|---|---|
|
|
Specifies the type of snapshot that you want to run. |
| N/A |
An array that contains regular expressions matching the fully-qualified names of the tables to include in the snapshot. |
| N/A |
An optional array that specifies a set of additional conditions that the connector evaluates to determine the subset of records to include in a snapshot.
|
| N/A | An optional string that specifies the column name that the connector uses as the primary key of a table during the snapshot process. |
Triggering an ad hoc incremental snapshot
You initiate an ad hoc incremental snapshot by adding an entry with the execute-snapshot
signal type to the signaling table, or by sending a signal message to a Kafka signaling topic. After the connector processes the message, it begins the snapshot operation. The snapshot process reads the first and last primary key values and uses those values as the start and end point for each table. Based on the number of entries in the table, and the configured chunk size, Debezium divides the table into chunks, and proceeds to snapshot each chunk, in succession, one at a time.
For more information, see Incremental snapshots.
Triggering an ad hoc blocking snapshot
You initiate an ad hoc blocking snapshot by adding an entry with the execute-snapshot
signal type to the signaling table or signaling topic. After the connector processes the message, it begins the snapshot operation. The connector temporarily stops streaming, and then initiates a snapshot of the specified table, following the same process that it uses during an initial snapshot. After the snapshot completes, the connector resumes streaming.
For more information, see Blocking snapshots.
2.4.1.6. Incremental snapshots
To provide flexibility in managing snapshots, Debezium includes a supplementary snapshot mechanism, known as incremental snapshotting. Incremental snapshots rely on the Debezium mechanism for sending signals to a Debezium connector.
In an incremental snapshot, instead of capturing the full state of a database all at once, as in an initial snapshot, Debezium captures each table in phases, in a series of configurable chunks. You can specify the tables that you want the snapshot to capture and the size of each chunk. The chunk size determines the number of rows that the snapshot collects during each fetch operation on the database. The default chunk size for incremental snapshots is 1024 rows.
As an incremental snapshot proceeds, Debezium uses watermarks to track its progress, maintaining a record of each table row that it captures. This phased approach to capturing data provides the following advantages over the standard initial snapshot process:
- You can run incremental snapshots in parallel with streamed data capture, instead of postponing streaming until the snapshot completes. The connector continues to capture near real-time events from the change log throughout the snapshot process, and neither operation blocks the other.
- If the progress of an incremental snapshot is interrupted, you can resume it without losing any data. After the process resumes, the snapshot begins at the point where it stopped, rather than recapturing the table from the beginning.
-
You can run an incremental snapshot on demand at any time, and repeat the process as needed to adapt to database updates. For example, you might re-run a snapshot after you modify the connector configuration to add a table to its
table.include.list
property.
Incremental snapshot process
When you run an incremental snapshot, Debezium sorts each table by primary key and then splits the table into chunks based on the configured chunk size. Working chunk by chunk, it then captures each table row in a chunk. For each row that it captures, the snapshot emits a READ
event. That event represents the value of the row when the snapshot for the chunk began.
As a snapshot proceeds, it’s likely that other processes continue to access the database, potentially modifying table records. To reflect such changes, INSERT
, UPDATE
, or DELETE
operations are committed to the transaction log as per usual. Similarly, the ongoing Debezium streaming process continues to detect these change events and emits corresponding change event records to Kafka.
How Debezium resolves collisions among records with the same primary key
In some cases, the UPDATE
or DELETE
events that the streaming process emits are received out of sequence. That is, the streaming process might emit an event that modifies a table row before the snapshot captures the chunk that contains the READ
event for that row. When the snapshot eventually emits the corresponding READ
event for the row, its value is already superseded. To ensure that incremental snapshot events that arrive out of sequence are processed in the correct logical order, Debezium employs a buffering scheme for resolving collisions. Only after collisions between the snapshot events and the streamed events are resolved does Debezium emit an event record to Kafka.
Snapshot window
To assist in resolving collisions between late-arriving READ
events and streamed events that modify the same table row, Debezium employs a so-called snapshot window. The snapshot window demarcates the interval during which an incremental snapshot captures data for a specified table chunk. Before the snapshot window for a chunk opens, Debezium follows its usual behavior and emits events from the transaction log directly downstream to the target Kafka topic. But from the moment that the snapshot for a particular chunk opens, until it closes, Debezium performs a de-duplication step to resolve collisions between events that have the same primary key.
For each data collection, Debezium emits two types of events, and stores the records for them both in a single destination Kafka topic. The snapshot records that it captures directly from a table are emitted as READ
operations. Meanwhile, as users continue to update records in the data collection, and the transaction log is updated to reflect each commit, Debezium emits UPDATE
or DELETE
operations for each change.
As the snapshot window opens, and Debezium begins processing a snapshot chunk, it delivers snapshot records to a memory buffer. During the snapshot window, the primary keys of the READ
events in the buffer are compared to the primary keys of the incoming streamed events. If no match is found, the streamed event record is sent directly to Kafka. If Debezium detects a match, it discards the buffered READ
event, and writes the streamed record to the destination topic, because the streamed event logically supersedes the static snapshot event. After the snapshot window for the chunk closes, the buffer contains only READ
events for which no related transaction log events exist. Debezium emits these remaining READ
events to the table’s Kafka topic.
The connector repeats the process for each snapshot chunk.
Currently, you can use either of the following methods to initiate an incremental snapshot:
- Send an ad hoc snapshot signal to the signaling table on the source database.
- Send a signal message to the configured Kafka signaling topic.
2.4.1.6.1. Triggering an incremental snapshot
To initiate an incremental snapshot, you can send an ad hoc snapshot signal to the signaling table on the source database. You submit snapshot signals as SQL INSERT
queries.
After Debezium detects the change in the signaling table, it reads the signal, and runs the requested snapshot operation.
The query that you submit specifies the tables to include in the snapshot, and, optionally, specifies the type of snapshot operation. Debezium currently supports the incremental
and blocking
snapshot types.
To specify the tables to include in the snapshot, provide a data-collections
array that lists the tables, or an array of regular expressions used to match tables, for example,
{"data-collections": ["public.MyFirstTable", "public.MySecondTable"]}
The data-collections
array for an incremental snapshot signal has no default value. If the data-collections
array is empty, Debezium interprets the empty array to mean that no action is required, and it does not perform a snapshot.
If the name of a table that you want to include in a snapshot contains a dot (.
), a space, or some other non-alphanumeric character, you must escape the table name in double quotes.
For example, to include a table that exists in the db1
database, and that has the name My.Table
, use the following format: "db1.\"My.Table\""
.
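For example, the data field of a signal that targets that table might look like the following sketch; the surrounding values are illustrative:
{"data-collections": ["db1.\"My.Table\""], "type": "incremental"}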
Prerequisites
- A signaling data collection exists on the source database.
-
The signaling data collection is specified in the
signal.data.collection
property.
Using a source signaling channel to trigger an incremental snapshot
Send a SQL query to add the ad hoc incremental snapshot request to the signaling table:
INSERT INTO <signalTable> (id, type, data) VALUES ('<id>', '<snapshotType>', '{"data-collections": ["<fullyQualfiedTableName>","<fullyQualfiedTableName>"],"type":"<snapshotType>","additional-conditions":[{"data-collection": "<fullyQualfiedTableName>", "filter": "<additional-condition>"}]}');
For example,
INSERT INTO db1.debezium_signal (id, type, data) 1 values ('ad-hoc-1', 2 'execute-snapshot', 3 '{"data-collections": ["db1.table1", "db1.table2"], 4 "type":"incremental", 5 "additional-conditions":[{"data-collection": "db1.table1" ,"filter":"color=\'blue\'"}]}'); 6
The values of the
id
,type
, anddata
parameters in the command correspond to the fields of the signaling table.
The following table describes the parameters in the example:Table 2.77. Descriptions of fields in a SQL command for sending an incremental snapshot signal to the signaling table Item Value Description 1
database.debezium_signal
Specifies the fully-qualified name of the signaling table on the source database.
2
ad-hoc-1
The
id
parameter specifies an arbitrary string that is assigned as theid
identifier for the signal request.
Use this string to correlate logging messages with entries in the signaling table. Debezium does not use this string. Rather, during the snapshot, Debezium generates its own id
string as a watermarking signal.3
execute-snapshot
The
type
parameter specifies the operation that the signal is intended to trigger.
4
data-collections
A required component of the
data
field of a signal that specifies an array of table names or regular expressions to match table names to include in the snapshot.
The array lists regular expressions that use the formatdatabase.table
to match the fully-qualified names of the tables. This format is the same as the one that you use to specify the name of the connector’s signaling table.5
incremental
An optional
type
component of thedata
field of a signal that specifies the type of snapshot operation to run.
Valid values areincremental
andblocking
.
If you do not specify a value, the connector defaults to performing an incremental snapshot.6
additional-conditions
An optional array that specifies a set of additional conditions that the connector evaluates to determine the subset of records to include in a snapshot.
Each additional condition is an object withdata-collection
andfilter
properties. You can specify different filters for each data collection.
* Thedata-collection
property is the fully-qualified name of the data collection that the filter applies to. For more information about theadditional-conditions
parameter, see Section 2.4.1.6.2, “Running an ad hoc incremental snapshot with additional-conditions”.
2.4.1.6.2. Running an ad hoc incremental snapshot with additional-conditions
If you want a snapshot to include only a subset of the content in a table, you can modify the signal request by appending an additional-conditions
parameter to the snapshot signal.
The SQL query for a typical snapshot takes the following form:
SELECT * FROM <tableName> ....
By adding an additional-conditions
parameter, you append a WHERE
condition to the SQL query, as in the following example:
SELECT * FROM <data-collection> WHERE <filter> ....
The following example shows a SQL query to send an ad hoc incremental snapshot request with an additional condition to the signaling table:
INSERT INTO <signalTable> (id, type, data) VALUES ('<id>', '<snapshotType>', '{"data-collections": ["<fullyQualfiedTableName>","<fullyQualfiedTableName>"],"type":"<snapshotType>","additional-conditions":[{"data-collection": "<fullyQualfiedTableName>", "filter": "<additional-condition>"}]}');
For example, suppose you have a products
table that contains the following columns:
-
id
(primary key) -
color
-
quantity
If you want an incremental snapshot of the products
table to include only the data items where color=blue
, you can use the following SQL statement to trigger the snapshot:
INSERT INTO db1.debezium_signal (id, type, data) VALUES('ad-hoc-1', 'execute-snapshot', '{"data-collections": ["db1.products"],"type":"incremental", "additional-conditions":[{"data-collection": "db1.products", "filter": "color=\'blue\'"}]}');
The additional-conditions
parameter also enables you to pass conditions that are based on more than one column. For example, using the products
table from the previous example, you can submit a query that triggers an incremental snapshot that includes the data of only those items for which color=blue
and quantity>10
:
INSERT INTO db1.debezium_signal (id, type, data) VALUES('ad-hoc-1', 'execute-snapshot', '{"data-collections": ["db1.products"],"type":"incremental", "additional-conditions":[{"data-collection": "db1.products", "filter": "color=\'blue\' AND quantity>10"}]}');
The following example shows the JSON for an incremental snapshot event that is captured by a connector.
Example 2.24. Incremental snapshot event message
{ "before":null, "after": { "pk":"1", "value":"New data" }, "source": { ... "snapshot":"incremental" 1 }, "op":"r", 2 "ts_ms":"1620393591654", "ts_us":"1620393591654547", "ts_ns":"1620393591654547920", "transaction":null }
Item | Field name | Description |
---|---|---|
1 |
|
Specifies the type of snapshot operation to run. |
2 |
|
Specifies the event type. |
2.4.1.6.3. Using the Kafka signaling channel to trigger an incremental snapshot
You can send a message to the configured Kafka topic to request the connector to run an ad hoc incremental snapshot.
The key of the Kafka message must match the value of the topic.prefix
connector configuration option.
The value of the message is a JSON object with type
and data
fields.
The signal type is execute-snapshot
, and the data
field must have the following fields:
Field | Default | Value |
---|---|---|
|
|
The type of the snapshot to be executed. Currently Debezium supports the incremental and blocking types. |
| N/A |
An array of comma-separated regular expressions that match the fully-qualified names of tables to include in the snapshot. |
| N/A |
An optional array of additional conditions that specifies criteria that the connector evaluates to designate a subset of records to include in a snapshot. |
Example 2.25. An execute-snapshot
Kafka message
Key = `test_connector` Value = `{"type":"execute-snapshot","data": {"data-collections": ["{collection-container}.table1", "{collection-container}.table2"], "type": "INCREMENTAL"}}`
Ad hoc incremental snapshots with additional-conditions
Debezium uses the additional-conditions
field to select a subset of a table’s content.
Typically, when Debezium runs a snapshot, it runs a SQL query such as:
SELECT * FROM <tableName> ….
When the snapshot request includes an additional-conditions
property, the data-collection
and filter
parameters of the property are appended to the SQL query, for example:
SELECT * FROM <data-collection> WHERE <filter> ….
For example, given a products
table with the columns id
(primary key), color
, and brand
, if you want a snapshot to include only content for which color='blue'
, when you request the snapshot, you could add the additional-conditions
property to filter the content:
Key = `test_connector` Value = `{"type":"execute-snapshot","data": {"data-collections": ["db1.products"], "type": "INCREMENTAL", "additional-conditions": [{"data-collection": "db1.products" ,"filter":"color='blue'"}]}}`
You can also use the additional-conditions
property to pass conditions based on multiple columns. For example, using the same products
table as in the previous example, if you want a snapshot to include only the content from the products
table for which color='blue'
, and brand='MyBrand'
, you could send the following request:
Key = `test_connector` Value = `{"type":"execute-snapshot","data": {"data-collections": ["db1.products"], "type": "INCREMENTAL", "additional-conditions": [{"data-collection": "db1.products" ,"filter":"color='blue' AND brand='MyBrand'"}]}}`
2.4.1.6.4. Stopping an incremental snapshot
In some situations, it might be necessary to stop an incremental snapshot. For example, you might realize that the snapshot was not configured correctly, or you might want to ensure that resources are available for other database operations. You can stop a snapshot that is already running by sending a signal to the signaling table on the source database.
You submit a stop snapshot signal to the signaling table by sending it in a SQL INSERT
query. The stop-snapshot signal specifies the type
of the snapshot operation as incremental
, and optionally specifies the tables that you want to omit from the currently running snapshot. After Debezium detects the change in the signaling table, it reads the signal, and stops the incremental snapshot operation if it’s in progress.
Additional resources
You can also stop an incremental snapshot by sending a JSON message to the Kafka signaling topic.
Prerequisites
- A signaling data collection exists on the source database.
-
The signaling data collection is specified in the
signal.data.collection
property.
Using a source signaling channel to stop an incremental snapshot
Send a SQL query to stop the ad hoc incremental snapshot to the signaling table:
INSERT INTO <signalTable> (id, type, data) values ('<id>', 'stop-snapshot', '{"data-collections": ["<fullyQualfiedTableName>","<fullyQualfiedTableName>"],"type":"incremental"}');
For example,
INSERT INTO db1.debezium_signal (id, type, data) 1 values ('ad-hoc-1', 2 'stop-snapshot', 3 '{"data-collections": ["db1.table1", "db1.table2"], 4 "type":"incremental"}'); 5
The values of the
id
,type
, anddata
parameters in the signal command correspond to the fields of the signaling table.
The following table describes the parameters in the example:Table 2.80. Descriptions of fields in a SQL command for sending a stop incremental snapshot signal to the signaling table Item Value Description 1
database.debezium_signal
Specifies the fully-qualified name of the signaling table on the source database.
2
ad-hoc-1
The
id
parameter specifies an arbitrary string that is assigned as theid
identifier for the signal request.
Use this string to correlate logging messages with entries in the signaling table. Debezium does not use this string.3
stop-snapshot
The
type
parameter specifies the operation that the signal is intended to trigger.
4
data-collections
An optional component of the
data
field of a signal that specifies an array of table names or regular expressions to match table names to remove from the snapshot.
The array lists regular expressions that match tables by their fully-qualified names in the format database.table.
If you omit this component from the
data
field, the signal stops the entire incremental snapshot that is in progress.5
incremental
A required component of the
data
field of a signal that specifies the type of snapshot operation that is to be stopped.
Currently, the only valid option isincremental
.
If you do not specify atype
value, the signal fails to stop the incremental snapshot.
2.4.1.6.5. Using the Kafka signaling channel to stop an incremental snapshot
You can send a signal message to the configured Kafka signaling topic to stop an ad hoc incremental snapshot.
The key of the Kafka message must match the value of the topic.prefix
connector configuration option.
The value of the message is a JSON object with type
and data
fields.
The signal type is stop-snapshot
, and the data
field must have the following fields:
Field | Default | Value |
---|---|---|
|
|
The type of the snapshot to be executed. Currently Debezium supports only the incremental type. |
| N/A |
An optional array of comma-separated regular expressions that match the fully-qualified names of the tables to remove from the snapshot. |
The following example shows a typical stop-snapshot
Kafka message:
Key = `test_connector` Value = `{"type":"stop-snapshot","data": {"data-collections": ["db1.table1", "db1.table2"], "type": "INCREMENTAL"}}`
2.4.1.7. Blocking snapshots
To provide more flexibility in managing snapshots, Debezium includes a supplementary ad hoc snapshot mechanism, known as a blocking snapshot. Blocking snapshots rely on the Debezium mechanism for sending signals to a Debezium connector.
A blocking snapshot behaves just like an initial snapshot, except that you can trigger it at run time.
You might want to run a blocking snapshot rather than use the standard initial snapshot process in the following situations:
- You add a new table and you want to complete the snapshot while the connector is running.
- You add a large table, and you want the snapshot to complete in less time than is possible with an incremental snapshot.
Blocking snapshot process
When you run a blocking snapshot, Debezium stops streaming, and then initiates a snapshot of the specified table, following the same process that it uses during an initial snapshot. After the snapshot completes, streaming resumes.
Configure snapshot
You can set the following properties in the data
component of a signal:
- data-collections: Specifies the tables that the snapshot must capture.
- additional-conditions: Specifies different filters for different tables.
- The data-collection property is the fully-qualified name of the table to which the filter applies.
- The filter property has the same value that you would use in the snapshot.select.statement.overrides property.
For example:
{"type": "blocking", "data-collections": ["schema1.table1", "schema1.table2"], "additional-conditions": [{"data-collection": "schema1.table1", "filter": "SELECT * FROM [schema1].[table1] WHERE column1 = 0 ORDER BY column2 DESC"}, {"data-collection": "schema1.table2", "filter": "SELECT * FROM [schema1].[table2] WHERE column2 > 0"}]}
Possible duplicates
A delay might exist between the time that you send the signal to trigger the snapshot, and the time when streaming stops and the snapshot starts. As a result of this delay, after the snapshot completes, the connector might emit some event records that duplicate records captured by the snapshot.
2.4.1.8. Default names of Kafka topics that receive Debezium MySQL change event records
By default, the MySQL connector writes change events for all of the INSERT
, UPDATE
, and DELETE
operations that occur in a table to a single Apache Kafka topic that is specific to that table.
The connector uses the following convention to name change event topics:
topicPrefix.databaseName.tableName
Suppose that fulfillment
is the topic prefix, inventory
is the database name, and the database contains tables named orders
, customers
, and products
. The Debezium MySQL connector emits events to three Kafka topics, one for each table in the database:
fulfillment.inventory.orders fulfillment.inventory.customers fulfillment.inventory.products
The following list provides definitions for the components of the default name:
- topicPrefix
-
The topic prefix as specified by the
topic.prefix
connector configuration property. - databaseName
- The name of the database in which the operation occurred.
- tableName
- The name of the table in which the operation occurred.
The connector applies similar naming conventions to label its internal database schema history topics, schema change topics, and transaction metadata topics.
If the default topic names do not meet your requirements, you can configure custom topic names. To configure custom topic names, you specify regular expressions in the logical topic routing SMT. For more information about using the logical topic routing SMT to customize topic naming, see Topic routing.
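For example, a logical topic routing configuration added to the connector might look like the following sketch; the regular expression and replacement values are illustrative:
transforms=Reroute
transforms.Reroute.type=io.debezium.transforms.ByLogicalTableRouter
transforms.Reroute.topic.regex=fulfillment\.inventory\.(.*)
transforms.Reroute.topic.replacement=fulfillment.all_inventory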
Transaction metadata
Debezium can generate events that represent transaction boundaries and that enrich data change event messages.
Debezium registers and receives metadata only for transactions that occur after you deploy the connector. Metadata for transactions that occur before you deploy the connector is not available.
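Transaction metadata events are not emitted by default. To enable them, set the provide.transaction.metadata connector configuration property, as in the following sketch:
provide.transaction.metadata=true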
Debezium generates transaction boundary events for the BEGIN
and END
delimiters in every transaction. Transaction boundary events contain the following fields:
status
-
BEGIN
orEND
. id
- String representation of the unique transaction identifier.
ts_ms
-
The time of a transaction boundary event (
BEGIN
orEND
event) at the data source. If the data source does not provide Debezium with the event time, then the field instead represents the time at which Debezium processes the event. event_count
(forEND
events)- Total number of events emitted by the transaction.
data_collections
(forEND
events)-
An array of pairs of
data_collection
andevent_count
elements that indicates the number of events that the connector emits for changes that originate from a data collection.
Example
{ "status": "BEGIN", "id": "0e4d5dcd-a33b-11ea-80f1-02010a22a99e:10", "ts_ms": 1486500577125, "event_count": null, "data_collections": null } { "status": "END", "id": "0e4d5dcd-a33b-11ea-80f1-02010a22a99e:10", "ts_ms": 1486500577691, "event_count": 2, "data_collections": [ { "data_collection": "s1.a", "event_count": 1 }, { "data_collection": "s2.a", "event_count": 1 } ] }
Unless overridden via the topic.transaction
option, the connector emits transaction events to the <topic.prefix>
.transaction
topic.
Change data event enrichment
When transaction metadata is enabled the data message Envelope
is enriched with a new transaction
field. This field provides information about every event in the form of a composite of fields:
id
- String representation of unique transaction identifier.
total_order
- The absolute position of the event among all events generated by the transaction.
data_collection_order
- The per-data collection position of the event among all events that were emitted by the transaction.
Following is an example of a message:
{ "before": null, "after": { "pk": "2", "aa": "1" }, "source": { ... }, "op": "c", "ts_ms": "1580390884335", "ts_us": "1580390884335472", "ts_ns": "1580390884335472987", "transaction": { "id": "0e4d5dcd-a33b-11ea-80f1-02010a22a99e:10", "total_order": "1", "data_collection_order": "1" } }
2.4.2. Descriptions of Debezium MySQL connector data change events
The Debezium MySQL connector generates a data change event for each row-level INSERT
, UPDATE
, and DELETE
operation. Each event contains a key and a value. The structure of the key and the value depends on the table that was changed.
Debezium and Kafka Connect are designed around continuous streams of event messages. However, the structure of these events may change over time, which can be difficult for consumers to handle. To address this, each event contains the schema for its content or, if you are using a schema registry, a schema ID that a consumer can use to obtain the schema from the registry. This makes each event self-contained.
The following skeleton JSON shows the basic four parts of a change event. However, how you configure the Kafka Connect converter that you choose to use in your application determines the representation of these four parts in change events. A schema
field is in a change event only when you configure the converter to produce it. Likewise, the event key and event payload are in a change event only if you configure a converter to produce it. If you use the JSON converter and you configure it to produce all four basic change event parts, change events have this structure:
{ "schema": { 1 ... }, "payload": { 2 ... }, "schema": { 3 ... }, "payload": { 4 ... }, }
Item | Field name | Description |
---|---|---|
1 |
|
The first |
2 |
|
The first |
3 |
|
The second |
4 |
|
The second |
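Whether the schema parts appear depends on how you configure the Kafka Connect converters. For example, the following sketch uses the JSON converter and enables schemas for both keys and values; adjust the settings to your environment:
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=true
value.converter.schemas.enable=true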
By default, the connector streams change event records to topics with names that are the same as the event’s originating table. See topic names.
The MySQL connector ensures that all Kafka Connect schema names adhere to the Avro schema name format. This means that the logical server name must start with a Latin letter or an underscore, that is, a-z, A-Z, or _. Each remaining character in the logical server name and each character in the database and table names must be a Latin letter, a digit, or an underscore, that is, a-z, A-Z, 0-9, or _. If there is an invalid character it is replaced with an underscore character.
This can lead to unexpected conflicts if the logical server name, a database name, or a table name contains invalid characters, and the only characters that distinguish names from one another are invalid and thus replaced with underscores. For example, if one table is named customer-data and another is named customer_data, events from both tables resolve to the same topic name.
More details are in the following topics:
2.4.2.1. About keys in Debezium MySQL change events
A change event’s key contains the schema for the changed table’s key and the changed row’s actual key. Both the schema and its corresponding payload contain a field for each column in the changed table’s PRIMARY KEY
(or unique constraint) at the time the connector created the event.
Consider the following customers
table, which is followed by an example of a change event key for this table.
CREATE TABLE customers ( id INTEGER NOT NULL AUTO_INCREMENT PRIMARY KEY, first_name VARCHAR(255) NOT NULL, last_name VARCHAR(255) NOT NULL, email VARCHAR(255) NOT NULL UNIQUE KEY ) AUTO_INCREMENT=1001;
Every change event that captures a change to the customers
table has the same event key schema. For as long as the customers
table has the previous definition, every change event that captures a change to the customers
table has the following key structure. In JSON, it looks like this:
{ "schema": { 1 "type": "struct", "name": "mysql-server-1.inventory.customers.Key", 2 "optional": false, 3 "fields": [ 4 { "field": "id", "type": "int32", "optional": false } ] }, "payload": { 5 "id": 1001 } }
Item | Field name | Description |
---|---|---|
1 |
|
The schema portion of the key specifies a Kafka Connect schema that describes what is in the key’s |
2 |
|
Name of the schema that defines the structure of the key’s payload. This schema describes the structure of the primary key for the table that was changed. Key schema names have the format connector-name.database-name.table-name.
|
3 |
|
Indicates whether the event key must contain a value in its |
4 |
|
Specifies each field that is expected in the |
5 |
|
Contains the key for the row for which this change event was generated. In this example, the key, contains a single |
2.4.2.2. About values in Debezium MySQL change events
The value in a change event is a bit more complicated than the key. Like the key, the value has a schema
section and a payload
section. The schema
section contains the schema that describes the Envelope
structure of the payload
section, including its nested fields. Change events for operations that create, update or delete data all have a value payload with an envelope structure.
Consider the same sample table that was used to show an example of a change event key:
CREATE TABLE customers ( id INTEGER NOT NULL AUTO_INCREMENT PRIMARY KEY, first_name VARCHAR(255) NOT NULL, last_name VARCHAR(255) NOT NULL, email VARCHAR(255) NOT NULL UNIQUE KEY ) AUTO_INCREMENT=1001;
The value portion of a change event for a change to this table is described for:
create events
The following example shows the value portion of a change event that the connector generates for an operation that creates data in the customers
table:
{ "schema": { 1 "type": "struct", "fields": [ { "type": "struct", "fields": [ { "type": "int32", "optional": false, "field": "id" }, { "type": "string", "optional": false, "field": "first_name" }, { "type": "string", "optional": false, "field": "last_name" }, { "type": "string", "optional": false, "field": "email" } ], "optional": true, "name": "mysql-server-1.inventory.customers.Value", 2 "field": "before" }, { "type": "struct", "fields": [ { "type": "int32", "optional": false, "field": "id" }, { "type": "string", "optional": false, "field": "first_name" }, { "type": "string", "optional": false, "field": "last_name" }, { "type": "string", "optional": false, "field": "email" } ], "optional": true, "name": "mysql-server-1.inventory.customers.Value", "field": "after" }, { "type": "struct", "fields": [ { "type": "string", "optional": false, "field": "version" }, { "type": "string", "optional": false, "field": "connector" }, { "type": "string", "optional": false, "field": "name" }, { "type": "int64", "optional": false, "field": "ts_ms" }, { "type": "int64", "optional": false, "field": "ts_us" }, { "type": "int64", "optional": false, "field": "ts_ns" }, { "type": "boolean", "optional": true, "default": false, "field": "snapshot" }, { "type": "string", "optional": false, "field": "db" }, { "type": "string", "optional": true, "field": "table" }, { "type": "int64", "optional": false, "field": "server_id" }, { "type": "string", "optional": true, "field": "gtid" }, { "type": "string", "optional": false, "field": "file" }, { "type": "int64", "optional": false, "field": "pos" }, { "type": "int32", "optional": false, "field": "row" }, { "type": "int64", "optional": true, "field": "thread" }, { "type": "string", "optional": true, "field": "query" } ], "optional": false, "name": "io.debezium.connector.mysql.Source", 3 "field": "source" }, { "type": "string", "optional": false, "field": "op" }, { "type": "int64", "optional": true, "field": "ts_ms" }, { "type": "int64", "optional": true, "field": "ts_us" }, { "type": "int64", "optional": true, "field": "ts_ns" } ], "optional": false, "name": "mysql-server-1.inventory.customers.Envelope" 4 }, "payload": { 5 "op": "c", 6 "ts_ms": 1465491411815, 7 "ts_us": 1465491411815437, 8 "ts_ns": 1465491411815437158, 9 "before": null, 10 "after": { 11 "id": 1004, "first_name": "Anne", "last_name": "Kretchmar", "email": "annek@noanswer.org" }, "source": { 12 "version": "2.7.3.Final", "connector": "mysql", "name": "mysql-server-1", "ts_ms": 0, "ts_us": 0, "ts_ns": 0, "snapshot": false, "db": "inventory", "table": "customers", "server_id": 0, "gtid": null, "file": "mysql-bin.000003", "pos": 154, "row": 0, "thread": 7, "query": "INSERT INTO customers (first_name, last_name, email) VALUES ('Anne', 'Kretchmar', 'annek@noanswer.org')" } } }
Item | Field name | Description |
---|---|---|
1 |
| The value’s schema, which describes the structure of the value’s payload. A change event’s value schema is the same in every change event that the connector generates for a particular table. |
2 |
|
In the |
3 |
|
|
4 |
|
|
5 |
|
The value’s actual data. This is the information that the change event is providing. |
6 |
|
Mandatory string that describes the type of operation that caused the connector to generate the event. In this example,
|
7 |
|
Optional field that displays the time at which the connector processed the event. The time is based on the system clock in the JVM running the Kafka Connect task. |
8 |
|
An optional field that specifies the state of the row before the event occurred. When the |
9 |
|
An optional field that specifies the state of the row after the event occurred. In this example, the |
10 |
| Mandatory field that describes the source metadata for the event. This field contains information that you can use to compare this event with other events, with regard to the origin of the events, the order in which the events occurred, and whether events were part of the same transaction. The source metadata includes:
|
update events
The value of a change event for an update in the sample customers
table has the same schema as a create event for that table. Likewise, the event value’s payload has the same structure. However, the event value payload contains different values in an update event. Here is an example of a change event value in an event that the connector generates for an update in the customers
table:
{ "schema": { ... }, "payload": { "before": { 1 "id": 1004, "first_name": "Anne", "last_name": "Kretchmar", "email": "annek@noanswer.org" }, "after": { 2 "id": 1004, "first_name": "Anne Marie", "last_name": "Kretchmar", "email": "annek@noanswer.org" }, "source": { 3 "version": "2.7.3.Final", "name": "mysql-server-1", "connector": "mysql", "name": "mysql-server-1", "ts_ms": 1465581029100, "ts_ms": 1465581029100000, "ts_ms": 1465581029100000000, "snapshot": false, "db": "inventory", "table": "customers", "server_id": 223344, "gtid": null, "file": "mysql-bin.000003", "pos": 484, "row": 0, "thread": 7, "query": "UPDATE customers SET first_name='Anne Marie' WHERE id=1004" }, "op": "u", 4 "ts_ms": 1465581029523, 5 "ts_ms": 1465581029523758, 6 "ts_ms": 1465581029523758914 7 } }
Item | Field name | Description |
---|---|---|
1 |
|
An optional field that specifies the state of the row before the event occurred. In an update event value, the |
2 |
|
An optional field that specifies the state of the row after the event occurred. You can compare the |
3 |
|
Mandatory field that describes the source metadata for the event. The
|
4 |
|
Mandatory string that describes the type of operation. In an update event value, the |
5 |
|
Optional field that displays the time at which the connector processed the event. The time is based on the system clock in the JVM running the Kafka Connect task. |
6 |
| Optional field that displays the time at which the connector processed the event, in microseconds. The time is based on the system clock in the JVM running the Kafka Connect task. |
7 |
| Optional field that displays the time at which the connector processed the event, in nanoseconds. The time is based on the system clock in the JVM running the Kafka Connect task. |
Updating the columns for a row’s primary/unique key changes the value of the row’s key. When a key changes, Debezium outputs three events: a DELETE
event and a tombstone event with the old key for the row, followed by an event with the new key for the row. Details are in the next section.
Primary key updates
An UPDATE
operation that changes a row’s primary key field(s) is known as a primary key change. For a primary key change, in place of an UPDATE
event record, the connector emits a DELETE
event record for the old key and a CREATE
event record for the new (updated) key. These events have the usual structure and content, and in addition, each one has a message header related to the primary key change:
-
The
DELETE
event record has__debezium.newkey
as a message header. The value of this header is the new primary key for the updated row. -
The
CREATE
event record has__debezium.oldkey
as a message header. The value of this header is the previous (old) primary key that the updated row had.
delete events
The value in a delete change event has the same schema
portion as create and update events for the same table. The payload
portion in a delete event for the sample customers
table looks like this:
{ "schema": { ... }, "payload": { "before": { 1 "id": 1004, "first_name": "Anne Marie", "last_name": "Kretchmar", "email": "annek@noanswer.org" }, "after": null, 2 "source": { 3 "version": "2.7.3.Final", "connector": "mysql", "name": "mysql-server-1", "ts_ms": 1465581902300, "ts_us": 1465581902300000, "ts_ns": 1465581902300000000, "snapshot": false, "db": "inventory", "table": "customers", "server_id": 223344, "gtid": null, "file": "mysql-bin.000003", "pos": 805, "row": 0, "thread": 7, "query": "DELETE FROM customers WHERE id=1004" }, "op": "d", 4 "ts_ms": 1465581902461, 5 "ts_us": 1465581902461842, 6 "ts_ns": 1465581902461842579 7 } }
Item | Field name | Description |
---|---|---|
1 |
|
Optional field that specifies the state of the row before the event occurred. In a delete event value, the |
2 |
|
Optional field that specifies the state of the row after the event occurred. In a delete event value, the |
3 |
|
Mandatory field that describes the source metadata for the event. In a delete event value, the
|
4 |
|
Mandatory string that describes the type of operation. The |
5 |
|
Optional field that displays the time at which the connector processed the event. The time is based on the system clock in the JVM running the Kafka Connect task. |
6 |
| Optional field that displays the time at which the connector processed the event, in microseconds. The time is based on the system clock in the JVM running the Kafka Connect task. |
7 |
| Optional field that displays the time at which the connector processed the event, in nanoseconds. The time is based on the system clock in the JVM running the Kafka Connect task. |
A delete change event record provides a consumer with the information it needs to process the removal of this row. The old values are included because some consumers might require them in order to properly handle the removal.
MySQL connector events are designed to work with Kafka log compaction. Log compaction enables removal of some older messages as long as at least the most recent message for every key is kept. This lets Kafka reclaim storage space while ensuring that the topic contains a complete data set and can be used for reloading key-based state.
Tombstone events
When a row is deleted, the delete event value still works with log compaction, because Kafka can remove all earlier messages that have that same key. However, for Kafka to remove all messages that have that same key, the message value must be null
. To make this possible, after the Debezium MySQL connector emits a delete event, the connector emits a special tombstone event that has the same key but a null
value.
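Emission of tombstone events is controlled by the tombstones.on.delete connector configuration property, which defaults to true. The following sketch sets the property explicitly; set it to false only if your consumers do not rely on log compaction:
tombstones.on.delete=true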
truncate events
A truncate change event signals that a table has been truncated. The message key of a truncate event is null
. The message value resembles the following example:
{ "schema": { ... }, "payload": { "source": { 1 "version": "2.7.3.Final", "name": "mysql-server-1", "connector": "mysql", "name": "mysql-server-1", "ts_ms": 1465581029100, "ts_us": 1465581029100000, "ts_ns": 1465581029100000000, "snapshot": false, "db": "inventory", "table": "customers", "server_id": 223344, "gtid": null, "file": "mysql-bin.000003", "pos": 484, "row": 0, "thread": 7, "query": "UPDATE customers SET first_name='Anne Marie' WHERE id=1004" }, "op": "t", 2 "ts_ms": 1465581029523, 3 "ts_us": 1465581029523468, 4 "ts_ns": 1465581029523468471 5 } }
Item | Field name | Description |
---|---|---|
1 |
|
Mandatory field that describes the source metadata for the event. In a truncate event value, the
|
2 |
|
Mandatory string that describes the type of operation. The |
3 |
|
Optional field that displays the time at which the connector processed the event. The time is based on the system clock in the JVM running the Kafka Connect task. |
4 |
| Optional field that displays the time at which the connector processed the event, in microseconds. The time is based on the system clock in the JVM running the Kafka Connect task. |
5 |
| Optional field that displays the time at which the connector processed the event, in nanoseconds. The time is based on the system clock in the JVM running the Kafka Connect task. |
If a single TRUNCATE
statement applies to multiple tables, the connector emits one truncate change event record for each truncated table.
A truncate event represents a change that applies to an entire table, and it does not have a message key. In topics that span multiple partitions, the order of change events that apply to an entire table is not guaranteed. That is, there is no ordering guarantee between the row-level change events (create, update, and so on) for a table and the truncate events for that table. When a consumer reads events from different partitions, it might read an update event for a table from one partition only after it reads a truncate event for the same table from a second partition.
2.4.3. How Debezium MySQL connectors map data types
The Debezium MySQL connector represents changes to rows with events that are structured like the table in which the row exists. The event contains a field for each column value. The MySQL data type of that column dictates how Debezium represents the value in the event.
Columns that store strings are defined in MySQL with a character set and collation. The MySQL connector uses the column’s character set when reading the binary representation of the column values in the binlog events.
The connector can map MySQL data types to both literal and semantic types.
- Literal type: how the value is represented using Kafka Connect schema types.
- Semantic type: how the Kafka Connect schema captures the meaning of the field (schema name).
If the default data type conversions do not meet your needs, you can create a custom converter for the connector.
Details are in the following sections:
Basic types
The following table shows how the connector maps basic MySQL data types.
MySQL type | Literal type | Semantic type |
---|---|---|
|
| n/a |
|
| n/a |
|
|
|
|
| n/a |
|
| n/a |
|
| n/a |
|
| n/a |
|
| n/a |
|
| n/a |
|
|
The precision is used only to determine storage size. A precision from 0 to 23 results in a 4-byte single-precision FLOAT column. A precision from 24 to 53 results in an 8-byte double-precision DOUBLE column. |
|
| n/a |
|
| n/a |
|
| n/a |
|
|
n/a |
|
|
n/a |
|
|
n/a |
|
| n/a |
|
|
n/a |
|
|
n/a |
|
|
n/a |
|
| n/a |
|
|
n/a |
|
|
n/a |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
.
Temporal types
Excluding the TIMESTAMP
data type, MySQL temporal types depend on the value of the time.precision.mode
connector configuration property. For TIMESTAMP
columns whose default value is specified as CURRENT_TIMESTAMP
or NOW
, the value 1970-01-01 00:00:00
is used as the default value in the Kafka Connect schema.
MySQL allows zero-values for DATE
, DATETIME
, and TIMESTAMP
columns because zero-values are sometimes preferred over null values. The MySQL connector represents zero-values as null values when the column definition allows null values, or as the epoch day when the column does not allow null values.
Temporal values without time zones
The DATETIME
type represents a local date and time such as "2018-01-13 09:48:27". As you can see, there is no time zone information. Such columns are converted into epoch milliseconds or microseconds based on the column’s precision by using UTC. The TIMESTAMP
type represents a timestamp without time zone information. It is converted by MySQL from the server (or session’s) current time zone into UTC when writing and from UTC into the server (or session’s) current time zone when reading back the value. For example:
-
DATETIME
with a value of2018-06-20 06:37:03
becomes1529476623000
. -
TIMESTAMP
with a value of2018-06-20 06:37:03
becomes2018-06-20T13:37:03Z
.
Such columns are converted into an equivalent io.debezium.time.ZonedTimestamp
in UTC based on the server (or session’s) current time zone. The time zone will be queried from the server by default.
The time zone of the JVM running Kafka Connect and Debezium does not affect these conversions.
More details about properties related to temporal values are in the documentation for MySQL connector configuration properties.
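You select the mode with the time.precision.mode connector configuration property. For example, the following configuration sketch selects the Kafka Connect logical types that are described below:
time.precision.mode=connect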
- time.precision.mode=adaptive_time_microseconds(default)
The MySQL connector determines the literal type and semantic type based on the column’s data type definition so that events represent exactly the values in the database. All time fields are in microseconds. Only positive
TIME
field values in the range of00:00:00.000000
to23:59:59.999999
can be captured correctly.Table 2.89. Mappings when time.precision.mode=adaptive_time_microseconds MySQL type Literal type Semantic type DATE
INT32
io.debezium.time.Date
Represents the number of days since the epoch.TIME[(M)]
INT64
io.debezium.time.MicroTime
Represents the time value in microseconds and does not include time zone information. MySQL allowsM
to be in the range of0-6
.DATETIME, DATETIME(0), DATETIME(1), DATETIME(2), DATETIME(3)
INT64
io.debezium.time.Timestamp
Represents the number of milliseconds past the epoch and does not include time zone information.DATETIME(4), DATETIME(5), DATETIME(6)
INT64
io.debezium.time.MicroTimestamp
Represents the number of microseconds past the epoch and does not include time zone information.- time.precision.mode=connect
The MySQL connector uses defined Kafka Connect logical types. This approach is less precise than the default approach, and the resulting events lose precision if the database column has a fractional second precision value of greater than
3
. Values in only the range of00:00:00.000
to23:59:59.999
can be handled. Settime.precision.mode=connect
only if you can ensure that theTIME
values in your tables never exceed the supported ranges. Theconnect
setting is expected to be removed in a future version of Debezium.Table 2.90. Mappings when time.precision.mode=connect MySQL type Literal type Semantic type DATE
INT32
org.apache.kafka.connect.data.Date
Represents the number of days since the epoch.TIME[(M)]
INT64
org.apache.kafka.connect.data.Time
Represents the time value in microseconds since midnight and does not include time zone information.DATETIME[(M)]
INT64
org.apache.kafka.connect.data.Timestamp
Represents the number of milliseconds since the epoch, and does not include time zone information.
Decimal types
Debezium connectors handle decimals according to the setting of the decimal.handling.mode
connector configuration property.
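For example, the following configuration sketch selects string handling for decimal values:
decimal.handling.mode=string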
- decimal.handling.mode=precise
Table 2.91. Mappings when decimal.handling.mode=precise MySQL type Literal type Semantic type NUMERIC[(M[,D])]
BYTES
org.apache.kafka.connect.data.Decimal
Thescale
schema parameter contains an integer that represents how many digits the decimal point shifted.DECIMAL[(M[,D])]
BYTES
org.apache.kafka.connect.data.Decimal
Thescale
schema parameter contains an integer that represents how many digits the decimal point shifted.- decimal.handling.mode=double
Table 2.92. Mappings when decimal.handling.mode=double MySQL type Literal type Semantic type NUMERIC[(M[,D])]
FLOAT64
n/a
DECIMAL[(M[,D])]
FLOAT64
n/a
- decimal.handling.mode=string
Table 2.93. Mappings when decimal.handling.mode=string MySQL type Literal type Semantic type NUMERIC[(M[,D])]
STRING
n/a
DECIMAL[(M[,D])]
STRING
n/a
Boolean values
MySQL handles the BOOLEAN
value internally in a specific way. The BOOLEAN
column is internally mapped to the TINYINT(1)
data type. When a table is created while the connector is streaming, Debezium receives the original DDL and applies the proper BOOLEAN
mapping. During snapshots, Debezium executes SHOW CREATE TABLE
to obtain table definitions that return TINYINT(1)
for both BOOLEAN
and TINYINT(1)
columns. Debezium then has no way to obtain the original type mapping and so maps to TINYINT(1)
.
To enable you to convert source columns to Boolean data types, Debezium provides a TinyIntOneToBooleanConverter
custom converter that you can use in one of the following ways:
-
Map all
TINYINT(1)
orTINYINT(1) UNSIGNED
columns toBOOLEAN
types. Enumerate a subset of columns by using a comma-separated list of regular expressions.
To use this type of conversion, you must set theconverters
configuration property with theselector
parameter, as shown in the following example:converters=boolean boolean.type=io.debezium.connector.binlog.converters.TinyIntOneToBooleanConverter boolean.selector=db1.table1.*, db1.table2.column1
NOTE: In some cases, the database may not show the length of
tinyint unsigned
when the snapshot executesSHOW CREATE TABLE
, which means this converter doesn’t work. The new optionlength.checker
can solve this issue, the default value istrue
. Disable thelength.checker
and specify the columns that need to be converted in the selector
property instead of converting all columns based on type, as shown in the following example:converters=boolean boolean.type=io.debezium.connector.binlog.converters.TinyIntOneToBooleanConverter boolean.length.checker=false boolean.selector=db1.table1.*, db1.table2.column1
Spatial types
Currently, the Debezium MySQL connector supports the following spatial data types.
MySQL type | Literal type | Semantic type |
---|---|---|
|
|
|
2.4.4. Custom converters for mapping MySQL data to alternative data types
By default, the Debezium MySQL connector provides several CustomConverter
implementations for MySQL data types. These custom converters provide alternative mappings for specific data types based on the connector configuration. To add a CustomConverter
to the connector, follow the instructions in the Custom Converters documentation.
TINYINT(1)
to Boolean
By default, during a connector snapshot, the Debezium MySQL connector obtains column types from the JDBC driver, which assigns the TINYINT(1)
type to BOOLEAN
columns. Debezium then uses these JDBC column types to define the schema for the snapshot events. After the connector transitions from the snapshot to the streaming phase, the change event schema that results from the default mapping can lead to inconsistent mappings for BOOLEAN
columns. To help ensure that MySQL emits BOOLEAN
columns uniformly, you can apply the custom TinyIntOneToBooleanConverter
, as shown in the following configuration example.
Example: TinyIntOneToBooleanConverter
configuration
converters=tinyint-one-to-boolean tinyint-one-to-boolean.type=io.debezium.connector.binlog.converters.TinyIntOneToBooleanConverter tinyint-one-to-boolean.selector=.*.MY_TABLE.DATA tinyint-one-to-boolean.length.checker=false
In the preceding example, the selector
and length.checker
properties are optional. By default, the converter checks that TINYINT
data types conform to a length of 1
. If length.checker
to false
, the converter does not explicitly confirm that the TINYINT
data type conforms to a length of 1
. The selector
designates the tables or columns to convert, based on the supplied regular expression. If you omit the selector
property, the converter maps all TINYINT
columns to logical BOOL
field types. If you do not configure a selector
option, and you want to map TINYINT
columns to TINYINT(1)
, omit the length.checker
property, or set its value to true
.
JDBC sink data types
If you integrate the Debezium JDBC sink connector with a Debezium MySQL source connector, the MySQL connector emits some column attributes differently during the snapshot and streaming phases. For the JDBC sink connector to consistently consume changes from both the snapshot and streaming phase, you must include the JdbcSinkDataTypesConverter
converter as part of the MySQL source connector configuration, as shown in the following example:
Example: JdbcSinkDataTypesConverter
configuration
converters=jdbc-sink jdbc-sink.type=io.debezium.connector.binlog.converters.JdbcSinkDataTypesConverter jdbc-sink.selector.boolean=.*.MY_TABLE.BOOL_COL jdbc-sink.selector.real=.*.MY_TABLE.REAL_COL jdbc-sink.selector.string=.*.MY_TABLE.STRING_COL jdbc-sink.treat.real.as.double=true
In the preceding example, the selector.*
and treat.real.as.double
configuration properties are optional.
The selector.*
properties specify comma-separated lists of regular expressions that determine the tables and columns to which the converter applies. By default, the following rules apply to all Boolean, real, and string-based column data types, across all tables:
-
BOOLEAN
data types are always emitted asINT16
logical types, with1
representingtrue
and0
representingfalse
-
REAL
data types are always emitted asFLOAT64
logical types. -
String-based columns always include the
__debezium.source.column.character_set
schema parameter that contains the column’s character set.
For each data type, you can configure a selector rule to override the default scope and apply the selector to specific tables and columns only. For example, to set the scope of the Boolean converter, add the following rule to the connector configuration, as in the preceding example: converters.jdbc-sink.selector.boolean=.*.MY_TABLE.BOOL_COL
2.4.5. Setting up MySQL to run a Debezium connector
Some MySQL setup tasks are required before you can install and run a Debezium connector.
Details are in the following sections:
- Section 2.4.5.1, “Creating a MySQL user for a Debezium connector”
- Section 2.4.5.2, “Enabling the MySQL binlog for Debezium”
- Section 2.4.5.3, “Enabling MySQL Global Transaction Identifiers for Debezium”
- Section 2.4.5.4, “Configuring MySQL session timeouts for Debezium”
- Section 2.4.5.5, “Enabling query log events for Debezium MySQL connectors”
2.4.5.1. Creating a MySQL user for a Debezium connector
A Debezium MySQL connector requires a MySQL user account. This MySQL user must have appropriate permissions on all databases for which the Debezium MySQL connector captures changes.
Prerequisites
- A MySQL server.
- Basic knowledge of SQL commands.
Procedure
Create the MySQL user:
mysql> CREATE USER 'user'@'localhost' IDENTIFIED BY 'password';
Grant the required permissions to the user:
mysql> GRANT SELECT, RELOAD, SHOW DATABASES, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'user' IDENTIFIED BY 'password';
For a description of the required permissions, see Table 2.95, “Descriptions of user permissions”.
Important: If you use a hosted option such as Amazon RDS or Amazon Aurora that does not allow a global read lock, table-level locks are used to create the consistent snapshot. In this case, you must also grant
LOCK TABLES
permissions to the user that you create. See snapshots for more details.Finalize the user’s permissions:
mysql> FLUSH PRIVILEGES;
Table 2.95. Descriptions of user permissions Keyword Description SELECT
Enables the connector to select rows from tables in databases. This is used only when performing a snapshot.
RELOAD
Enables the connector to use the
FLUSH
statement to clear or reload internal caches, flush tables, or acquire locks. This is used only when performing a snapshot.SHOW DATABASES
Enables the connector to see database names by issuing the
SHOW DATABASE
statement. This is used only when performing a snapshot.REPLICATION SLAVE
Enables the connector to connect to and read the MySQL server binlog.
REPLICATION CLIENT
Enables the connector to use the following statements:
-
SHOW MASTER STATUS
-
SHOW SLAVE STATUS
-
SHOW BINARY LOGS
The connector always requires this.
ON
Identifies the database to which the permissions apply.
TO 'user'
Specifies the user to grant the permissions to.
IDENTIFIED BY 'password'
Specifies the user’s MySQL password.
-
2.4.5.2. Enabling the MySQL binlog for Debezium
You must enable binary logging for MySQL replication. The binary logs record transaction updates in a way that enables replicas to propagate those changes.
Prerequisites
- A MySQL server.
- Appropriate MySQL user privileges.
Procedure
-
Check whether the
log-bin
option is enabled: If the binlog is
OFF
, add the properties in the following table to the configuration file for the MySQL server:server-id = 223344 # Querying variable is called server_id, e.g. SELECT variable_value FROM information_schema.global_variables WHERE variable_name='server_id'; log_bin = mysql-bin binlog_format = ROW binlog_row_image = FULL binlog_expire_logs_seconds = 864000
- Confirm your changes by checking the binlog status once more:
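One way to check the binlog status is to query the corresponding server variable, as in the following sketch; the exact query can vary by MySQL version:
mysql> SHOW GLOBAL VARIABLES LIKE 'log_bin';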
If you run MySQL on Amazon RDS, you must enable automated backups for your database instance for binary logging to occur. If the database instance is not configured to perform automated backups, the binlog is disabled, even if you apply the settings described in the previous steps.
Table 2.96. Descriptions of MySQL binlog configuration properties Property Description server-id
The value for the
server-id
must be unique for each server and replication client in the MySQL cluster.log_bin
The value of
log_bin
is the base name of the sequence of binlog files.binlog_format
The
binlog-format
must be set toROW
orrow
.binlog_row_image
The
binlog_row_image
must be set toFULL
orfull
.binlog_expire_logs_seconds
The
binlog_expire_logs_seconds
corresponds to deprecated system variableexpire_logs_days
. This is the number of seconds for automatic binlog file removal. The default value is2592000
, which equals 30 days. Set the value to match the needs of your environment. For more information, see MySQL purges binlog files.
2.4.5.3. Enabling MySQL Global Transaction Identifiers for Debezium
Global transaction identifiers (GTIDs) uniquely identify transactions that occur on a server within a cluster. Though not required for a Debezium MySQL connector, using GTIDs simplifies replication and enables you to more easily confirm if primary and replica servers are consistent.
GTIDs are available in MySQL 5.6.5 and later. See the MySQL documentation for more details.
Prerequisites
- A MySQL server.
- Basic knowledge of SQL commands.
- Access to the MySQL configuration file.
Procedure
Enable
gtid_mode
:mysql> gtid_mode=ON
Enable
enforce_gtid_consistency
:mysql> enforce_gtid_consistency=ON
Confirm the changes:
mysql> show global variables like '%GTID%';
Result
+--------------------------+-------+
| Variable_name            | Value |
+--------------------------+-------+
| enforce_gtid_consistency | ON    |
| gtid_mode                | ON    |
+--------------------------+-------+
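To make the settings persistent across server restarts, you can also add them to the MySQL configuration file, as in the following sketch. A restart is required, and on a server with existing traffic you might need to step through the intermediate gtid_mode values that the MySQL documentation describes.
# my.cnf (sketch)
gtid_mode = ON
enforce_gtid_consistency = ON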
Table 2.97. Descriptions of GTID options Option Description gtid_mode
Boolean that specifies whether GTID mode of the MySQL server is enabled or not.
-
ON
= enabled -
OFF
= disabled
enforce_gtid_consistency
Boolean that specifies whether the server enforces GTID consistency by allowing the execution of statements that can be logged in a transactionally safe manner. Required when using GTIDs.
-
ON
= enabled -
OFF
= disabled
-
2.4.5.4. Configuring MySQL session timeouts for Debezium
When an initial consistent snapshot is made for large databases, your established connection could time out while the tables are being read. You can prevent this behavior by configuring interactive_timeout
and wait_timeout
in your MySQL configuration file.
Prerequisites
- A MySQL server.
- Basic knowledge of SQL commands.
- Access to the MySQL configuration file.
Procedure
Configure
interactive_timeout
:mysql> interactive_timeout=<duration-in-seconds>
Configure
wait_timeout
:mysql> wait_timeout=<duration-in-seconds>
Table 2.98. Descriptions of MySQL session timeout options Option Description interactive_timeout
The number of seconds the server waits for activity on an interactive connection before closing it. For more information, see the MySQL documentation.
wait_timeout
The number of seconds that the server waits for activity on a non-interactive connection before closing it.
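For example, you might set both timeouts in the MySQL configuration file. The values below are illustrative only; choose durations that comfortably exceed your expected snapshot time.
# my.cnf (illustrative values)
interactive_timeout = 86400
wait_timeout = 86400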
2.4.5.5. Enabling query log events for Debezium MySQL connectors
You might want to see the original SQL
statement for each binlog event. Enabling the binlog_rows_query_log_events
option in the MySQL configuration file allows you to do this.
This option is available in MySQL 5.6 and later.
Prerequisites
- A MySQL server.
- Basic knowledge of SQL commands.
- Access to the MySQL configuration file.
Procedure
Enable
binlog_rows_query_log_events
in MySQL:
mysql> binlog_rows_query_log_events=ON
binlog_rows_query_log_events
is set to a value that enables or disables support for including the original SQL
statement in the binlog entry.
-
ON
= enabled -
OFF
= disabled
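To confirm the setting afterward, you can query the server, for example:
mysql> SHOW GLOBAL VARIABLES LIKE 'binlog_rows_query_log_events';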
-
2.4.5.6. Validating binlog row value options for Debezium MySQL connectors
Verify the setting of the binlog_row_value_options
variable in the database. To enable the connector to consume UPDATE events, this variable must be set to a value other than PARTIAL_JSON
.
Prerequisites
- A MySQL server.
- Basic knowledge of SQL commands.
- Access to the MySQL configuration file.
Procedure
Check the current variable value:
mysql> show global variables where variable_name = 'binlog_row_value_options';
Result
+--------------------------+-------+
| Variable_name            | Value |
+--------------------------+-------+
| binlog_row_value_options |       |
+--------------------------+-------+
If the value of the variable is set to
PARTIAL_JSON
, run the following command to unset it:
mysql> set @@global.binlog_row_value_options="";
2.4.6. Deployment of Debezium MySQL connectors
You can use either of the following methods to deploy a Debezium MySQL connector:
- Use Streams for Apache Kafka to automatically build a Kafka Connect container image that includes the connector plug-in.
- Build a custom Kafka Connect container image from a Dockerfile.
Additional resources
2.4.6.1. MySQL connector deployment using Streams for Apache Kafka
Beginning with Debezium 1.7, the preferred method for deploying a Debezium connector is to use Streams for Apache Kafka to build a Kafka Connect container image that includes the connector plug-in.
During the deployment process, you create and use the following custom resources (CRs):
-
A
KafkaConnect
CR that defines your Kafka Connect instance and includes information about the connector artifacts to include in the image. -
A
KafkaConnector
CR that provides details, including the information that the connector uses to access the source database. After Streams for Apache Kafka starts the Kafka Connect pod, you start the connector by applying the
KafkaConnector
CR.
In the build specification for the Kafka Connect image, you can specify the connectors that are available to deploy. For each connector plug-in, you can also specify other components that you want to make available for deployment. For example, you can add Apicurio Registry artifacts, or the Debezium scripting component. When Streams for Apache Kafka builds the Kafka Connect image, it downloads the specified artifacts, and incorporates them into the image.
The spec.build.output
parameter in the KafkaConnect
CR specifies where to store the resulting Kafka Connect container image. Container images can be stored in a Docker registry, or in an OpenShift ImageStream. To store images in an ImageStream, you must create the ImageStream before you deploy Kafka Connect. ImageStreams are not created automatically.
If you use a KafkaConnect
resource to create a cluster, afterwards you cannot use the Kafka Connect REST API to create or update connectors. You can still use the REST API to retrieve information.
Additional resources
- Configuring Kafka Connect in Deploying and Managing Streams for Apache Kafka on OpenShift.
- Building a new container image automatically in Deploying and Managing Streams for Apache Kafka on OpenShift.
2.4.6.2. Using Streams for Apache Kafka to deploy a Debezium MySQL connector
With earlier versions of Streams for Apache Kafka, to deploy Debezium connectors on OpenShift, you were required to first build a Kafka Connect image for the connector. The current preferred method for deploying connectors on OpenShift is to use a build configuration in Streams for Apache Kafka to automatically build a Kafka Connect container image that includes the Debezium connector plug-ins that you want to use.
During the build process, the Streams for Apache Kafka Operator transforms input parameters in a KafkaConnect
custom resource, including Debezium connector definitions, into a Kafka Connect container image. The build downloads the necessary artifacts from the Red Hat Maven repository or another configured HTTP server.
The newly created container is pushed to the container registry that is specified in .spec.build.output
, and is used to deploy a Kafka Connect cluster. After Streams for Apache Kafka builds the Kafka Connect image, you create KafkaConnector
custom resources to start the connectors that are included in the build.
Prerequisites
- You have access to an OpenShift cluster on which the cluster Operator is installed.
- The Streams for Apache Kafka Operator is running.
- An Apache Kafka cluster is deployed as documented in Deploying and Managing Streams for Apache Kafka on OpenShift.
- Kafka Connect is deployed on Streams for Apache Kafka
- You have a Red Hat build of Debezium license.
-
The OpenShift
oc
CLI client is installed or you have access to the OpenShift Container Platform web console. Depending on how you intend to store the Kafka Connect build image, you need registry permissions or you must create an ImageStream resource:
- To store the build image in an image registry, such as Red Hat Quay.io or Docker Hub
- An account and permissions to create and manage images in the registry.
- To store the build image as a native OpenShift ImageStream
- An ImageStream resource is deployed to the cluster for storing new container images. You must explicitly create an ImageStream for the cluster. ImageStreams are not available by default. For more information about ImageStreams, see Managing image streams in the OpenShift Container Platform documentation.
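For example, if you plan to store the build image as an ImageStream, you can create one with a command similar to the following sketch. The ImageStream name here matches the image name used in the example KafkaConnect CR later in this procedure; adjust the name and namespace for your environment.
oc create imagestream debezium-streams-connect -n <namespace>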
Procedure
- Log in to the OpenShift cluster.
Create a Debezium
KafkaConnect
custom resource (CR) for the connector, or modify an existing one. For example, create a KafkaConnect
CR with the name dbz-connect.yaml
that specifies the metadata.annotations
and spec.build
properties. The following example shows an excerpt from a dbz-connect.yaml
file that describes a KafkaConnect
custom resource.
Example 2.26. A
dbz-connect.yaml
file that defines a KafkaConnect
custom resource that includes a Debezium connector
In the example that follows, the custom resource is configured to download the following artifacts:
- The Debezium MySQL connector archive.
- The Red Hat build of Apicurio Registry archive. The Apicurio Registry is an optional component. Add the Apicurio Registry component only if you intend to use Avro serialization with the connector.
- The Debezium scripting SMT archive and the associated scripting engine that you want to use with the Debezium connector. The SMT archive and scripting language dependencies are optional components. Add these components only if you intend to use the Debezium content-based routing SMT or filter SMT.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: debezium-kafka-connect-cluster
  annotations:
    strimzi.io/use-connector-resources: "true" 1
spec:
  version: 3.6.0
  build: 2
    output: 3
      type: imagestream 4
      image: debezium-streams-connect:latest
    plugins: 5
      - name: debezium-connector-mysql
        artifacts:
          - type: zip 6
            url: https://maven.repository.redhat.com/ga/io/debezium/debezium-connector-mysql/2.7.3.Final-redhat-00001/debezium-connector-mysql-2.7.3.Final-redhat-00001-plugin.zip 7
          - type: zip
            url: https://maven.repository.redhat.com/ga/io/apicurio/apicurio-registry-distro-connect-converter/2.4.4.Final-redhat-<build-number>/apicurio-registry-distro-connect-converter-2.4.4.Final-redhat-<build-number>.zip 8
          - type: zip
            url: https://maven.repository.redhat.com/ga/io/debezium/debezium-scripting/2.7.3.Final-redhat-00001/debezium-scripting-2.7.3.Final-redhat-00001.zip 9
          - type: jar
            url: https://repo1.maven.org/maven2/org/apache/groovy/groovy/3.0.11/groovy-3.0.11.jar 10
          - type: jar
            url: https://repo1.maven.org/maven2/org/apache/groovy/groovy-jsr223/3.0.11/groovy-jsr223-3.0.11.jar
          - type: jar
            url: https://repo1.maven.org/maven2/org/apache/groovy/groovy-json/3.0.11/groovy-json-3.0.11.jar
  bootstrapServers: debezium-kafka-cluster-kafka-bootstrap:9093
  ...
Table 2.99. Descriptions of Kafka Connect configuration settings Item Description 1
Sets the
strimzi.io/use-connector-resources
annotation to"true"
to enable the Cluster Operator to useKafkaConnector
resources to configure connectors in this Kafka Connect cluster.2
The
spec.build
configuration specifies where to store the build image and lists the plug-ins to include in the image, along with the location of the plug-in artifacts.3
The
build.output
specifies the registry in which the newly built image is stored.4
Specifies the name and image name for the image output. Valid values for
output.type
aredocker
to push into a container registry such as Docker Hub or Quay, orimagestream
to push the image to an internal OpenShift ImageStream. To use an ImageStream, an ImageStream resource must be deployed to the cluster. For more information about specifying thebuild.output
in the KafkaConnect configuration, see the Streams for Apache Kafka Build schema reference in Deploying and Managing Streams for Apache Kafka on OpenShift.5
The
plugins
configuration lists all of the connectors that you want to include in the Kafka Connect image. For each entry in the list, specify a plug-inname
, and information about the artifacts that are required to build the connector. Optionally, for each connector plug-in, you can include other components that you want to be available for use with the connector. For example, you can add Service Registry artifacts, or the Debezium scripting component.6
The value of
artifacts.type
specifies the file type of the artifact specified in theartifacts.url
. Valid types arezip
,tgz
, orjar
. Debezium connector archives are provided in.zip
file format. Thetype
value must match the type of the file that is referenced in theurl
field.7
The value of
artifacts.url
specifies the address of an HTTP server, such as a Maven repository, that stores the file for the connector artifact. Debezium connector artifacts are available in the Red Hat Maven repository. The OpenShift cluster must have access to the specified server.8
(Optional) Specifies the artifact
type
andurl
for downloading the Apicurio Registry component. Include the Apicurio Registry artifact, only if you want the connector to use Apache Avro to serialize event keys and values with the Red Hat build of Apicurio Registry, instead of using the default JSON converter.9
(Optional) Specifies the artifact
type
andurl
for the Debezium scripting SMT archive to use with the Debezium connector. Include the scripting SMT only if you intend to use the Debezium content-based routing SMT or filter SMT. To use the scripting SMT, you must also deploy a JSR 223-compliant scripting implementation, such as groovy.10
(Optional) Specifies the artifact
type
andurl
for the JAR files of a JSR 223-compliant scripting implementation, which is required by the Debezium scripting SMT.
Important: If you use Streams for Apache Kafka to incorporate the connector plug-in into your Kafka Connect image, for each of the required scripting language components
artifacts.url
must specify the location of a JAR file, and the value ofartifacts.type
must also be set tojar
. Invalid values cause the connector to fail at runtime.
To enable use of the Apache Groovy language with the scripting SMT, the custom resource in the example retrieves JAR files for the following libraries:
-
groovy
-
groovy-jsr223
(scripting agent) -
groovy-json
(module for parsing JSON strings)
As an alternative, the Debezium scripting SMT also supports the use of the JSR 223 implementation of GraalVM JavaScript.
Apply the
KafkaConnect
build specification to the OpenShift cluster by entering the following command:
oc create -f dbz-connect.yaml
Based on the configuration specified in the custom resource, the Streams Operator prepares a Kafka Connect image to deploy.
After the build completes, the Operator pushes the image to the specified registry or ImageStream, and starts the Kafka Connect cluster. The connector artifacts that you listed in the configuration are available in the cluster.
Create a
KafkaConnector
resource to define an instance of each connector that you want to deploy.
For example, create the following KafkaConnector
CR, and save it as mysql-inventory-connector.yaml
Example 2.27.
mysql-inventory-connector.yaml
file that defines the KafkaConnector
custom resource for a Debezium connector
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  labels:
    strimzi.io/cluster: debezium-kafka-connect-cluster
  name: inventory-connector-mysql 1
spec:
  class: io.debezium.connector.mysql.MySqlConnector 2
  tasksMax: 1 3
  config: 4
    schema.history.internal.kafka.bootstrap.servers: debezium-kafka-cluster-kafka-bootstrap.debezium.svc.cluster.local:9092
    schema.history.internal.kafka.topic: schema-changes.inventory
    database.hostname: mysql.debezium-mysql.svc.cluster.local 5
    database.port: 3306 6
    database.user: debezium 7
    database.password: dbz 8
    database.server.id: 184054 9
    topic.prefix: inventory-connector-mysql 10
    table.include.list: inventory.* 11
    ...
Table 2.100. Descriptions of connector configuration settings Item Description 1
The name of the connector to register with the Kafka Connect cluster.
2
The name of the connector class.
3
The number of tasks that can operate concurrently.
4
The connector’s configuration.
5
The address of the host database instance.
6
The port number of the database instance.
7
The name of the account that Debezium uses to connect to the database.
8
The password that Debezium uses to connect to the database user account.
9
Unique numeric ID of the connector.
10
The topic prefix for the database instance or cluster.
The specified name must be formed only from alphanumeric characters or underscores.
Because the topic prefix is used as the prefix for any Kafka topics that receive change events from this connector, the name must be unique among the connectors in the cluster.
This namespace is also used in the names of related Kafka Connect schemas, and the namespaces of a corresponding Avro schema if you integrate the connector with the Avro converter.11
The list of tables from which the connector captures change events.
Create the connector resource by running the following command:
oc create -n <namespace> -f <kafkaConnector>.yaml
For example,
oc create -n debezium -f mysql-inventory-connector.yaml
The connector is registered to the Kafka Connect cluster and starts to run against the source database that is specified by the
spec.config
properties in the
KafkaConnector
CR. After the connector pod is ready, Debezium is running.
You are now ready to verify the Debezium MySQL deployment.
2.4.6.3. Deploying Debezium MySQL connectors by building a custom Kafka Connect container image from a Dockerfile
To deploy a Debezium MySQL connector, you must build a custom Kafka Connect container image that contains the Debezium connector archive, and then push this container image to a container registry. You then need to create the following custom resources (CRs):
-
A
KafkaConnect
CR that defines your Kafka Connect instance. Theimage
property in the CR specifies the name of the container image that you create to run your Debezium connector. You apply this CR to the OpenShift instance where Red Hat Streams for Apache Kafka is deployed. Streams for Apache Kafka offers operators and images that bring Apache Kafka to OpenShift. -
A
KafkaConnector
CR that defines your Debezium MySQL connector. Apply this CR to the same OpenShift instance where you apply theKafkaConnect
CR.
Prerequisites
- MySQL is running and you completed the steps to set up MySQL to work with a Debezium connector.
- Streams for Apache Kafka is deployed on OpenShift and is running Apache Kafka and Kafka Connect. For more information, see Deploying and Managing Streams for Apache Kafka on OpenShift.
- Podman or Docker is installed.
-
You have an account and permissions to create and manage containers in the container registry (such as
quay.io
ordocker.io
) to which you plan to add the container that will run your Debezium connector.
Procedure
Create the Debezium MySQL container for Kafka Connect:
Create a Dockerfile that uses
registry.redhat.io/amq-streams-kafka-35-rhel8:2.5.0
as the base image. For example, from a terminal window, enter the following command:
cat <<EOF >debezium-container-for-mysql.yaml 1
FROM registry.redhat.io/amq-streams-kafka-35-rhel8:2.5.0
USER root:root
RUN mkdir -p /opt/kafka/plugins/debezium 2
RUN cd /opt/kafka/plugins/debezium/ \
 && curl -O https://maven.repository.redhat.com/ga/io/debezium/debezium-connector-mysql/2.7.3.Final-redhat-00001/debezium-connector-mysql-2.7.3.Final-redhat-00001-plugin.zip \
 && unzip debezium-connector-mysql-2.7.3.Final-redhat-00001-plugin.zip \
 && rm debezium-connector-mysql-2.7.3.Final-redhat-00001-plugin.zip
RUN cd /opt/kafka/plugins/debezium/
USER 1001
EOF
Item Description 1
You can specify any file name that you want.
2
Specifies the path to your Kafka Connect plug-ins directory. If your Kafka Connect plug-ins directory is in a different location, replace this path with the actual path of your directory.
The command creates a Dockerfile with the name
debezium-container-for-mysql.yaml
in the current directory.
Build the container image from the
debezium-container-for-mysql.yaml
Docker file that you created in the previous step. From the directory that contains the file, open a terminal window and enter one of the following commands:
podman build -t debezium-container-for-mysql:latest .
docker build -t debezium-container-for-mysql:latest .
The preceding commands build a container image with the name
debezium-container-for-mysql
.
Push your custom image to a container registry, such as
quay.io
or an internal container registry. The container registry must be available to the OpenShift instance where you want to deploy the image. Enter one of the following commands:
podman push <myregistry.io>/debezium-container-for-mysql:latest
docker push <myregistry.io>/debezium-container-for-mysql:latest
Create a new Debezium MySQL
KafkaConnect
custom resource (CR). For example, create a KafkaConnect
CR with the name dbz-connect.yaml
that specifies annotations
and image
properties. The following example shows an excerpt from a dbz-connect.yaml
file that describes a KafkaConnect
custom resource.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect-cluster
  annotations:
    strimzi.io/use-connector-resources: "true" 1
spec:
  #...
  image: debezium-container-for-mysql 2
  ...
Item Description 1
metadata.annotations
indicates to the Cluster Operator that KafkaConnector
resources are used to configure connectors in this Kafka Connect cluster.2
spec.image
specifies the name of the image that you created to run your Debezium connector. This property overrides the STRIMZI_DEFAULT_KAFKA_CONNECT_IMAGE
variable in the Cluster Operator.
Apply the
KafkaConnect
CR to the OpenShift Kafka Connect environment by entering the following command:
oc create -f dbz-connect.yaml
The command adds a Kafka Connect instance that specifies the name of the image that you created to run your Debezium connector.
Create a
KafkaConnector
custom resource that configures your Debezium MySQL connector instance.
You configure a Debezium MySQL connector in a
.yaml
file that specifies the configuration properties for the connector. The connector configuration might instruct Debezium to produce events for a subset of the schemas and tables, or it might set properties so that Debezium ignores, masks, or truncates values in specified columns that are sensitive, too large, or not needed.
The following example configures a Debezium connector that connects to a MySQL host,
192.168.99.100
, on port
3306
, and captures changes to the
inventory
database.
dbserver1
is the server’s logical name.
MySQL
inventory-connector.yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: inventory-connector-mysql 1
  labels:
    strimzi.io/cluster: my-connect-cluster
spec:
  class: io.debezium.connector.mysql.MySqlConnector
  tasksMax: 1 2
  config: 3
    database.hostname: mysql 4
    database.port: 3306
    database.user: debezium
    database.password: dbz
    database.server.id: 184054 5
    topic.prefix: inventory-connector-mysql 6
    table.include.list: inventory 7
    schema.history.internal.kafka.bootstrap.servers: my-cluster-kafka-bootstrap:9092 8
    schema.history.internal.kafka.topic: schema-changes.inventory 9
Table 2.101. Descriptions of connector configuration settings Item Description 1
The name of the connector.
2
Only one task should operate at any one time. Because the MySQL connector reads the MySQL server’s
binlog
, using a single connector task ensures proper order and event handling. The Kafka Connect service uses connectors to start one or more tasks that do the work, and it automatically distributes the running tasks across the cluster of Kafka Connect services. If any of the services stop or crash, those tasks will be redistributed to running services.3
The connector’s configuration.
4
The database host, which is the name of the container running the MySQL server (
mysql
).5
Unique ID of the connector.
6
Topic prefix for the MySQL server or cluster. This name is used as the prefix for all Kafka topics that receive change event records.
7
The connector captures changes from the
inventory
table only.8
The list of Kafka brokers that this connector will use to write and recover DDL statements to the database schema history topic. Upon restart, the connector recovers the schemas of the database that existed at the point in time in the binlog when the connector should begin reading.
9
The name of the database schema history topic. This topic is for internal use only and should not be used by consumers.
Create your connector instance with Kafka Connect. For example, if you saved your
KafkaConnector
resource in the
inventory-connector.yaml
file, you would run the following command:
oc apply -f inventory-connector.yaml
The preceding command registers
inventory-connector
and the connector starts to run against the
inventory
database as defined in the
KafkaConnector
CR.
For the complete list of the configuration properties that you can set for the Debezium MySQL connector, see MySQL connector configuration properties.
Results
After the connector starts, it performs a consistent snapshot of the MySQL databases that the connector is configured for. The connector then starts generating data change events for row-level operations and streaming change event records to Kafka topics.
2.4.6.4. Verifying that the Debezium MySQL connector is running
If the connector starts correctly without errors, it creates a topic for each table that the connector is configured to capture. Downstream applications can subscribe to these topics to retrieve information about events that occur in the source database.
To verify that the connector is running, you perform the following operations from the OpenShift Container Platform web console, or through the OpenShift CLI tool (oc):
- Verify the connector status.
- Verify that the connector generates topics.
- Verify that topics are populated with events for read operations ("op":"r") that the connector generates during the initial snapshot of each table.
Prerequisites
- A Debezium connector is deployed to Streams for Apache Kafka on OpenShift.
-
The OpenShift
oc
CLI client is installed. - You have access to the OpenShift Container Platform web console.
Procedure
Check the status of the
KafkaConnector
resource by using one of the following methods:
From the OpenShift Container Platform web console:
-
Navigate to Home
Search. -
On the Search page, click Resources to open the Select Resource box, and then type
KafkaConnector
. - From the KafkaConnectors list, click the name of the connector that you want to check, for example inventory-connector-mysql.
- In the Conditions section, verify that the values in the Type and Status columns are set to Ready and True.
From a terminal window:
Enter the following command:
oc describe KafkaConnector <connector-name> -n <project>
For example,
oc describe KafkaConnector inventory-connector-mysql -n debezium
The command returns status information that is similar to the following output:
Example 2.28.
KafkaConnector
resource status
Name:         inventory-connector-mysql
Namespace:    debezium
Labels:       strimzi.io/cluster=debezium-kafka-connect-cluster
Annotations:  <none>
API Version:  kafka.strimzi.io/v1beta2
Kind:         KafkaConnector
...
Status:
  Conditions:
    Last Transition Time:  2021-12-08T17:41:34.897153Z
    Status:                True
    Type:                  Ready
  Connector Status:
    Connector:
      State:      RUNNING
      worker_id:  10.131.1.124:8083
    Name:         inventory-connector-mysql
    Tasks:
      Id:         0
      State:      RUNNING
      worker_id:  10.131.1.124:8083
    Type:         source
  Observed Generation:  1
  Tasks Max:            1
  Topics:
    inventory-connector-mysql.inventory
    inventory-connector-mysql.inventory.addresses
    inventory-connector-mysql.inventory.customers
    inventory-connector-mysql.inventory.geom
    inventory-connector-mysql.inventory.orders
    inventory-connector-mysql.inventory.products
    inventory-connector-mysql.inventory.products_on_hand
Events:  <none>
Verify that the connector created Kafka topics:
From the OpenShift Container Platform web console.
-
Navigate to Home
Search. -
On the Search page, click Resources to open the Select Resource box, and then type
KafkaTopic
. -
From the KafkaTopics list, click the name of the topic that you want to check, for example,
inventory-connector-mysql.inventory.orders---ac5e98ac6a5d91e04d8ec0dc9078a1ece439081d
. - In the Conditions section, verify that the values in the Type and Status columns are set to Ready and True.
From a terminal window:
Enter the following command:
oc get kafkatopics
The command returns status information that is similar to the following output:
Example 2.29.
KafkaTopic
resource status
NAME                                                                                              CLUSTER                 PARTITIONS  REPLICATION FACTOR  READY
connect-cluster-configs                                                                           debezium-kafka-cluster  1           1                   True
connect-cluster-offsets                                                                           debezium-kafka-cluster  25          1                   True
connect-cluster-status                                                                            debezium-kafka-cluster  5           1                   True
consumer-offsets---84e7a678d08f4bd226872e5cdd4eb527fadc1c6a                                       debezium-kafka-cluster  50          1                   True
inventory-connector-mysql--a96f69b23d6118ff415f772679da623fbbb99421                               debezium-kafka-cluster  1           1                   True
inventory-connector-mysql.inventory.addresses---1b6beaf7b2eb57d177d92be90ca2b210c9a56480          debezium-kafka-cluster  1           1                   True
inventory-connector-mysql.inventory.customers---9931e04ec92ecc0924f4406af3fdace7545c483b          debezium-kafka-cluster  1           1                   True
inventory-connector-mysql.inventory.geom---9f7e136091f071bf49ca59bf99e86c713ee58dd5               debezium-kafka-cluster  1           1                   True
inventory-connector-mysql.inventory.orders---ac5e98ac6a5d91e04d8ec0dc9078a1ece439081d             debezium-kafka-cluster  1           1                   True
inventory-connector-mysql.inventory.products---df0746db116844cee2297fab611c21b56f82dcef           debezium-kafka-cluster  1           1                   True
inventory-connector-mysql.inventory.products_on_hand---8649e0f17ffcc9212e266e31a7aeea4585e5c6b5   debezium-kafka-cluster  1           1                   True
schema-changes.inventory                                                                          debezium-kafka-cluster  1           1                   True
strimzi-store-topic---effb8e3e057afce1ecf67c3f5d8e4e3ff177fc55                                    debezium-kafka-cluster  1           1                   True
strimzi-topic-operator-kstreams-topic-store-changelog---b75e702040b99be8a9263134de3507fc0cc4017b  debezium-kafka-cluster  1           1                   True
Check topic content.
- From a terminal window, enter the following command:
oc exec -n <project> -it <kafka-cluster> -- /opt/kafka/bin/kafka-console-consumer.sh \
>     --bootstrap-server localhost:9092 \
>     --from-beginning \
>     --property print.key=true \
>     --topic=<topic-name>
For example,
oc exec -n debezium -it debezium-kafka-cluster-kafka-0 -- /opt/kafka/bin/kafka-console-consumer.sh \
>     --bootstrap-server localhost:9092 \
>     --from-beginning \
>     --property print.key=true \
>     --topic=inventory-connector-mysql.inventory.products_on_hand
The format for specifying the topic name is the same as the
oc describe
command returns in Step 1, for example,inventory-connector-mysql.inventory.addresses
.
For each event in the topic, the command returns information that is similar to the following output:
Example 2.30. Content of a Debezium change event
{"schema":{"type":"struct","fields":[{"type":"int32","optional":false,"field":"product_id"}],"optional":false,"name":"inventory-connector-mysql.inventory.products_on_hand.Key"},"payload":{"product_id":101}} {"schema":{"type":"struct","fields":[{"type":"struct","fields":[{"type":"int32","optional":false,"field":"product_id"},{"type":"int32","optional":false,"field":"quantity"}],"optional":true,"name":"inventory-connector-mysql.inventory.products_on_hand.Value","field":"before"},{"type":"struct","fields":[{"type":"int32","optional":false,"field":"product_id"},{"type":"int32","optional":false,"field":"quantity"}],"optional":true,"name":"inventory-connector-mysql.inventory.products_on_hand.Value","field":"after"},{"type":"struct","fields":[{"type":"string","optional":false,"field":"version"},{"type":"string","optional":false,"field":"connector"},{"type":"string","optional":false,"field":"name"},{"type":"int64","optional":false,"field":"ts_ms"},{"type":"int64","optional":false,"field":"ts_us"},{"type":"int64","optional":false,"field":"ts_ns"},{"type":"string","optional":true,"name":"io.debezium.data.Enum","version":1,"parameters":{"allowed":"true,last,false"},"default":"false","field":"snapshot"},{"type":"string","optional":false,"field":"db"},{"type":"string","optional":true,"field":"sequence"},{"type":"string","optional":true,"field":"table"},{"type":"int64","optional":false,"field":"server_id"},{"type":"string","optional":true,"field":"gtid"},{"type":"string","optional":false,"field":"file"},{"type":"int64","optional":false,"field":"pos"},{"type":"int32","optional":false,"field":"row"},{"type":"int64","optional":true,"field":"thread"},{"type":"string","optional":true,"field":"query"}],"optional":false,"name":"io.debezium.connector.mysql.Source","field":"source"},{"type":"string","optional":false,"field":"op"},{"type":"int64","optional":true,"field":"ts_ms"},{"type":"int64","optional":true,"field":"ts_us"},{"type":"int64","optional":true,"field":"ts_ns"},{"type":"struct","fields":[{"type":"string","optional":false,"field":"id"},{"type":"int64","optional":false,"field":"total_order"},{"type":"int64","optional":false,"field":"data_collection_order"}],"optional":true,"field":"transaction"}],"optional":false,"name":"inventory-connector-mysql.inventory.products_on_hand.Envelope"},"payload":{"before":null,"after":{"product_id":101,"quantity":3},"source":{"version":"2.7.3.Final-redhat-00001","connector":"mysql","name":"inventory-connector-mysql","ts_ms":1638985247805,"ts_us":1638985247805000000,"ts_ns":1638985247805000000,"snapshot":"true","db":"inventory","sequence":null,"table":"products_on_hand","server_id":0,"gtid":null,"file":"mysql-bin.000003","pos":156,"row":0,"thread":null,"query":null},"op":"r","ts_ms":1638985247805,"ts_us":1638985247805102,"ts_ns":1638985247805102588,"transaction":null}}
In the preceding example, the
payload
value shows that the connector snapshot generated a read ("op": "r") event from the table inventory.products_on_hand. The "before" state of the product_id record is null, indicating that no previous value exists for the record. The "after" state shows a quantity of 3 for the item with product_id 101.
2.4.6.5. Descriptions of Debezium MySQL connector configuration properties
The Debezium MySQL connector has numerous configuration properties that you can use to achieve the right connector behavior for your application. Many properties have default values. Information about the properties is organized as follows:
- Required connector configuration properties
- Advanced connector configuration properties
- Database schema history connector configuration properties that control how Debezium processes events that it reads from the database schema history topic.
- Pass-through MySQL connector configuration properties
- Pass-through database schema history properties for configuring producer and consumer clients
- Pass-through Kafka signals configuration properties
- Pass-through Kafka signals consumer client configuration properties
- Pass-through sink notification configuration properties
- Pass-through database driver configuration properties
Required Debezium MySQL connector configuration properties
The following configuration properties are required unless a default value is available.
bigint.unsigned.handling.mode
Default value: long
Specifies how the connector represents BIGINT UNSIGNED columns in change events. Set one of the following options:
long
-
Uses Java
long
data types to represent BIGINT UNSIGNED column values. Although the
long
type does not offer the greatest precision, it is easy to implement in most consumers. In most environments, this is the preferred setting.
precise
-
Uses
java.math.BigDecimal
data types to represent values. The connector uses the Kafka Connectorg.apache.kafka.connect.data.Decimal
data type to represent values in encoded binary format. Set this option if the connector typically works with values larger than 2^63. Thelong
data type cannot convey values of that size.
binary.handling.mode
Default value:
bytes
Specifies how the connector represents values for binary columns, such as,blob
,binary
,varbinary
, in change events.
Set one of the following options:bytes
- Represents binary data as a byte array.
base64
- Represents binary data as a base64-encoded String.
base64-url-safe
- Represents binary data as a base64-url-safe-encoded String.
hex
- Represents binary data as a hex-encoded (base16) String.
column.exclude.list
Default value: empty string
An optional, comma-separated list of regular expressions that match the fully-qualified names of columns to exclude from change event record values. Other columns in the source record are captured as usual. Fully-qualified names for columns are of the form databaseName.tableName.columnName.
To match the name of a column, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the column; it does not match substrings that might be present in a column name. If you include this property in the configuration, do not also set the
column.include.list
property.
column.include.list
Default value: empty string
An optional, comma-separated list of regular expressions that match the fully-qualified names of columns to include in change event record values. Other columns are omitted from the event record. Fully-qualified names for columns are of the form databaseName.tableName.columnName.
To match the name of a column, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the column; it does not match substrings that might be present in a column name.
If you include this property in the configuration, do not set thecolumn.exclude.list
property.
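For example, the following sketch (hypothetical database, table, and column names) captures only the id and email columns of the inventory.customers table; other columns of that table are omitted from event values:
column.include.list: inventory.customers.id,inventory.customers.email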
column.mask.hash.v2.hashAlgorithm.with.salt.salt
Default value: No default
An optional, comma-separated list of regular expressions that match the fully-qualified names of character-based columns. Fully-qualified names for columns are of the form<databaseName>.<tableName>.<columnName>
.
To match the name of a column Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the column; the expression does not match substrings that might be present in a column name. In the resulting change event record, the values for the specified columns are replaced with pseudonyms.
A pseudonym consists of the hashed value that results from applying the specified hashAlgorithm and salt. Based on the hash function that is used, referential integrity is maintained, while column values are replaced with pseudonyms. Supported hash functions are described in the MessageDigest section of the Java Cryptography Architecture Standard Algorithm Name Documentation.
In the following example,
CzQMA0cB5K
is a randomly selected salt.column.mask.hash.SHA-256.with.salt.CzQMA0cB5K = inventory.orders.customerName, inventory.shipment.customerName
If necessary, the pseudonym is automatically shortened to the length of the column. The connector configuration can include multiple properties that specify different hash algorithms and salts.
Depending on the hashAlgorithm used, the salt selected, and the actual data set, the resulting data set might not be completely masked.
Hashing strategy version 2 ensures fidelity of values that are hashed in different places or systems.
column.mask.with.length.chars
Default value: No default
An optional, comma-separated list of regular expressions that match the fully-qualified names of character-based columns. Set this property if you want the connector to mask the values for a set of columns, for example, if they contain sensitive data. Setlength
to a positive integer to replace data in the specified columns with the number of asterisk (*
) characters specified by the length in the property name. Set length to0
(zero) to replace data in the specified columns with an empty string.The fully-qualified name of a column observes the following format: databaseName.tableName.columnName. To match the name of a column, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the column; the expression does not match substrings that might be present in a column name.
You can specify multiple properties with different lengths in a single configuration.
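For example, the following sketch (hypothetical column name) replaces the value of the specified column with 12 asterisk characters:
column.mask.with.12.chars: inventory.customers.credit_card_number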
column.propagate.source.type
Default value: No default
An optional, comma-separated list of regular expressions that match the fully-qualified names of columns for which you want the connector to emit extra parameters that represent column metadata. When this property is set, the connector adds the following fields to the schema of event records:-
__debezium.source.column.type
-
__debezium.source.column.length
-
__debezium.source.column.scale
These parameters propagate a column’s original type name and length (for variable-width types), respectively.
Enabling the connector to emit this extra data can assist in properly sizing specific numeric or character-based columns in sink databases.
The fully-qualified name of a column observes one of the following formats: databaseName.tableName.columnName, or databaseName.schemaName.tableName.columnName.
To match the name of a column, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the column; the expression does not match substrings that might be present in a column name.
-
column.truncate.to.length.chars
Default value: No default
An optional, comma-separated list of regular expressions that match the fully-qualified names of character-based columns. Set this property if you want to truncate the data in a set of columns when it exceeds the number of characters specified by the length in the property name. Setlength
to a positive integer value, for example,column.truncate.to.20.chars
.The fully-qualified name of a column observes the following format: databaseName.tableName.columnName. To match the name of a column, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the column; the expression does not match substrings that might be present in a column name.
You can specify multiple properties with different lengths in a single configuration.
connect.timeout.ms
-
Default value:
30000
(30 seconds)
A positive integer value that specifies the maximum time in milliseconds that the connector waits to establish a connection to the MySQL database server before the connection request times out.
connector.class
-
Default value: No default
The name of the Java class for the connector. Always specify for the MySQL connector.
database.exclude.list
Default value: empty string
An optional, comma-separated list of regular expressions that match the names of databases from which you do not want the connector to capture changes. The connector captures changes in any database that is not named in thedatabase.exclude.list
.
To match the name of a database, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the database; it does not match substrings that might be present in a database name.
If you include this property in the configuration, do not also set thedatabase.include.list
property.
database.hostname
-
Default value: No default
The IP address or hostname of the MySQL database server.
database.include.list
Default value: empty string
An optional, comma-separated list of regular expressions that match the names of the databases from which the connector captures changes. The connector does not capture changes in any database whose name is not indatabase.include.list
. By default, the connector captures changes in all databases.
To match the name of a database, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the database; it does not match substrings that might be present in a database name.
If you include this property in the configuration, do not also set thedatabase.exclude.list
property.
database.password
-
Default value: No default
The password of the MySQL user that the connector uses to connect to the MySQL database server.
database.port
-
Default value:
3306
Integer port number of the MySQL database server.
database.server.id
-
Default value: No default
The numeric ID of this database client. The specified ID must be unique across all currently running database processes in the MySQL cluster. To enable it to read the binlog, the connector uses this unique ID to join the MySQL database cluster as another server.
database.user
-
Default value: No default
The name of the MySQL user that the connector uses to connect to the MySQL database server.
decimal.handling.mode
Default value:
precise
Specifies how the connector handles values forDECIMAL
andNUMERIC
columns in change events.
Set one of the following options:precise
(default)-
Uses
java.math.BigDecimal
values in binary form to represent values precisely. double
-
Uses the
double
data type to represent values. This option can result in a loss of precision, but it is easier for most consumers to use. string
- Encodes values as formatted strings. This option is easy to consume, but can result in the loss of semantic information about the real type.
event.deserialization.failure.handling.mode
Default value:
fail
Specifies how the connector reacts after an exception occurs during deserialization of binlog events. This option is deprecated; use the
event.processing.failure.handling.mode
option instead.
fail
- Propagates the exception, which indicates the problematic event and its binlog offset, and causes the connector to stop.
warn
- Logs the problematic event and its binlog offset and then skips the event.
ignore
- Passes over the problematic event and does not log anything.
field.name.adjustment.mode
Default value: No default
Specifies how field names should be adjusted for compatibility with the message converter used by the connector. Set one of the following options:none
- No adjustment.
avro
- Replaces characters that are not valid in Avro names with underscore characters.
avro_unicode
Replaces underscore characters or characters that cannot be used in Avro names with corresponding unicode, such as
_uxxxx
.
Note`_` is an escape sequence, similar to a backslash in Java
For more information, see: Avro naming.
gtid.source.excludes
-
Default value: No default
A comma-separated list of regular expressions that match source domain IDs in the GTID set that the connector uses to find the binlog position on the MySQL server. When this property is set, the connector uses only the GTID ranges that have source UUIDs that do not match any of the specifiedexclude
patterns.
To match the value of a GTID, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the GTID’s domain identifier.
If you include this property in the configuration, do not also set thegtid.source.includes
property.
gtid.source.includes
-
Default value: No default
A comma-separated list of regular expressions that match source domain IDs in the GTID set used that the connector uses to find the binlog position on the MySQL server. When this property is set, the connector uses only the GTID ranges that have source UUIDs that match one of the specifiedinclude
patterns.
To match the value of a GTID, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the GTID’s domain identifier.
If you include this property in the configuration, do not also set thegtid.source.excludes
property.
include.query
-
Default value:
false
Boolean value that specifies whether the connector should include the original SQL query that generated the change event.
If you set this option totrue
then you must also configure MySQL with the binlog_rows_query_log_events
option set toON
. Wheninclude.query
istrue
, the query is not present for events that the snapshot process generates.
Settinginclude.query
totrue
might expose tables or fields that are explicitly excluded or masked by including the original SQL statement in the change event. For this reason, the default setting isfalse
.
For more information about configuring the database to return the originalSQL
statement for each log event, see Enabling query log events.
include.schema.changes
-
Default value:
true
Boolean value that specifies whether the connector publishes changes that occur to the database schema to a Kafka topic with the name of the database server ID. Each schema change event that the connector captures uses a key that contains the database name and a value that includes the DDL statements that describe the change. This setting does not affect how the connector records schema changes in its internal database schema history.
include.schema.comments
Default value:
false
Boolean value that specifies whether the connector parses and publishes table and column comments on metadata objects.
Note: When you set this option to
true
, the schema comments that the connector includes can add a significant amount of string data to each schema object. Increasing the number and size of logical schema objects increases the amount of memory that the connector uses.
inconsistent.schema.handling.mode
Default value:
fail
Specifies how the connector responds to binlog events that refer to tables that are not present in the internal schema representation. That is, the internal representation is not consistent with the database.
Set one of the following options:fail
- The connector throws an exception that reports the problematic event and its binlog offset. The connector then stops.
warn
- The connector logs the problematic event and its binlog offset, and then skips the event.
skip
- The connector skips the problematic event and does not report it in the log.
message.key.columns
-
Default value: No default
A list of expressions that specify the columns that the connector uses to form custom message keys for change event records that it publishes to the Kafka topics for specified tables.
By default, Debezium uses the primary key column of a table as the message key for records that it emits. In place of the default, or to specify a key for tables that lack a primary key, you can configure custom message keys based on one or more columns.
To establish a custom message key for a table, list the table, followed by the columns to use as the message key. Each list entry takes the following format:<fully-qualified_tableName>:<keyColumn>,<keyColumn>
To base a table key on multiple column names, insert commas between the column names.
Each fully-qualified table name is a regular expression in the following format:<databaseName>.<tableName>
The property can include entries for multiple tables. Use a semicolon to separate table entries in the list.
The following example sets the message key for the tablesinventory.customers
andpurchase.orders
:inventory.customers:pk1,pk2;(.*).purchaseorders:pk3,pk4
For the table inventory.customers
, the columnspk1
andpk2
are specified as the message key. For thepurchaseorders
tables in any database, the columnspk3
andpk4
serve as the message key.
There is no limit to the number of columns that you use to create custom message keys. However, it’s best to use the minimum number that are required to specify a unique key.
name
-
Default value: No default
Unique name for the connector. If you attempt to use the same name to register another connector, registration fails. This property is required by all Kafka Connect connectors.
schema.name.adjustment.mode
Default value: No default
Specifies how the connector adjusts schema names for compatibility with the message converter used by the connector. Set one of the following options:none
- No adjustment.
avro
- Replaces characters that are not valid in Avro names with underscore characters.
avro_unicode
-
Replaces underscore characters or characters that cannot be used in Avro names with corresponding unicode, such as
_uxxxx.
NOTE:_
is an escape sequence, similar to a backslash in Java
skip.messages.without.change
-
Default value:
false
Specifies whether the connector emits messages for records when it does not detect a change in the included columns. Columns are considered to be included if they are listed in thecolumn.include.list
, or are not listed in thecolumn.exclude.list
. Set the value totrue
to prevent the connector from capturing records when no changes are present in the included columns.
table.exclude.list
Default value: empty string
An optional, comma-separated list of regular expressions that match fully-qualified table identifiers of tables from which you do not want the connector to capture changes. The connector captures changes in any table that is not included intable.exclude.list
. Each identifier is of the form databaseName.tableName.
To match the name of a table, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the table; it does not match substrings that might be present in a table name.
If you include this property in the configuration, do not also set thetable.include.list
property.
table.include.list
Default value: empty string
An optional, comma-separated list of regular expressions that match fully-qualified table identifiers of tables whose changes you want to capture. The connector does not capture changes in any table that is not included intable.include.list
. Each identifier is of the form databaseName.tableName. By default, the connector captures changes in all non-system tables in every database from which it is configured to capture changes.
To match the name of a table, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the table; it does not match substrings that might be present in a table name.
If you include this property in the configuration, do not also set thetable.exclude.list
property.
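For example, the following sketch (hypothetical table names) captures changes only from two tables in the inventory database:
table.include.list: inventory.customers,inventory.orders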
tasks.max
-
Default value:
1
The maximum number of tasks to create for this connector. Because the MySQL connector always uses a single task, changing the default value has no effect.
time.precision.mode
Default value:
adaptive_time_microseconds
Specifies the type of precision that the connector uses to represent time, date, and timestamp values. Set one of the following options:
adaptive_time_microseconds
(default)-
The connector captures the date, datetime and timestamp values exactly as in the database using either millisecond, microsecond, or nanosecond precision values based on the database column’s type, with the exception of TIME type fields, which are always captured as microseconds.
connect
- The connector always represents time and timestamp values using Kafka Connect’s built-in representations for Time, Date, and Timestamp, which use millisecond precision regardless of the database columns' precision.
tombstones.on.delete
Default value:
true
Specifies whether a delete event is followed by a tombstone event. After a source record is deleted, the connector can emit a tombstone event (the default behavior) to enable Kafka to completely delete all events that pertain to the key of the deleted row in case log compaction is enabled for the topic. Set one of the following options:
true
(default)-
The connector represents delete operations by emitting a delete event and a subsequent tombstone event.
false
-
The connector emits only delete events.
topic.prefix
Default value: No default
Topic prefix that provides a namespace for the particular MySQL database server or cluster in which Debezium is capturing changes. Because the topic prefix is used to name all of the Kafka topics that receive events that this connector emits, it’s important that the topic prefix is unique across all connectors. Values must contain only alphanumeric characters, hyphens, dots, and underscores.
Warning: After you set this property, do not change its value. If you change the value, after the connector restarts, instead of continuing to emit events to the original topics, the connector emits subsequent events to topics whose names are based on the new value. The connector is also unable to recover its database schema history topic.
Advanced Debezium MySQL connector configuration properties
The following list describes advanced MySQL connector configuration properties. The default values for these properties rarely require changes. Therefore, you do not need to specify them in the connector configuration.
connect.keep.alive
-
Default value:
true
A Boolean value that specifies whether a separate thread should be used to ensure that the connection to the MySQL server or cluster is kept alive.
converters
Default value: No default
Enumerates a comma-separated list of the symbolic names of the custom converter instances that the connector can use.
For example,boolean
.
This property is required to enable the connector to use a custom converter.
For each converter that you configure for a connector, you must also add a .type property, which specifies the fully-qualified name of the class that implements the converter interface. The .type property uses the following format:
<converterSymbolicName>.type
For example,
boolean.type: io.debezium.connector.binlog.converters.TinyIntOneToBooleanConverter
If you want to further control the behavior of a configured converter, you can add one or more configuration parameters to pass values to the converter. To associate these additional configuration parameters with a converter, prefix the parameter name with the symbolic name of the converter.
For example, to define a selector parameter that specifies the subset of columns that the boolean converter processes, add the following property:
boolean.selector=db1.table1.*, db1.table2.column1
custom.metric.tags
-
Default value: No default
Defines tags that customize MBean object names by adding metadata that provides contextual information. Specify a comma-separated list of key-value pairs. Each key represents a tag for the MBean object name, and the corresponding value represents a value for the key, for example, k1=v1,k2=v2.
The connector appends the specified tags to the base MBean object name. Tags can help you to organize and categorize metrics data. You can define tags to identify particular application instances, environments, regions, versions, and so forth. For more information, see Customized MBean names.
database.initial.statements
Default value: No default
A semicolon-separated list of SQL statements that the connector executes when it establishes a JDBC connection to the database (not the connection that reads the transaction log). To specify a semicolon as a character in a SQL statement, rather than as a delimiter, use two semicolons (;;).
The connector might establish JDBC connections at its own discretion, so this property is only for configuring session parameters. It is not for executing DML statements.
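For example, a minimal sketch that sets session parameters when the connector opens a JDBC connection; the specific session variables shown here are illustrative and not taken from this guide:
database.initial.statements=SET SESSION wait_timeout=600; SET SESSION time_zone='+00:00'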
database.query.timeout.ms
-
Default value:
600000
(10 minutes)
Specifies the time, in milliseconds, that the connector waits for a query to complete. Set the value to 0 (zero) to remove the timeout limit.
database.ssl.keystore
-
Default value: No default
An optional setting that specifies the location of the key store file. A key store file can be used for two-way authentication between the client and the MySQL server.
database.ssl.keystore.password
-
Default value: No default
The password for the key store file. Specify a password only if the database.ssl.keystore property is configured.
database.ssl.mode
Default value:
preferred
Specifies whether the connector uses an encrypted connection. The following settings are available:
disabled
- Specifies the use of an unencrypted connection.
preferred
(Default)- The connector establishes an encrypted connection if the server supports secure connections. If the server does not support secure connections, the connector falls back to using an unencrypted connection.
required
- The connector establishes an encrypted connection. If it is unable to establish an encrypted connection, the connector fails.
verify_ca
-
The connector behaves as when you set the
required
option, but it also verifies the server TLS certificate against the configured Certificate Authority (CA) certificates. If the server TLS certificate does not match any valid CA certificates, the connector fails.
verify_identity
-
The connector behaves as when you set the
verify_ca
option, but it also verifies that the server certificate matches the host of the remote connection.
database.ssl.truststore
-
Default value: No default
The location of the trust store file for the server certificate verification.
database.ssl.truststore.password
-
Default value: No default
The password for the trust store file. Used to check the integrity of the truststore, and unlock the truststore.
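The following sketch shows how the SSL properties might be combined to require a CA-verified encrypted connection; the file paths and passwords are placeholders:
database.ssl.mode=verify_ca
database.ssl.keystore=/path/to/client.keystore.jks
database.ssl.keystore.password=<keystore-password>
database.ssl.truststore=/path/to/client.truststore.jks
database.ssl.truststore.password=<truststore-password>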
enable.time.adjuster
Default value:
true
Boolean value that indicates whether the connector converts a 2-digit year specification to 4 digits. Set the value to false when conversion is fully delegated to the database.
MySQL users can insert year values with either 2-digits or 4-digits. 2-digit values are mapped to a year in the range 1970 - 2069. By default, the connector performs the conversion.
errors.max.retries
Default value:
-1
Specifies how the connector responds after an operation that results in a retriable error, such as a connection error.
Set one of the following options:
-1
- No limit. The connector always restarts automatically, and retries the operation, regardless of the number of previous failures.
0
- Disabled. The connector fails immediately, and never retries the operation. User intervention is required to restart the connector.
> 0
- The connector restarts automatically until it reaches the specified maximum number of retries. After the next failure, the connector stops, and user intervention is required to restart it.
event.converting.failure.handling.mode
Default value:
warn
Specifies how the connector responds when it cannot convert a table record due to a mismatch between the data type of a column and the type specified by the Debezium internal schema.
Set one of the following options:
fail
- An exception reports that conversion failed because the data type of the field did not match the schema type, and indicates that it might be necessary to restart the connector in schema_only_recovery mode to enable a successful conversion.
warn
- The connector writes a null value to the event field for the column that failed conversion, and writes a message to the warning log.
skip
- The connector writes a null value to the event field for the column that failed conversion, and writes a message to the debug log.
event.processing.failure.handling.mode
Default value:
fail
Specifies how the connector handles failures that occur when processing events, for example, if it encounters a corrupted event. The following settings are available:
fail
- The connector raises an exception that reports the problematic event and its position. The connector then stops.
warn
- The connector does not raise an exception. Instead, it logs the problematic event and its position, and then skips the event.
ignore
- The connector ignores the problematic event, and does not generate a log entry.
heartbeat.action.query
Default value: No default
Specifies a query that the connector executes on the source database when the connector sends a heartbeat message.
For example, the following query periodically captures the state of the executed GTID set in the source database.
INSERT INTO gtid_history_table (select @gtid_executed)
heartbeat.interval.ms
Default value:
0
Specifies how frequently the connector sends heartbeat messages to a Kafka topic. By default, the connector does not send heartbeat messages.
Heartbeat messages are useful for monitoring whether the connector is receiving change events from the database. Heartbeat messages might help decrease the number of change events that need to be re-sent when a connector restarts. To send heartbeat messages, set this property to a positive integer, which indicates the number of milliseconds between heartbeat messages.
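For example, the following sketch enables a heartbeat every 30 seconds and reuses the heartbeat.action.query example shown earlier; the gtid_history_table is assumed to exist in the source database:
heartbeat.interval.ms=30000
heartbeat.action.query=INSERT INTO gtid_history_table (select @gtid_executed)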
incremental.snapshot.allow.schema.changes
-
Default value:
false
Specifies whether the connector allows schema changes during an incremental snapshot. When the value is set to true, the connector detects schema changes during an incremental snapshot, and re-selects the current chunk to avoid locking DDLs.
Changes to a primary key are not supported. Changing the primary key during an incremental snapshot can lead to incorrect results. A further limitation is that if a schema change affects only the default values of columns, the change is not detected until the DDL is processed from the binlog stream. This does not affect the values of snapshot events, but the schema of these snapshot events might have outdated defaults.
incremental.snapshot.chunk.size
-
Default value:
1024
The maximum number of rows that the connector fetches and reads into memory when it retrieves an incremental snapshot chunk. Increasing the chunk size provides greater efficiency, because the snapshot runs fewer snapshot queries of a greater size. However, larger chunk sizes also require more memory to buffer the snapshot data. Adjust the chunk size to a value that provides the best performance in your environment.
incremental.snapshot.watermarking.strategy
Default value:
insert_insert
Specifies the watermarking mechanism that the connector uses during an incremental snapshot to deduplicate events that might be captured by an incremental snapshot and then recaptured after streaming resumes.
You can specify one of the following options:
insert_insert
(default)- When you send a signal to initiate an incremental snapshot, for every chunk that Debezium reads during the snapshot, it writes an entry to the signaling data collection to record the signal to open the snapshot window. After the snapshot completes, Debezium inserts a second entry that records the signal to close the window.
insert_delete
- When you send a signal to initiate an incremental snapshot, for every chunk that Debezium reads, it writes a single entry to the signaling data collection to record the signal to open the snapshot window. After the snapshot completes, this entry is removed. No entry is created for the signal to close the snapshot window. Set this option to prevent rapid growth of the signaling data collection.
max.batch.size
-
Default value:
2048
Positive integer value that specifies the maximum size of each batch of events that should be processed during each iteration of this connector.
max.queue.size
-
Default value:
8192
A positive integer value that specifies the maximum number of records that the blocking queue can hold. When Debezium reads events streamed from the database, it places the events in the blocking queue before it writes them to Kafka. The blocking queue can provide backpressure for reading change events from the database in cases where the connector ingests messages faster than it can write them to Kafka, or when Kafka becomes unavailable. Events that are held in the queue are disregarded when the connector periodically records offsets. Always set max.queue.size to a value that is larger than the value of max.batch.size.
max.queue.size.in.bytes
-
Default value:
0
A long integer value that specifies the maximum volume of the blocking queue in bytes. By default, volume limits are not specified for the blocking queue. To specify the number of bytes that the queue can consume, set this property to a positive long value.
If max.queue.size is also set, writing to the queue is blocked when the size of the queue reaches the limit specified by either property. For example, if you set max.queue.size=1000, and max.queue.size.in.bytes=5000, writing to the queue is blocked after the queue contains 1000 records, or after the volume of the records in the queue reaches 5000 bytes.
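A minimal sketch of how the batch and queue settings might be combined; the values are illustrative and should be tuned for your environment, keeping max.queue.size larger than max.batch.size:
max.batch.size=2048
max.queue.size=8192
max.queue.size.in.bytes=1073741824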
min.row.count.to.stream.results
Default value:
1000
During a snapshot, the connector queries each table for which the connector is configured to capture changes. The connector uses each query result to produce a read event that contains data for all rows in that table. This property determines whether the MySQL connector puts results for a table into memory, which is fast but requires large amounts of memory, or streams the results, which can be slower but works for very large tables. The setting of this property specifies the minimum number of rows a table must contain before the connector streams results.
To skip all table size checks and always stream all results during a snapshot, set this property to
0
.
notification.enabled.channels
Default value: No default
List of notification channel names that are enabled for the connector. By default, the following channels are available:
- sink
- log
- jmx
poll.interval.ms
-
Default value:
500
(0.5 seconds)
Positive integer value that specifies the number of milliseconds the connector waits for new change events to appear before it starts processing a batch of events.
provide.transaction.metadata
-
Default value:
false
Determines whether the connector generates events with transaction boundaries and enriches change event envelopes with transaction metadata. Specify true if you want the connector to do this. For more information, see Transaction metadata.
signal.data.collection
-
Default value: No default
Fully-qualified name of the data collection that is used to send signals to the connector.
Use the following format to specify the collection name: <databaseName>.<tableName>
signal.enabled.channels
Default value: No default
List of the signaling channel names that are enabled for the connector. By default, the following channels are available:
- source
- kafka
- file
- jmx
skipped.operations
-
Default value:
t
A comma-separated list of operation types that will be skipped during streaming. The operations include: c for inserts/create, u for updates, d for deletes, t for truncates, and none to not skip any operations. By default, truncate operations are skipped.
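For example, the following hypothetical setting skips truncate and delete events while still streaming create and update events:
skipped.operations=t,d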
snapshot.delay.ms
-
Default value: No default
An interval in milliseconds that the connector should wait before performing a snapshot when the connector starts. If you are starting multiple connectors in a cluster, this property is useful for avoiding snapshot interruptions, which might cause re-balancing of connectors.
snapshot.fetch.size
-
Default value: Unset
By default, during a snapshot, the connector reads table content in batches of rows. Set this property to specify the maximum number of rows in a batch.
To maintain connector performance, it’s best to preserve the unset default of this property. This default configuration enables MySQL to stream the result set to Debezium one row at a time. By contrast, if you set this property, performance problems can result, because Debezium attempts to fetch the entire result set into memory at once.
snapshot.include.collection.list
-
Default value: All tables specified in the
table.include.list
.
An optional, comma-separated list of regular expressions that match the fully-qualified names (<databaseName>.<tableName>) of the tables to include in a snapshot. The specified items must be named in the connector’s table.include.list property. This property takes effect only if the connector’s snapshot.mode property is set to a value other than never.
This property does not affect the behavior of incremental snapshots.
To match the name of a table, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the table; it does not match substrings that might be present in a table name.
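For example, the following hypothetical configuration streams changes from two tables but includes only one of them in the initial snapshot; the database and table names are placeholders:
table.include.list=inventory.customers,inventory.orders
snapshot.include.collection.list=inventory.customers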
snapshot.lock.timeout.ms
-
Default value:
10000
Positive integer that specifies the maximum amount of time (in milliseconds) to wait to obtain table locks when performing a snapshot. If the connector cannot acquire table locks in this time interval, the snapshot fails.
snapshot.locking.mode
Default value:
minimal
Specifies whether and for how long the connector holds the global MySQL read lock, which prevents any updates to the database while the connector is performing a snapshot. The following settings are available:
minimal
-
The connector holds the global read lock for only the initial phase of the snapshot during which it reads the database schemas and other metadata. During the next phase of the snapshot, the connector releases the lock as it selects all rows from each table. To perform the SELECT operation in a consistent fashion, the connector uses a REPEATABLE READ transaction. Although the release of the global read lock permits other MySQL clients to update the database, use of REPEATABLE READ isolation ensures a consistent snapshot, because the connector continues to read the same data for the duration of the transaction.
extended
-
Blocks all write operations for the duration of the snapshot. Use this setting if clients submit concurrent operations that are incompatible with the REPEATABLE READ isolation level in MySQL.
none
- Prevents the connector from acquiring any table locks during the snapshot. Although this option is allowed with all snapshot modes, it is safe to use only if no schema changes occur while the snapshot is running. Tables that are defined with the MyISAM engine always acquire a table lock. As a result, such tables are locked even if you set this option. This behavior differs from tables that are defined by the InnoDB engine, which acquire row-level locks.
snapshot.max.threads
Default value:
1
Specifies the number of threads that the connector uses when performing an initial snapshot. To enable parallel initial snapshots, set the property to a value greater than 1. In a parallel initial snapshot, the connector processes multiple tables concurrently.
Important: Parallel initial snapshots are a Developer Preview feature only. Developer Preview software is not supported by Red Hat in any way and is not functionally complete or production-ready. Do not use Developer Preview software for production or business-critical workloads. Developer Preview software provides early access to upcoming product software in advance of its possible inclusion in a Red Hat product offering. Customers can use this software to test functionality and provide feedback during the development process. This software is subject to change or removal at any time, and has received limited testing. Red Hat might provide ways to submit feedback on Developer Preview software without an associated SLA.
For more information about the support scope of Red Hat Developer Preview software, see Developer Preview Support Scope.
snapshot.mode
Default value:
initial
Specifies the criteria for running a snapshot when the connector starts. The following settings are available:
always
- The connector performs a snapshot every time that it starts. The snapshot includes the structure and data of the captured tables. Specify this value to populate topics with a complete representation of the data from the captured tables every time that the connector starts.
initial
(default)- The connector runs a snapshot only when no offsets have been recorded for the logical server name, or if it detects that an earlier snapshot failed to complete. After the snapshot completes, the connector begins to stream event records for subsequent database changes.
initial_only
- The connector runs a snapshot only when no offsets have been recorded for the logical server name. After the snapshot completes, the connector stops. It does not transition to streaming to read change events from the binlog.
schema_only
-
Deprecated, see no_data.
no_data
- The connector runs a snapshot that captures only the schema, but not any table data. Set this option if you do not need the topics to contain a consistent snapshot of the data, but you want to capture any schema changes that were applied after the last connector restart.
schema_only_recovery
-
Deprecated, see recovery.
recovery
- Set this option to restore a database schema history topic that is lost or corrupted. After a restart, the connector runs a snapshot that rebuilds the topic from the source tables. You can also set the property to periodically prune a database schema history topic that experiences unexpected growth.
Warning: Do not use this mode to perform a snapshot if schema changes were committed to the database after the last connector shutdown.
never
-
When the connector starts, rather than performing a snapshot, it immediately begins to stream event records for subsequent database changes. This option is under consideration for future deprecation, in favor of the no_data option.
when_needed
- After the connector starts, it performs a snapshot only if it detects one of the following circumstances:
- It cannot detect any topic offsets.
- A previously recorded offset specifies a binlog position or GTID that is not available on the server.
snapshot.query.mode
Default value:
select_all
Specifies how the connector queries data while performing a snapshot.
Set one of the following options:
select_all
(default)- The connector uses a select all query to retrieve rows from captured tables, optionally adjusting the selected columns based on the column include and exclude list configurations.
This setting enables you to manage snapshot content in a more flexible manner compared to using the
snapshot.select.statement.overrides
property.
snapshot.select.statement.overrides
Default value: No default
Specifies the table rows to include in a snapshot. Use the property if you want a snapshot to include only a subset of the rows in a table. This property affects snapshots only. It does not apply to events that the connector reads from the log.
The property contains a comma-separated list of fully-qualified table names in the form <databaseName>.<tableName>. For example,
"snapshot.select.statement.overrides": "inventory.products,customers.orders"
For each table in the list, add a further configuration property that specifies the
SELECT
statement for the connector to run on the table when it takes a snapshot. The specifiedSELECT
statement determines the subset of table rows to include in the snapshot. Use the following format to specify the name of thisSELECT
statement property:
snapshot.select.statement.overrides.<databaseName>.<tableName>
For example, snapshot.select.statement.overrides.customers.orders
From a customers.orders table that includes the soft-delete column delete_flag, add the following properties if you want a snapshot to include only those records that are not soft-deleted:"snapshot.select.statement.overrides": "customers.orders", "snapshot.select.statement.overrides.customers.orders": "SELECT * FROM [customers].[orders] WHERE delete_flag = 0 ORDER BY id DESC"
In the resulting snapshot, the connector includes only the records for which
delete_flag = 0
.
snapshot.tables.order.by.row.count
Default value:
disabled
Specifies the order in which the connector processes tables when it performs an initial snapshot. Set one of the following options:
descending
- The connector snapshots tables in order, based on the number of rows from the highest to the lowest.
ascending
- The connector snapshots tables in order, based on the number of rows, from lowest to highest.
disabled
- The connector disregards row count when performing an initial snapshot.
streaming.delay.ms
-
Default value:
0
Specifies the time, in milliseconds, that the connector delays the start of the streaming process after it completes a snapshot. Setting a delay interval helps to prevent the connector from restarting snapshots in the event that a failure occurs immediately after the snapshot completes, but before the streaming process begins. Set a delay value that is higher than the value of the offset.flush.interval.ms property that is set for the Kafka Connect worker.
table.ignore.builtin
-
Default value:
true
A Boolean value that specifies whether built-in system tables should be ignored. This applies regardless of the table include and exclude lists. By default, changes that occur to the values in system tables are excluded from capture, and Debezium does not generate events for system table changes.
topic.cache.size
-
Default value:
10000
Specifies the number of topic names that can be stored in memory in a bounded concurrent hash map. The connector uses the cache to help determine the topic name that corresponds to a data collection.
topic.delimiter
-
Default value:
.
Specifies the delimiter that the connector inserts between components of the topic name.
topic.heartbeat.prefix
Default value:
__debezium-heartbeat
Specifies the name of the topic to which the connector sends heartbeat messages. The topic name takes the following format:
topic.heartbeat.prefix.topic.prefix
For example, if the topic prefix is fulfillment, the default topic name is __debezium-heartbeat.fulfillment.
topic.naming.strategy
-
Default value:
io.debezium.schema.DefaultTopicNamingStrategy
The name of the TopicNamingStrategy class that the connector uses. The specified strategy determines how the connector names the topics that store event records for data changes, schema changes, transactions, heartbeats, and so forth.
topic.transaction
Default value:
transaction
Specifies the name of the topic to which the connector sends transaction metadata messages. The topic name takes the following pattern:
topic.prefix.topic.transaction
For example, if the topic prefix is fulfillment, the default topic name is fulfillment.transaction.
use.nongraceful.disconnect
-
Default value: false
A Boolean value that specifies whether the binary log client’s keepalive thread sets the SO_LINGER socket option to 0 to immediately close stale TCP connections.
Set the value to true if the connector experiences deadlocks in SSLSocketImpl.close.
Debezium connector database schema history configuration properties
Debezium provides a set of schema.history.internal.*
properties that control how the connector interacts with the schema history topic.
The following table describes the schema.history.internal
properties for configuring the Debezium connector.
Property | Default | Description |
---|---|---|
No default | The full name of the Kafka topic where the connector stores the database schema history. | |
No default | A list of host/port pairs that the connector uses for establishing an initial connection to the Kafka cluster. This connection is used for retrieving the database schema history previously stored by the connector, and for writing each DDL statement read from the source database. Each pair should point to the same Kafka cluster used by the Kafka Connect process. | |
| An integer value that specifies the maximum number of milliseconds the connector should wait during startup/recovery while polling for persisted data. The default is 100ms. | |
| An integer value that specifies the maximum number of milliseconds the connector should wait while fetching cluster information using the Kafka admin client. | |
| An integer value that specifies the maximum number of milliseconds the connector should wait while creating the Kafka history topic using the Kafka admin client. | |
|
The maximum number of times that the connector should try to read persisted history data before the connector recovery fails with an error. The maximum amount of time to wait after receiving no data is | |
|
A Boolean value that specifies whether the connector should ignore malformed or unknown database statements or stop processing so a human can fix the issue. The safe default is | |
|
A Boolean value that specifies whether the connector records schema structures from all tables in a schema or database, or only from tables that are designated for capture.
| |
|
A Boolean value that specifies whether the connector records schema structures from all logical databases in the database instance.
|
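For example, a minimal sketch of the schema history settings in a connector configuration; the topic name and broker address are placeholders:
schema.history.internal.kafka.topic=schema-changes.inventory
schema.history.internal.kafka.bootstrap.servers=kafka:9092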
Pass-through MySQL connector configuration properties
You can set pass-through properties in the connector configuration to customize the behavior of the Apache Kafka producer and consumer. For information about the full range of configuration properties for Kafka producers and consumers, see the Kafka documentation.
Pass-through properties for configuring how producer and consumer clients interact with schema history topics
Debezium relies on an Apache Kafka producer to write schema changes to database schema history topics. Similarly, it relies on a Kafka consumer to read from database schema history topics when a connector starts. You define the configuration for the Kafka producer and consumer clients by assigning values to a set of pass-through configuration properties that begin with the schema.history.internal.producer.*
and schema.history.internal.consumer.*
prefixes. The pass-through producer and consumer database schema history properties control a range of behaviors, such as how these clients secure connections with the Kafka broker, as shown in the following example:
schema.history.internal.producer.security.protocol=SSL schema.history.internal.producer.ssl.keystore.location=/var/private/ssl/kafka.server.keystore.jks schema.history.internal.producer.ssl.keystore.password=test1234 schema.history.internal.producer.ssl.truststore.location=/var/private/ssl/kafka.server.truststore.jks schema.history.internal.producer.ssl.truststore.password=test1234 schema.history.internal.producer.ssl.key.password=test1234 schema.history.internal.consumer.security.protocol=SSL schema.history.internal.consumer.ssl.keystore.location=/var/private/ssl/kafka.server.keystore.jks schema.history.internal.consumer.ssl.keystore.password=test1234 schema.history.internal.consumer.ssl.truststore.location=/var/private/ssl/kafka.server.truststore.jks schema.history.internal.consumer.ssl.truststore.password=test1234 schema.history.internal.consumer.ssl.key.password=test1234
Debezium strips the prefix from the property name before it passes the property to the Kafka client.
For more information about Kafka producer configuration properties and Kafka consumer configuration properties, see the Apache Kafka documentation .
Pass-through properties for configuring how the MySQL connector interacts with the Kafka signaling topic
Debezium provides a set of signal.*
properties that control how the connector interacts with the Kafka signals topic.
The following table describes the Kafka signal
properties.
Property | Default | Description |
---|---|---|
<topic.prefix>-signal | The name of the Kafka topic that the connector monitors for ad hoc signals. Note: If automatic topic creation is disabled, you must manually create the required signaling topic. A signaling topic is required to preserve signal ordering. The signaling topic must have a single partition. | |
kafka-signal | The name of the group ID that is used by Kafka consumers. | |
No default | A list of the host and port pairs that the connector uses to establish its initial connection to the Kafka cluster. Each pair references the Kafka cluster that is used by the Debezium Kafka Connect process. | |
| An integer value that specifies the maximum number of milliseconds that the connector waits when polling signals. | |
| Specifies whether the Kafka consumer writes an offset commit after it reads a message from the signaling topic. The value that you assign to this property determines whether the connector can process requests that the signaling topic receives while the connector is offline. Choose one of the following settings:
|
Pass-through properties for configuring the Kafka consumer client for the signaling channel
The Debezium connector provides for pass-through configuration of the signals Kafka consumer. Pass-through signals properties begin with the prefix signal.consumer.*. For example, the connector passes properties such as signal.consumer.security.protocol=SSL
to the Kafka consumer.
Debezium strips the prefixes from the properties before it passes the properties to the Kafka signals consumer.
Pass-through properties for configuring the MySQL connector sink notification channel
The following table describes properties that you can use to configure the Debezium sink notification
channel.
Property | Default | Description |
---|---|---|
No default |
The name of the topic that receives notifications from Debezium. This property is required when you configure the |
Debezium connector pass-through database driver configuration properties
The Debezium connector provides for pass-through configuration of the database driver. Pass-through database properties begin with the prefix driver.*
. For example, the connector passes properties such as driver.foobar=false
to the JDBC URL.
Debezium strips the prefixes from the properties before it passes the properties to the database driver.
2.4.7. Monitoring Debezium MySQL connector performance
The Debezium MySQL connector provides three types of metrics that are in addition to the built-in support for JMX metrics that Zookeeper, Kafka, and Kafka Connect provide.
- Snapshot metrics provide information about connector operation while performing a snapshot.
- Streaming metrics provide information about connector operation when the connector is reading the binlog.
- Schema history metrics provide information about the status of the connector’s schema history.
Debezium monitoring documentation provides details for how to expose these metrics by using JMX.
2.4.7.1. Customized names for MySQL connector snapshot and streaming MBean objects
Debezium connectors expose metrics via the MBean name for the connector. These metrics, which are specific to each connector instance, provide data about the behavior of the connector’s snapshot, streaming, and schema history processes.
By default, when you deploy a correctly configured connector, Debezium generates a unique MBean name for each of the different connector metrics. To view the metrics for a connector process, you configure your observability stack to monitor its MBean. But these default MBean names depend on the connector configuration; configuration changes can result in changes to the MBean names. A change to the MBean name breaks the linkage between the connector instance and the MBean, disrupting monitoring activity. In this scenario, you must reconfigure the observability stack to use the new MBean name if you want to resume monitoring.
To prevent monitoring disruptions that result from MBean name changes, you can configure custom metrics tags. You configure custom metrics by adding the custom.metric.tags
property to the connector configuration. The property accepts key-value pairs in which each key represents a tag for the MBean object name, and the corresponding value represents the value of that tag. For example: k1=v1,k2=v2
. Debezium appends the specified tags to the MBean name of the connector.
After you configure the custom.metric.tags
property for a connector, you can configure the observability stack to retrieve metrics associated with the specified tags. The observability stack then uses the specified tags, rather than the mutable MBean names to uniquely identify connectors. Later, if Debezium redefines how it constructs MBean names, or if the topic.prefix
in the connector configuration changes, metrics collection is uninterrupted, because the metrics scrape task uses the specified tag patterns to identify the connector.
A further benefit of using custom tags is that you can use tags that reflect the architecture of your data pipeline, so that metrics are organized in a way that suits your operational needs. For example, you might specify tags with values that declare the type of connector activity, the application context, or the data source, for example, db1-streaming-for-application-abc
. If you specify multiple key-value pairs, all of the specified pairs are appended to the connector’s MBean name.
The following example illustrates how tags modify the default MBean name.
Example 2.31. How custom tags modify the connector MBean name
By default, the MySQL connector uses the following MBean name for streaming metrics:
debezium.mysql:type=connector-metrics,context=streaming,server=<topic.prefix>
If you set the value of custom.metric.tags
to database=salesdb-streaming,table=inventory
, Debezium generates the following custom MBean name:
debezium.mysql:type=connector-metrics,context=streaming,server=<topic.prefix>,database=salesdb-streaming,table=inventory
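The following sketch shows where the property might appear in the connector configuration to produce the preceding MBean name:
custom.metric.tags=database=salesdb-streaming,table=inventory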
2.4.7.2. Monitoring Debezium during snapshots of MySQL databases
The MBean is debezium.mysql:type=connector-metrics,context=snapshot,server=<topic.prefix>
.
Snapshot metrics are not exposed unless a snapshot operation is active, or if a snapshot has occurred since the last connector start.
The following table lists the snapshot metrics that are available.
Attributes | Type | Description |
---|---|---|
| The last snapshot event that the connector has read. | |
| The number of milliseconds since the connector has read and processed the most recent event. | |
| The total number of events that this connector has seen since last started or reset. | |
| The number of events that have been filtered by include/exclude list filtering rules configured on the connector. | |
| The list of tables that are captured by the connector. | |
| The length of the queue used to pass events between the snapshotter and the main Kafka Connect loop. | |
| The free capacity of the queue used to pass events between the snapshotter and the main Kafka Connect loop. | |
| The total number of tables that are being included in the snapshot. | |
| The number of tables that the snapshot has yet to copy. | |
| Whether the snapshot was started. | |
| Whether the snapshot was paused. | |
| Whether the snapshot was aborted. | |
| Whether the snapshot completed. | |
| The total number of seconds that the snapshot has taken so far, even if not complete. The total also includes the time when the snapshot was paused. | |
| The total number of seconds that the snapshot was paused. If the snapshot was paused several times, the paused time adds up. | |
| Map containing the number of rows scanned for each table in the snapshot. Tables are incrementally added to the Map during processing. Updates every 10,000 rows scanned and upon completing a table. | |
|
The maximum buffer of the queue in bytes. This metric is available if | |
| The current volume, in bytes, of records in the queue. |
The connector also provides the following additional snapshot metrics when an incremental snapshot is executed:
Attributes | Type | Description |
---|---|---|
| The identifier of the current snapshot chunk. | |
| The lower bound of the primary key set defining the current chunk. | |
| The upper bound of the primary key set defining the current chunk. | |
| The lower bound of the primary key set of the currently snapshotted table. | |
| The upper bound of the primary key set of the currently snapshotted table. |
2.4.7.3. Monitoring Debezium MySQL connector record streaming
The Debezium MySQL connector provides three types of metrics that are in addition to the built-in support for JMX metrics that Zookeeper, Kafka, and Kafka Connect provide.
- Snapshot metrics provide information about connector operation while performing a snapshot.
- Streaming metrics provide information about connector operation when the connector is reading the binlog.
- Schema history metrics provide information about the status of the connector’s schema history.
Debezium monitoring documentation provides details for how to expose these metrics by using JMX.
The MBean is debezium.mysql:type=connector-metrics,context=streaming,server=<topic.prefix>
.
The following table lists the streaming metrics that are available.
Attributes | Type | Description |
---|---|---|
| The last streaming event that the connector has read. | |
| The number of milliseconds since the connector has read and processed the most recent event. | |
| The total number of data change events reported by the source database since the last connector start, or since a metrics reset. Represents the data change workload for Debezium to process. | |
| The total number of create events processed by the connector since its last start or metrics reset. | |
| The total number of update events processed by the connector since its last start or metrics reset. | |
| The total number of delete events processed by the connector since its last start or metrics reset. | |
| The number of events that have been filtered by include/exclude list filtering rules configured on the connector. | |
| The list of tables that are captured by the connector. | |
| The length of the queue used to pass events between the streamer and the main Kafka Connect loop. | |
| The free capacity of the queue used to pass events between the streamer and the main Kafka Connect loop. | |
| Flag that denotes whether the connector is currently connected to the database server. | |
| The number of milliseconds between the last change event’s timestamp and the connector processing it. The values will incorporate any differences between the clocks on the machines where the database server and the connector are running. | |
| The number of processed transactions that were committed. | |
| The coordinates of the last received event. | |
| Transaction identifier of the last processed transaction. | |
|
The maximum buffer of the queue in bytes. This metric is available if | |
| The current volume, in bytes, of records in the queue. |
2.4.7.4. Monitoring Debezium MySQL connector schema history
The MBean is debezium.mysql:type=connector-metrics,context=schema-history,server=<topic.prefix>
.
The following table lists the schema history metrics that are available.
Attributes | Type | Description |
---|---|---|
|
One of | |
| The time, in epoch seconds, at which recovery started. | |
| The number of changes that were read during recovery phase. | |
| The total number of schema changes applied during recovery and runtime. | |
| The number of milliseconds that elapsed since the last change was recovered from the history store. | |
| The number of milliseconds that elapsed since the last change was applied. | |
| The string representation of the last change recovered from the history store. | |
| The string representation of the last applied change. |
2.4.8. How Debezium MySQL connectors handle faults and problems
Debezium is a distributed system that captures all changes in multiple upstream databases; it never misses or loses an event. When the system is operating normally or being managed carefully, Debezium provides exactly-once delivery of every change event record.
If a fault does occur, the system does not lose any events. However, while Debezium is recovering from a fault, it might repeat some change events. In these abnormal situations, Debezium, like Kafka, provides at least once delivery of change events.
Details are in the following sections:
- Configuration and startup errors
In the following situations, the connector fails when trying to start, reports an error or exception in the log, and stops running:
- The connector’s configuration is invalid.
- The connector cannot successfully connect to the MySQL server by using the specified connection parameters.
- The connector is attempting to restart at a position in the binlog for which MySQL no longer has the history available.
In these cases, the error message has details about the problem and possibly a suggested workaround. After you correct the configuration or address the MySQL problem, restart the connector.
However, if you are connecting to a highly available MySQL cluster, you can restart the connector immediately. It will connect to a different MySQL server in the cluster, find the location in the server’s binlog that represents the last transaction, and begin reading the new server’s binlog from that specific location.
- Kafka Connect stops gracefully
- When Kafka Connect stops gracefully, there is a short delay while the Debezium MySQL connector tasks are stopped and restarted on new Kafka Connect processes.
- Kafka Connect process crashes
- If Kafka Connect crashes, the process stops and any Debezium MySQL connector tasks terminate without their most recently-processed offsets being recorded. In distributed mode, Kafka Connect restarts the connector tasks on other processes. However, the MySQL connector resumes from the last offset recorded by the earlier processes. As a result, the replacement tasks might regenerate some events that were processed before the crash, creating duplicate events.
Each change event message includes source-specific information that you can use to identify duplicate events, for example:
- Event origin
- MySQL server’s event time
- The binlog file name and position
- GTIDs, if used
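For example, the source block of a change event might resemble the following sketch; the values are illustrative, and the file, pos, and gtid fields are the ones to compare when you check for duplicates:
"source": {
  "ts_ms": 1620000000000,
  "file": "mysql-bin.000003",
  "pos": 154,
  "gtid": "..."
}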
- MySQL purges binlog files
- If the Debezium MySQL connector stops for too long, the MySQL server purges older binlog files and the connector’s last position may be lost. When the connector is restarted, the MySQL server no longer has the starting point and the connector performs another initial snapshot. If the snapshot is disabled, the connector fails with an error.
See snapshots for details about how MySQL connectors perform initial snapshots.
2.5. Debezium Connector for Oracle
Debezium’s Oracle connector captures and records row-level changes that occur in databases on an Oracle server, including tables that are added while the connector is running. You can configure the connector to emit change events for specific subsets of schemas and tables, or to ignore, mask, or truncate values in specific columns.
For information about the Oracle Database versions that are compatible with this connector, see the Debezium Supported Configurations page.
Debezium ingests change events from Oracle by using the native LogMiner database package.
Information and procedures for using a Debezium Oracle connector are organized as follows:
- Section 2.5.1, “How Debezium Oracle connectors work”
- Section 2.5.2, “Descriptions of Debezium Oracle connector data change events”
- Section 2.5.3, “How Debezium Oracle connectors map data types”
- Section 2.5.4, “Setting up Oracle to work with Debezium”
- Custom converters
- Section 2.5.5, “Deployment of Debezium Oracle connectors”
- Section 2.5.6, “Descriptions of Debezium Oracle connector configuration properties”
- Section 2.5.7, “Monitoring Debezium Oracle connector performance”
- Section 2.5.8, “Oracle connector frequently asked questions”
2.5.1. How Debezium Oracle connectors work
To optimally configure and run a Debezium Oracle connector, it is helpful to understand how the connector performs snapshots, streams change events, determines Kafka topic names, uses metadata, and implements event buffering.
For more information, see the following topics:
- Section 2.5.1.1, “How Debezium Oracle connectors perform database snapshots”
- Section 2.5.1.2, “Ad hoc snapshots”
- Section 2.5.1.3, “Incremental snapshots”
- Section 2.5.1.5, “Default names of Kafka topics that receive Debezium Oracle change event records”
- Section 2.5.1.7, “How Debezium Oracle connectors expose database schema changes”
- Section 2.5.1.8, “Debezium Oracle connector-generated events that represent transaction boundaries”
- Section 2.5.1.9, “How the Debezium Oracle connector uses event buffering”
2.5.1.1. How Debezium Oracle connectors perform database snapshots
Typically, the redo logs on an Oracle server are configured to not retain the complete history of the database. As a result, the Debezium Oracle connector cannot retrieve the entire history of the database from the logs. To enable the connector to establish a baseline for the current state of the database, the first time that the connector starts, it performs an initial consistent snapshot of the database.
If the time needed to complete the initial snapshot exceeds the UNDO_RETENTION
time that is set for the database (fifteen minutes, by default), an ORA-01555 exception can occur. For more information about the error, and about the steps that you can take to recover from it, see the Frequently asked questions.
During a table’s snapshot, Oracle can raise an ORA-01466 exception. This happens when a user modifies the schema of the table, or adds, changes, or drops an index or related object that is associated with the table that is being snapshotted. If this happens, the connector stops, and the initial snapshot must be taken again from the beginning.
To remediate the problem, you can configure the snapshot.database.errors.max.retries
property with a value greater than 0
so that the specific table’s snapshot restarts. When the connector retries, the entire snapshot does not start from the beginning; however, the specific table in question is re-read from the beginning, and the table’s topic will contain duplicate snapshot events.
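For example, the following hypothetical setting allows the connector to retry the snapshot of an affected table up to three times before it reports a failure:
snapshot.database.errors.max.retries=3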
You can find more information about snapshots in the following sections:
2.5.1.1.1. Default workflow that the Oracle connector uses to perform an initial snapshot
The following workflow lists the steps that Debezium takes to create a snapshot. These steps describe the process for a snapshot when the snapshot.mode
configuration property is set to its default value, which is initial
. You can customize the way that the connector creates snapshots by changing the value of the snapshot.mode
property. If you configure a different snapshot mode, the connector completes the snapshot by using a modified version of this workflow.
When the snapshot mode is set to the default, the connector completes the following tasks to create a snapshot:
- Establish a connection to the database.
-
Determine the tables to be captured. By default, the connector captures all tables except those with schemas that exclude them from capture. After the snapshot completes, the connector continues to stream data for the specified tables. If you want the connector to capture data only from specific tables you can direct the connector to capture the data for only a subset of tables or table elements by setting properties such as
table.include.list
ortable.exclude.list
. -
Obtain a
ROW SHARE MODE
lock on each of the captured tables to prevent structural changes from occurring during creation of the snapshot. Debezium holds the locks for only a short time.
- Read the current system change number (SCN) position from the server’s redo log.
Capture the structure of all database tables, or all tables that are designated for capture. The connector persists schema information in its internal database schema history topic. The schema history provides information about the structure that is in effect when a change event occurs.
Note: By default, the connector captures the schema of every table in the database that is in capture mode, including tables that are not configured for capture. If tables are not configured for capture, the initial snapshot captures only their structure; it does not capture any table data. For more information about why snapshots persist schema information for tables that you did not include in the initial snapshot, see Understanding why initial snapshots capture the schema for all tables.
- Release the locks obtained in Step 3. Other database clients can now write to any previously locked tables.
At the SCN position that was read in Step 4, the connector scans the tables that are designated for capture (
SELECT * FROM … AS OF SCN 123
). During the scan, the connector completes the following tasks:- Confirms that the table was created before the snapshot began. If the table was created after the snapshot began, the connector skips the table. After the snapshot is complete, and the connector transitions to streaming, it emits change events for any tables that were created after the snapshot began.
-
Produces a
read
event for each row that is captured from a table. Allread
events contain the same SCN position, which is the SCN position that was obtained in step 4. -
Emits each
read
event to the Kafka topic for the source table. - Releases data table locks, if applicable.
- Record the successful completion of the snapshot in the connector offsets.
The resulting initial snapshot captures the current state of each row in the captured tables. From this baseline state, the connector captures subsequent changes as they occur.
After the snapshot process begins, if the process is interrupted due to connector failure, rebalancing, or other reasons, the process restarts after the connector restarts. After the connector completes the initial snapshot, it continues streaming from the position that it read in Step 4 so that it does not miss any updates. If the connector stops again for any reason, after it restarts, it resumes streaming changes from where it previously left off.
Setting | Description |
---|---|
| Perform snapshot on each connector start. After the snapshot completes, the connector begins to stream event records for subsequent database changes. |
| The connector performs a database snapshot as described in the default workflow for creating an initial snapshot. After the snapshot completes, the connector begins to stream event records for subsequent database changes. |
| The connector performs a database snapshot and stops before streaming any change event records, not allowing any subsequent change events to be captured. |
|
Deprecated, see |
|
The connector captures the structure of all relevant tables, performing all of the steps described in the default snapshot workflow, except that it does not create |
|
Deprecated, see |
|
Set this option to restore a database schema history topic that is lost or corrupted. After a restart, the connector runs a snapshot that rebuilds the topic from the source tables. You can also set the property to periodically prune a database schema history topic that experiences unexpected growth. |
| After the connector starts, it performs a snapshot only if it detects one of the following circumstances:
|
For more information, see snapshot.mode
in the table of connector configuration properties.
2.5.1.1.2. Description of why initial snapshots capture the schema history for all tables
The initial snapshot that a connector runs captures two types of information:
- Table data
-
Information about
INSERT
,UPDATE
, andDELETE
operations in tables that are named in the connector’stable.include.list
property. - Schema data
- DDL statements that describe the structural changes that are applied to tables. Schema data is persisted to both the internal schema history topic, and to the connector’s schema change topic, if one is configured.
After you run an initial snapshot, you might notice that the snapshot captures schema information for tables that are not designated for capture. By default, initial snapshots are designed to capture schema information for every table that is present in the database, not only from tables that are designated for capture. Connectors require that the table’s schema is present in the schema history topic before they can capture a table. By enabling the initial snapshot to capture schema data for tables that are not part of the original capture set, Debezium prepares the connector to readily capture event data from these tables should that later become necessary. If the initial snapshot does not capture a table’s schema, you must add the schema to the history topic before the connector can capture data from the table.
In some cases, you might want to limit schema capture in the initial snapshot. This can be useful when you want to reduce the time required to complete a snapshot, or when Debezium connects to the database instance through a user account that has access to multiple logical databases, but you want the connector to capture changes only from tables in a specific logical database.
Additional information
- Capturing data from tables not captured by the initial snapshot (no schema change)
- Capturing data from tables not captured by the initial snapshot (schema change)
-
Setting the
schema.history.internal.store.only.captured.tables.ddl
property to specify the tables from which to capture schema information. -
Setting the
schema.history.internal.store.only.captured.databases.ddl
property to specify the logical databases from which to capture schema changes.
2.5.1.1.3. Capturing data from tables not captured by the initial snapshot (no schema change)
In some cases, you might want the connector to capture data from a table whose schema was not captured by the initial snapshot. Depending on the connector configuration, the initial snapshot might capture the table schema only for specific tables in the database. If the table schema is not present in the history topic, the connector fails to capture the table, and reports a missing schema error.
You might still be able to capture data from the table, but you must perform additional steps to add the table schema.
Prerequisites
- You want to capture data from a table with a schema that the connector did not capture during the initial snapshot.
- All entries for the table in the transaction log use the same schema. For information about capturing data from a new table that has undergone structural changes, see Section 2.5.1.1.4, “Capturing data from tables not captured by the initial snapshot (schema change)”.
Procedure
- Stop the connector.
- Remove the internal database schema history topic that is specified by the schema.history.internal.kafka.topic property.
- In the connector configuration:
  - Set the snapshot.mode to schema_only_recovery.
  - (Optional) Set the value of schema.history.internal.store.only.captured.tables.ddl to false to ensure that in the future the connector can readily capture data for tables that are not currently designated for capture. Connectors can capture data from a table only if the table’s schema history is present in the history topic.
  - Add the tables that you want the connector to capture to table.include.list.
- Restart the connector. The snapshot recovery process rebuilds the schema history based on the current structure of the tables.
- (Optional) After the snapshot completes, initiate an incremental snapshot on the newly added tables. The incremental snapshot first streams the historical data of the newly added tables, and then resumes reading changes from the redo and archive logs for previously configured tables, including changes that occurred while the connector was off-line.
- (Optional) Reset the snapshot.mode back to schema_only to prevent the connector from initiating recovery after a future restart.
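As a rough sketch, the configuration changes that this procedure describes might resemble the following fragment; the table names are placeholders:
{
  "snapshot.mode": "schema_only_recovery",
  "schema.history.internal.store.only.captured.tables.ddl": "false",
  "table.include.list": "MYSCHEMA.EXISTING_TABLE,MYSCHEMA.NEW_TABLE"
}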
2.5.1.1.4. Capturing data from tables not captured by the initial snapshot (schema change)
If a schema change is applied to a table, records that are committed before the schema change have different structures than those that were committed after the change. When Debezium captures data from a table, it reads the schema history to ensure that it applies the correct schema to each event. If the schema is not present in the schema history topic, the connector is unable to capture the table, and an error results.
If you want to capture data from a table that was not captured by the initial snapshot, and the schema of the table was modified, you must add the schema to the history topic, if it is not already available. You can add the schema by running a new schema snapshot, or by running an initial snapshot for the table.
Prerequisites
- You want to capture data from a table with a schema that the connector did not capture during the initial snapshot.
- A schema change was applied to the table so that the records to be captured do not have a uniform structure.
Procedure
- Initial snapshot captured the schema for all tables (store.only.captured.tables.ddl was set to false)
  - Edit the table.include.list property to specify the tables that you want to capture.
  - Restart the connector.
  - Initiate an incremental snapshot if you want to capture existing data from the newly added tables.
- Initial snapshot did not capture the schema for all tables (store.only.captured.tables.ddl was set to true)
  If the initial snapshot did not save the schema of the table that you want to capture, complete one of the following procedures:
- Procedure 1: Schema snapshot, followed by incremental snapshot
In this procedure, the connector first performs a schema snapshot. You can then initiate an incremental snapshot to enable the connector to synchronize data.
- Stop the connector.
- Remove the internal database schema history topic that is specified by the schema.history.internal.kafka.topic property.
- Clear the offsets in the configured Kafka Connect offset.storage.topic. For more information about how to remove offsets, see the Debezium community FAQ.
  Warning: Removing offsets should be performed only by advanced users who have experience in manipulating internal Kafka Connect data. This operation is potentially destructive, and should be performed only as a last resort.
- Set values for properties in the connector configuration as described in the following steps:
  - Set the value of the snapshot.mode property to schema_only.
  - Edit the table.include.list to add the tables that you want to capture.
- Restart the connector.
- Wait for Debezium to capture the schema of the new and existing tables. Data changes that occurred in any tables while the connector was stopped are not captured.
- To ensure that no data is lost, initiate an incremental snapshot.
- Procedure 2: Initial snapshot, followed by optional incremental snapshot
In this procedure the connector performs a full initial snapshot of the database. As with any initial snapshot, in a database with many large tables, running an initial snapshot can be a time-consuming operation. After the snapshot completes, you can optionally trigger an incremental snapshot to capture any changes that occur while the connector is off-line.
- Stop the connector.
- Remove the internal database schema history topic that is specified by the schema.history.internal.kafka.topic property.
- Clear the offsets in the configured Kafka Connect offset.storage.topic. For more information about how to remove offsets, see the Debezium community FAQ.
  Warning: Removing offsets should be performed only by advanced users who have experience in manipulating internal Kafka Connect data. This operation is potentially destructive, and should be performed only as a last resort.
- Edit the table.include.list to add the tables that you want to capture.
- Set values for properties in the connector configuration as described in the following steps:
  - Set the value of the snapshot.mode property to initial.
  - (Optional) Set schema.history.internal.store.only.captured.tables.ddl to false.
- Restart the connector. The connector takes a full database snapshot. After the snapshot completes, the connector transitions to streaming.
- (Optional) To capture any data that changed while the connector was off-line, initiate an incremental snapshot.
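For reference, a sketch of the configuration that Procedure 2 describes might resemble the following fragment; the table names are placeholders:
{
  "snapshot.mode": "initial",
  "schema.history.internal.store.only.captured.tables.ddl": "false",
  "table.include.list": "MYSCHEMA.EXISTING_TABLE,MYSCHEMA.NEW_TABLE"
}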
2.5.1.2. Ad hoc snapshots
By default, a connector runs an initial snapshot operation only after it starts for the first time. Following this initial snapshot, under normal circumstances, the connector does not repeat the snapshot process. Any future change event data that the connector captures comes in through the streaming process only.
However, in some situations the data that the connector obtained during the initial snapshot might become stale, lost, or incomplete. To provide a mechanism for recapturing table data, Debezium includes an option to perform ad hoc snapshots. You might want to perform an ad hoc snapshot after any of the following changes occur in your Debezium environment:
- The connector configuration is modified to capture a different set of tables.
- Kafka topics are deleted and must be rebuilt.
- Data corruption occurs due to a configuration error or some other problem.
You can re-run a snapshot for a table for which you previously captured a snapshot by initiating a so-called ad-hoc snapshot. Ad hoc snapshots require the use of signaling tables. You initiate an ad hoc snapshot by sending a signal request to the Debezium signaling table.
When you initiate an ad hoc snapshot of an existing table, the connector appends content to the topic that already exists for the table. If a previously existing topic was removed, Debezium can create a topic automatically if automatic topic creation is enabled.
Ad hoc snapshot signals specify the tables to include in the snapshot. The snapshot can capture the entire contents of the database, or capture only a subset of the tables in the database. Also, the snapshot can capture a subset of the contents of the table(s) in the database.
You specify the tables to capture by sending an execute-snapshot
message to the signaling table. Set the type of the execute-snapshot
signal to incremental
or blocking
, and provide the names of the tables to include in the snapshot, as described in the following table:
Field | Default | Value |
---|---|---|
type | incremental | Specifies the type of snapshot that you want to run. |
data-collections | N/A | An array that contains regular expressions matching the fully-qualified names of the tables to include in the snapshot. |
additional-conditions | N/A | An optional array that specifies a set of additional conditions that the connector evaluates to determine the subset of records to include in a snapshot. |
surrogate-key | N/A | An optional string that specifies the column name that the connector uses as the primary key of a table during the snapshot process. |
Triggering an ad hoc incremental snapshot
You initiate an ad hoc incremental snapshot by adding an entry with the execute-snapshot
signal type to the signaling table, or by sending a signal message to a Kafka signaling topic. After the connector processes the message, it begins the snapshot operation. The snapshot process reads the first and last primary key values and uses those values as the start and end point for each table. Based on the number of entries in the table, and the configured chunk size, Debezium divides the table into chunks, and proceeds to snapshot each chunk, in succession, one at a time.
For more information, see Incremental snapshots.
Triggering an ad hoc blocking snapshot
You initiate an ad hoc blocking snapshot by adding an entry with the execute-snapshot
signal type to the signaling table or signaling topic. After the connector processes the message, it begins the snapshot operation. The connector temporarily stops streaming, and then initiates a snapshot of the specified table, following the same process that it uses during an initial snapshot. After the snapshot completes, the connector resumes streaming.
For more information, see Blocking snapshots.
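For illustration, a blocking ad hoc snapshot request sent through the Kafka signaling channel might take the following form; the message key and table name are placeholders, and the message format mirrors the incremental examples shown later in this section:
Key = `test_connector` Value = `{"type":"execute-snapshot","data": {"data-collections": ["db1.schema1.table1"], "type": "BLOCKING"}}`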
2.5.1.3. Incremental snapshots
To provide flexibility in managing snapshots, Debezium includes a supplementary snapshot mechanism, known as incremental snapshotting. Incremental snapshots rely on the Debezium mechanism for sending signals to a Debezium connector.
In an incremental snapshot, instead of capturing the full state of a database all at once, as in an initial snapshot, Debezium captures each table in phases, in a series of configurable chunks. You can specify the tables that you want the snapshot to capture and the size of each chunk. The chunk size determines the number of rows that the snapshot collects during each fetch operation on the database. The default chunk size for incremental snapshots is 1024 rows.
As an incremental snapshot proceeds, Debezium uses watermarks to track its progress, maintaining a record of each table row that it captures. This phased approach to capturing data provides the following advantages over the standard initial snapshot process:
- You can run incremental snapshots in parallel with streamed data capture, instead of postponing streaming until the snapshot completes. The connector continues to capture near real-time events from the change log throughout the snapshot process, and neither operation blocks the other.
- If the progress of an incremental snapshot is interrupted, you can resume it without losing any data. After the process resumes, the snapshot begins at the point where it stopped, rather than recapturing the table from the beginning.
-
You can run an incremental snapshot on demand at any time, and repeat the process as needed to adapt to database updates. For example, you might re-run a snapshot after you modify the connector configuration to add a table to its
table.include.list
property.
Incremental snapshot process
When you run an incremental snapshot, Debezium sorts each table by primary key and then splits the table into chunks based on the configured chunk size. Working chunk by chunk, it then captures each table row in a chunk. For each row that it captures, the snapshot emits a READ
event. That event represents the value of the row when the snapshot for the chunk began.
As a snapshot proceeds, it’s likely that other processes continue to access the database, potentially modifying table records. To reflect such changes, INSERT
, UPDATE
, or DELETE
operations are committed to the transaction log as per usual. Similarly, the ongoing Debezium streaming process continues to detect these change events and emits corresponding change event records to Kafka.
How Debezium resolves collisions among records with the same primary key
In some cases, the UPDATE
or DELETE
events that the streaming process emits are received out of sequence. That is, the streaming process might emit an event that modifies a table row before the snapshot captures the chunk that contains the READ
event for that row. When the snapshot eventually emits the corresponding READ
event for the row, its value is already superseded. To ensure that incremental snapshot events that arrive out of sequence are processed in the correct logical order, Debezium employs a buffering scheme for resolving collisions. Only after collisions between the snapshot events and the streamed events are resolved does Debezium emit an event record to Kafka.
Snapshot window
To assist in resolving collisions between late-arriving READ
events and streamed events that modify the same table row, Debezium employs a so-called snapshot window. The snapshot window demarcates the interval during which an incremental snapshot captures data for a specified table chunk. Before the snapshot window for a chunk opens, Debezium follows its usual behavior and emits events from the transaction log directly downstream to the target Kafka topic. But from the moment that the snapshot for a particular chunk opens, until it closes, Debezium performs a de-duplication step to resolve collisions between events that have the same primary key..
For each data collection, Debezium emits two types of events, and stores the records for them both in a single destination Kafka topic. The snapshot records that it captures directly from a table are emitted as READ
operations. Meanwhile, as users continue to update records in the data collection, and the transaction log is updated to reflect each commit, Debezium emits UPDATE
or DELETE
operations for each change.
As the snapshot window opens, and Debezium begins processing a snapshot chunk, it delivers snapshot records to a memory buffer. During the snapshot window, the primary keys of the READ
events in the buffer are compared to the primary keys of the incoming streamed events. If no match is found, the streamed event record is sent directly to Kafka. If Debezium detects a match, it discards the buffered READ
event, and writes the streamed record to the destination topic, because the streamed event logically supersedes the static snapshot event. After the snapshot window for the chunk closes, the buffer contains only READ
events for which no related transaction log events exist. Debezium emits these remaining READ
events to the table’s Kafka topic.
The connector repeats the process for each snapshot chunk.
Currently, you can use either of the following methods to initiate an incremental snapshot:
- Send an ad hoc snapshot signal to the signaling table on the source database.
- Send a signal message to the configured Kafka signaling topic.
The Debezium connector for Oracle does not support schema changes while an incremental snapshot is running.
2.5.1.3.1. Triggering an incremental snapshot
To initiate an incremental snapshot, you can send an ad hoc snapshot signal to the signaling table on the source database. You submit snapshot signals as SQL INSERT
queries.
After Debezium detects the change in the signaling table, it reads the signal, and runs the requested snapshot operation.
The query that you submit specifies the tables to include in the snapshot, and, optionally, specifies the type of snapshot operation. Debezium currently supports the incremental
and blocking
snapshot types.
To specify the tables to include in the snapshot, provide a data-collections
array that lists the tables, or an array of regular expressions used to match tables, for example,
{"data-collections": ["public.MyFirstTable", "public.MySecondTable"]}
The data-collections
array for an incremental snapshot signal has no default value. If the data-collections
array is empty, Debezium interprets the empty array to mean that no action is required, and it does not perform a snapshot.
If the name of a table that you want to include in a snapshot contains a dot (.
), a space, or some other non-alphanumeric character, you must escape the table name in double quotes.
For example, to include a table that exists in the public
schema in the db1
database, and that has the name My.Table
, use the following format: "db1.public.\"My.Table\""
.
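Shown in context, the escaped name appears in the data-collections array of the signal data field, as in the following brief example:
{"data-collections": ["db1.public.\"My.Table\""], "type": "incremental"}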
Prerequisites
- A signaling data collection exists on the source database.
- The signaling data collection is specified in the signal.data.collection property.
Using a source signaling channel to trigger an incremental snapshot
Send a SQL query to add the ad hoc incremental snapshot request to the signaling table:
INSERT INTO <signalTable> (id, type, data) VALUES ('<id>', '<snapshotType>', '{"data-collections": ["<fullyQualfiedTableName>","<fullyQualfiedTableName>"],"type":"<snapshotType>","additional-conditions":[{"data-collection": "<fullyQualfiedTableName>", "filter": "<additional-condition>"}]}');
For example,
INSERT INTO db1.myschema.debezium_signal (id, type, data) 1 values ('ad-hoc-1', 2 'execute-snapshot', 3 '{"data-collections": ["db1.schema1.table1", "db1.schema1.table2"], 4 "type":"incremental", 5 "additional-conditions":[{"data-collection": "db1.schema1.table1" ,"filter":"color=\'blue\'"}]}'); 6
The values of the id, type, and data parameters in the command correspond to the fields of the signaling table.
The following table describes the parameters in the example:
Table 2.107. Descriptions of fields in a SQL command for sending an incremental snapshot signal to the signaling table
Item | Value | Description |
---|---|---|
1 | database.schema.debezium_signal | Specifies the fully-qualified name of the signaling table on the source database. |
2 | ad-hoc-1 | The id parameter specifies an arbitrary string that is assigned as the id identifier for the signal request. Use this string to identify logging messages to entries in the signaling table. Debezium does not use this string. Rather, during the snapshot, Debezium generates its own id string as a watermarking signal. |
3 | execute-snapshot | The type parameter specifies the operation that the signal is intended to trigger. |
4 | data-collections | A required component of the data field of a signal that specifies an array of table names or regular expressions to match table names to include in the snapshot. The array lists regular expressions that use the format database.schema.table to match the fully-qualified names of the tables. This format is the same as the one that you use to specify the name of the connector’s signaling table. |
5 | incremental | An optional type component of the data field of a signal that specifies the type of snapshot operation to run. Valid values are incremental and blocking. If you do not specify a value, the connector defaults to performing an incremental snapshot. |
6 | additional-conditions | An optional array that specifies a set of additional conditions that the connector evaluates to determine the subset of records to include in a snapshot. Each additional condition is an object with data-collection and filter properties. You can specify different filters for each data collection. The data-collection property is the fully-qualified name of the data collection that the filter applies to. For more information about the additional-conditions parameter, see Section 2.5.1.3.2, “Running an ad hoc incremental snapshot with additional-conditions”. |
2.5.1.3.2. Running an ad hoc incremental snapshot with additional-conditions
If you want a snapshot to include only a subset of the content in a table, you can modify the signal request by appending an additional-conditions
parameter to the snapshot signal.
The SQL query for a typical snapshot takes the following form:
SELECT * FROM <tableName> ....
By adding an additional-conditions
parameter, you append a WHERE
condition to the SQL query, as in the following example:
SELECT * FROM <data-collection> WHERE <filter> ....
The following example shows a SQL query to send an ad hoc incremental snapshot request with an additional condition to the signaling table:
INSERT INTO <signalTable> (id, type, data) VALUES ('<id>', '<snapshotType>', '{"data-collections": ["<fullyQualfiedTableName>","<fullyQualfiedTableName>"],"type":"<snapshotType>","additional-conditions":[{"data-collection": "<fullyQualfiedTableName>", "filter": "<additional-condition>"}]}');
For example, suppose you have a products
table that contains the following columns:
- id (primary key)
- color
- quantity
If you want an incremental snapshot of the products
table to include only the data items where color=blue
, you can use the following SQL statement to trigger the snapshot:
INSERT INTO db1.myschema.debezium_signal (id, type, data) VALUES('ad-hoc-1', 'execute-snapshot', '{"data-collections": ["db1.schema1.products"],"type":"incremental", "additional-conditions":[{"data-collection": "db1.schema1.products", "filter": "color=blue"}]}');
The additional-conditions
parameter also enables you to pass conditions that are based on more than one column. For example, using the products
table from the previous example, you can submit a query that triggers an incremental snapshot that includes the data of only those items for which color=blue
and quantity>10
:
INSERT INTO db1.myschema.debezium_signal (id, type, data) VALUES('ad-hoc-1', 'execute-snapshot', '{"data-collections": ["db1.schema1.products"],"type":"incremental", "additional-conditions":[{"data-collection": "db1.schema1.products", "filter": "color=blue AND quantity>10"}]}');
The following example shows the JSON for an incremental snapshot event that is captured by a connector.
Example 2.32. Incremental snapshot event message
{ "before":null, "after": { "pk":"1", "value":"New data" }, "source": { ... "snapshot":"incremental" 1 }, "op":"r", 2 "ts_ms":"1620393591654", "ts_us":"1620393591654547", "ts_ns":"1620393591654547920", "transaction":null }
Item | Field name | Description |
---|---|---|
1 | snapshot | Specifies the type of snapshot operation to run. |
2 | op | Specifies the event type. |
2.5.1.3.3. Using the Kafka signaling channel to trigger an incremental snapshot
You can send a message to the configured Kafka topic to request the connector to run an ad hoc incremental snapshot.
The key of the Kafka message must match the value of the topic.prefix
connector configuration option.
The value of the message is a JSON object with type
and data
fields.
The signal type is execute-snapshot
, and the data
field must have the following fields:
Field | Default | Value |
---|---|---|
type | incremental | The type of the snapshot to be executed. Currently Debezium supports the incremental and blocking types. |
data-collections | N/A | An array of comma-separated regular expressions that match the fully-qualified names of tables to include in the snapshot. |
additional-conditions | N/A | An optional array of additional conditions that specifies criteria that the connector evaluates to designate a subset of records to include in a snapshot. |
Example 2.33. An execute-snapshot
Kafka message
Key = `test_connector` Value = `{"type":"execute-snapshot","data": {"data-collections": ["{collection-container}.table1", "{collection-container}.table2"], "type": "INCREMENTAL"}}`
Ad hoc incremental snapshots with additional-conditions
Debezium uses the additional-conditions
field to select a subset of a table’s content.
Typically, when Debezium runs a snapshot, it runs a SQL query such as:
SELECT * FROM <tableName> ….
When the snapshot request includes an additional-conditions
property, the data-collection
and filter
parameters of the property are appended to the SQL query, for example:
SELECT * FROM <data-collection> WHERE <filter> ….
For example, given a products
table with the columns id
(primary key), color
, and brand
, if you want a snapshot to include only content for which color='blue'
, when you request the snapshot, you could add the additional-conditions
property to filter the content:
Key = `test_connector` Value = `{"type":"execute-snapshot","data": {"data-collections": ["db1.schema1.products"], "type": "INCREMENTAL", "additional-conditions": [{"data-collection": "db1.schema1.products" ,"filter":"color='blue'"}]}}`
You can also use the additional-conditions
property to pass conditions based on multiple columns. For example, using the same products
table as in the previous example, if you want a snapshot to include only the content from the products
table for which color='blue'
, and brand='MyBrand'
, you could send the following request:
Key = `test_connector` Value = `{"type":"execute-snapshot","data": {"data-collections": ["db1.schema1.products"], "type": "INCREMENTAL", "additional-conditions": [{"data-collection": "db1.schema1.products" ,"filter":"color='blue' AND brand='MyBrand'"}]}}`
2.5.1.3.4. Stopping an incremental snapshot
In some situations, it might be necessary to stop an incremental snapshot. For example, you might realize that the snapshot was not configured correctly, or maybe you want to ensure that resources are available for other database operations. You can stop a snapshot that is already running by sending a signal to the signaling table on the source database.
You submit a stop snapshot signal to the signaling table by sending it in a SQL INSERT
query. The stop-snapshot signal specifies the type
of the snapshot operation as incremental
, and optionally specifies the tables that you want to omit from the currently running snapshot. After Debezium detects the change in the signaling table, it reads the signal, and stops the incremental snapshot operation if it’s in progress.
Additional resources
You can also stop an incremental snapshot by sending a JSON message to the Kafka signaling topic.
Prerequisites
- A signaling data collection exists on the source database.
- The signaling data collection is specified in the signal.data.collection property.
Using a source signaling channel to stop an incremental snapshot
Send a SQL query to stop the ad hoc incremental snapshot to the signaling table:
INSERT INTO <signalTable> (id, type, data) values ('<id>', 'stop-snapshot', '{"data-collections": ["<fullyQualfiedTableName>","<fullyQualfiedTableName>"],"type":"incremental"}');
For example,
INSERT INTO db1.myschema.debezium_signal (id, type, data) 1 values ('ad-hoc-1', 2 'stop-snapshot', 3 '{"data-collections": ["db1.schema1.table1", "db1.schema1.table2"], 4 "type":"incremental"}'); 5
The values of the id, type, and data parameters in the signal command correspond to the fields of the signaling table.
The following table describes the parameters in the example:
Table 2.110. Descriptions of fields in a SQL command for sending a stop incremental snapshot signal to the signaling table
Item | Value | Description |
---|---|---|
1 | database.schema.debezium_signal | Specifies the fully-qualified name of the signaling table on the source database. |
2 | ad-hoc-1 | The id parameter specifies an arbitrary string that is assigned as the id identifier for the signal request. Use this string to identify logging messages to entries in the signaling table. Debezium does not use this string. |
3 | stop-snapshot | The type parameter specifies the operation that the signal is intended to trigger. |
4 | data-collections | An optional component of the data field of a signal that specifies an array of table names or regular expressions to match table names to remove from the snapshot. The array lists regular expressions that match tables by their fully-qualified names in the format database.schema.table. If you omit this component from the data field, the signal stops the entire incremental snapshot that is in progress. |
5 | incremental | A required component of the data field of a signal that specifies the type of snapshot operation that is to be stopped. Currently, the only valid option is incremental. If you do not specify a type value, the signal fails to stop the incremental snapshot. |
2.5.1.3.5. Using the Kafka signaling channel to stop an incremental snapshot
You can send a signal message to the configured Kafka signaling topic to stop an ad hoc incremental snapshot.
The key of the Kafka message must match the value of the topic.prefix
connector configuration option.
The value of the message is a JSON object with type
and data
fields.
The signal type is stop-snapshot
, and the data
field must have the following fields:
Field | Default | Value |
---|---|---|
type | incremental | The type of the snapshot to be executed. Currently Debezium supports only the incremental type. |
data-collections | N/A | An optional array of comma-separated regular expressions that match the fully-qualified names of the tables to remove from the snapshot. |
The following example shows a typical stop-snapshot
Kafka message:
Key = `test_connector` Value = `{"type":"stop-snapshot","data": {"data-collections": ["db1.schema1.table1", "db1.schema1.table2"], "type": "INCREMENTAL"}}`
2.5.1.4. Blocking snapshots
To provide more flexibility in managing snapshots, Debezium includes a supplementary ad hoc snapshot mechanism, known as a blocking snapshot. Blocking snapshots rely on the Debezium mechanism for sending signals to a Debezium connector.
A blocking snapshot behaves just like an initial snapshot, except that you can trigger it at run time.
You might want to run a blocking snapshot rather than use the standard initial snapshot process in the following situations:
- You add a new table and you want to complete the snapshot while the connector is running.
- You add a large table, and you want the snapshot to complete in less time than is possible with an incremental snapshot.
Blocking snapshot process
When you run a blocking snapshot, Debezium stops streaming, and then initiates a snapshot of the specified table, following the same process that it uses during an initial snapshot. After the snapshot completes, the streaming is resumed.
Configure snapshot
You can set the following properties in the data
component of a signal:
- data-collections: Specifies which tables to snapshot.
- additional-conditions: You can specify different filters for different tables.
  - The data-collection property is the fully-qualified name of the table to which the filter applies.
  - The filter property has the same value that is used in the snapshot.select.statement.overrides property.
For example:
{"type": "blocking", "data-collections": ["schema1.table1", "schema1.table2"], "additional-conditions": [{"data-collection": "schema1.table1", "filter": "SELECT * FROM [schema1].[table1] WHERE column1 = 0 ORDER BY column2 DESC"}, {"data-collection": "schema1.table2", "filter": "SELECT * FROM [schema1].[table2] WHERE column2 > 0"}]}
Possible duplicates
A delay might exist between the time that you send the signal to trigger the snapshot, and the time when streaming stops and the snapshot starts. As a result of this delay, after the snapshot completes, the connector might emit some event records that duplicate records captured by the snapshot.
2.5.1.5. Default names of Kafka topics that receive Debezium Oracle change event records
By default, the Oracle connector writes change events for all INSERT
, UPDATE
, and DELETE
operations that occur in a table to a single Apache Kafka topic that is specific to that table. The connector uses the following convention to name change event topics:
topicPrefix.schemaName.tableName
The following list provides definitions for the components of the default name:
- topicPrefix
-
The topic prefix as specified by the
topic.prefix
connector configuration property. - schemaName
- The name of the schema in which the operation occurred.
- tableName
- The name of the table in which the operation occurred.
For example, if fulfillment
is the server name, inventory
is the schema name, and the database contains tables with the names orders
, customers
, and products
, the Debezium Oracle connector emits events to the following Kafka topics, one for each table in the database:
fulfillment.inventory.orders fulfillment.inventory.customers fulfillment.inventory.products
The connector applies similar naming conventions to label its internal database schema history topics, schema change topics, and transaction metadata topics.
If the default topic names do not meet your requirements, you can configure custom topic names. To configure custom topic names, you specify regular expressions in the logical topic routing SMT. For more information about using the logical topic routing SMT to customize topic naming, see Topic routing.
2.5.1.6. How Debezium Oracle connectors handle database schema changes
When a database client queries a database, the client uses the database’s current schema. However, the database schema can be changed at any time, which means that the connector must be able to identify what the schema was at the time each insert, update, or delete operation was recorded. Also, a connector cannot necessarily apply the current schema to every event. If an event is relatively old, it’s possible that it was recorded before the current schema was applied.
To ensure correct processing of events that occur after a schema change, Oracle includes in the redo log not only the row-level changes that affect the data, but also the DDL statements that are applied to the database. As the connector encounters these DDL statements in the redo log, it parses them and updates an in-memory representation of each table’s schema. The connector uses this schema representation to identify the structure of the tables at the time of each insert, update, or delete operation and to produce the appropriate change event. In a separate database schema history Kafka topic, the connector records all DDL statements along with the position in the redo log where each DDL statement appeared.
When the connector restarts after either a crash or a graceful stop, it starts reading the redo log from a specific position, that is, from a specific point in time. The connector rebuilds the table structures that existed at this point in time by reading the database schema history Kafka topic and parsing all DDL statements up to the point in the redo log where the connector is starting.
The database schema history topic is for internal connector use only. Optionally, the connector can also emit schema change events to a different topic that is intended for consumer applications.
Additional resources
- Default names for topics that receive Debezium event records.
2.5.1.7. How Debezium Oracle connectors expose database schema changes
You can configure a Debezium Oracle connector to produce schema change events that describe structural changes that are applied to tables in the database. The connector writes schema change events to a Kafka topic named <serverName>
, where serverName
is the namespace that is specified in the topic.prefix
configuration property.
Debezium emits a new message to the schema change topic whenever it streams data from a new table, or when the structure of the table is altered.
Messages that the connector sends to the schema change topic contain a payload, and, optionally, also contain the schema of the change event message.
The schema for the schema change event has the following elements:
name
- The name of the schema change event message.
type
- The type of the change event message.
version
- The version of the schema. The version is an integer that is incremented each time the schema is changed.
fields
- The fields that are included in the change event message.
Example: Schema of the Oracle connector schema change topic
The following example shows a typical schema in JSON format.
{ "schema": { "type": "struct", "fields": [ { "type": "string", "optional": false, "field": "databaseName" } ], "optional": false, "name": "io.debezium.connector.oracle.SchemaChangeKey", "version": 1 }, "payload": { "databaseName": "inventory" } }
The payload of a schema change event message includes the following elements:
ddl
- Provides the SQL CREATE, ALTER, or DROP statement that results in the schema change.
databaseName
- The name of the database to which the statements are applied. The value of databaseName serves as the message key.
tableChanges
- A structured representation of the entire table schema after the schema change. The tableChanges field contains an array that includes entries for each column of the table. Because the structured representation presents data in JSON or Avro format, consumers can easily read messages without first processing them through a DDL parser.
By default, the connector uses the ALL_TABLES
database view to identify the table names to store in the schema history topic. Within that view, the connector can access data only from tables that are available to the user account through which it connects to the database.
You can modify settings so that the schema history topic stores a different subset of tables. Use one of the following methods to alter the set of tables that the topic stores:
- Change the permissions of the account that Debezium uses to access the database so that a different set of tables are visible in the ALL_TABLES view.
- Set the connector property schema.history.internal.store.only.captured.tables.ddl to true.
When the connector is configured to capture a table, it stores the history of the table’s schema changes not only in the schema change topic, but also in an internal database schema history topic. The internal database schema history topic is for connector use only and it is not intended for direct use by consuming applications. Ensure that applications that require notifications about schema changes consume that information only from the schema change topic.
Never partition the database schema history topic. For the database schema history topic to function correctly, it must maintain a consistent, global order of the event records that the connector emits to it.
To ensure that the topic is not split among partitions, set the partition count for the topic by using one of the following methods:
- If you create the database schema history topic manually, specify a partition count of 1.
- If you use the Apache Kafka broker to create the database schema history topic automatically, set the value of the Kafka num.partitions configuration option to 1.
Example: Message emitted to the Oracle connector schema change topic
The following example shows a typical schema change message in JSON format. The message contains a logical representation of the table schema.
{ "schema": { ... }, "payload": { "source": { "version": "2.7.3.Final", "connector": "oracle", "name": "server1", "ts_ms": 1588252618953, "ts_us": 1588252618953000, "ts_ns": 1588252618953000000, "snapshot": "true", "db": "ORCLPDB1", "schema": "DEBEZIUM", "table": "CUSTOMERS", "txId" : null, "scn" : "1513734", "commit_scn": "1513754", "lcr_position" : null, "rs_id": "001234.00012345.0124", "ssn": 1, "redo_thread": 1, "user_name": "user", "row_id": "AAASgjAAMAAAACnAAA" }, "ts_ms": 1588252618953, 1 "ts_us": 1588252618953987, 2 "ts_ns": 1588252618953987512, 3 "databaseName": "ORCLPDB1", 4 "schemaName": "DEBEZIUM", // "ddl": "CREATE TABLE \"DEBEZIUM\".\"CUSTOMERS\" \n ( \"ID\" NUMBER(9,0) NOT NULL ENABLE, \n \"FIRST_NAME\" VARCHAR2(255), \n \"LAST_NAME" VARCHAR2(255), \n \"EMAIL\" VARCHAR2(255), \n PRIMARY KEY (\"ID\") ENABLE, \n SUPPLEMENTAL LOG DATA (ALL) COLUMNS\n ) SEGMENT CREATION IMMEDIATE \n PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 \n NOCOMPRESS LOGGING\n STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645\n PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1\n BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)\n TABLESPACE \"USERS\" ", 5 "tableChanges": [ 6 { "type": "CREATE", 7 "id": "\"ORCLPDB1\".\"DEBEZIUM\".\"CUSTOMERS\"", 8 "table": { 9 "defaultCharsetName": null, "primaryKeyColumnNames": [ 10 "ID" ], "columns": [ 11 { "name": "ID", "jdbcType": 2, "nativeType": null, "typeName": "NUMBER", "typeExpression": "NUMBER", "charsetName": null, "length": 9, "scale": 0, "position": 1, "optional": false, "autoIncremented": false, "generated": false }, { "name": "FIRST_NAME", "jdbcType": 12, "nativeType": null, "typeName": "VARCHAR2", "typeExpression": "VARCHAR2", "charsetName": null, "length": 255, "scale": null, "position": 2, "optional": false, "autoIncremented": false, "generated": false }, { "name": "LAST_NAME", "jdbcType": 12, "nativeType": null, "typeName": "VARCHAR2", "typeExpression": "VARCHAR2", "charsetName": null, "length": 255, "scale": null, "position": 3, "optional": false, "autoIncremented": false, "generated": false }, { "name": "EMAIL", "jdbcType": 12, "nativeType": null, "typeName": "VARCHAR2", "typeExpression": "VARCHAR2", "charsetName": null, "length": 255, "scale": null, "position": 4, "optional": false, "autoIncremented": false, "generated": false } ], "attributes": [ 12 { "customAttribute": "attributeValue" } ] } } ] } }
Item | Field name | Description |
---|---|---|
1 | ts_ms | Optional field that displays the time at which the connector processed the event. The time is based on the system clock in the JVM running the Kafka Connect task. In the source object, ts_ms indicates the time that the change was made in the database. By comparing the value for payload.source.ts_ms with the value for payload.ts_ms, you can determine the lag between the source database update and Debezium. |
2 | databaseName, schemaName | Identifies the database and the schema that contains the change. |
3 | ddl | This field contains the DDL that is responsible for the schema change. |
4 | tableChanges | An array of one or more items that contain the schema changes generated by a DDL command. |
5 | type | Describes the kind of change. The value is one of the following: CREATE, ALTER, or DROP. |
6 | id | Full identifier of the table that was created, altered, or dropped. In the case of a table rename, this identifier is a concatenation of the old and new table names. |
7 | table | Represents table metadata after the applied change. |
8 | primaryKeyColumnNames | List of columns that compose the table’s primary key. |
9 | columns | Metadata for each column in the changed table. |
10 | attributes | Custom attribute metadata for each table change. |
In messages that the connector sends to the schema change topic, the message key is the name of the database that contains the schema change. In the following example, the payload
field contains the databaseName
key:
{ "schema": { "type": "struct", "fields": [ { "type": "string", "optional": false, "field": "databaseName" } ], "optional": false, "name": "io.debezium.connector.oracle.SchemaChangeKey", "version": 1 }, "payload": { "databaseName": "ORCLPDB1" } }
2.5.1.8. Debezium Oracle connector-generated events that represent transaction boundaries
Debezium can generate events that represent transaction metadata boundaries and that enrich data change event messages.
Debezium registers and receives metadata only for transactions that occur after you deploy the connector. Metadata for transactions that occur before you deploy the connector is not available.
Database transactions are represented by a statement block that is enclosed between the BEGIN
and END
keywords. Debezium generates transaction boundary events for the BEGIN
and END
delimiters in every transaction. Transaction boundary events contain the following fields:
status
- BEGIN or END.
id
- String representation of the unique transaction identifier.
ts_ms
- The time of a transaction boundary event (BEGIN or END event) at the data source. If the data source does not provide Debezium with the event time, then the field instead represents the time at which Debezium processes the event.
event_count (for END events)
- Total number of events emitted by the transaction.
data_collections (for END events)
- An array of pairs of data_collection and event_count elements that indicates the number of events that the connector emits for changes that originate from a data collection.
The following example shows a typical transaction boundary message:
Example: Oracle connector transaction boundary event
{ "status": "BEGIN", "id": "5.6.641", "ts_ms": 1486500577125, "event_count": null, "data_collections": null } { "status": "END", "id": "5.6.641", "ts_ms": 1486500577691, "event_count": 2, "data_collections": [ { "data_collection": "ORCLPDB1.DEBEZIUM.CUSTOMER", "event_count": 1 }, { "data_collection": "ORCLPDB1.DEBEZIUM.ORDER", "event_count": 1 } ] }
Unless overridden via the topic.transaction
option, the connector emits transaction events to the <topic.prefix>
.transaction
topic.
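Transaction metadata events are not emitted by default. The following fragment is a hedged sketch of how they are typically enabled through the standard Debezium provide.transaction.metadata property; verify the property name against the connector property reference for your version:
{
  "provide.transaction.metadata": "true"
}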
2.5.1.8.1. How the Debezium Oracle connector enriches change event messages with transaction metadata
When transaction metadata is enabled, the data message Envelope
is enriched with a new transaction
field. This field provides information about every event in the form of a composite of fields:
id
- String representation of unique transaction identifier.
total_order
- The absolute position of the event among all events generated by the transaction.
data_collection_order
- The per-data collection position of the event among all events that were emitted by the transaction.
The following example shows a typical transaction event message:
{ "before": null, "after": { "pk": "2", "aa": "1" }, "source": { ... }, "op": "c", "ts_ms": "1580390884335", "ts_us": "1580390884335741", "ts_ns": "1580390884335741963", "transaction": { "id": "5.6.641", "total_order": "1", "data_collection_order": "1" } }
LogMiner Mining Strategies
Entries in the Oracle redo logs do not store the original SQL statements that users submit to make DML changes. Instead, a redo entry holds a set of change vectors and a set of object identifiers that represent the tablespace, table, and columns related to these vectors. In other words, redo log entries don’t include the names of the schemas, tables, or columns affected by DML changes.
The Debezium Oracle connector uses the log.mining.strategy
configuration property to control how Oracle LogMiner handles the lookup of the object identifiers in the change vectors. In certain situations, one log mining strategy might prove more reliable than another with regard to schema changes. However, before you choose a log mining strategy, it’s important to consider the implications it might have on performance and overhead.
Writing the data dictionary to redo logs
The default mining strategy is called redo_log_catalog
. In this strategy, the database flushes a copy of the data dictionary to the redo logs immediately after each redo log switch. This is the most reliable strategy for tracking schema changes that are interwoven with data changes, because Oracle LogMiner has a way to interpolate between the starting and ending data dictionary states across a series of change vectors.
However, the redo_log_catalog
mode is also the most expensive, because it requires several key steps to function. First, this mode requires the data dictionary to be flushed to the redo logs after every log switch. Flushing the logs after each switch can quickly consume valuable space in the archive log, and the high volume of archive logs might exceed the number that database administrators prepared for. If you intend to use this mode, coordinate with your database administrators to ensure that the database is configured appropriately.
If you configure the connector to use the redo_log_catalog
mode, do not use multiple Debezium Oracle connectors to capture changes from the same logical database.
Using the online catalog directly
The next strategy mode, online_catalog
, works differently from the redo_log_catalog
mode. When the strategy is set to online_catalog
, the database never flushes the data dictionary to the redo logs. Instead, Oracle LogMiner always uses the most current data dictionary state to perform comparisons. By always using the current dictionary, and eliminating flushing to the redo logs, this strategy requires less overhead, and operates more efficiently. However, these benefits are offset by the inability to parse interwoven schema changes and data changes. As a result, this strategy can sometimes result in event failures.
If LogMiner was unable to reconstruct the SQL reliably after a schema change, check the redo logs for evidence. Look for references to tables with names like OBJ# 123456
(where the number is the table’s object identifier), or for columns with names like COL1
or COL2
. When you configure the connector to use the online_catalog
strategy, take steps to ensure that the table schema and its indices remain static and free from change. If the Debezium connector is configured to use the online_catalog
mode, and you must apply a schema change, perform the following steps:
- Wait for the connector to capture all existing data changes (DML).
- Perform the schema (DDL) change, and then wait for the connector to capture the change.
- Resume data changes (DML) on the table.
Following this procedure helps to ensure that Oracle LogMiner can safely reconstruct the SQL for all data changes.
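A minimal configuration sketch that selects the mining strategy might look like the following; redo_log_catalog is the default described above, so set the property only if you want the alternative behavior:
{
  "log.mining.strategy": "online_catalog"
}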
Query Modes
The Debezium Oracle connector integrates with Oracle LogMiner by default. This integration requires a specialized set of steps which includes generating a complex JDBC SQL query to ingest the changes recorded in the transaction logs as change events. The V$LOGMNR_CONTENTS
view used by the JDBC SQL query does not have any indices to improve the query’s performance, and so there are different query modes that can be used that control how the SQL query is generated as a way to improve the query’s execution.
The log.mining.query.filter.mode
connector property can be configured with one of the following to influence how the JDBC SQL query is generated:
none
-
(Default) This mode creates a JDBC query that only filters based on the different operation types, such as inserts, updates, or deletes, at the database level. When filtering the data based on the schema, table, or username include/exclude lists, this is done during the processing loop within the connector.
This mode is often useful when capturing a small number of tables from a database that is not heavily saturated with changes. The generated query is quite simple, and focuses primarily on reading as quickly as possible with low database overhead.
in
-
This mode creates a JDBC query that filters not only operation types at the database level, but also schema, table, and username include/exclude lists. The query’s predicates are generated using a SQL in-clause based on the values specified in the include/exclude list configuration properties.
This mode is often useful when capturing a large number of tables from a database that is heavily saturated with changes. The generated query is much more complex than thenone
mode, and focuses on reducing network overhead and performing as much filtering at the database level as possible.
Finally, when you use this mode, do not specify regular expressions as part of the schema and table include/exclude configuration properties. Using regular expressions causes the connector to not match changes based on these configuration properties, causing changes to be missed.
regex
-
This mode creates a JDBC query that filters not only operation types at the database level, but also schema, table, and username include/exclude lists. However, unlike the
in
mode, this mode generates a SQL query using the OracleREGEXP_LIKE
operator using a conjunction or disjunction depending on whether include or excluded values are specified.
This mode is often useful when capturing a variable number of tables that can be identified using a small number of regular expressions. The generated query is much more complex than any other mode, and focuses on reducing network overhead and performing as much filtering at the database level as possible.
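For example, a hedged configuration sketch that pushes filtering into the generated LogMiner query by using the in mode might look like the following; the schema and table lists are illustrative, and as noted above this mode expects literal names rather than regular expressions:
{
  "log.mining.query.filter.mode": "in",
  "schema.include.list": "DEBEZIUM",
  "table.include.list": "DEBEZIUM.CUSTOMERS,DEBEZIUM.ORDERS"
}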
2.5.1.9. How the Debezium Oracle connector uses event buffering
Oracle writes all changes to the redo logs in the order in which they occur, including changes that are later discarded by a rollback. As a result, concurrent changes from separate transactions are intertwined. When the connector first reads the stream of changes, because it cannot immediately determine which changes are committed or rolled back, it temporarily stores the change events in an internal buffer. After a change is committed, the connector writes the change event from the buffer to Kafka. The connector drops change events that are discarded by a rollback.
You can configure the buffering mechanism that the connector uses by setting the property log.mining.buffer.type
.
Heap
The default buffer type is configured using memory
. Under the default memory
setting, the connector uses the heap memory of the JVM process to allocate and manage buffered event records. If you use the memory
buffer setting, be sure that the amount of memory that you allocate to the Java process can accommodate long-running and large transactions in your environment.
2.5.1.10. How the Debezium Oracle connector detects gaps in SCN values
When the Debezium Oracle connector is configured to use LogMiner, it collects change events from Oracle by using a start and end range that is based on system change numbers (SCNs). The connector manages this range automatically, increasing or decreasing the range depending on whether the connector is able to stream changes in near real-time, or must process a backlog of changes due to the volume of large or bulk transactions in the database.
Under certain circumstances, the Oracle database advances the SCN by an unusually high amount, rather than increasing the SCN value at a constant rate. Such a jump in the SCN value can occur because of the way that a particular integration interacts with the database, or as a result of events such as hot backups.
The Debezium Oracle connector relies on the following configuration properties to detect the SCN gap and adjust the mining range.
log.mining.scn.gap.detection.gap.size.min
- Specifies the minimum gap size.
log.mining.scn.gap.detection.time.interval.max.ms
- Specifies the maximum time interval.
The connector first compares the difference in the number of changes between the current SCN and the highest SCN in the current mining range. If the difference between the current SCN value and the highest SCN value is greater than the minimum gap size, then the connector has potentially detected a SCN gap. To confirm whether a gap exists, the connector next compares the timestamps of the current SCN and the SCN at the end of the previous mining range. If the difference between the timestamps is less than the maximum time interval, then the existence of an SCN gap is confirmed.
When an SCN gap occurs, the Debezium connector automatically uses the current SCN as the end point for the range of the current mining session. This allows the connector to quickly catch up to the real-time events without mining smaller ranges in between that return no changes because the SCN value was increased by an unexpectedly large number. When the connector performs the preceding steps in response to an SCN gap, it ignores the value that is specified by the log.mining.batch.size.max property. After the connector finishes the mining session and catches back up to real-time events, it resumes enforcement of the maximum log mining batch size.
SCN gap detection is available only if the large SCN increment occurs while the connector is running and processing near real-time events.
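The two detection properties are tuned together. The following sketch shows the shape of such a configuration; the values are illustrative, not documented defaults, so consult the connector property reference before relying on them:
{
  "log.mining.scn.gap.detection.gap.size.min": "1000000",
  "log.mining.scn.gap.detection.time.interval.max.ms": "20000"
}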
2.5.1.11. How Debezium manages offsets in databases that change infrequently
The Debezium Oracle connector tracks system change numbers in the connector offsets so that when the connector is restarted, it can begin where it left off. These offsets are part of each emitted change event; however, when the frequency of database changes is low (every few hours or days), the offsets can become stale and prevent the connector from successfully restarting if the system change number is no longer available in the transaction logs.
For connectors that use non-CDB mode to connect to Oracle, you can enable heartbeat.interval.ms
to force the connector to emit a heartbeat event at regular intervals so that offsets remain synchronized.
For connectors that use CDB mode to connect to Oracle, maintaining synchronization is more complicated. Not only must you set heartbeat.interval.ms
, but it’s also necessary to set heartbeat.action.query
. Specifying both properties is required, because in CDB mode, the connector specifically tracks changes inside the PDB only. A supplementary mechanism is needed to trigger change events from within the pluggable database. At regular intervals, the heartbeat action query causes the connector to insert a new table row, or update an existing row in the pluggable database. Debezium detects the table changes and emits change events for them, ensuring that offsets remain synchronized, even in pluggable databases that process changes infrequently.
For the connector to use the heartbeat.action.query
with tables that are not owned by the connector user account, you must grant the connector user permission to run the necessary INSERT
or UPDATE
queries on those tables.
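For a CDB deployment, the two heartbeat properties are typically set together, as in the following sketch. The heartbeat table and UPDATE statement are hypothetical examples; the table must exist in the pluggable database and be writable by the connector user:
{
  "heartbeat.interval.ms": "30000",
  "heartbeat.action.query": "UPDATE DEBEZIUM.HEARTBEAT SET ts = SYSTIMESTAMP WHERE id = 1"
}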
2.5.2. Descriptions of Debezium Oracle connector data change events
Every data change event that the Oracle connector emits has a key and a value. The structures of the key and value depend on the table from which the change events originate. For information about how Debezium constructs topic names, see Topic names.
The Debezium Oracle connector ensures that all Kafka Connect schema names are valid Avro schema names. This means that the logical server name must start with alphabetic characters or an underscore ([a-z,A-Z,_]), and the remaining characters in the logical server name and all characters in the schema and table names must be alphanumeric characters or an underscore ([a-z,A-Z,0-9,\_]). The connector automatically replaces invalid characters with an underscore character.
Unexpected naming conflicts can result when the only distinguishing characters between multiple logical server names, schema names, or table names are not valid characters, and those characters are replaced with underscores.
Debezium and Kafka Connect are designed around continuous streams of event messages. However, the structure of these events might change over time, which can be difficult for topic consumers to handle. To facilitate the processing of mutable event structures, each event in Kafka Connect is self-contained. Every message key and value has two parts: a schema and payload. The schema describes the structure of the payload, while the payload contains the actual data.
Changes that are performed by the SYS or SYSTEM user accounts are not captured by the connector.
The following topics contain more details about data change events:
2.5.2.1. About keys in Debezium Oracle connector change events
For each changed table, the change event key is structured such that a field exists for each column in the primary key (or unique key constraint) of the table at the time when the event is created.
For example, a customers table that is defined in the inventory database schema might have the following change event key:
CREATE TABLE customers (
  id NUMBER(9) GENERATED BY DEFAULT ON NULL AS IDENTITY (START WITH 1001) NOT NULL PRIMARY KEY,
  first_name VARCHAR2(255) NOT NULL,
  last_name VARCHAR2(255) NOT NULL,
  email VARCHAR2(255) NOT NULL UNIQUE
);
If the value of the topic.prefix connector configuration property is set to server1, the JSON representation for every change event that occurs in the customers table in the database features the following key structure:
{ "schema": { "type": "struct", "fields": [ { "type": "int32", "optional": false, "field": "ID" } ], "optional": false, "name": "server1.INVENTORY.CUSTOMERS.Key" }, "payload": { "ID": 1004 } }
The schema portion of the key contains a Kafka Connect schema that describes the content of the key portion. In the preceding example, the payload value is not optional, the structure is defined by a schema named server1.INVENTORY.CUSTOMERS.Key, and there is one required field named ID of type int32. The value of the key's payload field indicates that it is indeed a structure (which in JSON is just an object) with a single ID field, whose value is 1004.
Therefore, you can interpret this key as describing the row in the inventory.customers table (output from the connector named server1) whose id primary key column had a value of 1004.
2.5.2.2. About values in Debezium Oracle connector change events
The structure of a value in a change event message mirrors the structure of the message key, and contains both a schema section and a payload section.
Payload of a change event value
An envelope structure in the payload sections of a change event value contains the following fields:
op
- A mandatory field that contains a string value describing the type of operation. The op field in the payload of an Oracle connector change event value contains one of the following values: c (create or insert), u (update), d (delete), or r (read, which indicates a snapshot).

before
- An optional field that, if present, describes the state of the row before the event occurred. The structure is described by the server1.INVENTORY.CUSTOMERS.Value Kafka Connect schema, which the server1 connector uses for all rows in the inventory.customers table.

after
- An optional field that, if present, contains the state of a row after a change occurs. The structure is described by the same server1.INVENTORY.CUSTOMERS.Value Kafka Connect schema that is used for the before field.

source
- A mandatory field that contains a structure that describes the source metadata for the event. In the case of the Oracle connector, the structure includes the following fields:
  - The Debezium version.
  - The connector name.
  - Whether the event is part of an ongoing snapshot or not.
  - The transaction id (not included for snapshots).
  - The SCN of the change.
  - A timestamp that indicates when the record in the source database changed (for snapshots, the timestamp indicates when the snapshot occurred).
  - The username of the account that made the change.
  - The ROWID associated with the row.

Tip: The commit_scn field is optional and describes the SCN of the transaction commit that the change event participates within.

ts_ms
- An optional field that, if present, contains the time (based on the system clock in the JVM that runs the Kafka Connect task) at which the connector processed the event.
Schema of a change event value
The schema portion of the event message’s value contains a schema that describes the envelope structure of the payload and the nested fields within it.
For more information about change event values, see the following topics:
create events
The following example shows the value of a create event from the customers table that is described in the change event keys example:
{ "schema": { "type": "struct", "fields": [ { "type": "struct", "fields": [ { "type": "int32", "optional": false, "field": "ID" }, { "type": "string", "optional": false, "field": "FIRST_NAME" }, { "type": "string", "optional": false, "field": "LAST_NAME" }, { "type": "string", "optional": false, "field": "EMAIL" } ], "optional": true, "name": "server1.DEBEZIUM.CUSTOMERS.Value", "field": "before" }, { "type": "struct", "fields": [ { "type": "int32", "optional": false, "field": "ID" }, { "type": "string", "optional": false, "field": "FIRST_NAME" }, { "type": "string", "optional": false, "field": "LAST_NAME" }, { "type": "string", "optional": false, "field": "EMAIL" } ], "optional": true, "name": "server1.DEBEZIUM.CUSTOMERS.Value", "field": "after" }, { "type": "struct", "fields": [ { "type": "string", "optional": true, "field": "version" }, { "type": "string", "optional": false, "field": "name" }, { "type": "int64", "optional": true, "field": "ts_ms" }, { "type": "int64", "optional": true, "field": "ts_us" }, { "type": "int64", "optional": true, "field": "ts_ns" }, { "type": "string", "optional": true, "field": "txId" }, { "type": "string", "optional": true, "field": "scn" }, { "type": "string", "optional": true, "field": "commit_scn" }, { "type": "string", "optional": true, "field": "rs_id" }, { "type": "int64", "optional": true, "field": "ssn" }, { "type": "int32", "optional": true, "field": "redo_thread" }, { "type": "string", "optional": true, "field": "user_name" }, { "type": "boolean", "optional": true, "field": "snapshot" }, { "type": "string", "optional": true, "field": "row_id" } ], "optional": false, "name": "io.debezium.connector.oracle.Source", "field": "source" }, { "type": "string", "optional": false, "field": "op" }, { "type": "int64", "optional": true, "field": "ts_ms" }, { "type": "int64", "optional": true, "field": "ts_us" }, { "type": "int64", "optional": true, "field": "ts_ns" } ], "optional": false, "name": "server1.DEBEZIUM.CUSTOMERS.Envelope" }, "payload": { "before": null, "after": { "ID": 1004, "FIRST_NAME": "Anne", "LAST_NAME": "Kretchmar", "EMAIL": "annek@noanswer.org" }, "source": { "version": "2.7.3.Final", "name": "server1", "ts_ms": 1520085154000, "ts_us": 1520085154000000, "ts_ns": 1520085154000000000, "txId": "6.28.807", "scn": "2122185", "commit_scn": "2122185", "rs_id": "001234.00012345.0124", "ssn": 1, "redo_thread": 1, "user_name": "user", "snapshot": false, "row_id": "AAASgjAAMAAAACnAAA" }, "op": "c", "ts_ms": 1532592105975, "ts_us": 1532592105975741, "ts_ns": 1532592105975741582 } }
In the preceding example, notice how the event defines the following schemas:
- The envelope (server1.DEBEZIUM.CUSTOMERS.Envelope).
- The source structure (io.debezium.connector.oracle.Source), which is specific to the Oracle connector and reused across all events.
- The table-specific schemas for the before and after fields.
The names of the schemas for the before and after fields are of the form <logicalName>.<schemaName>.<tableName>.Value, and thus are entirely independent from the schemas for all other tables. As a result, when you use the Avro converter, the Avro schemas for tables in each logical source have their own evolution and history.
The payload portion of this event's value provides information about the event. It describes that a row was created (op=c), and shows that the after field value contains the values that were inserted into the ID, FIRST_NAME, LAST_NAME, and EMAIL columns of the row.
By default, the JSON representations of events are much larger than the rows that they describe. The larger size is due to the JSON representation including both the schema and payload portions of a message. You can use the Avro Converter to decrease the size of messages that the connector writes to Kafka topics.
update events
The following example shows an update change event that the connector captures from the same table as the preceding create event.
{ "schema": { ... }, "payload": { "before": { "ID": 1004, "FIRST_NAME": "Anne", "LAST_NAME": "Kretchmar", "EMAIL": "annek@noanswer.org" }, "after": { "ID": 1004, "FIRST_NAME": "Anne", "LAST_NAME": "Kretchmar", "EMAIL": "anne@example.com" }, "source": { "version": "2.7.3.Final", "name": "server1", "ts_ms": 1520085811000, "ts_us": 1520085811000000, "ts_ns": 1520085811000000000, "txId": "6.9.809", "scn": "2125544", "commit_scn": "2125544", "rs_id": "001234.00012345.0124", "ssn": 1, "redo_thread": 1, "user_name": "user", "snapshot": false, "row_id": "AAASgjAAMAAAACnAAA" }, "op": "u", "ts_ms": 1532592713485, "ts_us": 1532592713485152, "ts_ns": 1532592713485152954, } }
The payload has the same structure as the payload of a create (insert) event, but the following values are different:
- The value of the op field is u, signifying that this row changed because of an update.
- The before field shows the former state of the row with the values that were present before the database commit that applied the update.
- The after field shows the updated state of the row, with the EMAIL value now set to anne@example.com.
- The structure of the source field includes the same fields as before, but the values are different, because the connector captured the event from a different position in the redo log.
- The ts_ms field shows the timestamp that indicates when Debezium processed the event.
The payload section reveals several other useful pieces of information. For example, by comparing the before and after structures, we can determine how a row changed as the result of a commit. The source structure provides information about Oracle's record of this change, which supports traceability. It also gives us insight into when this event occurred in relation to other events in this topic and in other topics. Did it occur before, after, or as part of the same commit as another event?
When the columns for a row’s primary/unique key are updated, the value of the row’s key changes. As a result, Debezium emits three events after such an update:
- A DELETE event.
- A tombstone event with the old key for the row.
- An INSERT event that provides the new key for the row.
delete events
The following example shows a delete event for the table that is shown in the preceding create and update event examples. The schema portion of the delete event is identical to the schema portion for those events.
{ "schema": { ... }, "payload": { "before": { "ID": 1004, "FIRST_NAME": "Anne", "LAST_NAME": "Kretchmar", "EMAIL": "anne@example.com" }, "after": null, "source": { "version": "2.7.3.Final", "name": "server1", "ts_ms": 1520085153000, "ts_us": 1520085153000000, "ts_ns": 1520085153000000000, "txId": "6.28.807", "scn": "2122184", "commit_scn": "2122184", "rs_id": "001234.00012345.0124", "ssn": 1, "redo_thread": 1, "user_name": "user", "snapshot": false, "row_id": "AAASgjAAMAAAACnAAA" }, "op": "d", "ts_ms": 1532592105960, "ts_us": 1532592105960854, "ts_ns": 1532592105960854693 } }
The payload portion of the event reveals several differences when compared to the payload of a create or update event:
- The value of the op field is d, signifying that the row was deleted.
- The before field shows the former state of the row that was deleted with the database commit.
- The value of the after field is null, signifying that the row no longer exists.
- The structure of the source field includes many of the keys that exist in create or update events, but the values in the ts_ms, scn, and txId fields are different.
- The ts_ms field shows a timestamp that indicates when Debezium processed this event.
The delete event provides consumers with the information that they require to process the removal of this row.
The Oracle connector’s events are designed to work with Kafka log compaction, which allows for the removal of some older messages as long as at least the most recent message for every key is kept. This allows Kafka to reclaim storage space while ensuring the topic contains a complete dataset and can be used for reloading key-based state.
When a row is deleted, the delete event value shown in the preceding example still works with log compaction, because Kafka is able to remove all earlier messages that use the same key. The message value must be set to null to instruct Kafka to remove all messages that share the same key. To make this possible, by default, Debezium's Oracle connector always follows a delete event with a special tombstone event that has the same key but a null value. You can change the default behavior by setting the connector property tombstones.on.delete.
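For example, to suppress the tombstone events that normally follow delete events, you might disable them as shown in the following configuration excerpt:

{
  "tombstones.on.delete": "false"
}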
truncate events
A truncate change event signals that a table has been truncated. The message key is null in this case; the message value looks like the following example:
{ "schema": { ... }, "payload": { "before": null, "after": null, "source": { 1 "version": "2.7.3.Final", "connector": "oracle", "name": "oracle_server", "ts_ms": 1638974535000, "ts_us": 1638974535000000, "ts_ns": 1638974535000000000, "snapshot": "false", "db": "ORCLPDB1", "sequence": null, "schema": "DEBEZIUM", "table": "TEST_TABLE", "txId": "02000a0037030000", "scn": "13234397", "commit_scn": "13271102", "lcr_position": null, "rs_id": "001234.00012345.0124", "ssn": 1, "redo_thread": 1, "user_name": "user" }, "op": "t", 2 "ts_ms": 1638974558961, 3 "ts_us": 1638974558961987, 4 "ts_ns": 1638974558961987251, 5 "transaction": null } }
Item | Field name | Description |
---|---|---|
1 |
|
Mandatory field that describes the source metadata for the event. In a truncate event value, the
|
2 |
|
Mandatory string that describes the type of operation. The |
3 |
|
Optional field that displays the time at which the connector processed the event. The time is based on the system clock in the JVM running the Kafka Connect task.
In the |
Because truncate events represent changes made to an entire table, and have no message key, in topics with multiple partitions, there is no guarantee that consumers receive truncate events and change events (create, update, and so forth) for a table in order. For example, when a consumer reads events from different partitions, it might receive an update event for a table after it receives a truncate event for the same table. Ordering can be guaranteed only if a topic uses a single partition.
If you do not want to capture truncate events, use the skipped.operations option to filter them out.
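For example, the following configuration excerpt skips truncate operations (t); you can combine additional operation codes in the same comma-separated value:

{
  "skipped.operations": "t"
}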
2.5.3. How Debezium Oracle connectors map data types
When the Debezium Oracle connector detects a change in the value of a table row, it emits a change event that represents the change. Each change event record is structured in the same way as the original table, with the event record containing a field for each column value. The data type of a table column determines how the connector represents the column’s values in change event fields, as shown in the tables in the following sections.
For each column in a table, Debezium maps the source data type to a literal type and, in some cases, a semantic type, in the corresponding event field.
- Literal types
- Describe how the value is literally represented, using one of the following Kafka Connect schema types: INT8, INT16, INT32, INT64, FLOAT32, FLOAT64, BOOLEAN, STRING, BYTES, ARRAY, MAP, and STRUCT.
. - Semantic types
- Describe how the Kafka Connect schema captures the meaning of the field, by using the name of the Kafka Connect schema for the field.
If the default data type conversions do not meet your needs, you can create a custom converter for the connector.
For some Oracle large object (CLOB, NCLOB, and BLOB) and numeric data types, you can manipulate the way that the connector performs the type mapping by changing default configuration property settings. For more information about how Debezium properties control mappings for these data types, see Binary and Character LOB types and Numeric types.
For more information about how the Debezium connector maps Oracle data types, see the following topics:
Character types
The following table describes how the connector maps basic character types.
Oracle data type | Literal type (schema type) | Semantic type (schema name) and Notes |
---|---|---|
|
| n/a |
|
| n/a |
|
| n/a |
|
| n/a |
|
| n/a |
Use of the BLOB, CLOB, and NCLOB data types with the Debezium Oracle connector is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview.
The following table describes how the connector maps binary and character large object (LOB) data types.
Oracle data type | Literal type (schema type) | Semantic type (schema name) and Notes |
---|---|---|
| n/a | This data type is not supported |
|
|
Depending on the setting of the
|
|
| n/a |
| n/a | This data type is not supported. |
| n/a | This data type is not supported. |
|
| n/a |
| n/a |
Depending on the setting of the
|
Oracle only supplies column values for CLOB, NCLOB, and BLOB data types if they're explicitly set or changed in a SQL statement. As a result, change events never contain the value of an unchanged CLOB, NCLOB, or BLOB column. Instead, they contain placeholders as defined by the connector property, unavailable.value.placeholder.
If the value of a CLOB, NCLOB, or BLOB column is updated, the new value is placed in the after element of the corresponding update change event. The before element contains the unavailable value placeholder.
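As a rough sketch, a connector that captures LOB columns might enable LOB support and override the placeholder that is emitted for unchanged LOB values. The property values shown here are illustrative:

{
  "lob.enabled": "true",
  "unavailable.value.placeholder": "__debezium_unavailable_value"
}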
Numeric types
The following table describes how the Debezium Oracle connector maps numeric types.
You can modify the way that the connector maps the Oracle DECIMAL, NUMBER, NUMERIC, and REAL data types by changing the value of the connector's decimal.handling.mode configuration property. When the property is set to its default value of precise, the connector maps these Oracle data types to the Kafka Connect org.apache.kafka.connect.data.Decimal logical type, as indicated in the table. When the value of the property is set to double or string, the connector uses alternate mappings for some Oracle data types. For more information, see the Semantic type and Notes column in the following table.
Oracle data type | Literal type (schema type) | Semantic type (schema name) and Notes |
---|---|---|
|
| n/a |
|
| n/a |
|
|
When the
When the |
|
|
|
|
|
|
|
|
|
|
|
When the
When the |
|
|
When the
When the |
|
|
|
|
|
When the
When the |
|
|
|
|
|
When the
When the |
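For example, to emit DECIMAL, NUMBER, NUMERIC, and REAL values as strings rather than as the Decimal logical type, you might add the following setting to the connector configuration:

{
  "decimal.handling.mode": "string"
}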
As mentioned above, Oracle allows negative scales in the NUMBER type. This can cause an issue during conversion to the Avro format when the number is represented as a Decimal, because the Decimal type includes scale information, but the Avro specification allows only positive values for the scale. Depending on the schema registry used, this can result in an Avro serialization failure. To avoid this issue, you can use the NumberToZeroScaleConverter, which converts sufficiently high numbers (P - S >= 19) with negative scale into the Decimal type with zero scale. It can be configured as follows:
converters=zero_scale
zero_scale.type=io.debezium.connector.oracle.converters.NumberToZeroScaleConverter
zero_scale.decimal.mode=precise
By default, the number is converted to the Decimal type (zero_scale.decimal.mode=precise), but for completeness, the remaining two supported modes (double and string) are available as well.
Boolean types
Oracle does not provide native support for a BOOLEAN data type. However, it is common practice to use other data types with certain semantics to simulate the concept of a logical BOOLEAN data type.
To enable you to convert source columns to Boolean data types, Debezium provides a NumberOneToBooleanConverter custom converter that you can use in one of the following ways:
- Map all NUMBER(1) columns to a BOOLEAN type.
- Enumerate a subset of columns by using a comma-separated list of regular expressions. To use this type of conversion, you must set the converters configuration property with the selector parameter, as shown in the following example:

converters=boolean
boolean.type=io.debezium.connector.oracle.converters.NumberOneToBooleanConverter
boolean.selector=.*MYTABLE.FLAG,.*.IS_ARCHIVED
Temporal types
Other than the Oracle INTERVAL, TIMESTAMP WITH TIME ZONE, and TIMESTAMP WITH LOCAL TIME ZONE data types, the way that the connector converts temporal types depends on the value of the time.precision.mode configuration property.

When the time.precision.mode configuration property is set to adaptive (the default), the connector determines the literal and semantic type for the temporal types based on the column's data type definition so that events exactly represent the values in the database:
Oracle data type | Literal type (schema type) | Semantic type (schema name) and Notes |
---|---|---|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
When the time.precision.mode configuration property is set to connect, the connector uses the predefined Kafka Connect logical types. This can be useful when consumers only know about the built-in Kafka Connect logical types and are unable to handle variable-precision time values. Because the level of precision that Oracle supports exceeds the level that the logical types in Kafka Connect support, if you set time.precision.mode to connect, a loss of precision results when the fractional second precision value of a database column is greater than 3:
Oracle data type | Literal type (schema type) | Semantic type (schema name) and Notes |
---|---|---|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
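For example, if downstream consumers handle only the built-in Kafka Connect logical types, you might switch the connector to connect mode, accepting the loss of precision described above. This is a minimal configuration sketch:

{
  "time.precision.mode": "connect"
}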
ROWID types
The following table describes how the connector maps ROWID (row address) data types.
Oracle data type | Literal type (schema type) | Semantic type (schema name) and Notes |
---|---|---|
|
| n/a |
| n/a | This data type is not supported. |
Use of the XMLTYPE data type with the Debezium Oracle connector is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview.
The following table describes how the connector maps XMLTYPE data types.
Oracle data type | Literal type (schema type) | Semantic type (schema name) and Notes |
---|---|---|
|
|
|
User-defined types
Oracle enables you to define custom data types to provide flexibility when the built-in data types do not satisfy your requirements. There are several user-defined types, such as Object types, REF data types, Varrays, and Nested Tables. At this time, you cannot use the Debezium Oracle connector with any of these user-defined types.
Oracle-supplied types
Oracle provides SQL-based interfaces that you can use to define new types when the built-in or ANSI-supported types are insufficient. Oracle offers several commonly used data types to serve a broad array of purposes such as Any or Spatial types. At this time, you cannot use the Debezium Oracle connector with any of these data types.
Default Values
If a default value is specified for a column in the database schema, the Oracle connector will attempt to propagate this value to the schema of the corresponding Kafka record field. Most common data types are supported, including:
- Character types (CHAR, NCHAR, VARCHAR, VARCHAR2, NVARCHAR, NVARCHAR2)
- Numeric types (INTEGER, NUMERIC, etc.)
- Temporal types (DATE, TIMESTAMP, INTERVAL, etc.)
If a temporal type uses a function call such as TO_TIMESTAMP or TO_DATE to represent the default value, the connector will resolve the default value by making an additional database call to evaluate the function. For example, if a DATE column is defined with the default value of TO_DATE('2021-01-02', 'YYYY-MM-DD'), the column's default value will be the number of days since the UNIX epoch for that date, or 18629 in this case.
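The following SQL sketch shows a hypothetical table definition that triggers this behavior; the table and column names are illustrative only:

-- Hypothetical table: the connector resolves the DATE column default to 18629,
-- the number of days between 1970-01-01 and 2021-01-02.
CREATE TABLE inventory.orders (
  id         NUMBER(9) NOT NULL PRIMARY KEY,
  order_date DATE DEFAULT TO_DATE('2021-01-02', 'YYYY-MM-DD')
);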
If a temporal type uses the SYSDATE constant to represent the default value, the connector will resolve this based on whether the column is defined as NOT NULL or NULL. If the column is nullable, no default value will be set; however, if the column is not nullable, the default value will be resolved as either 0 (for DATE or TIMESTAMP(n) data types) or 1970-01-01T00:00:00Z (for TIMESTAMP WITH TIME ZONE or TIMESTAMP WITH LOCAL TIME ZONE data types). The default value type will be numeric, except if the column is a TIMESTAMP WITH TIME ZONE or TIMESTAMP WITH LOCAL TIME ZONE, in which case it is emitted as a string.
Custom converters
By default, the Debezium Oracle connector provides several CustomConverter implementations specific to Oracle data types. These custom converters provide alternative mappings for specific data types based on the connector configuration. To add a CustomConverter to the connector, follow the instructions in the Custom Converters documentation.
The Debezium Oracle connector provides the following custom converters:
NUMBER(1) to Boolean
Beginning with version 23, Oracle database provides a BOOLEAN logical data type. In earlier versions, the database simulates a BOOLEAN type by using a NUMBER(1) data type, constrained with a value of 0 for false, or a value of 1 for true.
By default, when Debezium emits change events for source columns that use the NUMBER(1) data type, it converts the data to the INT8 literal type. If the default mapping for NUMBER(1) data types does not meet your needs, you can configure the connector to use the logical BOOL type when it emits these columns by configuring the NumberOneToBooleanConverter, as shown in the following example:
Example: NumberOneToBooleanConverter configuration
converters=number-to-boolean
number-to-boolean.type=io.debezium.connector.oracle.converters.NumberOneToBooleanConverter
number-to-boolean.selector=.*.MY_TABLE.DATA
In the preceding example, the selector property is optional. The selector property specifies a regular expression that designates which tables or columns the converter applies to. If you omit the selector property, when Debezium emits an event, every column with the NUMBER(1) data type is converted to a field that uses the logical BOOL type.
NUMBER to Zero Scale
Oracle supports creating NUMBER-based columns with a negative scale, for example, NUMBER(10,-2). Not all systems can process negative scale values, so these values can result in processing problems in your pipeline. For example, because Apache Avro does not support these values, problems can occur if Debezium converts events to Avro format. Similarly, downstream consumers that do not support these values can also encounter errors.
Example configuration
converters=number-zero-scale
number-zero-scale.type=io.debezium.connector.oracle.converters.NumberToZeroScaleConverter
number-zero-scale.decimal.mode=precise
In the preceding example, the decimal.mode property specifies how the connector emits decimal values. This property is optional. If you omit the decimal.mode property, the converter defaults to using the PRECISE decimal handling mode.
RAW to String
Although Oracle recommends against the use of certain data types, such as RAW, legacy systems might continue to use such types. By default, Debezium emits RAW column types as logical BYTES, a type that enables the storage of binary or text-based data.
In some cases, RAW columns might store character data as a series of bytes. To facilitate consumption by consumers, you can configure Debezium to use the RawToStringConverter. The RawToStringConverter provides a way to easily target such RAW columns and emit values as strings, rather than bytes. The following example shows how to add the RawToStringConverter to the connector configuration:
Example: RawToStringConverter configuration
converters=raw-to-string
raw-to-string.type=io.debezium.connector.oracle.converters.RawToStringConverter
raw-to-string.selector=.*.MY_TABLE.DATA
In the preceding example, the selector property enables you to define a regular expression that specifies the tables or columns that the converter processes. If you omit the selector property, the converter maps all RAW column types to logical STRING field types.
2.5.4. Setting up Oracle to work with Debezium
The following steps are necessary to set up Oracle for use with the Debezium Oracle connector. These steps assume the use of the multi-tenancy configuration with a container database and at least one pluggable database. If you do not intend to use a multi-tenant configuration, it might be necessary to adjust the following steps.
For details about setting up Oracle for use with the Debezium connector, see the following sections:
- Section 2.5.4.1, “Compatibility of the Debezium Oracle connector with Oracle installation types”
- Section 2.5.4.2, “Schemas that the Debezium Oracle connector excludes when capturing change events”
- Section 2.5.4.4, “Preparing Oracle databases for use with Debezium”
- Section 2.5.4.5, “Resizing Oracle redo logs to accommodate the data dictionary”
- Section 2.5.4.7, “Creating an Oracle user for the Debezium Oracle connector”
- Section 2.5.4.8, “Running the connector with an Oracle standby database”
2.5.4.1. Compatibility of the Debezium Oracle connector with Oracle installation types
An Oracle database can be installed either as a standalone instance or using Oracle Real Application Cluster (RAC). The Debezium Oracle connector is compatible with both types of installation.
2.5.4.2. Schemas that the Debezium Oracle connector excludes when capturing change events
When the Debezium Oracle connector captures tables, it automatically excludes tables from the following schemas:
- appqossys
- audsys
- ctxsys
- dvsys
- dbsfwuser
- dbsnmp
- qsmadmin_internal
- lbacsys
- mdsys
- ojvmsys
- olapsys
- orddata
- ordsys
- outln
- sys
- system
- vecsys (Oracle 23+)
- wmsys
- xdb
To enable the connector to capture changes from a table, the table must use a schema that is not named in the preceding list.
2.5.4.3. Tables that the Debezium Oracle connector excludes when capturing change events
When the Debezium Oracle connector captures tables, it automatically excludes tables that match the following rules:
- Compression Advisor tables matching the pattern CMP[3|4]$[0-9]+.
- Index-organized tables matching the pattern SYS_IOT_OVER_%.
- Spatial tables matching the patterns MDRT_%, MDRS_%, or MDXT_%.
. - Nested tables
To enable the connector to capture a table with a name that matches any of the preceding rules, you must rename the table.
2.5.4.4. Preparing Oracle databases for use with Debezium
Configuration needed for Oracle LogMiner
ORACLE_SID=ORACLCDB dbz_oracle sqlplus /nolog

CONNECT sys/top_secret AS SYSDBA
alter system set db_recovery_file_dest_size = 10G;
alter system set db_recovery_file_dest = '/opt/oracle/oradata/recovery_area' scope=spfile;
shutdown immediate
startup mount
alter database archivelog;
alter database open;
-- Should now "Database log mode: Archive Mode"
archive log list
exit;
Oracle AWS RDS does not allow you to execute the commands above nor does it allow you to log in as sysdba. AWS provides these alternative commands to configure LogMiner. Before executing these commands, ensure that your Oracle AWS RDS instance is enabled for backups.
To confirm that Oracle has backups enabled, execute the command below first. The LOG_MODE should say ARCHIVELOG. If it does not, you may need to reboot your Oracle AWS RDS instance.
Configuration needed for Oracle AWS RDS LogMiner
SQL> SELECT LOG_MODE FROM V$DATABASE; LOG_MODE ------------ ARCHIVELOG
After LOG_MODE is set to ARCHIVELOG, execute the following commands to complete the LogMiner configuration. The first command sets the archive log retention period, and the second adds supplemental logging.
Configuration needed for Oracle AWS RDS LogMiner
exec rdsadmin.rdsadmin_util.set_configuration('archivelog retention hours',24); exec rdsadmin.rdsadmin_util.alter_supplemental_logging('ADD');
To enable Debezium to capture the before state of changed database rows, you must also enable supplemental logging for captured tables or for the entire database. The following example illustrates how to configure supplemental logging for all columns in a single inventory.customers table.
ALTER TABLE inventory.customers ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
Enabling supplemental logging for all table columns increases the volume of the Oracle redo logs. To prevent excessive growth in the size of the logs, apply the preceding configuration selectively.
Minimal supplemental logging must be enabled at the database level and can be configured as follows.
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
2.5.4.5. Resizing Oracle redo logs to accommodate the data dictionary
Depending on the database configuration, the size and number of redo logs might not be sufficient to achieve acceptable performance. Before you set up the Debezium Oracle connector, ensure that the capacity of the redo logs is sufficient to support the database.
The capacity of the redo logs for a database must be sufficient to store its data dictionary. In general, the size of the data dictionary increases with the number of tables and columns in the database. If the redo log lacks sufficient capacity, both the database and the Debezium connector might experience performance problems.
Consult with your database administrator to evaluate whether the database might require increased log capacity.
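When you review redo log capacity with your database administrator, a query such as the following can serve as a starting point; the sizing decision itself depends on your workload:

-- List the configured redo log groups and their sizes in megabytes
SELECT GROUP#, THREAD#, BYTES / 1024 / 1024 AS SIZE_MB, STATUS FROM V$LOG;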
2.5.4.6. Specifying the archive log destination that the Debezium Oracle connector uses
Oracle database administrators can configure up to 31 different destinations for archive logs. Administrators can set parameters for each destination to designate it for a specific use, for example, log shipping for physical standbys, or external storage to allow for extended log retention. Oracle reports details about archive log destinations in the V$ARCHIVE_DEST_STATUS view.
The Debezium Oracle connector only uses destinations that have a status of VALID and a type of LOCAL. If your Oracle environment includes multiple destinations that satisfy those criteria, consult with your Oracle administrator to determine which archive log destination Debezium should use.
Procedure
- To specify the archive log destination that you want Debezium to use, set the log.mining.archive.destination.name property in the connector configuration.
For example, suppose that a database is configured with two archive destination paths, /path/one and /path/two, and that the V$ARCHIVE_DEST_STATUS table associates these paths with destination names that are specified in the column DEST_NAME. If both destinations satisfy the criteria for Debezium (that is, their status is VALID and their type is LOCAL), to configure the connector to use the archive logs that the database writes to /path/two, set the value of log.mining.archive.destination.name to the value in the DEST_NAME column that is associated with /path/two in the V$ARCHIVE_DEST_STATUS table. For example, if the DEST_NAME is LOG_ARCHIVE_DEST_3 for /path/two, you would configure Debezium as follows:
{ "log.mining.archive.destination.name": "LOG_ARCHIVE_DEST_3" }
Do not set the value of log.mining.archive.destination.name to the path that the database uses for the archive logs. Set the property to the name of an archive log destination in the DEST_NAME column for a row in the V$ARCHIVE_DEST_STATUS table that satisfies your archive log retention policy.

If your Oracle environment includes multiple destinations that satisfy those criteria, and you fail to specify the preferred destination, the Debezium Oracle connector selects the destination path at random. Because the retention policy that is configured for each destination might differ, this can lead to errors if the connector selects a path from which the requested log data was deleted.
2.5.4.7. Creating an Oracle user for the Debezium Oracle connector
For the Debezium Oracle connector to capture change events, it must run as an Oracle LogMiner user that has specific permissions. The following example shows the SQL for creating an Oracle user account for the connector in a multi-tenant database model.
The connector captures database changes that are made by its own Oracle user account. However, it does not capture changes that are made by the SYS or SYSTEM user accounts.
Creating the connector’s LogMiner user
sqlplus sys/top_secret@//localhost:1521/ORCLCDB as sysdba
  CREATE TABLESPACE logminer_tbs DATAFILE '/opt/oracle/oradata/ORCLCDB/logminer_tbs.dbf' SIZE 25M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED;
  exit;

sqlplus sys/top_secret@//localhost:1521/ORCLPDB1 as sysdba
  CREATE TABLESPACE logminer_tbs DATAFILE '/opt/oracle/oradata/ORCLCDB/ORCLPDB1/logminer_tbs.dbf' SIZE 25M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED;
  exit;

sqlplus sys/top_secret@//localhost:1521/ORCLCDB as sysdba
  CREATE USER c##dbzuser IDENTIFIED BY dbz DEFAULT TABLESPACE logminer_tbs QUOTA UNLIMITED ON logminer_tbs CONTAINER=ALL;

  GRANT CREATE SESSION TO c##dbzuser CONTAINER=ALL; 1
  GRANT SET CONTAINER TO c##dbzuser CONTAINER=ALL; 2
  GRANT SELECT ON V_$DATABASE to c##dbzuser CONTAINER=ALL; 3
  GRANT FLASHBACK ANY TABLE TO c##dbzuser CONTAINER=ALL; 4
  GRANT SELECT ANY TABLE TO c##dbzuser CONTAINER=ALL; 5
  GRANT SELECT_CATALOG_ROLE TO c##dbzuser CONTAINER=ALL; 6
  GRANT EXECUTE_CATALOG_ROLE TO c##dbzuser CONTAINER=ALL; 7
  GRANT SELECT ANY TRANSACTION TO c##dbzuser CONTAINER=ALL; 8
  GRANT LOGMINING TO c##dbzuser CONTAINER=ALL; 9
  GRANT CREATE TABLE TO c##dbzuser CONTAINER=ALL; 10
  GRANT LOCK ANY TABLE TO c##dbzuser CONTAINER=ALL; 11
  GRANT CREATE SEQUENCE TO c##dbzuser CONTAINER=ALL; 12
  GRANT EXECUTE ON DBMS_LOGMNR TO c##dbzuser CONTAINER=ALL; 13
  GRANT EXECUTE ON DBMS_LOGMNR_D TO c##dbzuser CONTAINER=ALL; 14
  GRANT SELECT ON V_$LOG TO c##dbzuser CONTAINER=ALL; 15
  GRANT SELECT ON V_$LOG_HISTORY TO c##dbzuser CONTAINER=ALL; 16
  GRANT SELECT ON V_$LOGMNR_LOGS TO c##dbzuser CONTAINER=ALL; 17
  GRANT SELECT ON V_$LOGMNR_CONTENTS TO c##dbzuser CONTAINER=ALL; 18
  GRANT SELECT ON V_$LOGMNR_PARAMETERS TO c##dbzuser CONTAINER=ALL; 19
  GRANT SELECT ON V_$LOGFILE TO c##dbzuser CONTAINER=ALL; 20
  GRANT SELECT ON V_$ARCHIVED_LOG TO c##dbzuser CONTAINER=ALL; 21
  GRANT SELECT ON V_$ARCHIVE_DEST_STATUS TO c##dbzuser CONTAINER=ALL; 22
  GRANT SELECT ON V_$TRANSACTION TO c##dbzuser CONTAINER=ALL; 23
  GRANT SELECT ON V_$MYSTAT TO c##dbzuser CONTAINER=ALL; 24
  GRANT SELECT ON V_$STATNAME TO c##dbzuser CONTAINER=ALL; 25

  exit;
Item | Role name | Description |
---|---|---|
1 | CREATE SESSION | Enables the connector to connect to Oracle. |
2 | SET CONTAINER | Enables the connector to switch between pluggable databases. This is only required when the Oracle installation has container database support (CDB) enabled. |
3 | SELECT ON V_$DATABASE |
Enables the connector to read the |
4 | FLASHBACK ANY TABLE |
Enables the connector to perform Flashback queries, which is how the connector performs the initial snapshot of data. Optionally, rather than granting |
5 | SELECT ANY TABLE |
Enables the connector to read any table. Optionally, rather than granting |
6 | SELECT_CATALOG_ROLE | Enables the connector to read the data dictionary, which is needed by Oracle LogMiner sessions. |
7 | EXECUTE_CATALOG_ROLE | Enables the connector to write the data dictionary into the Oracle redo logs, which is needed to track schema changes. |
8 | SELECT ANY TRANSACTION |
Enables the snapshot process to perform a Flashback snapshot query against any transaction. When |
9 | LOGMINING | This role was added in newer versions of Oracle as a way to grant full access to Oracle LogMiner and its packages. On older versions of Oracle that don’t have this role, you can ignore this grant. |
10 | CREATE TABLE | Enables the connector to create its flush table in its default tablespace. The flush table allows the connector to explicitly control flushing of the LGWR internal buffers to disk. |
11 | LOCK ANY TABLE | Enables the connector to lock tables during schema snapshot. If snapshot locks are explicitly disabled via configuration, this grant can be safely ignored. |
12 | CREATE SEQUENCE | Enables the connector to create a sequence in its default tablespace. |
13 | EXECUTE ON DBMS_LOGMNR |
Enables the connector to run methods in the |
14 | EXECUTE ON DBMS_LOGMNR_D |
Enables the connector to run methods in the |
15 to 25 | SELECT ON V_$…. | Enables the connector to read these tables. The connector must be able to read information about the Oracle redo and archive logs, and the current transaction state, to prepare the Oracle LogMiner session. Without these grants, the connector cannot operate. |
2.5.4.8. Running the connector with an Oracle standby database
A standby database provides a synchronized copy of the primary instance. In the event of a primary database failure, standby databases provide for continuous availability, and disaster recovery. Oracle makes use of both physical and logical standby databases.
Physical standbys
A physical standby is an exact, block-for-block copy of the primary production database, and its system change number (SCN) values are identical to those of the primary. The Debezium Oracle connector cannot capture change events directly from a physical standby database, because physical standbys do not accept external connections. The connector can capture events from a physical standby only after the standby is converted to the primary database. The connector then connects to the former standby as if it were any primary database.
Logical standbys
A logical standby contains the same logical data as the primary, but data might be stored in a different physical manner. SCN offsets in a logical standby differ from the offsets in the primary database. You can configure the Debezium Oracle connector to capture changes from a logical standby database.
2.5.4.8.1. Capturing data from an Oracle failover database
When you set up a failover database, it is generally best practice to use a physical standby database rather than a logical standby database. A physical standby maintains a more consistent state with the primary database than does a logical standby. Physical standbys contain an exact replica of the primary data, and the system change number (SCN) values of the standby are identical to those of the primary. In a Debezium environment, after the database fails over to the physical standby, the presence of consistent SCN values ensures that the connector can find the last processed SCN value.
A physical standby is locked in a read-only mode, with managed recovery running to maintain synchronization. When a database is in standby mode, it does not accept external JDBC connections from clients, and it cannot be accessed by external applications.
After a failure event, to permit Debezium to connect to the former physical standby, a DBA must perform several actions to enable failover to the standby, and promote it to the primary database. The following list identifies some of the key actions:
- Cancel managed recovery on the standby.
- Complete the active recovery process.
- Convert the standby to the primary role.
- Open the new primary to client read and write operations.
After the former physical standby is available for normal use, you can configure the Debezium Oracle connector to connect to it. To enable the connector to capture from the new primary, edit the database hostname in the connector configuration, replacing the hostname of the original primary with the hostname of the new primary.
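For example, if the original primary host was dbprod01.example.com and the promoted standby is dbprod02.example.com (both hostnames are hypothetical), only the hostname needs to change in the connector configuration:

{
  "database.hostname": "dbprod02.example.com"
}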
2.5.4.8.2. Configuring the Debezium Oracle connector to capture events from a logical standby
When the Debezium connector for Oracle connects to a primary database, it uses an internal flush table to manage the flush cycles of the Oracle Log Writer Buffer (LGWR) process. The flush process requires that the user account through which the connector accesses the database has permission to create and write to this flush table. However, a logical standby database typically permits read-only access, preventing the connector from writing to the database. You can modify the connector configuration to enable the connector to capture events from a logical standby, or the DBA can create a new writable tablespace in which the connector can store the flush table.
The ability for the Debezium Oracle connector to ingest changes from a read-only logical standby database is a Developer Preview feature. Developer Preview features are not supported by Red Hat in any way and are not functionally complete or production-ready. Do not use Developer Preview software for production or business-critical workloads. Developer Preview software provides early access to upcoming product software in advance of its possible inclusion in a Red Hat product offering. Customers can use this software to test functionality and provide feedback during the development process. This software might not have any documentation, is subject to change or removal at any time, and has received limited testing. Red Hat might provide ways to submit feedback on Developer Preview software without an associated SLA.
For more information about the support scope of Red Hat Developer Preview software, see Developer Preview Support Scope.
Procedure
To enable Debezium to capture events from an Oracle read-only logical standby database, add the following property to the connector configuration, to disable creation and management of the flush table:
internal.log.mining.read.only=true
The preceding setting prevents the connector from creating and updating the LOG_MINING_FLUSH table. You can use the internal.log.mining.read.only property with an Oracle Standalone database, or with an Oracle RAC installation.
2.5.5. Deployment of Debezium Oracle connectors
You can use either of the following methods to deploy a Debezium Oracle connector:
Due to licensing requirements, the Debezium Oracle connector archive does not include the Oracle JDBC driver that the connector requires to connect to an Oracle database. To enable the connector to access the database, you must add the driver to your connector environment. For more information, see Obtaining the Oracle JDBC driver.
Additional resources
2.5.5.1. Obtaining the Oracle JDBC driver
Due to licensing requirements, the Oracle JDBC driver file that Debezium requires to connect to an Oracle database is not included in the Debezium Oracle connector archive. The driver is available for download from Maven Central. Depending on the deployment method that you use, you retrieve the driver by adding a command to the Kafka Connect custom resource or to the Dockerfile that you use to build the connector image.
- If you use Streams for Apache Kafka to add the connector to your Kafka Connect image, add the Maven Central location for the driver to builds.plugins.artifact.url in the KafkaConnect custom resource as shown in Section 2.5.5.3, "Using Streams for Apache Kafka to deploy a Debezium Oracle connector".
- If you use a Dockerfile to build a container image for the connector, insert a curl command in the Dockerfile to specify the URL for downloading the required driver file from Maven Central. For more information, see Deploying a Debezium Oracle connector by building a custom Kafka Connect container image from a Dockerfile.
2.5.5.2. Debezium Oracle connector deployment using Streams for Apache Kafka
Beginning with Debezium 1.7, the preferred method for deploying a Debezium connector is to use Streams for Apache Kafka to build a Kafka Connect container image that includes the connector plug-in.
During the deployment process, you create and use the following custom resources (CRs):
- A KafkaConnect CR that defines your Kafka Connect instance and includes information about the connector artifacts that need to be included in the image.
- A KafkaConnector CR that provides details, including the information that the connector uses to access the source database. After Streams for Apache Kafka starts the Kafka Connect pod, you start the connector by applying the KafkaConnector CR.
In the build specification for the Kafka Connect image, you can specify the connectors that are available to deploy. For each connector plug-in, you can also specify other components that you want to make available for deployment. For example, you can add Apicurio Registry artifacts, or the Debezium scripting component. When Streams for Apache Kafka builds the Kafka Connect image, it downloads the specified artifacts, and incorporates them into the image.
The spec.build.output parameter in the KafkaConnect CR specifies where to store the resulting Kafka Connect container image. Container images can be stored in a Docker registry, or in an OpenShift ImageStream. To store images in an ImageStream, you must create the ImageStream before you deploy Kafka Connect. ImageStreams are not created automatically.
If you use a KafkaConnect resource to create a cluster, afterwards you cannot use the Kafka Connect REST API to create or update connectors. You can still use the REST API to retrieve information.
Additional resources
- Configuring Kafka Connect in Deploying and Managing Streams for Apache Kafka on OpenShift.
- Building a new container image automatically in Deploying and Managing Streams for Apache Kafka on OpenShift.
2.5.5.3. Using Streams for Apache Kafka to deploy a Debezium Oracle connector
With earlier versions of Streams for Apache Kafka, to deploy Debezium connectors on OpenShift, you were required to first build a Kafka Connect image for the connector. The current preferred method for deploying connectors on OpenShift is to use a build configuration in Streams for Apache Kafka to automatically build a Kafka Connect container image that includes the Debezium connector plug-ins that you want to use.
During the build process, the Streams for Apache Kafka Operator transforms input parameters in a KafkaConnect
custom resource, including Debezium connector definitions, into a Kafka Connect container image. The build downloads the necessary artifacts from the Red Hat Maven repository or another configured HTTP server.
The newly created container is pushed to the container registry that is specified in .spec.build.output
, and is used to deploy a Kafka Connect cluster. After Streams for Apache Kafka builds the Kafka Connect image, you create KafkaConnector
custom resources to start the connectors that are included in the build.
Prerequisites
- You have access to an OpenShift cluster on which the cluster Operator is installed.
- The Streams for Apache Kafka Operator is running.
- An Apache Kafka cluster is deployed as documented in Deploying and Managing Streams for Apache Kafka on OpenShift.
- Kafka Connect is deployed on Streams for Apache Kafka
- You have a Red Hat build of Debezium license.
- The OpenShift oc CLI client is installed, or you have access to the OpenShift Container Platform web console.

Depending on how you intend to store the Kafka Connect build image, you need registry permissions or you must create an ImageStream resource:
- To store the build image in an image registry, such as Red Hat Quay.io or Docker Hub
- An account and permissions to create and manage images in the registry.
- To store the build image as a native OpenShift ImageStream
- An ImageStream resource is deployed to the cluster for storing new container images. You must explicitly create an ImageStream for the cluster. ImageStreams are not available by default. For more information about ImageStreams, see Managing image streams on OpenShift Container Platform.
Procedure
- Log in to the OpenShift cluster.
- Create a Debezium KafkaConnect custom resource (CR) for the connector, or modify an existing one. For example, create a KafkaConnect CR with the name dbz-connect.yaml that specifies the metadata.annotations and spec.build properties. The following example shows an excerpt from a dbz-connect.yaml file that describes a KafkaConnect custom resource.
Example 2.34. A dbz-connect.yaml file that defines a KafkaConnect custom resource that includes a Debezium connector

In the example that follows, the custom resource is configured to download the following artifacts:
- The Debezium Oracle connector archive.
- The Red Hat build of Apicurio Registry archive. The Apicurio Registry is an optional component. Add the Apicurio Registry component only if you intend to use Avro serialization with the connector.
- The Debezium scripting SMT archive and the associated language dependencies that you want to use with the Debezium connector. The SMT archive and language dependencies are optional components. Add these components only if you intend to use the Debezium content-based routing SMT or filter SMT.
- The Oracle JDBC driver, which is required to connect to Oracle databases, but is not included in the connector archive.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: debezium-kafka-connect-cluster
  annotations:
    strimzi.io/use-connector-resources: "true" 1
spec:
  version: 3.6.0
  build: 2
    output: 3
      type: imagestream 4
      image: debezium-streams-connect:latest
    plugins: 5
      - name: debezium-connector-oracle
        artifacts:
          - type: zip 6
            url: https://maven.repository.redhat.com/ga/io/debezium/debezium-connector-oracle/2.7.3.Final-redhat-00001/debezium-connector-oracle-2.7.3.Final-redhat-00001-plugin.zip 7
          - type: zip
            url: https://maven.repository.redhat.com/ga/io/apicurio/apicurio-registry-distro-connect-converter/2.4.4.Final-redhat-<build-number>/apicurio-registry-distro-connect-converter-2.4.4.Final-redhat-<build-number>.zip 8
          - type: zip
            url: https://maven.repository.redhat.com/ga/io/debezium/debezium-scripting/2.7.3.Final-redhat-00001/debezium-scripting-2.7.3.Final-redhat-00001.zip 9
          - type: jar
            url: https://repo1.maven.org/maven2/org/apache/groovy/groovy/3.0.11/groovy-3.0.11.jar 10
          - type: jar
            url: https://repo1.maven.org/maven2/org/apache/groovy/groovy-jsr223/3.0.11/groovy-jsr223-3.0.11.jar
          - type: jar
            url: https://repo1.maven.org/maven2/org/apache/groovy/groovy-json/3.0.11/groovy-json-3.0.11.jar
          - type: jar 11
            url: https://repo1.maven.org/maven2/com/oracle/database/jdbc/ojdbc8/21.6.0.0/ojdbc8-21.6.0.0.jar
  bootstrapServers: debezium-kafka-cluster-kafka-bootstrap:9093
  ...
Table 2.120. Descriptions of Kafka Connect configuration settings
Item | Description |
---|---|
1 | Sets the strimzi.io/use-connector-resources annotation to "true" to enable the Cluster Operator to use KafkaConnector resources to configure connectors in this Kafka Connect cluster. |
2 | The spec.build configuration specifies where to store the build image and lists the plug-ins to include in the image, along with the location of the plug-in artifacts. |
3 | The build.output specifies the registry in which the newly built image is stored. |
4 | Specifies the name and image name for the image output. Valid values for output.type are docker to push into a container registry such as Docker Hub or Quay, or imagestream to push the image to an internal OpenShift ImageStream. To use an ImageStream, an ImageStream resource must be deployed to the cluster. For more information about specifying the build.output in the KafkaConnect configuration, see the Streams for Apache Kafka Build schema reference in {NameConfiguringStreamsOpenShift}. |
5 | The plugins configuration lists all of the connectors that you want to include in the Kafka Connect image. For each entry in the list, specify a plug-in name, and information about the artifacts that are required to build the connector. Optionally, for each connector plug-in, you can include other components that you want to be available for use with the connector. For example, you can add Service Registry artifacts, or the Debezium scripting component. |
6 | The value of artifacts.type specifies the file type of the artifact specified in the artifacts.url. Valid types are zip, tgz, or jar. Debezium connector archives are provided in .zip file format. JDBC driver files are in .jar format. The type value must match the type of the file that is referenced in the url field. |
7 | The value of artifacts.url specifies the address of an HTTP server, such as a Maven repository, that stores the file for the connector artifact. Debezium connector artifacts are available in the Red Hat Maven repository. The OpenShift cluster must have access to the specified server. |
8 | (Optional) Specifies the artifact type and url for downloading the Apicurio Registry component. Include the Apicurio Registry artifact only if you want the connector to use Apache Avro to serialize event keys and values with the Red Hat build of Apicurio Registry, instead of using the default JSON converter. |
9 | (Optional) Specifies the artifact type and url for the Debezium scripting SMT archive to use with the Debezium connector. Include the scripting SMT only if you intend to use the Debezium content-based routing SMT or filter SMT. To use the scripting SMT, you must also deploy a JSR 223-compliant scripting implementation, such as groovy. |
10 | (Optional) Specifies the artifact type and url for the JAR files of a JSR 223-compliant scripting implementation, which is required by the Debezium scripting SMT. Important: If you use Streams for Apache Kafka to incorporate the connector plug-in into your Kafka Connect image, for each of the required scripting language components, artifacts.url must specify the location of a JAR file, and the value of artifacts.type must also be set to jar. Invalid values cause the connector to fail at runtime. To enable use of the Apache Groovy language with the scripting SMT, the custom resource in the example retrieves JAR files for the following libraries: groovy, groovy-jsr223 (scripting agent), and groovy-json (module for parsing JSON strings). The Debezium scripting SMT also supports the use of the JSR 223 implementation of GraalVM JavaScript. |
11 | Specifies the location of the Oracle JDBC driver in Maven Central. The required driver is not included in the Debezium Oracle connector archive. |
Apply the KafkaConnect build specification to the OpenShift cluster by entering the following command:

oc create -f dbz-connect.yaml

Based on the configuration specified in the custom resource, the Streams Operator prepares a Kafka Connect image to deploy. After the build completes, the Operator pushes the image to the specified registry or ImageStream, and starts the Kafka Connect cluster. The connector artifacts that you listed in the configuration are available in the cluster.
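The build can take several minutes. As an optional check, the following command is a minimal sketch, assuming the KafkaConnect resource name debezium-kafka-connect-cluster and the debezium namespace from the preceding example; it waits until the Kafka Connect cluster reports a Ready condition:

oc wait kafkaconnect/debezium-kafka-connect-cluster -n debezium --for=condition=Ready --timeout=600s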
Create a KafkaConnector resource to define an instance of each connector that you want to deploy.
For example, create the following KafkaConnector CR, and save it as oracle-inventory-connector.yaml.
Example 2.35. oracle-inventory-connector.yaml file that defines the KafkaConnector custom resource for a Debezium connector

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  labels:
    strimzi.io/cluster: debezium-kafka-connect-cluster
  name: inventory-connector-oracle 1
spec:
  class: io.debezium.connector.oracle.OracleConnector 2
  tasksMax: 1 3
  config: 4
    schema.history.internal.kafka.bootstrap.servers: debezium-kafka-cluster-kafka-bootstrap.debezium.svc.cluster.local:9092
    schema.history.internal.kafka.topic: schema-changes.inventory
    database.hostname: oracle.debezium-oracle.svc.cluster.local 5
    database.port: 1521 6
    database.user: debezium 7
    database.password: dbz 8
    database.dbname: mydatabase 9
    topic.prefix: inventory-connector-oracle 10
    table.include.list: PUBLIC.INVENTORY 11
    ...
Table 2.121. Descriptions of connector configuration settings Item Description 1
The name of the connector to register with the Kafka Connect cluster.
2
The name of the connector class.
3
The number of tasks that can operate concurrently.
4
The connector’s configuration.
5
The address of the host database instance.
6
The port number of the database instance.
7
The name of the account that Debezium uses to connect to the database.
8
The password that Debezium uses to connect to the database user account.
9
The name of the database to capture changes from.
10
The topic prefix for the database instance or cluster.
The specified name must be formed only from alphanumeric characters or underscores.
Because the topic prefix is used as the prefix for any Kafka topics that receive change events from this connector, the name must be unique among the connectors in the cluster.
This namespace is also used in the names of related Kafka Connect schemas, and the namespaces of a corresponding Avro schema if you integrate the connector with the Avro converter.
11
The list of tables from which the connector captures change events.
Create the connector resource by running the following command:
oc create -n <namespace> -f <kafkaConnector>.yaml
For example,
oc create -n debezium -f oracle-inventory-connector.yaml
The connector is registered to the Kafka Connect cluster and starts to run against the database that is specified by spec.config.database.dbname in the KafkaConnector CR. After the connector pod is ready, Debezium is running.
You are now ready to verify the Debezium Oracle deployment.
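As a quick preliminary check, before you run the full verification procedure in Section 2.5.5.6, you can list the connector resource. The following sketch assumes the connector name and namespace that are used in the preceding examples; the READY column in the output should report True:

oc get kafkaconnector inventory-connector-oracle -n debezium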
2.5.5.4. Deploying a Debezium Oracle connector by building a custom Kafka Connect container image from a Dockerfile
To deploy a Debezium Oracle connector, you must build a custom Kafka Connect container image that contains the Debezium connector archive, and then push this container image to a container registry. You then need to create the following custom resources (CRs):
- A KafkaConnect CR that defines your Kafka Connect instance. The image property in the CR specifies the name of the container image that you create to run your Debezium connector. You apply this CR to the OpenShift instance where Red Hat Streams for Apache Kafka is deployed. Streams for Apache Kafka offers operators and images that bring Apache Kafka to OpenShift.
- A KafkaConnector CR that defines your Debezium Oracle connector. Apply this CR to the same OpenShift instance where you apply the KafkaConnect CR.
Prerequisites
- Oracle Database is running and you completed the steps to set up Oracle to work with a Debezium connector.
- Streams for Apache Kafka is deployed on OpenShift and is running Apache Kafka and Kafka Connect. For more information, see Deploying and Managing Streams for Apache Kafka on OpenShift
- Podman or Docker is installed.
- You have an account and permissions to create and manage containers in the container registry (such as quay.io or docker.io) to which you plan to add the container that will run your Debezium connector.
- The Kafka Connect server has access to Maven Central to download the required JDBC driver for Oracle. You can also use a local copy of the driver, or one that is available from a local Maven repository or other HTTP server.
For more information, see Obtaining the Oracle JDBC driver.
Procedure
Create the Debezium Oracle container for Kafka Connect:
Create a Dockerfile that uses registry.redhat.io/amq-streams-kafka-35-rhel8:2.5.0 as the base image. For example, from a terminal window, enter the following command:

cat <<EOF >debezium-container-for-oracle.yaml 1
FROM registry.redhat.io/amq-streams-kafka-35-rhel8:2.5.0
USER root:root
RUN mkdir -p /opt/kafka/plugins/debezium 2
RUN cd /opt/kafka/plugins/debezium/ \
  && curl -O https://maven.repository.redhat.com/ga/io/debezium/debezium-connector-oracle/2.7.3.Final-redhat-00001/debezium-connector-oracle-2.7.3.Final-redhat-00001-plugin.zip \
  && unzip debezium-connector-oracle-2.7.3.Final-redhat-00001-plugin.zip \
  && rm debezium-connector-oracle-2.7.3.Final-redhat-00001-plugin.zip
RUN cd /opt/kafka/plugins/debezium/ \
  && curl -O https://repo1.maven.org/maven2/com/oracle/ojdbc/ojdbc8/21.1.0.0/ojdbc8-21.1.0.0.jar
USER 1001
EOF
Item Description 1
You can specify any file name that you want.
2
Specifies the path to your Kafka Connect plug-ins directory. If your Kafka Connect plug-ins directory is in a different location, replace this path with the actual path of your directory.
The command creates a Dockerfile with the name debezium-container-for-oracle.yaml in the current directory.

Build the container image from the debezium-container-for-oracle.yaml Dockerfile that you created in the previous step. From the directory that contains the file, open a terminal window and enter one of the following commands:

podman build -t debezium-container-for-oracle:latest .
docker build -t debezium-container-for-oracle:latest .
The preceding commands build a container image with the name debezium-container-for-oracle.

Push your custom image to a container registry, such as quay.io or an internal container registry. The container registry must be available to the OpenShift instance where you want to deploy the image. Enter one of the following commands:
podman push <myregistry.io>/debezium-container-for-oracle:latest
docker push <myregistry.io>/debezium-container-for-oracle:latest
Create a new Debezium Oracle KafkaConnect custom resource (CR). For example, create a KafkaConnect CR with the name dbz-connect.yaml that specifies annotations and image properties. The following example shows an excerpt from a dbz-connect.yaml file that describes a KafkaConnect custom resource.

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect-cluster
  annotations:
    strimzi.io/use-connector-resources: "true" 1
spec:
  image: debezium-container-for-oracle 2
  ...
Item Description
1. metadata.annotations indicates to the Cluster Operator that KafkaConnector resources are used to configure connectors in this Kafka Connect cluster.
2. spec.image specifies the name of the image that you created to run your Debezium connector. This property overrides the STRIMZI_DEFAULT_KAFKA_CONNECT_IMAGE variable in the Cluster Operator.

Apply the KafkaConnect CR to the OpenShift Kafka Connect environment by entering the following command:

oc create -f dbz-connect.yaml
The command adds a Kafka Connect instance that specifies the name of the image that you created to run your Debezium connector.
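Optionally, verify that the Kafka Connect pods start with the custom image. The following command is a sketch that assumes the my-connect-cluster name from the example; it selects the pods by the strimzi.io/cluster label that the Cluster Operator applies:

oc get pods -l strimzi.io/cluster=my-connect-cluster -n <namespace>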
Create a KafkaConnector custom resource that configures your Debezium Oracle connector instance.

You configure a Debezium Oracle connector in a .yaml file that specifies the configuration properties for the connector. The connector configuration might instruct Debezium to produce events for a subset of the schemas and tables, or it might set properties so that Debezium ignores, masks, or truncates values in specified columns that are sensitive, too large, or not needed.

The following example configures a Debezium connector that connects to an Oracle host IP address, on port 1521. This host has a database named ORCLCDB, and server1 is the server’s logical name.

Oracle inventory-connector.yaml

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: inventory-connector-oracle 1
  labels:
    strimzi.io/cluster: my-connect-cluster
  annotations:
    strimzi.io/use-connector-resources: 'true'
spec:
  class: io.debezium.connector.oracle.OracleConnector 2
  config:
    database.hostname: <oracle_ip_address> 3
    database.port: 1521 4
    database.user: c##dbzuser 5
    database.password: dbz 6
    database.dbname: ORCLCDB 7
    database.pdb.name: ORCLPDB1 8
    topic.prefix: inventory-connector-oracle 9
    schema.history.internal.kafka.bootstrap.servers: kafka:9092 10
    schema.history.internal.kafka.topic: schema-changes.inventory 11
Table 2.122. Descriptions of connector configuration settings Item Description 1
The name of our connector when we register it with a Kafka Connect service.
2
The name of this Oracle connector class.
3
The address of the Oracle instance.
4
The port number of the Oracle instance.
5
The name of the Oracle user, as specified in Creating users for the connector.
6
The password for the Oracle user, as specified in Creating users for the connector.
7
The name of the database to capture changes from.
8
The name of the Oracle pluggable database that the connector captures changes from. Used in container database (CDB) installations only.
9
Topic prefix identifies and provides a namespace for the Oracle database server from which the connector captures changes.
10
The list of Kafka brokers that this connector uses to write and recover DDL statements to the database schema history topic.
11
The name of the database schema history topic where the connector writes and recovers DDL statements. This topic is for internal use only and should not be used by consumers.
Create your connector instance with Kafka Connect. For example, if you saved your KafkaConnector resource in the inventory-connector.yaml file, you would run the following command:

oc apply -f inventory-connector.yaml

The preceding command registers inventory-connector and the connector starts to run against the server1 database as defined in the KafkaConnector CR.
For the complete list of the configuration properties that you can set for the Debezium Oracle connector, see Oracle connector properties.
Results
After the connector starts, it performs a consistent snapshot of the Oracle databases that the connector is configured for. The connector then starts generating data change events for row-level operations and streaming the change event records to Kafka topics.
2.5.5.5. Configuration of container databases and non-container-databases
Oracle Database supports the following deployment types:
- Container database (CDB)
- A database that can contain multiple pluggable databases (PDBs). Database clients connect to each PDB as if it were a standard, non-CDB database.
- Non-container database (non-CDB)
- A standard Oracle database, which does not support the creation of pluggable databases.
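The deployment type determines how you set the connection properties in the connector configuration. The following excerpt is a sketch with placeholder database names; as described in the Oracle connector properties, database.dbname names the root container database, and database.pdb.name applies to CDB installations only:

# CDB deployment: connect to the root CDB and name the pluggable database
database.dbname: ORCLCDB
database.pdb.name: ORCLPDB1

# Non-CDB deployment: name the standard database only and omit database.pdb.name
database.dbname: <database_name>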
2.5.5.6. Verifying that the Debezium Oracle connector is running
If the connector starts correctly without errors, it creates a topic for each table that the connector is configured to capture. Downstream applications can subscribe to these topics to retrieve information about events that occur in the source database.
To verify that the connector is running, you perform the following operations from the OpenShift Container Platform web console, or through the OpenShift CLI tool (oc):
- Verify the connector status.
- Verify that the connector generates topics.
- Verify that topics are populated with events for read operations ("op":"r") that the connector generates during the initial snapshot of each table.
Prerequisites
- A Debezium connector is deployed to Streams for Apache Kafka on OpenShift.
- The OpenShift oc CLI client is installed.
- You have access to the OpenShift Container Platform web console.
Procedure
Check the status of the KafkaConnector resource by using one of the following methods:

From the OpenShift Container Platform web console:
- Navigate to Home → Search.
- On the Search page, click Resources to open the Select Resource box, and then type KafkaConnector.
- From the KafkaConnectors list, click the name of the connector that you want to check, for example inventory-connector-oracle.
- In the Conditions section, verify that the values in the Type and Status columns are set to Ready and True.
From a terminal window:
Enter the following command:
oc describe KafkaConnector <connector-name> -n <project>
For example,
oc describe KafkaConnector inventory-connector-oracle -n debezium
The command returns status information that is similar to the following output:
Example 2.36. KafkaConnector resource status

Name:         inventory-connector-oracle
Namespace:    debezium
Labels:       strimzi.io/cluster=debezium-kafka-connect-cluster
Annotations:  <none>
API Version:  kafka.strimzi.io/v1beta2
Kind:         KafkaConnector
...
Status:
  Conditions:
    Last Transition Time:  2021-12-08T17:41:34.897153Z
    Status:                True
    Type:                  Ready
  Connector Status:
    Connector:
      State:      RUNNING
      worker_id:  10.131.1.124:8083
    Name:         inventory-connector-oracle
    Tasks:
      Id:         0
      State:      RUNNING
      worker_id:  10.131.1.124:8083
    Type:         source
  Observed Generation:  1
  Tasks Max:            1
  Topics:
    inventory-connector-oracle.inventory
    inventory-connector-oracle.inventory.addresses
    inventory-connector-oracle.inventory.customers
    inventory-connector-oracle.inventory.geom
    inventory-connector-oracle.inventory.orders
    inventory-connector-oracle.inventory.products
    inventory-connector-oracle.inventory.products_on_hand
Events:  <none>
Verify that the connector created Kafka topics:

From the OpenShift Container Platform web console:
- Navigate to Home → Search.
- On the Search page, click Resources to open the Select Resource box, and then type KafkaTopic.
- From the KafkaTopics list, click the name of the topic that you want to check, for example, inventory-connector-oracle.inventory.orders---ac5e98ac6a5d91e04d8ec0dc9078a1ece439081d.
- In the Conditions section, verify that the values in the Type and Status columns are set to Ready and True.
From a terminal window:
Enter the following command:
oc get kafkatopics
The command returns status information that is similar to the following output:
Example 2.37. KafkaTopic resource status

NAME  CLUSTER  PARTITIONS  REPLICATION FACTOR  READY
connect-cluster-configs  debezium-kafka-cluster  1  1  True
connect-cluster-offsets  debezium-kafka-cluster  25  1  True
connect-cluster-status  debezium-kafka-cluster  5  1  True
consumer-offsets---84e7a678d08f4bd226872e5cdd4eb527fadc1c6a  debezium-kafka-cluster  50  1  True
inventory-connector-oracle--a96f69b23d6118ff415f772679da623fbbb99421  debezium-kafka-cluster  1  1  True
inventory-connector-oracle.inventory.addresses---1b6beaf7b2eb57d177d92be90ca2b210c9a56480  debezium-kafka-cluster  1  1  True
inventory-connector-oracle.inventory.customers---9931e04ec92ecc0924f4406af3fdace7545c483b  debezium-kafka-cluster  1  1  True
inventory-connector-oracle.inventory.geom---9f7e136091f071bf49ca59bf99e86c713ee58dd5  debezium-kafka-cluster  1  1  True
inventory-connector-oracle.inventory.orders---ac5e98ac6a5d91e04d8ec0dc9078a1ece439081d  debezium-kafka-cluster  1  1  True
inventory-connector-oracle.inventory.products---df0746db116844cee2297fab611c21b56f82dcef  debezium-kafka-cluster  1  1  True
inventory-connector-oracle.inventory.products_on_hand---8649e0f17ffcc9212e266e31a7aeea4585e5c6b5  debezium-kafka-cluster  1  1  True
schema-changes.inventory  debezium-kafka-cluster  1  1  True
strimzi-store-topic---effb8e3e057afce1ecf67c3f5d8e4e3ff177fc55  debezium-kafka-cluster  1  1  True
strimzi-topic-operator-kstreams-topic-store-changelog---b75e702040b99be8a9263134de3507fc0cc4017b  debezium-kafka-cluster  1  1  True
Check topic content.
- From a terminal window, enter the following command:
oc exec -n <project> -it <kafka-cluster> -- /opt/kafka/bin/kafka-console-consumer.sh \ > --bootstrap-server localhost:9092 \ > --from-beginning \ > --property print.key=true \ > --topic=<topic-name>
For example,
oc exec -n debezium -it debezium-kafka-cluster-kafka-0 -- /opt/kafka/bin/kafka-console-consumer.sh \ > --bootstrap-server localhost:9092 \ > --from-beginning \ > --property print.key=true \ > --topic=inventory-connector-oracle.inventory.products_on_hand
The format for specifying the topic name is the same as the oc describe command returns in Step 1, for example, inventory-connector-oracle.inventory.addresses.

For each event in the topic, the command returns information that is similar to the following output:
Example 2.38. Content of a Debezium change event
{"schema":{"type":"struct","fields":[{"type":"int32","optional":false,"field":"product_id"}],"optional":false,"name":"inventory-connector-oracle.inventory.products_on_hand.Key"},"payload":{"product_id":101}} {"schema":{"type":"struct","fields":[{"type":"struct","fields":[{"type":"int32","optional":false,"field":"product_id"},{"type":"int32","optional":false,"field":"quantity"}],"optional":true,"name":"inventory-connector-oracle.inventory.products_on_hand.Value","field":"before"},{"type":"struct","fields":[{"type":"int32","optional":false,"field":"product_id"},{"type":"int32","optional":false,"field":"quantity"}],"optional":true,"name":"inventory-connector-oracle.inventory.products_on_hand.Value","field":"after"},{"type":"struct","fields":[{"type":"string","optional":false,"field":"version"},{"type":"string","optional":false,"field":"connector"},{"type":"string","optional":false,"field":"name"},{"type":"int64","optional":false,"field":"ts_ms"},{"type":"int64","optional":false,"field":"ts_us"},{"type":"int64","optional":false,"field":"ts_ns"},{"type":"string","optional":true,"name":"io.debezium.data.Enum","version":1,"parameters":{"allowed":"true,last,false"},"default":"false","field":"snapshot"},{"type":"string","optional":false,"field":"db"},{"type":"string","optional":true,"field":"sequence"},{"type":"string","optional":true,"field":"table"},{"type":"int64","optional":false,"field":"server_id"},{"type":"string","optional":true,"field":"gtid"},{"type":"string","optional":false,"field":"file"},{"type":"int64","optional":false,"field":"pos"},{"type":"int32","optional":false,"field":"row"},{"type":"int64","optional":true,"field":"thread"},{"type":"string","optional":true,"field":"query"}],"optional":false,"name":"io.debezium.connector.oracle.Source","field":"source"},{"type":"string","optional":false,"field":"op"},{"type":"int64","optional":true,"field":"ts_ms"},{"type":"int64","optional":true,"field":"ts_us"},{"type":"int64","optional":true,"field":"ts_ns"},{"type":"struct","fields":[{"type":"string","optional":false,"field":"id"},{"type":"int64","optional":false,"field":"total_order"},{"type":"int64","optional":false,"field":"data_collection_order"}],"optional":true,"field":"transaction"}],"optional":false,"name":"inventory-connector-oracle.inventory.products_on_hand.Envelope"},"payload":{"before":null,"after":{"product_id":101,"quantity":3},"source":{"version":"2.7.3.Final-redhat-00001","connector":"oracle","name":"inventory-connector-oracle","ts_ms":1638985247805,"ts_us":1638985247805000000,"ts_ns":1638985247805000000,"snapshot":"true","db":"inventory","sequence":null,"table":"products_on_hand","server_id":0,"gtid":null,"file":"oracle-bin.000003","pos":156,"row":0,"thread":null,"query":null},"op":"r","ts_ms":1638985247805,"ts_us":1638985247805102,"ts_ns":1638985247805102588,"transaction":null}}
In the preceding example, the payload value shows that the connector snapshot generated a read ("op" = "r") event from the table inventory.products_on_hand. The "before" state of the product_id record is null, indicating that no previous value exists for the record. The "after" state shows a quantity of 3 for the item with product_id 101.
2.5.6. Descriptions of Debezium Oracle connector configuration properties
The Debezium Oracle connector has numerous configuration properties that you can use to achieve the right connector behavior for your application. Many properties have default values. Information about the properties is organized as follows:
- Required Debezium Oracle connector configuration properties
- Database schema history connector configuration properties that control how Debezium processes events that it reads from the database schema history topic.
- Pass-through Oracle connector configuration properties
- Pass-through database schema history properties for configuring producer and consumer clients
- Pass-through Kafka signals configuration properties
- Pass-through Kafka signals consumer client configuration properties
- Pass-through sink notification configuration properties
- Pass-through database driver configuration properties
Required Debezium Oracle connector configuration properties
The following configuration properties are required unless a default value is available.
Property | Default | Description |
No default | Unique name for the connector. Attempting to register again with the same name will fail. (This property is required by all Kafka Connect connectors.) | |
No default |
The name of the Java class for the connector. Always use a value of | |
No default |
Enumerates a comma-separated list of the symbolic names of the custom converter instances that the connector can use.
For each converter that you configure for a connector, you must also add a .type property that specifies the fully-qualified name of the class that implements the converter interface.
For example, boolean.type: io.debezium.connector.oracle.converters.NumberOneToBooleanConverter
If you want to further control the behavior of a configured converter, you can add one or more configuration parameters to pass values to the converter. To associate any additional configuration parameters with a converter, prefix the parameter names with the symbolic name of the converter. boolean.selector: .*MYTABLE.FLAG,.*.IS_ARCHIVED | |
| The maximum number of tasks to create for this connector. The Oracle connector always uses a single task and therefore does not use this value, so the default is always acceptable. | |
No default | IP address or hostname of the Oracle database server. | |
No default | Integer port number of the Oracle database server. | |
No default | Name of the Oracle user account that the connector uses to connect to the Oracle database server. | |
No default | Password to use when connecting to the Oracle database server. | |
No default | Name of the database to connect to. In a container database environment, specify the name of the root container database (CDB), not the name of an included pluggable database (PDB). | |
No default | Specifies the raw database JDBC URL. Use this property to provide flexibility in defining that database connection. Valid values include raw TNS names and RAC connection strings. | |
No default | Name of the Oracle pluggable database to connect to. Use this property with container database (CDB) installations only. | |
No default |
Topic prefix that provides a namespace for the Oracle database server from which the connector captures changes. The value that you set is used as a prefix for all Kafka topic names that the connector emits. Specify a topic prefix that is unique among all connectors in your Debezium environment. The following characters are valid: alphanumeric characters, hyphens, dots, and underscores. Warning Do not change the value of this property. If you change the name value, after a restart, instead of continuing to emit events to the original topics, the connector emits subsequent events to topics whose names are based on the new value. The connector is also unable to recover its database schema history topic. | |
| The adapter implementation that the connector uses when it streams database changes. You can set the following values:
| |
initial | Specifies the mode that the connector uses to take snapshots of a captured table. You can set the following values:
After the snapshot is complete, the connector continues to read change events from the database’s redo logs except when
For more information, see the table of | |
shared | Controls whether and for how long the connector holds a table lock. Table locks prevent certain types of change operations from occurring while the connector performs a snapshot. You can set the following values:
| |
|
Specifies how the connector queries data while performing a snapshot.
This setting enables you to manage snapshot content in a more flexible manner compared to using the | |
All tables specified in the connector’s |
An optional, comma-separated list of regular expressions that match the fully-qualified names (
In a multitenant container database (CDB) environment, the regular expression must include the pluggable database (PDB) name, using the format
To match the name of a table, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the table; it does not match substrings that might be present in a table name.
A snapshot can only include tables that are named in the connector’s
This property takes effect only if the connector’s | |
No default | Specifies the table rows to include in a snapshot. Use the property if you want a snapshot to include only a subset of the rows in a table. This property affects snapshots only. It does not apply to events that the connector reads from the log.
The property contains a comma-separated list of fully-qualified table names in the form
From a "snapshot.select.statement.overrides": "customer.orders", "snapshot.select.statement.overrides.customer.orders": "SELECT * FROM customers.orders WHERE delete_flag = 0 ORDER BY id DESC"
In the resulting snapshot, the connector includes only the records for which | |
No default |
An optional, comma-separated list of regular expressions that match names of schemas for which you want to capture changes. Only POSIX regular expressions are valid. Any schema name not included in
To match the name of a schema, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the schema; it does not match substrings that might be present in a schema name. | |
| Boolean value that specifies whether the connector should parse and publish table and column comments on metadata objects. Enabling this option has implications for memory usage. The number and size of logical schema objects largely determines how much memory the Debezium connectors consume, and adding potentially large string data to each of them can be quite expensive. | |
No default |
An optional, comma-separated list of regular expressions that match names of schemas for which you do not want to capture changes. Only POSIX regular expressions are valid. Any schema whose name is not included in
To match the name of a schema, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the schema; it does not match substrings that might be present in a schema name. | |
No default |
An optional comma-separated list of regular expressions that match fully-qualified table identifiers for tables to be captured. Only POSIX regular expressions are valid. When this property is set, the connector captures changes only from the specified tables. Each table identifier uses the following format:
To match the name of a table, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the table; it does not match substrings that might be present in a table name. | |
No default |
An optional comma-separated list of regular expressions that match fully-qualified table identifiers for tables to be excluded from monitoring. Only POSIX regular expressions are valid. The connector captures change events from any table that is not specified in the exclude list. Specify the identifier for each table using the following format:
To match the name of a table, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the table; it does not match substrings that might be present in a table name. | |
No default |
An optional, comma-separated list of regular expressions that match the fully-qualified names of columns that you want to include in the change event message values. Only POSIX regular expressions are valid. Fully-qualified names for columns use the following format:

To match the name of a column, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the column; it does not match substrings that might be present in a column name. | |
No default |
An optional, comma-separated list of regular expressions that match the fully-qualified names of columns that you want to exclude from change event message values. Only POSIX regular expressions are valid. Fully-qualified column names use the following format:
To match the name of a column, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the column; it does not match substrings that might be present in a column name. | |
|
Specifies whether to skip publishing messages when there is no change in included columns. This would essentially filter messages if there is no change in columns included as per | |
| n/a |
An optional, comma-separated list of regular expressions that match the fully-qualified names of character-based columns. Fully-qualified names for columns are of the form
A pseudonym consists of the hashed value that results from applying the specified hashAlgorithm and salt. Based on the hash function that is used, referential integrity is maintained, while column values are replaced with pseudonyms. Supported hash functions are described in the MessageDigest section of the Java Cryptography Architecture Standard Algorithm Name Documentation. column.mask.hash.SHA-256.with.salt.CzQMA0cB5K = inventory.orders.customerName, inventory.shipment.customerName
If necessary, the pseudonym is automatically shortened to the length of the column. The connector configuration can include multiple properties that specify different hash algorithms and salts. |
bytes |
Specifies how binary ( | |
none |
Specifies how schema names should be adjusted for compatibility with the message converter used by the connector. Possible settings:
| |
none |
Specifies how field names should be adjusted for compatibility with the message converter used by the connector. Possible settings:
See Avro naming for more details. | |
|
Specifies how the connector should handle floating point values for
| |
|
Specifies how the connector should handle values for | |
| Specifies how the connector should react to exceptions during processing of events. You can set one of the following options:
| |
| A positive integer value that specifies the maximum size of each batch of events to process during each iteration of this connector. | |
|
Positive integer value that specifies the maximum number of records that the blocking queue can hold. When Debezium reads events streamed from the database, it places the events in the blocking queue before it writes them to Kafka. The blocking queue can provide backpressure for reading change events from the database in cases where the connector ingests messages faster than it can write them to Kafka, or when Kafka becomes unavailable. Events that are held in the queue are disregarded when the connector periodically records offsets. Always set the value of | |
|
A long integer value that specifies the maximum volume of the blocking queue in bytes. By default, volume limits are not specified for the blocking queue. To specify the number of bytes that the queue can consume, set this property to a positive long value. | |
| Positive integer value that specifies the number of milliseconds the connector should wait during each iteration for new change events to appear. | |
| Controls whether a delete event is followed by a tombstone event. The following values are possible:
After a source record is deleted, a tombstone event (the default behavior) enables Kafka to completely delete all events that share the key of the deleted row in topics that have log compaction enabled. | |
No default | A list of expressions that specify the columns that the connector uses to form custom message keys for change event records that it publishes to the Kafka topics for specified tables.
By default, Debezium uses the primary key column of a table as the message key for records that it emits. In place of the default, or to specify a key for tables that lack a primary key, you can configure custom message keys based on one or more columns. | |
No default |
An optional, comma-separated list of regular expressions that match the fully-qualified names of character-based columns. Set this property if you want the connector to mask the values for a set of columns, for example, if they contain sensitive data. Set
The fully-qualified name of a column observes the following format: You can specify multiple properties with different lengths in a single configuration. | |
No default |
An optional comma-separated list of regular expressions for masking column names in change event messages by replacing characters with asterisks ( | |
No default | An optional, comma-separated list of regular expressions that match the fully-qualified names of columns for which you want the connector to emit extra parameters that represent column metadata. When this property is set, the connector adds the following fields to the schema of event records:
These parameters propagate a column’s original type name and length (for variable-width types), respectively.
The fully-qualified name of a column observes one of the following formats: | |
No default | An optional, comma-separated list of regular expressions that specify the fully-qualified names of data types that are defined for columns in a database. When this property is set, for columns with matching data types, the connector emits event records that include the following extra fields in their schema:
These parameters propagate a column’s original type name and length (for variable-width types), respectively.
The fully-qualified name of a column observes one of the following formats: For the list of Oracle-specific data type names, see the Oracle data type mappings. | |
|
Specifies, in milliseconds, how frequently the connector sends messages to a heartbeat topic. | |
No default |
Specifies a query that the connector executes on the source database when the connector sends a heartbeat message. Set this property and create a heartbeat table to receive the heartbeat messages to resolve situations in which Debezium fails to synchronize offsets on low-traffic databases that are on the same host as a high-traffic database. After the connector inserts records into the configured table, it is able to receive changes from the low-traffic database and acknowledge SCN changes in the database, so that offsets can be synchronized with the broker. | |
No default |
Specifies an interval in milliseconds that the connector waits after it starts before it takes a snapshot. | |
0 |
Specifies the time, in milliseconds, that the connector delays the start of the streaming process after it completes a snapshot. Setting a delay interval helps to prevent the connector from restarting snapshots in the event that a failure occurs immediately after the snapshot completes, but before the streaming process begins. Set a delay value that is higher than the value of the | |
| Specifies the maximum number of rows that should be read in one go from each table while taking a snapshot. The connector reads table contents in multiple batches of the specified size. | |
|
Specifies the number of rows that will be fetched for each database round-trip of a given query. Using a value of | |
|
Set the property to See Transaction Metadata for additional details. | |
|
Specifies the mining strategy that controls how Oracle LogMiner builds and uses a given data dictionary for resolving table and column ids to names. | |
|
Specifies the mining query mode that controls how the Oracle LogMiner query is built. | |
|
The buffer type controls how the connector manages buffering transaction data. | |
|
The maximum number of milliseconds that a LogMiner session can be active before a new session is used. | |
|
Specifies whether the JDBC connection will be closed and re-opened on log switches or when mining session has reached maximum lifetime threshold. | |
| The minimum SCN interval size that this connector attempts to read from redo/archive logs. Active batch size is also increased/decreased by this amount for tuning connector throughput when needed. | |
| The maximum SCN interval size that this connector uses when reading from redo/archive logs. | |
| The starting SCN interval size that the connector uses for reading data from redo/archive logs. This also serves as a measure for adjusting batch size: when the difference between the current SCN and the beginning/end SCN of the batch is bigger than this value, the batch size is increased or decreased. | |
| The minimum amount of time that the connector sleeps after reading data from redo/archive logs and before it starts reading data again. Value is in milliseconds. | |
| The maximum amount of time that the connector sleeps after reading data from redo/archive logs and before it starts reading data again. Value is in milliseconds. | |
| The starting amount of time that the connector sleeps after reading data from redo/archive logs and before it starts reading data again. Value is in milliseconds. | |
| The maximum amount of time up or down that the connector uses to tune the optimal sleep time when reading data from LogMiner. Value is in milliseconds. | |
|
The number of hours in the past from SYSDATE to mine archive logs. When the default setting ( | |
|
Controls whether or not the connector mines changes from just archive logs or a combination of the online redo logs and archive logs (the default). | |
|
The number of milliseconds the connector will sleep in between polling to determine if the starting system change number is in the archive logs. If | |
|
Positive integer value that specifies the number of milliseconds to retain long running transactions between redo log switches. When set to By default, the LogMiner adapter maintains an in-memory buffer of all running transactions. Because all of the DML operations that are part of a transaction are buffered until a commit or rollback is detected, long-running transactions should be avoided in order to not overflow that buffer. Any transaction that exceeds this configured value is discarded entirely, and the connector does not emit any messages for the operations that were part of the transaction. | |
No default |
Specifies the configured Oracle archive destination to use when mining archive logs with LogMiner. | |
No default | List of database users to include from the LogMiner query. It can be useful to set this property if you want the capturing process to include changes from the specified users. | |
No default | List of database users to exclude from the LogMiner query. It can be useful to set this property if you want the capturing process to always exclude the changes that specific users make. | |
|
Specifies a value that the connector compares to the difference between the current and previous SCN values to determine whether an SCN gap exists. If the difference between the SCN values is greater than the specified value, and the time difference is smaller than | |
|
Specifies a value, in milliseconds, that the connector compares to the difference between the current and previous SCN timestamps to determine whether an SCN gap exists. If the difference between the timestamps is less than the specified value, and the SCN delta is greater than | |
|
Specifies the name of the flush table that coordinates flushing the Oracle LogWriter Buffer (LGWR) to the redo logs. This name can be specified using the format | |
|
Specifies whether the redo log constructed SQL statement is included in | |
|
Controls whether or not large object (CLOB or BLOB) column values are emitted in change events. Note Use of large object data types is a Technology Preview feature. | |
| Specifies the constant that the connector provides to indicate that the original value is unchanged and not provided by the database. | |
No default | A comma-separated list of Oracle Real Application Clusters (RAC) node host names or addresses. This field is required to enable compatibility with an Oracle RAC deployment. Specify the list of RAC nodes by using one of the following methods:
If you supply a raw JDBC URL for the database by using the | |
| A comma-separated list of the operation types that you want the connector to skip during streaming. You can configure the connector to skip the following types of operations:
By default, only truncate operations are skipped. | |
No default value |
Fully-qualified name of the data collection that is used to send signals to the connector. When you use this property with an Oracle pluggable database (PDB), set its value to the name of the root database. | |
source | List of the signaling channel names that are enabled for the connector. By default, the following channels are available:
| |
No default | List of notification channel names that are enabled for the connector. By default, the following channels are available:
| |
| The maximum number of rows that the connector fetches and reads into memory during an incremental snapshot chunk. Increasing the chunk size provides greater efficiency, because the snapshot runs fewer snapshot queries of a greater size. However, larger chunk sizes also require more memory to buffer the snapshot data. Adjust the chunk size to a value that provides the best performance in your environment. | |
|
Specifies the watermarking mechanism that the connector uses during an incremental snapshot to deduplicate events that might be captured by an incremental snapshot and then recaptured after streaming resumes.
| |
|
The name of the TopicNamingStrategy class that should be used to determine the topic name for data change, schema change, transaction, heartbeat event etc., defaults to | |
|
Specify the delimiter for topic name, defaults to | |
| The size of the bounded concurrent hash map that is used to hold topic names. This cache helps to determine the topic name that corresponds to a given data collection. | |
|
Controls the name of the topic to which the connector sends heartbeat messages. The topic name has this pattern: | |
|
Controls the name of the topic to which the connector sends transaction metadata messages. The topic name has this pattern: | |
| Specifies the number of threads that the connector uses when performing an initial snapshot. To enable parallel initial snapshots, set the property to a value greater than 1. In a parallel initial snapshot, the connector processes multiple tables concurrently. Important Parallel initial snapshots is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope. | |
snapshot.database.errors.max.retries |
|
Specifies the number of retry attempts to snapshot a table when a database error occurs. This configuration property currently only retries failures related to |
|
Defines tags that customize MBean object names by adding metadata that provides contextual information. Specify a comma-separated list of key-value pairs. Each key represents a tag for the MBean object name, and the corresponding value represents a value for the key, for example, The connector appends the specified tags to the base MBean object name. Tags can help you to organize and categorize metrics data. You can define tags to identify particular application instances, environments, regions, versions, and so forth. For more information, see Customized MBean names. | |
|
Specifies how the connector responds after an operation that results in a retriable error, such as a connection error.
| |
| Time to wait for a query to execute, given in milliseconds. Defaults to 600 seconds (600,000 ms); zero means there is no limit. |
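To illustrate how several of the preceding properties fit together, the following config excerpt from a KafkaConnector CR is a hedged sketch rather than a recommended configuration. The keys shown are the standard Debezium Oracle connector option names that the descriptions above refer to, and the table, column, and key names are hypothetical placeholders:

config:
  topic.prefix: inventory-connector-oracle
  database.dbname: ORCLCDB
  database.pdb.name: ORCLPDB1
  snapshot.mode: initial
  table.include.list: INVENTORY.CUSTOMERS,INVENTORY.ORDERS
  column.exclude.list: INVENTORY.CUSTOMERS.SSN
  message.key.columns: INVENTORY.ORDERS:ORDER_ID
  heartbeat.interval.ms: 30000
  skipped.operations: t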
Debezium Oracle connector database schema history configuration properties
Debezium provides a set of schema.history.internal.*
properties that control how the connector interacts with the schema history topic.
The following table describes the schema.history.internal
properties for configuring the Debezium connector.
Property | Default | Description |
---|---|---|
No default | The full name of the Kafka topic where the connector stores the database schema history. | |
No default | A list of host/port pairs that the connector uses for establishing an initial connection to the Kafka cluster. This connection is used for retrieving the database schema history previously stored by the connector, and for writing each DDL statement read from the source database. Each pair should point to the same Kafka cluster used by the Kafka Connect process. | |
| An integer value that specifies the maximum number of milliseconds the connector should wait during startup/recovery while polling for persisted data. The default is 100ms. | |
| An integer value that specifies the maximum number of milliseconds the connector should wait while fetching cluster information using Kafka admin client. | |
| An integer value that specifies the maximum number of milliseconds the connector should wait while creating the Kafka history topic using the Kafka admin client. | |
|
The maximum number of times that the connector should try to read persisted history data before the connector recovery fails with an error. The maximum amount of time to wait after receiving no data is | |
|
A Boolean value that specifies whether the connector should ignore malformed or unknown database statements or stop processing so a human can fix the issue. The safe default is | |
|
A Boolean value that specifies whether the connector records schema structures from all tables in a schema or database, or only from tables that are designated for capture.
| |
|
A Boolean value that specifies whether the connector records schema structures from all logical databases in the database instance.
|
Pass-through Oracle connector configuration properties
The connector supports pass-through properties that enable Debezium to specify custom configuration options for fine-tuning the behavior of the Apache Kafka producer and consumer. For information about the full range of configuration properties for Kafka producers and consumers, see the Kafka documentation.
Pass-through properties for configuring how producer and consumer clients interact with schema history topics
Debezium relies on an Apache Kafka producer to write schema changes to database schema history topics. Similarly, it relies on a Kafka consumer to read from database schema history topics when a connector starts. You define the configuration for the Kafka producer and consumer clients by assigning values to a set of pass-through configuration properties that begin with the schema.history.internal.producer.*
and schema.history.internal.consumer.*
prefixes. The pass-through producer and consumer database schema history properties control a range of behaviors, such as how these clients secure connections with the Kafka broker, as shown in the following example:
schema.history.internal.producer.security.protocol=SSL schema.history.internal.producer.ssl.keystore.location=/var/private/ssl/kafka.server.keystore.jks schema.history.internal.producer.ssl.keystore.password=test1234 schema.history.internal.producer.ssl.truststore.location=/var/private/ssl/kafka.server.truststore.jks schema.history.internal.producer.ssl.truststore.password=test1234 schema.history.internal.producer.ssl.key.password=test1234 schema.history.internal.consumer.security.protocol=SSL schema.history.internal.consumer.ssl.keystore.location=/var/private/ssl/kafka.server.keystore.jks schema.history.internal.consumer.ssl.keystore.password=test1234 schema.history.internal.consumer.ssl.truststore.location=/var/private/ssl/kafka.server.truststore.jks schema.history.internal.consumer.ssl.truststore.password=test1234 schema.history.internal.consumer.ssl.key.password=test1234
Debezium strips the prefix from the property name before it passes the property to the Kafka client.
For more information about Kafka producer configuration properties and Kafka consumer configuration properties, see the Apache Kafka documentation .
Pass-through properties for configuring how the Oracle connector interacts with the Kafka signaling topic
Debezium provides a set of signal.*
properties that control how the connector interacts with the Kafka signals topic.
The following table describes the Kafka signal
properties.
Property | Default | Description |
---|---|---|
<topic.prefix>-signal | The name of the Kafka topic that the connector monitors for ad hoc signals. Note If automatic topic creation is disabled, you must manually create the required signaling topic. A signaling topic is required to preserve signal ordering. The signaling topic must have a single partition. | |
kafka-signal | The name of the group ID that is used by Kafka consumers. | |
No default | A list of the host and port pairs that the connector uses to establish its initial connection to the Kafka cluster. Each pair references the Kafka cluster that is used by the Debezium Kafka Connect process. | |
| An integer value that specifies the maximum number of milliseconds that the connector waits when polling signals. | |
| Specifies whether the Kafka consumer writes an offset commit after it reads a message from the signaling topic. The value that you assign to this property determines whether the connector can process requests that the signaling topic receives while the connector is offline. Choose one of the following settings:
|
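As a sketch of how these settings appear in a connector configuration, the following excerpt assumes that the Kafka signaling channel is enabled. The topic and bootstrap values are placeholders, and the keys (signal.enabled.channels, signal.kafka.topic, signal.kafka.bootstrap.servers) are the standard Debezium names for the settings described above:

signal.enabled.channels: source,kafka
signal.kafka.topic: inventory-connector-oracle-signal
signal.kafka.bootstrap.servers: debezium-kafka-cluster-kafka-bootstrap:9092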
Pass-through properties for configuring the Kafka consumer client for the signaling channel
The Debezium connector provides for pass-through configuration of the signals Kafka consumer. Pass-through signals properties begin with the prefix signal.consumer.*. For example, the connector passes properties such as signal.consumer.security.protocol=SSL to the Kafka consumer.
Debezium strips the prefixes from the properties before it passes the properties to the Kafka signals consumer.
Pass-through properties for configuring the Oracle connector sink notification channel
The following table describes properties that you can use to configure the Debezium sink notification
channel.
Property | Default | Description |
---|---|---|
No default |
The name of the topic that receives notifications from Debezium. This property is required when you configure the |
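For example, the following excerpt is a sketch that enables the sink notification channel and sets the required topic. The topic name is a placeholder, and the keys (notification.enabled.channels, notification.sink.topic.name) are the standard Debezium names for the settings described above:

notification.enabled.channels: sink
notification.sink.topic.name: debezium-notifications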
Debezium connector pass-through database driver configuration properties
The Debezium connector provides for pass-through configuration of the database driver. Pass-through database properties begin with the prefix driver.*
. For example, the connector passes properties such as driver.foobar=false
to the JDBC URL.
Debezium strips the prefixes from the properties before it passes the properties to the database driver.
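For example, the following excerpt is a sketch that passes an Oracle JDBC thin driver connection property through the connector configuration; oracle.net.CONNECT_TIMEOUT is assumed here as a representative driver property, with the value given in milliseconds:

driver.oracle.net.CONNECT_TIMEOUT: 30000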
2.5.7. Monitoring Debezium Oracle connector performance
The Debezium Oracle connector provides three metric types in addition to the built-in support for JMX metrics that Apache Zookeeper, Apache Kafka, and Kafka Connect have.
- snapshot metrics; for monitoring the connector when performing snapshots
- streaming metrics; for monitoring the connector when processing change events
- schema history metrics; for monitoring the status of the connector’s schema history
Please refer to the monitoring documentation for details of how to expose these metrics via JMX.
2.5.7.1. Customized names for Oracle connector snapshot and streaming MBean objects
Debezium connectors expose metrics via the MBean name for the connector. These metrics, which are specific to each connector instance, provide data about the behavior of the connector’s snapshot, streaming, and schema history processes.
By default, when you deploy a correctly configured connector, Debezium generates a unique MBean name for each of the different connector metrics. To view the metrics for a connector process, you configure your observability stack to monitor its MBean. But these default MBean names depend on the connector configuration; configuration changes can result in changes to the MBean names. A change to the MBean name breaks the linkage between the connector instance and the MBean, disrupting monitoring activity. In this scenario, you must reconfigure the observability stack to use the new MBean name if you want to resume monitoring.
To prevent monitoring disruptions that result from MBean name changes, you can configure custom metrics tags. You configure custom metrics by adding the custom.metric.tags
property to the connector configuration. The property accepts key-value pairs in which each key represents a tag for the MBean object name, and the corresponding value represents the value of that tag. For example: k1=v1,k2=v2
. Debezium appends the specified tags to the MBean name of the connector.
After you configure the custom.metric.tags
property for a connector, you can configure the observability stack to retrieve metrics associated with the specified tags. The observability stack then uses the specified tags, rather than the mutable MBean names to uniquely identify connectors. Later, if Debezium redefines how it constructs MBean names, or if the topic.prefix
in the connector configuration changes, metrics collection is uninterrupted, because the metrics scrape task uses the specified tag patterns to identify the connector.
A further benefit of using custom tags is that you can use tags that reflect the architecture of your data pipeline, so that metrics are organized in a way that suits your operational needs. For example, you might specify tags with values that declare the type of connector activity, the application context, or the data source, for example, db1-streaming-for-application-abc
. If you specify multiple key-value pairs, all of the specified pairs are appended to the connector’s MBean name.
The following example illustrates how tags modify the default MBean name.
Example 2.39. How custom tags modify the connector MBean name
By default, the Oracle connector uses the following MBean name for streaming metrics:
debezium.oracle:type=connector-metrics,context=streaming,server=<topic.prefix>
If you set the value of custom.metric.tags
to database=salesdb-streaming,table=inventory
, Debezium generates the following custom MBean name:
debezium.oracle:type=connector-metrics,context=streaming,server=<topic.prefix>,database=salesdb-streaming,table=inventory
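To produce the custom MBean name shown in the preceding example, you would add a fragment like the following sketch to the connector configuration, reusing the tag values from the example:
{
  "custom.metric.tags": "database=salesdb-streaming,table=inventory"
}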
2.5.7.2. Debezium Oracle connector snapshot metrics
The MBean is debezium.oracle:type=connector-metrics,context=snapshot,server=<topic.prefix>
.
Snapshot metrics are not exposed unless a snapshot operation is active, or if a snapshot has occurred since the last connector start.
The following table lists the snapshot metrics that are available.
Attributes | Type | Description |
---|---|---|
| The last snapshot event that the connector has read. | |
| The number of milliseconds since the connector has read and processed the most recent event. | |
| The total number of events that this connector has seen since last started or reset. | |
| The number of events that have been filtered by include/exclude list filtering rules configured on the connector. | |
| The list of tables that are captured by the connector. | |
| The length of the queue used to pass events between the snapshotter and the main Kafka Connect loop. | |
| The free capacity of the queue used to pass events between the snapshotter and the main Kafka Connect loop. | |
| The total number of tables that are being included in the snapshot. | |
| The number of tables that the snapshot has yet to copy. | |
| Whether the snapshot was started. | |
| Whether the snapshot was paused. | |
| Whether the snapshot was aborted. | |
| Whether the snapshot completed. | |
| The total number of seconds that the snapshot has taken so far, even if not complete. Also includes the time when the snapshot was paused. | |
| The total number of seconds that the snapshot was paused. If the snapshot was paused several times, the paused time adds up. | |
| Map containing the number of rows scanned for each table in the snapshot. Tables are incrementally added to the Map during processing. Updates every 10,000 rows scanned and upon completing a table. | |
|
The maximum buffer of the queue in bytes. This metric is available if | |
| The current volume, in bytes, of records in the queue. |
The connector also provides the following additional snapshot metrics when an incremental snapshot is executed:
Attributes | Type | Description |
---|---|---|
| The identifier of the current snapshot chunk. | |
| The lower bound of the primary key set defining the current chunk. | |
| The upper bound of the primary key set defining the current chunk. | |
| The lower bound of the primary key set of the currently snapshotted table. | |
| The upper bound of the primary key set of the currently snapshotted table. |
2.5.7.3. Debezium Oracle connector streaming metrics
The MBean is debezium.oracle:type=connector-metrics,context=streaming,server=<topic.prefix>
.
The following table lists the streaming metrics that are available.
Attributes | Type | Description |
---|---|---|
| The last streaming event that the connector has read. | |
| The number of milliseconds since the connector has read and processed the most recent event. | |
| The total number of data change events reported by the source database since the last connector start, or since a metrics reset. Represents the data change workload for Debezium to process. | |
| The total number of create events processed by the connector since its last start or metrics reset. | |
| The total number of update events processed by the connector since its last start or metrics reset. | |
| The total number of delete events processed by the connector since its last start or metrics reset. | |
| The number of events that have been filtered by include/exclude list filtering rules configured on the connector. | |
| The list of tables that are captured by the connector. | |
| The length of the queue used to pass events between the streamer and the main Kafka Connect loop. | |
| The free capacity of the queue used to pass events between the streamer and the main Kafka Connect loop. | |
| Flag that denotes whether the connector is currently connected to the database server. | |
| The number of milliseconds between the last change event’s timestamp and the connector processing it. The values will incorporate any differences between the clocks on the machines where the database server and the connector are running. | |
| The number of processed transactions that were committed. | |
| The coordinates of the last received event. | |
| Transaction identifier of the last processed transaction. | |
|
The maximum buffer of the queue in bytes. This metric is available if | |
| The current volume, in bytes, of records in the queue. |
The Debezium Oracle connector also provides the following additional streaming metrics:
Attributes | Type | Description |
---|---|---|
| The most recent system change number that has been processed. | |
| The oldest system change number in the transaction buffer. | |
|
The oldest system change number’s age in milliseconds. If the buffer is empty, the value will be | |
| The last committed system change number from the transaction buffer. | |
| The system change number currently written to the connector’s offsets. | |
| Array of the log files that are currently mined. | |
| The minimum number of logs specified for any LogMiner session. | |
| The maximum number of logs specified for any LogMiner session. | |
|
Array of the current state for each mined logfile with the format | |
| The number of times the database has performed a log switch for the last day. | |
| The number of DML operations observed in the last LogMiner session query. | |
| The maximum number of DML operations observed while processing a single LogMiner session query. | |
| The total number of DML operations observed. | |
| The total number of LogMiner session queries (also known as batches) performed. | |
| The duration of the last LogMiner session query’s fetch in milliseconds. | |
| The maximum duration of any LogMiner session query’s fetch in milliseconds. | |
| The duration for processing the last LogMiner query batch results in milliseconds. | |
| The time in milliseconds spent parsing DML event SQL statements. | |
| The duration in milliseconds to start the last LogMiner session. | |
| The longest duration in milliseconds to start a LogMiner session. | |
| The total duration in milliseconds spent by the connector starting LogMiner sessions. | |
| The minimum duration in milliseconds spent processing results from a single LogMiner session. | |
| The maximum duration in milliseconds spent processing results from a single LogMiner session. | |
| The total duration in milliseconds spent processing results from LogMiner sessions. | |
| The total duration in milliseconds spent by the JDBC driver fetching the next row to be processed from the log mining view. | |
| The total number of rows processed from the log mining view across all sessions. | |
| The number of entries fetched by the log mining query per database round-trip. | |
| The number of milliseconds the connector sleeps before fetching another batch of results from the log mining view. | |
| The maximum number of rows/second processed from the log mining view. | |
| The average number of rows/second processed from the log mining view. | |
| The average number of rows/second processed from the log mining view for the last batch. | |
| The number of connection problems detected. | |
|
The number of hours that transactions are retained by the connector’s in-memory buffer without being committed or rolled back before being discarded. For more information, see | |
| The number of current active transactions in the transaction buffer. | |
| The number of committed transactions in the transaction buffer. | |
| The number of rolled back transactions in the transaction buffer. | |
| The average number of committed transactions per second in the transaction buffer. | |
| The number of registered DML operations in the transaction buffer. | |
| The time difference in milliseconds between when a change occurred in the transaction logs and when it is added to the transaction buffer. | |
| The maximum time difference in milliseconds between when a change occurred in the transaction logs and when it is added to the transaction buffer. | |
| The minimum time difference in milliseconds between when a change occurred in the transaction logs and when it is added to the transaction buffer. | |
|
An array of the most recent abandoned transaction identifiers removed from the transaction buffer due to their age. See | |
| Current number of entries in the abandoned transactions list. | |
| An array of the most recent transaction identifiers that have been mined and rolled back in the transaction buffer. | |
| The duration of the last transaction buffer commit operation in milliseconds. | |
| The duration of the longest transaction buffer commit operation in milliseconds. | |
| The number of errors detected. | |
| The number of warnings detected. | |
|
The number of times that the system change number was checked for advancement and remains unchanged. A high value can indicate that a long-running transaction is ongoing and is preventing the connector from flushing the most recently processed system change number to the connector’s offsets. When conditions are optimal, the value should be close to or equal to | |
|
The number of DDL records that have been detected but could not be parsed by the DDL parser. This should always be | |
| The current mining session’s user global area (UGA) memory consumption in bytes. | |
| The maximum mining session’s user global area (UGA) memory consumption in bytes across all mining sessions. | |
| The current mining session’s process global area (PGA) memory consumption in bytes. | |
| The maximum mining session’s process global area (PGA) memory consumption in bytes across all mining sessions. |
2.5.7.4. Debezium Oracle connector schema history metrics
The MBean is debezium.oracle:type=connector-metrics,context=schema-history,server=<topic.prefix>
.
The following table lists the schema history metrics that are available.
Attributes | Type | Description |
---|---|---|
|
One of | |
| The time in epoch seconds at which recovery started. | |
| The number of changes that were read during recovery phase. | |
| The total number of schema changes applied during recovery and runtime. | |
| The number of milliseconds that elapsed since the last change was recovered from the history store. | |
| The number of milliseconds that elapsed since the last change was applied. | |
| The string representation of the last change recovered from the history store. | |
| The string representation of the last applied change. |
2.5.8. Oracle connector frequently asked questions
- Is Oracle 11g supported?
- Oracle 11g is not supported; however, we do aim to be backward compatible with Oracle 11g on a best-effort basis. We rely on the community to communicate compatibility concerns with Oracle 11g as well as provide bug fixes when a regression is identified.
- Isn’t Oracle LogMiner deprecated?
- No, Oracle only deprecated the continuous mining option with Oracle LogMiner in Oracle 12c and removed that option starting with Oracle 19c. The Debezium Oracle connector does not rely on this option to function, and therefore can safely be used with newer versions of Oracle without any impact.
- How do I change the position in the offsets?
The Debezium Oracle connector maintains two critical values in the offsets, a field named
scn
and another namedcommit_scn
. Thescn
field is a string that represents the low-watermark starting position the connector used when capturing changes.-
Find out the name of the topic that contains the connector offsets. This is configured based on the value set as the
offset.storage.topic
configuration property. Find out the last offset for the connector, the key under which it is stored and identify the partition used to store the offset. This can be done using the
kafkacat
utility script provided by the Kafka broker installation. An example might look like this:kafkacat -b localhost -C -t my_connect_offsets -f 'Partition(%p) %k %s\n' Partition(11) ["inventory-connector",{"server":"server1"}] {"scn":"324567897", "commit_scn":"324567897: 0x2832343233323:1"}
The key for
inventory-connector
is["inventory-connector",{"server":"server1"}]
, the partition is11
and the last offset is the content that follows the key. To move back to a previous offset, stop the connector and issue the following command:
echo '["inventory-connector",{"server":"server1"}]|{"scn":"3245675000","commit_scn":"324567500"}' | \ kafkacat -P -b localhost -t my_connect_offsets -K \| -p 11
This writes to partition
11
of themy_connect_offsets
topic the given key and offset value. In this example, we are reversing the connector back to SCN3245675000
rather than324567897
.
- What happens if the connector cannot find logs with a given offset SCN?
The Debezium connector maintains low and high watermark SCN values in the connector offsets. The low-watermark SCN represents the starting position and must exist in the available online redo or archive logs in order for the connector to start successfully. When the connector reports that it cannot find this offset SCN, this indicates that the logs that are still available do not contain the SCN, and therefore the connector cannot mine changes from where it left off.
When this happens, there are two options. The first is to remove the history topic and offsets for the connector and restart the connector, taking a new snapshot as suggested. This will guarantee that no data loss will occur for any topic consumers. The second is to manually manipulate the offsets, advancing the SCN to a position that is available in the redo or archive logs. This will cause changes that occurred between the old SCN value and the newly provided SCN value to be lost and not written to the topics. This is not recommended.
- What’s the difference between the various mining strategies?
The Debezium Oracle connector provides three options for
log.mining.strategy
.The default is
redo_in_catalog
, and this instructs the connector to write the Oracle data dictionary to the redo logs every time a log switch is detected. This data dictionary is necessary for Oracle LogMiner to track schema changes effectively when parsing the redo and archive logs. This option generates a higher than usual number of archive logs, but allows the tables being captured to be manipulated in real time without any impact on capturing data changes. This option generally requires more Oracle database memory and causes the Oracle LogMiner session and process to take slightly longer to start after each log switch. The second option,
online_catalog
, does not write the data dictionary to the redo logs. Instead, Oracle LogMiner always uses the online data dictionary, which contains the current state of each table’s structure. If a table’s structure changes and no longer matches the online data dictionary, Oracle LogMiner is unable to resolve table or column names for that table. This mining strategy should not be used if the tables being captured are subject to frequent schema changes. It’s important to lock-step data changes with any schema change: ensure that all changes for the table have been captured from the logs, stop the connector, apply the schema change, and then restart the connector and resume data changes on the table. This option requires less Oracle database memory, and Oracle LogMiner sessions generally start substantially faster because the data dictionary does not need to be loaded or primed by the LogMiner process. The final option,
hybrid
, combines the strengths of the above two strategies with none of their weaknesses. This strategy harnesses the performance of theonline_catalog
with the resilience in schema tracking of theredo_in_catalog
while also avoiding the overhead and performance costs associated with the higher than normal archive log generation. This mode uses a fallback: if LogMiner fails to reconstruct the SQL for a database change, the Debezium connector relies on the in-memory schema model maintained by the connector to reconstruct the SQL in flight. The intent is for this mode eventually to become the default, and likely the only, mode of operation.
- Are there any limitations with the Hybrid mining strategy with LogMiner?
-
Yes, the Hybrid mode for
log.mining.strategy
is still a work-in-progress strategy, and therefore does not yet support all data types. At this time, this mode cannot reconstruct SQL statements that include operations againstCLOB
,NCLOB
,BLOB
,XML
, norJSON
data types. So in short, if you enablelob.enabled
with a value oftrue
, you will be unable to use the Hybrid strategy and the connector will fail to start as this combination is unsupported. - Why does the connector appear to stop capturing changes on AWS?
Due to the fixed idle timeout of 350 seconds on the AWS Gateway Load Balancer, JDBC calls that require more than 350 seconds to complete can hang indefinitely.
In situations where calls to the Oracle LogMiner API take more than 350 seconds to complete, a timeout can be triggered, causing the AWS Gateway Load Balancer to hang. For example, such timeouts can occur when a LogMiner session that processes large amounts of data runs concurrently with Oracle’s periodic checkpointing task.
To prevent timeouts from occurring on the AWS Gateway Load Balancer, enable keep-alive packets from the Kafka Connect environment, by performing the following steps as root or a super-user:
From a terminal, run the following command:
sysctl -w net.ipv4.tcp_keepalive_time=60
Edit
/etc/sysctl.conf
and set the value of the following variable as shown:net.ipv4.tcp_keepalive_time=60
Reconfigure the Debezium for Oracle connector to use the
database.url
property rather thandatabase.hostname
and add the(ENABLE=broken)
Oracle connect string descriptor as shown in the following example:database.url=jdbc:oracle:thin:username/password!@(DESCRIPTION=(ENABLE=broken)(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(Host=hostname)(Port=port)))(CONNECT_DATA=(SERVICE_NAME=serviceName)))
The preceding steps configure the TCP network stack to send keep-alive packets every 60 seconds. As a result, the AWS Gateway Load Balancer does not time out when JDBC calls to the LogMiner API take more than 350 seconds to complete, enabling the connector to continue to read changes from the database’s transaction logs.
- What’s the cause for ORA-01555 and how to handle it?
The Debezium Oracle connector uses flashback queries when the initial snapshot phase executes. A flashback query is a special type of query that relies on the flashback area, maintained by the database’s
UNDO_RETENTION
database parameter, to return the results of a query based on what the contents of the table had at a given time, or in our case at a given SCN. By default, Oracle generally only maintains an undo or flashback area for approximately 15 minutes unless this has been increased or decreased by your database administrator. For configurations that capture large tables, it may take longer than 15 minutes or your configuredUNDO_RETENTION
to perform the initial snapshot and this will eventually lead to this exception:ORA-01555: snapshot too old: rollback segment number 12345 with name "_SYSSMU11_1234567890$" too small
The first way to deal with this exception is to work with your database administrator and see whether they can increase the
UNDO_RETENTION
database parameter temporarily. This does not require a restart of the Oracle database, so this can be done online without impacting database availability. However, changing this may still lead to the above exception or a "snapshot too old" exception if the tablespace has inadequate space to store the necessary undo data.The second way to deal with this exception is to not rely on the initial snapshot at all, setting the
snapshot.mode
toschema_only
and then instead relying on incremental snapshots. An incremental snapshot does not rely on a flashback query and therefore isn’t subject to ORA-01555 exceptions.- What’s the cause for ORA-04036 and how to handle it?
The Debezium Oracle connector may report an ORA-04036 exception when the database changes occur infrequently. An Oracle LogMiner session is started and re-used until a log switch is detected. The session is re-used as it provides the optimal performance utilization with Oracle LogMiner, but should a long-running mining session occur, this can lead to excessive PGA memory usage, eventually causing an exception like this:
ORA-04036: PGA memory used by the instance exceeds PGA_AGGREGATE_LIMIT
This exception can be avoided by specifying how frequently Oracle switches redo logs or how long the Debezium Oracle connector is allowed to re-use the mining session. The Debezium Oracle connector provides a configuration option,
log.mining.session.max.ms
, which controls how long the current Oracle LogMiner session can be re-used before being closed and a new session started. This allows the database resources to be kept in check without exceeding the PGA memory allowed by the database.
- What’s the cause for ORA-01882 and how to handle it?
The Debezium Oracle connector may report the following exception when connecting to an Oracle database:
ORA-01882: timezone region not found
This happens when the JDBC driver cannot correctly resolve the timezone information. To solve this driver-related problem, instruct the driver not to resolve the timezone details using regions. You can do this by specifying the driver pass-through property
driver.oracle.jdbc.timezoneAsRegion=false
.- What’s the cause for ORA-25191 and how to handle it?
The Debezium Oracle connector automatically ignores index-organized tables (IOT) because they are not supported by Oracle LogMiner. However, if an ORA-25191 exception is thrown, this could be due to a unique corner case for such a mapping, and additional rules may be necessary to exclude these tables automatically. An example of an ORA-25191 exception might look like this:
ORA-25191: cannot reference overflow table of an index-organized table
If an ORA-25191 exception is thrown, please raise a Jira issue with details about the table, its mappings, and its relationships to other parent tables. As a workaround, the include/exclude configuration options can be adjusted to prevent the connector from accessing such tables.
- How to solve SAX feature external-general-entities not supported
-
Debezium 2.4 introduced support for Oracle’s
XMLTYPE
column type and to support this feature, the Oraclexdb
andxmlparserv2
dependencies are required.
Oracle’sxmlparserv2
dependency implements a SAX-based parser, and if the runtime finds and uses this implementation rather than another one on the classpath, this error occurs. To control which SAX implementation is used, the JVM must be started with a specific argument.
When the following JVM argument is provided, the Oracle connector will start successfully without this error.
-Djavax.xml.parsers.SAXParserFactory=com.sun.org.apache.xerces.internal.jaxp.SAXParserFactoryImpl
2.6. Debezium connector for PostgreSQL
The Debezium PostgreSQL connector captures row-level changes in the schemas of a PostgreSQL database. For information about the PostgreSQL versions that are compatible with the connector, see the Debezium Supported Configurations page.
The first time it connects to a PostgreSQL server or cluster, the connector takes a consistent snapshot of all schemas. After that snapshot is complete, the connector continuously captures row-level changes that insert, update, and delete database content and that were committed to a PostgreSQL database. The connector generates data change event records and streams them to Kafka topics. For each table, the default behavior is that the connector streams all generated events to a separate Kafka topic for that table. Applications and services consume data change event records from that topic.
Information and procedures for using a Debezium PostgreSQL connector are organized as follows:
- Section 2.6.1, “Overview of Debezium PostgreSQL connector”
- Section 2.6.2, “How Debezium PostgreSQL connectors work”
- Section 2.6.3, “Descriptions of Debezium PostgreSQL connector data change events”
- Section 2.6.4, “How Debezium PostgreSQL connectors map data types”
- Section 2.6.5, “Setting up PostgreSQL to run a Debezium connector”
- Custom converters
- Section 2.6.6, “Deployment of Debezium PostgreSQL connectors”
- Section 2.6.7, “Monitoring Debezium PostgreSQL connector performance”
- Section 2.6.8, “How Debezium PostgreSQL connectors handle faults and problems”
2.6.1. Overview of Debezium PostgreSQL connector
PostgreSQL’s logical decoding feature was introduced in version 9.4. It is a mechanism that allows the extraction of the changes that were committed to the transaction log and the processing of these changes in a user-friendly manner with the help of an output plug-in. The output plug-in enables clients to consume the changes.
The PostgreSQL connector contains two main parts that work together to read and process database changes:
-
pgoutput
is the standard logical decoding output plug-in in PostgreSQL 10+. This is the only supported logical decoding output plug-in in this Debezium release. This plug-in is maintained by the PostgreSQL community, and used by PostgreSQL itself for logical replication. This plug-in is always present so no additional libraries need to be installed. The Debezium connector interprets the raw replication event stream directly into change events. - Java code (the actual Kafka Connect connector) that reads the changes produced by the logical decoding output plug-in by using PostgreSQL’s streaming replication protocol and the PostgreSQL JDBC driver.
The connector produces a change event for every row-level insert, update, and delete operation that was captured and sends change event records for each table in a separate Kafka topic. Client applications read the Kafka topics that correspond to the database tables of interest, and can react to every row-level event they receive from those topics.
PostgreSQL normally purges write-ahead log (WAL) segments after some period of time. This means that the connector does not have the complete history of all changes that have been made to the database. Therefore, when the PostgreSQL connector first connects to a particular PostgreSQL database, it starts by performing a consistent snapshot of each of the database schemas. After the connector completes the snapshot, it continues streaming changes from the exact point at which the snapshot was made. This way, the connector starts with a consistent view of all of the data, and does not omit any changes that were made while the snapshot was being taken.
The connector is tolerant of failures. As the connector reads changes and produces events, it records the WAL position for each event. If the connector stops for any reason (including communication failures, network problems, or crashes), upon restart the connector continues reading the WAL where it last left off. This includes snapshots. If the connector stops during a snapshot, the connector begins a new snapshot when it restarts.
The connector relies on and reflects the PostgreSQL logical decoding feature, which has the following limitations:
- Logical decoding does not support DDL changes. This means that the connector is unable to report DDL change events back to consumers.
-
Logical decoding replication slots are supported on only
primary
servers. When there is a cluster of PostgreSQL servers, the connector can run on only the activeprimary
server. It cannot run onhot
orwarm
standby replicas. If theprimary
server fails or is demoted, the connector stops. After theprimary
server has recovered, you can restart the connector. If a different PostgreSQL server has been promoted toprimary
, adjust the connector configuration before restarting the connector. - Because logical decoding replication slots publish changes during commit — and not post commit — undesirable side-effects can occur. There are two main scenarios when clients can observe inconsistent states. First, publishing uncommitted changes when the master dies before replication completes. Second, publishing changes that cannot be read (i.e., read-after-write consistency) temporarily because they are being replicated. For example, an EmbeddedEngine consumer receives a notification of a row that was created but it cannot be read by a transaction.
Additionally, the pgoutput
logical decoding output plug-in does not capture values for generated columns, resulting in missing data for these columns in the connector’s output.
Behavior when things go wrong describes how the connector responds if there is a problem.
Debezium currently supports databases with UTF-8 character encoding only. With a single byte character encoding, it is not possible to correctly process strings that contain extended ASCII code characters.
2.6.2. How Debezium PostgreSQL connectors work
To optimally configure and run a Debezium PostgreSQL connector, it is helpful to understand how the connector performs snapshots, streams change events, determines Kafka topic names, and uses metadata.
Details are in the following topics:
- Section 2.6.2.2, “How Debezium PostgreSQL connectors perform database snapshots”
- Section 2.6.2.3, “Ad hoc snapshots”
- Section 2.6.2.4, “Incremental snapshots”
- Section 2.6.2.6, “How Debezium PostgreSQL connectors stream change event records”
- Section 2.6.2.7, “Default names of Kafka topics that receive Debezium PostgreSQL change event records”
- Section 2.6.2.8, “Debezium PostgreSQL connector-generated events that represent transaction boundaries”
2.6.2.1. Security for PostgreSQL connector
To use the Debezium connector to stream changes from a PostgreSQL database, the connector must operate with specific privileges in the database. Although one way to grant the necessary privileges is to provide the user with superuser
privileges, doing so potentially exposes your PostgreSQL data to unauthorized access. Rather than granting excessive privileges to the Debezium user, it is best to create a dedicated Debezium replication user to which you grant specific privileges.
For more information about configuring privileges for the Debezium PostgreSQL user, see Setting up permissions. For more information about PostgreSQL logical replication security, see the PostgreSQL documentation.
2.6.2.2. How Debezium PostgreSQL connectors perform database snapshots
Most PostgreSQL servers are configured to not retain the complete history of the database in the WAL segments. This means that the PostgreSQL connector would be unable to see the entire history of the database by reading only the WAL. Consequently, the first time that the connector starts, it performs an initial consistent snapshot of the database.
You can find more information about snapshots in the following sections:
Default workflow behavior of initial snapshots
The default behavior for performing a snapshot consists of the following steps. You can change this behavior by setting the snapshot.mode
connector configuration property to a value other than initial
.
-
Start a transaction with a SERIALIZABLE, READ ONLY, DEFERRABLE isolation level to ensure that subsequent reads in this transaction are against a single consistent version of the data. Any changes to the data due to subsequent
INSERT
,UPDATE
, andDELETE
operations by other clients are not visible to this transaction. - Read the current position in the server’s transaction log.
-
Scan the database tables and schemas, generate a
READ
event for each row and write that event to the appropriate table-specific Kafka topic. - Commit the transaction.
- Record the successful completion of the snapshot in the connector offsets.
If the connector fails, is rebalanced, or stops after Step 1 begins but before Step 5 completes, upon restart the connector begins a new snapshot. After the connector completes its initial snapshot, the PostgreSQL connector continues streaming from the position that it read in Step 2. This ensures that the connector does not miss any updates. If the connector stops again for any reason, upon restart, the connector continues streaming changes from where it previously left off.
Option | Description |
---|---|
|
The connector always performs a snapshot when it starts. After the snapshot completes, the connector continues streaming changes from step 3 in the above sequence. This mode is useful in these situations:
|
| The connector performs a database snapshot when no Kafka offsets topic exists. After the database snapshot completes the Kafka offsets topic is written. If there is a previously stored LSN in the Kafka offsets topic, the connector continues streaming changes from that position. |
| The connector performs a database snapshot and stops before streaming any change event records. If the connector had started but did not complete a snapshot before stopping, the connector restarts the snapshot process and stops when the snapshot completes. |
| The connector never performs snapshots. When a connector is configured this way, after it starts, it behaves as follows: If there is a previously stored LSN in the Kafka offsets topic, the connector continues streaming changes from that position. If no LSN is stored, the connector starts streaming changes from the point at which the PostgreSQL logical replication slot was created on the server. Use this snapshot mode only when you know that all data of interest is still reflected in the WAL. |
|
Deprecated, see |
| After the connector starts, it performs a snapshot only if it detects one of the following circumstances:
|
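For example, a minimal sketch of the relevant part of a connector configuration that explicitly requests the default initial snapshot behavior might look like the following; the table name in table.include.list is an illustrative placeholder:
{
  "snapshot.mode": "initial",
  "table.include.list": "public.customers"
}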
2.6.2.3. Ad hoc snapshots
By default, a connector runs an initial snapshot operation only after it starts for the first time. Following this initial snapshot, under normal circumstances, the connector does not repeat the snapshot process. Any future change event data that the connector captures comes in through the streaming process only.
However, in some situations the data that the connector obtained during the initial snapshot might become stale, lost, or incomplete. To provide a mechanism for recapturing table data, Debezium includes an option to perform ad hoc snapshots. You might want to perform an ad hoc snapshot after any of the following changes occur in your Debezium environment:
- The connector configuration is modified to capture a different set of tables.
- Kafka topics are deleted and must be rebuilt.
- Data corruption occurs due to a configuration error or some other problem.
You can re-run a snapshot for a table for which you previously captured a snapshot by initiating a so-called ad-hoc snapshot. Ad hoc snapshots require the use of signaling tables. You initiate an ad hoc snapshot by sending a signal request to the Debezium signaling table.
When you initiate an ad hoc snapshot of an existing table, the connector appends content to the topic that already exists for the table. If a previously existing topic was removed, Debezium can create a topic automatically if automatic topic creation is enabled.
Ad hoc snapshot signals specify the tables to include in the snapshot. The snapshot can capture the entire contents of the database, or capture only a subset of the tables in the database. Also, the snapshot can capture a subset of the contents of the table(s) in the database.
You specify the tables to capture by sending an execute-snapshot
message to the signaling table. Set the type of the execute-snapshot
signal to incremental
or blocking
, and provide the names of the tables to include in the snapshot, as described in the following table:
Field | Default | Value |
---|---|---|
|
|
Specifies the type of snapshot that you want to run. |
| N/A |
An array that contains regular expressions matching the fully-qualified names of the tables to include in the snapshot. |
| N/A |
An optional array that specifies a set of additional conditions that the connector evaluates to determine the subset of records to include in a snapshot.
|
| N/A | An optional string that specifies the column name that the connector uses as the primary key of a table during the snapshot process. |
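For example, the data portion of an execute-snapshot signal that combines these fields might look like the following sketch; the table names and filter expression are placeholders:
{"type": "incremental", "data-collections": ["schema1.table1", "schema1.table2"], "additional-conditions": [{"data-collection": "schema1.table1", "filter": "color='blue'"}]}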
Triggering an ad hoc incremental snapshot
You initiate an ad hoc incremental snapshot by adding an entry with the execute-snapshot
signal type to the signaling table, or by sending a signal message to a Kafka signaling topic. After the connector processes the message, it begins the snapshot operation. The snapshot process reads the first and last primary key values and uses those values as the start and end point for each table. Based on the number of entries in the table, and the configured chunk size, Debezium divides the table into chunks, and proceeds to snapshot each chunk, in succession, one at a time.
For more information, see Incremental snapshots.
Triggering an ad hoc blocking snapshot
You initiate an ad hoc blocking snapshot by adding an entry with the execute-snapshot
signal type to the signaling table or signaling topic. After the connector processes the message, it begins the snapshot operation. The connector temporarily stops streaming, and then initiates a snapshot of the specified table, following the same process that it uses during an initial snapshot. After the snapshot completes, the connector resumes streaming.
For more information, see Blocking snapshots.
2.6.2.4. Incremental snapshots
To provide flexibility in managing snapshots, Debezium includes a supplementary snapshot mechanism, known as incremental snapshotting. Incremental snapshots rely on the Debezium mechanism for sending signals to a Debezium connector.
In an incremental snapshot, instead of capturing the full state of a database all at once, as in an initial snapshot, Debezium captures each table in phases, in a series of configurable chunks. You can specify the tables that you want the snapshot to capture and the size of each chunk. The chunk size determines the number of rows that the snapshot collects during each fetch operation on the database. The default chunk size for incremental snapshots is 1024 rows.
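For example, to change the chunk size you might add a fragment like the following to the connector configuration; this sketch assumes the standard incremental.snapshot.chunk.size connector property:
{
  "incremental.snapshot.chunk.size": "512"
}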
As an incremental snapshot proceeds, Debezium uses watermarks to track its progress, maintaining a record of each table row that it captures. This phased approach to capturing data provides the following advantages over the standard initial snapshot process:
- You can run incremental snapshots in parallel with streamed data capture, instead of postponing streaming until the snapshot completes. The connector continues to capture near real-time events from the change log throughout the snapshot process, and neither operation blocks the other.
- If the progress of an incremental snapshot is interrupted, you can resume it without losing any data. After the process resumes, the snapshot begins at the point where it stopped, rather than recapturing the table from the beginning.
-
You can run an incremental snapshot on demand at any time, and repeat the process as needed to adapt to database updates. For example, you might re-run a snapshot after you modify the connector configuration to add a table to its
table.include.list
property.
Incremental snapshot process
When you run an incremental snapshot, Debezium sorts each table by primary key and then splits the table into chunks based on the configured chunk size. Working chunk by chunk, it then captures each table row in a chunk. For each row that it captures, the snapshot emits a READ
event. That event represents the value of the row when the snapshot for the chunk began.
As a snapshot proceeds, it’s likely that other processes continue to access the database, potentially modifying table records. To reflect such changes, INSERT
, UPDATE
, or DELETE
operations are committed to the transaction log as per usual. Similarly, the ongoing Debezium streaming process continues to detect these change events and emits corresponding change event records to Kafka.
How Debezium resolves collisions among records with the same primary key
In some cases, the UPDATE
or DELETE
events that the streaming process emits are received out of sequence. That is, the streaming process might emit an event that modifies a table row before the snapshot captures the chunk that contains the READ
event for that row. When the snapshot eventually emits the corresponding READ
event for the row, its value is already superseded. To ensure that incremental snapshot events that arrive out of sequence are processed in the correct logical order, Debezium employs a buffering scheme for resolving collisions. Only after collisions between the snapshot events and the streamed events are resolved does Debezium emit an event record to Kafka.
Snapshot window
To assist in resolving collisions between late-arriving READ
events and streamed events that modify the same table row, Debezium employs a so-called snapshot window. The snapshot window demarcates the interval during which an incremental snapshot captures data for a specified table chunk. Before the snapshot window for a chunk opens, Debezium follows its usual behavior and emits events from the transaction log directly downstream to the target Kafka topic. But from the moment that the snapshot for a particular chunk opens, until it closes, Debezium performs a de-duplication step to resolve collisions between events that have the same primary key.
For each data collection, Debezium emits two types of events, and stores the records for them both in a single destination Kafka topic. The snapshot records that it captures directly from a table are emitted as READ
operations. Meanwhile, as users continue to update records in the data collection, and the transaction log is updated to reflect each commit, Debezium emits UPDATE
or DELETE
operations for each change.
As the snapshot window opens, and Debezium begins processing a snapshot chunk, it delivers snapshot records to a memory buffer. During the snapshot windows, the primary keys of the READ
events in the buffer are compared to the primary keys of the incoming streamed events. If no match is found, the streamed event record is sent directly to Kafka. If Debezium detects a match, it discards the buffered READ
event, and writes the streamed record to the destination topic, because the streamed event logically supersedes the static snapshot event. After the snapshot window for the chunk closes, the buffer contains only READ
events for which no related transaction log events exist. Debezium emits these remaining READ
events to the table’s Kafka topic.
The connector repeats the process for each snapshot chunk.
Currently, you can use either of the following methods to initiate an incremental snapshot:
- Send an ad hoc snapshot signal to the signaling table on the source database.
- Send a message to the configured Kafka signaling topic.
The Debezium connector for PostgreSQL does not support schema changes while an incremental snapshot is running. If a schema change is performed after the signal is sent but before the incremental snapshot starts, set the pass-through configuration option database.autosave to conservative to correctly process the schema change.
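For example, the connector configuration might include the following pass-through fragment, which forwards the autosave setting to the PostgreSQL driver; this is a sketch to merge into your existing configuration:
{
  "database.autosave": "conservative"
}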
2.6.2.4.1. Triggering an incremental snapshot
To initiate an incremental snapshot, you can send an ad hoc snapshot signal to the signaling table on the source database. You submit snapshot signals as SQL INSERT
queries.
After Debezium detects the change in the signaling table, it reads the signal, and runs the requested snapshot operation.
The query that you submit specifies the tables to include in the snapshot, and, optionally, specifies the type of snapshot operation. Debezium currently supports the incremental
and blocking
snapshot types.
To specify the tables to include in the snapshot, provide a data-collections
array that lists the tables, or an array of regular expressions used to match tables, for example,
{"data-collections": ["public.MyFirstTable", "public.MySecondTable"]}
The data-collections
array for an incremental snapshot signal has no default value. If the data-collections
array is empty, Debezium interprets the empty array to mean that no action is required, and it does not perform a snapshot.
If the name of a table that you want to include in a snapshot contains a dot (.
), a space, or some other non-alphanumeric character, you must escape the table name in double quotes.
For example, to include a table that exists in the public
schema and that has the name My.Table
, use the following format: "public.\"My.Table\""
.
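For example, a data-collections array that mixes an escaped name with an ordinary name might look like the following sketch; public.inventory is an illustrative placeholder:
{"data-collections": ["public.\"My.Table\"", "public.inventory"]}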
Prerequisites
- A signaling data collection exists on the source database.
-
The signaling data collection is specified in the
signal.data.collection
property.
Using a source signaling channel to trigger an incremental snapshot
Send a SQL query to add the ad hoc incremental snapshot request to the signaling table:
INSERT INTO <signalTable> (id, type, data) VALUES ('<id>', '<snapshotType>', '{"data-collections": ["<fullyQualfiedTableName>","<fullyQualfiedTableName>"],"type":"<snapshotType>","additional-conditions":[{"data-collection": "<fullyQualfiedTableName>", "filter": "<additional-condition>"}]}');
For example,
INSERT INTO myschema.debezium_signal (id, type, data) 1 values ('ad-hoc-1', 2 'execute-snapshot', 3 '{"data-collections": ["schema1.table1", "schema1.table2"], 4 "type":"incremental", 5 "additional-conditions":[{"data-collection": "schema1.table1" ,"filter":"color=\'blue\'"}]}'); 6
The values of the
id
,type
, anddata
parameters in the command correspond to the fields of the signaling table.
The following table describes the parameters in the example:Table 2.129. Descriptions of fields in a SQL command for sending an incremental snapshot signal to the signaling table Item Value Description 1
schema.debezium_signal
Specifies the fully-qualified name of the signaling table on the source database.
2
ad-hoc-1
The
id
parameter specifies an arbitrary string that is assigned as theid
identifier for the signal request.
Use this string to identify logging messages to entries in the signaling table. Debezium does not use this string. Rather, during the snapshot, Debezium generates its ownid
string as a watermarking signal.3
execute-snapshot
The
type
parameter specifies the operation that the signal is intended to trigger.
4
data-collections
A required component of the
data
field of a signal that specifies an array of table names or regular expressions to match table names to include in the snapshot.
The array lists regular expressions that use the formatschema.table
to match the fully-qualified names of the tables. This format is the same as the one that you use to specify the name of the connector’s signaling table.5
incremental
An optional
type
component of thedata
field of a signal that specifies the type of snapshot operation to run.
Valid values areincremental
andblocking
.
If you do not specify a value, the connector defaults to performing an incremental snapshot.6
additional-conditions
An optional array that specifies a set of additional conditions that the connector evaluates to determine the subset of records to include in a snapshot.
Each additional condition is an object withdata-collection
andfilter
properties. You can specify different filters for each data collection.
* Thedata-collection
property is the fully-qualified name of the data collection that the filter applies to. For more information about theadditional-conditions
parameter, see Section 2.6.2.4.2, “Running an ad hoc incremental snapshots withadditional-conditions
”.
2.6.2.4.2. Running an ad hoc incremental snapshot with additional-conditions
If you want a snapshot to include only a subset of the content in a table, you can modify the signal request by appending an additional-conditions
parameter to the snapshot signal.
The SQL query for a typical snapshot takes the following form:
SELECT * FROM <tableName> ....
By adding an additional-conditions
parameter, you append a WHERE
condition to the SQL query, as in the following example:
SELECT * FROM <data-collection> WHERE <filter> ....
The following example shows a SQL query to send an ad hoc incremental snapshot request with an additional condition to the signaling table:
INSERT INTO <signalTable> (id, type, data) VALUES ('<id>', '<snapshotType>', '{"data-collections": ["<fullyQualfiedTableName>","<fullyQualfiedTableName>"],"type":"<snapshotType>","additional-conditions":[{"data-collection": "<fullyQualfiedTableName>", "filter": "<additional-condition>"}]}');
For example, suppose you have a products
table that contains the following columns:
-
id
(primary key) -
color
-
quantity
If you want an incremental snapshot of the products
table to include only the data items where color=blue
, you can use the following SQL statement to trigger the snapshot:
INSERT INTO myschema.debezium_signal (id, type, data) VALUES('ad-hoc-1', 'execute-snapshot', '{"data-collections": ["schema1.products"],"type":"incremental", "additional-conditions":[{"data-collection": "schema1.products", "filter": "color=blue"}]}');
The additional-conditions
parameter also enables you to pass conditions that are based on more than one column. For example, using the products
table from the previous example, you can submit a query that triggers an incremental snapshot that includes the data of only those items for which color=blue
and quantity>10
:
INSERT INTO myschema.debezium_signal (id, type, data) VALUES('ad-hoc-1', 'execute-snapshot', '{"data-collections": ["schema1.products"],"type":"incremental", "additional-conditions":[{"data-collection": "schema1.products", "filter": "color=blue AND quantity>10"}]}');
The following example shows the JSON for an incremental snapshot event that is captured by a connector.
Example 2.40. Incremental snapshot event message
{ "before":null, "after": { "pk":"1", "value":"New data" }, "source": { ... "snapshot":"incremental" 1 }, "op":"r", 2 "ts_ms":"1620393591654", "ts_us":"1620393591654547", "ts_ns":"1620393591654547920", "transaction":null }
Item | Field name | Description |
---|---|---|
1 |
|
Specifies the type of snapshot operation to run. |
2 |
|
Specifies the event type. |
2.6.2.4.3. Using the Kafka signaling channel to trigger an incremental snapshot
You can send a message to the configured Kafka topic to request the connector to run an ad hoc incremental snapshot.
The key of the Kafka message must match the value of the topic.prefix
connector configuration option.
The value of the message is a JSON object with type
and data
fields.
The signal type is execute-snapshot
, and the data
field must have the following fields:
Field | Default | Value |
---|---|---|
|
|
The type of the snapshot to be executed. Currently Debezium supports the |
| N/A |
An array of comma-separated regular expressions that match the fully-qualified names of tables to include in the snapshot. |
| N/A |
An optional array of additional conditions that specifies criteria that the connector evaluates to designate a subset of records to include in a snapshot. |
Example 2.41. An execute-snapshot
Kafka message
Key = `test_connector` Value = `{"type":"execute-snapshot","data": {"data-collections": ["{collection-container}.table1", "{collection-container}.table2"], "type": "INCREMENTAL"}}`
Ad hoc incremental snapshots with additional-conditions
Debezium uses the additional-conditions
field to select a subset of a table’s content.
Typically, when Debezium runs a snapshot, it runs a SQL query such as:
SELECT * FROM <tableName> ….
When the snapshot request includes an additional-conditions
property, the data-collection
and filter
parameters of the property are appended to the SQL query, for example:
SELECT * FROM <data-collection> WHERE <filter> ….
For example, given a products
table with the columns id
(primary key), color
, and brand
, if you want a snapshot to include only content for which color='blue'
, when you request the snapshot, you could add the additional-conditions
property to filter the content:
Key = `test_connector` Value = `{"type":"execute-snapshot","data": {"data-collections": ["schema1.products"], "type": "INCREMENTAL", "additional-conditions": [{"data-collection": "schema1.products" ,"filter":"color='blue'"}]}}`
You can also use the additional-conditions
property to pass conditions based on multiple columns. For example, using the same products
table as in the previous example, if you want a snapshot to include only the content from the products
table for which color='blue'
, and brand='MyBrand'
, you could send the following request:
Key = `test_connector` Value = `{"type":"execute-snapshot","data": {"data-collections": ["schema1.products"], "type": "INCREMENTAL", "additional-conditions": [{"data-collection": "schema1.products" ,"filter":"color='blue' AND brand='MyBrand'"}]}}`
2.6.2.4.4. Stopping an incremental snapshot
In some situations, it might be necessary to stop an incremental snapshot. For example, you might realize that the snapshot was not configured correctly, or maybe you want to ensure that resources are available for other database operations. You can stop a snapshot that is already running by sending a signal to the signaling table on the source database.
You submit a stop snapshot signal to the signaling table by sending it in a SQL INSERT
query. The stop-snapshot signal specifies the type
of the snapshot operation as incremental
, and optionally specifies the tables that you want to omit from the currently running snapshot. After Debezium detects the change in the signaling table, it reads the signal, and stops the incremental snapshot operation if it’s in progress.
Additional resources
You can also stop an incremental snapshot by sending a JSON message to the Kafka signaling topic.
Prerequisites
- A signaling data collection exists on the source database.
-
The signaling data collection is specified in the
signal.data.collection
property.
Using a source signaling channel to stop an incremental snapshot
Send a SQL query to stop the ad hoc incremental snapshot to the signaling table:
INSERT INTO <signalTable> (id, type, data) values ('<id>', 'stop-snapshot', '{"data-collections": ["<fullyQualfiedTableName>","<fullyQualfiedTableName>"],"type":"incremental"}');
For example,
INSERT INTO myschema.debezium_signal (id, type, data) 1 values ('ad-hoc-1', 2 'stop-snapshot', 3 '{"data-collections": ["schema1.table1", "schema1.table2"], 4 "type":"incremental"}'); 5
The values of the
id
,type
, anddata
parameters in the signal command correspond to the fields of the signaling table.
The following table describes the parameters in the example:Table 2.132. Descriptions of fields in a SQL command for sending a stop incremental snapshot signal to the signaling table Item Value Description 1
schema.debezium_signal
Specifies the fully-qualified name of the signaling table on the source database.
2
ad-hoc-1
The
id
parameter specifies an arbitrary string that is assigned as theid
identifier for the signal request.
Use this string to identify logging messages to entries in the signaling table. Debezium does not use this string.3
stop-snapshot
The
type
parameter specifies the operation that the signal is intended to trigger.
4
data-collections
An optional component of the
data
field of a signal that specifies an array of table names or regular expressions to match table names to remove from the snapshot.
The array lists regular expressions which match tables by their fully-qualified names in the formatschema.table
If you omit this component from the
data
field, the signal stops the entire incremental snapshot that is in progress.5
incremental
A required component of the
data
field of a signal that specifies the type of snapshot operation that is to be stopped.
Currently, the only valid option is incremental
.
If you do not specify a type
value, the signal fails to stop the incremental snapshot.
2.6.2.4.5. Using the Kafka signaling channel to stop an incremental snapshot
You can send a signal message to the configured Kafka signaling topic to stop an ad hoc incremental snapshot.
The key of the Kafka message must match the value of the topic.prefix
connector configuration option.
The value of the message is a JSON object with type
and data
fields.
The signal type is stop-snapshot
, and the data
field must have the following fields:
Field | Default | Value |
---|---|---|
type | incremental | The type of the snapshot to be executed. Currently Debezium supports only the incremental type. |
data-collections | N/A | An optional array of comma-separated regular expressions that match the fully-qualified names of the tables to remove from the snapshot. |
The following example shows a typical stop-snapshot
Kafka message:
Key = `test_connector` Value = `{"type":"stop-snapshot","data": {"data-collections": ["schema1.table1", "schema1.table2"], "type": "INCREMENTAL"}}`
2.6.2.5. Blocking snapshots
To provide more flexibility in managing snapshots, Debezium includes a supplementary ad hoc snapshot mechanism, known as a blocking snapshot. Blocking snapshots rely on the Debezium mechanism for sending signals to a Debezium connector.
A blocking snapshot behaves just like an initial snapshot, except that you can trigger it at run time.
You might want to run a blocking snapshot rather than use the standard initial snapshot process in the following situations:
- You add a new table and you want to complete the snapshot while the connector is running.
- You add a large table, and you want the snapshot to complete in less time than is possible with an incremental snapshot.
Blocking snapshot process
When you run a blocking snapshot, Debezium stops streaming, and then initiates a snapshot of the specified table, following the same process that it uses during an initial snapshot. After the snapshot completes, streaming resumes.
Configure snapshot
You can set the following properties in the data
component of a signal:
- data-collections: Specifies the tables to include in the snapshot.
- additional-conditions: Specifies different filters for different tables.
  - The data-collection property is the fully-qualified name of the table to which the filter applies.
  - The filter property has the same value that you would use in the snapshot.select.statement.overrides property.
For example:
{"type": "blocking", "data-collections": ["schema1.table1", "schema1.table2"], "additional-conditions": [{"data-collection": "schema1.table1", "filter": "SELECT * FROM [schema1].[table1] WHERE column1 = 0 ORDER BY column2 DESC"}, {"data-collection": "schema1.table2", "filter": "SELECT * FROM [schema1].[table2] WHERE column2 > 0"}]}
Possible duplicates
A delay might exist between the time that you send the signal to trigger the snapshot, and the time when streaming stops and the snapshot starts. As a result of this delay, after the snapshot completes, the connector might emit some event records that duplicate records captured by the snapshot.
2.6.2.6. How Debezium PostgreSQL connectors stream change event records
The PostgreSQL connector typically spends the vast majority of its time streaming changes from the PostgreSQL server to which it is connected. This mechanism relies on PostgreSQL’s replication protocol. This protocol enables clients to receive changes from the server as they are committed in the server’s transaction log at certain positions, which are referred to as Log Sequence Numbers (LSNs).
Whenever the server commits a transaction, a separate server process invokes a callback function from the logical decoding plug-in. This function processes the changes from the transaction, converts them to a specific format (Protobuf or JSON in the case of the Debezium plug-in) and writes them on an output stream, which can then be consumed by clients.
The Debezium PostgreSQL connector acts as a PostgreSQL client. When the connector receives changes it transforms the events into Debezium create, update, or delete events that include the LSN of the event. The PostgreSQL connector forwards these change events in records to the Kafka Connect framework, which is running in the same process. The Kafka Connect process asynchronously writes the change event records in the same order in which they were generated to the appropriate Kafka topic.
Periodically, Kafka Connect records the most recent offset in another Kafka topic. The offset indicates source-specific position information that Debezium includes with each event. For the PostgreSQL connector, the LSN recorded in each change event is the offset.
When Kafka Connect gracefully shuts down, it stops the connectors, flushes all event records to Kafka, and records the last offset received from each connector. When Kafka Connect restarts, it reads the last recorded offset for each connector, and starts each connector at its last recorded offset. When the connector restarts, it sends a request to the PostgreSQL server to send the events starting just after that position.
The PostgreSQL connector retrieves schema information as part of the events sent by the logical decoding plug-in. However, the connector does not retrieve information about which columns compose the primary key. The connector obtains this information from the JDBC metadata (side channel). If the primary key definition of a table changes (by adding, removing or renaming primary key columns), there is a tiny period of time when the primary key information from JDBC is not synchronized with the change event that the logical decoding plug-in generates. During this tiny period, a message could be created with an inconsistent key structure. To prevent this inconsistency, update primary key structures as follows:
- Put the database or an application into a read-only mode.
- Let Debezium process all remaining events.
- Stop Debezium.
- Update the primary key definition in the relevant table.
- Put the database or the application into read/write mode.
- Restart Debezium.
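One way to approximate the read-only and read/write steps in this procedure, sketched here for a hypothetical mydb database, is to toggle the database-level default so that new sessions start in read-only transactions:
ALTER DATABASE mydb SET default_transaction_read_only = on;  -- new sessions become read-only; existing sessions are not affected
ALTER DATABASE mydb SET default_transaction_read_only = off; -- restore read/write mode after the primary key change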
PostgreSQL 10+ logical decoding support (pgoutput
)
As of PostgreSQL 10+, there is a logical replication stream mode, called pgoutput
that is natively supported by PostgreSQL. This means that a Debezium PostgreSQL connector can consume that replication stream without the need for additional plug-ins. This is particularly valuable for environments where installation of plug-ins is not supported or not allowed.
For more information, see Setting up PostgreSQL.
2.6.2.7. Default names of Kafka topics that receive Debezium PostgreSQL change event records
By default, the PostgreSQL connector writes change events for all INSERT
, UPDATE
, and DELETE
operations that occur in a table to a single Apache Kafka topic that is specific to that table. The connector uses the following convention to name change event topics:
topicPrefix.schemaName.tableName
The following list provides definitions for the components of the default name:
- topicPrefix
-
The topic prefix as specified by the
topic.prefix
configuration property. - schemaName
- The name of the database schema in which the change event occurred.
- tableName
- The name of the database table in which the change event occurred.
For example, suppose that fulfillment
is the logical server name in the configuration for a connector that is capturing changes in a PostgreSQL installation that has a postgres
database and an inventory
schema that contains four tables: products
, products_on_hand
, customers
, and orders
. The connector would stream records to these four Kafka topics:
-
fulfillment.inventory.products
-
fulfillment.inventory.products_on_hand
-
fulfillment.inventory.customers
-
fulfillment.inventory.orders
Now suppose that the tables are not part of a specific schema but were created in the default public
PostgreSQL schema. The names of the Kafka topics would be:
-
fulfillment.public.products
-
fulfillment.public.products_on_hand
-
fulfillment.public.customers
-
fulfillment.public.orders
The connector applies similar naming conventions to label its transaction metadata topics.
If the default topic names do not meet your requirements, you can configure custom topic names. To configure custom topic names, you specify regular expressions in the logical topic routing SMT. For more information about using the logical topic routing SMT to customize topic naming, see Topic routing.
2.6.2.8. Debezium PostgreSQL connector-generated events that represent transaction boundaries
Debezium can generate events that represent transaction boundaries and that enrich data change event messages.
Debezium registers and receives metadata only for transactions that occur after you deploy the connector. Metadata for transactions that occur before you deploy the connector is not available.
For every transaction BEGIN
and END
, Debezium generates an event that contains the following fields:
status
-
BEGIN
orEND
. id
-
String representation of the unique transaction identifier, which is composed of the Postgres transaction ID and the LSN of the given operation separated by a colon; that is, the format is
txID:LSN
. ts_ms
-
The time of a transaction boundary event (
BEGIN
orEND
event) at the data source. If the data source does not provide Debezium with the event time, then the field instead represents the time at which Debezium processes the event. event_count
(forEND
events)- Total number of events emitted by the transaction.
data_collections
(forEND
events)-
An array of pairs of
data_collection
andevent_count
elements that indicates the number of events that the connector emits for changes that originate from a data collection.
Example
{ "status": "BEGIN", "id": "571:53195829", "ts_ms": 1486500577125, "event_count": null, "data_collections": null } { "status": "END", "id": "571:53195832", "ts_ms": 1486500577691, "event_count": 2, "data_collections": [ { "data_collection": "s1.a", "event_count": 1 }, { "data_collection": "s2.a", "event_count": 1 } ] }
Unless overridden via the topic.transaction
option, transaction events are written to the topic named <topic.prefix>
.transaction
.
Change data event enrichment
When transaction metadata is enabled, the data message Envelope
is enriched with a new transaction
field. This field provides information about every event in the form of a composite of fields:
id
- String representation of unique transaction identifier.
total_order
- The absolute position of the event among all events generated by the transaction.
data_collection_order
- The per-data collection position of the event among all events that were emitted by the transaction.
Following is an example of a message:
{ "before": null, "after": { "pk": "2", "aa": "1" }, "source": { ... }, "op": "c", "ts_ms": "1580390884335", "ts_us": "1580390884335451", "ts_ns": "1580390884335451325", "transaction": { "id": "571:53195832", "total_order": "1", "data_collection_order": "1" } }
2.6.3. Descriptions of Debezium PostgreSQL connector data change events
The Debezium PostgreSQL connector generates a data change event for each row-level INSERT
, UPDATE
, and DELETE
operation. Each event contains a key and a value. The structure of the key and the value depends on the table that was changed.
Debezium and Kafka Connect are designed around continuous streams of event messages. However, the structure of these events may change over time, which can be difficult for consumers to handle. To address this, each event contains the schema for its content or, if you are using a schema registry, a schema ID that a consumer can use to obtain the schema from the registry. This makes each event self-contained.
The following skeleton JSON shows the basic four parts of a change event. However, how you configure the Kafka Connect converter that you choose to use in your application determines the representation of these four parts in change events. A schema
field is in a change event only when you configure the converter to produce it. Likewise, the event key and event payload are in a change event only if you configure a converter to produce it. If you use the JSON converter and you configure it to produce all four basic change event parts, change events have this structure:
{ "schema": { 1 ... }, "payload": { 2 ... }, "schema": { 3 ... }, "payload": { 4 ... }, }
Item | Field name | Description |
---|---|---|
1 |
|
The first |
2 |
|
The first |
3 |
|
The second |
4 |
|
The second |
The default behavior is that the connector streams change event records to topics with names that are the same as the event’s originating table.
Starting with Kafka 0.10, Kafka can optionally record the event key and value with the timestamp at which the message was created (recorded by the producer) or written to the log by Kafka.
The PostgreSQL connector ensures that all Kafka Connect schema names adhere to the Avro schema name format. This means that the logical server name must start with a Latin letter or an underscore, that is, a-z, A-Z, or _. Each remaining character in the logical server name and each character in the schema and table names must be a Latin letter, a digit, or an underscore, that is, a-z, A-Z, 0-9, or _. If there is an invalid character it is replaced with an underscore character.
This can lead to unexpected conflicts if the logical server name, a schema name, or a table name contains invalid characters, and the only characters that distinguish names from one another are invalid and thus replaced with underscores.
Details are in the following topics:
2.6.3.1. About keys in Debezium PostgreSQL change events
For a given table, the change event’s key has a structure that contains a field for each column in the primary key of the table at the time the event was created. Alternatively, if the table has REPLICA IDENTITY
set to FULL
or USING INDEX
there is a field for each unique key constraint.
Consider a customers
table defined in the public
database schema and the example of a change event key for that table.
Example table
CREATE TABLE customers ( id SERIAL, first_name VARCHAR(255) NOT NULL, last_name VARCHAR(255) NOT NULL, email VARCHAR(255) NOT NULL, PRIMARY KEY(id) );
Example change event key
If the topic.prefix
connector configuration property has the value PostgreSQL_server
, every change event for the customers
table while it has this definition has the same key structure, which in JSON looks like this:
{ "schema": { 1 "type": "struct", "name": "PostgreSQL_server.public.customers.Key", 2 "optional": false, 3 "fields": [ 4 { "name": "id", "index": "0", "schema": { "type": "INT32", "optional": "false" } } ] }, "payload": { 5 "id": "1" }, }
Item | Field name | Description |
---|---|---|
1 |
|
The schema portion of the key specifies a Kafka Connect schema that describes what is in the key’s |
2 |
|
Name of the schema that defines the structure of the key’s payload. This schema describes the structure of the primary key for the table that was changed. Key schema names have the format connector-name.database-name.table-name.
|
3 |
|
Indicates whether the event key must contain a value in its |
4 |
|
Specifies each field that is expected in the |
5 |
|
Contains the key for the row for which this change event was generated. In this example, the key contains a single id field whose value is 1. |
Although the column.exclude.list
and column.include.list
connector configuration properties allow you to capture only a subset of table columns, all columns in a primary or unique key are always included in the event’s key.
If the table does not have a primary or unique key, then the change event’s key is null. The rows in a table without a primary or unique key constraint cannot be uniquely identified.
2.6.3.2. About values in Debezium PostgreSQL change events
The value in a change event is a bit more complicated than the key. Like the key, the value has a schema
section and a payload
section. The schema
section contains the schema that describes the Envelope
structure of the payload
section, including its nested fields. Change events for operations that create, update or delete data all have a value payload with an envelope structure.
Consider the same sample table that was used to show an example of a change event key:
CREATE TABLE customers ( id SERIAL, first_name VARCHAR(255) NOT NULL, last_name VARCHAR(255) NOT NULL, email VARCHAR(255) NOT NULL, PRIMARY KEY(id) );
The value portion of a change event for a change to this table varies according to the REPLICA IDENTITY
setting and the operation that the event is for.
Details follow in these sections:
Replica identity
REPLICA IDENTITY is a PostgreSQL-specific table-level setting that determines the amount of information that is available to the logical decoding plug-in for UPDATE
and DELETE
events. More specifically, the setting of REPLICA IDENTITY
controls what (if any) information is available for the previous values of the table columns involved, whenever an UPDATE
or DELETE
event occurs.
There are 4 possible values for REPLICA IDENTITY
:
DEFAULT
- The default behavior is thatUPDATE
andDELETE
events contain the previous values for the primary key columns of a table if that table has a primary key. For anUPDATE
event, only the primary key columns with changed values are present.If a table does not have a primary key, the connector does not emit
UPDATE
orDELETE
events for that table. For a table without a primary key, the connector emits only create events. Typically, a table without a primary key is used for appending messages to the end of the table, which means thatUPDATE
andDELETE
events are not useful.-
NOTHING
- Emitted events forUPDATE
andDELETE
operations do not contain any information about the previous value of any table column. -
FULL
- Emitted events forUPDATE
andDELETE
operations contain the previous values of all columns in the table. -
INDEX
index-name - Emitted events forUPDATE
andDELETE
operations contain the previous values of the columns contained in the specified index.UPDATE
events also contain the indexed columns with the updated values.
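For example, to make update and delete events for the sample customers table carry the previous values of all columns, you might set the table’s replica identity to FULL, as in the following sketch:
ALTER TABLE public.customers REPLICA IDENTITY FULL;
-- Optional check: relreplident is 'f' (FULL), 'd' (DEFAULT), 'n' (NOTHING), or 'i' (INDEX)
SELECT relreplident FROM pg_class WHERE oid = 'public.customers'::regclass;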
create events
The following example shows the value portion of a change event that the connector generates for an operation that creates data in the customers
table:
{ "schema": { 1 "type": "struct", "fields": [ { "type": "struct", "fields": [ { "type": "int32", "optional": false, "field": "id" }, { "type": "string", "optional": false, "field": "first_name" }, { "type": "string", "optional": false, "field": "last_name" }, { "type": "string", "optional": false, "field": "email" } ], "optional": true, "name": "PostgreSQL_server.inventory.customers.Value", 2 "field": "before" }, { "type": "struct", "fields": [ { "type": "int32", "optional": false, "field": "id" }, { "type": "string", "optional": false, "field": "first_name" }, { "type": "string", "optional": false, "field": "last_name" }, { "type": "string", "optional": false, "field": "email" } ], "optional": true, "name": "PostgreSQL_server.inventory.customers.Value", "field": "after" }, { "type": "struct", "fields": [ { "type": "string", "optional": false, "field": "version" }, { "type": "string", "optional": false, "field": "connector" }, { "type": "string", "optional": false, "field": "name" }, { "type": "int64", "optional": false, "field": "ts_ms" }, { "type": "int64", "optional": false, "field": "ts_us" }, { "type": "int64", "optional": false, "field": "ts_ns" }, { "type": "boolean", "optional": true, "default": false, "field": "snapshot" }, { "type": "string", "optional": false, "field": "db" }, { "type": "string", "optional": false, "field": "schema" }, { "type": "string", "optional": false, "field": "table" }, { "type": "int64", "optional": true, "field": "txId" }, { "type": "int64", "optional": true, "field": "lsn" }, { "type": "int64", "optional": true, "field": "xmin" } ], "optional": false, "name": "io.debezium.connector.postgresql.Source", 3 "field": "source" }, { "type": "string", "optional": false, "field": "op" }, { "type": "int64", "optional": true, "field": "ts_ms" }, { "type": "int64", "optional": true, "field": "ts_us" }, { "type": "int64", "optional": true, "field": "ts_ns" } ], "optional": false, "name": "PostgreSQL_server.inventory.customers.Envelope" 4 }, "payload": { 5 "before": null, 6 "after": { 7 "id": 1, "first_name": "Anne", "last_name": "Kretchmar", "email": "annek@noanswer.org" }, "source": { 8 "version": "2.7.3.Final", "connector": "postgresql", "name": "PostgreSQL_server", "ts_ms": 1559033904863, "ts_us": 1559033904863123, "ts_ns": 1559033904863123000, "snapshot": true, "db": "postgres", "sequence": "[\"24023119\",\"24023128\"]", "schema": "public", "table": "customers", "txId": 555, "lsn": 24023128, "xmin": null }, "op": "c", 9 "ts_ms": 1559033904863, 10 "ts_us": 1559033904863841, 11 "ts_ns": 1559033904863841257 12 } }
Item | Field name | Description |
---|---|---|
1 |
| The value’s schema, which describes the structure of the value’s payload. A change event’s value schema is the same in every change event that the connector generates for a particular table. |
2 |
|
In the |
3 |
|
|
4 |
|
|
5 |
|
The value’s actual data. This is the information that the change event is providing. |
6 |
|
An optional field that specifies the state of the row before the event occurred. When the Note
Whether or not this field is available is dependent on the |
7 |
|
An optional field that specifies the state of the row after the event occurred. In this example, the |
8 |
| Mandatory field that describes the source metadata for the event. This field contains information that you can use to compare this event with other events, with regard to the origin of the events, the order in which the events occurred, and whether events were part of the same transaction. The source metadata includes:
|
9 |
|
Mandatory string that describes the type of operation that caused the connector to generate the event. In this example,
|
10 |
|
Optional field that displays the time at which the connector processed the event. The time is based on the system clock in the JVM running the Kafka Connect task. |
update events
The value of a change event for an update in the sample customers
table has the same schema as a create event for that table. Likewise, the event value’s payload has the same structure. However, the event value payload contains different values in an update event. Here is an example of a change event value in an event that the connector generates for an update in the customers
table:
{ "schema": { ... }, "payload": { "before": { 1 "id": 1 }, "after": { 2 "id": 1, "first_name": "Anne Marie", "last_name": "Kretchmar", "email": "annek@noanswer.org" }, "source": { 3 "version": "2.7.3.Final", "connector": "postgresql", "name": "PostgreSQL_server", "ts_ms": 1559033904863, "ts_us": 1559033904863769, "ts_ns": 1559033904863769000, "snapshot": false, "db": "postgres", "schema": "public", "table": "customers", "txId": 556, "lsn": 24023128, "xmin": null }, "op": "u", 4 "ts_ms": 1465584025523, 5 "ts_us": 1465584025523514, 6 "ts_ns": 1465584025523514964, 7 } }
Item | Field name | Description |
---|---|---|
1 |
|
An optional field that contains values that were in the row before the database commit. In this example, only the primary key column,
For an update event to contain the previous values of all columns in the row, you would have to change the |
2 |
|
An optional field that specifies the state of the row after the event occurred. In this example, the |
3 |
|
Mandatory field that describes the source metadata for the event. The
|
4 |
|
Mandatory string that describes the type of operation. In an update event value, the |
5 |
|
Optional field that displays the time at which the connector processed the event. The time is based on the system clock in the JVM running the Kafka Connect task. |
Updating the columns for a row’s primary/unique key changes the value of the row’s key. When a key changes, Debezium outputs three events: a DELETE
event and a tombstone event with the old key for the row, followed by an event with the new key for the row. Details are in the next section.
Primary key updates
An UPDATE
operation that changes a row’s primary key field(s) is known as a primary key change. For a primary key change, in place of sending an UPDATE
event record, the connector sends a DELETE
event record for the old key and a CREATE
event record for the new (updated) key. These events have the usual structure and content, and in addition, each one has a message header related to the primary key change:
-
The
DELETE
event record has__debezium.newkey
as a message header. The value of this header is the new primary key for the updated row. -
The
CREATE
event record has__debezium.oldkey
as a message header. The value of this header is the previous (old) primary key that the updated row had.
delete events
The value in a delete change event has the same schema
portion as create and update events for the same table. The payload
portion in a delete event for the sample customers
table looks like this:
{ "schema": { ... }, "payload": { "before": { 1 "id": 1 }, "after": null, 2 "source": { 3 "version": "2.7.3.Final", "connector": "postgresql", "name": "PostgreSQL_server", "ts_ms": 1559033904863, "ts_us": 1559033904863852, "ts_ns": 1559033904863852000, "snapshot": false, "db": "postgres", "schema": "public", "table": "customers", "txId": 556, "lsn": 46523128, "xmin": null }, "op": "d", 4 "ts_ms": 1465581902461, 5 "ts_us": 1465581902461496, 6 "ts_ns": 1465581902461496187, 7 } }
Item | Field name | Description |
---|---|---|
1 |
|
Optional field that specifies the state of the row before the event occurred. In a delete event value, the |
2 |
|
Optional field that specifies the state of the row after the event occurred. In a delete event value, the |
3 |
|
Mandatory field that describes the source metadata for the event. In a delete event value, the
|
4 |
|
Mandatory string that describes the type of operation. The |
5 |
|
Optional field that displays the time at which the connector processed the event. The time is based on the system clock in the JVM running the Kafka Connect task. |
A delete change event record provides a consumer with the information it needs to process the removal of this row.
For a consumer to be able to process a delete event generated for a table that does not have a primary key, set the table’s REPLICA IDENTITY
to FULL
. When a table does not have a primary key and the table’s REPLICA IDENTITY
is set to DEFAULT
or NOTHING
, a delete event has no before
field.
PostgreSQL connector events are designed to work with Kafka log compaction. Log compaction enables removal of some older messages as long as at least the most recent message for every key is kept. This lets Kafka reclaim storage space while ensuring that the topic contains a complete data set and can be used for reloading key-based state.
Tombstone events
When a row is deleted, the delete event value still works with log compaction, because Kafka can remove all earlier messages that have that same key. However, for Kafka to remove all messages that have that same key, the message value must be null
. To make this possible, the PostgreSQL connector follows a delete event with a special tombstone event that has the same key but a null
value.
truncate events
A truncate change event signals that a table has been truncated. The message key is null
in this case, and the message value looks like this:
{ "schema": { ... }, "payload": { "source": { 1 "version": "2.7.3.Final", "connector": "postgresql", "name": "PostgreSQL_server", "ts_ms": 1559033904863, "ts_us": 1559033904863112, "ts_ns": 1559033904863112000, "snapshot": false, "db": "postgres", "schema": "public", "table": "customers", "txId": 556, "lsn": 46523128, "xmin": null }, "op": "t", 2 "ts_ms": 1559033904961, 3 "ts_us": 1559033904961654, 4 "ts_ns": 1559033904961654789 5 } }
Item | Field name | Description |
---|---|---|
1 |
|
Mandatory field that describes the source metadata for the event. In a truncate event value, the
|
2 |
|
Mandatory string that describes the type of operation. The |
3 |
|
Optional field that displays the time at which the connector processed the event. The time is based on the system clock in the JVM running the Kafka Connect task. |
If a single TRUNCATE
statement applies to multiple tables, the connector emits one truncate change event record for each truncated table.
Note that because truncate events represent a change made to an entire table and do not have a message key, unless you are working with topics that have a single partition, there are no ordering guarantees between the change events that pertain to a table (create, update, and so on) and the truncate events for that table. For instance, a consumer might receive an update event only after a truncate event for that table, when those events are read from different partitions.
message events
This event type is only supported through the pgoutput
plug-in on Postgres 14+ (Postgres Documentation).
A message event signals that a generic logical decoding message has been inserted directly into the WAL, typically with the pg_logical_emit_message
function. The message key is a Struct
with a single field named prefix
in this case, carrying the prefix specified when inserting the message. The message value looks like this for transactional messages:
{ "schema": { ... }, "payload": { "source": { 1 "version": "2.7.3.Final", "connector": "postgresql", "name": "PostgreSQL_server", "ts_ms": 1559033904863, "ts_us": 1559033904863879, "ts_ns": 1559033904863879000, "snapshot": false, "db": "postgres", "schema": "", "table": "", "txId": 556, "lsn": 46523128, "xmin": null }, "op": "m", 2 "ts_ms": 1559033904961, 3 "ts_us": 1559033904961621, 4 "ts_ns": 1559033904961621379, 5 "message": { 6 "prefix": "foo", "content": "Ymfy" } } }
Unlike other event types, non-transactional messages will not have any associated BEGIN
or END
transaction events. The message value looks like this for non-transactional messages:
{ "schema": { ... }, "payload": { "source": { 1 "version": "2.7.3.Final", "connector": "postgresql", "name": "PostgreSQL_server", "ts_ms": 1559033904863, "ts_us": 1559033904863762, "ts_ns": 1559033904863762000, "snapshot": false, "db": "postgres", "schema": "", "table": "", "lsn": 46523128, "xmin": null }, "op": "m", 2 "ts_ms": 1559033904961, 3 "ts_us": 1559033904961741, 4 "ts_ns": 1559033904961741698, 5 "message": { 6 "prefix": "foo", "content": "Ymfy" } }
Item | Field name | Description |
---|---|---|
1 |
|
Mandatory field that describes the source metadata for the event. In a message event value, the
|
2 |
|
Mandatory string that describes the type of operation. The |
3 |
|
Optional field that displays the time at which the connector processed the event. The time is based on the system clock in the JVM running the Kafka Connect task.
For non-transactional message events, the |
4 |
| Field that contains the message metadata
|
2.6.4. How Debezium PostgreSQL connectors map data types
The PostgreSQL connector represents changes to rows with events that are structured like the table in which the row exists. The event contains a field for each column value. How that value is represented in the event depends on the PostgreSQL data type of the column. The following sections describe how the connector maps PostgreSQL data types to a literal type and a semantic type in event fields.
-
literal type describes how the value is literally represented using Kafka Connect schema types:
INT8
,INT16
,INT32
,INT64
,FLOAT32
,FLOAT64
,BOOLEAN
,STRING
,BYTES
,ARRAY
,MAP
, andSTRUCT
. - semantic type describes how the Kafka Connect schema captures the meaning of the field using the name of the Kafka Connect schema for the field.
If the default data type conversions do not meet your needs, you can create a custom converter for the connector.
Details are in the following sections:
Basic types
The following table describes how the connector maps basic types.
PostgreSQL data type | Literal type (schema type) | Semantic type (schema name) and Notes |
---|---|---|
|
| n/a |
|
| n/a |
|
|
|
|
|
|
|
| n/a |
|
| n/a |
|
| n/a |
|
| n/a |
|
| n/a |
|
| n/a |
|
| n/a |
|
| n/a |
|
| n/a |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
n/a |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| n/a |
|
| n/a |
|
|
n/a |
|
|
n/a |
|
|
n/a |
|
|
n/a |
|
|
n/a |
|
|
n/a |
|
|
|
Temporal types
Other than PostgreSQL’s TIMESTAMPTZ
and TIMETZ
data types, which contain time zone information, how temporal types are mapped depends on the value of the time.precision.mode
connector configuration property. The following sections describe these mappings:
time.precision.mode=adaptive
When the time.precision.mode
property is set to adaptive
, the default, the connector determines the literal type and semantic type based on the column’s data type definition. This ensures that events exactly represent the values in the database.
PostgreSQL data type | Literal type (schema type) | Semantic type (schema name) and Notes |
---|---|---|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
time.precision.mode=adaptive_time_microseconds
When the time.precision.mode
configuration property is set to adaptive_time_microseconds
, the connector determines the literal type and semantic type for temporal types based on the column’s data type definition. This ensures that events exactly represent the values in the database, except all TIME
fields are captured as microseconds.
PostgreSQL data type | Literal type (schema type) | Semantic type (schema name) and Notes |
---|---|---|
|
|
|
|
|
|
|
|
|
|
|
|
time.precision.mode=connect
When the time.precision.mode
configuration property is set to connect
, the connector uses Kafka Connect logical types. This may be useful when consumers can handle only the built-in Kafka Connect logical types and are unable to handle variable-precision time values. However, since PostgreSQL supports microsecond precision, the events generated by a connector with the connect
time precision mode result in a loss of precision when the database column has a fractional second precision value that is greater than 3.
PostgreSQL data type | Literal type (schema type) | Semantic type (schema name) and Notes |
---|---|---|
|
|
|
|
|
|
|
|
|
TIMESTAMP type
The TIMESTAMP
type represents a timestamp without time zone information. Such columns are converted into an equivalent Kafka Connect value based on UTC. For example, the TIMESTAMP
value "2018-06-20 15:13:16.945104" is represented by an io.debezium.time.MicroTimestamp
with the value "1529507596945104" when time.precision.mode
is not set to connect
.
The timezone of the JVM running Kafka Connect and Debezium does not affect this conversion.
PostgreSQL supports using +/-infinity
values in TIMESTAMP
columns. These special values are converted to timestamps with value 9223372036825200000
in case of positive infinity or -9223372036832400000
in case of negative infinity. This behavior mimics the standard behavior of the PostgreSQL JDBC driver. For reference, see the org.postgresql.PGStatement
interface.
Decimal types
The setting of the PostgreSQL connector configuration property decimal.handling.mode
determines how the connector maps decimal types.
When the decimal.handling.mode
property is set to precise
, the connector uses the Kafka Connect org.apache.kafka.connect.data.Decimal
logical type for all DECIMAL
, NUMERIC
and MONEY
columns. This is the default mode.
PostgreSQL data type | Literal type (schema type) | Semantic type (schema name) and Notes |
---|---|---|
|
|
|
|
|
|
|
|
|
There is an exception to this rule. When the NUMERIC
or DECIMAL
types are used without scale constraints, the values coming from the database have a different (variable) scale for each value. In this case, the connector uses io.debezium.data.VariableScaleDecimal
, which contains both the value and the scale of the transferred value.
PostgreSQL data type | Literal type (schema type) | Semantic type (schema name) and Notes |
---|---|---|
|
|
|
|
|
|
When the decimal.handling.mode
property is set to double
, the connector represents all DECIMAL
, NUMERIC
and MONEY
values as Java double values and encodes them as shown in the following table.
PostgreSQL data type | Literal type (schema type) | Semantic type (schema name) |
---|---|---|
|
| |
|
| |
|
|
The last possible setting for the decimal.handling.mode
configuration property is string
. In this case, the connector represents DECIMAL
, NUMERIC
and MONEY
values as their formatted string representation, and encodes them as shown in the following table.
PostgreSQL data type | Literal type (schema type) | Semantic type (schema name) |
---|---|---|
|
| |
|
| |
|
|
PostgreSQL supports NaN
(not a number) as a special value to be stored in DECIMAL
/NUMERIC
values when the setting of decimal.handling.mode
is string
or double
. In this case, the connector encodes NaN
as either Double.NaN
or the string constant NAN
.
HSTORE type
The setting of the PostgreSQL connector configuration property hstore.handling.mode
determines how the connector maps HSTORE
values.
When the hstore.handling.mode
property is set to json
(the default), the connector represents HSTORE
values as string representations of JSON values and encodes them as shown in the following table. When the hstore.handling.mode
property is set to map
, the connector uses the MAP
schema type for HSTORE
values.
PostgreSQL data type | Literal type (schema type) | Semantic type (schema name) and Notes |
---|---|---|
|
|
|
|
|
n/a |
Domain types
PostgreSQL supports user-defined types that are based on other underlying types. When such column types are used, Debezium exposes the column’s representation based on the full type hierarchy.
Capturing changes in columns that use PostgreSQL domain types requires special consideration. When a column is defined to contain a domain type that extends one of the default database types and the domain type defines a custom length or scale, the generated schema inherits that defined length or scale.
When a column is defined to contain a domain type that extends another domain type that defines a custom length or scale, the generated schema does not inherit the defined length or scale because that information is not available in the PostgreSQL driver’s column metadata.
Network address types
PostgreSQL has data types that can store IPv4, IPv6, and MAC addresses. It is better to use these types instead of plain text types to store network addresses. Network address types offer input error checking and specialized operators and functions.
PostgreSQL data type | Literal type (schema type) | Semantic type (schema name) and Notes |
---|---|---|
|
|
n/a |
|
|
n/a |
|
|
n/a |
|
|
n/a |
PostGIS types
The PostgreSQL connector supports all PostGIS data types.
PostGIS data type | Literal type (schema type) | Semantic type (schema name) and Notes |
---|---|---|
|
|
For format details, see Open Geospatial Consortium Simple Features Access specification. |
|
|
For format details, see Open Geospatial Consortium Simple Features Access specification. |
Toasted values
PostgreSQL has a hard limit on the page size. This means that values that are larger than approximately 8 KB need to be stored by using TOAST storage. This impacts replication messages that are coming from the database. Values that were stored by using the TOAST mechanism and that have not been changed are not included in the message, unless they are part of the table’s replica identity. There is no safe way for Debezium to read the missing value out-of-band directly from the database, as this would potentially lead to race conditions. Consequently, Debezium follows these rules to handle toasted values:
-
Tables with
REPLICA IDENTITY FULL
- TOAST column values are part of thebefore
andafter
fields in change events just like any other column. -
Tables with
REPLICA IDENTITY DEFAULT
- When receiving anUPDATE
event from the database, any unchanged TOAST column value that is not part of the replica identity is not contained in the event. Similarly, when receiving aDELETE
event, no TOAST columns, if any, are in thebefore
field. As Debezium cannot safely provide the column value in this case, the connector returns a placeholder value as defined by the connector configuration property,unavailable.value.placeholder
.
Default values
If a default value is specified for a column in the database schema, the PostgreSQL connector will attempt to propagate this value to the Kafka schema whenever possible. Most common data types are supported, including:
-
BOOLEAN
-
Numeric types (
INT
,FLOAT
,NUMERIC
, etc.) -
Text types (
CHAR
,VARCHAR
,TEXT
, etc.) -
Temporal types (
DATE
,TIME
,INTERVAL
,TIMESTAMP
,TIMESTAMPTZ
) -
JSON
,JSONB
,XML
-
UUID
Note that for temporal types, parsing of the default value is provided by PostgreSQL libraries; therefore, any string representation which is normally supported by PostgreSQL should also be supported by the connector.
In the case that the default value is generated by a function rather than being directly specified in-line, the connector will instead export the equivalent of 0
for the given data type. These values include:
-
FALSE
forBOOLEAN
-
0
with appropriate precision, for numeric types - Empty string for text/XML types
-
{}
for JSON types -
1970-01-01
forDATE
,TIMESTAMP
,TIMESTAMPTZ
types -
00:00
forTIME
-
EPOCH
forINTERVAL
-
00000000-0000-0000-0000-000000000000
forUUID
This support currently extends only to explicit usage of functions. For example, CURRENT_TIMESTAMP(6)
is supported with parentheses, but CURRENT_TIMESTAMP
is not.
Support for the propagation of default values exists primarily to allow for safe schema evolution when using the PostgreSQL connector with a schema registry which enforces compatibility between schema versions. Due to this primary concern, as well as the refresh behaviors of the different plug-ins, the default value present in the Kafka schema is not guaranteed to always be in-sync with the default value in the database schema.
- Default values may appear 'late' in the Kafka schema, depending on when and how a given plug-in triggers a refresh of the in-memory schema. Values may never appear, or may be skipped, in the Kafka schema if the default changes multiple times in between refreshes.
- Default values may appear 'early' in the Kafka schema, if a schema refresh is triggered while the connector has records waiting to be processed. This is due to the column metadata being read from the database at refresh time, rather than being present in the replication message. This may occur if the connector is behind and a refresh occurs, or on connector start if the connector was stopped for a time while updates continued to be written to the source database.
This behavior may be unexpected, but it is still safe. Only the schema definition is affected, while the real values present in the message remain consistent with what was written to the source database.
Custom converters
By default, Debezium does not replicate data from columns with custom data types, such as composite types that are created by using SQL CREATE TYPE
statements. To replicate columns with custom data types, follow the instructions for creating a custom converter, with a few important caveats:
-
Set the
include.unknown.datatypes
property in the connector configuration to true
. The default false
setting causes the custom converter to always return null
values. The type of value that is passed to the converter depends on the logical decoding output plug-in that is configured for the replication slot.
-
decoderbufs
passes a byte array (byte[]
) representation of the column data. -
pgoutput
passes a string representation of the column data.
-
2.6.5. Setting up PostgreSQL to run a Debezium connector
This release of Debezium supports only the native pgoutput
logical replication stream. To set up PostgreSQL so that it uses the pgoutput
plug-in, you must enable a replication slot, and configure a user with sufficient privileges to perform the replication.
Details are in the following topics:
-
Section 2.6.5.1, “Configuring a replication slot for the Debezium
pgoutput
plug-in” - Section 2.6.5.2, “Setting up PostgreSQL permissions for the Debezium connector”
- Section 2.6.5.3, “Setting privileges to enable Debezium to create PostgreSQL publications”
- Section 2.6.5.4, “Configuring PostgreSQL to allow replication with the Debezium connector host”
- Section 2.6.5.5, “Configuring PostgreSQL to manage Debezium WAL disk space consumption”
- Section 2.6.5.6, “Upgrading PostgreSQL databases that Debezium captures from”
2.6.5.1. Configuring a replication slot for the Debezium pgoutput
plug-in
PostgreSQL’s logical decoding uses replication slots. To configure a replication slot, specify the following in the postgresql.conf
file:
wal_level=logical max_wal_senders=1 max_replication_slots=1
These settings instruct the PostgreSQL server as follows:
-
wal_level
- Use logical decoding with the write-ahead log. -
max_wal_senders
- Use a maximum of one separate process for processing WAL changes. -
max_replication_slots
- Allow a maximum of one replication slot to be created for streaming WAL changes.
Replication slots are guaranteed to retain all WAL entries that are required for Debezium even during Debezium outages. Consequently, it is important to closely monitor replication slots to avoid:
- Too much disk consumption
- Any conditions, such as catalog bloat, that can happen if a replication slot stays unused for too long
For more information, see the PostgreSQL documentation for replication slots.
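One way to monitor a replication slot, sketched here for a hypothetical slot named debezium, is to compare the slot positions against the current WAL write position:
-- Reports how much WAL the slot is retaining behind the current write position.
SELECT slot_name, active, confirmed_flush_lsn, restart_lsn,
       pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn) AS retained_wal_bytes
FROM pg_replication_slots
WHERE slot_name = 'debezium';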
Familiarity with the mechanics and configuration of the PostgreSQL write-ahead log is helpful for using the Debezium PostgreSQL connector.
2.6.5.2. Setting up PostgreSQL permissions for the Debezium connector
Setting up a PostgreSQL server to run a Debezium connector requires a database user that can perform replications. Replication can be performed only by a database user that has appropriate permissions and only for a configured number of hosts.
Although, by default, superusers have the necessary REPLICATION
and LOGIN
roles, as mentioned in Security, it is best not to provide the Debezium replication user with elevated privileges. Instead, create a Debezium user that has the minimum required privileges.
Prerequisites
- PostgreSQL administrative permissions.
Procedure
To provide a user with replication permissions, define a PostgreSQL role that has at least the
REPLICATION
andLOGIN
permissions, and then grant that role to the user. For example:CREATE ROLE <name> REPLICATION LOGIN;
2.6.5.3. Setting privileges to enable Debezium to create PostgreSQL publications
Debezium streams change events for PostgreSQL source tables from publications that are created for the tables. Publications contain a filtered set of change events that are generated from one or more tables. The data in each publication is filtered based on the publication specification. The specification can be created by the PostgreSQL database administrator or by the Debezium connector. To permit the Debezium PostgreSQL connector to create publications and specify the data to replicate to them, the connector must operate with specific privileges in the database.
There are several options for determining how publications are created. In general, it is best to manually create publications for the tables that you want to capture, before you set up the connector. However, you can configure your environment in a way that permits Debezium to create publications automatically, and to specify the data that is added to them.
Debezium uses include list and exclude list properties to specify how data is inserted in the publication. For more information about the options for enabling Debezium to create publications, see publication.autocreate.mode
.
For Debezium to create a PostgreSQL publication, it must run as a user that has the following privileges:
- Replication privileges in the database to add the table to a publication.
-
CREATE
privileges on the database to add publications. -
SELECT
privileges on the tables to copy the initial table data. Table owners automatically haveSELECT
permission for the table.
To add tables to a publication, the user must be an owner of the table. But because the source table already exists, you need a mechanism to share ownership with the original owner. To enable shared ownership, you create a PostgreSQL replication group, and then add the existing table owner and the replication user to the group.
Procedure
Create a replication group.
CREATE ROLE <replication_group>;
Add the original owner of the table to the group.
GRANT <replication_group> TO <original_owner>;
Add the Debezium replication user to the group.
GRANT <replication_group> TO <replication_user>;
Transfer ownership of the table to
<replication_group>
.
ALTER TABLE <table_name> OWNER TO <replication_group>;
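A worked version of this procedure, using hypothetical names (replication_group as the shared role, original_owner as the current table owner, debezium as the replication user, and inventory.orders as the captured table), might look like the following sketch:
CREATE ROLE replication_group;                            -- shared ownership role
GRANT replication_group TO original_owner;                -- existing table owner joins the group
GRANT replication_group TO debezium;                      -- Debezium replication user joins the group
ALTER TABLE inventory.orders OWNER TO replication_group;  -- the group now owns the table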
For Debezium to specify the capture configuration, the value of publication.autocreate.mode
must be set to filtered
.
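A sketch of the corresponding connector configuration fragment; the publication name is illustrative, and the include list determines which tables the filtered publication contains:
"publication.autocreate.mode": "filtered",
"publication.name": "dbz_publication",
"table.include.list": "inventory.orders,inventory.customers"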
2.6.5.4. Configuring PostgreSQL to allow replication with the Debezium connector host
To enable Debezium to replicate PostgreSQL data, you must configure the database to permit replication with the host that runs the PostgreSQL connector. To specify the clients that are permitted to replicate with the database, add entries to the PostgreSQL host-based authentication file, pg_hba.conf
. For more information about the pg_hba.conf
file, see the PostgreSQL documentation.
Procedure
Add entries to the
pg_hba.conf
file to specify the Debezium connector hosts that can replicate with the database host. For example,pg_hba.conf
file example:local replication <youruser> trust 1 host replication <youruser> 127.0.0.1/32 trust 2 host replication <youruser> ::1/128 trust 3
Table 2.152. Descriptions of pg_hba.conf settings Item Description 1
Instructs the server to allow replication for
<youruser>
locally, that is, on the server machine.2
Instructs the server to allow
<youruser>
onlocalhost
to receive replication changes usingIPV4
.3
Instructs the server to allow
<youruser>
onlocalhost
to receive replication changes usingIPV6
.
For more information about network masks, see the PostgreSQL documentation.
2.6.5.5. Configuring PostgreSQL to manage Debezium WAL disk space consumption
In certain cases, it is possible for PostgreSQL disk space consumed by WAL files to spike or increase out of usual proportions. There are several possible reasons for this situation:
The LSN up to which the connector has received data is available in the
confirmed_flush_lsn
column of the server’spg_replication_slots
view. Data that is older than this LSN is no longer available, and the database is responsible for reclaiming the disk space.Also in the
pg_replication_slots
view, therestart_lsn
column contains the LSN of the oldest WAL that the connector might require. If the value forconfirmed_flush_lsn
is regularly increasing and the value ofrestart_lsn
lags then the database needs to reclaim the space.The database typically reclaims disk space in batch blocks. This is expected behavior and no action by a user is necessary.
There are many updates in a database that is being tracked but only a tiny number of updates are related to the table(s) and schema(s) for which the connector is capturing changes. This situation can be easily solved with periodic heartbeat events. Set the
heartbeat.interval.ms
connector configuration property.NoteFor the connector to detect and process events from a heartbeat table, you must add the table to the PostgreSQL publication specified by the publication.name property. If this publication predates your Debezium deployment, the connector uses the publications as defined. If the publication is not already configured to automatically replicate changes
FOR ALL TABLES
in the database, you must explicitly add the heartbeat table to the publication, for example,
ALTER PUBLICATION <publicationName> ADD TABLE <heartbeatTableName>;
The PostgreSQL instance contains multiple databases and one of them is a high-traffic database. Debezium captures changes in another database that is low-traffic in comparison to the other database. Debezium then cannot confirm the LSN as replication slots work per-database and Debezium is not invoked. As WAL is shared by all databases, the amount used tends to grow until an event is emitted by the database for which Debezium is capturing changes. To overcome this, it is necessary to:
-
Enable periodic heartbeat record generation with the
heartbeat.interval.ms
connector configuration property. - Regularly emit change events from the database for which Debezium is capturing changes.
A separate process would then periodically update the table by either inserting a new row or repeatedly updating the same row. PostgreSQL then invokes Debezium, which confirms the latest LSN and allows the database to reclaim the WAL space. This task can be automated by means of the
heartbeat.action.query
connector configuration property.
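A sketch of such a heartbeat configuration fragment, assuming a hypothetical debezium_heartbeat table that exists in the captured database and is included in the connector’s publication:
"heartbeat.interval.ms": "10000",
"heartbeat.action.query": "INSERT INTO debezium_heartbeat (id, ts) VALUES (1, now()) ON CONFLICT (id) DO UPDATE SET ts = EXCLUDED.ts"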
Setting up multiple connectors for the same database server
Debezium uses replication slots to stream changes from a database. These replication slots maintain the current position in the form of an LSN (Log Sequence Number), which is a pointer to the location in the WAL that the Debezium connector has consumed. This helps PostgreSQL keep the WAL available until Debezium processes it. A single replication slot can exist for only a single consumer or process, because different consumers might have different states and might need data from different positions.
Since a replication slot can be used by only a single connector, it is essential to create a unique replication slot for each Debezium connector. Note that when a connector is not active, Postgres might allow another connector to consume the replication slot, which can be dangerous and can lead to data loss, because a slot emits each change just once.
In addition to a replication slot, Debezium uses a publication to stream events when using the pgoutput
plug-in. Similar to a replication slot, a publication is defined at the database level for a set of tables. Thus, you need a unique publication for each connector, unless the connectors work on the same set of tables. For more information about the options for enabling Debezium to create publications, see publication.autocreate.mode.
See slot.name
and publication.name
for information about how to set a unique replication slot name and publication name for each connector.
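For example, two connectors that capture from the same database might use configuration fragments like the following sketch, where the slot and publication names are illustrative:
Connector A: "slot.name": "debezium_inventory", "publication.name": "dbz_inventory_publication"
Connector B: "slot.name": "debezium_billing", "publication.name": "dbz_billing_publication"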
2.6.5.6. Upgrading PostgreSQL databases that Debezium captures from
When you upgrade the PostgreSQL database that Debezium uses, you must take specific steps to protect against data loss and to ensure that Debezium continues to operate. In general, Debezium is resilient to interruptions caused by network failures and other outages. For example, when a database server that a connector monitors stops or crashes, after the connector re-establishes communication with the PostgreSQL server, it continues to read from the last position recorded by the log sequence number (LSN) offset. The connector retrieves information about the last recorded offset from the Kafka Connect offsets topic, and queries the configured PostgreSQL replication slot for a log sequence number (LSN) with the same value.
For the connector to start and to capture change events from a PostgreSQL database, a replication slot must be present. However, as part of the PostgreSQL upgrade process, replication slots are removed, and the original slots are not restored after the upgrade completes. As a result, when the connector restarts and requests the last known offset from the replication slot, PostgreSQL cannot return the information.
You can create a new replication slot, but you must do more than create a new slot to guard against data loss. A new replication slot can provide the LSNs only for changes that occur after you create the slot; it cannot provide the offsets for events that occurred before the upgrade. When the connector restarts, it first requests the last known offset from the Kafka offsets topic. It then sends a request to the replication slot to return information for the offset retrieved from the offsets topic. But the new replication slot cannot provide the information that the connector needs to resume streaming from the expected position. The connector then skips any existing change events in the log, and only resumes streaming from the most recent position in the log. This can lead to silent data loss: the connector emits no records for the skipped events, and it does not provide any information to indicate that events were skipped.
For guidance about how to perform a PostgreSQL database upgrade so that Debezium can continue to capture events while minimizing the risk of data loss, see the following procedure.
Procedure
1. Temporarily stop applications that write to the database, or put them into a read-only mode.
2. Back up the database.
3. Temporarily disable write access to the database.
4. Verify that any changes that occurred in the database before you blocked write operations are saved to the write-ahead log (WAL), and that the WAL LSN is reflected on the replication slot.
5. Provide the connector with enough time to capture all event records that are written to the replication slot.
This step ensures that all change events that occurred before the downtime are accounted for, and that they are saved to Kafka.
6. Verify that the connector has finished consuming entries from the replication slot by checking the value of the flushed LSN (see the sketch that follows this procedure).
7. Shut down the connector gracefully by stopping Kafka Connect.
Kafka Connect stops the connectors, flushes all event records to Kafka, and records the last offset received from each connector.
Note: As an alternative to stopping the entire Kafka Connect cluster, you can stop the connector by deleting it. Do not remove the offset topic, because it might be shared by other Kafka connectors. Later, after you restore write access to the database and you are ready to restart the connector, you must recreate the connector.
8. As a PostgreSQL administrator, drop the replication slot on the primary database server. Do not use the slot.drop.on.stop property to drop the replication slot. This property is for testing only.
9. Stop the database.
10. Perform the upgrade using an approved PostgreSQL upgrade procedure, such as pg_upgrade, or pg_dump and pg_restore.
11. (Optional) Use a standard Kafka tool to remove the connector offsets from the offset storage topic.
For an example of how to remove connector offsets, see how to remove connector offsets in the Debezium community FAQ.
12. Restart the database.
13. As a PostgreSQL administrator, create a Debezium logical replication slot on the database. You must create the slot before enabling writes to the database. Otherwise, Debezium cannot capture the changes, resulting in data loss.
For information about setting up a replication slot, see Section 2.6.5.1, “Configuring a replication slot for the Debezium pgoutput plug-in”.
14. Verify that the publication that defines the tables for Debezium to capture is still present after the upgrade. If the publication is not available, connect to the database as a PostgreSQL administrator to create a new publication.
15. If it was necessary to create a new publication in the previous step, update the Debezium connector configuration to add the name of the new publication to the publication.name property.
16. In the connector configuration, rename the connector.
17. In the connector configuration, set slot.name to the name of the Debezium replication slot.
18. Verify that the new replication slot is available.
19. Restore write access to the database and restart any applications that write to the database.
20. In the connector configuration, set the snapshot.mode property to never, and then restart the connector.
Note: If you were unable to verify that Debezium finished reading all database changes in Step 6, you can configure the connector to perform a new snapshot by setting snapshot.mode=initial. If necessary, you can confirm whether the connector read all changes from the replication slot by checking the contents of a database backup that was taken immediately before the upgrade.
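The following is a minimal sketch of the checks in steps 6 and 13 of the preceding procedure. It assumes a replication slot named debezium, the pgoutput plug-in, and a database named inventory; substitute the names that your connector configuration uses.

# Step 6: confirm that the connector has consumed and flushed all entries
# from the replication slot. Once writes are blocked, confirmed_flush_lsn
# should match (or be very close to) the current WAL position.
psql -U postgres -d inventory -c \
  "SELECT slot_name, confirmed_flush_lsn, pg_current_wal_lsn() FROM pg_replication_slots WHERE slot_name = 'debezium';"

# Step 13: after the upgrade, recreate the logical replication slot
# before you re-enable writes to the database.
psql -U postgres -d inventory -c \
  "SELECT pg_create_logical_replication_slot('debezium', 'pgoutput');"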
Additional resources
2.6.6. Deployment of Debezium PostgreSQL connectors
You can use either of the following methods to deploy a Debezium PostgreSQL connector:
- Use Streams for Apache Kafka to automatically build a Kafka Connect container image that includes the connector plug-in.
- Build a custom Kafka Connect container image from a Dockerfile.
Additional resources
2.6.6.1. PostgreSQL connector deployment using Streams for Apache Kafka
Beginning with Debezium 1.7, the preferred method for deploying a Debezium connector is to use Streams for Apache Kafka to build a Kafka Connect container image that includes the connector plug-in.
During the deployment process, you create and use the following custom resources (CRs):
- A KafkaConnect CR that defines your Kafka Connect instance and includes information about the connector artifacts to include in the image.
- A KafkaConnector CR that provides details that include information the connector uses to access the source database. After Streams for Apache Kafka starts the Kafka Connect pod, you start the connector by applying the KafkaConnector CR.
In the build specification for the Kafka Connect image, you can specify the connectors that are available to deploy. For each connector plug-in, you can also specify other components that you want to make available for deployment. For example, you can add Apicurio Registry artifacts, or the Debezium scripting component. When Streams for Apache Kafka builds the Kafka Connect image, it downloads the specified artifacts, and incorporates them into the image.
The spec.build.output
parameter in the KafkaConnect
CR specifies where to store the resulting Kafka Connect container image. Container images can be stored in a Docker registry, or in an OpenShift ImageStream. To store images in an ImageStream, you must create the ImageStream before you deploy Kafka Connect. ImageStreams are not created automatically.
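For example, if spec.build.output uses type imagestream with the image name debezium-streams-connect:latest, as in the build example later in this section, you can create the ImageStream ahead of time. This is a sketch; the debezium namespace is an assumption, so use the project in which Kafka Connect runs.

# Create an empty ImageStream to receive the Kafka Connect build image.
oc create imagestream debezium-streams-connect -n debezium

# Verify that the ImageStream exists before you apply the KafkaConnect CR.
oc get imagestream debezium-streams-connect -n debezium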
If you use a KafkaConnect
resource to create a cluster, afterwards you cannot use the Kafka Connect REST API to create or update connectors. You can still use the REST API to retrieve information.
Additional resources
- Configuring Kafka Connect in Deploying and Managing Streams for Apache Kafka on OpenShift.
- Building a new container image automatically in Deploying and Managing Streams for Apache Kafka on OpenShift.
2.6.6.2. Using Streams for Apache Kafka to deploy a Debezium PostgreSQL connector
With earlier versions of Streams for Apache Kafka, to deploy Debezium connectors on OpenShift, you were required to first build a Kafka Connect image for the connector. The current preferred method for deploying connectors on OpenShift is to use a build configuration in Streams for Apache Kafka to automatically build a Kafka Connect container image that includes the Debezium connector plug-ins that you want to use.
During the build process, the Streams for Apache Kafka Operator transforms input parameters in a KafkaConnect
custom resource, including Debezium connector definitions, into a Kafka Connect container image. The build downloads the necessary artifacts from the Red Hat Maven repository or another configured HTTP server.
The newly created container is pushed to the container registry that is specified in .spec.build.output
, and is used to deploy a Kafka Connect cluster. After Streams for Apache Kafka builds the Kafka Connect image, you create KafkaConnector
custom resources to start the connectors that are included in the build.
Prerequisites
- You have access to an OpenShift cluster on which the cluster Operator is installed.
- The Streams for Apache Kafka Operator is running.
- An Apache Kafka cluster is deployed as documented in Deploying and Managing Streams for Apache Kafka on OpenShift.
- Kafka Connect is deployed on Streams for Apache Kafka.
- You have a Red Hat build of Debezium license.
- The OpenShift oc CLI client is installed, or you have access to the OpenShift Container Platform web console.
- Depending on how you intend to store the Kafka Connect build image, you need registry permissions or you must create an ImageStream resource:
- To store the build image in an image registry, such as Red Hat Quay.io or Docker Hub
- An account and permissions to create and manage images in the registry.
- To store the build image as a native OpenShift ImageStream
- An ImageStream resource is deployed to the cluster for storing new container images. You must explicitly create an ImageStream for the cluster. ImageStreams are not available by default. For more information about ImageStreams, see Managing image streams on OpenShift Container Platform.
Procedure
- Log in to the OpenShift cluster.
Create a Debezium KafkaConnect custom resource (CR) for the connector, or modify an existing one. For example, create a KafkaConnect CR with the name dbz-connect.yaml that specifies the metadata.annotations and spec.build properties. The following example shows an excerpt from a dbz-connect.yaml file that describes a KafkaConnect custom resource.
Example 2.42. A dbz-connect.yaml file that defines a KafkaConnect custom resource that includes a Debezium connector
In the example that follows, the custom resource is configured to download the following artifacts:
- The Debezium PostgreSQL connector archive.
- The Red Hat build of Apicurio Registry archive. The Apicurio Registry is an optional component. Add the Apicurio Registry component only if you intend to use Avro serialization with the connector.
- The Debezium scripting SMT archive and the associated scripting engine that you want to use with the Debezium connector. The SMT archive and scripting language dependencies are optional components. Add these components only if you intend to use the Debezium content-based routing SMT or filter SMT.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: debezium-kafka-connect-cluster
  annotations:
    strimzi.io/use-connector-resources: "true" 1
spec:
  version: 3.6.0
  build: 2
    output: 3
      type: imagestream 4
      image: debezium-streams-connect:latest
    plugins: 5
      - name: debezium-connector-postgres
        artifacts:
          - type: zip 6
            url: https://maven.repository.redhat.com/ga/io/debezium/debezium-connector-postgres/2.7.3.Final-redhat-00001/debezium-connector-postgres-2.7.3.Final-redhat-00001-plugin.zip 7
          - type: zip
            url: https://maven.repository.redhat.com/ga/io/apicurio/apicurio-registry-distro-connect-converter/2.4.4.Final-redhat-<build-number>/apicurio-registry-distro-connect-converter-2.4.4.Final-redhat-<build-number>.zip 8
          - type: zip
            url: https://maven.repository.redhat.com/ga/io/debezium/debezium-scripting/2.7.3.Final-redhat-00001/debezium-scripting-2.7.3.Final-redhat-00001.zip 9
          - type: jar
            url: https://repo1.maven.org/maven2/org/apache/groovy/groovy/3.0.11/groovy-3.0.11.jar 10
          - type: jar
            url: https://repo1.maven.org/maven2/org/apache/groovy/groovy-jsr223/3.0.11/groovy-jsr223-3.0.11.jar
          - type: jar
            url: https://repo1.maven.org/maven2/org/apache/groovy/groovy-json/3.0.11/groovy-json-3.0.11.jar
  bootstrapServers: debezium-kafka-cluster-kafka-bootstrap:9093
  ...
Table 2.153. Descriptions of Kafka Connect configuration settings
Item Description
1 Sets the strimzi.io/use-connector-resources annotation to "true" to enable the Cluster Operator to use KafkaConnector resources to configure connectors in this Kafka Connect cluster.
2 The spec.build configuration specifies where to store the build image and lists the plug-ins to include in the image, along with the location of the plug-in artifacts.
3 The build.output specifies the registry in which the newly built image is stored.
4 Specifies the name and image name for the image output. Valid values for output.type are docker to push into a container registry such as Docker Hub or Quay, or imagestream to push the image to an internal OpenShift ImageStream. To use an ImageStream, an ImageStream resource must be deployed to the cluster. For more information about specifying the build.output in the KafkaConnect configuration, see the Streams for Apache Kafka Build schema reference in Deploying and Managing Streams for Apache Kafka on OpenShift.
5 The plugins configuration lists all of the connectors that you want to include in the Kafka Connect image. For each entry in the list, specify a plug-in name, and information about the artifacts that are required to build the connector. Optionally, for each connector plug-in, you can include other components that you want to be available for use with the connector. For example, you can add Service Registry artifacts, or the Debezium scripting component.
6 The value of artifacts.type specifies the file type of the artifact specified in the artifacts.url. Valid types are zip, tgz, or jar. Debezium connector archives are provided in .zip file format. The type value must match the type of the file that is referenced in the url field.
7 The value of artifacts.url specifies the address of an HTTP server, such as a Maven repository, that stores the file for the connector artifact. Debezium connector artifacts are available in the Red Hat Maven repository. The OpenShift cluster must have access to the specified server.
8 (Optional) Specifies the artifact type and url for downloading the Apicurio Registry component. Include the Apicurio Registry artifact only if you want the connector to use Apache Avro to serialize event keys and values with the Red Hat build of Apicurio Registry, instead of using the default JSON converter.
9 (Optional) Specifies the artifact type and url for the Debezium scripting SMT archive to use with the Debezium connector. Include the scripting SMT only if you intend to use the Debezium content-based routing SMT or filter SMT. To use the scripting SMT, you must also deploy a JSR 223-compliant scripting implementation, such as Groovy.
10 (Optional) Specifies the artifact type and url for the JAR files of a JSR 223-compliant scripting implementation, which is required by the Debezium scripting SMT.
Important: If you use Streams for Apache Kafka to incorporate the connector plug-in into your Kafka Connect image, for each of the required scripting language components, artifacts.url must specify the location of a JAR file, and the value of artifacts.type must also be set to jar. Invalid values cause the connector to fail at runtime.
To enable use of the Apache Groovy language with the scripting SMT, the custom resource in the example retrieves JAR files for the following libraries:
- groovy
- groovy-jsr223 (scripting agent)
- groovy-json (module for parsing JSON strings)
As an alternative, the Debezium scripting SMT also supports the use of the JSR 223 implementation of GraalVM JavaScript.
Apply the KafkaConnect build specification to the OpenShift cluster by entering the following command:
oc create -f dbz-connect.yaml
Based on the configuration specified in the custom resource, the Streams Operator prepares a Kafka Connect image to deploy.
After the build completes, the Operator pushes the image to the specified registry or ImageStream, and starts the Kafka Connect cluster. The connector artifacts that you listed in the configuration are available in the cluster.
Create a KafkaConnector resource to define an instance of each connector that you want to deploy.
For example, create the following KafkaConnector CR, and save it as postgresql-inventory-connector.yaml.
Example 2.43. postgresql-inventory-connector.yaml file that defines the KafkaConnector custom resource for a Debezium connector
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  labels:
    strimzi.io/cluster: debezium-kafka-connect-cluster
  name: inventory-connector-postgresql 1
spec:
  class: io.debezium.connector.postgresql.PostgresConnector 2
  tasksMax: 1 3
  config: 4
    database.hostname: postgresql.debezium-postgresql.svc.cluster.local 5
    database.port: 5432 6
    database.user: debezium 7
    database.password: dbz 8
    database.dbname: mydatabase 9
    topic.prefix: inventory-connector-postgresql 10
    table.include.list: public.inventory 11
    ...
Table 2.154. Descriptions of connector configuration settings Item Description 1
The name of the connector to register with the Kafka Connect cluster.
2
The name of the connector class.
3
The number of tasks that can operate concurrently.
4
The connector’s configuration.
5
The address of the host database instance.
6
The port number of the database instance.
7
The name of the account that Debezium uses to connect to the database.
8
The password that Debezium uses to connect to the database user account.
9
The name of the database to capture changes from.
10
The topic prefix for the database instance or cluster.
The specified name must be formed only from alphanumeric characters or underscores.
Because the topic prefix is used as the prefix for any Kafka topics that receive change events from this connector, the name must be unique among the connectors in the cluster.
This namespace is also used in the names of related Kafka Connect schemas, and the namespaces of a corresponding Avro schema if you integrate the connector with the Avro converter.
11
The list of tables from which the connector captures change events.
Create the connector resource by running the following command:
oc create -n <namespace> -f <kafkaConnector>.yaml
For example,
oc create -n debezium -f postgresql-inventory-connector.yaml
The connector is registered to the Kafka Connect cluster and starts to run against the database that is specified by
spec.config.database.dbname
in theKafkaConnector
CR. After the connector pod is ready, Debezium is running.
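As a quick check before you run the full verification procedure, you can list the KafkaConnector resource and confirm that it reports a Ready state. This sketch assumes the debezium namespace from the preceding example.

# List the connector resource; the READY column should show True
# once the connector task is running.
oc get kafkaconnector inventory-connector-postgresql -n debezium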
You are now ready to verify the Debezium PostgreSQL deployment.
2.6.6.3. Deploying a Debezium PostgreSQL connector by building a custom Kafka Connect container image from a Dockerfile
To deploy a Debezium PostgreSQL connector, you need to build a custom Kafka Connect container image that contains the Debezium connector archive and push this container image to a container registry. You then need to create two custom resources (CRs):
-
A
KafkaConnect
CR that defines your Kafka Connect instance. Theimage
property in the CR specifies the name of the container image that you create to run your Debezium connector. You apply this CR to the OpenShift instance where Red Hat Streams for Apache Kafka is deployed. Streams for Apache Kafka offers operators and images that bring Apache Kafka to OpenShift. -
A
KafkaConnector
CR that defines your Debezium PostgreSQL connector. Apply this CR to the same OpenShift instance where you applied theKafkaConnect
CR.
Prerequisites
- PostgreSQL is running and you performed the steps to set up PostgreSQL to run a Debezium connector.
- Streams for Apache Kafka is deployed on OpenShift and is running Apache Kafka and Kafka Connect. For more information, see Deploying and Managing Streams for Apache Kafka on OpenShift.
- Podman or Docker is installed.
- You have an account and permissions to create and manage containers in the container registry (such as quay.io or docker.io) to which you plan to add the container that will run your Debezium connector.
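For example, assuming that you use Podman and Quay.io, you would authenticate with the registry before you build and push the image; replace the user name and registry with your own.

# Log in to the registry that will host the custom Kafka Connect image.
podman login -u <quay-username> quay.io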
Procedure
Create the Debezium PostgreSQL container for Kafka Connect:
Create a Dockerfile that uses registry.redhat.io/amq-streams-kafka-35-rhel8:2.5.0 as the base image. For example, from a terminal window, enter the following command:
cat <<EOF >debezium-container-for-postgresql.yaml 1
FROM registry.redhat.io/amq-streams-kafka-35-rhel8:2.5.0
USER root:root
RUN mkdir -p /opt/kafka/plugins/debezium 2
RUN cd /opt/kafka/plugins/debezium/ \
&& curl -O https://maven.repository.redhat.com/ga/io/debezium/debezium-connector-postgres/2.7.3.Final-redhat-00001/debezium-connector-postgres-2.7.3.Final-redhat-00001-plugin.zip \
&& unzip debezium-connector-postgres-2.7.3.Final-redhat-00001-plugin.zip \
&& rm debezium-connector-postgres-2.7.3.Final-redhat-00001-plugin.zip
RUN cd /opt/kafka/plugins/debezium/
USER 1001
EOF
Item Description 1
You can specify any file name that you want.
2
Specifies the path to your Kafka Connect plug-ins directory. If your Kafka Connect plug-ins directory is in a different location, replace this path with the actual path of your directory.
The command creates a Dockerfile with the name debezium-container-for-postgresql.yaml in the current directory.
Build the container image from the debezium-container-for-postgresql.yaml Docker file that you created in the previous step. From the directory that contains the file, open a terminal window and enter one of the following commands:
podman build -t debezium-container-for-postgresql:latest .
docker build -t debezium-container-for-postgresql:latest .
The build command builds a container image with the name debezium-container-for-postgresql.
Push your custom image to a container registry such as quay.io or an internal container registry. The container registry must be available to the OpenShift instance where you want to deploy the image. Enter one of the following commands:
podman push <myregistry.io>/debezium-container-for-postgresql:latest
docker push <myregistry.io>/debezium-container-for-postgresql:latest
Create a new Debezium PostgreSQL KafkaConnect custom resource (CR). For example, create a KafkaConnect CR with the name dbz-connect.yaml that specifies annotations and image properties. The following example shows an excerpt from a dbz-connect.yaml file that describes a KafkaConnect custom resource.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect-cluster
  annotations:
    strimzi.io/use-connector-resources: "true" 1
spec:
  image: debezium-container-for-postgresql 2
  ...
Item Description
1 metadata.annotations indicates to the Cluster Operator that KafkaConnector resources are used to configure connectors in this Kafka Connect cluster.
2 spec.image specifies the name of the image that you created to run your Debezium connector. This property overrides the STRIMZI_DEFAULT_KAFKA_CONNECT_IMAGE variable in the Cluster Operator.
Apply your KafkaConnect CR to the OpenShift Kafka instance by running the following command:
oc create -f dbz-connect.yaml
This updates your Kafka Connect environment in OpenShift to add a Kafka Connector instance that specifies the name of the image that you created to run your Debezium connector.
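To confirm that the Kafka Connect cluster is running your custom image, you can inspect the pods that Streams for Apache Kafka creates for the cluster. This is a sketch; the strimzi.io/cluster label value must match the name in your KafkaConnect CR, and <namespace> is a placeholder.

# List the Kafka Connect pods that belong to the my-connect-cluster resource.
oc get pods -l strimzi.io/cluster=my-connect-cluster -n <namespace>

# Inspect the image that the Kafka Connect pods are running.
oc get pods -l strimzi.io/cluster=my-connect-cluster -n <namespace> \
  -o jsonpath='{.items[*].spec.containers[*].image}'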
Create a KafkaConnector custom resource that configures your Debezium PostgreSQL connector instance.
You configure a Debezium PostgreSQL connector in a .yaml file that specifies the configuration properties for the connector. The connector configuration might instruct Debezium to produce events for a subset of the schemas and tables, or it might set properties so that Debezium ignores, masks, or truncates values in specified columns that are sensitive, too large, or not needed. For the complete list of the configuration properties that you can set for the Debezium PostgreSQL connector, see PostgreSQL connector properties.
The following example shows an excerpt from a custom resource that configures a Debezium connector that connects to a PostgreSQL server host, 192.168.99.100, on port 5432. This host has a database named sampledb, a schema named public, and inventory-connector-postgresql is the server’s logical name.
PostgreSQL inventory-connector.yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: inventory-connector-postgresql 1
  labels:
    strimzi.io/cluster: my-connect-cluster
spec:
  class: io.debezium.connector.postgresql.PostgresConnector
  tasksMax: 1 2
  config: 3
    database.hostname: 192.168.99.100 4
    database.port: 5432
    database.user: debezium
    database.password: dbz
    database.dbname: sampledb
    topic.prefix: inventory-connector-postgresql 5
    schema.include.list: public 6
    plugin.name: pgoutput 7
    ...
Table 2.155. Descriptions of settings in the PostgreSQL inventory-connector.yaml example Item Description 1
The name that is used to register the connector with Kafka Connect.
2
The maximum number of tasks to create for this connector. Because the PostgreSQL connector uses a single connector task to read the PostgreSQL server's write-ahead log (WAL), only one task should operate at a time to ensure proper order and event handling. The Kafka Connect service uses connectors to start one or more tasks to perform the work, and it automatically distributes the running tasks across the cluster of Kafka Connect services. If any services stop or crash, tasks are redistributed to running services.
The connector’s configuration.
4
The name of the database host that runs the PostgreSQL server. In this example, the database host name is
192.168.99.100
.5
A unique topic prefix. The topic prefix is the logical identifier for the PostgreSQL server or cluster of servers. This string is prefixed to the names of all Kafka topics that receive change event records from the connector.
6
The connector captures changes in only the
public
schema. It is possible to configure the connector to capture changes in only the tables that you choose. For more information, seetable.include.list
.7
The name of the PostgreSQL logical decoding plug-in installed on the PostgreSQL server. Although the connector only supports use of the pgoutput plug-in, you must explicitly set plugin.name to pgoutput.
Create your connector instance with Kafka Connect. For example, if you saved your KafkaConnector resource in the inventory-connector.yaml file, you would run the following command:
oc apply -f inventory-connector.yaml
This registers inventory-connector and the connector starts to run against the sampledb database as defined in the KafkaConnector CR.
Results
After the connector starts, it performs a consistent snapshot of the PostgreSQL server databases that the connector is configured for. The connector then starts generating data change events for row-level operations and streaming change event records to Kafka topics.
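For example, a quick way to see the topics that the snapshot and streaming phases create is to list them from a Kafka broker pod. This sketch reuses the debezium-kafka-cluster example names that appear elsewhere in this section; adjust them for your deployment.

# List Kafka topics; change-event topics are prefixed with the connector's
# topic.prefix value, for example inventory-connector-postgresql.
oc exec -n debezium -it debezium-kafka-cluster-kafka-0 -- \
  /opt/kafka/bin/kafka-topics.sh --bootstrap-server localhost:9092 --list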
2.6.6.4. Verifying that the Debezium PostgreSQL connector is running
If the connector starts correctly without errors, it creates a topic for each table that the connector is configured to capture. Downstream applications can subscribe to these topics to retrieve information about events that occur in the source database.
To verify that the connector is running, you perform the following operations from the OpenShift Container Platform web console, or through the OpenShift CLI tool (oc):
- Verify the connector status.
- Verify that the connector generates topics.
- Verify that topics are populated with events for read operations ("op":"r") that the connector generates during the initial snapshot of each table.
Prerequisites
- A Debezium connector is deployed to Streams for Apache Kafka on OpenShift.
- The OpenShift oc CLI client is installed.
- You have access to the OpenShift Container Platform web console.
Procedure
Check the status of the KafkaConnector resource by using one of the following methods:
From the OpenShift Container Platform web console:
- Navigate to Home → Search.
- On the Search page, click Resources to open the Select Resource box, and then type KafkaConnector.
- From the KafkaConnectors list, click the name of the connector that you want to check, for example inventory-connector-postgresql.
- In the Conditions section, verify that the values in the Type and Status columns are set to Ready and True.
From a terminal window:
Enter the following command:
oc describe KafkaConnector <connector-name> -n <project>
For example,
oc describe KafkaConnector inventory-connector-postgresql -n debezium
The command returns status information that is similar to the following output:
Example 2.44. KafkaConnector resource status
Name:         inventory-connector-postgresql
Namespace:    debezium
Labels:       strimzi.io/cluster=debezium-kafka-connect-cluster
Annotations:  <none>
API Version:  kafka.strimzi.io/v1beta2
Kind:         KafkaConnector
...
Status:
  Conditions:
    Last Transition Time:  2021-12-08T17:41:34.897153Z
    Status:                True
    Type:                  Ready
  Connector Status:
    Connector:
      State:      RUNNING
      worker_id:  10.131.1.124:8083
    Name:         inventory-connector-postgresql
    Tasks:
      Id:         0
      State:      RUNNING
      worker_id:  10.131.1.124:8083
    Type:         source
  Observed Generation:  1
  Tasks Max:            1
  Topics:
    inventory-connector-postgresql.inventory
    inventory-connector-postgresql.inventory.addresses
    inventory-connector-postgresql.inventory.customers
    inventory-connector-postgresql.inventory.geom
    inventory-connector-postgresql.inventory.orders
    inventory-connector-postgresql.inventory.products
    inventory-connector-postgresql.inventory.products_on_hand
Events:  <none>
Verify that the connector created Kafka topics:
From the OpenShift Container Platform web console:
- Navigate to Home → Search.
- On the Search page, click Resources to open the Select Resource box, and then type KafkaTopic.
- From the KafkaTopics list, click the name of the topic that you want to check, for example, inventory-connector-postgresql.inventory.orders---ac5e98ac6a5d91e04d8ec0dc9078a1ece439081d.
- In the Conditions section, verify that the values in the Type and Status columns are set to Ready and True.
From a terminal window:
Enter the following command:
oc get kafkatopics
The command returns status information that is similar to the following output:
Example 2.45. KafkaTopic resource status
NAME  CLUSTER  PARTITIONS  REPLICATION FACTOR  READY
connect-cluster-configs  debezium-kafka-cluster  1  1  True
connect-cluster-offsets  debezium-kafka-cluster  25  1  True
connect-cluster-status  debezium-kafka-cluster  5  1  True
consumer-offsets---84e7a678d08f4bd226872e5cdd4eb527fadc1c6a  debezium-kafka-cluster  50  1  True
inventory-connector-postgresql--a96f69b23d6118ff415f772679da623fbbb99421  debezium-kafka-cluster  1  1  True
inventory-connector-postgresql.inventory.addresses---1b6beaf7b2eb57d177d92be90ca2b210c9a56480  debezium-kafka-cluster  1  1  True
inventory-connector-postgresql.inventory.customers---9931e04ec92ecc0924f4406af3fdace7545c483b  debezium-kafka-cluster  1  1  True
inventory-connector-postgresql.inventory.geom---9f7e136091f071bf49ca59bf99e86c713ee58dd5  debezium-kafka-cluster  1  1  True
inventory-connector-postgresql.inventory.orders---ac5e98ac6a5d91e04d8ec0dc9078a1ece439081d  debezium-kafka-cluster  1  1  True
inventory-connector-postgresql.inventory.products---df0746db116844cee2297fab611c21b56f82dcef  debezium-kafka-cluster  1  1  True
inventory-connector-postgresql.inventory.products_on_hand---8649e0f17ffcc9212e266e31a7aeea4585e5c6b5  debezium-kafka-cluster  1  1  True
schema-changes.inventory  debezium-kafka-cluster  1  1  True
strimzi-store-topic---effb8e3e057afce1ecf67c3f5d8e4e3ff177fc55  debezium-kafka-cluster  1  1  True
strimzi-topic-operator-kstreams-topic-store-changelog---b75e702040b99be8a9263134de3507fc0cc4017b  debezium-kafka-cluster  1  1  True
Check topic content.
- From a terminal window, enter the following command:
oc exec -n <project> -it <kafka-cluster> -- /opt/kafka/bin/kafka-console-consumer.sh \
>     --bootstrap-server localhost:9092 \
>     --from-beginning \
>     --property print.key=true \
>     --topic=<topic-name>
For example,
oc exec -n debezium -it debezium-kafka-cluster-kafka-0 -- /opt/kafka/bin/kafka-console-consumer.sh \
>     --bootstrap-server localhost:9092 \
>     --from-beginning \
>     --property print.key=true \
>     --topic=inventory-connector-postgresql.inventory.products_on_hand
The format for specifying the topic name is the same as the oc describe command returns in Step 1, for example, inventory-connector-postgresql.inventory.addresses.
For each event in the topic, the command returns information that is similar to the following output:
Example 2.46. Content of a Debezium change event
{"schema":{"type":"struct","fields":[{"type":"int32","optional":false,"field":"product_id"}],"optional":false,"name":"inventory-connector-postgresql.inventory.products_on_hand.Key"},"payload":{"product_id":101}} {"schema":{"type":"struct","fields":[{"type":"struct","fields":[{"type":"int32","optional":false,"field":"product_id"},{"type":"int32","optional":false,"field":"quantity"}],"optional":true,"name":"inventory-connector-postgresql.inventory.products_on_hand.Value","field":"before"},{"type":"struct","fields":[{"type":"int32","optional":false,"field":"product_id"},{"type":"int32","optional":false,"field":"quantity"}],"optional":true,"name":"inventory-connector-postgresql.inventory.products_on_hand.Value","field":"after"},{"type":"struct","fields":[{"type":"string","optional":false,"field":"version"},{"type":"string","optional":false,"field":"connector"},{"type":"string","optional":false,"field":"name"},{"type":"int64","optional":false,"field":"ts_ms"},{"type":"int64","optional":false,"field":"ts_us"},{"type":"int64","optional":false,"field":"ts_ns"},{"type":"string","optional":true,"name":"io.debezium.data.Enum","version":1,"parameters":{"allowed":"true,last,false"},"default":"false","field":"snapshot"},{"type":"string","optional":false,"field":"db"},{"type":"string","optional":true,"field":"sequence"},{"type":"string","optional":true,"field":"table"},{"type":"int64","optional":false,"field":"server_id"},{"type":"string","optional":true,"field":"gtid"},{"type":"string","optional":false,"field":"file"},{"type":"int64","optional":false,"field":"pos"},{"type":"int32","optional":false,"field":"row"},{"type":"int64","optional":true,"field":"thread"},{"type":"string","optional":true,"field":"query"}],"optional":false,"name":"io.debezium.connector.postgresql.Source","field":"source"},{"type":"string","optional":false,"field":"op"},{"type":"int64","optional":true,"field":"ts_ms"},{"type":"int64","optional":true,"field":"ts_us"},{"type":"int64","optional":true,"field":"ts_ns"},{"type":"struct","fields":[{"type":"string","optional":false,"field":"id"},{"type":"int64","optional":false,"field":"total_order"},{"type":"int64","optional":false,"field":"data_collection_order"}],"optional":true,"field":"transaction"}],"optional":false,"name":"inventory-connector-postgresql.inventory.products_on_hand.Envelope"},"payload":{"before":null,"after":{"product_id":101,"quantity":3},"source":{"version":"2.7.3.Final-redhat-00001","connector":"postgresql","name":"inventory-connector-postgresql","ts_ms":1638985247805,"ts_us":1638985247805000000,"ts_ns":1638985247805000000,"snapshot":"true","db":"inventory","sequence":null,"table":"products_on_hand","server_id":0,"gtid":null,"file":"postgresql-bin.000003","pos":156,"row":0,"thread":null,"query":null},"op":"r","ts_ms":1638985247805,"ts_us":1638985247805102,"ts_ns":1638985247805102588,"transaction":null}}
In the preceding example, the payload value shows that the connector snapshot generated a read ("op" = "r") event from the table inventory.products_on_hand. The "before" state of the product_id record is null, indicating that no previous value exists for the record. The "after" state shows a quantity of 3 for the item with product_id 101.
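If you want to inspect only the row data rather than the full schema, you can pipe the consumer output through a JSON processor. This is a sketch that assumes the jq tool is installed locally and that print.key is not enabled, so that each line the consumer prints is a single JSON document.

# Read one event from the topic and show only the "after" image of the row.
oc exec -n debezium debezium-kafka-cluster-kafka-0 -- \
  /opt/kafka/bin/kafka-console-consumer.sh \
  --bootstrap-server localhost:9092 \
  --from-beginning \
  --max-messages 1 \
  --topic inventory-connector-postgresql.inventory.products_on_hand \
  | jq '.payload.after'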
2.6.6.5. Descriptions of Debezium PostgreSQL connector configuration properties
The Debezium PostgreSQL connector has many configuration properties that you can use to achieve the right connector behavior for your application. Many properties have default values. Information about the properties is organized as follows:
Required Debezium PostgreSQL connector configuration properties
The following configuration properties are required unless a default value is available.
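As a point of reference before the detailed descriptions, the following sketch shows a minimal configuration that sets commonly required properties, written as the JSON file format that Kafka Connect accepts. The host, credentials, and slot and publication names are placeholders; if you manage connectors through KafkaConnector custom resources, specify the same properties under spec.config instead.

# A minimal PostgreSQL connector configuration (Kafka Connect JSON format).
cat <<'EOF' > register-postgresql.json
{
  "name": "inventory-connector-postgresql",
  "config": {
    "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
    "plugin.name": "pgoutput",
    "database.hostname": "192.168.99.100",
    "database.port": "5432",
    "database.user": "debezium",
    "database.password": "dbz",
    "database.dbname": "sampledb",
    "topic.prefix": "inventory-connector-postgresql",
    "table.include.list": "public.inventory",
    "slot.name": "debezium",
    "publication.name": "dbz_publication"
  }
}
EOF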
Property | Default | Description |
---|---|---|
No default | Unique name for the connector. Attempting to register again with the same name will fail. This property is required by all Kafka Connect connectors. | |
No default |
The name of the Java class for the connector. Always use a value of | |
| The maximum number of tasks that should be created for this connector. The PostgreSQL connector always uses a single task and therefore does not use this value, so the default is always acceptable. | |
| The name of the PostgreSQL logical decoding plug-in installed on the PostgreSQL server.
The only supported value is | |
| The name of the PostgreSQL logical decoding slot that was created for streaming changes from a particular plug-in for a particular database/schema. The server uses this slot to stream events to the Debezium connector that you are configuring. Slot names must conform to PostgreSQL replication slot naming rules, which state: "Each replication slot has a name, which can contain lower-case letters, numbers, and the underscore character." | |
| Whether or not to delete the logical replication slot when the connector stops in a graceful, expected way. The default behavior is that the replication slot remains configured for the connector when the connector stops. When the connector restarts, having the same replication slot enables the connector to start processing where it left off.
Set to | |
|
The name of the PostgreSQL publication created for streaming changes when using This publication is created at start-up if it does not already exist and it includes all tables. Debezium then applies its own include/exclude list filtering, if configured, to limit the publication to change events for the specific tables of interest. The connector user must have superuser permissions to create this publication, so it is usually preferable to create the publication before starting the connector for the first time. If the publication already exists, either for all tables or configured with a subset of tables, Debezium uses the publication as it is defined. | |
No default | IP address or hostname of the PostgreSQL database server. | |
| Integer port number of the PostgreSQL database server. | |
No default | Name of the PostgreSQL database user for connecting to the PostgreSQL database server. | |
No default | Password to use when connecting to the PostgreSQL database server. | |
No default | The name of the PostgreSQL database from which to stream the changes. | |
No default |
Topic prefix that provides a namespace for the particular PostgreSQL database server or cluster in which Debezium is capturing changes. The prefix should be unique across all other connectors, since it is used as a topic name prefix for all Kafka topics that receive records from this connector. Only alphanumeric characters, hyphens, dots and underscores must be used in the database server logical name. Warning Do not change the value of this property. If you change the name value, after a restart, instead of continuing to emit events to the original topics, the connector emits subsequent events to topics whose names are based on the new value. | |
No default |
An optional, comma-separated list of regular expressions that match names of schemas for which you want to capture changes. Any schema name not included in
To match the name of a schema, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire identifier for the schema; it does not match substrings that might be present in a schema name. | |
No default |
An optional, comma-separated list of regular expressions that match names of schemas for which you do not want to capture changes. Any schema whose name is not included in
To match the name of a schema, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire identifier for the schema; it does not match substrings that might be present in a schema name. | |
No default |
An optional, comma-separated list of regular expressions that match fully-qualified table identifiers for tables whose changes you want to capture. When this property is set, the connector captures changes only from the specified tables. Each identifier is of the form schemaName.tableName. By default, the connector captures changes in every non-system table in each schema whose changes are being captured.
To match the name of a table, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire identifier for the table; it does not match substrings that might be present in a table name. | |
No default |
An optional, comma-separated list of regular expressions that match fully-qualified table identifiers for tables whose changes you do not want to capture. Each identifier is of the form schemaName.tableName. When this property is set, the connector captures changes from every table that you do not specify.
To match the name of a table, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire identifier for the table; it does not match substrings that might be present in a table name. | |
No default |
An optional, comma-separated list of regular expressions that match the fully-qualified names of columns that should be included in change event record values. Fully-qualified names for columns are of the form schemaName.tableName.columnName.
To match the name of a column, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the expression is used to match the entire name string of the column; it does not match substrings that might be present in a column name. | |
No default |
An optional, comma-separated list of regular expressions that match the fully-qualified names of columns that should be excluded from change event record values. Fully-qualified names for columns are of the form schemaName.tableName.columnName.
To match the name of a column, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the expression is used to match the entire name string of the column; it does not match substrings that might be present in a column name. | |
|
Specifies whether to skip publishing messages when there is no change in included columns. This would essentially filter messages if there is no change in columns included as per Note: Only works when REPLICA IDENTITY of the table is set to FULL | |
|
Time, date, and timestamps can be represented with different kinds of precision: | |
|
Specifies how the connector should handle values for | |
|
Specifies how the connector should handle values for | |
|
Specifies how the connector should handle values for | |
|
Whether to use an encrypted connection to the PostgreSQL server. Options include: | |
No default | The path to the file that contains the SSL certificate for the client. For more information, see the PostgreSQL documentation. | |
No default | The path to the file that contains the SSL private key of the client. For more information, see the PostgreSQL documentation. | |
No default |
The password to access the client private key from the file specified by | |
No default | The path to the file that contains the root certificate(s) against which the server is validated. For more information, see the PostgreSQL documentation. | |
| Enable TCP keep-alive probe to verify that the database connection is still alive. For more information, see the PostgreSQL documentation. | |
|
Controls whether a delete event is followed by a tombstone event. | |
n/a |
An optional, comma-separated list of regular expressions that match the fully-qualified names of character-based columns. Set this property if you want to truncate the data in a set of columns when it exceeds the number of characters specified by the length in the property name. Set
The fully-qualified name of a column observes the following format: You can specify multiple properties with different lengths in a single configuration. | |
n/a |
An optional, comma-separated list of regular expressions that match the fully-qualified names of character-based columns. Set this property if you want the connector to mask the values for a set of columns, for example, if they contain sensitive data. Set The fully-qualified name of a column observes the following format: schemaName.tableName.columnName. To match the name of a column, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the column; the expression does not match substrings that might be present in a column name. You can specify multiple properties with different lengths in a single configuration. | |
| n/a |
An optional, comma-separated list of regular expressions that match the fully-qualified names of character-based columns. Fully-qualified names for columns are of the form <schemaName>.<tableName>.<columnName>.
A pseudonym consists of the hashed value that results from applying the specified hashAlgorithm and salt. Based on the hash function that is used, referential integrity is maintained, while column values are replaced with pseudonyms. Supported hash functions are described in the MessageDigest section of the Java Cryptography Architecture Standard Algorithm Name Documentation. column.mask.hash.SHA-256.with.salt.CzQMA0cB5K = inventory.orders.customerName, inventory.shipment.customerName
If necessary, the pseudonym is automatically shortened to the length of the column. The connector configuration can include multiple properties that specify different hash algorithms and salts. |
n/a | An optional, comma-separated list of regular expressions that match the fully-qualified names of columns for which you want the connector to emit extra parameters that represent column metadata. When this property is set, the connector adds the following fields to the schema of event records:
These parameters propagate a column’s original type name and length (for variable-width types), respectively.
The fully-qualified name of a column observes one of the following formats: databaseName.tableName.columnName, or databaseName.schemaName.tableName.columnName. | |
n/a | An optional, comma-separated list of regular expressions that specify the fully-qualified names of data types that are defined for columns in a database. When this property is set, for columns with matching data types, the connector emits event records that include the following extra fields in their schema:
These parameters propagate a column’s original type name and length (for variable-width types), respectively.
The fully-qualified name of a column observes one of the following formats: databaseName.tableName.typeName, or databaseName.schemaName.tableName.typeName. For the list of PostgreSQL-specific data type names, see the PostgreSQL data type mappings. | |
empty string | A list of expressions that specify the columns that the connector uses to form custom message keys for change event records that it publishes to the Kafka topics for specified tables.
By default, Debezium uses the primary key column of a table as the message key for records that it emits. In place of the default, or to specify a key for tables that lack a primary key, you can configure custom message keys based on one or more columns.
Each fully-qualified table name is a regular expression in the following format: There is no limit to the number of columns that you use to create custom message keys. However, it’s best to use the minimum number that are required to specify a unique key.
Note that having this property set and | |
all_tables |
Specifies whether and how the connector creates a publication. This setting applies only when the connector streams changes by using the Note To create publications, the connector must access PostgreSQL through a database account that has specific permissions. For more information, see Setting privileges to enable Debezium to create PostgreSQL publications. Specify one of the following values:
| |
empty string |
The setting determines the value for replica identity at table level. schema1.*:FULL,schema2.table2:NOTHING,schema2.table3:INDEX idx_name | |
bytes |
Specifies how binary ( | |
none |
Specifies how schema names should be adjusted for compatibility with the message converter used by the connector. Possible settings:
| |
none |
Specifies how field names should be adjusted for compatibility with the message converter used by the connector. Possible settings:
For more information, see Avro naming. | |
|
Specifies how many decimal digits should be used when converting Postgres | |
No default | An optional, comma-separated list of regular expressions that match the names of the logical decoding message prefixes that you want the connector to capture. By default, the connector captures all logical decoding messages. When this property is set, the connector captures only logical decoding message with the prefixes specified by the property. All other logical decoding messages are excluded. To match the name of a message prefix, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire message prefix string; the expression does not match substrings that might be present in a prefix.
If you include this property in the configuration, do not also set the For information about the structure of message events and about their ordering semantics, see message events. | |
No default |
An optional, comma-separated list of regular expressions that match the names of the logical decoding message prefixes that you do not want the connector to capture. When this property is set, the connector does not capture logical decoding messages that use the specified prefixes. All other messages are captured. To match the name of a message prefix, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire message prefix string; the expression does not match substrings that might be present in a prefix.
If you include this property in the configuration, do not also set For information about the structure of message events and about their ordering semantics, see message events. |
Advanced Debezium PostgreSQL connector configuration properties
The following advanced configuration properties have defaults that work in most situations and therefore rarely need to be specified in the connector’s configuration.
Property | Default | Description |
---|---|---|
No default |
Enumerates a comma-separated list of the symbolic names of the custom converter instances that the connector can use. For example,
You must set the
For each converter that you configure for a connector, you must also add a
For example, isbn.type: io.debezium.test.IsbnConverter
If you want to further control the behavior of a configured converter, you can add one or more configuration parameters to pass values to the converter. To associate any additional configuration parameter with a converter, prefix the parameter names with the symbolic name of the converter. isbn.schema.name: io.debezium.postgresql.type.Isbn | |
initial |
Specifies the criteria for performing a snapshot when the connector starts:
If there is a previously stored LSN in the Kafka offsets topic, the connector continues streaming changes from that position. If no LSN is stored, the connector starts streaming changes from the point in time when the PostgreSQL logical replication slot was created on the server. Use this snapshot mode only when you know all data of interest is still reflected in the WAL.
For more information, see the table of | |
|
Specifies how the connector holds locks on tables while performing a schema snapshot.
Warning Do not use this mode if schema changes might occur during the snapshot. | |
|
Specifies how the connector queries data while performing a snapshot.
This setting enables you to manage snapshot content in a more flexible manner compared to using the | |
All tables specified in |
An optional, comma-separated list of regular expressions that match the fully-qualified names ( To match the name of a table, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the table; it does not match substrings that might be present in a table name. | |
| Positive integer value that specifies the maximum amount of time (in milliseconds) to wait to obtain table locks when performing a snapshot. If the connector cannot acquire table locks in this time interval, the snapshot fails. How the connector performs snapshots provides details. | |
No default | Specifies the table rows to include in a snapshot. Use the property if you want a snapshot to include only a subset of the rows in a table. This property affects snapshots only. It does not apply to events that the connector reads from the log.
The property contains a comma-separated list of fully-qualified table names in the form
From a "snapshot.select.statement.overrides": "customer.orders", "snapshot.select.statement.overrides.customer.orders": "SELECT * FROM customers.orders WHERE delete_flag = 0 ORDER BY id DESC"
In the resulting snapshot, the connector includes only the records for which | |
|
Specifies how the connector should react to exceptions during processing of events: | |
| Positive integer value that specifies the maximum size of each batch of events that the connector processes. | |
|
Positive integer value that specifies the maximum number of records that the blocking queue can hold. When Debezium reads events streamed from the database, it places the events in the blocking queue before it writes them to Kafka. The blocking queue can provide backpressure for reading change events from the database in cases where the connector ingests messages faster than it can write them to Kafka, or when Kafka becomes unavailable. Events that are held in the queue are disregarded when the connector periodically records offsets. Always set the value of | |
|
A long integer value that specifies the maximum volume of the blocking queue in bytes. By default, volume limits are not specified for the blocking queue. To specify the number of bytes that the queue can consume, set this property to a positive long value. | |
| Positive integer value that specifies the number of milliseconds the connector should wait for new change events to appear before it starts processing a batch of events. Defaults to 500 milliseconds. | |
|
Specifies connector behavior when the connector encounters a field whose data type is unknown. The default behavior is that the connector omits the field from the change event and logs a warning. Note
Consumers risk backward compatibility issues when | |
No default |
A semicolon separated list of SQL statements that the connector executes when it establishes a JDBC connection to the database. To use a semicolon as a character and not as a delimiter, specify two consecutive semicolons, | |
|
Frequency for sending replication connection status updates to the server, given in milliseconds. | |
|
Controls how frequently the connector sends heartbeat messages to a Kafka topic. The default behavior is that the connector does not send heartbeat messages. | |
No default |
Specifies a query that the connector executes on the source database when the connector sends a heartbeat message. | |
|
Specify the conditions that trigger a refresh of the in-memory schema for a table. | |
No default | An interval in milliseconds that the connector should wait before performing a snapshot when the connector starts. If you are starting multiple connectors in a cluster, this property is useful for avoiding snapshot interruptions, which might cause re-balancing of connectors. | |
0 |
Specifies the time, in milliseconds, that the connector delays the start of the streaming process after it completes a snapshot. Setting a delay interval helps to prevent the connector from restarting snapshots in the event that a failure occurs immediately after the snapshot completes, but before the streaming process begins. Set a delay value that is higher than the value of the | |
| During a snapshot, the connector reads table content in batches of rows. This property specifies the maximum number of rows in a batch. | |
No default |
Semicolon separated list of parameters to pass to the configured logical decoding plug-in. For example, | |
| If connecting to a replication slot fails, this is the maximum number of consecutive attempts to connect. | |
| The number of milliseconds to wait between retry attempts when the connector fails to connect to a replication slot. | |
|
Specifies the constant that the connector provides to indicate that the original value is a toasted value that is not provided by the database. If the setting of | |
|
Determines whether the connector generates events with transaction boundaries and enriches change event envelopes with transaction metadata. Specify | |
|
Determines whether the connector should commit the LSN of the processed records in the source postgres database so that the WAL logs can be deleted. Specify | |
10000 (10 seconds) | The number of milliseconds to wait before restarting a connector after a retriable error occurs. | |
|
A comma-separated list of operation types that will be skipped during streaming. The operations include: | |
No default value |
Fully-qualified name of the data collection that is used to send signals to the connector. | |
source | List of the signaling channel names that are enabled for the connector. By default, the following channels are available:
| |
No default | List of notification channel names that are enabled for the connector. By default, the following channels are available:
| |
1024 | The maximum number of rows that the connector fetches and reads into memory during an incremental snapshot chunk. Increasing the chunk size provides greater efficiency, because the snapshot runs fewer snapshot queries of a greater size. However, larger chunk sizes also require more memory to buffer the snapshot data. Adjust the chunk size to a value that provides the best performance in your environment. | |
|
Specifies the watermarking mechanism that the connector uses during an incremental snapshot to deduplicate events that might be captured by an incremental snapshot and then recaptured after streaming resumes.
| |
|
How often, in milliseconds, the XMIN will be read from the replication slot. The XMIN value provides the lower bounds of where a new replication slot could start from. The default value of | |
|
The name of the TopicNamingStrategy class that should be used to determine the topic name for data change, schema change, transaction, heartbeat event etc., defaults to | |
|
Specify the delimiter for topic name, defaults to | |
| The size of the bounded concurrent hash map that is used to hold topic names. This cache helps to determine the topic name that corresponds to a given data collection. | |
|
Controls the name of the topic to which the connector sends heartbeat messages. The topic name has this pattern: | |
|
Controls the name of the topic to which the connector sends transaction metadata messages. The topic name has this pattern: | |
| Specifies the number of threads that the connector uses when performing an initial snapshot. To enable parallel initial snapshots, set the property to a value greater than 1. In a parallel initial snapshot, the connector processes multiple tables concurrently. Important Parallel initial snapshots is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope. | |
|
Defines tags that customize MBean object names by adding metadata that provides contextual information. Specify a comma-separated list of key-value pairs. Each key represents a tag for the MBean object name, and the corresponding value represents a value for the key, for example, k1=v1,k2=v2. The connector appends the specified tags to the base MBean object name. Tags can help you to organize and categorize metrics data. You can define tags to identify particular application instances, environments, regions, versions, and so forth. For more information, see Customized MBean names. | |
|
Specifies how the connector responds after an operation that results in a retriable error, such as a connection error.
| |
|
Specifies the time, in milliseconds, that the connector waits for a query to complete. Set the value to |
Pass-through PostgreSQL connector configuration properties
The connector supports pass-through properties that enable Debezium to specify custom configuration options for fine-tuning the behavior of the Apache Kafka producer and consumer. For information about the full range of configuration properties for Kafka producers and consumers, see the Kafka documentation.
Pass-through properties for configuring how the PostgreSQL connector interacts with the Kafka signaling topic
Debezium provides a set of signal.*
properties that control how the connector interacts with the Kafka signals topic.
The following table describes the Kafka signal
properties.
Property | Default | Description |
---|---|---|
<topic.prefix>-signal | The name of the Kafka topic that the connector monitors for ad hoc signals. Note If automatic topic creation is disabled, you must manually create the required signaling topic. A signaling topic is required to preserve signal ordering. The signaling topic must have a single partition. | |
kafka-signal | The name of the group ID that is used by Kafka consumers. | |
No default | A list of the host and port pairs that the connector uses to establish its initial connection to the Kafka cluster. Each pair references the Kafka cluster that is used by the Debezium Kafka Connect process. | |
| An integer value that specifies the maximum number of milliseconds that the connector waits when polling signals. | |
| Specifies whether the Kafka consumer writes an offset commit after it reads a message from the signaling topic. The value that you assign to this property determines whether the connector can process requests that the signaling topic receives while the connector is offline. Choose one of the following settings:
|
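For reference, the following minimal sketch shows how these settings might appear in a connector configuration. It assumes the property names signal.enabled.channels, signal.kafka.topic, signal.kafka.groupId, signal.kafka.bootstrap.servers, and signal.kafka.poll.timeout.ms, and uses placeholder broker addresses and topic names that you would replace with values for your environment.
{
  "signal.enabled.channels": "source,kafka",
  "signal.kafka.topic": "fulfillment-signal",
  "signal.kafka.groupId": "kafka-signal",
  "signal.kafka.bootstrap.servers": "broker1:9092,broker2:9092",
  "signal.kafka.poll.timeout.ms": "100"
}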
Pass-through properties for configuring the Kafka consumer client for the signaling channel
The Debezium connector provides for pass-through configuration of the signals Kafka consumer. Pass-through signals properties begin with the prefix signal.consumer.*
. For example, the connector passes properties such as signal.consumer.security.protocol=SSL
to the Kafka consumer.
Debezium strips the prefixes from the properties before it passes the properties to the Kafka signals consumer.
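For example, a connector configuration might include entries such as the following sketch; the SSL settings shown here are illustrative assumptions, not required values.
{
  "signal.consumer.security.protocol": "SSL",
  "signal.consumer.ssl.truststore.location": "/var/private/ssl/kafka.truststore.jks",
  "signal.consumer.ssl.truststore.password": "<truststore-password>"
}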
Pass-through properties for configuring the PostgreSQL connector sink notification channel
The following table describes properties that you can use to configure the Debezium sink notification
channel.
Property | Default | Description |
---|---|---|
No default |
The name of the topic that receives notifications from Debezium. This property is required when you configure the |
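A minimal sketch of a sink notification configuration follows. It assumes the property names notification.enabled.channels and notification.sink.topic.name, and uses a placeholder topic name.
{
  "notification.enabled.channels": "sink",
  "notification.sink.topic.name": "debezium-notifications"
}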
Debezium connector pass-through database driver configuration properties
The Debezium connector provides for pass-through configuration of the database driver. Pass-through database properties begin with the prefix driver.*
. For example, the connector passes properties such as driver.foobar=false
to the JDBC URL.
Debezium strips the prefixes from the properties before it passes the properties to the database driver.
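For example, a connector configuration might include driver-prefixed entries such as the following sketch; the sslmode value is an assumed illustration of a PostgreSQL JDBC driver option, and driver.foobar repeats the placeholder used above.
{
  "driver.sslmode": "require",
  "driver.foobar": "false"
}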
2.6.7. Monitoring Debezium PostgreSQL connector performance
The Debezium PostgreSQL connector provides two types of metrics that are in addition to the built-in support for JMX metrics that Zookeeper, Kafka, and Kafka Connect provide.
- Snapshot metrics provide information about connector operation while performing a snapshot.
- Streaming metrics provide information about connector operation when the connector is capturing changes and streaming change event records.
Debezium monitoring documentation provides details for how to expose these metrics by using JMX.
Customized MBean names
Debezium connectors expose metrics via the MBean name for the connector. These metrics, which are specific to each connector instance, provide data about the behavior of the connector’s snapshot, streaming, and schema history processes.
By default, when you deploy a correctly configured connector, Debezium generates a unique MBean name for each of the different connector metrics. To view the metrics for a connector process, you configure your observability stack to monitor its MBean. But these default MBean names depend on the connector configuration; configuration changes can result in changes to the MBean names. A change to the MBean name breaks the linkage between the connector instance and the MBean, disrupting monitoring activity. In this scenario, you must reconfigure the observability stack to use the new MBean name if you want to resume monitoring.
To prevent monitoring disruptions that result from MBean name changes, you can configure custom metrics tags. You configure custom metrics by adding the custom.metric.tags
property to the connector configuration. The property accepts key-value pairs in which each key represents a tag for the MBean object name, and the corresponding value represents the value of that tag. For example: k1=v1,k2=v2
. Debezium appends the specified tags to the MBean name of the connector.
After you configure the custom.metric.tags
property for a connector, you can configure the observability stack to retrieve metrics associated with the specified tags. The observability stack then uses the specified tags, rather than the mutable MBean names to uniquely identify connectors. Later, if Debezium redefines how it constructs MBean names, or if the topic.prefix
in the connector configuration changes, metrics collection is uninterrupted, because the metrics scrape task uses the specified tag patterns to identify the connector.
A further benefit of using custom tags is that you can use tags that reflect the architecture of your data pipeline, so that metrics are organized in a way that suits your operational needs. For example, you might specify tags with values that declare the type of connector activity, the application context, or the data source, for example, db1-streaming-for-application-abc
. If you specify multiple key-value pairs, all of the specified pairs are appended to the connector’s MBean name.
The following example illustrates how tags modify the default MBean name.
Example 2.47. How custom tags modify the connector MBean name
By default, the PostgreSQL connector uses the following MBean name for streaming metrics:
debezium.postgresql:type=connector-metrics,context=streaming,server=<topic.prefix>
If you set the value of custom.metric.tags
to database=salesdb-streaming,table=inventory
, Debezium generates the following custom MBean name:
debezium.postgresql:type=connector-metrics,context=streaming,server=<topic.prefix>,database=salesdb-streaming,table=inventory
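The connector configuration entry that produces the preceding MBean name might look like the following sketch; the tag keys and values are arbitrary examples.
{
  "custom.metric.tags": "database=salesdb-streaming,table=inventory"
}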
2.6.7.1. Monitoring Debezium during snapshots of PostgreSQL databases
The MBean is debezium.postgres:type=connector-metrics,context=snapshot,server=<topic.prefix>
.
Snapshot metrics are not exposed unless a snapshot operation is active, or if a snapshot has occurred since the last connector start.
The following table lists the snapshot metrics that are available.
Attributes | Type | Description |
---|---|---|
| The last snapshot event that the connector has read. | |
| The number of milliseconds since the connector has read and processed the most recent event. | |
| The total number of events that this connector has seen since last started or reset. | |
| The number of events that have been filtered by include/exclude list filtering rules configured on the connector. | |
| The list of tables that are captured by the connector. | |
| The length of the queue that is used to pass events between the snapshotter and the main Kafka Connect loop. | |
| The free capacity of the queue used to pass events between the snapshotter and the main Kafka Connect loop. | |
| The total number of tables that are being included in the snapshot. | |
| The number of tables that the snapshot has yet to copy. | |
| Whether the snapshot was started. | |
| Whether the snapshot was paused. | |
| Whether the snapshot was aborted. | |
| Whether the snapshot completed. | |
| The total number of seconds that the snapshot has taken so far, even if not complete. This value also includes the time during which the snapshot was paused. | |
| The total number of seconds that the snapshot was paused. If the snapshot was paused several times, the paused time adds up. | |
| Map containing the number of rows scanned for each table in the snapshot. Tables are incrementally added to the Map during processing. Updates every 10,000 rows scanned and upon completing a table. | |
|
The maximum buffer of the queue in bytes. This metric is available if | |
| The current volume, in bytes, of records in the queue. |
The connector also provides the following additional snapshot metrics when an incremental snapshot is executed:
Attributes | Type | Description |
---|---|---|
| The identifier of the current snapshot chunk. | |
| The lower bound of the primary key set defining the current chunk. | |
| The upper bound of the primary key set defining the current chunk. | |
| The lower bound of the primary key set of the currently snapshotted table. | |
| The upper bound of the primary key set of the currently snapshotted table. |
2.6.7.2. Monitoring Debezium PostgreSQL connector record streaming
The MBean is debezium.postgres:type=connector-metrics,context=streaming,server=<topic.prefix>
.
The following table lists the streaming metrics that are available.
Attributes | Type | Description |
---|---|---|
| The last streaming event that the connector has read. | |
| The number of milliseconds since the connector has read and processed the most recent event. | |
| The total number of data change events reported by the source database since the last connector start, or since a metrics reset. Represents the data change workload for Debezium to process. | |
| The total number of create events processed by the connector since its last start or metrics reset. | |
| The total number of update events processed by the connector since its last start or metrics reset. | |
| The total number of delete events processed by the connector since its last start or metrics reset. | |
| The number of events that have been filtered by include/exclude list filtering rules configured on the connector. | |
| The list of tables that are captured by the connector. | |
| The length of the queue that is used to pass events between the streamer and the main Kafka Connect loop. | |
| The free capacity of the queue used to pass events between the streamer and the main Kafka Connect loop. | |
| Flag that denotes whether the connector is currently connected to the database server. | |
| The number of milliseconds between the last change event’s timestamp and the connector processing it. The values will incorporate any differences between the clocks on the machines where the database server and the connector are running. | |
| The number of processed transactions that were committed. | |
| The coordinates of the last received event. | |
| Transaction identifier of the last processed transaction. | |
|
The maximum buffer of the queue in bytes. This metric is available if | |
| The current volume, in bytes, of records in the queue. |
2.6.8. How Debezium PostgreSQL connectors handle faults and problems
Debezium is a distributed system that captures all changes in multiple upstream databases; it never misses or loses an event. When the system is operating normally or being managed carefully, Debezium provides exactly-once delivery of every change event record.
Exactly-once delivery of PostgreSQL change event records is a Developer Preview feature only. Developer Preview software is not supported by Red Hat in any way and is not functionally complete or production-ready. Do not use Developer Preview software for production or business-critical workloads. Developer Preview software provides early access to upcoming product software in advance of its possible inclusion in a Red Hat product offering. Customers can use this software to test functionality and provide feedback during the development process. This software might not have any documentation, is subject to change or removal at any time, and has received limited testing. Red Hat might provide ways to submit feedback on Developer Preview software without an associated SLA.
For more information about the support scope of Red Hat Developer Preview software, see Developer Preview Support Scope.
If a fault does happen, the system does not lose any events. However, while it is recovering from the fault, the connector might emit some duplicate change events. In these abnormal situations, Debezium, like Kafka, provides at-least-once delivery of change events.
Details are in the following sections:
Configuration and startup errors
In the following situations, the connector fails when trying to start, reports an error/exception in the log, and stops running:
- The connector’s configuration is invalid.
- The connector cannot successfully connect to PostgreSQL by using the specified connection parameters.
- The connector is restarting from a previously-recorded position in the PostgreSQL WAL (by using the LSN) and PostgreSQL no longer has that history available.
In these cases, the error message has details about the problem and possibly a suggested workaround. After you correct the configuration or address the PostgreSQL problem, restart the connector.
The PostgreSQL connector externally stores the last processed offset in the form of a PostgreSQL LSN. After a connector restarts and connects to a server instance, the connector communicates with the server to continue streaming from that particular offset. This offset is available as long as the Debezium replication slot remains intact. Never drop a replication slot on the primary server or you will lose data. For information about failure cases in which a slot has been removed, see the next section.
Cluster failures
As of release 12, PostgreSQL allows logical replication slots only on primary servers. This means that you can point a Debezium PostgreSQL connector to only the active primary server of a database cluster. Also, replication slots themselves are not propagated to replicas. If the primary server goes down, a new primary must be promoted.
Some managed PostgreSQL services (for example, AWS RDS and GCP Cloud SQL) implement replication to a standby via disk replication. This means that the replication slot does get replicated and remains available after a failover.
The new primary must have a replication slot that is configured for use by the pgoutput
plug-in and the database in which you want to capture changes. Only then can you point the connector to the new server and restart the connector.
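For reference, a replication slot can be created on the promoted primary with a statement similar to the following sketch; the slot name debezium is an assumption and must match the slot.name value in the connector configuration.
SELECT pg_create_logical_replication_slot('debezium', 'pgoutput');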
There are important caveats when failovers occur and you should pause Debezium until you can verify that you have an intact replication slot that has not lost data. After a failover:
- There must be a process that re-creates the Debezium replication slot before allowing the application to write to the new primary. This is crucial. Without this process, your application can miss change events.
- You might need to verify that Debezium was able to read all changes in the slot before the old primary failed.
One reliable method of recovering and verifying whether any changes were lost is to recover a backup of the failed primary to the point immediately before it failed. While this can be administratively difficult, it allows you to inspect the replication slot for any unconsumed changes.
Kafka Connect process stops gracefully
Suppose that Kafka Connect is being run in distributed mode and a Kafka Connect process is stopped gracefully. Prior to shutting down that process, Kafka Connect migrates the process’s connector tasks to another Kafka Connect process in that group. The new connector tasks start processing exactly where the prior tasks stopped. There is a short delay in processing while the connector tasks are stopped gracefully and restarted on the new processes.
Kafka Connect process crashes
If the Kafka Connect process stops unexpectedly, any connector tasks it was running terminate without recording their most recently processed offsets. When Kafka Connect is being run in distributed mode, Kafka Connect restarts those connector tasks on other processes. However, PostgreSQL connectors resume from the last offset that was recorded by the earlier processes. This means that the new replacement tasks might generate some of the same change events that were processed just prior to the crash. The number of duplicate events depends on the offset flush period and the volume of data changes just before the crash.
Because there is a chance that some events might be duplicated during a recovery from failure, consumers should always anticipate some duplicate events. Debezium changes are idempotent, so a sequence of events always results in the same state.
In each change event record, Debezium connectors insert source-specific information about the origin of the event, including the PostgreSQL server’s time of the event, the ID of the server transaction, and the position in the write-ahead log where the transaction changes were written. Consumers can keep track of this information, especially the LSN, to determine whether an event is a duplicate.
Connector is stopped for a duration
If the connector is gracefully stopped, the database can continue to be used. Any changes are recorded in the PostgreSQL WAL. When the connector restarts, it resumes streaming changes where it left off. That is, it generates change event records for all database changes that were made while the connector was stopped.
A properly configured Kafka cluster is able to handle massive throughput. Kafka Connect is written according to Kafka best practices, and given enough resources a Kafka Connect connector can also handle very large numbers of database change events. Because of this, after being stopped for a while, when a Debezium connector restarts, it is very likely to catch up with the database changes that were made while it was stopped. How quickly this happens depends on the capabilities and performance of Kafka and the volume of changes being made to the data in PostgreSQL.
2.7. Debezium connector for SQL Server
The Debezium SQL Server connector captures row-level changes that occur in the schemas of a SQL Server database.
For information about the SQL Server versions that are compatible with this connector, see the Debezium Supported Configurations page.
For details about the Debezium SQL Server connector and its use, see the following topics:
- Section 2.7.1, “Overview of Debezium SQL Server connector”
- Section 2.7.2, “How Debezium SQL Server connectors work”
- Section 2.7.2.11, “Descriptions of Debezium SQL Server connector data change events”
- Section 2.7.2.13, “How Debezium SQL Server connectors map data types”
- Section 2.7.3, “Setting up SQL Server to run a Debezium connector”
- Section 2.7.4, “Deployment of Debezium SQL Server connectors”
- Section 2.7.5, “Refreshing capture tables after a schema change”
- Section 2.7.6, “Monitoring Debezium SQL Server connector performance”
The first time that the Debezium SQL Server connector connects to a SQL Server database or cluster, it takes a consistent snapshot of the schemas in the database. After the initial snapshot is complete, the connector continuously captures row-level changes for INSERT
, UPDATE
, or DELETE
operations that are committed to the SQL Server databases that are enabled for CDC. The connector produces events for each data change operation, and streams them to Kafka topics. The connector streams all of the events for a table to a dedicated Kafka topic. Applications and services can then consume data change event records from that topic.
2.7.1. Overview of Debezium SQL Server connector
The Debezium SQL Server connector is based on the change data capture feature that is available in SQL Server 2016 Service Pack 1 (SP1) and later Standard edition or Enterprise edition. The SQL Server capture process monitors designated databases and tables, and stores the changes into specifically created change tables that have stored procedure facades.
To enable the Debezium SQL Server connector to capture change event records for database operations, you must first enable change data capture on the SQL Server database. CDC must be enabled on both the database and on each table that you want to capture. After you set up CDC on the source database, the connector can capture row-level INSERT
, UPDATE
, and DELETE
operations that occur in the database. The connector writes event records for each source table to a Kafka topic especially dedicated to that table. One topic exists for each captured table. Client applications read the Kafka topics for the database tables that they follow, and can respond to the row-level events they consume from those topics.
The first time that the connector connects to a SQL Server database or cluster, it takes a consistent snapshot of the schemas for all tables for which it is configured to capture changes, and streams this state to Kafka. After the snapshot is complete, the connector continuously captures subsequent row-level changes that occur. By first establishing a consistent view of all of the data, the connector can continue reading without having lost any of the changes that were made while the snapshot was taking place.
The Debezium SQL Server connector is tolerant of failures. As the connector reads changes and produces events, it periodically records the position of events in the database log (LSN / Log Sequence Number). If the connector stops for any reason (including communication failures, network problems, or crashes), after a restart the connector resumes reading the SQL Server CDC tables from the last point that it read.
Offsets are committed periodically. They are not committed at the time that a change event occurs. As a result, following an outage, duplicate events might be generated.
Fault tolerance also applies to snapshots. That is, if the connector stops during a snapshot, the connector begins a new snapshot when it restarts.
2.7.2. How Debezium SQL Server connectors work
To optimally configure and run a Debezium SQL Server connector, it is helpful to understand how the connector performs snapshots, streams change events, determines Kafka topic names, and uses metadata.
For details about how the connector works, see the following sections:
- Section 2.7.2.1, “How Debezium SQL Server connectors perform database snapshots”
- Section 2.7.2.2, “Ad hoc snapshots”
- Section 2.7.2.3, “Incremental snapshots”
- Section 2.7.2.5, “How Debezium SQL Server connectors read change data tables”
- Section 2.7.2.8, “Default names of Kafka topics that receive Debezium SQL Server change event records”
- Section 2.7.2.10, “How the Debezium SQL Server connector uses the schema change topic”
- Section 2.7.2.11, “Descriptions of Debezium SQL Server connector data change events”
- Section 2.7.2.12, “Debezium SQL Server connector-generated events that represent transaction boundaries”
2.7.2.1. How Debezium SQL Server connectors perform database snapshots
SQL Server CDC is not designed to store a complete history of database changes. For the Debezium SQL Server connector to establish a baseline for the current state of the database, it uses a process called snapshotting. The initial snapshot captures the structure and data of the tables in the database.
You can find more information about snapshots in the following sections:
2.7.2.1.1. Default workflow that the Debezium SQL Server connector uses to perform an initial snapshot
The following workflow lists the steps that Debezium takes to create a snapshot. These steps describe the process for a snapshot when the snapshot.mode
configuration property is set to its default value, which is initial
. You can customize the way that the connector creates snapshots by changing the value of the snapshot.mode
property. If you configure a different snapshot mode, the connector completes the snapshot by using a modified version of this workflow.
- Establish a connection to the database.
-
Determine the tables to be captured. By default, the connector captures all non-system tables. To have the connector capture a subset of tables or table elements, you can set a number of
include
andexclude
properties to filter the data, for example,table.include.list
ortable.exclude.list
. -
Obtain a lock on the SQL Server tables for which CDC is enabled to prevent structural changes from occurring during creation of the snapshot. The level of the lock is determined by the
snapshot.isolation.mode
configuration property. - Read the maximum log sequence number (LSN) position in the server’s transaction log.
Capture the structure of all non-system tables, or of all tables that are designated for capture. The connector persists this information in its internal database schema history topic. The schema history provides information about the structure that is in effect when a change event occurs.
NoteBy default, the connector captures the schema of every table in the database that is in capture mode, including tables that are not configured for capture. If tables are not configured for capture, the initial snapshot captures only their structure; it does not capture any table data. For more information about why snapshots persist schema information for tables that you did not include in the initial snapshot, see Understanding why initial snapshots capture the schema for all tables.
- Release the locks obtained in Step 3, if necessary. Other database clients can now write to any previously locked tables.
At the LSN position read in Step 4, the connector scans the tables to be captured. During the scan, the connector completes the following tasks:
- Confirms that the table was created before the snapshot began. If the table was created after the snapshot began, the connector skips the table. After the snapshot is complete, and the connector transitions to streaming, it emits change events for any tables that were created after the snapshot began.
-
Produces a
read
event for each row that is captured from a table. Allread
events contain the same LSN position, which is the LSN position that was obtained in step 4. -
Emits each
read
event to the Kafka topic for the table.
- Records the successful completion of the snapshot in the connector offsets.
The resulting initial snapshot captures the current state of each row in the tables that are enabled for CDC. From this baseline state, the connector captures subsequent changes as they occur.
After the snapshot process begins, if the process is interrupted due to connector failure, rebalancing, or other reasons, the process restarts after the connector restarts.
After the connector completes the initial snapshot, it continues streaming from the position that it read in Step 4 so that it does not miss any updates.
If the connector stops again for any reason, after it restarts, it resumes streaming changes from where it previously left off.
Setting | Description |
---|---|
| Perform snapshot on each connector start. After the snapshot completes, the connector begins to stream event records for subsequent database changes. |
| The connector performs a database snapshot as described in the default workflow for creating an initial snapshot. After the snapshot completes, the connector begins to stream event records for subsequent database changes. |
| The connector performs a database snapshot and stops before streaming any change event records, not allowing any subsequent change events to be captured. |
|
Deprecated, see |
|
The connector captures the structure of all relevant tables, performing all the steps described in the default snapshot workflow, except that it does not create |
|
Set this option to restore a database schema history topic that is lost or corrupted. After a restart, the connector runs a snapshot that rebuilds the topic from the source tables. You can also set the property to periodically prune a database schema history topic that experiences unexpected growth. + WARNING: Do not use this mode to perform a snapshot if schema changes were committed to the database after the last connector shutdown. |
| After the connector starts, it performs a snapshot only if it detects one of the following circumstances:
|
For more information, see snapshot.mode
in the table of connector configuration properties.
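As a point of reference, the following hedged sketch shows how snapshot.mode might be set alongside basic connection properties for the SQL Server connector; host names, credentials, database names, and table names are placeholders that you adjust for your environment.
{
  "connector.class": "io.debezium.connector.sqlserver.SqlServerConnector",
  "database.hostname": "sqlserver.example.com",
  "database.port": "1433",
  "database.user": "debezium",
  "database.password": "<password>",
  "database.names": "testDB",
  "topic.prefix": "fulfillment",
  "table.include.list": "dbo.customers",
  "snapshot.mode": "initial",
  "schema.history.internal.kafka.bootstrap.servers": "broker1:9092",
  "schema.history.internal.kafka.topic": "schemahistory.fulfillment"
}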
2.7.2.1.2. Description of why initial snapshots capture the schema history for all tables
The initial snapshot that a connector runs captures two types of information:
- Table data
-
Information about
INSERT
,UPDATE
, andDELETE
operations in tables that are named in the connector’stable.include.list
property. - Schema data
- DDL statements that describe the structural changes that are applied to tables. Schema data is persisted to both the internal schema history topic, and to the connector’s schema change topic, if one is configured.
After you run an initial snapshot, you might notice that the snapshot captures schema information for tables that are not designated for capture. By default, initial snapshots are designed to capture schema information for every table that is present in the database, not only from tables that are designated for capture. Connectors require that the table’s schema is present in the schema history topic before they can capture a table. By enabling the initial snapshot to capture schema data for tables that are not part of the original capture set, Debezium prepares the connector to readily capture event data from these tables should that later become necessary. If the initial snapshot does not capture a table’s schema, you must add the schema to the history topic before the connector can capture data from the table.
In some cases, you might want to limit schema capture in the initial snapshot. This can be useful when you want to reduce the time required to complete a snapshot, or when Debezium connects to the database instance through a user account that has access to multiple logical databases, but you want the connector to capture changes only from tables in a specific logical database. A brief configuration sketch of the relevant properties follows the list below.
Additional information
- Capturing data from tables not captured by the initial snapshot (no schema change)
- Capturing data from tables not captured by the initial snapshot (schema change)
-
Setting the
schema.history.internal.store.only.captured.tables.ddl
property to specify the tables from which to capture schema information. -
Setting the
schema.history.internal.store.only.captured.databases.ddl
property to specify the logical databases from which to capture schema changes.
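A brief sketch of these two settings in a connector configuration follows; the values shown are illustrative only.
{
  "schema.history.internal.store.only.captured.tables.ddl": "true",
  "schema.history.internal.store.only.captured.databases.ddl": "true"
}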
2.7.2.1.3. Capturing data from tables not captured by the initial snapshot (no schema change)
In some cases, you might want the connector to capture data from a table whose schema was not captured by the initial snapshot. Depending on the connector configuration, the initial snapshot might capture the table schema only for specific tables in the database. If the table schema is not present in the history topic, the connector fails to capture the table, and reports a missing schema error.
You might still be able to capture data from the table, but you must perform additional steps to add the table schema.
Prerequisites
- You want to capture data from a table with a schema that the connector did not capture during the initial snapshot.
- No schema changes were applied to the table between the LSNs of the earliest and latest change table entry that the connector reads. For information about capturing data from a new table that has undergone structural changes, see Section 2.7.2.1.4, “Capturing data from tables not captured by the initial snapshot (schema change)”.
Procedure
- Stop the connector.
-
Remove the internal database schema history topic that is specified by the
schema.history.internal.kafka.topic property
. Clear the offsets in the configured Kafka Connect
offset.storage.topic
. For more information about how to remove offsets, see the Debezium community FAQ.WarningRemoving offsets should be performed only by advanced users who have experience in manipulating internal Kafka Connect data. This operation is potentially destructive, and should be performed only as a last resort.
Apply the following changes to the connector configuration (a combined sketch of these settings appears after this procedure):
(Optional) Set the value of
schema.history.internal.store.only.captured.tables.ddl
tofalse
. This setting causes the snapshot to capture the schema for all tables, and guarantees that, in the future, the connector can reconstruct the schema history for all tables.
NoteSnapshots that capture the schema for all tables require more time to complete.
-
Add the tables that you want the connector to capture to
table.include.list
. Set the
snapshot.mode
to one of the following values:initial
-
When you restart the connector, it takes a full snapshot of the database that captures the table data and table structures.
If you select this option, consider setting the value of theschema.history.internal.store.only.captured.tables.ddl
property tofalse
to enable the connector to capture the schema of all tables. schema_only
- When you restart the connector, it takes a snapshot that captures only the table schema. Unlike a full data snapshot, this option does not capture any table data. Use this option if you want to restart the connector more quickly than with a full snapshot.
-
Restart the connector. The connector completes the type of snapshot specified by the
snapshot.mode
. (Optional) If the connector performed a
schema_only
snapshot, after the snapshot completes, initiate an incremental snapshot to capture data from the tables that you added. The connector runs the snapshot while it continues to stream real-time changes from the tables. Running an incremental snapshot captures the following data changes:- For tables that the connector previously captured, the incremental snapsot captures changes that occur while the connector was down, that is, in the interval between the time that the connector was stopped, and the current restart.
- For newly added tables, the incremental snapshot captures all existing table rows.
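Taken together, the configuration changes described in the preceding procedure might resemble the following hedged fragment; the table names are placeholders.
{
  "snapshot.mode": "initial",
  "schema.history.internal.store.only.captured.tables.ddl": "false",
  "table.include.list": "dbo.customers,dbo.orders"
}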
2.7.2.1.4. Capturing data from tables not captured by the initial snapshot (schema change)
If a schema change is applied to a table, records that are committed before the schema change have different structures than those that were committed after the change. When Debezium captures data from a table, it reads the schema history to ensure that it applies the correct schema to each event. If the schema is not present in the schema history topic, the connector is unable to capture the table, and an error results.
If you want to capture data from a table that was not captured by the initial snapshot, and the schema of the table was modified, you must add the schema to the history topic, if it is not already available. You can add the schema by running a new schema snapshot, or by running an initial snapshot for the table.
Prerequisites
- You want to capture data from a table with a schema that the connector did not capture during the initial snapshot.
- A schema change was applied to the table so that the records to be captured do not have a uniform structure.
Procedure
- Initial snapshot captured the schema for all tables (
store.only.captured.tables.ddl
was set tofalse
) -
Edit the
table.include.list
property to specify the tables that you want to capture. - Restart the connector.
- Initiate an incremental snapshot if you want to capture existing data from the newly added tables.
-
Edit the
- Initial snapshot did not capture the schema for all tables (
store.only.captured.tables.ddl
was set totrue
) If the initial snapshot did not save the schema of the table that you want to capture, complete one of the following procedures:
- Procedure 1: Schema snapshot, followed by incremental snapshot
In this procedure, the connector first performs a schema snapshot. You can then initiate an incremental snapshot to enable the connector to synchronize data.
- Stop the connector.
-
Remove the internal database schema history topic that is specified by the
schema.history.internal.kafka.topic property
. Clear the offsets in the configured Kafka Connect
offset.storage.topic
. For more information about how to remove offsets, see the Debezium community FAQ.WarningRemoving offsets should be performed only by advanced users who have experience in manipulating internal Kafka Connect data. This operation is potentially destructive, and should be performed only as a last resort.
Set values for properties in the connector configuration as described in the following steps:
-
Set the value of the
snapshot.mode
property toschema_only
. -
Edit the
table.include.list
to add the tables that you want to capture.
-
Set the value of the
- Restart the connector.
- Wait for Debezium to capture the schema of the new and existing tables. Data changes that occurred in any tables after the connector stopped are not captured.
- To ensure that no data is lost, initiate an incremental snapshot.
- Procedure 2: Initial snapshot, followed by optional incremental snapshot
In this procedure the connector performs a full initial snapshot of the database. As with any initial snapshot, in a database with many large tables, running an initial snapshot can be a time-consuming operation. After the snapshot completes, you can optionally trigger an incremental snapshot to capture any changes that occur while the connector is off-line.
- Stop the connector.
-
Remove the internal database schema history topic that is specified by the
schema.history.internal.kafka.topic property
. Clear the offsets in the configured Kafka Connect
offset.storage.topic
. For more information about how to remove offsets, see the Debezium community FAQ.WarningRemoving offsets should be performed only by advanced users who have experience in manipulating internal Kafka Connect data. This operation is potentially destructive, and should be performed only as a last resort.
-
Edit the
table.include.list
to add the tables that you want to capture. Set values for properties in the connector configuration as described in the following steps:
-
Set the value of the
snapshot.mode
property toinitial
. -
(Optional) Set
schema.history.internal.store.only.captured.tables.ddl
tofalse
.
-
Set the value of the
- Restart the connector. The connector takes a full database snapshot. After the snapshot completes, the connector transitions to streaming.
- (Optional) To capture any data that changed while the connector was off-line, initiate an incremental snapshot.
2.7.2.2. Ad hoc snapshots
By default, a connector runs an initial snapshot operation only after it starts for the first time. Following this initial snapshot, under normal circumstances, the connector does not repeat the snapshot process. Any future change event data that the connector captures comes in through the streaming process only.
However, in some situations the data that the connector obtained during the initial snapshot might become stale, lost, or incomplete. To provide a mechanism for recapturing table data, Debezium includes an option to perform ad hoc snapshots. You might want to perform an ad hoc snapshot after any of the following changes occur in your Debezium environment:
- The connector configuration is modified to capture a different set of tables.
- Kafka topics are deleted and must be rebuilt.
- Data corruption occurs due to a configuration error or some other problem.
You can re-run a snapshot for a table for which you previously captured a snapshot by initiating a so-called ad-hoc snapshot. Ad hoc snapshots require the use of signaling tables. You initiate an ad hoc snapshot by sending a signal request to the Debezium signaling table.
When you initiate an ad hoc snapshot of an existing table, the connector appends content to the topic that already exists for the table. If a previously existing topic was removed, Debezium can create a topic automatically if automatic topic creation is enabled.
Ad hoc snapshot signals specify the tables to include in the snapshot. The snapshot can capture the entire contents of the database, or capture only a subset of the tables in the database. Also, the snapshot can capture a subset of the contents of the table(s) in the database.
You specify the tables to capture by sending an execute-snapshot
message to the signaling table. Set the type of the execute-snapshot
signal to incremental
or blocking
, and provide the names of the tables to include in the snapshot, as described in the following table:
Field | Default | Value |
---|---|---|
|
|
Specifies the type of snapshot that you want to run. |
| N/A |
An array that contains regular expressions matching the fully-qualified names of the tables to include in the snapshot. |
| N/A |
An optional array that specifies a set of additional conditions that the connector evaluates to determine the subset of records to include in a snapshot.
|
| N/A | An optional string that specifies the column name that the connector uses as the primary key of a table during the snapshot process. |
Triggering an ad hoc incremental snapshot
You initiate an ad hoc incremental snapshot by adding an entry with the execute-snapshot
signal type to the signaling table, or by sending a signal message to a Kafka signaling topic. After the connector processes the message, it begins the snapshot operation. The snapshot process reads the first and last primary key values and uses those values as the start and end point for each table. Based on the number of entries in the table, and the configured chunk size, Debezium divides the table into chunks, and proceeds to snapshot each chunk, in succession, one at a time.
For more information, see Incremental snapshots.
Triggering an ad hoc blocking snapshot
You initiate an ad hoc blocking snapshot by adding an entry with the execute-snapshot
signal type to the signaling table or signaling topic. After the connector processes the message, it begins the snapshot operation. The connector temporarily stops streaming, and then initiates a snapshot of the specified table, following the same process that it uses during an initial snapshot. After the snapshot completes, the connector resumes streaming.
For more information, see Blocking snapshots.
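For illustration, a blocking snapshot request sent through the Kafka signaling channel might look like the following sketch; the connector name and table name are placeholders, and the message format mirrors the execute-snapshot example shown later in this section.
Key = `test_connector` Value = `{"type":"execute-snapshot","data": {"data-collections": ["db1.schema1.table1"], "type": "BLOCKING"}}`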
2.7.2.3. Incremental snapshots
Each SQL Server server or database is configured to use a specific collation, which determines how character data is stored, sorted, compared, and displayed. The sorting rules for some collation sets, such as the SQL Server collations (SQL_*) are not compatible with the Unicode sorting algorithm. In some cases, the incompatible sorting rules can lead to lost data when the connector runs an ad hoc snapshot. For example, if SQL Server is configured to send strings as Unicode (that is, the connection property sendStringParametersAsUnicode
is set to true
), the connector can skip records during the snapshot. To protect against lost data during an ad hoc snapshot, set the value of the driver.sendStringParametersAsUnicode
connection string property to false
.
For more information about using the sendStringParametersAsUnicode
property, see the SQL Server connection properties documentation.
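A configuration fragment that applies this recommendation might look like the following sketch:
{
  "driver.sendStringParametersAsUnicode": "false"
}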
To provide flexibility in managing snapshots, Debezium includes a supplementary snapshot mechanism, known as incremental snapshotting. Incremental snapshots rely on the Debezium mechanism for sending signals to a Debezium connector.
In an incremental snapshot, instead of capturing the full state of a database all at once, as in an initial snapshot, Debezium captures each table in phases, in a series of configurable chunks. You can specify the tables that you want the snapshot to capture and the size of each chunk. The chunk size determines the number of rows that the snapshot collects during each fetch operation on the database. The default chunk size for incremental snapshots is 1024 rows.
As an incremental snapshot proceeds, Debezium uses watermarks to track its progress, maintaining a record of each table row that it captures. This phased approach to capturing data provides the following advantages over the standard initial snapshot process:
- You can run incremental snapshots in parallel with streamed data capture, instead of postponing streaming until the snapshot completes. The connector continues to capture near real-time events from the change log throughout the snapshot process, and neither operation blocks the other.
- If the progress of an incremental snapshot is interrupted, you can resume it without losing any data. After the process resumes, the snapshot begins at the point where it stopped, rather than recapturing the table from the beginning.
-
You can run an incremental snapshot on demand at any time, and repeat the process as needed to adapt to database updates. For example, you might re-run a snapshot after you modify the connector configuration to add a table to its
table.include.list
property.
Incremental snapshot process
When you run an incremental snapshot, Debezium sorts each table by primary key and then splits the table into chunks based on the configured chunk size. Working chunk by chunk, it then captures each table row in a chunk. For each row that it captures, the snapshot emits a READ
event. That event represents the value of the row when the snapshot for the chunk began.
As a snapshot proceeds, it’s likely that other processes continue to access the database, potentially modifying table records. To reflect such changes, INSERT
, UPDATE
, or DELETE
operations are committed to the transaction log as per usual. Similarly, the ongoing Debezium streaming process continues to detect these change events and emits corresponding change event records to Kafka.
How Debezium resolves collisions among records with the same primary key
In some cases, the UPDATE
or DELETE
events that the streaming process emits are received out of sequence. That is, the streaming process might emit an event that modifies a table row before the snapshot captures the chunk that contains the READ
event for that row. When the snapshot eventually emits the corresponding READ
event for the row, its value is already superseded. To ensure that incremental snapshot events that arrive out of sequence are processed in the correct logical order, Debezium employs a buffering scheme for resolving collisions. Only after collisions between the snapshot events and the streamed events are resolved does Debezium emit an event record to Kafka.
Snapshot window
To assist in resolving collisions between late-arriving READ
events and streamed events that modify the same table row, Debezium employs a so-called snapshot window. The snapshot window demarcates the interval during which an incremental snapshot captures data for a specified table chunk. Before the snapshot window for a chunk opens, Debezium follows its usual behavior and emits events from the transaction log directly downstream to the target Kafka topic. But from the moment that the snapshot for a particular chunk opens, until it closes, Debezium performs a de-duplication step to resolve collisions between events that have the same primary key.
For each data collection, Debezium emits two types of events, and stores the records for them both in a single destination Kafka topic. The snapshot records that it captures directly from a table are emitted as READ
operations. Meanwhile, as users continue to update records in the data collection, and the transaction log is updated to reflect each commit, Debezium emits UPDATE
or DELETE
operations for each change.
As the snapshot window opens, and Debezium begins processing a snapshot chunk, it delivers snapshot records to a memory buffer. During the snapshot window, the primary keys of the READ
events in the buffer are compared to the primary keys of the incoming streamed events. If no match is found, the streamed event record is sent directly to Kafka. If Debezium detects a match, it discards the buffered READ
event, and writes the streamed record to the destination topic, because the streamed event logically supersedes the static snapshot event. After the snapshot window for the chunk closes, the buffer contains only READ
events for which no related transaction log events exist. Debezium emits these remaining READ
events to the table’s Kafka topic.
The connector repeats the process for each snapshot chunk.
Currently, you can use either of the following methods to initiate an incremental snapshot:
The Debezium connector for SQL Server does not support schema changes while an incremental snapshot is running.
2.7.2.3.1. Triggering an incremental snapshot
To initiate an incremental snapshot, you can send an ad hoc snapshot signal to the signaling table on the source database. You submit snapshot signals as SQL INSERT
queries.
After Debezium detects the change in the signaling table, it reads the signal, and runs the requested snapshot operation.
The query that you submit specifies the tables to include in the snapshot, and, optionally, specifies the type of snapshot operation. Debezium currently supports the incremental
and blocking
snapshot types.
To specify the tables to include in the snapshot, provide a data-collections
array that lists the tables, or an array of regular expressions used to match tables, for example,
{"data-collections": ["public.MyFirstTable", "public.MySecondTable"]}
The data-collections
array for an incremental snapshot signal has no default value. If the data-collections
array is empty, Debezium interprets the empty array to mean that no action is required, and it does not perform a snapshot.
If the name of a table that you want to include in a snapshot contains a dot (.
), a space, or some other non-alphanumeric character, you must escape the table name in double quotes.
For example, to include a table that exists in the public
schema in the db1
database, and that has the name My.Table
, use the following format: "db1.public.\"My.Table\""
.
Prerequisites
- A signaling data collection exists on the source database.
-
The signaling data collection is specified in the
signal.data.collection
property.
Using a source signaling channel to trigger an incremental snapshot
Send a SQL query to add the ad hoc incremental snapshot request to the signaling table:
INSERT INTO <signalTable> (id, type, data) VALUES ('<id>', '<snapshotType>', '{"data-collections": ["<fullyQualfiedTableName>","<fullyQualfiedTableName>"],"type":"<snapshotType>","additional-conditions":[{"data-collection": "<fullyQualfiedTableName>", "filter": "<additional-condition>"}]}');
For example,
INSERT INTO db1.myschema.debezium_signal (id, type, data) 1 values ('ad-hoc-1', 2 'execute-snapshot', 3 '{"data-collections": ["db1.schema1.table1", "db1.schema1.table2"], 4 "type":"incremental", 5 "additional-conditions":[{"data-collection": "db1.schema1.table1" ,"filter":"color=\'blue\'"}]}'); 6
The values of the
id
,type
, anddata
parameters in the command correspond to the fields of the signaling table.
The following table describes the parameters in the example:Table 2.162. Descriptions of fields in a SQL command for sending an incremental snapshot signal to the signaling table Item Value Description 1
database.schema.debezium_signal
Specifies the fully-qualified name of the signaling table on the source database.
2
ad-hoc-1
The
id
parameter specifies an arbitrary string that is assigned as theid
identifier for the signal request.
Use this string to identify logging messages to entries in the signaling table. Debezium does not use this string. Rather, during the snapshot, Debezium generates its ownid
string as a watermarking signal.3
execute-snapshot
The
type
parameter specifies the operation that the signal is intended to trigger.
4
data-collections
A required component of the
data
field of a signal that specifies an array of table names or regular expressions to match table names to include in the snapshot.
The array lists regular expressions that use the formatdatabase.schema.table
to match the fully-qualified names of the tables. This format is the same as the one that you use to specify the name of the connector’s signaling table.5
incremental
An optional
type
component of thedata
field of a signal that specifies the type of snapshot operation to run.
Valid values areincremental
andblocking
.
If you do not specify a value, the connector defaults to performing an incremental snapshot.6
additional-conditions
An optional array that specifies a set of additional conditions that the connector evaluates to determine the subset of records to include in a snapshot.
Each additional condition is an object withdata-collection
andfilter
properties. You can specify different filters for each data collection.
* Thedata-collection
property is the fully-qualified name of the data collection that the filter applies to. For more information about theadditional-conditions
parameter, see Section 2.7.2.3.2, “Running an ad hoc incremental snapshots withadditional-conditions
”.
2.7.2.3.2. Running an ad hoc incremental snapshots with additional-conditions
If you want a snapshot to include only a subset of the content in a table, you can modify the signal request by appending an additional-conditions
parameter to the snapshot signal.
The SQL query for a typical snapshot takes the following form:
SELECT * FROM <tableName> ....
By adding an additional-conditions
parameter, you append a WHERE
condition to the SQL query, as in the following example:
SELECT * FROM <data-collection> WHERE <filter> ....
The following example shows a SQL query to send an ad hoc incremental snapshot request with an additional condition to the signaling table:
INSERT INTO <signalTable> (id, type, data) VALUES ('<id>', '<snapshotType>', '{"data-collections": ["<fullyQualfiedTableName>","<fullyQualfiedTableName>"],"type":"<snapshotType>","additional-conditions":[{"data-collection": "<fullyQualfiedTableName>", "filter": "<additional-condition>"}]}');
For example, suppose you have a products
table that contains the following columns:
-
id
(primary key) -
color
-
quantity
If you want an incremental snapshot of the products
table to include only the data items where color=blue
, you can use the following SQL statement to trigger the snapshot:
INSERT INTO db1.myschema.debezium_signal (id, type, data) VALUES('ad-hoc-1', 'execute-snapshot', '{"data-collections": ["db1.schema1.products"],"type":"incremental", "additional-conditions":[{"data-collection": "db1.schema1.products", "filter": "color=blue"}]}');
The additional-conditions
parameter also enables you to pass conditions that are based on more than one column. For example, using the products
table from the previous example, you can submit a query that triggers an incremental snapshot that includes the data of only those items for which color=blue
and quantity>10
:
INSERT INTO db1.myschema.debezium_signal (id, type, data) VALUES('ad-hoc-1', 'execute-snapshot', '{"data-collections": ["db1.schema1.products"],"type":"incremental", "additional-conditions":[{"data-collection": "db1.schema1.products", "filter": "color=blue AND quantity>10"}]}');
The following example shows the JSON for an incremental snapshot event that is captured by a connector.
Example 2.48. Incremental snapshot event message
{ "before":null, "after": { "pk":"1", "value":"New data" }, "source": { ... "snapshot":"incremental" 1 }, "op":"r", 2 "ts_ms":"1620393591654", "ts_us":"1620393591654547", "ts_ns":"1620393591654547920", "transaction":null }
Item | Field name | Description |
---|---|---|
1 | snapshot | Specifies the type of snapshot operation to run. |
2 | op | Specifies the event type. |
2.7.2.3.3. Using the Kafka signaling channel to trigger an incremental snapshot
You can send a message to the configured Kafka topic to request the connector to run an ad hoc incremental snapshot.
The key of the Kafka message must match the value of the topic.prefix connector configuration option.
The value of the message is a JSON object with type and data fields.
The signal type is execute-snapshot, and the data field must have the following fields:
Field | Default | Value |
---|---|---|
type | incremental | The type of the snapshot to be executed. Currently Debezium supports the incremental and blocking types. |
data-collections | N/A | An array of comma-separated regular expressions that match the fully-qualified names of tables to include in the snapshot. |
additional-conditions | N/A | An optional array of additional conditions that specifies criteria that the connector evaluates to designate a subset of records to include in a snapshot. |
Example 2.49. An execute-snapshot
Kafka message
Key = `test_connector` Value = `{"type":"execute-snapshot","data": {"data-collections": ["{collection-container}.table1", "{collection-container}.table2"], "type": "INCREMENTAL"}}`
Ad hoc incremental snapshots with additional-conditions
Debezium uses the additional-conditions
field to select a subset of a table’s content.
Typically, when Debezium runs a snapshot, it runs a SQL query such as:
SELECT * FROM <tableName> ….
When the snapshot request includes an additional-conditions
property, the data-collection
and filter
parameters of the property are appended to the SQL query, for example:
SELECT * FROM <data-collection> WHERE <filter> ….
For example, given a products
table with the columns id
(primary key), color
, and brand
, if you want a snapshot to include only content for which color='blue'
, when you request the snapshot, you could add the additional-conditions
property to filter the content:
Key = `test_connector` Value = `{"type":"execute-snapshot","data": {"data-collections": ["db1.schema1.products"], "type": "INCREMENTAL", "additional-conditions": [{"data-collection": "db1.schema1.products" ,"filter":"color='blue'"}]}}`
You can also use the additional-conditions
property to pass conditions based on multiple columns. For example, using the same products
table as in the previous example, if you want a snapshot to include only the content from the products
table for which color='blue'
, and brand='MyBrand'
, you could send the following request:
Key = `test_connector` Value = `{"type":"execute-snapshot","data": {"data-collections": ["db1.schema1.products"], "type": "INCREMENTAL", "additional-conditions": [{"data-collection": "db1.schema1.products" ,"filter":"color='blue' AND brand='MyBrand'"}]}}`
2.7.2.3.4. Stopping an incremental snapshot
In some situations, it might be necessary to stop an incremental snapshot. For example, you might realize that the snapshot was not configured correctly, or maybe you want to ensure that resources are available for other database operations. You can stop a snapshot that is already running by sending a signal to the signaling table on the source database.
You submit a stop snapshot signal to the signaling table by sending it in a SQL INSERT
query. The stop-snapshot signal specifies the type
of the snapshot operation as incremental
, and optionally specifies the tables that you want to omit from the currently running snapshot. After Debezium detects the change in the signaling table, it reads the signal, and stops the incremental snapshot operation if it’s in progress.
Additional resources
You can also stop an incremental snapshot by sending a JSON message to the Kafka signaling topic.
Prerequisites
- A signaling data collection exists on the source database.
- The signaling data collection is specified in the signal.data.collection property.
Using a source signaling channel to stop an incremental snapshot
Send a SQL query to stop the ad hoc incremental snapshot to the signaling table:
INSERT INTO <signalTable> (id, type, data) values ('<id>', 'stop-snapshot', '{"data-collections": ["<fullyQualfiedTableName>","<fullyQualfiedTableName>"],"type":"incremental"}');
For example,
INSERT INTO db1.myschema.debezium_signal (id, type, data) 1
values ('ad-hoc-1', 2
    'stop-snapshot', 3
    '{"data-collections": ["db1.schema1.table1", "db1.schema1.table2"], 4
    "type":"incremental"}'); 5
The values of the id, type, and data parameters in the signal command correspond to the fields of the signaling table.
The following table describes the parameters in the example:

Table 2.165. Descriptions of fields in a SQL command for sending a stop incremental snapshot signal to the signaling table

Item | Value | Description |
---|---|---|
1 | database.schema.debezium_signal | Specifies the fully-qualified name of the signaling table on the source database. |
2 | ad-hoc-1 | The id parameter specifies an arbitrary string that is assigned as the id identifier for the signal request. Use this string to correlate logging messages with entries in the signaling table. Debezium does not use this string. |
3 | stop-snapshot | The type parameter specifies the operation that the signal is intended to trigger. |
4 | data-collections | An optional component of the data field of a signal that specifies an array of table names or regular expressions to match table names to remove from the snapshot. The array lists regular expressions which match tables by their fully-qualified names in the format database.schema.table. If you omit this component from the data field, the signal stops the entire incremental snapshot that is in progress. |
5 | incremental | A required component of the data field of a signal that specifies the type of snapshot operation that is to be stopped. Currently, the only valid option is incremental. If you do not specify a type value, the signal fails to stop the incremental snapshot. |
2.7.2.3.5. Using the Kafka signaling channel to stop an incremental snapshot
You can send a signal message to the configured Kafka signaling topic to stop an ad hoc incremental snapshot.
The key of the Kafka message must match the value of the topic.prefix connector configuration option.
The value of the message is a JSON object with type and data fields.
The signal type is stop-snapshot, and the data field must have the following fields:
Field | Default | Value |
---|---|---|
type | incremental | The type of the snapshot to be executed. Currently Debezium supports only the incremental type. |
data-collections | N/A | An optional array of comma-separated regular expressions that match the fully-qualified names of the tables to remove from the snapshot. |
The following example shows a typical stop-snapshot
Kafka message:
Key = `test_connector` Value = `{"type":"stop-snapshot","data": {"data-collections": ["db1.schema1.table1", "db1.schema1.table2"], "type": "INCREMENTAL"}}`
2.7.2.4. Blocking snapshots
To provide more flexibility in managing snapshots, Debezium includes a supplementary ad hoc snapshot mechanism, known as a blocking snapshot. Blocking snapshots rely on the Debezium mechanism for sending signals to a Debezium connector.
A blocking snapshot behaves just like an initial snapshot, except that you can trigger it at run time.
You might want to run a blocking snapshot rather than use the standard initial snapshot process in the following situations:
- You add a new table and you want to complete the snapshot while the connector is running.
- You add a large table, and you want the snapshot to complete in less time than is possible with an incremental snapshot.
Blocking snapshot process
When you run a blocking snapshot, Debezium stops streaming, and then initiates a snapshot of the specified table, following the same process that it uses during an initial snapshot. After the snapshot completes, the streaming is resumed.
Configure snapshot
You can set the following properties in the data
component of a signal:
- data-collections: Specifies the tables that the snapshot must include.
- additional-conditions: Specifies different filters for different tables.
  - The data-collection property is the fully-qualified name of the table to which the filter applies.
  - The filter property has the same value that you would use in the snapshot.select.statement.overrides property.
For example:
{"type": "blocking", "data-collections": ["schema1.table1", "schema1.table2"], "additional-conditions": [{"data-collection": "schema1.table1", "filter": "SELECT * FROM [schema1].[table1] WHERE column1 = 0 ORDER BY column2 DESC"}, {"data-collection": "schema1.table2", "filter": "SELECT * FROM [schema1].[table2] WHERE column2 > 0"}]}
Possible duplicates
A delay might exist between the time that you send the signal to trigger the snapshot, and the time when streaming stops and the snapshot starts. As a result of this delay, after the snapshot completes, the connector might emit some event records that duplicate records captured by the snapshot.
2.7.2.5. How Debezium SQL Server connectors read change data tables
When the connector first starts, it takes a structural snapshot of the captured tables and persists this information to its internal database schema history topic. The connector then identifies a change table for each source table, and completes the following steps.
- For each change table, the connector reads all of the changes that were created between the last stored maximum LSN and the current maximum LSN.
- The connector sorts the changes that it reads in ascending order, based on the values of their commit LSN and change LSN. This sorting order ensures that the changes are replayed by Debezium in the same order in which they occurred in the database.
- The connector passes the commit and change LSNs as offsets to Kafka Connect.
- The connector stores the maximum LSN and restarts the process from Step 1.
After a restart, the connector resumes processing from the last offset (commit and change LSNs) that it read.
The connector is able to detect whether CDC is enabled or disabled for included source tables and adjust its behavior.
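Conceptually, the per-table reads in step 1 resemble a query against the SQL Server CDC table-valued functions. The following Transact-SQL is only an illustrative sketch, not the connector's literal query; it assumes a capture instance named dbo_customers that was created by sys.sp_cdc_enable_table:
-- Illustrative only: read changes between two LSNs for one capture instance.
DECLARE @from_lsn binary(10), @to_lsn binary(10);
SET @from_lsn = sys.fn_cdc_get_min_lsn('dbo_customers'); -- the connector instead uses its last stored maximum LSN
SET @to_lsn   = sys.fn_cdc_get_max_lsn();                -- current maximum LSN in the database
SELECT *
FROM cdc.fn_cdc_get_all_changes_dbo_customers(@from_lsn, @to_lsn, N'all update old')
ORDER BY __$start_lsn, __$seqval, __$operation;          -- replay changes in commit and change order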
2.7.2.6. No maximum LSN recorded in the database
There may be situations when no maximum LSN is recorded in the database because:
- SQL Server Agent is not running
- No changes are recorded in the change table yet
- Database has low activity and the cdc clean up job periodically clears entries from the cdc tables
Of these possibilities, only the first is a real problem, because a running SQL Server Agent is a prerequisite; the second and third conditions are normal.
To mitigate this issue and to differentiate the first condition from the others, the connector checks the status of the SQL Server Agent by running the following query: "SELECT CASE WHEN dss.[status]=4 THEN 1 ELSE 0 END AS isRunning FROM [#db].sys.dm_server_services dss WHERE dss.[servicename] LIKE N'SQL Server Agent (%';". If the SQL Server Agent is not running, an ERROR is written in the log: "No maximum LSN recorded in the database; SQL Server Agent is not running".
The SQL Server Agent running status query requires the VIEW SERVER STATE server permission. If you do not want to grant this permission to the configured user, you can configure your own query through the database.sqlserver.agent.status.query property. Define a function that returns true or 1 if the SQL Server Agent is running (and false or 0 otherwise), so that you can safely use high-level permissions without granting them, as explained in What minimum permissions do I need to provide to a user so that it can check the status of SQL Server Agent Service? or Safely and Easily Use High-Level Permissions Without Granting Them to Anyone: Server-level. You would then configure the query property as follows: database.sqlserver.agent.status.query=SELECT [#db].func_is_sql_server_agent_running(). Use [#db] as a placeholder for the database name.
2.7.2.7. Limitations of Debezium SQL Server connector
SQL Server specifically requires the base object to be a table in order to create a change capture instance. As a consequence, SQL Server does not support capturing changes from indexed views (also known as materialized views), and therefore the Debezium SQL Server connector does not support them either.
2.7.2.8. Default names of Kafka topics that receive Debezium SQL Server change event records
By default, the SQL Server connector writes events for all INSERT, UPDATE, and DELETE operations that occur in a table to a single Apache Kafka topic that is specific to that table. The connector uses the following convention to name change event topics: <topicPrefix>.<databaseName>.<schemaName>.<tableName>
The following list provides definitions for the components of the default name:
- topicPrefix: The logical name of the server, as specified by the topic.prefix configuration property.
- databaseName: The name of the database in which the change event occurred.
- schemaName: The name of the database schema in which the change event occurred.
- tableName: The name of the database table in which the change event occurred.
For example, if fulfillment is the logical server name, testDB is the database name, and dbo is the schema name, and the database contains tables with the names products, products_on_hand, customers, and orders, the connector would stream change event records to the following Kafka topics:
- fulfillment.testDB.dbo.products
- fulfillment.testDB.dbo.products_on_hand
- fulfillment.testDB.dbo.customers
- fulfillment.testDB.dbo.orders
The connector applies similar naming conventions to label its internal database schema history topics, schema change topics, and transaction metadata topics.
If the default topic names do not meet your requirements, you can configure custom topic names. To configure custom topic names, you specify regular expressions in the logical topic routing SMT. For more information about using the logical topic routing SMT to customize topic naming, see Topic routing.
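As an illustration only, the following connector configuration fragment sketches how the io.debezium.transforms.ByLogicalTableRouter SMT could reroute the per-table topics from the previous example onto a single topic. The transform alias, regular expression, and replacement value are assumptions for this sketch, not values taken from this documentation:
{
  "transforms": "Reroute",
  "transforms.Reroute.type": "io.debezium.transforms.ByLogicalTableRouter",
  "transforms.Reroute.topic.regex": "fulfillment\\.testDB\\.dbo\\.(.*)",
  "transforms.Reroute.topic.replacement": "fulfillment.all_tables"
}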
2.7.2.9. How Debezium SQL Server connectors handle database schema changes
When a database client queries a database, the client uses the database’s current schema. However, the database schema can be changed at any time, which means that the connector must be able to identify what the schema was at the time each insert, update, or delete operation was recorded. Also, a connector cannot necessarily apply the current schema to every event. If an event is relatively old, it’s possible that it was recorded before the current schema was applied.
To ensure correct processing of change events that occur after a schema change, the Debezium SQL Server connector stores a snapshot of the new schema based on the structure in the SQL Server change tables, which mirror the structure of their associated data tables. The connector stores the table schema information, together with the LSN of the operations that result in schema changes, in the database schema history Kafka topic. The connector uses the stored schema representation to produce change events that correctly mirror the structure of tables at the time of each insert, update, or delete operation.
When the connector restarts after either a crash or a graceful stop, it resumes reading entries in the SQL Server CDC tables from the last position that it read. Based on the schema information that the connector reads from the database schema history topic, the connector applies the table structures that existed at the position where the connector restarts.
If you update the schema of a SQL Server table for which CDC is enabled, it is important that you also update the schema of the corresponding change table. You must be a SQL Server database administrator with elevated privileges to update database schema. For more information about updating SQL Server database schema in Debezium environments, see Database schema evolution.
The database schema history topic is for internal connector use only. Optionally, the connector can also emit schema change events to a different topic that is intended for consumer applications.
Additional resources
- Default names for topics that receive Debezium event records.
2.7.2.10. How the Debezium SQL Server connector uses the schema change topic
For each table for which CDC is enabled, the Debezium SQL Server connector stores a history of the schema change events that are applied to tables in the database. The connector writes schema change events to a Kafka topic named <topicPrefix>
, where topicPrefix
is the logical server name that is specified in the topic.prefix
configuration property.
Messages that the connector sends to the schema change topic contain a payload, and, optionally, also contain the schema of the change event message.
The schema for the schema change event has the following elements:
- name: The name of the schema change event message.
- type: The type of the change event message.
- version: The version of the schema. The version is an integer that is incremented each time the schema is changed.
- fields: The fields that are included in the change event message.
Example: Schema of the SQL Server connector schema change topic
The following example shows a typical schema in JSON format.
{ "schema": { "type": "struct", "fields": [ { "type": "string", "optional": false, "field": "databaseName" } ], "optional": false, "name": "io.debezium.connector.sqlserver.SchemaChangeKey", "version": 1 }, "payload": { "databaseName": "inventory" } }
The payload of a schema change event message includes the following elements:
- databaseName: The name of the database to which the statements are applied. The value of databaseName serves as the message key.
- tableChanges: A structured representation of the entire table schema after the schema change. The tableChanges field contains an array that includes entries for each column of the table. Because the structured representation presents data in JSON or Avro format, consumers can easily read messages without first processing them through a DDL parser.
When the connector is configured to capture a table, it stores the history of the table’s schema changes not only in the schema change topic, but also in an internal database schema history topic. The internal database schema history topic is for connector use only and it is not intended for direct use by consuming applications. Ensure that applications that require notifications about schema changes consume that information only from the schema change topic.
The format of the messages that a connector emits to its schema change topic is in an incubating state and can change without notice.
Debezium emits a message to the schema change topic when the following events occur:
- You enable CDC for a table.
- You disable CDC for a table.
- You alter the structure of a table for which CDC is enabled by following the schema evolution procedure.
Example: Message emitted to the SQL Server connector schema change topic
The following example shows a message in the schema change topic. The message contains a logical representation of the table schema.
{ "schema": { ... }, "payload": { "source": { "version": "2.7.3.Final", "connector": "sqlserver", "name": "server1", "ts_ms": 0, "snapshot": "true", "db": "testDB", "schema": "dbo", "table": "customers", "change_lsn": null, "commit_lsn": "00000025:00000d98:00a2", "event_serial_no": null }, "ts_ms": 1588252618953, 1 "databaseName": "testDB", 2 "schemaName": "dbo", "ddl": null, 3 "tableChanges": [ 4 { "type": "CREATE", 5 "id": "\"testDB\".\"dbo\".\"customers\"", 6 "table": { 7 "defaultCharsetName": null, "primaryKeyColumnNames": [ 8 "id" ], "columns": [ 9 { "name": "id", "jdbcType": 4, "nativeType": null, "typeName": "int identity", "typeExpression": "int identity", "charsetName": null, "length": 10, "scale": 0, "position": 1, "optional": false, "autoIncremented": false, "generated": false }, { "name": "first_name", "jdbcType": 12, "nativeType": null, "typeName": "varchar", "typeExpression": "varchar", "charsetName": null, "length": 255, "scale": null, "position": 2, "optional": false, "autoIncremented": false, "generated": false }, { "name": "last_name", "jdbcType": 12, "nativeType": null, "typeName": "varchar", "typeExpression": "varchar", "charsetName": null, "length": 255, "scale": null, "position": 3, "optional": false, "autoIncremented": false, "generated": false }, { "name": "email", "jdbcType": 12, "nativeType": null, "typeName": "varchar", "typeExpression": "varchar", "charsetName": null, "length": 255, "scale": null, "position": 4, "optional": false, "autoIncremented": false, "generated": false } ], "attributes": [ 10 { "customAttribute": "attributeValue" } ] } } ] } }
Item | Field name | Description |
---|---|---|
1 |
| Optional field that displays the time at which the connector processed the event. The time is based on the system clock in the JVM running the Kafka Connect task. In the source object, ts_ms indicates the time that the change was made in the database. By comparing the value for payload.source.ts_ms with the value for payload.ts_ms, you can determine the lag between the source database update and Debezium. |
2 |
| Identifies the database and the schema that contain the change. |
3 |
|
Always |
4 |
| An array of one or more items that contain the schema changes generated by a DDL command. |
5 |
| Describes the kind of change. The value is one of the following:
|
6 |
| Full identifier of the table that was created, altered, or dropped. |
7 |
| Represents table metadata after the applied change. |
8 |
| List of columns that compose the table’s primary key. |
9 |
| Metadata for each column in the changed table. |
10 |
| Custom attribute metadata for each table change. |
In messages that the connector sends to the schema change topic, the key is the name of the database that contains the schema change. In the following example, the payload
field contains the key:
{ "schema": { "type": "struct", "fields": [ { "type": "string", "optional": false, "field": "databaseName" } ], "optional": false, "name": "io.debezium.connector.sqlserver.SchemaChangeKey", "version": 1 }, "payload": { "databaseName": "testDB" } }
2.7.2.11. Descriptions of Debezium SQL Server connector data change events
The Debezium SQL Server connector generates a data change event for each row-level INSERT
, UPDATE
, and DELETE
operation. Each event contains a key and a value. The structure of the key and the value depends on the table that was changed.
Debezium and Kafka Connect are designed around continuous streams of event messages. However, the structure of these events may change over time, which can be difficult for consumers to handle. To address this, each event contains the schema for its content or, if you are using a schema registry, a schema ID that a consumer can use to obtain the schema from the registry. This makes each event self-contained.
The following skeleton JSON shows the basic four parts of a change event. However, how you configure the Kafka Connect converter that you choose to use in your application determines the representation of these four parts in change events. A schema
field is in a change event only when you configure the converter to produce it. Likewise, the event key and event payload are in a change event only if you configure a converter to produce it. If you use the JSON converter and you configure it to produce all four basic change event parts, change events have this structure:
{ "schema": { 1 ... }, "payload": { 2 ... }, "schema": { 3 ... }, "payload": { 4 ... }, }
Item | Field name | Description |
---|---|---|
1 |
|
The first |
2 |
|
The first |
3 |
|
The second |
4 |
|
The second |
By default, the connector streams change event records to topics with names that are the same as the event’s originating table. For more information, see topic names.
The SQL Server connector ensures that all Kafka Connect schema names adhere to the Avro schema name format. This means that the logical server name must start with a Latin letter or an underscore, that is, a-z, A-Z, or _. Each remaining character in the logical server name and each character in the database and table names must be a Latin letter, a digit, or an underscore, that is, a-z, A-Z, 0-9, or _. If there is an invalid character it is replaced with an underscore character.
This can lead to unexpected conflicts if the logical server name, a database name, or a table name contains invalid characters, and the only characters that distinguish names from one another are invalid and thus replaced with underscores.
For details about change events, see the following topics:
2.7.2.11.1. About keys in Debezium SQL Server change events
A change event’s key contains the schema for the changed table’s key and the changed row’s actual key. Both the schema and its corresponding payload contain a field for each column in the changed table’s primary key (or unique key constraint) at the time the connector created the event.
Consider the following customers
table, which is followed by an example of a change event key for this table.
Example table
CREATE TABLE customers (
  id INTEGER IDENTITY(1001,1) NOT NULL PRIMARY KEY,
  first_name VARCHAR(255) NOT NULL,
  last_name VARCHAR(255) NOT NULL,
  email VARCHAR(255) NOT NULL UNIQUE
);
Example change event key
Every change event that captures a change to the customers
table has the same event key schema. For as long as the customers
table has the previous definition, every change event that captures a change to the customers
table has the following key structure, which in JSON, looks like this:
{ "schema": { 1 "type": "struct", "fields": [ 2 { "type": "int32", "optional": false, "field": "id" } ], "optional": false, 3 "name": "server1.testDB.dbo.customers.Key" 4 }, "payload": { 5 "id": 1004 } }
Item | Field name | Description |
---|---|---|
1 |
|
The schema portion of the key specifies a Kafka Connect schema that describes what is in the key’s |
2 |
|
Specifies each field that is expected in the |
3 |
|
Indicates whether the event key must contain a value in its |
4 |
|
Name of the schema that defines the structure of the key’s payload. This schema describes the structure of the primary key for the table that was changed. Key schema names have the format connector-name.database-schema-name.table-name.
|
5 |
|
Contains the key for the row for which this change event was generated. In this example, the key, contains a single |
2.7.2.11.2. About values in Debezium SQL Server change events
The value in a change event is a bit more complicated than the key. Like the key, the value has a schema
section and a payload
section. The schema
section contains the schema that describes the Envelope
structure of the payload
section, including its nested fields. Change events for operations that create, update or delete data all have a value payload with an envelope structure.
Consider the same sample table that was used to show an example of a change event key:
CREATE TABLE customers (
  id INTEGER IDENTITY(1001,1) NOT NULL PRIMARY KEY,
  first_name VARCHAR(255) NOT NULL,
  last_name VARCHAR(255) NOT NULL,
  email VARCHAR(255) NOT NULL UNIQUE
);
The value portion of a change event for a change to this table is described for each event type.
create events
The following example shows the value portion of a change event that the connector generates for an operation that creates data in the customers
table:
{ "schema": { 1 "type": "struct", "fields": [ { "type": "struct", "fields": [ { "type": "int32", "optional": false, "field": "id" }, { "type": "string", "optional": false, "field": "first_name" }, { "type": "string", "optional": false, "field": "last_name" }, { "type": "string", "optional": false, "field": "email" } ], "optional": true, "name": "server1.dbo.testDB.customers.Value", 2 "field": "before" }, { "type": "struct", "fields": [ { "type": "int32", "optional": false, "field": "id" }, { "type": "string", "optional": false, "field": "first_name" }, { "type": "string", "optional": false, "field": "last_name" }, { "type": "string", "optional": false, "field": "email" } ], "optional": true, "name": "server1.dbo.testDB.customers.Value", "field": "after" }, { "type": "struct", "fields": [ { "type": "string", "optional": false, "field": "version" }, { "type": "string", "optional": false, "field": "connector" }, { "type": "string", "optional": false, "field": "name" }, { "type": "int64", "optional": false, "field": "ts_ms" }, { "type": "int64", "optional": false, "field": "ts_us" }, { "type": "int64", "optional": false, "field": "ts_ns" }, { "type": "boolean", "optional": true, "default": false, "field": "snapshot" }, { "type": "string", "optional": false, "field": "db" }, { "type": "string", "optional": false, "field": "schema" }, { "type": "string", "optional": false, "field": "table" }, { "type": "string", "optional": true, "field": "change_lsn" }, { "type": "string", "optional": true, "field": "commit_lsn" }, { "type": "int64", "optional": true, "field": "event_serial_no" } ], "optional": false, "name": "io.debezium.connector.sqlserver.Source", 3 "field": "source" }, { "type": "string", "optional": false, "field": "op" }, { "type": "int64", "optional": true, "field": "ts_ms" }, { "type": "int64", "optional": true, "field": "ts_us" }, { "type": "int64", "optional": true, "field": "ts_ns" } ], "optional": false, "name": "server1.dbo.testDB.customers.Envelope" 4 }, "payload": { 5 "before": null, 6 "after": { 7 "id": 1005, "first_name": "john", "last_name": "doe", "email": "john.doe@example.org" }, "source": { 8 "version": "2.7.3.Final", "connector": "sqlserver", "name": "server1", "ts_ms": 1559729468470, "ts_us": 1559729468470000, "ts_ns": 1559729468470000000, "snapshot": false, "db": "testDB", "schema": "dbo", "table": "customers", "change_lsn": "00000027:00000758:0003", "commit_lsn": "00000027:00000758:0005", "event_serial_no": "1" }, "op": "c", 9 "ts_ms": 1559729471739, 10 "ts_ms": 1559729471739876, 11 "ts_ms": 1559729471739876149 12 } }
Item | Field name | Description |
---|---|---|
1 |
| The value’s schema, which describes the structure of the value’s payload. A change event’s value schema is the same in every change event that the connector generates for a particular table. |
2 |
|
In the |
3 |
|
|
4 |
|
|
5 |
|
The value’s actual data. This is the information that the change event is providing. |
6 |
|
An optional field that specifies the state of the row before the event occurred. When the |
7 |
|
An optional field that specifies the state of the row after the event occurred. In this example, the |
8 |
| Mandatory field that describes the source metadata for the event. This field contains information that you can use to compare this event with other events, with regard to the origin of the events, the order in which the events occurred, and whether events were part of the same transaction. The source metadata includes:
|
9 |
|
Mandatory string that describes the type of operation that caused the connector to generate the event. In this example,
|
10 |
|
Optional field that displays the time at which the connector processed the event. In the event message envelope, the time is based on the system clock in the JVM running the Kafka Connect task. |
update events
The value of a change event for an update in the sample customers
table has the same schema as a create event for that table. Likewise, the event value’s payload has the same structure. However, the event value payload contains different values in an update event. Here is an example of a change event value in an event that the connector generates for an update in the customers
table:
{ "schema": { ... }, "payload": { "before": { 1 "id": 1005, "first_name": "john", "last_name": "doe", "email": "john.doe@example.org" }, "after": { 2 "id": 1005, "first_name": "john", "last_name": "doe", "email": "noreply@example.org" }, "source": { 3 "version": "2.7.3.Final", "connector": "sqlserver", "name": "server1", "ts_ms": 1559729995937, "ts_us": 1559729995937000, "ts_ns": 1559729995937000000, "snapshot": false, "db": "testDB", "schema": "dbo", "table": "customers", "change_lsn": "00000027:00000ac0:0002", "commit_lsn": "00000027:00000ac0:0007", "event_serial_no": "2" }, "op": "u", 4 "ts_ms": 1559729998706, 5 "ts_us": 1559729998706318, 6 "ts_ns": 1559729998706318547 7 } }
Item | Field name | Description |
---|---|---|
1 |
|
An optional field that specifies the state of the row before the event occurred. In an update event value, the |
2 |
|
An optional field that specifies the state of the row after the event occurred. You can compare the |
3 |
|
Mandatory field that describes the source metadata for the event. The
The
|
4 |
|
Mandatory string that describes the type of operation. In an update event value, the |
5 |
|
Optional field that displays the time at which the connector processed the event. In the event message envelope, the time is based on the system clock in the JVM running the Kafka Connect task. |
Updating the columns for a row’s primary/unique key changes the value of the row’s key. When a key changes, Debezium outputs three events: a delete event and a tombstone event with the old key for the row, followed by a create event with the new key for the row.
delete events
The value in a delete change event has the same schema
portion as create and update events for the same table. The payload
portion in a delete event for the sample customers
table looks like this:
{ "schema": { ... }, }, "payload": { "before": { <> "id": 1005, "first_name": "john", "last_name": "doe", "email": "noreply@example.org" }, "after": null, 1 "source": { 2 "version": "2.7.3.Final", "connector": "sqlserver", "name": "server1", "ts_ms": 1559730445243, "ts_us": 1559730445243000, "ts_ns": 1559730445243000000, "snapshot": false, "db": "testDB", "schema": "dbo", "table": "customers", "change_lsn": "00000027:00000db0:0005", "commit_lsn": "00000027:00000db0:0007", "event_serial_no": "1" }, "op": "d", 3 "ts_ms": 1559730450205, 4 "ts_us": 1559730450205387, 5 "ts_ns": 1559730450205387492 6 } }
Item | Field name | Description |
---|---|---|
1 |
|
Optional field that specifies the state of the row before the event occurred. In a delete event value, the |
2 |
|
Optional field that specifies the state of the row after the event occurred. In a delete event value, the |
3 |
|
Mandatory field that describes the source metadata for the event. In a delete event value, the
|
4 |
|
Mandatory string that describes the type of operation. The |
5 |
|
Optional field that displays the time at which the connector processed the event. In the event message envelope, the time is based on the system clock in the JVM running the Kafka Connect task. |
SQL Server connector events are designed to work with Kafka log compaction. Log compaction enables removal of some older messages as long as at least the most recent message for every key is kept. This lets Kafka reclaim storage space while ensuring that the topic contains a complete data set and can be used for reloading key-based state.
Tombstone events
When a row is deleted, the delete event value still works with log compaction, because Kafka can remove all earlier messages that have that same key. However, for Kafka to remove all messages that have that same key, the message value must be null
. To make this possible, after Debezium’s SQL Server connector emits a delete event, the connector emits a special tombstone event that has the same key but a null
value.
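Tombstone emission is controlled by the tombstones.on.delete connector configuration property. As a hedged example, if your topics are not compacted and a downstream consumer cannot handle null-valued messages, you might disable tombstones with a configuration fragment such as the following:
{
  "tombstones.on.delete": "false"
}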
2.7.2.12. Debezium SQL Server connector-generated events that represent transaction boundaries
Debezium can generate events that represent transaction boundaries and that enrich data change event messages.
Debezium registers and receives metadata only for transactions that occur after you deploy the connector. Metadata for transactions that occur before you deploy the connector is not available.
Database transactions are represented by a statement block that is enclosed between the BEGIN
and END
keywords. Debezium generates transaction boundary events for the BEGIN
and END
delimiters in every transaction. Transaction boundary events contain the following fields:
- status: BEGIN or END.
- id: String representation of the unique transaction identifier.
- ts_ms: The time of a transaction boundary event (BEGIN or END event) at the data source. If the data source does not provide Debezium with the event time, then the field instead represents the time at which Debezium processes the event.
- event_count (for END events): Total number of events emitted by the transaction.
- data_collections (for END events): An array of pairs of data_collection and event_count elements that indicates the number of events that the connector emits for changes that originate from a data collection.
There is no way for Debezium to reliably identify when a transaction has ended. The transaction END marker is thus emitted only after the first event of another transaction arrives. This can lead to delayed delivery of the END marker on low-traffic systems.
The following example shows a typical transaction boundary message:
Example: SQL Server connector transaction boundary event
{ "status": "BEGIN", "id": "00000025:00000d08:0025", "ts_ms": 1486500577125, "event_count": null, "data_collections": null } { "status": "END", "id": "00000025:00000d08:0025", "ts_ms": 1486500577691, "event_count": 2, "data_collections": [ { "data_collection": "testDB.dbo.testDB.tablea", "event_count": 1 }, { "data_collection": "testDB.dbo.testDB.tableb", "event_count": 1 } ] }
Unless overridden via the topic.transaction
option, transaction events are written to the topic named <topic.prefix>
.transaction
.
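Transaction metadata is not emitted by default. As a minimal sketch, enabling it typically involves setting the provide.transaction.metadata connector property; the topic.transaction value shown here is the assumed default suffix and is included only for illustration:
{
  "provide.transaction.metadata": "true",
  "topic.transaction": "transaction"
}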
2.7.2.12.1. Change data event enrichment
When transaction metadata is enabled, the data message Envelope
is enriched with a new transaction
field. This field provides information about every event in the form of a composite of fields:
- id: String representation of the unique transaction identifier.
- total_order: The absolute position of the event among all events generated by the transaction.
- data_collection_order: The per-data collection position of the event among all events that were emitted by the transaction.
The following example shows what a typical message looks like:
{ "before": null, "after": { "pk": "2", "aa": "1" }, "source": { ... }, "op": "c", "ts_ms": "1580390884335", "ts_us": "1580390884335172", "ts_ns": "1580390884335172574", "transaction": { "id": "00000025:00000d08:0025", "total_order": "1", "data_collection_order": "1" } }
2.7.2.13. How Debezium SQL Server connectors map data types
The Debezium SQL Server connector represents changes to table row data by producing events that are structured like the table in which the row exists. Each event contains fields to represent the column values for the row. The way in which an event represents the column values for an operation depends on the SQL data type of the column. In the event, the connector maps the fields for each SQL Server data type to both a literal type and a semantic type.
The connector can map SQL Server data types to both literal and semantic types.
- Literal type: Describes how the value is literally represented by using Kafka Connect schema types, namely INT8, INT16, INT32, INT64, FLOAT32, FLOAT64, BOOLEAN, STRING, BYTES, ARRAY, MAP, and STRUCT.
- Semantic type: Describes how the Kafka Connect schema captures the meaning of the field, using the name of the Kafka Connect schema for the field.
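For example, in a change event schema, a field definition similar to the following hypothetical fragment carries both pieces of information. The column name and the specific types shown here are illustrative assumptions, not values taken from this documentation:
{
  "type": "int64",
  "optional": false,
  "name": "io.debezium.time.Timestamp",
  "field": "order_date"
}
Here, int64 is the literal Kafka Connect schema type, and io.debezium.time.Timestamp is the semantic type that tells consumers how to interpret the numeric value.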
If the default data type conversions do not meet your needs, you can create a custom converter for the connector.
For more information about data type mappings, see the following sections:
Basic types
The following table shows how the connector maps basic SQL Server data types.
SQL Server data type | Literal type (schema type) | Semantic type (schema name) and Notes |
---|---|---|
|
| n/a |
|
| n/a |
|
| n/a |
|
| n/a |
|
| n/a |
|
| n/a |
|
| n/a |
|
| n/a |
|
| n/a |
|
| n/a |
|
| n/a |
|
| n/a |
|
| n/a |
|
|
|
|
|
|
Other data type mappings are described in the following sections.
If present, a column’s default value is propagated to the corresponding field’s Kafka Connect schema. Change messages will contain the field’s default value (unless an explicit column value had been given), so there should rarely be the need to obtain the default value from the schema.
Temporal values
Other than SQL Server's DATETIMEOFFSET data type, which contains time zone information, the temporal types depend on the value of the time.precision.mode configuration property. When the time.precision.mode configuration property is set to adaptive (the default), the connector determines the literal type and semantic type for the temporal types based on the column's data type definition, so that events exactly represent the values in the database:
SQL Server data type | Literal type (schema type) | Semantic type (schema name) and Notes |
---|---|---|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
When the time.precision.mode
configuration property is set to connect
, then the connector will use the predefined Kafka Connect logical types. This may be useful when consumers only know about the built-in Kafka Connect logical types and are unable to handle variable-precision time values. On the other hand, since SQL Server supports tenth of microsecond precision, the events generated by a connector with the connect
time precision mode will result in a loss of precision when the database column has a fractional second precision value greater than 3:
SQL Server data type | Literal type (schema type) | Semantic type (schema name) and Notes |
---|---|---|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Timestamp values
The DATETIME
, SMALLDATETIME
and DATETIME2
types represent a timestamp without time zone information. Such columns are converted into an equivalent Kafka Connect value based on UTC. So for instance the DATETIME2
value "2018-06-20 15:13:16.945104" is represented by a io.debezium.time.MicroTimestamp
with the value "1529507596945104".
Note that the timezone of the JVM running Kafka Connect and Debezium does not affect this conversion.
Decimal values
Debezium connectors handle decimals according to the setting of the decimal.handling.mode
connector configuration property.
- decimal.handling.mode=precise

  Table 2.174. Mappings when decimal.handling.mode=precise

  SQL Server type | Literal type (schema type) | Semantic type (schema name) |
  ---|---|---|
  NUMERIC[(P[,S])] | BYTES | org.apache.kafka.connect.data.Decimal The scale schema parameter contains an integer that represents how many digits the decimal point shifted. |
  DECIMAL[(P[,S])] | BYTES | org.apache.kafka.connect.data.Decimal The scale schema parameter contains an integer that represents how many digits the decimal point shifted. |
  SMALLMONEY | BYTES | org.apache.kafka.connect.data.Decimal The scale schema parameter contains an integer that represents how many digits the decimal point shifted. |
  MONEY | BYTES | org.apache.kafka.connect.data.Decimal The scale schema parameter contains an integer that represents how many digits the decimal point shifted. |

- decimal.handling.mode=double

  Table 2.175. Mappings when decimal.handling.mode=double

  SQL Server type | Literal type | Semantic type |
  ---|---|---|
  NUMERIC[(M[,D])] | FLOAT64 | n/a |
  DECIMAL[(M[,D])] | FLOAT64 | n/a |
  SMALLMONEY[(M[,D])] | FLOAT64 | n/a |
  MONEY[(M[,D])] | FLOAT64 | n/a |

- decimal.handling.mode=string

  Table 2.176. Mappings when decimal.handling.mode=string

  SQL Server type | Literal type | Semantic type |
  ---|---|---|
  NUMERIC[(M[,D])] | STRING | n/a |
  DECIMAL[(M[,D])] | STRING | n/a |
  SMALLMONEY[(M[,D])] | STRING | n/a |
  MONEY[(M[,D])] | STRING | n/a |
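For reference, the handling mode is selected in the connector configuration. The following fragment is a minimal sketch that shows only the relevant property; string mode is chosen here purely as an example:
{
  "decimal.handling.mode": "string"
}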
2.7.3. Setting up SQL Server to run a Debezium connector
For Debezium to capture change events from SQL Server tables, a SQL Server administrator with the necessary privileges must first run a query to enable CDC on the database. The administrator must then enable CDC for each table that you want Debezium to capture.
By default, JDBC connections to Microsoft SQL Server are protected by SSL encryption. If SSL is not enabled for a SQL Server database, or if you want to connect to the database without using SSL, you can disable SSL by setting the value of the database.encrypt
property in connector configuration to false
.
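As a minimal sketch, disabling SSL in the connector configuration involves only that property:
{
  "database.encrypt": "false"
}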
For details about setting up SQL Server for use with the Debezium connector, see the following sections:
- Section 2.7.3.1, “Enabling CDC on the SQL Server database”
- Section 2.7.3.2, “Enabling CDC on a SQL Server table”
- Section 2.7.3.3, “Verifying that the user has access to the CDC table”
- Section 2.7.3.4, “SQL Server on Azure”
- Section 2.7.3.5, “Effect of SQL Server capture job agent configuration on server load and latency”
- Section 2.7.3.6, “SQL Server capture job agent configuration parameters”
After CDC is applied, it captures all of the INSERT
, UPDATE
, and DELETE
operations that are committed to the tables for which CDC is enabled. The Debezium connector can then capture these events and emit them to Kafka topics.
2.7.3.1. Enabling CDC on the SQL Server database
Before you can enable CDC for a table, you must enable it for the SQL Server database. A SQL Server administrator enables CDC by running a system stored procedure. System stored procedures can be run by using SQL Server Management Studio, or by using Transact-SQL.
Prerequisites
- You are a member of the sysadmin fixed server role for the SQL Server.
- You are a db_owner of the database.
- The SQL Server Agent is running.
The SQL Server CDC feature processes changes that occur in user-created tables only. You cannot enable CDC on the SQL Server master
database.
Procedure
- From the View menu in SQL Server Management Studio, click Template Explorer.
- In the Template Browser, expand SQL Server Templates.
- Expand Change Data Capture > Configuration and then click Enable Database for CDC.
- In the template, replace the database name in the USE statement with the name of the database that you want to enable for CDC.
- Run the stored procedure sys.sp_cdc_enable_db to enable the database for CDC.
After the database is enabled for CDC, a schema with the name cdc is created, along with a CDC user, metadata tables, and other system objects.
The following example shows how to enable CDC for the database MyDB:
Example: Enabling a SQL Server database for the CDC template
USE MyDB
GO
EXEC sys.sp_cdc_enable_db
GO
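Optionally, you can confirm that the database is now enabled for CDC by checking the is_cdc_enabled flag in sys.databases. This verification query is a supplemental suggestion, not part of the template:
SELECT name, is_cdc_enabled
FROM sys.databases
WHERE name = 'MyDB';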
2.7.3.2. Enabling CDC on a SQL Server table
A SQL Server administrator must enable change data capture on the source tables that you want Debezium to capture. The database must already be enabled for CDC. To enable CDC on a table, a SQL Server administrator runs the stored procedure sys.sp_cdc_enable_table
for the table. The stored procedures can be run by using SQL Server Management Studio, or by using Transact-SQL. SQL Server CDC must be enabled for every table that you want to capture.
Prerequisites
- CDC is enabled on the SQL Server database.
- The SQL Server Agent is running.
- You are a member of the db_owner fixed database role for the database.
Procedure
- From the View menu in SQL Server Management Studio, click Template Explorer.
- In the Template Browser, expand SQL Server Templates.
- Expand Change Data Capture > Configuration, and then click Enable Table Specifying Filegroup Option.
- In the template, replace the database name in the USE statement with the name of the database that contains the table that you want to capture.
- Run the stored procedure sys.sp_cdc_enable_table.
The following example shows how to enable CDC for the table MyTable:
Example: Enabling CDC for a SQL Server table
USE MyDB
GO
EXEC sys.sp_cdc_enable_table
@source_schema = N'dbo',
@source_name = N'MyTable', 1
@role_name = N'MyRole', 2
@filegroup_name = N'MyDB_CT', 3
@supports_net_changes = 0
GO
1 Specifies the name of the table that you want to capture.
2 Specifies a role MyRole to which you can add users to whom you want to grant SELECT permission on the captured columns of the source table. Users in the sysadmin or db_owner role also have access to the specified change tables. Set the value of @role_name to NULL to allow only members in the sysadmin or db_owner role to have full access to captured information.
3 Specifies the filegroup where SQL Server places the change table for the captured table. The named filegroup must already exist. It is best not to locate change tables in the same filegroup that you use for source tables.
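Optionally, you can confirm that the table is now tracked by CDC by checking the is_tracked_by_cdc flag in sys.tables. This check is a supplemental suggestion, not part of the template:
USE MyDB
GO
SELECT name, is_tracked_by_cdc
FROM sys.tables
WHERE name = 'MyTable';
GO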
2.7.3.3. Verifying that the user has access to the CDC table
A SQL Server administrator can run a system stored procedure to query a database or table to retrieve its CDC configuration information. The stored procedures can be run by using SQL Server Management Studio, or by using Transact-SQL.
Prerequisites
- You have SELECT permission on all of the captured columns of the capture instance. Members of the db_owner database role can view information for all of the defined capture instances.
- You have membership in any gating roles that are defined for the table information that the query includes.
Procedure
- From the View menu in SQL Server Management Studio, click Object Explorer.
- From the Object Explorer, expand Databases, and then expand your database object, for example, MyDB.
- Expand Programmability > Stored Procedures > System Stored Procedures.
- Run the sys.sp_cdc_help_change_data_capture stored procedure to query the table. Queries should not return empty results.
The following example runs the stored procedure sys.sp_cdc_help_change_data_capture on the database MyDB:
Example: Querying a table for CDC configuration information
USE MyDB;
GO
EXEC sys.sp_cdc_help_change_data_capture
GO
The query returns configuration information for each table in the database that is enabled for CDC and that contains change data that the caller is authorized to access. If the result is empty, verify that the user has privileges to access both the capture instance and the CDC tables.
2.7.3.4. SQL Server on Azure
The Debezium SQL Server connector can be used with SQL Server on Azure. Refer to this example for configuring CDC for SQL Server on Azure and using it with Debezium.
2.7.3.5. Effect of SQL Server capture job agent configuration on server load and latency
When a database administrator enables change data capture for a source table, the capture job agent begins to run. The agent reads new change event records from the transaction log and replicates the event records to a change data table. Between the time that a change is committed in the source table, and the time that the change appears in the corresponding change table, there is always a small latency interval. This latency interval represents a gap between when changes occur in the source table and when they become available for Debezium to stream to Apache Kafka.
Ideally, for applications that must respond quickly to changes in data, you want to maintain close synchronization between the source and change tables. You might imagine that running the capture agent to continuously process change events as rapidly as possible might result in increased throughput and reduced latency — populating change tables with new event records as soon as possible after the events occur, in near real time. However, this is not necessarily the case. There is a performance penalty to pay in the pursuit of more immediate synchronization. Each time that the capture job agent queries the database for new event records, it increases the CPU load on the database host. The additional load on the server can have a negative effect on overall database performance, and potentially reduce transaction efficiency, especially during times of peak database use.
It’s important to monitor database metrics so that you know if the database reaches the point where the server can no longer support the capture agent’s level of activity. If you notice performance problems, there are SQL Server capture agent settings that you can modify to help balance the overall CPU load on the database host with a tolerable degree of latency.
2.7.3.6. SQL Server capture job agent configuration parameters
On SQL Server, parameters that control the behavior of the capture job agent are defined in the SQL Server table msdb.dbo.cdc_jobs
. If you experience performance issues while running the capture job agent, adjust capture jobs settings to reduce CPU load by running the sys.sp_cdc_change_job
stored procedure and supplying new values.
Specific guidance about how to configure SQL Server capture job agent parameters is beyond the scope of this documentation.
The following parameters are the most significant for modifying capture agent behavior for use with the Debezium SQL Server connector:
pollinginterval
- Specifies the number of seconds that the capture agent waits between log scan cycles.
- A higher value reduces the load on the database host and increases latency.
- A value of 0 specifies no wait between scans.
- The default value is 5.
maxtrans
- Specifies the maximum number of transactions to process during each log scan cycle. After the capture job processes the specified number of transactions, it pauses for the length of time that the pollinginterval specifies before the next scan begins.
- A lower value reduces the load on the database host and increases latency.
- The default value is 500.
maxscans
- Specifies a limit on the number of scan cycles that the capture job can attempt in capturing the full contents of the database transaction log. If the continuous parameter is set to 1, the job pauses for the length of time that the pollinginterval specifies before it resumes scanning.
- A lower value reduces the load on the database host and increases latency.
- The default value is 10.
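As an illustration, the following Transact-SQL adjusts the capture job and then restarts it so that the new values take effect. The parameter values are examples only; choose values that balance latency against load in your environment.
USE MyDB
GO
EXEC sys.sp_cdc_change_job
    @job_type = N'capture',
    @pollinginterval = 10,  -- wait 10 seconds between log scan cycles
    @maxtrans = 250;        -- process at most 250 transactions per scan cycle
GO
EXEC sys.sp_cdc_stop_job @job_type = N'capture';
GO
EXEC sys.sp_cdc_start_job @job_type = N'capture';
GO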
Additional resources
- For more information about capture agent parameters, see the SQL Server documentation.
2.7.4. Deployment of Debezium SQL Server connectors
You can use either of the following methods to deploy a Debezium SQL Server connector:
Additional resources
2.7.4.1. SQL Server connector deployment using Streams for Apache Kafka
Beginning with Debezium 1.7, the preferred method for deploying a Debezium connector is to use Streams for Apache Kafka to build a Kafka Connect container image that includes the connector plug-in.
During the deployment process, you create and use the following custom resources (CRs):
- A KafkaConnect CR that defines your Kafka Connect instance and includes information about the connector artifacts to include in the image.
- A KafkaConnector CR that provides details that include information the connector uses to access the source database. After Streams for Apache Kafka starts the Kafka Connect pod, you start the connector by applying the KafkaConnector CR.
In the build specification for the Kafka Connect image, you can specify the connectors that are available to deploy. For each connector plug-in, you can also specify other components that you want to make available for deployment. For example, you can add Apicurio Registry artifacts, or the Debezium scripting component. When Streams for Apache Kafka builds the Kafka Connect image, it downloads the specified artifacts, and incorporates them into the image.
The spec.build.output
parameter in the KafkaConnect
CR specifies where to store the resulting Kafka Connect container image. Container images can be stored in a Docker registry, or in an OpenShift ImageStream. To store images in an ImageStream, you must create the ImageStream before you deploy Kafka Connect. ImageStreams are not created automatically.
If you use a KafkaConnect
resource to create a cluster, afterwards you cannot use the Kafka Connect REST API to create or update connectors. You can still use the REST API to retrieve information.
Additional resources
- Configuring Kafka Connect in Deploying and Managing Streams for Apache Kafka on OpenShift.
- Building a new container image automatically in Deploying and Managing Streams for Apache Kafka on OpenShift.
2.7.4.2. Using Streams for Apache Kafka to deploy a Debezium SQL Server connector
With earlier versions of Streams for Apache Kafka, to deploy Debezium connectors on OpenShift, you were required to first build a Kafka Connect image for the connector. The current preferred method for deploying connectors on OpenShift is to use a build configuration in Streams for Apache Kafka to automatically build a Kafka Connect container image that includes the Debezium connector plug-ins that you want to use.
During the build process, the Streams for Apache Kafka Operator transforms input parameters in a KafkaConnect
custom resource, including Debezium connector definitions, into a Kafka Connect container image. The build downloads the necessary artifacts from the Red Hat Maven repository or another configured HTTP server.
The newly created container is pushed to the container registry that is specified in .spec.build.output
, and is used to deploy a Kafka Connect cluster. After Streams for Apache Kafka builds the Kafka Connect image, you create KafkaConnector
custom resources to start the connectors that are included in the build.
Prerequisites
- You have access to an OpenShift cluster on which the cluster Operator is installed.
- The Streams for Apache Kafka Operator is running.
- An Apache Kafka cluster is deployed as documented in Deploying and Managing Streams for Apache Kafka on OpenShift.
- Kafka Connect is deployed on Streams for Apache Kafka
- You have a Red Hat build of Debezium license.
- The OpenShift oc CLI client is installed or you have access to the OpenShift Container Platform web console.
Depending on how you intend to store the Kafka Connect build image, you need registry permissions or you must create an ImageStream resource:
- To store the build image in an image registry, such as Red Hat Quay.io or Docker Hub
- An account and permissions to create and manage images in the registry.
- To store the build image as a native OpenShift ImageStream
- An ImageStream resource is deployed to the cluster for storing new container images. You must explicitly create an ImageStream for the cluster. ImageStreams are not available by default. For more information about ImageStreams, see Managing image streams in the OpenShift Container Platform documentation.
Procedure
- Log in to the OpenShift cluster.
Create a Debezium KafkaConnect custom resource (CR) for the connector, or modify an existing one. For example, create a KafkaConnect CR with the name dbz-connect.yaml that specifies the metadata.annotations and spec.build properties. The following example shows an excerpt from a dbz-connect.yaml file that describes a KafkaConnect custom resource.
Example 2.50. A dbz-connect.yaml file that defines a KafkaConnect custom resource that includes a Debezium connector
In the example that follows, the custom resource is configured to download the following artifacts:
- The Debezium SQL Server connector archive.
- The Red Hat build of Apicurio Registry archive. The Apicurio Registry is an optional component. Add the Apicurio Registry component only if you intend to use Avro serialization with the connector.
- The Debezium scripting SMT archive and the associated scripting engine that you want to use with the Debezium connector. The SMT archive and scripting language dependencies are optional components. Add these components only if you intend to use the Debezium content-based routing SMT or filter SMT.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: debezium-kafka-connect-cluster
  annotations:
    strimzi.io/use-connector-resources: "true" 1
spec:
  version: 3.6.0
  build: 2
    output: 3
      type: imagestream 4
      image: debezium-streams-connect:latest
    plugins: 5
      - name: debezium-connector-sqlserver
        artifacts:
          - type: zip 6
            url: https://maven.repository.redhat.com/ga/io/debezium/debezium-connector-sqlserver/2.7.3.Final-redhat-00001/debezium-connector-sqlserver-2.7.3.Final-redhat-00001-plugin.zip 7
          - type: zip
            url: https://maven.repository.redhat.com/ga/io/apicurio/apicurio-registry-distro-connect-converter/2.4.4.Final-redhat-<build-number>/apicurio-registry-distro-connect-converter-2.4.4.Final-redhat-<build-number>.zip 8
          - type: zip
            url: https://maven.repository.redhat.com/ga/io/debezium/debezium-scripting/2.7.3.Final-redhat-00001/debezium-scripting-2.7.3.Final-redhat-00001.zip 9
          - type: jar
            url: https://repo1.maven.org/maven2/org/apache/groovy/groovy/3.0.11/groovy-3.0.11.jar 10
          - type: jar
            url: https://repo1.maven.org/maven2/org/apache/groovy/groovy-jsr223/3.0.11/groovy-jsr223-3.0.11.jar
          - type: jar
            url: https://repo1.maven.org/maven2/org/apache/groovy/groovy-json/3.0.11/groovy-json-3.0.11.jar
  bootstrapServers: debezium-kafka-cluster-kafka-bootstrap:9093
  ...
Table 2.177. Descriptions of Kafka Connect configuration settings Item Description 1
Sets the strimzi.io/use-connector-resources annotation to "true" to enable the Cluster Operator to use KafkaConnector resources to configure connectors in this Kafka Connect cluster.2
The
spec.build
configuration specifies where to store the build image and lists the plug-ins to include in the image, along with the location of the plug-in artifacts.3
The
build.output
specifies the registry in which the newly built image is stored.4
Specifies the name and image name for the image output. Valid values for output.type are docker to push into a container registry such as Docker Hub or Quay, or imagestream to push the image to an internal OpenShift ImageStream. To use an ImageStream, an ImageStream resource must be deployed to the cluster. For more information about specifying the build.output in the KafkaConnect configuration, see the Streams for Apache Kafka Build schema reference in the Streams for Apache Kafka documentation.5
The plugins configuration lists all of the connectors that you want to include in the Kafka Connect image. For each entry in the list, specify a plug-in name, and information about the artifacts that are required to build the connector. Optionally, for each connector plug-in, you can include other components that you want to be available for use with the connector. For example, you can add Service Registry artifacts, or the Debezium scripting component.6
The value of artifacts.type specifies the file type of the artifact specified in the artifacts.url. Valid types are zip, tgz, or jar. Debezium connector archives are provided in .zip file format. The type value must match the type of the file that is referenced in the url field.7
The value of
artifacts.url
specifies the address of an HTTP server, such as a Maven repository, that stores the file for the connector artifact. Debezium connector artifacts are available in the Red Hat Maven repository. The OpenShift cluster must have access to the specified server.8
(Optional) Specifies the artifact type and url for downloading the Apicurio Registry component. Include the Apicurio Registry artifact, only if you want the connector to use Apache Avro to serialize event keys and values with the Red Hat build of Apicurio Registry, instead of using the default JSON converter.9
(Optional) Specifies the artifact type and url for the Debezium scripting SMT archive to use with the Debezium connector. Include the scripting SMT only if you intend to use the Debezium content-based routing SMT or filter SMT. To use the scripting SMT, you must also deploy a JSR 223-compliant scripting implementation, such as Groovy.10
(Optional) Specifies the artifact type and url for the JAR files of a JSR 223-compliant scripting implementation, which is required by the Debezium scripting SMT.
Important
If you use Streams for Apache Kafka to incorporate the connector plug-in into your Kafka Connect image, for each of the required scripting language components, artifacts.url must specify the location of a JAR file, and the value of artifacts.type must also be set to jar. Invalid values cause the connector to fail at runtime.
To enable use of the Apache Groovy language with the scripting SMT, the custom resource in the example retrieves JAR files for the following libraries:
- groovy
- groovy-jsr223 (scripting agent)
- groovy-json (module for parsing JSON strings)
As an alternative, the Debezium scripting SMT also supports the use of the JSR 223 implementation of GraalVM JavaScript.
Apply the KafkaConnect build specification to the OpenShift cluster by entering the following command:
oc create -f dbz-connect.yaml
Based on the configuration specified in the custom resource, the Streams for Apache Kafka Operator prepares a Kafka Connect image to deploy.
After the build completes, the Operator pushes the image to the specified registry or ImageStream, and starts the Kafka Connect cluster. The connector artifacts that you listed in the configuration are available in the cluster.
Create a KafkaConnector resource to define an instance of each connector that you want to deploy.
For example, create the following KafkaConnector CR, and save it as sqlserver-inventory-connector.yaml.
Example 2.51. sqlserver-inventory-connector.yaml file that defines the KafkaConnector custom resource for a Debezium connector
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  labels:
    strimzi.io/cluster: debezium-kafka-connect-cluster
  name: inventory-connector-sqlserver 1
spec:
  class: io.debezium.connector.sqlserver.SqlServerConnector 2
  tasksMax: 1 3
  config: 4
    schema.history.internal.kafka.bootstrap.servers: debezium-kafka-cluster-kafka-bootstrap.debezium.svc.cluster.local:9092
    schema.history.internal.kafka.topic: schema-changes.inventory
    database.hostname: sqlserver.debezium-sqlserver.svc.cluster.local 5
    database.port: 1433 6
    database.user: debezium 7
    database.password: dbz 8
    topic.prefix: inventory-connector-sqlserver 9
    table.include.list: dbo.customers 10
    ...
Table 2.178. Descriptions of connector configuration settings Item Description 1
The name of the connector to register with the Kafka Connect cluster.
2
The name of the connector class.
3
The number of tasks that can operate concurrently.
4
The connector’s configuration.
5
The address of the host database instance.
6
The port number of the database instance.
7
The name of the account that Debezium uses to connect to the database.
8
The password that Debezium uses to connect to the database user account.
9
The topic prefix for the database instance or cluster.
The specified name must be formed only from alphanumeric characters or underscores.
Because the topic prefix is used as the prefix for any Kafka topics that receive change events from this connector, the name must be unique among the connectors in the cluster.
This namespace is also used in the names of related Kafka Connect schemas, and the namespaces of a corresponding Avro schema if you integrate the connector with the Avro converter.10
The list of tables from which the connector captures change events.
Create the connector resource by running the following command:
oc create -n <namespace> -f <kafkaConnector>.yaml
For example,
oc create -n debezium -f sqlserver-inventory-connector.yaml
The connector is registered to the Kafka Connect cluster and starts to run against the database that is specified by spec.config.database.dbname in the KafkaConnector CR.
After the connector pod is ready, Debezium is running.
You are now ready to verify the Debezium SQL Server deployment.
2.7.4.3. Deploying a Debezium SQL Server connector by building a custom Kafka Connect container image from a Dockerfile
To deploy a Debezium SQL Server connector, you must build a custom Kafka Connect container image that contains the Debezium connector archive, and then push this container image to a container registry. You then need to create the following custom resources (CRs):
- A KafkaConnect CR that defines your Kafka Connect instance. The image property in the CR specifies the name of the container image that you create to run your Debezium connector. You apply this CR to the OpenShift instance where Red Hat Streams for Apache Kafka is deployed. Streams for Apache Kafka offers operators and images that bring Apache Kafka to OpenShift.
- A KafkaConnector CR that defines your Debezium SQL Server connector. Apply this CR to the same OpenShift instance where you apply the KafkaConnect CR.
Prerequisites
- SQL Server is running and you completed the steps to set up SQL Server to work with a Debezium connector.
- Streams for Apache Kafka is deployed on OpenShift and is running Apache Kafka and Kafka Connect. For more information, see Deploying and Managing Streams for Apache Kafka on OpenShift
- Podman or Docker is installed.
- You have an account and permissions to create and manage containers in the container registry (such as quay.io or docker.io) to which you plan to add the container that will run your Debezium connector.
Procedure
Create the Debezium SQL Server container for Kafka Connect:
Create a Dockerfile that uses registry.redhat.io/amq-streams-kafka-35-rhel8:2.5.0 as the base image. For example, from a terminal window, enter the following command:
cat <<EOF >debezium-container-for-sqlserver.yaml 1
FROM registry.redhat.io/amq-streams-kafka-35-rhel8:2.5.0
USER root:root
RUN mkdir -p /opt/kafka/plugins/debezium 2
RUN cd /opt/kafka/plugins/debezium/ \
  && curl -O https://maven.repository.redhat.com/ga/io/debezium/debezium-connector-sqlserver/2.7.3.Final-redhat-00001/debezium-connector-sqlserver-2.7.3.Final-redhat-00001-plugin.zip \
  && unzip debezium-connector-sqlserver-2.7.3.Final-redhat-00001-plugin.zip \
  && rm debezium-connector-sqlserver-2.7.3.Final-redhat-00001-plugin.zip
RUN cd /opt/kafka/plugins/debezium/
USER 1001
EOF
Item Description 1
You can specify any file name that you want.
2
Specifies the path to your Kafka Connect plug-ins directory. If your Kafka Connect plug-ins directory is in a different location, replace this path with the actual path of your directory.
The command creates a Dockerfile with the name debezium-container-for-sqlserver.yaml in the current directory.
Build the container image from the debezium-container-for-sqlserver.yaml Dockerfile that you created in the previous step. From the directory that contains the file, open a terminal window and enter one of the following commands:
podman build -t debezium-container-for-sqlserver:latest .
docker build -t debezium-container-for-sqlserver:latest .
The preceding commands build a container image with the name debezium-container-for-sqlserver.
Push your custom image to a container registry, such as quay.io or an internal container registry. The container registry must be available to the OpenShift instance where you want to deploy the image. Enter one of the following commands:
podman push <myregistry.io>/debezium-container-for-sqlserver:latest
docker push <myregistry.io>/debezium-container-for-sqlserver:latest
Create a new Debezium SQL Server KafkaConnect custom resource (CR). For example, create a KafkaConnect CR with the name dbz-connect.yaml that specifies annotations and image properties. The following example shows an excerpt from a dbz-connect.yaml file that describes a KafkaConnect custom resource.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect-cluster
  annotations:
    strimzi.io/use-connector-resources: "true" 1
spec:
  #...
  image: debezium-container-for-sqlserver 2
  ...
Item Description 1
metadata.annotations indicates to the Cluster Operator that KafkaConnector resources are used to configure connectors in this Kafka Connect cluster.
2
spec.image specifies the name of the image that you created to run your Debezium connector. This property overrides the STRIMZI_DEFAULT_KAFKA_CONNECT_IMAGE variable in the Cluster Operator.
Apply the KafkaConnect CR to the OpenShift Kafka Connect environment by entering the following command:
oc create -f dbz-connect.yaml
The command adds a Kafka Connect instance that specifies the name of the image that you created to run your Debezium connector.
Create a KafkaConnector custom resource that configures your Debezium SQL Server connector instance.
You configure a Debezium SQL Server connector in a .yaml file that specifies the configuration properties for the connector. The connector configuration might instruct Debezium to produce events for a subset of the schemas and tables, or it might set properties so that Debezium ignores, masks, or truncates values in specified columns that are sensitive, too large, or not needed.
The following example configures a Debezium connector that connects to a SQL Server host, 192.168.99.100, on port 1433. This host has a database named testDB, a table with the name customers, and inventory-connector-sqlserver is the server’s logical name.
SQL Server inventory-connector.yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: inventory-connector-sqlserver 1
  labels:
    strimzi.io/cluster: my-connect-cluster
  annotations:
    strimzi.io/use-connector-resources: 'true'
spec:
  class: io.debezium.connector.sqlserver.SqlServerConnector 2
  config:
    database.hostname: 192.168.99.100 3
    database.port: 1433 4
    database.user: debezium 5
    database.password: dbz 6
    topic.prefix: inventory-connector-sqlserver 7
    table.include.list: dbo.customers 8
    schema.history.internal.kafka.bootstrap.servers: my-cluster-kafka-bootstrap:9092 9
    schema.history.internal.kafka.topic: schemahistory.fullfillment 10
    database.ssl.truststore: path/to/trust-store 11
    database.ssl.truststore.password: password-for-trust-store 12
Table 2.179. Descriptions of connector configuration settings Item Description 1
The name of our connector when we register it with a Kafka Connect service.
2
The name of this SQL Server connector class.
3
The address of the SQL Server instance.
4
The port number of the SQL Server instance.
5
The name of the SQL Server user.
6
The password for the SQL Server user.
7
The topic prefix for the SQL Server instance/cluster, which forms a namespace and is used in all the names of the Kafka topics to which the connector writes, the Kafka Connect schema names, and the namespaces of the corresponding Avro schema when the Avro converter is used.
8
The connector captures changes from the
dbo.customers
table only.9
The list of Kafka brokers that this connector will use to write and recover DDL statements to the database schema history topic.
10
The name of the database schema history topic where the connector will write and recover DDL statements. This topic is for internal use only and should not be used by consumers.
11
The path to the SSL truststore that stores the server’s signer certificates. This property is required unless database encryption is disabled (
database.encrypt=false
).12
The SSL truststore password. This property is required unless database encryption is disabled (
database.encrypt=false
).
Create your connector instance with Kafka Connect. For example, if you saved your KafkaConnector resource in the inventory-connector.yaml file, you would run the following command:
oc apply -f inventory-connector.yaml
The preceding command registers inventory-connector and the connector starts to run against the testDB database as defined in the KafkaConnector CR.
Verifying that the Debezium SQL Server connector is running
If the connector starts correctly without errors, it creates a topic for each table that the connector is configured to capture. Downstream applications can subscribe to these topics to retrieve information about events that occur in the source database.
To verify that the connector is running, you perform the following operations from the OpenShift Container Platform web console, or through the OpenShift CLI tool (oc):
- Verify the connector status.
- Verify that the connector generates topics.
- Verify that topics are populated with events for read operations ("op":"r") that the connector generates during the initial snapshot of each table.
Prerequisites
- A Debezium connector is deployed to Streams for Apache Kafka on OpenShift.
- The OpenShift oc CLI client is installed.
- You have access to the OpenShift Container Platform web console.
Procedure
Check the status of the KafkaConnector resource by using one of the following methods:
From the OpenShift Container Platform web console:
- Navigate to Home → Search.
- On the Search page, click Resources to open the Select Resource box, and then type KafkaConnector.
- From the KafkaConnectors list, click the name of the connector that you want to check, for example inventory-connector-sqlserver.
- In the Conditions section, verify that the values in the Type and Status columns are set to Ready and True.
From a terminal window:
Enter the following command:
oc describe KafkaConnector <connector-name> -n <project>
For example,
oc describe KafkaConnector inventory-connector-sqlserver -n debezium
The command returns status information that is similar to the following output:
Example 2.52. KafkaConnector resource status
Name:         inventory-connector-sqlserver
Namespace:    debezium
Labels:       strimzi.io/cluster=debezium-kafka-connect-cluster
Annotations:  <none>
API Version:  kafka.strimzi.io/v1beta2
Kind:         KafkaConnector
...
Status:
  Conditions:
    Last Transition Time:  2021-12-08T17:41:34.897153Z
    Status:                True
    Type:                  Ready
  Connector Status:
    Connector:
      State:      RUNNING
      worker_id:  10.131.1.124:8083
    Name:         inventory-connector-sqlserver
    Tasks:
      Id:         0
      State:      RUNNING
      worker_id:  10.131.1.124:8083
    Type:         source
  Observed Generation:  1
  Tasks Max:            1
  Topics:
    inventory-connector-sqlserver.inventory
    inventory-connector-sqlserver.inventory.addresses
    inventory-connector-sqlserver.inventory.customers
    inventory-connector-sqlserver.inventory.geom
    inventory-connector-sqlserver.inventory.orders
    inventory-connector-sqlserver.inventory.products
    inventory-connector-sqlserver.inventory.products_on_hand
Events:  <none>
Verify that the connector created Kafka topics:
From the OpenShift Container Platform web console.
- Navigate to Home → Search.
- On the Search page, click Resources to open the Select Resource box, and then type KafkaTopic.
- From the KafkaTopics list, click the name of the topic that you want to check, for example, inventory-connector-sqlserver.inventory.orders---ac5e98ac6a5d91e04d8ec0dc9078a1ece439081d.
- In the Conditions section, verify that the values in the Type and Status columns are set to Ready and True.
From a terminal window:
Enter the following command:
oc get kafkatopics
The command returns status information that is similar to the following output:
Example 2.53. KafkaTopic resource status
NAME                                                                                                  CLUSTER                  PARTITIONS   REPLICATION FACTOR   READY
connect-cluster-configs                                                                               debezium-kafka-cluster   1            1                    True
connect-cluster-offsets                                                                               debezium-kafka-cluster   25           1                    True
connect-cluster-status                                                                                debezium-kafka-cluster   5            1                    True
consumer-offsets---84e7a678d08f4bd226872e5cdd4eb527fadc1c6a                                           debezium-kafka-cluster   50           1                    True
inventory-connector-sqlserver--a96f69b23d6118ff415f772679da623fbbb99421                               debezium-kafka-cluster   1            1                    True
inventory-connector-sqlserver.inventory.addresses---1b6beaf7b2eb57d177d92be90ca2b210c9a56480          debezium-kafka-cluster   1            1                    True
inventory-connector-sqlserver.inventory.customers---9931e04ec92ecc0924f4406af3fdace7545c483b          debezium-kafka-cluster   1            1                    True
inventory-connector-sqlserver.inventory.geom---9f7e136091f071bf49ca59bf99e86c713ee58dd5               debezium-kafka-cluster   1            1                    True
inventory-connector-sqlserver.inventory.orders---ac5e98ac6a5d91e04d8ec0dc9078a1ece439081d             debezium-kafka-cluster   1            1                    True
inventory-connector-sqlserver.inventory.products---df0746db116844cee2297fab611c21b56f82dcef           debezium-kafka-cluster   1            1                    True
inventory-connector-sqlserver.inventory.products_on_hand---8649e0f17ffcc9212e266e31a7aeea4585e5c6b5   debezium-kafka-cluster   1            1                    True
schema-changes.inventory                                                                              debezium-kafka-cluster   1            1                    True
strimzi-store-topic---effb8e3e057afce1ecf67c3f5d8e4e3ff177fc55                                        debezium-kafka-cluster   1            1                    True
strimzi-topic-operator-kstreams-topic-store-changelog---b75e702040b99be8a9263134de3507fc0cc4017b      debezium-kafka-cluster   1            1                    True
Check topic content.
- From a terminal window, enter the following command:
oc exec -n <project> -it <kafka-cluster> -- /opt/kafka/bin/kafka-console-consumer.sh \
>     --bootstrap-server localhost:9092 \
>     --from-beginning \
>     --property print.key=true \
>     --topic=<topic-name>
For example,
oc exec -n debezium -it debezium-kafka-cluster-kafka-0 -- /opt/kafka/bin/kafka-console-consumer.sh \
>     --bootstrap-server localhost:9092 \
>     --from-beginning \
>     --property print.key=true \
>     --topic=inventory-connector-sqlserver.inventory.products_on_hand
The format for specifying the topic name is the same as the oc describe command returns in Step 1, for example, inventory-connector-sqlserver.inventory.addresses.
For each event in the topic, the command returns information that is similar to the following output:
Example 2.54. Content of a Debezium change event
{"schema":{"type":"struct","fields":[{"type":"int32","optional":false,"field":"product_id"}],"optional":false,"name":"inventory-connector-sqlserver.inventory.products_on_hand.Key"},"payload":{"product_id":101}} {"schema":{"type":"struct","fields":[{"type":"struct","fields":[{"type":"int32","optional":false,"field":"product_id"},{"type":"int32","optional":false,"field":"quantity"}],"optional":true,"name":"inventory-connector-sqlserver.inventory.products_on_hand.Value","field":"before"},{"type":"struct","fields":[{"type":"int32","optional":false,"field":"product_id"},{"type":"int32","optional":false,"field":"quantity"}],"optional":true,"name":"inventory-connector-sqlserver.inventory.products_on_hand.Value","field":"after"},{"type":"struct","fields":[{"type":"string","optional":false,"field":"version"},{"type":"string","optional":false,"field":"connector"},{"type":"string","optional":false,"field":"name"},{"type":"int64","optional":false,"field":"ts_ms"},{"type":"int64","optional":false,"field":"ts_us"},{"type":"int64","optional":false,"field":"ts_ns"},{"type":"string","optional":true,"name":"io.debezium.data.Enum","version":1,"parameters":{"allowed":"true,last,false"},"default":"false","field":"snapshot"},{"type":"string","optional":false,"field":"db"},{"type":"string","optional":true,"field":"sequence"},{"type":"string","optional":true,"field":"table"},{"type":"int64","optional":false,"field":"server_id"},{"type":"string","optional":true,"field":"gtid"},{"type":"string","optional":false,"field":"file"},{"type":"int64","optional":false,"field":"pos"},{"type":"int32","optional":false,"field":"row"},{"type":"int64","optional":true,"field":"thread"},{"type":"string","optional":true,"field":"query"}],"optional":false,"name":"io.debezium.connector.sqlserver.Source","field":"source"},{"type":"string","optional":false,"field":"op"},{"type":"int64","optional":true,"field":"ts_ms"},{"type":"int64","optional":true,"field":"ts_us"},{"type":"int64","optional":true,"field":"ts_ns"},{"type":"struct","fields":[{"type":"string","optional":false,"field":"id"},{"type":"int64","optional":false,"field":"total_order"},{"type":"int64","optional":false,"field":"data_collection_order"}],"optional":true,"field":"transaction"}],"optional":false,"name":"inventory-connector-sqlserver.inventory.products_on_hand.Envelope"},"payload":{"before":null,"after":{"product_id":101,"quantity":3},"source":{"version":"2.7.3.Final-redhat-00001","connector":"sqlserver","name":"inventory-connector-sqlserver","ts_ms":1638985247805,"ts_us":1638985247805000000,"ts_ns":1638985247805000000,"snapshot":"true","db":"inventory","sequence":null,"table":"products_on_hand","server_id":0,"gtid":null,"file":"sqlserver-bin.000003","pos":156,"row":0,"thread":null,"query":null},"op":"r","ts_ms":1638985247805,"ts_us":1638985247805102,"ts_ns":1638985247805102588,"transaction":null}}
In the preceding example, the payload value shows that the connector snapshot generated a read ("op" = "r") event from the table inventory.products_on_hand. The "before" state of the product_id record is null, indicating that no previous value exists for the record. The "after" state shows a quantity of 3 for the item with product_id 101.
For the complete list of the configuration properties that you can set for the Debezium SQL Server connector, see SQL Server connector properties.
Results
When the connector starts, it performs a consistent snapshot of the SQL Server databases that the connector is configured for. The connector then starts generating data change events for row-level operations and streaming the change event records to Kafka topics.
2.7.4.4. Descriptions of Debezium SQL Server connector configuration properties
The Debezium SQL Server connector has numerous configuration properties that you can use to achieve the right connector behavior for your application. Many properties have default values.
Information about the properties is organized as follows:
- Required connector configuration properties
- Advanced connector configuration properties
- Database schema history connector configuration properties that control how Debezium processes events that it reads from the database schema history topic.
- Pass-through SQL Server connector configuration properties
- Pass-through database schema history properties for configuring producer and consumer clients
- Pass-through Kafka signals configuration properties
- Pass-through Kafka signals consumer client configuration properties
- Pass-through sink notification configuration properties
- Pass-through database driver configuration properties
Required Debezium SQL Server connector configuration properties
The following configuration properties are required unless a default value is available.
Property | Default | Description |
---|---|---|
No default | Unique name for the connector. Attempting to register again with the same name will fail. (This property is required by all Kafka Connect connectors.) | |
No default |
The name of the Java class for the connector. Always use a value of | |
| Specifies the maximum number of tasks that the connector can use to capture data from the database instance. | |
No default | IP address or hostname of the SQL Server database server. | |
|
Integer port number of the SQL Server database server. If both | |
No default | Username to use when connecting to the SQL Server database server. Can be omitted when using Kerberos authentication, which can be configured using pass-through properties. | |
No default | Password to use when connecting to the SQL Server database server. | |
No default |
Specifies the instance name of the SQL Server named instance. If both | |
No default |
Topic prefix that provides a namespace for the SQL Server database server that you want Debezium to capture. The prefix should be unique across all other connectors, since it is used as the prefix for all Kafka topic names that receive records from this connector. Only alphanumeric characters, hyphens, dots and underscores must be used in the database server logical name. Warning Do not change the value of this property. If you change the name value, after a restart, instead of continuing to emit events to the original topics, the connector emits subsequent events to topics whose names are based on the new value. The connector is also unable to recover its database schema history topic. | |
No default |
An optional, comma-separated list of regular expressions that match names of schemas for which you want to capture changes. Any schema name not included in
To match the name of a schema, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the schema; it does not match substrings that might be present in a schema name. | |
No default |
An optional, comma-separated list of regular expressions that match names of schemas for which you do not want to capture changes. Any schema whose name is not included in
To match the name of a schema, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the schema; it does not match substrings that might be present in a schema name. | |
No default |
An optional comma-separated list of regular expressions that match fully-qualified table identifiers for tables that you want Debezium to capture. By default, the connector captures all non-system tables for the designated schemas. When this property is set, the connector captures changes only from the specified tables. Each identifier is of the form schemaName.tableName.
To match the name of a table, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the table; it does not match substrings that might be present in a table name. | |
No default |
An optional comma-separated list of regular expressions that match fully-qualified table identifiers for the tables that you want to exclude from being captured. Debezium captures all tables that are not included in
To match the name of a table, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the table; it does not match substrings that might be present in a table name. | |
empty string |
An optional comma-separated list of regular expressions that match the fully-qualified names of columns that should be included in the change event message values. Fully-qualified names for columns are of the form schemaName.tableName.columnName. Note Each change event record that Debezium emits for a table includes an event key that contains fields for each column in the table’s primary key or unique key. To ensure that event keys are generated correctly, if you set this property, be sure to explicitly list the primary key columns of any captured tables.
To match the name of a column, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the column; it does not match substrings that might be present in a column name. | |
empty string |
An optional comma-separated list of regular expressions that match the fully-qualified names of columns that should be excluded from change event message values. Fully-qualified names for columns are of the form schemaName.tableName.columnName. Note that primary key columns are always included in the event’s key, also if excluded from the value.
To match the name of a column, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the column; it does not match substrings that might be present in a column name. | |
|
Specifies whether to skip publishing messages when there is no change in included columns. This would essentially filter messages if there is no change in columns included as per | |
| n/a |
An optional, comma-separated list of regular expressions that match the fully-qualified names of character-based columns. Fully-qualified names for columns are of the form `<schemaName>.<tableName>.<columnName>`.
A pseudonym consists of the hashed value that results from applying the specified hashAlgorithm and salt. Based on the hash function that is used, referential integrity is maintained, while column values are replaced with pseudonyms. Supported hash functions are described in the MessageDigest section of the Java Cryptography Architecture Standard Algorithm Name Documentation. column.mask.hash.SHA-256.with.salt.CzQMA0cB5K = inventory.orders.customerName, inventory.shipment.customerName
If necessary, the pseudonym is automatically shortened to the length of the column. The connector configuration can include multiple properties that specify different hash algorithms and salts. |
|
Time, date, and timestamps can be represented with different kinds of precision, including: | |
|
Specifies how the connector should handle values for | |
|
Boolean value that specifies whether the connector should publish changes in the database schema to a Kafka topic with the same name as the database server ID. Each schema change is recorded with a key that contains the database name and a value that is a JSON structure that describes the schema update. This is independent of how the connector internally records database schema history. The default is | |
|
Controls whether a delete event is followed by a tombstone event. | |
n/a |
An optional, comma-separated list of regular expressions that match the fully-qualified names of character-based columns. Set this property if you want to truncate the data in a set of columns when it exceeds the number of characters specified by the length in the property name. Set
The fully-qualified name of a column observes the following format: You can specify multiple properties with different lengths in a single configuration. | |
n/a Fully-qualified names for columns are of the form schemaName.tableName.columnName. |
An optional, comma-separated list of regular expressions that match the fully-qualified names of character-based columns. Set this property if you want the connector to mask the values for a set of columns, for example, if they contain sensitive data. Set The fully-qualified name of a column observes the following format: schemaName.tableName.columnName. To match the name of a column, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the column; the expression does not match substrings that might be present in a column name. You can specify multiple properties with different lengths in a single configuration. | |
n/a | An optional, comma-separated list of regular expressions that match the fully-qualified names of columns for which you want the connector to emit extra parameters that represent column metadata. When this property is set, the connector adds the following fields to the schema of event records:
These parameters propagate a column’s original type name and length (for variable-width types), respectively.
The fully-qualified name of a column observes the following format: schemaName.tableName.columnName. | |
n/a | An optional, comma-separated list of regular expressions that specify the fully-qualified names of data types that are defined for columns in a database. When this property is set, for columns with matching data types, the connector emits event records that include the following extra fields in their schema:
These parameters propagate a column’s original type name and length (for variable-width types), respectively.
The fully-qualified name of a column observes the following format: schemaName.tableName.typeName. For the list of SQL Server-specific data type names, see the SQL Server data type mappings. | |
n/a | A list of expressions that specify the columns that the connector uses to form custom message keys for change event records that it publishes to the Kafka topics for specified tables.
By default, Debezium uses the primary key column of a table as the message key for records that it emits. In place of the default, or to specify a key for tables that lack a primary key, you can configure custom message keys based on one or more columns.
Each fully-qualified table name is a regular expression in the following format: There is no limit to the number of columns that you use to create custom message keys. However, it’s best to use the minimum number that are required to specify a unique key. | |
bytes |
Specifies how binary ( | |
none |
Specifies how schema names should be adjusted for compatibility with the message converter used by the connector. Possible settings:
| |
none |
Specifies how field names should be adjusted for compatibility with the message converter used by the connector. Possible settings:
For more information, see Avro naming. |
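As a hedged illustration only, the following excerpt sketches how several of the filtering and data-handling options described above might appear inside the config section of a KafkaConnector CR. The schema, table, and column names are placeholder assumptions.
config:
  # assumption: placeholder schema, table, and column names
  schema.include.list: dbo
  table.include.list: dbo.customers,dbo.orders
  column.exclude.list: dbo.customers.ssn
  column.mask.with.12.chars: dbo.customers.email
  time.precision.mode: adaptive
  decimal.handling.mode: precise
  tombstones.on.delete: "true"
  include.schema.changes: "true"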
Advanced SQL Server connector configuration properties
The following advanced configuration properties have good defaults that will work in most situations and therefore rarely need to be specified in the connector’s configuration.
Property | Default | Description |
---|---|---|
No default |
Enumerates a comma-separated list of the symbolic names of the custom converter instances that the connector can use. For example,
You must set the
For each converter that you configure for a connector, you must also add a
For example, isbn.type: io.debezium.test.IsbnConverter
If you want to further control the behavior of a configured converter, you can add one or more configuration parameters to pass values to the converter. To associate any additional configuration parameter with a converter, prefix the parameter names with the symbolic name of the converter. For example, isbn.schema.name: io.debezium.sqlserver.type.Isbn | |
initial | A mode for taking an initial snapshot of the structure and optionally data of captured tables. Once the snapshot is complete, the connector will continue reading change events from the database’s redo logs. The following values are supported:
| |
exclusive | Controls whether and for how long the connector holds a table lock. Table locks prevent certain types of changes from occurring in the table while the connector performs a snapshot. You can set the following values:
| |
|
Specifies how the connector queries data while performing a snapshot.
This setting enables you to manage snapshot content in a more flexible manner compared to using the | |
All tables specified in |
An optional, comma-separated list of regular expressions that match the fully-qualified names ( To match the name of a table, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the table; it does not match substrings that might be present in a table name. | |
repeatable_read | Mode to control which transaction isolation level is used and how long the connector locks tables that are designated for capture. The following values are supported:
The
Mode choice also affects data consistency. Only | |
|
Specifies how the connector should react to exceptions during processing of events. | |
| Positive integer value that specifies the number of milliseconds the connector should wait during each iteration for new change events to appear. Defaults to 500 milliseconds, or 0.5 second. | |
|
Positive integer value that specifies the maximum number of records that the blocking queue can hold. When Debezium reads events streamed from the database, it places the events in the blocking queue before it writes them to Kafka. The blocking queue can provide backpressure for reading change events from the database in cases where the connector ingests messages faster than it can write them to Kafka, or when Kafka becomes unavailable. Events that are held in the queue are disregarded when the connector periodically records offsets. Always set the value of | |
|
A long integer value that specifies the maximum volume of the blocking queue in bytes. By default, volume limits are not specified for the blocking queue. To specify the number of bytes that the queue can consume, set this property to a positive long value. | |
| Positive integer value that specifies the maximum size of each batch of events that should be processed during each iteration of this connector. | |
|
Controls how frequently heartbeat messages are sent. | |
No default |
Specifies a query that the connector executes on the source database when the connector sends a heartbeat message. | |
No default |
An interval in milli-seconds that the connector should wait before taking a snapshot after starting up; | |
0 |
Specifies the time, in milliseconds, that the connector delays the start of the streaming process after it completes a snapshot. Setting a delay interval helps to prevent the connector from restarting snapshots in the event that a failure occurs immediately after the snapshot completes, but before the streaming process begins. Set a delay value that is higher than the value of the | |
| Specifies the maximum number of rows that should be read in one go from each table while taking a snapshot. The connector will read the table contents in multiple batches of this size. Defaults to 2000. | |
No default | Specifies the number of rows that will be fetched for each database round-trip of a given query. Defaults to the JDBC driver’s default fetch size. | |
|
An integer value that specifies the maximum amount of time (in milliseconds) to wait to obtain table locks when performing a snapshot. If table locks cannot be acquired in this time interval, the snapshot will fail (also see snapshots). | |
No default | Specifies the table rows to include in a snapshot. Use the property if you want a snapshot to include only a subset of the rows in a table. This property affects snapshots only. It does not apply to events that the connector reads from the log.
The property contains a comma-separated list of fully-qualified table names in the form
From a "snapshot.select.statement.overrides": "customer.orders", "snapshot.select.statement.overrides.customer.orders": "SELECT * FROM customers.orders WHERE delete_flag = 0 ORDER BY id DESC"
In the resulting snapshot, the connector includes only the records for which | |
|
When set to | |
10000 (10 seconds) | The number of milli-seconds to wait before restarting a connector after a retriable error occurs. | |
|
A comma-separated list of operation types that will be skipped during streaming. The operations include: | |
No default value |
Fully-qualified name of the data collection that is used to send signals to the connector. | |
source | List of the signaling channel names that are enabled for the connector. By default, the following channels are available:
| |
No default | List of notification channel names that are enabled for the connector. By default, the following channels are available:
| |
|
Allow schema changes during an incremental snapshot. When enabled the connector will detect schema change during an incremental snapshot and re-select a current chunk to avoid locking DDLs. | |
| The maximum number of rows that the connector fetches and reads into memory during an incremental snapshot chunk. Increasing the chunk size provides greater efficiency, because the snapshot runs fewer snapshot queries of a greater size. However, larger chunk sizes also require more memory to buffer the snapshot data. Adjust the chunk size to a value that provides the best performance in your environment. | |
|
Specifies the watermarking mechanism that the connector uses during an incremental snapshot to deduplicate events that might be captured by an incremental snapshot and then recaptured after streaming resumes.
| |
500 |
Specifies the maximum number of transactions per iteration to be used to reduce the memory footprint when streaming changes from multiple tables in a database. When set to | |
| Uses OPTION(RECOMPILE) query option to all SELECT statements used during an incremental snapshot. This can help to solve parameter sniffing issues that may occur but can cause increased CPU load on the source database, depending on the frequency of query execution. | |
|
The name of the TopicNamingStrategy class that should be used to determine the topic name for data change, schema change, transaction, heartbeat event etc., defaults to | |
|
Specify the delimiter for topic name, defaults to | |
| The size used for holding the topic names in bounded concurrent hash map. This cache will help to determine the topic name corresponding to a given data collection. | |
|
Controls the name of the topic to which the connector sends heartbeat messages. The topic name has this pattern: | |
|
Controls the name of the topic to which the connector sends transaction metadata messages. The topic name has this pattern: For more information, see Transaction Metadata. | |
| Specifies the number of threads that the connector uses when performing an initial snapshot. To enable parallel initial snapshots, set the property to a value greater than 1. In a parallel initial snapshot, the connector processes multiple tables concurrently. Important Parallel initial snapshots is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope. | |
|
Defines tags that customize MBean object names by adding metadata that provides contextual information. Specify a comma-separated list of key-value pairs. Each key represents a tag for the MBean object name, and the corresponding value represents a value for the key, for example, The connector appends the specified tags to the base MBean object name. Tags can help you to organize and categorize metrics data. You can define tags to identify particular application instances, environments, regions, versions, and so forth. For more information, see Customized MBean names. | |
|
Specifies how the connector responds after an operation that results in a retriable error, such as a connection error.
| |
| Controls how the connector queries CDC data. The following modes are supported:
| |
|
Specifies the time, in milliseconds, that the connector waits for a query to complete. Set the value to |
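The following excerpt is a sketch, not a recommendation, of how a few of the advanced properties described above might be combined in a connector configuration; the values are illustrative assumptions that you should tune for your environment.
config:
  # illustrative values only; tune for your workload
  snapshot.mode: initial
  snapshot.isolation.mode: repeatable_read
  snapshot.fetch.size: 2000
  incremental.snapshot.chunk.size: 1024
  poll.interval.ms: 500
  max.batch.size: 2048
  max.queue.size: 8192
  heartbeat.interval.ms: 30000
  event.processing.failure.handling.mode: fail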
Debezium SQL Server connector database schema history configuration properties
Debezium provides a set of schema.history.internal.*
properties that control how the connector interacts with the schema history topic.
The following table describes the schema.history.internal
properties for configuring the Debezium connector.
Property | Default | Description |
---|---|---|
No default | The full name of the Kafka topic where the connector stores the database schema history. | |
No default | A list of host/port pairs that the connector uses for establishing an initial connection to the Kafka cluster. This connection is used for retrieving the database schema history previously stored by the connector, and for writing each DDL statement read from the source database. Each pair should point to the same Kafka cluster used by the Kafka Connect process. | |
| An integer value that specifies the maximum number of milliseconds the connector should wait during startup/recovery while polling for persisted data. The default is 100ms. | |
| An integer value that specifies the maximum number of milliseconds the connector should wait while fetching cluster information using Kafka admin client. | |
| An integer value that specifies the maximum number of milliseconds the connector should wait while creating the Kafka history topic by using the Kafka admin client. | |
|
The maximum number of times that the connector should try to read persisted history data before the connector recovery fails with an error. The maximum amount of time to wait after receiving no data is | |
|
A Boolean value that specifies whether the connector should ignore malformed or unknown database statements or stop processing so a human can fix the issue. The safe default is | |
|
A Boolean value that specifies whether the connector records schema structures from all tables in a schema or database, or only from tables that are designated for capture.
| |
|
A Boolean value that specifies whether the connector records schema structures from all logical databases in the database instance.
|
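For example, a minimal sketch of the schema history settings in a connector configuration might look like the following excerpt; the topic name and broker address are assumptions that must match your Kafka cluster.
config:
  # assumption: topic name and bootstrap address must match your cluster
  schema.history.internal.kafka.topic: schema-changes.inventory
  schema.history.internal.kafka.bootstrap.servers: debezium-kafka-cluster-kafka-bootstrap:9092
  schema.history.internal.store.only.captured.tables.ddl: "true"
  schema.history.internal.skip.unparseable.ddl: "false"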
Pass-through SQL Server connector configuration properties
The connector supports pass-through properties that enable Debezium to specify custom configuration options for fine-tuning the behavior of the Apache Kafka producer and consumer. For information about the full range of configuration properties for Kafka producers and consumers, see the Kafka documentation.
Pass-through properties for configuring how producer and consumer clients interact with schema history topics
Debezium relies on an Apache Kafka producer to write schema changes to database schema history topics. Similarly, it relies on a Kafka consumer to read from database schema history topics when a connector starts. You define the configuration for the Kafka producer and consumer clients by assigning values to a set of pass-through configuration properties that begin with the schema.history.internal.producer.*
and schema.history.internal.consumer.*
prefixes. The pass-through producer and consumer database schema history properties control a range of behaviors, such as how these clients secure connections with the Kafka broker, as shown in the following example:
schema.history.internal.producer.security.protocol=SSL schema.history.internal.producer.ssl.keystore.location=/var/private/ssl/kafka.server.keystore.jks schema.history.internal.producer.ssl.keystore.password=test1234 schema.history.internal.producer.ssl.truststore.location=/var/private/ssl/kafka.server.truststore.jks schema.history.internal.producer.ssl.truststore.password=test1234 schema.history.internal.producer.ssl.key.password=test1234 schema.history.internal.consumer.security.protocol=SSL schema.history.internal.consumer.ssl.keystore.location=/var/private/ssl/kafka.server.keystore.jks schema.history.internal.consumer.ssl.keystore.password=test1234 schema.history.internal.consumer.ssl.truststore.location=/var/private/ssl/kafka.server.truststore.jks schema.history.internal.consumer.ssl.truststore.password=test1234 schema.history.internal.consumer.ssl.key.password=test1234
Debezium strips the prefix from the property name before it passes the property to the Kafka client.
For more information about Kafka producer configuration properties and Kafka consumer configuration properties, see the Apache Kafka documentation .
Pass-through properties for configuring how the SQL Server connector interacts with the Kafka signaling topic
Debezium provides a set of signal.*
properties that control how the connector interacts with the Kafka signals topic.
The following table describes the Kafka signal
properties.
Property | Default | Description |
---|---|---|
<topic.prefix>-signal | The name of the Kafka topic that the connector monitors for ad hoc signals. Note If automatic topic creation is disabled, you must manually create the required signaling topic. A signaling topic is required to preserve signal ordering. The signaling topic must have a single partition. | |
kafka-signal | The name of the group ID that is used by Kafka consumers. | |
No default | A list of the host and port pairs that the connector uses to establish its initial connection to the Kafka cluster. Each pair references the Kafka cluster that is used by the Debezium Kafka Connect process. | |
| An integer value that specifies the maximum number of milliseconds that the connector waits when polling signals. | |
| Specifies whether the Kafka consumer writes an offset commit after it reads a message from the signaling topic. The value that you assign to this property determines whether the connector can process requests that the signaling topic receives while the connector is offline. Choose one of the following settings:
|
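As a hedged sketch, enabling the Kafka signaling channel in addition to the default source channel might look like the following configuration excerpt; the topic name and bootstrap address are assumptions.
config:
  # assumption: topic name and bootstrap address are placeholders
  signal.enabled.channels: source,kafka
  signal.kafka.topic: inventory-connector-sqlserver-signal
  signal.kafka.bootstrap.servers: debezium-kafka-cluster-kafka-bootstrap:9092
  signal.kafka.groupId: kafka-signal
  signal.kafka.poll.timeout.ms: 100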
Pass-through properties for configuring the Kafka consumer client for the signaling channel
The Debezium connector provides for pass-through configuration of the signals Kafka consumer. Pass-through signals properties begin with the prefix signal.consumer.*. For example, the connector passes properties such as signal.consumer.security.protocol=SSL to the Kafka consumer.
Debezium strips the prefixes from the properties before it passes the properties to the Kafka signals consumer.
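For example, securing the signaling consumer with SSL might look like the following sketch; the truststore path and password are placeholder assumptions.
config:
  # assumption: placeholder truststore location and password
  signal.consumer.security.protocol: SSL
  signal.consumer.ssl.truststore.location: /var/private/ssl/kafka.client.truststore.jks
  signal.consumer.ssl.truststore.password: test1234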
Pass-through properties for configuring the SQL Server connector sink notification channel
The following table describes properties that you can use to configure the Debezium sink notification
channel.
Property | Default | Description |
---|---|---|
No default |
The name of the topic that receives notifications from Debezium. This property is required when you configure the notification.enabled.channels property to include sink as one of the enabled notification channels.
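A hedged example of enabling the sink notification channel follows; the topic name is an assumption and the topic must exist or be created automatically.
config:
  # assumption: placeholder notification topic name
  notification.enabled.channels: sink
  notification.sink.topic.name: debezium-notifications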
Debezium connector pass-through database driver configuration properties
The Debezium connector provides for pass-through configuration of the database driver. Pass-through database properties begin with the prefix driver.*. For example, the connector passes properties such as driver.foobar=false to the JDBC URL.
Debezium strips the prefix from these properties before it passes them to the database driver.
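For example, a connector configuration might carry driver settings such as the following. driver.foobar is the placeholder used above; encrypt and trustServerCertificate are shown only as illustrative SQL Server JDBC driver options, not as required settings:
# Pass-through JDBC driver options; Debezium strips the driver. prefix before applying them
driver.foobar=false
driver.encrypt=true
driver.trustServerCertificate=false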
2.7.5. Refreshing capture tables after a schema change
When change data capture is enabled for a SQL Server table, event records are persisted to a capture table on the server as changes occur in the table. If you introduce a change in the structure of the source table, for example, by adding a new column, that change is not dynamically reflected in the change table. For as long as the capture table continues to use the outdated schema, the Debezium connector is unable to emit data change events for the table correctly. You must intervene to refresh the capture table to enable the connector to resume processing change events.
Because of the way that CDC is implemented in SQL Server, you cannot use Debezium to update capture tables. To refresh capture tables, you must be a SQL Server database operator with elevated privileges. As a Debezium user, you must coordinate tasks with the SQL Server database operator to complete the schema refresh and restore streaming to Kafka topics.
You can use one of the following methods to update capture tables after a schema change:
- Offline schema updates require you to stop the Debezium connector before you can update capture tables.
- Online schema updates can update capture tables while the Debezium connector is running.
There are advantages and disadvantages to using each type of procedure.
Whether you use the online or offline update method, you must complete the entire schema update process before you apply subsequent schema updates on the same source table. The best practice is to execute all DDL statements in a single batch so that the procedure needs to be run only once.
Some schema changes are not supported on source tables that have CDC enabled. For example, if CDC is enabled on a table, SQL Server does not allow you to rename one of its columns or change the column type.
After you change a column in a source table from NULL to NOT NULL or vice versa, the SQL Server connector cannot correctly capture the changed information until after you create a new capture instance. If you do not create a new capture table after a change to the column designation, change event records that the connector emits do not correctly indicate whether the column is optional. That is, columns that were previously defined as optional (or NULL) continue to be reported as optional, despite now being defined as NOT NULL. Similarly, columns that had been defined as required (NOT NULL) retain that designation, although they are now defined as NULL.
After you rename a table by using the sp_rename function, the connector continues to emit changes under the old source table name until it is restarted. After the connector restarts, it emits changes under the new source table name.
2.7.5.1. Running an offline update after a schema change
Offline schema updates provide the safest method for updating capture tables. However, offline updates might not be feasible for applications that require high availability.
Prerequisites
- An update was committed to the schema of a SQL Server table that has CDC enabled.
- You are a SQL Server database operator with elevated privileges.
Procedure
- Suspend the application that updates the database.
- Wait for the Debezium connector to stream all unstreamed change event records.
- Stop the Debezium connector.
- Apply all changes to the source table schema.
- Create a new capture table for the updated source table by running the sys.sp_cdc_enable_table stored procedure with a unique value for the @capture_instance parameter.
- Resume the application that you suspended in Step 1.
- Start the Debezium connector.
- After the Debezium connector starts streaming from the new capture table, drop the old capture table by running the sys.sp_cdc_disable_table stored procedure with the @capture_instance parameter set to the old capture instance name.
2.7.5.2. Running an online update after a schema change
The procedure for completing an online schema update is simpler than the procedure for running an offline schema update, and you can complete it without requiring any downtime in application and data processing. However, with online schema updates, a potential processing gap can occur after you update the schema in the source database, but before you create the new capture instance. During that interval, change events continue to be captured by the old instance of the change table, and the change data that is saved to the old table retains the structure of the earlier schema. So, for example, if you add a new column to a source table, change events that are produced before the new capture table is ready do not contain a field for the new column. If your application does not tolerate such a transition period, it is best to use the offline schema update procedure.
Prerequisites
- An update was committed to the schema of a SQL Server table that has CDC enabled.
- You are a SQL Server database operator with elevated privileges.
Procedure
- Apply all changes to the source table schema.
- Create a new capture table for the updated source table by running the sys.sp_cdc_enable_table stored procedure with a unique value for the @capture_instance parameter.
- When Debezium starts streaming from the new capture table, you can drop the old capture table by running the sys.sp_cdc_disable_table stored procedure with the @capture_instance parameter set to the old capture instance name.
Example: Running an online schema update after a database schema change
The following example shows how to complete an online schema update in the change table after the column phone_number is added to the customers source table.
- Modify the schema of the customers source table by running the following query to add the phone_number field:
ALTER TABLE customers ADD phone_number VARCHAR(32);
- Create the new capture instance by running the sys.sp_cdc_enable_table stored procedure:
EXEC sys.sp_cdc_enable_table
    @source_schema = 'dbo',
    @source_name = 'customers',
    @role_name = NULL,
    @supports_net_changes = 0,
    @capture_instance = 'dbo_customers_v2';
GO
- Insert new data into the customers table by running the following query:
INSERT INTO customers(first_name,last_name,email,phone_number)
VALUES ('John','Doe','john.doe@example.com', '+1-555-123456');
GO
- The Kafka Connect log reports on configuration updates through entries similar to the following messages:
connect_1 | 2019-01-17 10:11:14,924 INFO || Multiple capture instances present for the same table: Capture instance "dbo_customers" [sourceTableId=testDB.dbo.customers, changeTableId=testDB.cdc.dbo_customers_CT, startLsn=00000024:00000d98:0036, changeTableObjectId=1525580473, stopLsn=00000025:00000ef8:0048] and Capture instance "dbo_customers_v2" [sourceTableId=testDB.dbo.customers, changeTableId=testDB.cdc.dbo_customers_v2_CT, startLsn=00000025:00000ef8:0048, changeTableObjectId=1749581271, stopLsn=NULL] [io.debezium.connector.sqlserver.SqlServerStreamingChangeEventSource]
connect_1 | 2019-01-17 10:11:14,924 INFO || Schema will be changed for ChangeTable [captureInstance=dbo_customers_v2, sourceTableId=testDB.dbo.customers, changeTableId=testDB.cdc.dbo_customers_v2_CT, startLsn=00000025:00000ef8:0048, changeTableObjectId=1749581271, stopLsn=NULL] [io.debezium.connector.sqlserver.SqlServerStreamingChangeEventSource]
...
connect_1 | 2019-01-17 10:11:33,719 INFO || Migrating schema to ChangeTable [captureInstance=dbo_customers_v2, sourceTableId=testDB.dbo.customers, changeTableId=testDB.cdc.dbo_customers_v2_CT, startLsn=00000025:00000ef8:0048, changeTableObjectId=1749581271, stopLsn=NULL] [io.debezium.connector.sqlserver.SqlServerStreamingChangeEventSource]
- Eventually, the phone_number field is added to the schema and its value appears in messages written to the Kafka topic:
...
    {
      "type": "string",
      "optional": true,
      "field": "phone_number"
    }
...
    "after": {
      "id": 1005,
      "first_name": "John",
      "last_name": "Doe",
      "email": "john.doe@example.com",
      "phone_number": "+1-555-123456"
    },
- Drop the old capture instance by running the sys.sp_cdc_disable_table stored procedure:
EXEC sys.sp_cdc_disable_table
    @source_schema = 'dbo',
    @source_name = 'customers',
    @capture_instance = 'dbo_customers';
GO
2.7.6. Monitoring Debezium SQL Server connector performance
The Debezium SQL Server connector provides three types of metrics that are in addition to the built-in support for JMX metrics that Zookeeper, Kafka, and Kafka Connect provide. The connector provides the following metrics:
- Snapshot metrics for monitoring the connector when performing snapshots.
- Streaming metrics for monitoring the connector when reading CDC table data.
- Schema history metrics for monitoring the status of the connector’s schema history.
For information about how to expose the preceding metrics through JMX, see the Debezium monitoring documentation.
2.7.6.1. Customized names for SQL Server connector snapshot and streaming MBean objects
Debezium connectors expose metrics via the MBean name for the connector. These metrics, which are specific to each connector instance, provide data about the behavior of the connector’s snapshot, streaming, and schema history processes.
By default, when you deploy a correctly configured connector, Debezium generates a unique MBean name for each of the different connector metrics. To view the metrics for a connector process, you configure your observability stack to monitor its MBean. But these default MBean names depend on the connector configuration; configuration changes can result in changes to the MBean names. A change to the MBean name breaks the linkage between the connector instance and the MBean, disrupting monitoring activity. In this scenario, you must reconfigure the observability stack to use the new MBean name if you want to resume monitoring.
To prevent monitoring disruptions that result from MBean name changes, you can configure custom metrics tags. You configure custom metrics by adding the custom.metric.tags
property to the connector configuration. The property accepts key-value pairs in which each key represents a tag for the MBean object name, and the corresponding value represents the value of that tag. For example: k1=v1,k2=v2
. Debezium appends the specified tags to the MBean name of the connector.
After you configure the custom.metric.tags
property for a connector, you can configure the observability stack to retrieve metrics associated with the specified tags. The observability stack then uses the specified tags, rather than the mutable MBean names to uniquely identify connectors. Later, if Debezium redefines how it constructs MBean names, or if the topic.prefix
in the connector configuration changes, metrics collection is uninterrupted, because the metrics scrape task uses the specified tag patterns to identify the connector.
A further benefit of using custom tags is that you can use tags that reflect the architecture of your data pipeline, so that metrics are organized in a way that suits your operational needs. For example, you might specify tags with values that declare the type of connector activity, the application context, or the data source, for example, db1-streaming-for-application-abc. If you specify multiple key-value pairs, all of the specified pairs are appended to the connector’s MBean name.
The following example illustrates how tags modify the default MBean name.
Example 2.55. How custom tags modify the connector MBean name
By default, the SQL Server connector uses the following MBean name for streaming metrics:
debezium.sqlserver:type=connector-metrics,context=streaming,server=<topic.prefix>
If you set the value of custom.metric.tags
to database=salesdb-streaming,table=inventory
, Debezium generates the following custom MBean name:
debezium.sqlserver:type=connector-metrics,context=streaming,server=<topic.prefix>,database=salesdb-streaming,table=inventory
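For reference, the custom MBean name in the example above would result from a connector configuration entry along the following lines, shown here as a minimal sketch that reuses the tag values from the example:
# Tags appended to the connector MBean name (key-value pairs from the example above)
custom.metric.tags=database=salesdb-streaming,table=inventory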
2.7.6.2. Debezium SQL Server connector snapshot metrics
The MBean is debezium.sql_server:type=connector-metrics,server=<topic.prefix>,task=<task.id>,context=snapshot
.
Snapshot metrics are not exposed unless a snapshot operation is active, or if a snapshot has occurred since the last connector start.
The following table lists the snapshot metrics that are available.
Attributes | Type | Description |
---|---|---|
| The last snapshot event that the connector has read. | |
| The number of milliseconds since the connector has read and processed the most recent event. | |
| The total number of events that this connector has seen since last started or reset. | |
| The number of events that have been filtered by include/exclude list filtering rules configured on the connector. | |
| The list of tables that are captured by the connector. | |
| The length of the queue that is used to pass events between the snapshotter and the main Kafka Connect loop. | |
| The free capacity of the queue used to pass events between the snapshotter and the main Kafka Connect loop. | |
| The total number of tables that are being included in the snapshot. | |
| The number of tables that the snapshot has yet to copy. | |
| Whether the snapshot was started. | |
| Whether the snapshot was paused. | |
| Whether the snapshot was aborted. | |
| Whether the snapshot completed. | |
| The total number of seconds that the snapshot has taken so far, even if not complete. This total also includes the time when the snapshot was paused. | |
| The total number of seconds that the snapshot was paused. If the snapshot was paused several times, the paused time adds up. | |
| Map containing the number of rows scanned for each table in the snapshot. Tables are incrementally added to the Map during processing. Updates every 10,000 rows scanned and upon completing a table. | |
| The maximum buffer of the queue in bytes. This metric is available if a maximum queue size in bytes is configured for the connector. | |
| The current volume, in bytes, of records in the queue. |
The connector also provides the following additional snapshot metrics when an incremental snapshot is executed:
Attributes | Type | Description |
---|---|---|
| The identifier of the current snapshot chunk. | |
| The lower bound of the primary key set defining the current chunk. | |
| The upper bound of the primary key set defining the current chunk. | |
| The lower bound of the primary key set of the currently snapshotted table. | |
| The upper bound of the primary key set of the currently snapshotted table. |
2.7.6.3. Debezium SQL Server connector streaming metrics
The MBean is debezium.sql_server:type=connector-metrics,server=<topic.prefix>,task=<task.id>,context=streaming
.
The following table lists the streaming metrics that are available.
Attributes | Type | Description |
---|---|---|
| The last streaming event that the connector has read. | |
| The number of milliseconds since the connector has read and processed the most recent event. | |
| The total number of data change events reported by the source database since the last connector start, or since a metrics reset. Represents the data change workload for Debezium to process. | |
| The total number of create events processed by the connector since its last start or metrics reset. | |
| The total number of update events processed by the connector since its last start or metrics reset. | |
| The total number of delete events processed by the connector since its last start or metrics reset. | |
| The number of events that have been filtered by include/exclude list filtering rules configured on the connector. | |
| The list of tables that are captured by the connector. | |
| The length of the queue that is used to pass events between the streamer and the main Kafka Connect loop. | |
| The free capacity of the queue used to pass events between the streamer and the main Kafka Connect loop. | |
| Flag that denotes whether the connector is currently connected to the database server. | |
| The number of milliseconds between the last change event’s timestamp and the connector processing it. The values will incorporate any differences between the clocks on the machines where the database server and the connector are running. | |
| The number of processed transactions that were committed. | |
| The coordinates of the last received event. | |
| Transaction identifier of the last processed transaction. | |
| The maximum buffer of the queue in bytes. This metric is available if a maximum queue size in bytes is configured for the connector. | |
| The current volume, in bytes, of records in the queue. |
2.7.6.4. Debezium SQL Server connector schema history metrics
The MBean is debezium.sql_server:type=connector-metrics,context=schema-history,server=<topic.prefix>
.
The following table lists the schema history metrics that are available.
Attributes | Type | Description |
---|---|---|
|
One of | |
| The time, in epoch seconds, at which recovery started. | |
| The number of changes that were read during recovery phase. | |
| The total number of schema changes applied during recovery and runtime. | |
| The number of milliseconds that elapsed since the last change was recovered from the history store. | |
| The number of milliseconds that elapsed since the last change was applied. | |
| The string representation of the last change recovered from the history store. | |
| The string representation of the last applied change. |