Release Notes for Red Hat build of Debezium 3.2.4


Red Hat build of Debezium 3.2.4

What's new in Red Hat build of Debezium

Red Hat build of Debezium Documentation Team

Abstract

Describes the Red Hat build of Debezium product and provides the latest details on what's new in this release.

Chapter 1. Debezium 3.2.4 release notes

Debezium is a distributed change data capture platform that captures row-level changes that occur in database tables and then passes corresponding change event records to Apache Kafka topics. Applications can read these change event streams and access the change events in the order in which they occurred.

Debezium is built on Apache Kafka and is deployed and integrated with Streams for Apache Kafka on OpenShift Container Platform or on Red Hat Enterprise Linux.

The following topics provide release details:

1.1. Debezium database connectors

Debezium provides source connectors and sink connectors that are based on Kafka Connect. Connectors are available for the following common databases:

Source connectors
  • Db2
  • Informix (Developer Preview)
  • MariaDB
  • MongoDB
  • MySQL
  • Oracle
  • PostgreSQL
  • SQL Server
Sink connectors
  • JDBC sink connector
  • MongoDB sink connector (Developer Preview)

1.2. Connector usage notes

Db2
  • The Debezium Db2 connector does not include the Db2 JDBC driver (jcc-11.5.0.0.jar). See the Db2 connector deployment instructions for information about how to deploy the necessary JDBC driver.
  • The Db2 connector requires the use of the abstract syntax notation (ASN) libraries, which are available as a standard part of Db2 for Linux.
  • To use the ASN libraries, you must have a license for IBM InfoSphere Data Replication (IIDR). You do not have to install IIDR to use the libraries.
Oracle
PostgreSQL
  • To use the Debezium PostgreSQL connector you must use the pgoutput logical decoding output plug-in, which is the default for PostgreSQL versions 10 and later.
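
For example, a minimal sketch of selecting the plug-in explicitly by using the plugin.name property in the connector configuration:

plugin.name=pgoutput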

1.3. Debezium supported configurations

For information about Debezium supported configurations, including information about supported database versions, see the Debezium 3.2.4 Supported configurations page.

1.4. Debezium installation options

You can install Debezium with Streams for Apache Kafka on OpenShift or on Red Hat Enterprise Linux.

1.5. Debezium 3.2.4 features and improvements

For information about changes that were introduced in the previous Debezium release, see the Debezium 3.0.8 Release Notes.

1.6. Breaking changes

Breaking changes represent significant differences in connector behavior or require configuration changes that are not compatible with earlier Debezium versions.

For a list of breaking changes in the previous release, see the Debezium 3.0.8 Release Notes.

Debezium 3.2.4 introduces breaking changes that affect the following components:

1.6.1. Breaking changes relevant to all connectors

The following breaking changes apply to all connectors:

Java 17 is required
  • All Debezium connectors require a runtime baseline of Java 17.
  • If you use Java 11 with new connectors, Kafka Connect silently fails to find the connector, and does not report any bytecode errors.
  • To run Debezium Server, a runtime baseline of Java 21 is required.
Kafka 4.0 support

Debezium is now built and tested using Apache Kafka 4.0.

For details about compatibility between Debezium, Streams for Apache Kafka, and Kafka, check the Debezium supported configurations document.

DBZ-8875

Event source block is versioned

Debezium change events contain a source information block that includes attributes that describe the origin of a change event. The source information block is a Kafka Struct data type, and can be versioned. However, in earlier Debezium versions, no version information was associated with the block.

Beginning in this release, the source information block is versioned, and its initial version is set to 1. Future changes will increment the version value.

DBZ-8499

Note

If you use a schema registry in your Debezium environment, you can expect this change to result in schema compatibility issues.

1.6.2. MariaDB/MySQL connector breaking changes

Schema history no longer stores certain DDL statements

In earlier releases, the connector stored certain DDL events, such as TRUNCATE and REPLACE, in the internal Debezium schema history topic. Debezium does not require these types of statements to represent schema evolution. Beginning with this release, the connector no longer captures these DDL events in the internal schema history topic.

DBZ-9085

Improved missing log position validation

In past releases, when you set the snapshot.mode property of the Debezium MySQL connector to a value other than when_needed, if the connector could not find the binary log position, it logged a warning and reported that it would resume from the last available position in the logs. However, after the snapshot completed and the connector transitioned to the streaming phase, it immediately failed, reporting that the binary log position could not be located.

Beginning with this release, for consistency with the behavior of other connectors, the connector reports any error that it detects during the validation phase.

DBZ-9118

1.6.3. Oracle connector breaking changes

Removed Oracle LogMiner JMX metrics

In this release, a number of Oracle LogMiner JMX metrics that were deprecated in an earlier release are no longer available. Most of the removed metrics are replaced with new metrics, but in one case a metric is removed with no replacement. Refer to the following table to learn more about the status of the removed metrics.

Removed JMX metric                       Replacement
CurrentRedoLogFileName                   CurrentLogFileNames
RedoLogStatus                            RedoLogStatuses
SwitchCounter                            LogSwitchCount
FetchingQueryCount                       FetchQueryCount
HoursToKeepTransactionInBuffer           MillisecondsToKeepTransactionsInBuffer
TotalProcessingTimeInMilliseconds        TotalBatchProcessingTimeInMilliseconds
RegisteredDmlCount                       TotalChangesCount
MillisecondsToSleepBetweenMiningQuery    SleepTimeInMilliseconds
NetworkConnectionProblemsCounter         Removed with no replacement.

Review your monitoring and observability settings and adjust any that rely on metrics that are no longer available.

DBZ-8647

Change to reselect column post processor behavior for Oracle LOB columns

Beginning in this release, the ReselectColumnsPostProcessor reselects Oracle LOB columns even if you configure the connector so that it does not emit values for these columns (that is, lob.enabled is set to false). Because LOB columns store large amounts of data, mining these columns during the streaming phase increases the load on the database, the connector, and the network, contributing to a decrease in performance. Rather than mining LOB content directly from the redo logs during the streaming phase, you can use the column reselection process to retrieve LOB data separately, improving control and efficiency by populating the data only when you actually need it.

DBZ-8653

Query timeout now applies to Oracle LogMiner queries

When the Oracle connector executes its initial query to fetch data from LogMiner, the database.query.timeout.ms connector configuration property now controls how long the query can run before it is canceled. When you upgrade, check the connector metric MaxDurationOfFetchQueryInMilliseconds to determine whether you need to adjust this new property. By default, the timeout is 10 minutes; to disable the timeout, set the property to 0.
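
For example, a minimal sketch of setting the timeout in the connector configuration (the value shown is illustrative; 600000 milliseconds corresponds to the default of 10 minutes, and 0 disables the timeout):

database.query.timeout.ms=600000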

DBZ-8830

TLS using JKS

The Debezium Oracle connector supports the use of JKS to configure TLS connections. To use JKS, special connector configuration is required to provide the right information to the Oracle JDBC driver. This release adds documentation that describes the process for using JKS to configure TLS connections.

DBZ-8788

1.6.4. PostgreSQL connector breaking changes

Sparse vector logical type renamed

The PostgreSQL extension vector (also known as pgvector) provides an implementation of a variety of vector data types, including one called sparsevec. A sparse vector is one that stores only the populated key and value entries within the vector. Unpopulated fields are ignored to reduce the amount of space that is required to represent the data set.

The Debezium 3.0.8 release introduced the SparseVector logical type, io.debezium.data.SparseVector. With this release, the name changes to io.debezium.data.SparseDoubleVector.

Note

If you previously worked with SparseVector logical types, examine your code to verify that it recognizes the new logical type name.

DBZ-8585

1.7. General availability features

Debezium 3.2.4 provides new features for the following connectors:

1.7.1. Features promoted to General Availability

The following features are promoted from Technology Preview to General Availability (GA) in the Debezium 3.2.4 release. For information about other GA features, see Section 1.6, “Breaking changes”.

1.7.1.1. MySQL connector features promoted to GA
Support for MySQL 9
Beginning in this release, you can use the Debezium connector for MySQL with MySQL 9.
Support for MySQL vector data types

In this release, the Debezium MySQL grammar provides support for processing vector functions. With this change, the Debezium MySQL connector is able to process the new VECTOR(n) data type that is available in MySQL 9.0. For more information about how Debezium processes Vector types, see the MySQL connector documentation.

DBZ-8157

1.7.1.2. Oracle connector features promoted to GA
Compatibility with Oracle Database 23ai

This release of the Debezium Oracle connector provides support for Oracle 23ai databases. However, the connector does not support the following features that are available in Oracle 23ai:

  • Table Value Constructors
  • JavaScript-based stored procedures
  • Domain data types
  • Updates that use JOIN conditions
  • Boolean data types
  • Vector data types
Support for Oracle EXTENDED string sizes

In Oracle 12c you can set the database parameter max_string_size to EXTENDED to enable the use of extended strings that increase the maximum size of character data types from 4000 bytes to 32K. When you enable the use of extended strings, you do not have to use CLOB-based operations to work with character data up to 32K. Instead, you can use the same syntax that you use with character data that is 4000 bytes or less.

In this release, the Oracle connector can capture changes directly from the transaction logs for databases that use extended strings. Because extended strings are effectively CLOB operations, to mine these column types, you must set lob.enabled to true.
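
For example, a minimal sketch of the connector configuration setting that this section requires for mining extended string columns:

lob.enabled=true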

For more information about connector support for extended string sizes, see the Oracle connector documentation in the Debezium User Guide.

Important

When Oracle is configured to use EXTENDED string sizes, LogMiner can sometimes fail to escape single quotes within extended string fields. As a result, values of these fields can be truncated, resulting in invalid SQL statements, which the Oracle connector is unable to parse.
For more information, see DBZ-8034.

To mitigate this problem, you can configure the connector to relax single-quote detection by setting the following property to true:
internal.log.mining.sql.relaxed.quote.detection

Although this internal setting can be useful in resolving some instances of the problem, its use is not currently a supported feature.

DBZ-8039

Support for PostgreSQL 17 failover replication slots

PostgreSQL 17 adds support for failover slots, which are replication slots that are automatically synchronized to a standby server.

When you create a replication slot on the primary PostgreSQL server in a cluster, you can configure it to be replicated to a failover replica. A PostgreSQL administrator can then manually synchronize failover replication slots by calling the pg_sync_replication_slots() function, or can configure automatic synchronization by setting the value of the sync_replication_slots parameter to true. When automatic synchronization is enabled, if a failover occurs, Debezium can immediately switch over to consuming events from the failover slot on the replica, and thus not miss any events.

To enable Debezium to consume events from a failover slot, set the value of the slot.failover property in the Debezium PostgreSQL connector configuration to true. This feature is available only if you configure Debezium to connect to the primary server in a cluster that runs PostgreSQL 17 or greater. Failover replication slots are not created for databases that run earlier PostgreSQL releases.
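
For example, a minimal sketch of enabling the feature in the connector configuration:

slot.failover=true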

For more information, see Supported topologies and slot.failover in the Debezium PostgreSQL connector documentation.

DBZ-8412

1.7.2. JDBC sink connector GA features

JDBC connector support for vector data types

In this release, the Debezium JDBC sink connector provides support for processing some vector data types. For details about how the connector maps vector data types, see the JDBC connector documentation.

DBZ-8571

1.7.3. Oracle connector GA features

  • Support for EXTENDED string sizes
  • Support for Oracle 23

For more information, see the entries for these features in Features promoted to General availability.

1.7.4. PostgreSQL connector GA features

Support for PostgreSQL pgvector data types

PostgreSQL 15 introduced the pgvector extension, which provides the following data types:

  • vector
  • halfvec
  • sparsevec

Beginning in Debezium 3.2.4, the PostgreSQL connector supports streaming of events that use these pgvector data types. After you enable the pgvector extension in the database, no further configuration is required for Debezium to convert vector values. When the connector emits a change event record for an operation that involves one of these data types, it converts each vector type in the source to a semantic type, according to the mappings in the following list:

vector

Mapped to an ARRAY of numeric values.

halfvec

Mapped to an ARRAY of numeric values.

sparsevec

Mapped to a Struct that contains the following members:

  • The number of dimensions in the vector.
  • An index that represents the position of vector elements.

For more information, see the documentation for PostgreSQL connector pgvector types.

DBZ-8121

1.8. Technology Preview features

The following Technology Preview features are available in Debezium 3.2.4:

Important

Technology Preview features are not supported with Red Hat production service-level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend implementing any Technology Preview features in production environments. Technology Preview features provide early access to upcoming product innovations, enabling you to test functionality and provide feedback during the development process. For more information about support scope, see Technology Preview Features Support Scope.

In this release the following features are promoted from Developer Preview to Technology Preview:

MongoDB sink connector (Technology Preview)

In this release, the earlier MongoDB sink connector Developer Preview feature is promoted to Technology Preview. The Debezium MongoDB sink connector differs from other vendor implementations in that it can ingest raw change events emitted by Debezium connectors without first applying an event flattening transformation. The MongoDB sink connector can take advantage of native Debezium source connector features, such as column type propagation, enabling you to potentially reduce the processing footprint of your data pipeline, and simplify its configuration.

Unlike the JDBC sink connector, which requires you to install an additional plug-in, the MongoDB sink connector is bundled alongside the MongoDB source connector in the same artifact. As a result, if you install the Debezium 3.2.4 MongoDB source connector, you also have the MongoDB sink connector.

Minimal configuration is required to get started with the MongoDB sink connector, for example:

{
  "connector.class": "io.debezium.connector.mongodb.MongoDbSinkConnector",
  "connection.string": "...",
  "topics": "topic1,topic2",
  "sink.database": "targetdb"
}

The following configuration properties are mandatory:

connection.string

Provides the details for connecting to the MongoDB sink database.

sink.database

Provides the name of the target database where the changes will be written.

topics

Specifies a comma-separated list of regular expressions that describe the topics from which the sink connector reads event records.

For more information, see the Debezium MongoDB sink connector documentation.

DBZ-8339

Exactly-once delivery for PostgreSQL streaming (Technology Preview)

In this release, the Developer Preview of exactly-once semantics for the PostgreSQL connector is promoted to Technology Preview. Exactly-once delivery for PostgreSQL applies only to the streaming phase; exactly-once delivery does not apply to snapshots.

Debezium was designed to provide at-least-once delivery with a goal of ensuring that connectors capture all change events that occur in the monitored sources. In KIP-618 the Apache Kafka community proposes a solution to address problems that occur when a producer retries a message. Source connectors sometimes resend batches of events to the Kafka broker, even after the broker previously committed the batch. This situation can result in duplicate events being sent to consumers (sink connectors), which can cause problems for consumers that do not handle duplicates easily.

No connector configuration changes are required to enable exactly-once delivery. However, exactly-once delivery must be configured in your Kafka Connect worker configuration. For information about setting the required Kafka Connect configuration properties, see KIP-618.
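
For reference, a minimal sketch of the worker-level property that KIP-618 defines (this property name comes from KIP-618, not from this document; verify it against the documentation for your Kafka version):

exactly.once.source.support=enabled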

Note

To set exactly.once.support to required in the Kafka worker configuration, all connectors in the Kafka Connect cluster must support exactly-once delivery. If you attempt to set this option in a cluster in which workers do not consistently support exactly-once delivery, the connectors that do not support this feature fail validation at start-up.

MongoDB source connector collection-scoped streaming (Technology Preview)

This feature is promoted from Developer Preview to Technology Preview.

In previous versions of the Debezium MongoDB source connector, change streams could be opened only against the deployment and database scopes, which was not always ideal for environments with restrictive permissions. Debezium 3.2.4 introduces a new change stream mode in which the connector operates within the scope of a single collection only, making it possible to grant more granular permissions.

To limit the capture scope of a MongoDB connector to a specific collection, set the value of the capture.scope property in the connector configuration to collection. Use this setting when you intend for a connector to capture changes only from a single MongoDB collection.
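
For example, a minimal sketch of the connector configuration setting:

capture.scope=collection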

The following limitation applies to the use of this feature:

  • If you set the value of the capture.scope property to collection, the connector cannot use the default source signaling channel. Enabling the source channel for a connector is required to permit processing of incremental snapshot signals, including signals sent via the Kafka, JMX, or File channels. Thus, if you set the value of the capture.scope property in the connector configuration to collection, the connector cannot perform incremental snapshots.

    DBZ-7760

In this release, the following new feature is available as a Technology Preview:

Oracle connector unbuffered LogMiner adapter (Technology Preview)

In this release, when you configure the Oracle connector to use the native Oracle LogMiner API to read and stream changes from database transaction logs, you can specify that the connector streams committed changes only. In this committed changes mode, the connector forwards change event records immediately to destination topics without first buffering them. Because the connector does not maintain an internal transaction buffer, it requires less connector memory than the default buffered LogMiner setting. However, the source database experiences a corresponding increase in its processing load and memory requirements. To set this option, assign the value logminer_unbuffered to the Oracle connector property, database.connection.adapter.
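
For example, a minimal sketch of selecting the unbuffered adapter in the connector configuration:

database.connection.adapter=logminer_unbuffered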

DBZ-9351

In this release, the following features that were introduced in earlier releases remain in Technology Preview:

Use of XML data types with the Oracle connector (Technology Preview)

The Debezium Oracle connector can process the XMLType data type that Oracle uses for handling XML data in the database.

For information about the mappings that the connector uses for the Oracle XMLTYPE, see XML types in the connector documentation.

CloudEvents converter (Technology Preview)
The CloudEvents converter emits change event records that conform to the CloudEvents specification. The CloudEvents change event envelope can be JSON or Avro and each envelope type supports JSON or Avro as the data format. For more information, see CloudEvents converter.
Custom converters (Technology Preview)
In cases where the default data type conversions do not meet your needs, you can create custom converters to use with a connector. For more information, see Custom-developed converters.

1.9. Developer Preview features

Debezium 3.2.4 includes several Developer Preview features.

Important

Developer Preview features are not supported by Red Hat in any way and are not functionally complete or production-ready. Do not use Developer Preview software for production or business-critical workloads. Developer Preview software provides early access to upcoming product software in advance of its possible inclusion in a Red Hat product offering. Customers can use this software to test functionality and provide feedback during the development process. This software might not have any documentation, is subject to change or removal at any time, and has received limited testing. Red Hat might provide ways to submit feedback on Developer Preview software without an associated SLA.

For more information about the support scope of Red Hat Developer Preview software, see Developer Preview Support Scope.

The following new Developer Preview features are available in this release:

This release also includes the following Developer Preview features, which were introduced in earlier releases:

  • Debezium Server
  • Informix source connector

For information about these features, see Section 1.9.1, “Previously available Developer Preview features”.

Debezium AI embeddings SMT (Developer Preview)

To assist you in preparing unstructured text content for use in large language models, use the Debezium AI SMT to convert text captured by a connector into a numeric representation. The SMT can use the following model providers to generate text embeddings:

  • Hugging Face
  • Ollama
  • ONNX MiniLM
  • Voyage AI

For more information about configuring and using the AI embeddings SMT, see the Debezium community documentation.

DBZ-8702

OpenLineage integration (Developer Preview)

You can integrate Debezium with OpenLineage to track how data moves through the various systems, transformations, and processes in your data pipeline. The integration with OpenLineage maps events in your data pipeline to artifacts in the OpenLineage data model to provide visibility into where data originates, how it moves, and what dependencies exist in the data pipeline.

For more information about how to enable and configure the OpenLineage integration, see the Debezium community documentation.

DBZ-9110

Debezium Quarkus extension (Developer Preview)

The Debezium Quarkus outbox extension assists Quarkus applications in implementing the outbox pattern to share data asynchronously among microservices. The extension works in parallel with a Debezium connector that is configured to use the Outbox Event router SMT to monitor the outbox table.

When an application processes a change that modifies a record in its database, the change triggers the Quarkus extension, which exports a change data record that describes the event to an outbox table. The SMT that monitors the outbox table detects the change, and routes the change event message to a configured Kafka topic. Other microservices in the environment can then consume event messages asynchronously from Kafka, and process the event data as input.

For more information about how to configure and use the Quarkus outbox extension, see the Debezium community documentation.

Web assembly scripting support (TinyGo) (Developer Preview)

Beginning with this release, you can use the TinyGo implementation of the Go language to compile scripts for use in the content-based routing and filtering single message transformations. For more information see the documentation for the topic routing and filtering SMTs.

DBZ-8586, DBZ-8737

The following Developer Preview features were introduced in earlier releases:

Debezium Server (Developer Preview)
This release continues to make the Debezium Server available as a Developer Preview feature. The Debezium Server is a ready-to-use application that streams change events from a data source directly to a configured Kafka or Redis data sink. Unlike the generally available Debezium connectors, the deployment of Debezium Server has no dependencies on Apache Kafka Connect. For more information about the Debezium Server Developer Preview, see the Debezium User Guide.
Informix source connector (Developer Preview)

In this release, the Debezium source connector for Informix is available as a Developer Preview feature.

The Debezium Informix source connector captures row-level changes from tables in an IBM Informix database. The connector is based on the Debezium Db2 connector, and uses the Informix Change Data Capture API for Java to capture transactional data. This API captures data from the current logical log and processes transactions sequentially.

The connector is compatible with Informix Database 12 and 14, and with version 4.50.11 of the Informix JDBC driver.

Important

Due to licensing requirements, the Debezium Informix connector archive does not include the Informix JDBC driver or the Change Stream client that Debezium requires to connect to an Informix database. Before you can use the connector, you must obtain the driver and the client library and add them to your connector environment.

Prerequisites
  • A database administrator must enable full row logging for the database and otherwise prepare the database and the database server for using the Change Data Capture API.
  • You have a copy of the Informix JDBC driver in your connector environment. The driver is available from Maven Central.
  • You have installed the Informix Change Streams API for Java.
    The Change Streams API for Java is packaged as part of the Informix JDBC installation, and is also available on Maven Central alongside the latest JDBC drivers. The API is required to enable CDC on the database.
MySQL parallel schema snapshots (Developer Preview)

The ability to use parallel schema snapshots with the MySQL connector continues to be provided as a Developer Preview feature.

To improve snapshot performance, the MySQL connector implements parallelization to concurrently snapshot change events and generate schema events for tables. By running snapshots and generating schema events in parallel, the connector reduces the time required to capture the schema for many tables in your database.

DBZ-6472

MariaDB and MySQL parallel initial snapshots (Developer Preview)

The Debezium initial snapshot for MySQL has always been single-threaded. This limitation primarily stems from the complexities of ensuring data consistency across multiple transactions.

In this release, you can configure a MySQL connector to use multiple threads to execute table-level snapshots in parallel.

In order to take advantage of this new feature, add the snapshot.max.threads property to the connector configuration, and set the property to a value greater than 1.

Example 1.1. Example configuration using parallel snapshots

snapshot.max.threads=4

Based on the configuration in the preceding example, the connector can snapshot a maximum of four tables at once. If the number of tables to snapshot is greater than four, after a thread finishes processing one of the first four tables, it then finds the next table in the queue and begins to perform a snapshot. The process continues until the connector finishes performing snapshots on all of the designated tables.

For more information, see snapshot.max.threads in the Debezium User Guide.

DBZ-823

Ingesting changes from a logical standby (Developer Preview)

The ability for the Oracle connector to capture changes from a logical standby continues to be available as a Developer Preview feature. When the Debezium connector for Oracle connects to a production or primary database, it uses an internal flush table to manage the flush cycles of the Oracle Log Writer Buffer (LGWR) process. The flush process requires that the user account through which the connector accesses the database has permission to create and write to this flush table. If standby database policies restrict data manipulation or prohibit write operations, the connector cannot write to the flush table.

To support an Oracle read-only logical standby database, Debezium introduces a property to disable the creation and management of the flush table. You can use this feature with both Oracle Standalone and Oracle RAC installations.

To enable the Oracle connector to use a read-only logical standby, add the following connector option:

internal.log.mining.read.only=true

For more information, see the Oracle connector documentation in the Debezium User Guide.

Using the XStream adapter to ingest changes (Developer Preview)

By default, the Debezium Oracle connector uses an adapter that connects to the native Oracle LogMiner utility to retrieve data from the redo logs. In database environments that support XStream, you can instead configure the connector to ingest change events through the Oracle XStream API.

For information about configuring Debezium to use the Oracle XStream adapter, see the Debezium Oracle connector documentation.

1.10. Other updates in this release

This Debezium 3.2.4 release provides multiple other feature updates and fixes. For a complete list, see the Debezium 3.2.4 Enhancement Advisory (RHEA-2025:154266-01).

1.11. Deprecated features

The following features are deprecated and will be removed in a future release:

Deprecation of schema_only and schema_only_recovery snapshot modes

The following snapshot modes are scheduled for removal:

  • The schema_only_recovery mode is deprecated and is replaced by the recovery mode.
  • The schema_only mode is deprecated and is replaced by the no_data mode.

    Important

    The current release continues to include the deprecated snapshot modes, but they are scheduled for removal in a future release. To prepare for their removal, adjust any scripts, configurations, and processes that depend on these deprecated modes.

    For information about features that were deprecated in the previous release, see the 3.0.8 Release Notes.
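
For example, a minimal sketch of updating a connector configuration to replace a deprecated snapshot mode with its replacement (mode names from this section):

# Deprecated
snapshot.mode=schema_only
# Replacement
snapshot.mode=no_data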

1.12. Known issues

The following known issues are present in Debezium 3.2.4:

SQL Server connector fails to resume incremental snapshot after a restart

If the connector restarts, or if a re-balance occurs while an incremental snapshot is in progress, the snapshot process ends and cannot be resumed. If you examine the offsets after the failure, you will find that during the snapshot the connector does not consistently record offsets, and that some offset commits omit information about the tables to snapshot. The absent table information prevents the connector from resuming the snapshot.

To recover from the failure, perform a full re-snapshot of the affected tables. Afterwards, you can restart your incremental snapshot for all tables.

Note

Running a full initial snapshot can be time-consuming and place a significant load on the source database, especially if the database contains large tables. If you do not need to capture all of the existing historical data, you can reduce the load on the database by setting the snapshot.mode in the connector configuration to no_data. When you apply this setting, the resulting snapshot does not copy the existing table contents. Instead, the connector reads the table schemas, and then transitions to streaming, capturing only those changes that occur in the source data after streaming begins.

DBZ-9533
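In a Kafka Connect properties-style configuration, the schema-only snapshot that the preceding note describes can be sketched as follows. The connector name is illustrative; the connector class and the snapshot.mode property are taken from the Debezium SQL Server connector documentation:

```properties
# Illustrative connector name
name=inventory-sqlserver-connector
connector.class=io.debezium.connector.sqlserver.SqlServerConnector
# Read table schemas only; do not copy existing rows, then begin streaming
snapshot.mode=no_data
```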

Parallel initial snapshot exception when snapshot for one table is slow

In some environments, failures can occur when a SQL-based connector is configured to run parallel initial snapshots. The failures occur when the time required to complete the snapshot for one table is significantly greater than the time required for the other tables. The threads for the completed snapshots then remain idle for an extended interval, and the connector is unable to gracefully close their connections. The inability to close the connections can result in an incomplete snapshot, even if all data is sent successfully. Environments are more susceptible to this problem if they include a network device, such as a load balancer or firewall, that terminates idle connections to the database.

If you experience this problem, revert the value of snapshot.max.threads to 1.

DBZ-7932
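If you need to apply the preceding workaround, the change is a single connector property, shown here in properties-style configuration:

```properties
# Disable parallel initial snapshots to avoid idle-connection failures
snapshot.max.threads=1
```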

Negative binlog position values for MariaDB
A new binary log format was introduced in MariaDB 11.4. Debezium cannot consume events in this new format. If Debezium connects to a database that runs MariaDB 11.4 or later, you must set the MariaDB server variable binlog_legacy_event_pos to 1 (ON) so that the server records event positions in the legacy format that the connector can consume. If you leave this variable at its default setting (0, or OFF), after a connector restart, Debezium might not be able to find the resume point.

DBZ-8755
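As a sketch, the binlog_legacy_event_pos variable can be set dynamically on the MariaDB server, or persisted in the server configuration file so that it survives a server restart; consult the MariaDB documentation for your version before applying it:

```sql
-- Enable legacy binary log event positions (takes effect for new events)
SET GLOBAL binlog_legacy_event_pos = 1;
```

To persist the setting, add binlog_legacy_event_pos = 1 under the [mariadb] section of the server configuration file.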

Apicurio registry endless rebalance loop on Kafka
For more information, see Apicurio registry 2.4.3 and 2.4.4 causes endless rebalance loop on Kafka.

Legal Notice

Copyright © 2025 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.