Release Notes for Red Hat build of Debezium 3.2.4
What's new in Red Hat build of Debezium
Chapter 1. Debezium 3.2.4 release notes
Debezium is a distributed change data capture platform that captures row-level changes that occur in database tables and then passes corresponding change event records to Apache Kafka topics. Applications can read these change event streams and access the change events in the order in which they occurred.
Debezium is built on Apache Kafka and is deployed and integrated with Streams for Apache Kafka on OpenShift Container Platform or on Red Hat Enterprise Linux.
The following topics provide release details:
- Section 1.1, “Debezium database connectors”
- Section 1.2, “Connector usage notes”
- Section 1.3, “Debezium supported configurations”
- Section 1.4, “Debezium installation options”
- Section 1.5, “Debezium 3.2.4 features and improvements”
- Section 1.6, “Breaking changes”
- Section 1.7, “General availability features”
- Section 1.8, “Technology Preview features”
- Section 1.9, “Developer Preview features”
- Section 1.10, “Other updates in this release”
- Section 1.11, “Deprecated features”
- Section 1.12, “Known issues”
1.1. Debezium database connectors
Debezium provides source connectors and sink connectors that are based on Kafka Connect. Connectors are available for the following common databases:
- Source connectors
- Db2
- Informix (Developer Preview)
- MariaDB
- MongoDB
- MySQL
- Oracle
- PostgreSQL
- SQL Server
- Sink connectors
- JDBC sink connector
- MongoDB sink connector (Developer Preview)
1.2. Connector usage notes
- Db2
  - The Debezium Db2 connector does not include the Db2 JDBC driver (jcc-11.5.0.0.jar). See the Db2 connector deployment instructions for information about how to deploy the necessary JDBC driver.
  - The Db2 connector requires the use of the abstract syntax notation (ASN) libraries, which are available as a standard part of Db2 for Linux.
  - To use the ASN libraries, you must have a license for IBM InfoSphere Data Replication (IIDR). You do not have to install IIDR to use the libraries.
- Oracle
  - The Debezium Oracle connector does not include the Oracle JDBC driver (21.15.0.0). See the Oracle connector deployment instructions for information about how to deploy the necessary JDBC driver.
- PostgreSQL
  - To use the Debezium PostgreSQL connector, you must use the pgoutput logical decoding output plug-in, which is the default for PostgreSQL versions 10 and later.
1.3. Debezium supported configurations
For information about Debezium supported configurations, including information about supported database versions, see the Debezium 3.2.4 Supported configurations page.
1.4. Debezium installation options
You can install Debezium with Streams for Apache Kafka on OpenShift or on Red Hat Enterprise Linux.
1.5. Debezium 3.2.4 features and improvements
For information about changes that were introduced in the previous Debezium release, see the Debezium 3.0.8 Release Notes.
1.6. Breaking changes
Breaking changes represent significant differences in connector behavior or require configuration changes that are not compatible with earlier Debezium versions.
For a list of breaking changes in the previous release, see the Debezium 3.0.8 Release Notes.
Debezium 3.2.4 introduces breaking changes that affect the following components:
1.6.1. Breaking changes relevant to all connectors
The following breaking changes apply to all connectors:
- Java 17 is required
- All Debezium connectors require a runtime baseline of Java 17.
- If you use Java 11 with new connectors, Kafka Connect silently fails to find the connector. The connector does not report any bytecode errors.
- To run Debezium Server, a runtime baseline of Java 21 is required.
- Kafka 4.0 support
Debezium is now built and tested using Apache Kafka 4.0.
For details about compatibility between Debezium, Streams for Apache Kafka, and Kafka, check the Debezium supported configurations document.
- Event source block is versioned
Debezium change events contain a source information block that includes attributes that describe the origin of a change event. The source information block is a Kafka Struct data type, and can be versioned. However, in earlier Debezium versions, no version information was associated with the block.
Beginning in this release, the source information block is versioned, and its initial version is set to 1. Future changes will increment the version value.
Note: If you use a schema registry in your Debezium environment, you can expect this change to result in schema compatibility issues.
1.6.2. MariaDB/MySQL connector breaking changes
- Schema history no longer stores certain DDL statements
In earlier releases, the connector stored certain DDL events, such as TRUNCATE and REPLACE, in the internal Debezium schema history topic. Debezium does not require these types of statements to represent schema evolution. Beginning with this release, the connector no longer captures these DDL events in the internal schema history topic.
- Improved missing log position validation
In past releases, when you set the snapshot.mode of the Debezium MySQL connector to a value other than when_needed, if the connector could not find the binary log position, it logged a warning and reported that it would resume from the last available position in the logs. However, after the snapshot completed and the connector transitioned to the streaming phase, it immediately failed, reporting that the binary log position could not be located.
Beginning with this release, for consistency with the behavior of other connectors, the connector reports any error that it detects during the validation phase.
1.6.3. Oracle connector breaking changes
- Removed Oracle LogMiner JMX metrics
In this release, a number of Oracle LogMiner JMX metrics that were deprecated in an earlier release are no longer available. Most of the removed metrics are replaced with new metrics, but in one case a metric is removed with no replacement. Refer to the following table to learn more about the status of the removed metrics.

Removed JMX Metric | Replacement
CurrentRedoLogFileName | CurrentLogFileNames
RedoLogStatus | RedoLogStatuses
SwitchCounter | LogSwitchCount
FetchingQueryCount | FetchQueryCount
HoursToKeepTransactionInBuffer | MillisecondsToKeepTransactionsInBuffer
TotalProcessingTimeInMilliseconds | TotalBatchProcessingTimeInMilliseconds
RegisteredDmlCount | TotalChangesCount
MillisecondsToSleepBetweenMiningQuery | SleepTimeInMilliseconds
NetworkConnectionProblemsCounter | Removed with no replacement.

Review your monitoring and observability settings and adjust any that rely on metrics that are no longer available.
- Change to reselect column post processor behavior for Oracle LOB columns
Beginning in this release, the ReselectColumnsPostProcessor reselects Oracle LOB columns even if you configure the connector so that it does not emit values for these columns (that is, lob.enabled is set to false). Because LOB columns store large amounts of data, mining these columns during the streaming phase increases the load on the database, the connector, and the network, contributing to a decrease in performance. Rather than mining LOB content directly from the redo logs during the streaming phase, by using the column reselection process, you can retrieve LOB data separately, improving control and efficiency by populating the data only when you actually need it.
- Query timeout now applies to Oracle LogMiner queries
When the Oracle connector executes its initial query to fetch data from LogMiner, the database.query.timeout.ms connector configuration property controls how long the query can run before it is canceled. When upgrading, check the connector metric MaxDurationOfFetchQueryInMilliseconds to determine whether this new property needs adjustment. By default, the timeout is 10 minutes; to disable the timeout, set the value to 0.
- TLS using JKS
The Debezium connector for Oracle supports using JKS to configure TLS connections. To use JKS, special connector configurations are required to provide the right information to the Oracle JDBC driver. This release adds documentation that describes the process for using JKS to configure TLS connections.
1.6.4. PostgreSQL connector breaking changes
- Sparse vector logical type renamed
The PostgreSQL extension vector (also known as pgvector) provides an implementation of a variety of vector data types, including one called sparsevec. A sparse vector is one that stores only the populated key and value entries within the vector. Unpopulated fields are ignored to reduce the amount of space that is required to represent the data set.
The Debezium 3.0.8 release introduced the SparseVector logical type, io.Debezium.data.SparseVector. With this release, the name changes to io.Debezium.data.SparseDoubleVector.
Note: If you previously worked with SparseVector logical types, examine your code to verify that it recognizes the new logical type name.
1.7. General availability features
Debezium 3.2.4 provides new features for the following connectors:
1.7.1. Features promoted to General Availability
The following features are promoted from Technology Preview to General Availability (GA) in the Debezium 3.2.4 release. For information about other GA features, see Section 1.6, “Breaking changes”.
1.7.1.1. MySQL connector features promoted to GA
- Support for MySQL 9
- Beginning in this release, you can use the Debezium connector for MySQL with MySQL 9.
- Support for MySQL vector data types
In this release, the Debezium MySQL grammar provides support for processing vector functions. With this change, the Debezium MySQL connector can process the new VECTOR(n) data type that is available in MySQL 9.0. For more information about how Debezium processes vector types, see the MySQL connector documentation.
1.7.1.2. Oracle connector features promoted to GA
- Compatibility with Oracle Database 23ai
This release of the Debezium Oracle connector provides support for Oracle 23ai databases. However, the connector does not support the following features that are available in Oracle 23ai:
- Table Value Constructors
- JavaScript-based stored procedures
- Domain data types
- Updates that use JOIN conditions
- Boolean data types
- Vector data types
- Support for Oracle EXTENDED string sizes
In Oracle 12c, you can set the database parameter max_string_size to EXTENDED to enable the use of extended strings, which increase the maximum size of character data types from 4000 bytes to 32K. When you enable the use of extended strings, you do not have to use CLOB-based operations to work with character data up to 32K. Instead, you can use the same syntax that you use with character data that is 4000 bytes or less.
In this release, the Oracle connector can capture changes directly from the transaction log data for databases that use extended strings. Because extended strings are effectively CLOB operations, to mine these column types, you must set lob.enabled to true.
For more information about connector support for extended string sizes, see the Oracle connector documentation in the Debezium User Guide.
Important: When Oracle is configured to use EXTENDED string sizes, LogMiner can sometimes fail to escape single quotes within extended string fields. As a result, values of these fields can be truncated, resulting in invalid SQL statements, which the Oracle connector is unable to parse. For more information, see DBZ-8034.
To mitigate this problem, you can configure the connector to relax single-quote detection by setting the following property to true:
internal.log.mining.sql.relaxed.quote.detection
Although this internal setting can be useful in resolving some instances of the problem, its use is not currently a supported feature.
1.7.1.3. PostgreSQL connector features promoted to GA
- Support for PostgreSQL 17 failover replication slots
PostgreSQL 17 adds support for failover slots, which are replication slots that are automatically synchronized to a standby server.
When you create a replication slot on the primary PostgreSQL server in a cluster, you can configure it to be replicated to a failover replica. A PostgreSQL administrator can then manually synchronize failover replication slots by calling the pg_sync_replication_slots() function, or can configure automatic synchronization by setting the value of the sync_replication_slots parameter to true. When automatic synchronization is enabled, if a failover occurs, Debezium can immediately switch over to consuming events from the failover slot on the replica, and thus not miss any events.
To enable Debezium to consume events from a failover slot, set the value of the slot.failover property in the Debezium PostgreSQL connector configuration to true. This feature is available only if you configure Debezium to connect to the primary server in a cluster that runs PostgreSQL 17 or later. Failover replication slots are not created for databases that run earlier PostgreSQL releases.
For more information, see Supported topologies and slot.failover in the Debezium PostgreSQL connector documentation.
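As a sketch, a connector that consumes from a failover slot as described above might add a single property to an otherwise ordinary configuration. The slot name below is a placeholder, not a value from this document:

```properties
# Consume from a failover-enabled replication slot (PostgreSQL 17 or later).
# The slot name is a placeholder; pgoutput is the required decoding plug-in.
plugin.name=pgoutput
slot.name=debezium_slot
slot.failover=true
```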
1.7.2. JDBC sink connector GA features
- JDBC connector support for vector data types
In this release, the Debezium JDBC sink connector provides support for processing some vector data types. For details about how the connector maps vector data types, see the JDBC connector documentation.
1.7.3. Oracle connector GA features
- Support for EXTENDED string sizes
- Support for Oracle 23ai
For more information, see the entries for these features in Features promoted to General availability.
1.7.4. PostgreSQL connector GA features
- Support for PostgreSQL pgvector data types
PostgreSQL 15 introduced the pgvector extension, which provides the following data types:
- vector
- halfvec
- sparsevec
Beginning in Debezium 3.2.4, the PostgreSQL connector supports streaming of events that use these pgvector data types. After you enable the pgvector extension in the database, no further configuration is required for Debezium to convert vector values. When the connector emits a change event record for an operation that involves one of these data types, it converts each vector type in the source to a semantic type, according to the mappings in the following list:
- vector — Mapped to an io.Debezium.data.DoubleVector semantic type.
- halfvec — Mapped to an io.Debezium.data.FloatVector semantic type.
- sparsevec — Mapped to an io.Debezium.data.SparseDoubleVector semantic type.
For more information, see the documentation for PostgreSQL connector pgvector types.
1.8. Technology Preview features
The following Technology Preview features are available in Debezium 3.2.4:
Technology Preview features are not supported with Red Hat production service-level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend implementing any Technology Preview features in production environments. Technology Preview features provide early access to upcoming product innovations, enabling you to test functionality and provide feedback during the development process. For more information about support scope, see Technology Preview Features Support Scope.
1.8.1. Features promoted from Developer Preview to Technology Preview
In this release, the following features are promoted from Developer Preview to Technology Preview:
- MongoDB sink connector (Technology Preview)
In this release, the earlier MongoDB sink connector Developer Preview feature is promoted to Technology Preview. The Debezium MongoDB sink connector differs from other vendor implementations in that it can ingest raw change events emitted by Debezium connectors without first applying an event flattening transformation. The MongoDB sink connector can take advantage of native Debezium source connector features, such as column type propagation, enabling you to potentially reduce the processing footprint of your data pipeline, and simplify its configuration.
Unlike the JDBC sink relational connector that requires an additional plug-in to be installed to use it, the MongoDB sink connector is bundled alongside the MongoDB source connector in the same artifact. So if you install the Debezium 3.2.4 MongoDB source connector, you also have the MongoDB sink connector.
Minimal configuration is required to get started with the MongoDB sink connector.
The following configuration properties are mandatory:
- connection.string — Provides the details for connecting to the MongoDB sink database.
- sink.database — Provides the name of the target database where the changes will be written.
- topics — Specifies a comma-separated list of regular expressions that describe the topics from which the sink connector reads event records.
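Putting the mandatory properties together, a minimal sink configuration might look like the following sketch. The connector class name, connection string, database name, and topic pattern are illustrative placeholders, not values from this document:

```properties
# Hypothetical minimal Debezium MongoDB sink connector configuration.
# All values, including the connector class name, are placeholder assumptions.
name=mongodb-sink
connector.class=io.debezium.connector.mongodb.sink.MongoDbSinkConnector
connection.string=mongodb://mongodb.example.com:27017
sink.database=inventory
topics=dbserver1\.inventory\..*
```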
For more information, see the Debezium MongoDB sink connector documentation.
- Exactly-once delivery for PostgreSQL streaming (Technology Preview)
In this release, the Developer Preview of exactly-once semantics for the PostgreSQL connector is promoted to Technology Preview. Exactly-once delivery for PostgreSQL applies only to the streaming phase; exactly-once delivery does not apply to snapshots.
Debezium was designed to provide at-least-once delivery with a goal of ensuring that connectors capture all change events that occur in the monitored sources. In KIP-618 the Apache Kafka community proposes a solution to address problems that occur when a producer retries a message. Source connectors sometimes resend batches of events to the Kafka broker, even after the broker previously committed the batch. This situation can result in duplicate events being sent to consumers (sink connectors), which can cause problems for consumers that do not handle duplicates easily.
No connector configuration changes are required to enable exactly-once delivery. However, exactly-once delivery must be configured in your Kafka Connect worker configuration. For information about setting the required Kafka Connect configuration properties, see KIP-618.
Note: To set exactly.once.support to required in the Kafka worker configuration, all connectors in the Kafka Connect cluster must support exactly-once delivery. If you attempt to set this option in a cluster in which workers do not consistently support exactly-once delivery, the connectors that do not support this feature fail validation at start-up.
- MongoDB source connector collection-scoped streaming (Technology Preview)
This feature is promoted from Developer Preview to Technology Preview.
In previous versions of the Debezium MongoDB source connector, change streams could be opened against the deployment and database scopes, which was not always ideal for restrictive permission environments. Debezium 3.2.4 introduces a new change stream mode in which the connector operates within the scope of a single collection only, allowing for such granular permissive configurations.
To limit the capture scope of a MongoDB connector to a specific collection, set the value of the capture.scope property in the connector configuration to collection. Use this setting when you intend for a connector to capture changes only from a single MongoDB collection.
The following limitation applies to the use of this feature:
If you set the value of the capture.scope property to collection, the connector cannot use the default source signaling channel. Enabling the source channel for a connector is required to permit processing of incremental snapshot signals, including signals sent via the Kafka, JMX, or File channels. Thus, if you set the value of the capture.scope property in the connector configuration to collection, the connector cannot perform incremental snapshots.
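As a sketch, a collection-scoped configuration might combine the capture scope setting with a collection filter. The database and collection names below are placeholders:

```properties
# Restrict the MongoDB change stream to a single collection.
# The include-list value is a placeholder example.
capture.scope=collection
collection.include.list=inventory.customers
```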
1.8.2. New features available for Technology Preview
In this release, the following new feature is available as a Technology Preview:
- Oracle connector unbuffered LogMiner adapter (Technology Preview)
In this release, when you configure the Oracle connector to use the native Oracle LogMiner API to read and stream changes from database transaction logs, you can specify that the connector streams committed changes only. In this committed changes mode, the connector forwards change event records immediately to destination topics without first buffering them. Because the connector does not maintain an internal transaction buffer, it requires less connector memory than the default buffered logminer setting. However, the source database experiences a corresponding increase in its processing load and memory requirements. To set this option, assign the value logminer_unbuffered to the Oracle connector property, database.connection.adapter.
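For example, switching to the unbuffered mode described above requires only the following connector setting:

```properties
# Stream committed changes only; no internal transaction buffer is maintained.
database.connection.adapter=logminer_unbuffered
```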
1.8.2.1. Previously available Technology Preview features
In this release, the following features that were introduced in earlier releases remain in Technology Preview:
- Use of XML data types with the Oracle connector (Technology Preview)
The Debezium Oracle connector can process the XMLType data type that Oracle uses for handling XML data in the database. For information about the mappings that the connector uses for the Oracle XMLType, see XML types in the connector documentation.
- CloudEvents converter (Technology Preview)
-
The CloudEvents converter emits change event records that conform to the CloudEvents specification. The CloudEvents change event envelope can be JSON or Avro, and each envelope type supports JSON or Avro as the data format. For more information, see CloudEvents converter.
- Custom converters (Technology Preview)
- In cases where the default data type conversions do not meet your needs, you can create custom converters to use with a connector. For more information, see Custom-developed converters.
1.9. Developer Preview features
Debezium 3.2.4 includes several Developer Preview features.
Developer Preview features are not supported by Red Hat in any way and are not functionally complete or production-ready. Do not use Developer Preview software for production or business-critical workloads. Developer Preview software provides early access to upcoming product software in advance of its possible inclusion in a Red Hat product offering. Customers can use this software to test functionality and provide feedback during the development process. This software might not have any documentation, is subject to change or removal at any time, and has received limited testing. Red Hat might provide ways to submit feedback on Developer Preview software without an associated SLA.
For more information about the support scope of Red Hat Developer Preview software, see Developer Preview Support Scope.
The following new Developer Preview features are available in this release:
This release also includes the following Developer Preview features, which were introduced in earlier releases:
- Debezium Server
- Informix source connector
For information about these features, see Section 1.9.1, “Previously available Developer Preview features”.
- Debezium AI embeddings SMT (Developer Preview)
To assist you in preparing unstructured text content for use in large language models, use the Debezium AI embeddings SMT to convert text captured by a connector into a numeric representation. The SMT can use the following model providers to generate text embeddings:
- Hugging Face
- Ollama
- ONNX MiniLM
- Voyage AI
For more information about configuring and using the AI embeddings SMT, see the Debezium community documentation.
- OpenLineage integration (Developer Preview)
You can integrate Debezium with OpenLineage to track how data moves through the various systems, transformations, and processes in your data pipeline. The integration with OpenLineage maps events in your data pipeline to artifacts in the OpenLineage data model to provide visibility into where data originates, how it moves, and what dependencies exist in the data pipeline.
For more information about how to enable and configure the OpenLineage integration, see the Debezium community documentation.
- Debezium Quarkus extension (Developer Preview)
The Debezium Quarkus outbox extension assists Quarkus applications in implementing the outbox pattern to share data asynchronously among microservices. The extension works in parallel with a Debezium connector that is configured to use the Outbox Event router SMT to monitor the outbox table.
When an application processes a change that modifies a record in its database, the change triggers the Quarkus extension, which exports a change data record that describes the event to an outbox table. The SMT that monitors the outbox table detects the change, and routes the change event message to a configured Kafka topic. Other microservices in the environment can then consume event messages asynchronously from Kafka, and process the event data as input.
For more information about how to configure and use the Quarkus outbox extension, see the Debezium community documentation.
- Web assembly scripting support (TinyGo) (Developer Preview)
Beginning with this release, you can use the TinyGo implementation of the Go language to compile scripts for use in the content-based routing and filtering single message transformations. For more information see the documentation for the topic routing and filtering SMTs.
1.9.1. Previously available Developer Preview features
The following Developer Preview features were introduced in earlier releases:
- Debezium Server (Developer Preview)
- This release continues to make the Debezium Server available as a Developer Preview feature. The Debezium Server is a ready-to-use application that streams change events from a data source directly to a configured Kafka or Redis data sink. Unlike the generally available Debezium connectors, the deployment of Debezium Server has no dependencies on Apache Kafka Connect. For more information about the Debezium Server Developer Preview, see the Debezium User Guide.
- Informix source connector (Developer Preview)
In this release, the Debezium source connector for Informix is available as a Developer Preview feature.
The Debezium Informix source connector captures row-level changes from tables in an IBM Informix database. The connector is based on the Debezium Db2 connector, and uses the Informix Change Data Capture API for Java to capture transactional data. This API captures data from the current logical log and processes transactions sequentially.
The connector is compatible with Informix Database 12 and 14, and with version 4.50.11 of the Informix JDBC driver.
ImportantDue to licensing requirements, the Debezium Informix connector archive does not include the Informix JDBC driver or the Change Stream client that Debezium requires to connect to an Informix database. Before you can use the connector, you must obtain the driver and the client library and add them to your connector environment.
- Prerequisites
- A database administrator must enable full row logging for the database and otherwise prepare the database and the database server for using the Change Data Capture API.
- You have a copy of the Informix JDBC driver in your connector environment. The driver is available from Maven Central.
-
You have installed the Informix Change Streams API for Java.
The Change Streams API for Java is packaged as part of the Informix JDBC installation, and is also available on Maven Central alongside the latest JDBC drivers. The API is required to enable CDC on the database.
- MySQL parallel schema snapshots (Developer Preview)
The ability to use parallel schema snapshots with the MySQL connector continues to be provided as a Developer Preview feature.
To improve snapshot performance, the MySQL connector implements parallelization to concurrently snapshot change events and generate schema events for tables. By running snapshots and generating schema events in parallel, the connector reduces the time required to capture the schema for many tables in your database.
- MariaDB and MySQL parallel initial snapshots (Developer Preview)
The Debezium initial snapshot for MySQL has always been single-threaded. This limitation primarily stems from the complexities of ensuring data consistency across multiple transactions.
In this release, you can configure a MySQL connector to use multiple threads to execute table-level snapshots in parallel.
To take advantage of this new feature, add the snapshot.max.threads property to the connector configuration, and set the property to a value greater than 1.
Example 1.1. Example configuration using parallel snapshots
snapshot.max.threads=4
Based on the configuration in the preceding example, the connector can snapshot a maximum of four tables at once. If the number of tables to snapshot is greater than four, after a thread finishes processing one of the first four tables, it then finds the next table in the queue and begins to perform a snapshot. The process continues until the connector finishes performing snapshots on all of the designated tables.
For more information, see snapshot.max.threads in the Debezium User Guide.
- Ingesting changes from a logical standby (Developer Preview)
The ability for the Oracle connector to capture changes from a logical standby continues to be available as a Developer Preview feature. When the Debezium connector for Oracle connects to a production or primary database, it uses an internal flush table to manage the flush cycles of the Oracle Log Writer Buffer (LGWR) process. The flush process requires that the user account through which the connector accesses the database has permission to create and write to this flush table. If standby database policies restrict data manipulation or prohibit write operations, the connector cannot write to the flush table.
To support an Oracle read-only logical standby database, Debezium introduces a property to disable the creation and management of the flush table. You can use this feature with both Oracle Standalone and Oracle RAC installations.
To enable the Oracle connector to use a read-only logical stand-by, add the following connector option:
internal.log.mining.read.only=true
For more information, see the Oracle connector documentation in the Debezium User Guide.
- Using the XStream adapter to ingest changes (Developer Preview)
By default, the Debezium Oracle connector uses an adapter that connects to the native Oracle LogMiner utility to retrieve data from the redo logs. In database environments that support XStream, you can instead configure the connector to ingest change events through the Oracle XStream API.
For information about configuring Debezium to use the Oracle XStream adapter, see the Debezium Oracle connector documentation.
1.10. Other updates in this release
This Debezium 3.2.4 release provides multiple other feature updates and fixes. For a complete list, see the Debezium 3.2.4 Enhancement Advisory (RHEA-2025:154266-01).
1.11. Deprecated features
The following features are deprecated and will be removed in a future release:
- Deprecation of schema_only and schema_only_recovery snapshot modes
The following snapshot modes are scheduled for removal:
- The schema_only_recovery mode is deprecated and is replaced by the recovery mode.
- The schema_only mode is deprecated and is replaced by the no_data mode.
Important: The current release continues to include the deprecated snapshot modes, but they are scheduled for removal in a future release. To prepare for their removal, adjust any scripts, configurations, and processes that depend on these deprecated modes.
For information about features that were deprecated in the previous release, see the 3.0.8 Release Notes.
1.12. Known issues
The following known issues are present in Debezium 3.2.4:
- SQL Server connector fails to resume incremental snapshot after a restart
If the connector restarts, or if a re-balance occurs while an incremental snapshot is in progress, the snapshot process ends and cannot be resumed. If you examine the offsets after the failure, you will find that during the snapshot the connector does not consistently record offsets; some offset commits omit information about the tables to snapshot. The absent table information prevents the connector from resuming the snapshot.
To recover from the failure, perform a full re-snapshot of the affected tables. Afterwards, you can restart your incremental snapshot for all tables.
Note: Running a full initial snapshot can be time-consuming and place a significant load on the source database, especially if the database contains large tables. If you do not need to capture all of the existing historical data, you can reduce the load on the database by setting the snapshot.mode in the connector configuration to no_data. When you apply this setting, the resulting snapshot does not copy the existing table contents. Instead, the connector reads the table schemas, and then transitions to streaming, capturing only those changes that occur in the source data after streaming begins.
- Parallel initial snapshot exception when snapshot for one table is slow
In some environments, failures can occur when a SQL-based connector is configured to run parallel initial snapshots. This occurs because the time required to complete a snapshot for one table is significantly greater than the time required to complete snapshots for other tables. As a result, the threads for the completed snapshots remain idle for an extended interval, so that the connector is unable to gracefully close the connection. The inability to close the connection can result in an incomplete snapshot, even if all data is sent successfully. Environments are more susceptible to this problem if they include a network device that terminates idle connections to the database, such as a load balancer or firewall.
If you experience this problem, revert the value of snapshot.max.threads to 1.
- Negative binlog position values for MariaDB
-
A new binary log format was introduced in MariaDB 11.4. Debezium cannot consume events in this new format. If Debezium connects to a database that runs MariaDB 11.4 or later, you must set the MariaDB server variable binlog_legacy_event_pos to 1 (ON) to ensure that the connector can consume events. If you leave this variable at its default setting (0, or OFF), after a connector restart, Debezium might not be able to find the resume point.
- Apicurio registry endless rebalance loop on Kafka
- For more information, see Apicurio registry 2.4.3 and 2.4.4 causes endless rebalance loop on Kafka