
Chapter 6. Debezium Connector for Oracle (Developer Preview)


Debezium’s Oracle connector captures and records row-level changes that occur in databases on an Oracle server, including tables that are added while the connector is running. You can configure the connector to emit change events for specific subsets of schemas and tables, or to ignore, mask, or truncate values in specific columns.

Debezium ingests change events from Oracle by using the native LogMiner database package.

Important

The Debezium Oracle connector is a Developer Preview feature. Developer Preview features provide early access to upcoming product innovations, enabling you to test functionality and provide feedback during the development process. A Developer Preview feature is not supported with Red Hat production service-level agreements (SLAs) and it might not be functionally complete; therefore, Red Hat does not recommend implementing any Developer Preview features in production environments. If you need assistance with this feature, you can engage with the Debezium community.

Information and procedures for using a Debezium Oracle connector are organized as follows:

6.1. How Debezium Oracle connectors work

To optimally configure and run a Debezium Oracle connector, it is helpful to understand how the connector performs snapshots, streams change events, determines Kafka topic names, and uses metadata.

Details are in the following topics:

6.1.1. How Debezium Oracle connectors perform database snapshots

Typically, the redo logs on an Oracle server are configured to not retain the complete history of the database. As a result, the Debezium Oracle connector cannot retrieve the entire history of the database from the logs. To enable the connector to establish a baseline for the current state of the database, the first time that the connector starts, it performs an initial consistent snapshot of the database.

You can customize the way that the connector creates snapshots by setting the value of the snapshot.mode connector configuration property. By default, the connector’s snapshot mode is set to initial.

Default connector workflow for creating an initial snapshot

When the snapshot mode is set to the default, the connector completes the following tasks to create a snapshot:

  1. Determines the tables to be captured
  2. Obtains an EXCLUSIVE MODE lock on each of the monitored tables to prevent structural changes from occurring during creation of the snapshot. Debezium holds the locks for only a short time.
  3. Reads the current system change number (SCN) position from the server’s redo log.
  4. Captures the structure of all relevant tables.
  5. Releases the locks obtained in Step 2.
  6. Scans all of the relevant database tables and schemas as valid at the SCN position that was read in Step 3 (SELECT * FROM … AS OF SCN 123), generates a READ event for each row, and then writes the event records to the table-specific Kafka topic.
  7. Records the successful completion of the snapshot in the connector offsets.

After the snapshot process begins, if the process is interrupted due to connector failure, rebalancing, or other reasons, the process restarts after the connector restarts. After the connector completes the initial snapshot, it continues streaming from the position that it read in Step 3 so that it does not miss any updates. If the connector stops again for any reason, after it restarts, it resumes streaming changes from where it previously left off.

Table 6.1. Settings for snapshot.mode connector configuration property
Setting | Description

initial

The connector performs a database snapshot as described in the default workflow for creating an initial snapshot. After the snapshot completes, the connector begins to stream event records for subsequent database changes.

schema_only

The connector captures the structure of all relevant tables, performing all of the steps described in the default snapshot workflow, except that it does not create READ events to represent the data set at the point of the connector’s start-up (Step 6).
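For example, to capture only the table structure and skip reading existing rows during the snapshot, you might set the property as shown in the following minimal configuration fragment; all other required connector properties are omitted:

# Capture table structure only; do not emit READ events for existing rows
snapshot.mode=schema_only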

6.1.2. Default names of Kafka topics that receive Debezium Oracle change event records

The default behavior is that a Debezium Oracle connector writes events for all INSERT, UPDATE, and DELETE operations in one table to one Kafka topic. The Kafka topic naming convention is as follows:

serverName.schemaName.tableName

For example, if fulfillment is the server name, inventory is the schema name, and the database contains tables with the names orders, customers, and products, the Debezium Oracle connector emits events to the following Kafka topics, one for each table in the database:

fulfillment.inventory.orders
fulfillment.inventory.customers
fulfillment.inventory.products

6.1.3. How Debezium Oracle connectors expose database schema changes

The Debezium Oracle connector stores the history of schema changes in a database history topic. This topic reflects an internal connector state and you should not use it directly. Applications that require notifications about schema changes should obtain the information from the public schema change topic. The connector writes schema change events to a Kafka topic named <serverName>, where serverName is the name of the connector that is specified in the database.server.name configuration property.

Debezium emits a new message to this topic whenever it streams data from a new table. The message contains a logical representation of the table schema.

Example: Message emitted to the schema change topic

The following example shows a typical schema change message in JSON format:

{
  "schema": {
  ...
  },
  "payload": {
    "source": {
      "version": "1.5.4.Final",
      "connector": "oracle",
      "name": "server1",
      "ts_ms": 1588252618953,
      "snapshot": "true",
      "db": "ORCLPDB1",
      "schema": "DEBEZIUM",
      "table": "CUSTOMERS",
      "txId" : null,
      "scn" : "1513734",
      "commit_scn": "1513734",
      "lcr_position" : null
    },
    "databaseName": "ORCLPDB1", 1
    "schemaName": "DEBEZIUM", //
    "ddl": "CREATE TABLE \"DEBEZIUM\".\"CUSTOMERS\" \n   (    \"ID\" NUMBER(9,0) NOT NULL ENABLE, \n    \"FIRST_NAME\" VARCHAR2(255), \n    \"LAST_NAME" VARCHAR2(255), \n    \"EMAIL\" VARCHAR2(255), \n     PRIMARY KEY (\"ID\") ENABLE, \n     SUPPLEMENTAL LOG DATA (ALL) COLUMNS\n   ) SEGMENT CREATION IMMEDIATE \n  PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 \n NOCOMPRESS LOGGING\n  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645\n  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1\n  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)\n  TABLESPACE \"USERS\" ", 2
    "tableChanges": [ 3
      {
        "type": "CREATE", 4
        "id": "\"ORCLPDB1\".\"DEBEZIUM\".\"CUSTOMERS\"", 5
        "table": { 6
          "defaultCharsetName": null,
          "primaryKeyColumnNames": [ 7
            "ID"
          ],
          "columns": [ 8
            {
              "name": "ID",
              "jdbcType": 2,
              "nativeType": null,
              "typeName": "NUMBER",
              "typeExpression": "NUMBER",
              "charsetName": null,
              "length": 9,
              "scale": 0,
              "position": 1,
              "optional": false,
              "autoIncremented": false,
              "generated": false
            },
            {
              "name": "FIRST_NAME",
              "jdbcType": 12,
              "nativeType": null,
              "typeName": "VARCHAR2",
              "typeExpression": "VARCHAR2",
              "charsetName": null,
              "length": 255,
              "scale": null,
              "position": 2,
              "optional": false,
              "autoIncremented": false,
              "generated": false
            },
            {
              "name": "LAST_NAME",
              "jdbcType": 12,
              "nativeType": null,
              "typeName": "VARCHAR2",
              "typeExpression": "VARCHAR2",
              "charsetName": null,
              "length": 255,
              "scale": null,
              "position": 3,
              "optional": false,
              "autoIncremented": false,
              "generated": false
            },
            {
              "name": "EMAIL",
              "jdbcType": 12,
              "nativeType": null,
              "typeName": "VARCHAR2",
              "typeExpression": "VARCHAR2",
              "charsetName": null,
              "length": 255,
              "scale": null,
              "position": 4,
              "optional": false,
              "autoIncremented": false,
              "generated": false
            }
          ]
        }
      }
    ]
  }
}
Table 6.2. Descriptions of fields in messages emitted to the schema change topic
Item | Field name | Description

1

databaseName
schemaName

Identifies the database and the schema that contains the change.

2

ddl

This field contains the DDL that is responsible for the schema change.

3

tableChanges

An array of one or more items that contain the schema changes generated by a DDL command.

4

type

Describes the kind of change. The value is one of the following:

CREATE
Table created.
ALTER
Table modified.
DROP
Table deleted.

5

id

Full identifier of the table that was created, altered, or dropped.

6

table

Represents table metadata after the applied change.

7

primaryKeyColumnNames

List of columns that compose the table’s primary key.

8

columns

Metadata for each column in the changed table.

Messages that the connector sends to the schema change topic use a message key that is equal to the name of the database that contains the schema change. In the following example, the payload field contains the key:

{
  "schema": {
    "type": "struct",
    "fields": [
      {
        "type": "string",
        "optional": false,
        "field": "databaseName"
      }
    ],
    "optional": false,
    "name": "io.debezium.connector.oracle.SchemaChangeKey"
  },
  "payload": {
    "databaseName": "ORCLPDB1"
  }
}
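If you want to inspect the schema change topic directly, you can use a standard Kafka console consumer. The following command is an illustrative sketch; it assumes a broker that listens on localhost:9092 and a connector logical name of server1, both of which are placeholders for your environment:

bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic server1 --from-beginning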

6.1.4. Debezium Oracle connector-generated events that represent transaction boundaries

Debezium can generate events that represent transaction metadata boundaries and that enrich data change event messages.

Database transactions are represented by a statement block that is enclosed between the BEGIN and END keywords. Debezium generates transaction boundary events for the BEGIN and END delimiters in every transaction. Transaction boundary events contain the following fields:

status
BEGIN or END
id
String representation of unique transaction identifier.
event_count (for END events)
Total number of events emitted by the transaction.
data_collections (for END events)
An array of pairs of data_collection and event_count elements that indicates the number of events emitted by changes that originate from the given data collection.

The following example shows a typical transaction boundary message:

Example: Oracle connector transaction boundary event

{
  "status": "BEGIN",
  "id": "5.6.641",
  "event_count": null,
  "data_collections": null
}

{
  "status": "END",
  "id": "5.6.641",
  "event_count": 2,
  "data_collections": [
    {
      "data_collection": "ORCLPDB1.DEBEZIUM.CUSTOMER",
      "event_count": 1
    },
    {
      "data_collection": "ORCLPDB1.DEBEZIUM.ORDER",
      "event_count": 1
    }
  ]
}

The transaction events are written to the topic named <database.server.name>.transaction.
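Transaction metadata events are not generated by default. The following configuration fragment is a minimal sketch that shows how you might enable them; provide.transaction.metadata is the generic Debezium property for this purpose, and the rest of the connector configuration is omitted:

# Emit BEGIN/END boundary events and enrich data change events with a transaction block
provide.transaction.metadata=true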

6.1.4.1. Change data event enrichment

When transaction metadata is enabled, the data message Envelope is enriched with a new transaction field. This field provides information about every event in the form of a composite of fields:

id
String representation of unique transaction identifier.
total_order
The absolute position of the event among all events generated by the transaction.
data_collection_order
The per-data collection position of the event among all events that were emitted by the transaction.

The following example shows a typical data change event message that is enriched with transaction metadata:

{
  "before": null,
  "after": {
    "pk": "2",
    "aa": "1"
  },
  "source": {
...
  },
  "op": "c",
  "ts_ms": "1580390884335",
  "transaction": {
    "id": "5.6.641",
    "total_order": "1",
    "data_collection_order": "1"
  }
}

6.2. Descriptions of Debezium Oracle connector data change events

Every data change event that the Oracle connector emits has a key and a value. The structures of the key and value depend on the table from which the change events originate. For information about how Debezium constructs topic names, see Topic names.

Warning

The Debezium Oracle connector ensures that all Kafka Connect schema names are valid Avro schema names. This means that the logical server name must start with alphabetic characters or an underscore ([a-z,A-Z,_]), and the remaining characters in the logical server name and all characters in the schema and table names must be alphanumeric characters or an underscore ([a-z,A-Z,0-9,_]). The connector automatically replaces invalid characters with an underscore character.

Unexpected naming conflicts can result when the only distinguishing characters between multiple logical server names, schema names, or table names are not valid characters, and those characters are replaced with underscores.

Debezium and Kafka Connect are designed around continuous streams of event messages. However, the structure of these events might change over time, which can be difficult for topic consumers to handle. To facilitate the processing of mutable event structures, each event in Kafka Connect is self-contained. Every message key and value has two parts: a schema and payload. The schema describes the structure of the payload, while the payload contains the actual data.

Warning

Changes that are performed by the SYS, SYSTEM, or connector user accounts are not captured by the connector.

The following topics contain more details about data change events:

6.2.1. About keys in Debezium Oracle connector change events

For each changed table, the change event key is structured such that a field exists for each column in the primary key (or unique key constraint) of the table at the time when the event is created.

For example, consider a customers table that is defined in the inventory database schema as follows:

CREATE TABLE customers (
  id NUMBER(9) GENERATED BY DEFAULT ON NULL AS IDENTITY (START WITH 1001) NOT NULL PRIMARY KEY,
  first_name VARCHAR2(255) NOT NULL,
  last_name VARCHAR2(255) NOT NULL,
  email VARCHAR2(255) NOT NULL UNIQUE
);

If the value of the database.server.name configuration property is set to server1, the JSON representation for every change event that occurs in the customers table in the database features the following key structure:

{
    "schema": {
        "type": "struct",
        "fields": [
            {
                "type": "int32",
                "optional": false,
                "field": "ID"
            }
        ],
        "optional": false,
        "name": "server1.INVENTORY.CUSTOMERS.Key"
    },
    "payload": {
        "ID": 1004
    }
}

The schema portion of the key contains a Kafka Connect schema that describes the content of the key portion. In the preceding example, the payload value is not optional, the structure is defined by a schema named server1.INVENTORY.CUSTOMERS.Key, and there is one required field named ID of type int32. The value of the key’s payload field indicates that it is indeed a structure (which in JSON is just an object) with a single ID field, whose value is 1004.

Therefore, you can interpret this key as describing the row in the inventory.customers table (output from the connector named server1) whose ID primary key column had a value of 1004.

6.2.2. About values in Debezium Oracle connector change events

Like the message key, the value of a change event message has a schema section and payload section. The payload section of every change event value produced by the Oracle connector has an envelope structure with the following fields:

op
A mandatory field that contains a string value describing the type of operation. Values for the Oracle connector are c for create (or insert), u for update, d for delete, and r for read (in the case of a snapshot).
before

An optional field that, if present, contains the state of the row before the event occurred. The structure is described by the server1.INVENTORY.CUSTOMERS.Value Kafka Connect schema, which the server1 connector uses for all rows in the inventory.customers table.

Warning

Whether this field and its elements are available depends on the supplemental logging configuration that applies to the table.

after
An optional field that, if present, contains the state of the row after the event occurred. The structure is described by the same server1.INVENTORY.CUSTOMERS.Value Kafka Connect schema that is used in before.
source

A mandatory field that contains a structure describing the source metadata for the event, which in the case of Oracle contains these fields: the Debezium version, the connector name, whether the event is part of an ongoing snapshot or not, the transaction id (not while snapshotting), the SCN of the change, and a timestamp representing the point in time when the record was changed in the source database (during snapshotting, this is the point in time of snapshotting).

Tip

The commit_scn field is optional and describes the SCN of the transaction commit that the change event participates within. This field is only present when using the LogMiner connection adapter.

ts_ms
An optional field that, if present, contains the time (using the system clock in the JVM running the Kafka Connect task) at which the connector processed the event.

And of course, the schema portion of the event message’s value contains a schema that describes this envelope structure and the nested fields within it.

create events

Let’s look at what a create event value might look like for our customers table:

{
    "schema": {
        "type": "struct",
        "fields": [
            {
                "type": "struct",
                "fields": [
                    {
                        "type": "int32",
                        "optional": false,
                        "field": "ID"
                    },
                    {
                        "type": "string",
                        "optional": false,
                        "field": "FIRST_NAME"
                    },
                    {
                        "type": "string",
                        "optional": false,
                        "field": "LAST_NAME"
                    },
                    {
                        "type": "string",
                        "optional": false,
                        "field": "EMAIL"
                    }
                ],
                "optional": true,
                "name": "server1.DEBEZIUM.CUSTOMERS.Value",
                "field": "before"
            },
            {
                "type": "struct",
                "fields": [
                    {
                        "type": "int32",
                        "optional": false,
                        "field": "ID"
                    },
                    {
                        "type": "string",
                        "optional": false,
                        "field": "FIRST_NAME"
                    },
                    {
                        "type": "string",
                        "optional": false,
                        "field": "LAST_NAME"
                    },
                    {
                        "type": "string",
                        "optional": false,
                        "field": "EMAIL"
                    }
                ],
                "optional": true,
                "name": "server1.DEBEZIUM.CUSTOMERS.Value",
                "field": "after"
            },
            {
                "type": "struct",
                "fields": [
                    {
                        "type": "string",
                        "optional": true,
                        "field": "version"
                    },
                    {
                        "type": "string",
                        "optional": false,
                        "field": "name"
                    },
                    {
                        "type": "int64",
                        "optional": true,
                        "field": "ts_ms"
                    },
                    {
                        "type": "string",
                        "optional": true,
                        "field": "txId"
                    },
                    {
                        "type": "string",
                        "optional": true,
                        "field": "scn"
                    },
                    {
                        "type": "string",
                        "optional": true,
                        "field": "commit_scn"
                    },
                    {
                        "type": "boolean",
                        "optional": true,
                        "field": "snapshot"
                    }
                ],
                "optional": false,
                "name": "io.debezium.connector.oracle.Source",
                "field": "source"
            },
            {
                "type": "string",
                "optional": false,
                "field": "op"
            },
            {
                "type": "int64",
                "optional": true,
                "field": "ts_ms"
            }
        ],
        "optional": false,
        "name": "server1.DEBEZIUM.CUSTOMERS.Envelope"
    },
    "payload": {
        "before": null,
        "after": {
            "ID": 1004,
            "FIRST_NAME": "Anne",
            "LAST_NAME": "Kretchmar",
            "EMAIL": "annek@noanswer.org"
        },
        "source": {
            "version": "1.5.4.Final",
            "name": "server1",
            "ts_ms": 1520085154000,
            "txId": "6.28.807",
            "scn": "2122185",
            "commit_scn": "2122185",
            "snapshot": false
        },
        "op": "c",
        "ts_ms": 1532592105975
    }
}

Examining the schema portion of the preceding event’s value, we can see how the following schemas are defined:

  • The envelope
  • The source structure (which is specific to the Oracle connector and reused across all events).
  • The table-specific schemas for the before and after fields.
Tip

The names of the schemas for the before and after fields are of the form <logicalName>.<schemaName>.<tableName>.Value, and thus are entirely independent from the schemas for all other tables. This means that when you use the Avro Converter, the resulting Avro schemas for each table in each logical source have their own evolution and history.

The payload portion of this event’s value provides information about the event. It describes that a row was created (op=c), and shows that the after field value contains the values that were inserted into the ID, FIRST_NAME, LAST_NAME, and EMAIL columns of the row.

Tip

By default, the JSON representations of events are much larger than the rows they describe, because the JSON representation must include both the schema and the payload portions of the message. You can use the Avro Converter to significantly decrease the size of the messages that the connector writes to Kafka topics.
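As a sketch of what such a setup might look like, the following worker-level converter properties use the Confluent Avro converter; the converter class and the registry URL are assumptions about your environment and are not provided by the Debezium connector itself:

# Serialize keys and values as Avro, registering schemas in a schema registry (placeholder URL)
key.converter=io.confluent.connect.avro.AvroConverter
key.converter.schema.registry.url=http://registry.example.com:8081
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://registry.example.com:8081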

update events

The value of an update change event on this table has the same schema as the create event. The payload uses the same structure, but it holds different values. Here’s an example:

{
    "schema": { ... },
    "payload": {
        "before": {
            "ID": 1004,
            "FIRST_NAME": "Anne",
            "LAST_NAME": "Kretchmar",
            "EMAIL": "annek@noanswer.org"
        },
        "after": {
            "ID": 1004,
            "FIRST_NAME": "Anne",
            "LAST_NAME": "Kretchmar",
            "EMAIL": "anne@example.com"
        },
        "source": {
            "version": "1.5.4.Final",
            "name": "server1",
            "ts_ms": 1520085811000,
            "txId": "6.9.809",
            "scn": "2125544",
            "commit_scn": "2125544",
            "snapshot": false
        },
        "op": "u",
        "ts_ms": 1532592713485
    }
}

Comparing the value of the update event to the create (insert) event, notice the following differences in the payload section:

  • The op field value is now u, signifying that this row changed because of an update.
  • The before field now has the state of the row with the values before the database commit.
  • The after field now has the updated state of the row, and here we can see that the EMAIL value is now anne@example.com.
  • The source field structure has the same fields as before, but the values are different because this event is from a different position in the redo log.
  • The ts_ms field shows the timestamp at which Debezium processed this event.

The payload section reveals several other useful pieces of information. For example, by comparing the before and after structures, we can determine how a row changed as the result of a commit. The source structure provides information about Oracle’s record of this change, providing traceability. It also gives us insight into when this event occurred in relation to other events in this topic and in other topics. Did it occur before, after, or as part of the same commit as another event?

Note

When the columns for a row’s primary/unique key are updated, the value of the row’s key changes. As a result, Debezium emits three events after such an update:

  • A DELETE event.
  • A tombstone event with the old key for the row.
  • An INSERT event that provides the new key for the row.

delete events

So far we’ve seen samples of create and update events. Now, let’s look at the value of a delete event for the same table. As is the case with create and update events, for a delete event, the schema portion of the value is exactly the same:

{
    "schema": { ... },
    "payload": {
        "before": {
            "ID": 1004,
            "FIRST_NAME": "Anne",
            "LAST_NAME": "Kretchmar",
            "EMAIL": "anne@example.com"
        },
        "after": null,
        "source": {
            "version": "1.5.4.Final",
            "name": "server1",
            "ts_ms": 1520085153000,
            "txId": "6.28.807",
            "scn": "2122184",
            "commit_scn": "2122184",
            "snapshot": false
        },
        "op": "d",
        "ts_ms": 1532592105960
    }
}

If we look at the payload portion, we see a number of differences compared with the create or update event payloads:

  • The op field value is now d, signifying that this row was deleted.
  • The before field now has the state of the row that was deleted with the database commit.
  • The after field is null, signifying that the row no longer exists.
  • The source field structure has many of the same values as before, except that the ts_ms, scn, and txId fields have changed.
  • The ts_ms field shows the timestamp at which Debezium processed this event.

This event gives a consumer all kinds of information that it can use to process the removal of this row.

The Oracle connector’s events are designed to work with Kafka log compaction, which allows for the removal of some older messages as long as at least the most recent message for every key is kept. This allows Kafka to reclaim storage space while ensuring the topic contains a complete dataset and can be used for reloading key-based state.

When a row is deleted, the delete event value listed above still works with log compaction, since Kafka can still remove all earlier messages with that same key. The message value must be set to null to instruct Kafka to remove all messages that share the same key. To make this possible, by default, Debezium’s Oracle connector always follows a delete event with a special tombstone event that has the same key but null value. You can change the default behavior by setting the connector property tombstones.on.delete.
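For example, if a downstream consumer cannot handle null-valued tombstone records, you might disable them with the following configuration fragment; note that doing so prevents Kafka log compaction from removing all records for a deleted key:

# Do not emit a tombstone record after each delete event
tombstones.on.delete=false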

6.3. How Debezium Oracle connectors map data types

To represent changes that occur in table rows, the Debezium Oracle connector emits change events that are structured like the table in which the rows exist. The event contains a field for each column value. Column values are represented according to the Oracle data type of the column. The following sections describe how the connector maps Oracle data types to a literal type and a semantic type in event fields.

literal type
Describes how the value is literally represented using Kafka Connect schema types: INT8, INT16, INT32, INT64, FLOAT32, FLOAT64, BOOLEAN, STRING, BYTES, ARRAY, MAP, and STRUCT.
semantic type
Describes how the Kafka Connect schema captures the meaning of the field using the name of the Kafka Connect schema for the field.

Details are in the following sections:

Character types

The following table describes how the connector maps basic character types.

Table 6.3. Mappings for Oracle basic character types
Oracle Data Type | Literal type (schema type) | Semantic type (schema name) and Notes

CHAR[(M)]

STRING

n/a

NCHAR[(M)]

STRING

n/a

NVARCHAR2[(M)]

STRING

n/a

VARCHAR[(M)]

STRING

n/a

VARCHAR2[(M)]

STRING

n/a

Binary and Character LOB types

The following table describes how the connector maps binary and character LOB types.

Table 6.4. Mappings for Oracle binary and character LOB types
Oracle Data Type | Literal type (schema type) | Semantic type (schema name) and Notes

BLOB

n/a

This data type is not supported.

CLOB

n/a

This data type is not supported.

LONG

n/a

This data type is not supported.

LONG RAW

n/a

This data type is not supported.

NCLOB

n/a

This data type is not supported.

RAW

n/a

This data type is not supported.

Numeric types

The following table describes how the connector maps numeric types.

Table 6.5. Mappings for Oracle numeric data types
Oracle Data Type | Literal type (schema type) | Semantic type (schema name) and Notes

BINARY_FLOAT

FLOAT32

n/a

BINARY_DOUBLE

FLOAT64

n/a

DECIMAL[(P, S)]

BYTES / INT8 / INT16 / INT32 / INT64

org.apache.kafka.connect.data.Decimal if using BYTES

Handled equivalently to NUMBER (note that S defaults to 0 for DECIMAL).

DOUBLE PRECISION

STRUCT

io.debezium.data.VariableScaleDecimal

Contains a structure with two fields: scale of type INT32 that contains the scale of the transferred value and value of type BYTES containing the original value in an unscaled form.

FLOAT[(P)]

STRUCT

io.debezium.data.VariableScaleDecimal

Contains a structure with two fields: scale of type INT32 that contains the scale of the transferred value and value of type BYTES containing the original value in an unscaled form.

INTEGER, INT

BYTES

org.apache.kafka.connect.data.Decimal

INTEGER is mapped in Oracle to NUMBER(38,0) and hence can hold values larger than any of the INT types can store.

NUMBER[(P[, *])]

STRUCT

io.debezium.data.VariableScaleDecimal

Contains a structure with two fields: scale of type INT32 that contains the scale of the transferred value and value of type BYTES containing the original value in an unscaled form.

NUMBER(P, S <= 0)

INT8 / INT16 / INT32 / INT64

NUMBER columns with a scale of 0 represent integer numbers. A negative scale indicates rounding in Oracle, for example, a scale of -2 causes rounding to hundreds.

Depending on the precision and scale, one of the following matching Kafka Connect integer types is chosen:

  • P - S < 3, INT8
  • P - S < 5, INT16
  • P - S < 10, INT32
  • P - S < 19, INT64
  • P - S >= 19, BYTES (org.apache.kafka.connect.data.Decimal).

NUMBER(P, S > 0)

BYTES

org.apache.kafka.connect.data.Decimal

NUMERIC[(P, S)]

BYTES / INT8 / INT16 / INT32 / INT64

org.apache.kafka.connect.data.Decimal if using BYTES

Handled equivalently to NUMBER (note that S defaults to 0 for NUMERIC).

SMALLINT

BYTES

org.apache.kafka.connect.data.Decimal

SMALLINT is mapped in Oracle to NUMBER(38,0) and hence can hold values larger than any of the INT types can store.

REAL

STRUCT

io.debezium.data.VariableScaleDecimal

Contains a structure with two fields: scale of type INT32 that contains the scale of the transferred value and value of type BYTES containing the original value in an unscaled form.

Boolean types

Oracle does not natively have support for a BOOLEAN data type; however, it is common practice to use other data types with certain semantics to simulate the concept of a logical BOOLEAN data type.

You can configure the out-of-the-box NumberOneToBooleanConverter custom converter to map all NUMBER(1) columns to BOOLEAN, or, if the selector parameter is set, to map only the subset of columns that is enumerated by a comma-separated list of regular expressions.

Following is an example configuration:

converters=boolean
boolean.type=io.debezium.connector.oracle.converters.NumberOneToBooleanConverter
boolean.selector=.*MYTABLE.FLAG,.*.IS_ARCHIVED

Decimal types

The setting of the decimal.handling.mode Oracle connector configuration property determines how the connector maps decimal types.

When the decimal.handling.mode property is set to precise, the connector uses Kafka Connect org.apache.kafka.connect.data.Decimal logical type for all DECIMAL and NUMERIC columns. This is the default mode.

However, when the decimal.handling.mode property is set to double, the connector represents the values as Java double values with schema type FLOAT64.

You can also set the decimal.handling.mode configuration property to use the string option. When the property is set to string, the connector represents DECIMAL and NUMERIC values as their formatted string representation with schema type STRING.
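As a minimal illustration, the following fragment selects the string option; only this single property is shown:

# Represent NUMBER, DECIMAL, and NUMERIC values as formatted strings
decimal.handling.mode=string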

Temporal types

Other than Oracle’s INTERVAL, TIMESTAMP WITH TIME ZONE, and TIMESTAMP WITH LOCAL TIME ZONE data types, the way that the connector maps the other temporal types depends on the value of the time.precision.mode configuration property.

When the time.precision.mode configuration property is set to adaptive (the default), then the connector determines the literal and semantic type for the temporal types based on the column’s data type definition so that events exactly represent the values in the database:

Oracle data type | Literal type (schema type) | Semantic type (schema name) and Notes

DATE

INT64

io.debezium.time.Timestamp

Represents the number of milliseconds past epoch, and does not include timezone information.

INTERVAL DAY[(M)] TO SECOND

FLOAT64

io.debezium.time.MicroDuration

The number of microseconds for a time interval, based on an average of 365.25 / 12.0 days per month.

INTERVAL YEAR[(M)] TO MONTH

FLOAT64

io.debezium.time.MicroDuration

The number of microseconds for a time interval, based on an average of 365.25 / 12.0 days per month.

TIMESTAMP(0 - 3)

INT64

io.debezium.time.Timestamp

Represents the number of milliseconds past epoch, and does not include timezone information.

TIMESTAMP, TIMESTAMP(4 - 6)

INT64

io.debezium.time.MicroTimestamp

Represents the number of microseconds past epoch, and does not include timezone information.

TIMESTAMP(7 - 9)

INT64

io.debezium.time.NanoTimestamp

Represents the number of nanoseconds past epoch, and does not include timezone information.

TIMESTAMP WITH TIME ZONE

STRING

io.debezium.time.ZonedTimestamp

A string representation of a timestamp with timezone information.

TIMESTAMP WITH LOCAL TIME ZONE

STRING

io.debezium.time.ZonedTimestamp

A string representation of a timestamp in UTC.

When the time.precision.mode configuration property is set to connect, the connector uses the predefined Kafka Connect logical types. This can be useful when consumers know only about the built-in Kafka Connect logical types and are unable to handle variable-precision time values. Because the level of precision that Oracle supports exceeds the level that the logical types in Kafka Connect support, if you set time.precision.mode to connect, a loss of precision results when the fractional second precision value of a database column is greater than 3. The mappings are described in the following table, and a brief configuration example follows the table:

Oracle data type | Literal type (schema type) | Semantic type (schema name) and Notes

DATE

INT32

org.apache.kafka.connect.data.Date

Represents the number of days since the epoch.

INTERVAL DAY[(M)] TO SECOND

FLOAT64

io.debezium.time.MicroDuration

The number of microseconds for a time interval, based on an average of 365.25 / 12.0 days per month.

INTERVAL YEAR[(M)] TO MONTH

FLOAT64

io.debezium.time.MicroDuration

The number of microseconds for a time interval, based on an average of 365.25 / 12.0 days per month.

TIMESTAMP(0 - 3)

INT64

org.apache.kafka.connect.data.Timestamp

Represents the number of milliseconds since epoch, and does not include timezone information.

TIMESTAMP(4 - 6)

INT64

org.apache.kafka.connect.data.Timestamp

Represents the number of milliseconds since epoch, and does not include timezone information.

TIMESTAMP(7 - 9)

INT64

org.apache.kafka.connect.data.Timestamp

Represents the number of milliseconds since epoch, and does not include timezone information.

TIMESTAMP WITH TIME ZONE

STRING

io.debezium.time.ZonedTimestamp

A string representation of a timestamp with timezone information.

TIMESTAMP WITH LOCAL TIME ZONE

STRING

io.debezium.time.ZonedTimestamp

A string representation of a timestamp in UTC.
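As referenced before the table, the following fragment is a minimal sketch of how you might select the connect mode; only the single property is shown:

# Use the built-in Kafka Connect logical types; fractional-second precision above 3 is lost
time.precision.mode=connect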

6.4. Setting up Oracle to work with Debezium

The following steps are necessary to set up Oracle for use with the Debezium Oracle connector. These steps assume the use of the multi-tenancy configuration with a container database and at least one pluggable database. If you do not intend to use a multi-tenant configuration, it might be necessary to adjust the following steps.

For information about using Vagrant to set up Oracle in a virtual machine, see the Debezium Vagrant Box for Oracle database GitHub repository.

For details about setting up Oracle for use with the Debezium connector, see the following sections:

6.4.1. Preparing Oracle databases for use with Debezium

Configuration needed for Oracle LogMiner

ORACLE_SID=ORCLCDB dbz_oracle sqlplus /nolog

CONNECT sys/top_secret AS SYSDBA
alter system set db_recovery_file_dest_size = 10G;
alter system set db_recovery_file_dest = '/opt/oracle/oradata/recovery_area' scope=spfile;
shutdown immediate
startup mount
alter database archivelog;
alter database open;
-- Should now show "Database log mode: Archive Mode"
archive log list

exit;

In addition, supplemental logging must be enabled for the captured tables or for the database in order for change events to capture the before state of changed database rows. The following example illustrates how to configure supplemental logging on a specific table, which is the ideal choice because it minimizes the amount of information captured in the Oracle redo logs.

ALTER TABLE inventory.customers ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;

Minimal supplemental logging must be enabled at the database level and can be configured as follows.

ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

6.4.2. Creating an Oracle user for the Debezium Oracle connector

For the Debezium Oracle connector to capture change events, it must run as an Oracle LogMiner user that has specific permissions. The following example shows the SQL for creating an Oracle user account for the connector in a multi-tenant database model.

Warning

The connector does not capture database changes made by the SYS, SYSTEM, or the connector user accounts.

Creating the connector’s LogMiner user

sqlplus sys/top_secret@//localhost:1521/ORCLCDB as sysdba
  CREATE TABLESPACE logminer_tbs DATAFILE '/opt/oracle/oradata/ORCLCDB/logminer_tbs.dbf'
    SIZE 25M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED;
  exit;

sqlplus sys/top_secret@//localhost:1521/ORCLPDB1 as sysdba
  CREATE TABLESPACE logminer_tbs DATAFILE '/opt/oracle/oradata/ORCLCDB/ORCLPDB1/logminer_tbs.dbf'
    SIZE 25M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED;
  exit;

sqlplus sys/top_secret@//localhost:1521/ORCLCDB as sysdba

  CREATE USER c##dbzuser IDENTIFIED BY dbz
    DEFAULT TABLESPACE logminer_tbs
    QUOTA UNLIMITED ON logminer_tbs
    CONTAINER=ALL;

  GRANT CREATE SESSION TO c##dbzuser CONTAINER=ALL;
  GRANT SET CONTAINER TO c##dbzuser CONTAINER=ALL;
  GRANT SELECT ON V_$DATABASE to c##dbzuser CONTAINER=ALL;
  GRANT FLASHBACK ANY TABLE TO c##dbzuser CONTAINER=ALL;
  GRANT SELECT ANY TABLE TO c##dbzuser CONTAINER=ALL;
  GRANT SELECT_CATALOG_ROLE TO c##dbzuser CONTAINER=ALL;
  GRANT EXECUTE_CATALOG_ROLE TO c##dbzuser CONTAINER=ALL;
  GRANT SELECT ANY TRANSACTION TO c##dbzuser CONTAINER=ALL;
  GRANT LOGMINING TO c##dbzuser CONTAINER=ALL;

  GRANT CREATE TABLE TO c##dbzuser CONTAINER=ALL;
  GRANT LOCK ANY TABLE TO c##dbzuser CONTAINER=ALL;
  GRANT ALTER ANY TABLE TO c##dbzuser CONTAINER=ALL;
  GRANT CREATE SEQUENCE TO c##dbzuser CONTAINER=ALL;

  GRANT EXECUTE ON DBMS_LOGMNR TO c##dbzuser CONTAINER=ALL;
  GRANT EXECUTE ON DBMS_LOGMNR_D TO c##dbzuser CONTAINER=ALL;

  GRANT SELECT ON V_$LOG TO c##dbzuser CONTAINER=ALL;
  GRANT SELECT ON V_$LOG_HISTORY TO c##dbzuser CONTAINER=ALL;
  GRANT SELECT ON V_$LOGMNR_LOGS TO c##dbzuser CONTAINER=ALL;
  GRANT SELECT ON V_$LOGMNR_CONTENTS TO c##dbzuser CONTAINER=ALL;
  GRANT SELECT ON V_$LOGMNR_PARAMETERS TO c##dbzuser CONTAINER=ALL;
  GRANT SELECT ON V_$LOGFILE TO c##dbzuser CONTAINER=ALL;
  GRANT SELECT ON V_$ARCHIVED_LOG TO c##dbzuser CONTAINER=ALL;
  GRANT SELECT ON V_$ARCHIVE_DEST_STATUS TO c##dbzuser CONTAINER=ALL;

  exit;

6.5. Deployment of Debezium Oracle connectors

To deploy a Debezium Oracle connector, you add the connector files to Kafka Connect, create a custom container to run the connector, and then add connector configuration to your container. For details about deploying the Debezium Oracle connector, see the following topics:

6.5.1. Obtaining the Oracle JDBC driver

The Debezium Oracle connector requires the Oracle JDBC driver (ojdbc8.jar) to connect to Oracle databases. Due to licensing requirements, the required driver file is not included in the Debezium Oracle connector archive. You must download the required driver file directly from Oracle and add it to your Kafka Connect environment. The following steps describe how to download the Oracle Instant Client and extract the driver.

Procedure

  1. From a browser, download the Oracle Instant Client package for your operating system.
  2. Extract the archive and then open the instantclient_<version> directory.

    For example:

    instantclient_21_1/
    ├── adrci
    ├── BASIC_LITE_LICENSE
    ├── BASIC_LITE_README
    ├── genezi
    ├── libclntshcore.so -> libclntshcore.so.21.1
    ├── libclntshcore.so.12.1 -> libclntshcore.so.21.1
    
    ...
    
    ├── ojdbc8.jar
    ├── ucp.jar
    ├── uidrvci
    └── xstreams.jar
  3. Copy the ojdbc8.jar file to the <kafka_home>/libs directory.

6.5.2. Deploying Debezium Oracle connectors

To deploy a Debezium Oracle connector, you must build a custom Kafka Connect container image that contains the Debezium connector archive, and then push this container image to a container registry. You then need to create the following custom resources (CRs):

  • A KafkaConnect CR that defines your Kafka Connect instance. The image property in the CR specifies the name of the container image that you create to run your Debezium connector. You apply this CR to the OpenShift instance where Red Hat AMQ Streams is deployed. AMQ Streams offers operators and images that bring Apache Kafka to OpenShift.
  • A KafkaConnector CR that defines your Debezium Oracle connector. Apply this CR to the same OpenShift instance where you apply the KafkaConnect CR.

Prerequisites
  • Oracle Database is running and you completed the steps to set up Oracle to work with a Debezium connector.
  • AMQ Streams is deployed on OpenShift and is running Apache Kafka and Kafka Connect. For more information, see Deploying and Upgrading AMQ Streams on OpenShift
  • Podman or Docker is installed.
  • You have an account and permissions to create and manage containers in the container registry (such as quay.io or docker.io) to which you plan to add the container that will run your Debezium connector.
  • You have a copy of the Oracle JDBC driver. Due to licensing requirements, the Debezium Oracle connector does not include the required JDBC driver files.

    For more information, see Obtaining the Oracle JDBC driver.

Procedure

  1. Create the Debezium Oracle container for Kafka Connect:

    1. Download the Debezium Oracle connector archive.
    2. Extract the Debezium Oracle connector archive to create a directory structure for the connector plug-in, for example:

      ./my-plugins/
      ├── debezium-connector-oracle
      │   ├── ...
    3. Create a Dockerfile that uses registry.redhat.io/amq7/amq-streams-kafka-28-rhel8:1.8.0 as the base image. For example, from a terminal window, enter the following, replacing my-plugins with the name of your plug-ins directory:

      cat <<EOF >debezium-container-for-oracle.yaml 1
      FROM registry.redhat.io/amq7/amq-streams-kafka-28-rhel8:1.8.0
      USER root:root
      COPY ./<my-plugins>/ /opt/kafka/plugins/ 2
      USER 1001
      EOF
      1
      You can specify any file name that you want.
      2
      Replace my-plugins with the name of your plug-ins directory.

      The command creates a Docker file with the name debezium-container-for-oracle.yaml in the current directory.

    4. Build the container image from the debezium-container-for-oracle.yaml Docker file that you created in the previous step. From the directory that contains the file, open a terminal window and enter one of the following commands:

      podman build -t debezium-container-for-oracle:latest .
      docker build -t debezium-container-for-oracle:latest .

      The preceding commands build a container image with the name debezium-container-for-oracle.

    5. Push your custom image to a container registry, such as quay.io or an internal container registry. The container registry must be available to the OpenShift instance where you want to deploy the image. Enter one of the following commands:

      podman push <myregistry.io>/debezium-container-for-oracle:latest
      docker push <myregistry.io>/debezium-container-for-oracle:latest
    6. Create a new Debezium Oracle KafkaConnect custom resource (CR). For example, create a KafkaConnect CR with the name dbz-connect.yaml that specifies annotations and image properties as shown in the following example:

      apiVersion: kafka.strimzi.io/v1beta2
      kind: KafkaConnect
      metadata:
        name: my-connect-cluster
        annotations:
          strimzi.io/use-connector-resources: "true" 1
      spec:
        #...
        image: debezium-container-for-oracle  2
      1
      metadata.annotations indicates to the Cluster Operator that KafkaConnector resources are used to configure connectors in this Kafka Connect cluster.
      2
      spec.image specifies the name of the image that you created to run your Debezium connector. This property overrides the STRIMZI_DEFAULT_KAFKA_CONNECT_IMAGE variable in the Cluster Operator.
    7. Apply the KafkaConnect CR to the OpenShift Kafka Connect environment by entering the following command:

      oc create -f dbz-connect.yaml

      The command adds a Kafka Connect instance that specifies the name of the image that you created to run your Debezium connector.

  2. Create a KafkaConnector custom resource that configures your Debezium Oracle connector instance.

    You configure a Debezium Oracle connector in a .yaml file that specifies the configuration properties for the connector. The connector configuration might instruct Debezium to produce events for a subset of the schemas and tables, or it might set properties so that Debezium ignores, masks, or truncates values in specified columns that are sensitive, too large, or not needed.

    The following example configures a Debezium connector that connects to an Oracle host IP address, on port 1521. This host has a database named ORCLCDB, and server1 is the server’s logical name.

    Oracle inventory-connector.yaml

    apiVersion: kafka.strimzi.io/v1beta2
    kind: KafkaConnector
    metadata:
      name: inventory-connector 1
      labels:
        strimzi.io/cluster: my-connect-cluster
      annotations:
        strimzi.io/use-connector-resources: 'true'
    spec:
      class: io.debezium.connector.oracle.OracleConnector 2
      config:
        database.hostname: <oracle_ip_address> 3
        database.port: 1521 4
        database.user: c##dbzuser 5
        database.password: dbz 6
        database.dbname: ORCLCDB 7
        database.pdb.name: ORCLPDB1 8
        database.server.name: server1 9
        database.history.kafka.bootstrap.servers: kafka:9092 10
        database.history.kafka.topic: schema-changes.inventory 11

    Table 6.6. Descriptions of connector configuration settings
    Item | Description

    1

    The name of our connector when we register it with a Kafka Connect service.

    2

    The name of this Oracle connector class.

    3

    The address of the Oracle instance.

    4

    The port number of the Oracle instance.

    5

    The name of the Oracle user, as specified in Creating users for the connector.

    6

    The password for the Oracle user, as specified in Creating users for the connector.

    7

    The name of the database to capture changes from.

    8

    The name of the Oracle pluggable database that the connector captures changes from. Used in container database (CDB) installations only.

    9

    Logical name that identifies and provides a namespace for the Oracle database server from which the connector captures changes.

    10

    The list of Kafka brokers that this connector uses to write and recover DDL statements to the database history topic.

    11

    The name of the database history topic where the connector writes and recovers DDL statements. This topic is for internal use only and should not be used by consumers.

  3. Create your connector instance with Kafka Connect. For example, if you saved your KafkaConnector resource in the inventory-connector.yaml file, you would run the following command:

    oc apply -f inventory-connector.yaml

    The preceding command registers inventory-connector and the connector starts to run against the server1 database as defined in the KafkaConnector CR.

  4. Verify that the connector was created and has started:

    1. Display the Kafka Connect log output to verify that the connector was created and has started to capture changes in the specified database:

      oc logs $(oc get pods -o name -l strimzi.io/cluster=my-connect-cluster)
    2. Review the log output to verify that Debezium performs the initial snapshot. The log displays output that is similar to the following messages:

      ... INFO Starting snapshot for ...
      ... INFO Snapshot is using user 'c##dbzuser' ...

      If the connector starts correctly without errors, it creates a topic for each table whose changes the connector is capturing. Downstream applications can subscribe to these topics.

    3. Verify that the connector created the topics by running the following command:

      oc get kafkatopics

For the complete list of the configuration properties that you can set for the Debezium Oracle connector, see Oracle connector properties.

Results

When the connector starts, it performs a consistent snapshot of the Oracle databases that the connector is configured for. The connector then starts generating data change events for row-level operations and streaming the change event records to Kafka topics.

6.5.3. Descriptions of Debezium Oracle connector configuration properties

The Debezium Oracle connector has numerous configuration properties that you can use to achieve the right connector behavior for your application. Many properties have default values. Information about the properties is organized as follows:

Required Debezium Oracle connector configuration properties

The following configuration properties are required unless a default value is available.

Property

Default

Description

name

No default

Unique name for the connector. Attempting to register again with the same name will fail. (This property is required by all Kafka Connect connectors.)

connector.class

No default

The name of the Java class for the connector. Always use a value of io.debezium.connector.oracle.OracleConnector for the Oracle connector.

tasks.max

1

The maximum number of tasks that should be created for this connector. The Oracle connector always uses a single task and therefore does not use this value, so the default is always acceptable.

database.hostname

No default

IP address or hostname of the Oracle database server.

database.port

No default

Integer port number of the Oracle database server.

database.user

No default

Name of the Oracle user account that the connector uses to connect to the Oracle database server.

database.password

No default

Password to use when connecting to the Oracle database server.

database.dbname

No default

Name of the database to connect to. Must be the CDB name when working with the CDB + PDB model.

database.url

No default

Specifies the raw database JDBC URL. Use this property to provide flexibility in defining the database connection. Valid values include raw TNS names and RAC connection strings.
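For example, a plain thin-driver JDBC URL might look like the following; the host name and service name are placeholders for your environment:

database.url=jdbc:oracle:thin:@//dbhost.example.com:1521/ORCLCDB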

database.pdb.name

No default

Name of the Oracle pluggable database to connect to. Use this property with container database (CDB) installations only.

database.server.name

No default

Logical name that identifies and provides a namespace for the Oracle database server from which the connector captures changes. The value that you set is used as a prefix for all Kafka topic names that the connector emits. Specify a logical name that is unique among all connectors in your Debezium environment. The following characters are valid: alphanumeric characters, hyphens, and underscores.

database.connection.adapter

logminer

The adapter implementation that the connector uses when it streams database changes. You can set the following values:

logminer(default)
The connector uses the native Oracle LogMiner API.
xstream
The connector uses the Oracle XStreams API.

snapshot.mode

initial

Specifies the mode that the connector uses to take snapshots of a captured table. You can set the following values:

initial
The snapshot includes the structure and data of captured tables. Specify this value to populate topics with a complete representation of the data from the captured tables.
schema_only
The snapshot includes only the structure of captured tables. Specify this value if you want the connector to capture data only for changes that occur after the snapshot.

After the snapshot is complete, the connector continues to read change events from the database’s redo logs.

snapshot.select.statement.overrides

No default

Specifies the table rows to include in a snapshot.
This property contains a comma-separated list of fully-qualified tables (<schema_name.table_name>). Select statements for the individual tables are specified in further configuration properties, one for each table, identified by the id snapshot.select.statement.overrides.[<schema_name>].[<table_name>]. The value of those properties is the SELECT statement to use when retrieving data from the specific table during snapshotting. A possible use case for large append-only tables is setting a specific point where to start (resume) snapshotting, in case a previous snapshotting was interrupted.
Note: This setting affects snapshots only. It does not apply to events that the connector captures during log reading.
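For example, the following hypothetical fragment restricts the snapshot of a single large table to recently added rows; the schema, table, and column names are illustrative only:

snapshot.select.statement.overrides=INVENTORY.ORDERS
snapshot.select.statement.overrides.INVENTORY.ORDERS=SELECT * FROM INVENTORY.ORDERS WHERE ID > 100000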

schema.include.list

No default

An optional, comma-separated list of regular expressions that match names of schemas for which you want to capture changes. Any schema name not included in schema.include.list is excluded from having its changes captured. By default, all non-system schemas have their changes captured. Do not also set the schema.exclude.list property. In environments that use the LogMiner implementation, you must use POSIX regular expressions only.

schema.exclude.list

No default

An optional, comma-separated list of regular expressions that match names of schemas for which you do not want to capture changes. Any schema whose name is not included in schema.exclude.list has its changes captured, with the exception of system schemas. Do not also set the schema.include.list property. In environments that use the LogMiner implementation, you must use POSIX regular expressions only.

table.include.list

No default

An optional comma-separated list of regular expressions that match fully-qualified table identifiers for tables to be monitored. Tables that are not included in the include list are excluded from monitoring. Each identifier is of the form <schema_name>.<table_name>. By default, the connector monitors every non-system table in each monitored database. Do not use this property in combination with table.exclude.list. If you use the LogMiner implementation, use only POSIX regular expressions with this property.
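For example, to capture changes from only two tables in a hypothetical INVENTORY schema, you might specify the following; the dots in the identifiers are escaped because the values are regular expressions:

table.include.list=INVENTORY\.CUSTOMERS,INVENTORY\.ORDERS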

table.exclude.list

No default

An optional comma-separated list of regular expressions that match fully-qualified table identifiers for tables to be excluded from monitoring. The connector captures change events from any table that is not specified in the exclude list. Specify the identifier for each table using the following format: <schema_name>.<table_name>. Do not use this property in combination with table.include.list. If you use the LogMiner implementation, use only POSIX regular expressions with this property.

column.include.list

No default

An optional comma-separated list of regular expressions that match the fully-qualified names of columns that you want to include in the change event message values. Fully-qualified names for columns are of the form <schema_name>.<table_name>.<column_name>. The primary key column is always included in an event’s key, even if you do not use this property to explicitly include its value. If you include this property in the configuration, do not also set the column.exclude.list property.

column.exclude.list

No default

An optional comma-separated list of regular expressions that match the fully-qualified names of columns that you want to exclude from change event message values. Fully-qualified names for columns are of the form <schema_name>.<table_name>.<column_name>. The primary key column is always included in an event’s key, even if you use this property to explicitly exclude its value. If you include this property in the configuration, do not set the column.include.list property.

column.mask.hash.<hashAlgorithm>.with.salt.<salt>

No default

An optional, comma-separated list of regular expressions that match the fully-qualified names of character-based columns. Fully-qualified names for columns are of the form pdbName.schemaName.tableName.columnName. In the resulting change event record, the values for the specified columns are replaced with pseudonyms.

A pseudonym consists of the hashed value that results from applying the specified hashAlgorithm and salt. Based on the hash function that is used, referential integrity is maintained, while column values are replaced with pseudonyms. Supported hash functions are described in the MessageDigest section of the Java Cryptography Architecture Standard Algorithm Name Documentation.

In the following example, CzQMA0cB5K is a randomly selected salt.

column.mask.hash.SHA-256.with.salt.CzQMA0cB5K = inventory.orders.customerName, inventory.shipment.customerName

If necessary, the pseudonym is automatically shortened to the length of the column. The connector configuration can include multiple properties that specify different hash algorithms and salts.

Depending on the hashAlgorithm used, the salt selected, and the actual data set, the resulting data set might not be completely masked.

decimal.handling.mode

precise

Specifies how the connector should handle floating point values for NUMBER, DECIMAL and NUMERIC columns. You can set one of the following options:

precise (default)
Represents values precisely by using java.math.BigDecimal values represented in change events in a binary form.
double
Represents values by using double values. Using double values is easier, but can result in a loss of precision.
string
Encodes values as formatted strings. Using the string option is easier to consume, but results in a loss of semantic information about the real type. For more information, see Decimal types.

event.processing.failure.handling.mode

fail

Specifies how the connector should react to exceptions during processing of events. You can set one of the following options:

fail
Propagates the exception (indicating the offset of the problematic event), causing the connector to stop.
warn
Causes the problematic event to be skipped. The offset of the problematic event is then logged.
skip
Causes the problematic event to be skipped.

max.queue.size

8192

A positive integer value that specifies the maximum size of the blocking queue. Change events read from the database log are placed in the blocking queue before they are written to Kafka. This queue can provide backpressure to the log reader when, for example, writes to Kafka are slow, or if Kafka is not available. Events that appear in the queue are not included in the offsets that the connector records periodically. Always specify a value that is larger than the maximum batch size that is specified for the max.batch.size property.

max.batch.size

2048

A positive integer value that specifies the maximum size of each batch of events to process during each iteration of this connector.

max.queue.size.in.bytes

0 (disabled)

Long value for the maximum size in bytes of the blocking queue. To activate the feature, set the property to a positive long value.

poll.interval.ms

1000 (1 second)

Positive integer value that specifies the number of milliseconds the connector should wait during each iteration for new change events to appear.

tombstones.on.delete

true

Controls whether a delete event is followed by a tombstone event. The following values are possible:

true
For each delete operation, the connector emits a delete event and a subsequent tombstone event.
false
For each delete operation, the connector emits only a delete event.

After a source record is deleted, a tombstone event (the default behavior) enables Kafka to completely delete all events that share the key of the deleted row in topics that have log compaction enabled.

message.key.columns

No default

A list of regular expressions, separated by semicolons, that match fully-qualified table and column names. The connector maps values from the matching columns to key fields in the change event records that it publishes for each table.
Each item (regular expression) must match the format <fully-qualified table>:<comma-separated list of columns>, which represents the custom key.
Define fully-qualified table names by using the following format: <pdbName>.<schemaName>.<tableName>.
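
For example, the following hypothetical setting builds the Kafka message key for an ORDER_ITEMS table from its ORDER_ID and ITEM_ID columns; the PDB, schema, table, and column names are illustrative only:

message.key.columns = ORCLPDB1.INVENTORY.ORDER_ITEMS:ORDER_ID,ITEM_ID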

column.truncate.to.length.chars

No default

An optional comma-separated list of regular expressions that match the fully-qualified names of character-based columns whose values are truncated in change event messages when their length exceeds the specified number of characters. Specify the length in the name of the property as a positive integer. A configuration can include multiple properties that specify different lengths. Specify the fully-qualified name for columns by using the following format: <pdbName>.<schemaName>.<tableName>.<columnName>.
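
For example, the following hypothetical property truncates values of the illustrative NOTES column to 20 characters:

column.truncate.to.20.chars = ORCLPDB1.INVENTORY.CUSTOMERS.NOTES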

column.mask.with.length.chars

No default

An optional comma-separated list of regular expressions that match the fully-qualified names of character-based columns whose values are masked in change event messages by replacing the characters with asterisks (*). Specify the number of characters to replace in the name of the property, for example, column.mask.with.8.chars. Specify the length as a positive integer or zero. Then add regular expressions to the list for each character-based column to which you want to apply a mask. Use the following format to specify fully-qualified column names: pdbName.schemaName.tableName.columnName. The connector configuration can include multiple properties that specify different lengths.
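
For example, the following hypothetical property replaces values of the illustrative SSN column with eight asterisks:

column.mask.with.8.chars = ORCLPDB1.INVENTORY.CUSTOMERS.SSN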

column.propagate.source.type

No default

An optional comma-separated list of regular expressions that match the fully-qualified names of columns whose original type and length should be added as a parameter to the corresponding field schemas in the emitted change messages. The schema parameters __debezium.source.column.type, __debezium.source.column.length, and __debezium.source.column.scale are used to propagate the original type name, length, and scale (for variable-width types), respectively. This is useful to properly size corresponding columns in sink databases. Fully-qualified names for columns are of the form tableName.columnName, or schemaName.tableName.columnName.
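
For example, the following hypothetical setting propagates the original type and length of the illustrative EMAIL column to the emitted field schema:

column.propagate.source.type = INVENTORY\.CUSTOMERS\.EMAIL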

datatype.propagate.source.type

No default

An optional comma-separated list of regular expressions that match the database-specific data type names of columns whose original type and length should be added as a parameter to the corresponding field schemas in the emitted change messages. The schema parameters __debezium.source.column.type, __debezium.source.column.length, and __debezium.source.column.scale are used to propagate the original type name, length, and scale (for variable-width types), respectively. This is useful to properly size corresponding columns in sink databases. Fully-qualified data type names are of the form tableName.typeName, or schemaName.tableName.typeName. See the list of Oracle-specific data type names.

heartbeat.interval.ms

0

Specifies, in milliseconds, how frequently the connector sends messages to a heartbeat topic.
Use this property to determine whether the connector continues to receive change events from the source database. It can also be useful to set the property in situations where no change events occur in captured tables for an extended period. In such a case, although the connector continues to read the redo log, it emits no change event messages, so the offset in the Kafka topic remains unchanged. Because the connector does not flush the latest system change number (SCN) that it read from the database, the database might retain the redo log files for longer than necessary. If the connector restarts, the extended retention period could result in the connector redundantly sending some change events. The default value of 0 prevents the connector from sending any heartbeat messages.

heartbeat.topics.prefix

__debezium-heartbeat

Specifies the string that prefixes the name of the topic to which the connector sends heartbeat messages.
The topic is named according to the pattern <heartbeat.topics.prefix>.<server.name>.
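
For example, with the default prefix and a hypothetical database.server.name value of server1, the following settings cause heartbeat messages to be sent every 10 seconds to the __debezium-heartbeat.server1 topic; the values are illustrative only:

heartbeat.interval.ms = 10000
heartbeat.topics.prefix = __debezium-heartbeat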

snapshot.delay.ms

No default

Specifies an interval in milliseconds that the connector waits after it starts before it takes a snapshot.
Use this property to prevent snapshot interruptions when you start multiple connectors in a cluster, which might cause re-balancing of connectors.

snapshot.fetch.size

2000

Specifies the maximum number of rows that should be read in one go from each table while taking a snapshot. The connector reads table contents in multiple batches of the specified size.

sanitize.field.names

true when the connector configuration explicitly specifies the key.converter or value.converter parameters to use Avro, otherwise defaults to false.

Specifies whether field names are normalized to comply with Avro naming requirements. For more information, see Avro naming.

provide.transaction.metadata

false

Set the property to true if you want Debezium to generate events with transaction boundaries and to enrich the data event envelope with transaction metadata.

See Transaction Metadata for additional details.

log.mining.strategy

redo_log_catalog

The mining strategy controls how Oracle LogMiner builds and uses a given data dictionary for resolving table and column ids to names.

redo_log_catalog - Writes the data dictionary to the online redo logs causing more archive logs to be generated over time. This also enables tracking DDL changes against captured tables, so if the schema changes frequently this is the ideal choice.

online_catalog - Uses the database’s current data dictionary to resolve object IDs and does not write any extra information to the online redo logs. This allows LogMiner to mine substantially faster, but at the expense of not being able to track DDL changes. If the schema of the captured tables changes infrequently or never, this is the ideal choice.
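
For example, if the schemas of the captured tables rarely change, you might favor mining speed by switching to the online catalog strategy:

log.mining.strategy = online_catalog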

log.mining.batch.size.min

1000

The minimum SCN interval size that this connector attempts to read from redo/archive logs. Active batch size is also increased/decreased by this amount for tuning connector throughput when needed.

log.mining.batch.size.max

100000

The maximum SCN interval size that this connector uses when reading from redo/archive logs.

log.mining.batch.size.default

20000

The starting SCN interval size that the connector uses for reading data from redo/archive logs.

log.mining.sleep.time.min.ms

0

The minimum amount of time that the connector sleeps after reading data from redo/archive logs and before starting reading data again. Value is in milliseconds.

log.mining.sleep.time.max.ms

3000

The maximum amount of time that the connector sleeps after reading data from redo/archive logs and before starting reading data again. Value is in milliseconds.

log.mining.sleep.time.default.ms

1000

The starting amount of time that the connector sleeps after reading data from redo/archive logs and before starting reading data again. Value is in milliseconds.

log.mining.sleep.time.increment.ms

200

The maximum amount of time up or down that the connector uses to tune the optimal sleep time when reading data from LogMiner. Value is in milliseconds.

log.mining.view.fetch.size

10000

The number of content records that the connector fetches from the LogMiner content view.

log.mining.archive.log.hours

0

The number of hours in the past from SYSDATE to mine archive logs. When the default setting (0) is used, the connector mines all archive logs.

log.mining.transaction.retention.hours

0

Positive integer value that specifies the number of hours to retain long running transactions between redo log switches. When set to 0, transactions are retained until a commit or rollback is detected.

The LogMiner adapter maintains an in-memory buffer of all running transactions. Because all of the DML operations that are part of a transaction are buffered until a commit or rollback is detected, long-running transactions should be avoided in order to not overflow that buffer. Any transaction that exceeds this configured value is discarded entirely, and the connector does not emit any messages for the operations that were part of the transaction.

rac.nodes

No default

A comma-separated list of Oracle Real Application Clusters (RAC) node host names or addresses. This field is required to enable use with Oracle RAC.

Debezium connector database history configuration properties

Debezium provides a set of database.history.* properties that control how the connector interacts with the schema history topic.

The following table describes the database.history properties for configuring the Debezium connector.

Table 6.7. Connector database history configuration properties
Property | Default | Description

database.history.kafka.topic

No default

The full name of the Kafka topic where the connector stores the database schema history.

database.history.kafka.bootstrap.servers

No default

A list of host/port pairs that the connector uses for establishing an initial connection to the Kafka cluster. This connection is used for retrieving the database schema history previously stored by the connector, and for writing each DDL statement read from the source database. Each pair should point to the same Kafka cluster used by the Kafka Connect process.
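
For example, a hypothetical configuration might look like the following; the broker host names and topic name are illustrative only:

database.history.kafka.bootstrap.servers = kafka-1:9092,kafka-2:9092
database.history.kafka.topic = schema-changes.inventory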

database.history.kafka.recovery.poll.interval.ms

100

An integer value that specifies the maximum number of milliseconds the connector should wait during startup/recovery while polling for persisted data. The default is 100ms.

database.history.kafka.recovery.attempts

4

The maximum number of times that the connector should try to read persisted history data before the connector recovery fails with an error. The maximum amount of time to wait after receiving no data is recovery.attempts x recovery.poll.interval.ms.

database.history.skip.unparseable.ddl

false

A Boolean value that specifies whether the connector should ignore malformed or unknown database statements, or stop processing so a human can fix the issue. The safe default is false. Skipping should be used only with care, because it can lead to data loss or mangling while the redo log is being processed.

database.history.store.only.monitored.tables.ddl

Deprecated and scheduled for removal in a future release; use database.history.store.only.captured.tables.ddl instead.

false

A Boolean value that specifies whether the connector records all DDL statements, or only the DDL statements for tables whose changes Debezium captures.

Set the value to true to record only the DDL statements that are relevant to tables whose changes are captured by Debezium. Set to true with care, because missing data might become necessary if you later change which tables have their changes captured.

The safe default is false.

database.history.store.only.captured.tables.ddl

false

A Boolean value that specifies whether the connector records all DDL statements, or only the DDL statements for tables whose changes Debezium captures.

Set the value to true to record only the DDL statements that are relevant to tables whose changes are captured by Debezium. Set to true with care, because missing data might become necessary if you later change which tables have their changes captured.

The safe default is false.

Pass-through database history properties for configuring producer and consumer clients


Debezium relies on a Kafka producer to write schema changes to database history topics. Similarly, it relies on a Kafka consumer to read from database history topics when a connector starts. You define the configuration for the Kafka producer and consumer clients by assigning values to a set of pass-through configuration properties that begin with the database.history.producer.* and database.history.consumer.* prefixes. The pass-through producer and consumer database history properties control a range of behaviors, such as how these clients secure connections with the Kafka broker, as shown in the following example:

database.history.producer.security.protocol=SSL
database.history.producer.ssl.keystore.location=/var/private/ssl/kafka.server.keystore.jks
database.history.producer.ssl.keystore.password=test1234
database.history.producer.ssl.truststore.location=/var/private/ssl/kafka.server.truststore.jks
database.history.producer.ssl.truststore.password=test1234
database.history.producer.ssl.key.password=test1234

database.history.consumer.security.protocol=SSL
database.history.consumer.ssl.keystore.location=/var/private/ssl/kafka.server.keystore.jks
database.history.consumer.ssl.keystore.password=test1234
database.history.consumer.ssl.truststore.location=/var/private/ssl/kafka.server.truststore.jks
database.history.consumer.ssl.truststore.password=test1234
database.history.consumer.ssl.key.password=test1234

Debezium strips the prefix from the property name before it passes the property to the Kafka client.

See the Kafka documentation for more details about Kafka producer configuration properties and Kafka consumer configuration properties.

Debezium connector pass-through database driver configuration properties

The Debezium connector provides for pass-through configuration of the database driver. Pass-through database properties begin with the prefix database.*. For example, the connector passes properties such as database.foobar=false to the JDBC URL.

As is the case with the pass-through properties for database history clients, Debezium strips the prefixes from the properties before it passes them to the database driver.
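
For example, assuming that the Oracle JDBC driver accepts the oracle.net.CONNECT_TIMEOUT connection property, a hypothetical setting such as the following is forwarded to the driver as oracle.net.CONNECT_TIMEOUT=10000 after the database. prefix is removed:

database.oracle.net.CONNECT_TIMEOUT = 10000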

6.6. Monitoring Debezium Oracle connector performance

The Debezium Oracle connector provides three metric types in addition to the built-in support for JMX metrics that Apache Zookeeper, Apache Kafka, and Kafka Connect have.

Refer to the monitoring documentation for details about how to expose these metrics by using JMX.

6.6.1. Debezium Oracle connector snapshot metrics

The MBean is debezium.oracle:type=connector-metrics,context=snapshot,server=<database.server.name>.

Attribute | Type | Description

LastEvent

string

The last snapshot event that the connector has read.

MilliSecondsSinceLastEvent

long

The number of milliseconds since the connector has read and processed the most recent event.

TotalNumberOfEventsSeen

long

The total number of events that this connector has seen since last started or reset.

NumberOfEventsFiltered

long

The number of events that have been filtered by include/exclude list filtering rules configured on the connector.

MonitoredTables

string[]

The list of tables that are monitored by the connector.

QueueTotalCapacity

int

The length of the queue used to pass events between the snapshotter and the main Kafka Connect loop.

QueueRemainingCapacity

int

The free capacity of the queue used to pass events between the snapshotter and the main Kafka Connect loop.

TotalTableCount

int

The total number of tables that are being included in the snapshot.

RemainingTableCount

int

The number of tables that the snapshot has yet to copy.

SnapshotRunning

boolean

Whether the snapshot was started.

SnapshotAborted

boolean

Whether the snapshot was aborted.

SnapshotCompleted

boolean

Whether the snapshot completed.

SnapshotDurationInSeconds

long

The total number of seconds that the snapshot has taken so far, even if not complete.

RowsScanned

Map<String, Long>

Map containing the number of rows scanned for each table in the snapshot. Tables are incrementally added to the Map during processing. Updates every 10,000 rows scanned and upon completing a table.

MaxQueueSizeInBytes

long

The maximum buffer size of the queue in bytes. This metric is available if max.queue.size.in.bytes is set to a positive long value.

CurrentQueueSizeInBytes

long

The current volume, in bytes, of records in the queue.

6.6.2. Debezium Oracle connector streaming metrics

The MBean is debezium.oracle:type=connector-metrics,context=streaming,server=<database.server.name>.

Attribute | Type | Description

LastEvent

string

The last streaming event that the connector has read.

MilliSecondsSinceLastEvent

long

The number of milliseconds since the connector has read and processed the most recent event.

TotalNumberOfEventsSeen

long

The total number of events that this connector has seen since last started or reset.

NumberOfEventsFiltered

long

The number of events that have been filtered by include/exclude list filtering rules configured on the connector.

MonitoredTables

string[]

The list of tables that are monitored by the connector.

QueueTotalCapacity

int

The length of the queue used to pass events between the streamer and the main Kafka Connect loop.

QueueRemainingCapacity

int

The free capacity of the queue used to pass events between the streamer and the main Kafka Connect loop.

Connected

boolean

Flag that denotes whether the connector is currently connected to the database server.

MilliSecondsBehindSource

long

The number of milliseconds between the last change event’s timestamp and the connector processing it. The value incorporates any differences between the clocks on the machines where the database server and the connector are running.

NumberOfCommittedTransactions

long

The number of processed transactions that were committed.

SourceEventPosition

Map<String, String>

The coordinates of the last received event.

LastTransactionId

string

Transaction identifier of the last processed transaction.

MaxQueueSizeInBytes

long

The maximum buffer of the queue in bytes.

CurrentQueueSizeInBytes

long

The current volume, in bytes, of records in the queue.

The Debezium Oracle connector also provides the following additional streaming metrics:

Table 6.8. Descriptions of additional streaming metrics
Attribute | Type | Description

CurrentScn

string

The most recent system change number that has been processed.

OldestScn

string

The oldest system change number in the transaction buffer.

CommittedScn

string

The last committed system change number from the transaction buffer.

OffsetScn

string

The system change number currently written to the connector’s offsets.

CurrentRedoLogFileName

string[]

Array of the log files that are currently mined.

MinimumMinedLogCount

long

The minimum number of logs specified for any LogMiner session.

MaximumMinedLogCount

long

The maximum number of logs specified for any LogMiner session.

RedoLogStatus

string[]

Array of the current state for each mined logfile with the format filename|status.

SwitchCounter

int

The number of times the database has performed a log switch during the last day.

LastCapturedDmlCount

long

The number of DML operations observed in the last LogMiner session query.

MaxCapturedDmlInBatch

long

The maximum number of DML operations observed while processing a single LogMiner session query.

TotalCapturedDmlCount

long

The total number of DML operations observed.

FetchingQueryCount

long

The total number of LogMiner session queries (also known as batches) performed.

LastDurationOfFetchQueryInMilliseconds

long

The duration of the last LogMiner session query’s fetch in milliseconds.

MaxDurationOfFetchQueryInMilliseconds

long

The maximum duration of any LogMiner session query’s fetch in milliseconds.

LastBatchProcessingTimeInMilliseconds

long

The duration for processing the last LogMiner query batch results in milliseconds.

TotalParseTimeInMilliseconds

long

The time in milliseconds spent parsing DML event SQL statements.

LastMiningSessionStartTimeInMilliseconds

long

The duration in milliseconds to start the last LogMiner session.

MaxMiningSessionStartTimeInMilliseconds

long

The longest duration in milliseconds to start a LogMiner session.

TotalMiningSessionStartTimeInMilliseconds

long

The total duration in milliseconds spent by the connector starting LogMiner sessions.

MinBatchProcessingTimeInMilliseconds

long

The minimum duration in milliseconds spent processing results from a single LogMiner session.

MaxBatchProcessingTimeInMilliseconds

long

The maximum duration in milliseconds spent processing results from a single LogMiner session.

TotalProcessingTimeInMilliseconds

long

The total duration in milliseconds spent processing results from LogMiner sessions.

TotalResultSetNextTimeInMilliseconds

long

The total duration in milliseconds spent by the JDBC driver fetching the next row to be processed from the log mining view.

TotalProcessedRows

long

The total number of rows processed from the log mining view across all sessions.

BatchSize

int

The number of entries fetched by the log mining query per database round-trip.

MillisecondToSleepBetweenMiningQuery

long

The number of milliseconds the connector sleeps before fetching another batch of results from the log mining view.

MaxBatchProcessingThroughput

long

The maximum number of rows/second processed from the log mining view.

AverageBatchProcessingThroughput

long

The average number of rows/second processed from the log mining view.

LastBatchProcessingThroughput

long

The average number of rows/second processed from the log mining view for the last batch.

NetworkConnectionProblemsCounter

long

The number of connection problems detected.

HoursToKeepTransactionInBuffer

int

The number of hours that transactions are retained by the connector’s in-memory buffer without being committed or rolled back before being discarded. See log.mining.transaction.retention.hours for more details.

NumberOfActiveTransactions

long

The number of current active transactions in the transaction buffer.

NumberOfCommittedTransactions

long

The number of committed transactions in the transaction buffer.

NumberOfRolledBackTransactions

long

The number of rolled back transactions in the transaction buffer.

CommitThroughput

long

The average number of committed transactions per second in the transaction buffer.

RegisteredDmlCount

long

The number of registered DML operations in the transaction buffer.

LagFromSourceInMilliseconds

long

The time difference in milliseconds between when a change occurred in the transaction logs and when it is added to the transaction buffer.

MaxLagFromSourceInMilliseconds

long

The maximum time difference in milliseconds between when a change occurred in the transaction logs and when it is added to the transaction buffer.

MinLagFromSourceInMilliseconds

long

The minimum time difference in milliseconds between when a change occurred in the transaction logs and when it is added to the transaction buffer.

AbandonedTransactionIds

string[]

An array of abandoned transaction identifiers removed from the transaction buffer due to their age. See log.mining.transaction.retention.hours for details.

RolledBackTransactionIds

string[]

An array of transaction identifiers that have been mined and rolled back in the transaction buffer.

LastCommitDurationInMilliseconds

long

The duration of the last transaction buffer commit operation in milliseconds.

MaxCommitDurationInMilliseconds

long

The duration of the longest transaction buffer commit operation in milliseconds.

ErrorCount

int

The number of errors detected.

WarningCount

int

The number of warnings detected.

ScnFreezeCount

int

The number of times the system change number has been checked for advancement and remains unchanged. This is an indicator that long-running transactions are ongoing and are preventing the connector from flushing the latest processed system change number to the connector’s offsets. Under optimal operation, this value should be 0 or remain close to 0.

6.6.3. Debezium Oracle connector schema history metrics

The MBean is debezium.oracle:type=connector-metrics,context=schema-history,server=<database.server.name>.

AttributesTypeDescription

Status

string

One of STOPPED, RECOVERING (recovering history from the storage), or RUNNING, describing the state of the database history.

RecoveryStartTime

long

The time in epoch seconds at which recovery started.

ChangesRecovered

long

The number of changes that were read during the recovery phase.

ChangesApplied

long

The total number of schema changes applied during recovery and runtime.

MilliSecondsSinceLast​RecoveredChange

long

The number of milliseconds that elapsed since the last change was recovered from the history store.

MilliSecondsSinceLast​AppliedChange

long

The number of milliseconds that elapsed since the last change was applied.

LastRecoveredChange

string

The string representation of the last change recovered from the history store.

LastAppliedChange

string

The string representation of the last applied change.

6.7. How Debezium Oracle connectors handle faults and problems

Debezium is a distributed system that captures all changes in multiple upstream databases; it never misses or loses an event. When the system is operating normally and is managed carefully, Debezium provides exactly-once delivery of every change event record.

If a fault occurs, Debezium does not lose any events. However, while it is recovering from the fault, it might repeat some change events. In these abnormal situations, Debezium, like Kafka, provides at least once delivery of change events.

The rest of this section describes how Debezium handles various kinds of faults and problems.

ORA-25191 - Cannot reference overflow table of an index-organized table

Oracle might issue this error during the snapshot phase when encountering an index-organized table (IOT). This error means that the connector has attempted to execute an operation that must be executed against the parent index-organized table that contains the specified overflow table.

To resolve this, the IOT name used in the SQL operation should be replaced with the parent index-organized table name. To determine the parent index-organized table name, use the following SQL:

SELECT IOT_NAME
  FROM DBA_TABLES
 WHERE OWNER='<tablespace-owner>'
   AND TABLE_NAME='<iot-table-name-that-failed>'

The connector’s table.include.list or table.exclude.list configuration options should then be adjusted to explicitly include or exclude the appropriate tables, to prevent the connector from attempting to capture changes from the child index-organized table.
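
For example, a hypothetical exclusion of an overflow table named PRODUCTS_IOT_OVERFLOW in the INVENTORY schema might look like the following; the names are illustrative only:

table.exclude.list = INVENTORY\.PRODUCTS_IOT_OVERFLOW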

LogMiner adapter does not capture changes made by SYS or SYSTEM

Oracle uses the SYS and SYSTEM accounts for many internal changes, so the connector automatically filters out changes that these users make when it fetches changes from LogMiner. Never use the SYS or SYSTEM user accounts to make changes that you want the Debezium Oracle connector to emit.
