12.18. JDBC Translator

12.18.1. JDBC Translator

The JDBC translator bridges the SQL semantic and data type differences between JBoss Data Virtualization and a target RDBMS.
The base JDBC translator is implemented by the org.teiid.translator.jdbc.JDBCExecutionFactory class.

Note

The resource adapter for a particular JDBC translator is provided by configuring the corresponding data source in the JBoss EAP instance. See the Red Hat JBoss Data Virtualization Administration and Configuration Guide for more configuration information.

12.18.2. JDBC Translator: Execution Properties

The following execution properties are shared by all JDBC translators.
Table 12.13. Execution Properties
Name Description Default
DatabaseTimeZone The time zone of the database. Used when fetching date, time, or timestamp values. The system default time zone
DatabaseVersion The specific database version. Used to further tune pushdown support. The base supported version, or derived from the DatabaseMetaData.getDatabaseProductVersion string. Automatic detection requires a Connection. If you encounter an exception due to capabilities being unavailable (most likely caused by an issue obtaining a Connection), set the DatabaseVersion property. Use the JDBCExecutionFactory.usesDatabaseVersion() method to control whether your translator requires a connection to determine capabilities.
TrimStrings Set to true to trim trailing whitespace from fixed length character strings. Note that JBoss Data Virtualization only has a string, or varchar, type that treats trailing whitespace as meaningful. false
UseBindVariables Set to true to indicate that PreparedStatements will be used and that literal values in the source query will be replaced with bind variables. If false, only LOB values will trigger the use of PreparedStatements. true
UseCommentsInSourceQuery This will embed a leading comment with the session/request id in the source SQL query for informational purposes. false
CommentFormat MessageFormat string to be used if UseCommentsInSourceQuery is enabled. Available properties:
  • 0 - session id string
  • 1 - parent request id string
  • 2 - request part id string
  • 3 - execution count id string
  • 4 - user name string
  • 5 - vdb name string
  • 6 - vdb version integer
  • 7 - is transactional boolean
/*teiid sessionid:{0}, requestid:{1}.{2}*/
MaxPreparedInsertBatchSize The max size of a prepared insert batch. 2048
StructRetrieval Struct retrieval mode; one of OBJECT (getObject value returned), COPY (returned as a SerialStruct), or ARRAY (returned as an Array). OBJECT
EnableDependentJoins For sources that support temporary tables (DB2, Derby, H2, HSQL 2.0+, MySQL 5.0+, Oracle, PostgreSQL, SQLServer, Sybase) allow dependent join pushdown false
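Execution properties such as these are typically set by declaring a translator override in the VDB. The following vdb.xml fragment is a minimal sketch; the VDB, model, source, and JNDI names are placeholders:

```xml
<vdb name="example" version="1">
  <model name="accounts">
    <!-- the source references the overridden translator by name -->
    <source name="accounts" translator-name="mysql5-override" connection-jndi-name="java:/accountsDS"/>
  </model>
  <!-- override the base mysql5 translator and set execution properties -->
  <translator name="mysql5-override" type="mysql5">
    <property name="TrimStrings" value="true"/>
    <property name="UseCommentsInSourceQuery" value="true"/>
  </translator>
</vdb>
```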

12.18.3. JDBC Translator: Importer Properties

The following importer properties are shared by all JDBC translators.
Table 12.14. Importer Properties
Name Description Default
catalog See DatabaseMetaData.getTables at http://download.oracle.com/javase/6/docs/api/java/sql/DatabaseMetaData.html for more information. null
importRowIdAsBinary 'true' will import RowId columns as varbinary values. false
schemaPattern See DatabaseMetaData.getTables at http://download.oracle.com/javase/6/docs/api/java/sql/DatabaseMetaData.html for more information. null
tableNamePattern See DatabaseMetaData.getTables at http://download.oracle.com/javase/6/docs/api/java/sql/DatabaseMetaData.html for more information. null
procedureNamePattern See DatabaseMetaData.getProcedures at http://download.oracle.com/javase/6/docs/api/java/sql/DatabaseMetaData.html for more information. null
tableTypes Comma separated list - without spaces - of imported table types. See DatabaseMetaData.getTables at http://download.oracle.com/javase/6/docs/api/java/sql/DatabaseMetaData.html for more information. null
excludeTables A case-insensitive regular expression that when matched against a fully qualified JBoss Data Virtualization table name will exclude it from import. Applied after table names are retrieved. Use a negative look-ahead (?!<inclusion pattern>).* to act as an inclusion filter. null
excludeProcedures A case-insensitive regular expression that when matched against a fully qualified JBoss Data Virtualization procedure name will exclude it from import. Applied after procedure names are retrieved. Use a negative look-ahead (?!<inclusion pattern>).* to act as an inclusion filter. null
autoCreateUniqueConstraints True to create a unique constraint if one is not found for a foreign key. true
useFullSchemaName When false, directs the importer to drop the source catalog/schema from the JBoss Data Virtualization object name, so that the JBoss Data Virtualization fully qualified name will be in the form of <model name>.<table name>. Note that when this is false, it may lead to objects with duplicate names when importing from multiple schemas, which results in an exception. This option does not affect the name in source property. true
importKeys Set to true to import primary and foreign keys. true
importIndexes Set to true to import index/unique key/cardinality information. false
importApproximateIndexes Set to true to import approximate index information. See DatabaseMetaData.getIndexInfo at http://download.oracle.com/javase/6/docs/api/java/sql/DatabaseMetaData.html for more information. true
importProcedures Set to true to import procedures and procedure columns. Note that it is not always possible to import procedure result set columns due to database limitations. It is also not currently possible to import overloaded procedures. true
widenUnsignedTypes Set to true to convert unsigned types to the next widest type. For example SQL Server reports tinyint as an unsigned type. With this option enabled, tinyint would be imported as a short instead of a byte. true
quoteNameInSource Set to false to override the default and direct JBoss Data Virtualization to create source queries using unquoted identifiers. true
useProcedureSpecificName Set to true to allow the import of overloaded procedures (which will normally result in a duplicate procedure error) by using the unique procedure-specific name as the JBoss Data Virtualization name. This option will only work with JDBC 4.0 compatible drivers that report specific names. false
useCatalogName Set to true to use any non-null/non-empty catalog name as part of the name in source, e.g. "catalog"."table"."column", and in the JBoss Data Virtualization runtime name if useFullSchemaName is true. Set to false to not use the catalog name in either the name in source or the JBoss Data Virtualization runtime name. Must be set to false for sources that do not fully support a catalog concept, but return a non-null catalog name in their metadata, such as HSQL. true
useQualifiedName True will use name qualification for both the Teiid name and name in source as dictated by the useCatalogName and useFullSchemaName properties. Set to false to disable all qualification for both the Teiid name and the name in source, which effectively ignores the useCatalogName and useFullSchemaName properties. Note: when false this may lead to objects with duplicate names when importing from multiple schemas, which results in an exception. true
useAnyIndexCardinality True will use the maximum cardinality returned from DatabaseMetaData.getIndexInfo. importKeys or importIndexes needs to be enabled for this setting to have an effect. This allows for better stats gathering from sources that do not support returning a statistical index. false
importStatistics This uses database-dependent logic to determine the cardinality if none is determined. (This is currently only supported on Oracle and MySQL.) false

Warning

The default import settings will traverse all available metadata. This import process is time consuming, and a full metadata import is rarely needed. In most situations you should limit the import by at least schemaPattern and tableTypes.
Example importer settings to import only tables and views from my-schema:
<model ...
  <property name="importer.tableTypes" value="TABLE,VIEW"/>
  <property name="importer.schemaPattern" value="my-schema"/>
  ...
</model>
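The excludeTables negative look-ahead idiom described in the table above can be combined with these settings to act as an inclusion filter. A hypothetical example that imports only tables whose fully qualified name contains ORDERS (the pattern is a placeholder; adjust it to match your model's qualified names):

```xml
<!-- exclude every table that does NOT match .*ORDERS (case-insensitive) -->
<property name="importer.excludeTables" value="(?!.*ORDERS).*"/>
```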

12.18.4. JDBC Translator: Translator Types

JBoss Data Virtualization has a range of specific translators that target the most popular open source and proprietary databases.

Note

To decrease the amount of time it takes to import data from a source, you can set these parameter values:
schemaPattern = {targetSchema}
tableTypes = TABLE
jdbc-ansi
This translator provides support for most SQL constructs supported by JBoss Data Virtualization, except for row limit/offset and EXCEPT/INTERSECT. It translates source SQL into ANSI compliant syntax.
This translator can be used when another more specific type is not available.
jdbc-simple
This translator is the same as jdbc-ansi, except that it disables support for function, UNION and aggregate pushdown.
access
This translator is for use with Microsoft Access 2003 or later.
actian-vector
This translator is for use with Actian Vector in Hadoop.

Note

Download the JDBC driver from http://esd.actian.com/platform. Note that the port number in the connection URL is "AH7" which maps to 16967.
db2
This translator is for use with DB2 8 or later (and DB2 for i 5.4 or later).
Execution properties specific to DB2:
  • DB2ForI indicates that the DB2 instance is DB2 for i. The default is "false".
hbase
The Apache HBase translator exposes querying functionality for HBase tables. Apache Phoenix is a SQL interface for HBase; with a Phoenix data source, the translator translates Teiid pushdown commands into Phoenix SQL.
The HBase translator does not support Join commands, because Phoenix supports only simpler constraints; the only supported constraint is the primary key, which maps to the HBase table row ID. This translator was developed against Phoenix 4.x for HBase 0.98.1+.

Warning

The translator implements INSERT/UPDATE through the Phoenix UPSERT operation. This means you may see behavior different from standard INSERT/UPDATE; for example, repeated inserts will not throw a duplicate key exception but will instead update the row in question.

Warning

Due to Phoenix driver limitations, the importer will not look for unique constraints and does not import foreign keys by default.

Warning

The Phoenix driver does not have robust handling of time values. If your time values are normalized to use a date component of 1970-01-01, then the default handling will work correctly. If not, then the time column should be modeled as timestamp instead.
If you use the translator for Apache HBase, be aware that insert statements can rewrite data. To illustrate, here is a standard set of SQL queries:
CREATE TABLE TableA (id integer PRIMARY KEY, name varchar(10)); 
INSERT INTO TableA (id, name) VALUES (1, 'name1'); 
INSERT INTO TableA (id, name) VALUES (1, 'name2');
Normally, the second INSERT command would fail because it would violate the uniqueness of the primary key. However, with the HBase translator, the command does not fail. Rather, it rewrites the data in the table (so "name1" becomes "name2"). This is because the translator converts the INSERT command into an UPSERT command.
Derby
derby - for use with Derby 10.1 or later.
excel-odbc

Important

This translator is now deprecated, as the JDBC-ODBC bridge has been removed from Java 1.8.
This translator is for use with Excel 2003 or later via a JDBC-ODBC bridge.
greenplum
This translator is for use with the Greenplum database.
h2
This translator is for use with h2 version 1.1 or later.
hana
This translator is for use with SAP Hana.
hive
This translator is for use with Hive v.10 and Apache SparkSQL v1.0 and later.
Apache Spark introduces a new computational model as an alternative to MapReduce, and it can be configured to use the Hive metastore and its configured target to store data. To access data stored in Apache Spark, use the Hive JDBC driver while connecting to a Hive-specific JDBC URL.
Hive has limited support for data types: it does not support time-based types, XML, or LOBs. A view table can use these types, but you must configure the translator to specify the necessary transformations. In these situations, the evaluations are done in the JBoss Data Virtualization engine.

Important

The Hive translator does not use the DatabaseTimeZone property.

Important

The Hive importer does not have a concept of catalog or source schema, nor does it import keys, procedures, or indexes.
Another limitation of Hive is that it supports only EQUI joins. Using any other kind of join on the source tables results in inefficient queries. To write criteria based on partitioned columns, model them on the source tables, but do not include them in the selection columns.
These importer qualities are specific to the Hive translator:
  • trimColumnNames: For Hive 0.11.0 and later, the DESCRIBE command metadata is returned with padding. Set to true to strip white space from column names. Defaults to false.
  • useDatabaseMetaData: For Hive 0.13.0 and later, the normal JDBC DatabaseMetaData facilities are sufficient to perform an import. Set to true to use the normal import logic with the option to import index information disabled. Defaults to false. When true, trimColumnNames has no effect. When false, the typical JDBC DatabaseMetaData calls are not used, so not all of the common JDBC importer properties are applicable to Hive; you can still use excludeTables.

    Important

    When the database name used in Hive differs from "default", metadata retrieval and execution of queries do not work as expected in Teiid, because the Hive JDBC driver implicitly connects (tested with versions lower than 0.12) to the "default" database and ignores the database name given in the connection URL. You can work around this in Red Hat JBoss Data Virtualization in the JBoss EAP environment by setting the following in the data source configuration:
    <new-connection-sql>use {database-name}</new-connection-sql>

    This is fixed in version 0.13 and later of the Hive driver.
hsql
This translator is for use with HSQLDB 1.7 or later.
impala
This translator is for use with Cloudera Impala 1.2.1 or later.
Impala has limited support for data types: it does not have native support for time/date/XML types or LOBs. These limitations are reflected in the translator capabilities. A Teiid view can use these types, but the transformation must specify the necessary conversions. Note that in those situations, the evaluations are done in the Teiid engine.
Impala supports only EQUI joins, so using any other join type on its source tables will result in inefficient queries.
To write criteria based on partitioned columns, model them on the source table, but do not include them in selection columns.

Important

The Impala importer does not currently use typical JDBC DatabaseMetaData calls, nor does it have the concept of catalog or source schema, nor does it import keys, procedures, indexes, etc. Thus not all of the common JDBC importer properties are applicable to Impala. You may still use excludeTables.
Impala-specific importer properties:
  • useDatabaseMetaData - Set to true to use the normal import logic with the option to import index information disabled. Defaults to false. If false, the typical JDBC DatabaseMetaData calls are not used, so not all of the common JDBC importer properties are applicable to Impala (you can still use excludeTables regardless).

Important

Some versions of Impala require the use of a LIMIT when performing an ORDER BY. If no default is configured in Impala, an exception can occur when a Teiid query with an ORDER BY but no LIMIT is issued. You must set an Impala wide default, or configure the connection pool to use a new connection SQL string to issue a SET DEFAULT_ORDER_BY_LIMIT statement. See the Cloudera documentation for more on limit options, such as controlling what happens when the limit is exceeded.
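One way to apply the new connection SQL string described above is in the JBoss EAP data source definition. The limit value below is an arbitrary placeholder; check the Cloudera documentation for the option behavior in your Impala version:

```xml
<!-- issued on each new connection; sets a default LIMIT for ORDER BY queries -->
<new-connection-sql>SET DEFAULT_ORDER_BY_LIMIT=10000</new-connection-sql>
```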
ingres
This translator is for use with Ingres 2006 or later.
ingres93
This translator is for use with Ingres 9.3 or later.
intersystems-cache
This translator is for use with the InterSystems Caché object database (relational aspects only).
informix
This translator is for use with any Informix version.
metamatrix
This translator is for use with MetaMatrix 5.5.0 or later.
modeshape
This translator is for use with Modeshape 2.2.1 or later.
The PATH, NAME, LOCALNODENAME, DEPTH, and SCORE functions are accessed as pseudo-columns, e.g. "nt:base"."jcr:path".
JBoss Data Virtualization user defined functions (prefixed by JCR_) are available for CONTAINS, ISCHILDNODE, ISDESCENDENT, ISSAMENODE, REFERENCE. See the JCRFunctions.xmi file.
If a selector name is needed in a JCR function, you can use the pseudo-column "jcr:path". For example, JCR_ISCHILDNODE(foo.jcr_path, 'x/y') would become ISCHILDNODE(foo, 'x/y') in the ModeShape query.
An additional pseudo-column "mode:properties" can be imported by setting the ModeShape JDBC connection property teiidsupport=true. The "mode:properties" column should be used by the JCR_REFERENCE and other functions that expect a .* selector name. For example, JCR_REFERENCE(nt_base.jcr_properties) would become REFERENCE("nt:base".*) in the ModeShape query.
mysql5
This translator is for use with MySQL version 5 or later. It also works with backwards-compatible MySQL derivatives, including MariaDB.
The MySQL Translator expects the database or session to be using ANSI mode. If the database is not using ANSI mode, an initialization query must be used on the pool to set ANSI mode:
set SESSION sql_mode = 'ANSI'
If you deal with null timestamp values, then set the connection property zeroDateTimeBehavior=convertToNull. Otherwise you'll get conversion errors in Teiid that '0000-00-00 00:00:00' cannot be converted to a timestamp.
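Both adjustments can be combined in a JBoss EAP data source definition. The following is a sketch; the JNDI name, URL, host, and database name are placeholders:

```xml
<datasource jndi-name="java:/mysqlDS" pool-name="mysqlDS">
  <!-- convert '0000-00-00 00:00:00' values to null instead of failing -->
  <connection-url>jdbc:mysql://localhost:3306/mydb?zeroDateTimeBehavior=convertToNull</connection-url>
  <driver>mysql</driver>
  <!-- put each new connection into ANSI mode, as the translator expects -->
  <new-connection-sql>set SESSION sql_mode = 'ANSI'</new-connection-sql>
</datasource>
```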
netezza
This translator is for use with any Netezza version.

Important

The current vendor-supplied Netezza JDBC driver performs poorly with single transactional updates. As is generally the case, use batched updates when possible.
Netezza-specific execution properties:
  • SqlExtensionsInstalled - indicates that SQL Extensions, including support for REGEXP_LIKE, are installed. Defaults to false.
oracle
This translator is for use with Oracle 9i or later.
Sequences may be used with the Oracle translator. A sequence may be modeled as a table with a name in source of DUAL and columns with the name in source set to this:
<sequence name>.[nextval|currval].
Teiid 8.4 and Prior Oracle Sequence DDL
CREATE FOREIGN TABLE seq (nextval integer OPTIONS (NAMEINSOURCE 'seq.nextval'), currval integer options (NAMEINSOURCE 'seq.currval') ) OPTIONS (NAMEINSOURCE 'DUAL')
With Teiid 8.5 it is no longer necessary to rely on a table representation and Oracle specific handling for sequences. See DDL Metadata for representing currval and nextval as source functions.
You can also use a sequence as the default value for insert columns by setting the column to autoincrement and the name in source to this:
<element name>:SEQUENCE=<sequence name>.<sequence value>
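For instance, a hypothetical table whose key column defaults to a sequence named customer_seq might be modeled as follows (a DDL sketch; all table, column, and sequence names are placeholders):

```sql
CREATE FOREIGN TABLE customer (
  -- autoincrement column whose value is drawn from the Oracle sequence
  id integer AUTO_INCREMENT OPTIONS (NAMEINSOURCE 'id:SEQUENCE=customer_seq.nextval'),
  name varchar(25)
) OPTIONS (NAMEINSOURCE 'CUSTOMER');
```

An INSERT that omits the id column would then obtain its value from customer_seq.nextval at the source.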
A rownum column can be added to any Oracle physical table to support the rownum pseudo-column. The name of the column has to be rownum.
These rownum columns do not have the same semantics as the Oracle rownum construct so care must be taken in their usage.
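As an illustration, a hypothetical physical table exposing the pseudo-column (table and column names are placeholders):

```sql
CREATE FOREIGN TABLE items (
  id integer,
  name varchar(25),
  -- must be named exactly rownum to map to the Oracle pseudo-column
  rownum integer
) OPTIONS (NAMEINSOURCE 'ITEMS');
```

A query such as SELECT id, name FROM items WHERE rownum <= 10 could then push the rownum predicate down to Oracle.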
Oracle-specific importer properties:
  • useGeometryType - Use the Teiid Geometry type when importing columns with a source type of SDO_GEOMETRY. Defaults to false.
  • useIntegralTypes - Use integral types rather than decimal when the scale is 0. Defaults to false.
Execution properties specific to Oracle:
  • OracleSuppliedDriver - indicates that the Oracle supplied driver (typically prefixed by ojdbc) is being used. Defaults to true. Set to false when using DataDirect or other Oracle JDBC drivers.
    The Oracle translator also supports geospatial functions. The supported functions are:
    • Relate = sdo_relate
      CREATE FOREIGN FUNCTION sdo_relate (arg1 string,  arg2 string,  arg3 string) RETURNS string;
      CREATE FOREIGN FUNCTION sdo_relate (arg1 Object,  arg2 Object,  arg3 string) RETURNS string;
      CREATE FOREIGN FUNCTION sdo_relate (arg1 string,  arg2 Object,  arg3 string) RETURNS string;
      CREATE FOREIGN FUNCTION sdo_relate (arg1 Object,  arg2 string,  arg3 string) RETURNS string;
      
    • Nearest_Neighbor = sdo_nn
      CREATE FOREIGN FUNCTION sdo_nn (arg1 string,  arg2 Object,  arg3 string,  arg4 integer) RETURNS string;
      CREATE FOREIGN FUNCTION sdo_nn (arg1 Object,  arg2 Object,  arg3 string,  arg4 integer) RETURNS string;
      CREATE FOREIGN FUNCTION sdo_nn (arg1 Object,  arg2 string,  arg3 string,  arg4 integer) RETURNS string;
      
    • Within_Distance = sdo_within_distance
      CREATE FOREIGN FUNCTION sdo_within_distance (arg1 Object,  arg2 Object,  arg3 string) RETURNS string;
      CREATE FOREIGN FUNCTION sdo_within_distance (arg1 string,  arg2 Object,  arg3 string) RETURNS string;
      CREATE FOREIGN FUNCTION sdo_within_distance (arg1 Object,  arg2 string,  arg3 string) RETURNS string;
      
    • Nearest_Neighbour_Distance = sdo_nn_distance
      CREATE FOREIGN FUNCTION sdo_nn_distance (arg integer) RETURNS integer;
      
    • Filter = sdo_filter
      CREATE FOREIGN FUNCTION sdo_filter (arg1 Object,  arg2 string,  arg3 string) RETURNS string;
      CREATE FOREIGN FUNCTION sdo_filter (arg1 Object,  arg2 Object,  arg3 string) RETURNS string;
      CREATE FOREIGN FUNCTION sdo_filter (arg1 string,  arg2 Object,  arg3 string) RETURNS string;
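Once declared as above, these functions can be used in criteria that are pushed down to Oracle. A hypothetical example using sdo_within_distance (the table and column names are placeholders; Oracle's function returns the string 'TRUE' on a match):

```sql
SELECT s.store_name
FROM stores s, warehouses w
WHERE w.id = 10
  -- find stores within 10 miles of the chosen warehouse geometry
  AND sdo_within_distance(s.geometry, w.geometry, 'distance=10 unit=mile') = 'TRUE'
```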
      
osisoft-pi
The OSIsoft translator, known by the type name osisoft-pi, is for use with OSIsoft PI OLEDB Enterprise. This translator uses the JDBC driver provided by OSIsoft.
When you install on Linux, make sure the OpenSSL libraries are installed and that the following export statements are added to your shell environment variables. Alternatively, add them to the [EAP_HOME]/bin/standalone.sh file or the [EAP_HOME]/bin/domain.sh file.
export PI_RDSA_LIB=/path/pipc/jdbc/lib/libRdsaWrapper-1.5b.so
export PI_RDSA_LIB64=/path/pipc/jdbc/lib/libRdsaWrapper64-1.5b.so
To execute from Linux, you also need the gSoap library, as the PI JDBC driver uses SOAP over HTTPS to communicate with the PI server.
To install on Microsoft Windows, follow the installation program provided by OSISoft for installing the JDBC drivers. Make sure you have these environment variables configured (the JDBC Driver installer sets them automatically):
PI_RDSA_LIB     C:\Program Files (x86)\PIPC\JDBC\RDSAWrapper.dll
PI_RDSA_LIB64   C:\Program Files\PIPC\JDBC\RDSAWrapper64.dll
The PI translator is an extension of the jdbc-ansi translator, so all standard ANSI SQL queries are supported. The PI translator also allows you to perform LATERAL joins with table-valued functions (TVF). Here is an example query:
SELECT EH.Name, BT."Time", BT."Number of Computers", BT."Temperature" FROM Sample.Asset.ElementHierarchy EH LEFT JOIN LATERAL (exec "TransposeArchive_Building Template"(EH.ElementID, TIMESTAMPADD(SQL_TSI_HOUR, -1, now()), now())) BT on 1=1 WHERE EH.ElementID IN (SELECT ElementID FROM Sample.Asset.ElementHierarchy WHERE Path='\Data Center\')

Note

ANSI SQL semantics require an ON clause, but CROSS APPLY and OUTER APPLY do not use one. For this reason you must pass in a dummy ON clause such as ON (1 = 1), which is ignored when converted to the APPLY clause that is pushed down.
By default, this translator sets importer.importKeys to false.

Note

You must model the PI GUID data type as a string and define the NATIVE_TYPE on the column as guid; the translator will then convert the data back and forth between the string and the PI data source's native GUID type with the appropriate type casting.
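A column modeled this way might be declared as follows (a sketch; the table and column names are placeholders):

```sql
CREATE FOREIGN TABLE "Element" (
  -- modeled as string, but marked with the PI native guid type
  "ElementID" string OPTIONS (NATIVE_TYPE 'guid'),
  "Name" string
);
```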
postgresql
This translator is for use with PostgreSQL client version 8.0 or later and server version 7.1 or later.
PostgreSQL-specific execution properties:
  • PostGisVersion - indicates the PostGIS version in use. Defaults to 0, meaning PostGIS is not installed. Will be set automatically if the database version is not set.
  • ProjSupported - boolean indicating whether Proj support is available for PostGIS. Will be set automatically if the database version is not set.
prestodb
The PrestoDB translator, known by the type name prestodb, exposes querying functionality against PrestoDB data sources. In terms of data integration, PrestoDB has capabilities very similar to Teiid's; however, it goes further in distributed query execution with multiple worker nodes, whereas Teiid's execution model is limited to a single execution node and focuses more on pushing queries down to the sources. Teiid currently has much more complete query support and many more enterprise features.
The PrestoDB translator supports only SELECT statements with a restrictive set of capabilities. This translator was developed against PrestoDB version 0.85, and its capabilities are designed for that version; Teiid will adjust the capabilities of this translator for newer versions of PrestoDB. Since PrestoDB exposes a relational model, using it is no different from using any RDBMS source such as Oracle or DB2. To configure PrestoDB, consult the PrestoDB documentation.

Note

PrestoDB does not support multiple columns in the ORDER BY in JOIN situations. The translator property supportsOrderBy can be used to disable Order by in some specific situations.

Note

Some versions of PrestoDB do not support nulls as valid values in subqueries.

Note

PrestoDB does not support transactions. To overcome issues caused by this limitation, define the datasource as non-transactional.

Note

Every catalog in PrestoDB has an information_schema by default. If you have to configure multiple catalogs, use import options to filter the schemas, to avoid a duplicate table error that causes the VDB deploy to fail. For instance, set catalog to a specific catalog name to match the catalog name as it is stored in the PrestoDB, set schemaPattern to a regular expression to filter schemas by matching result and set excludeTables to a regular expression to filter tables by matching results.
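For example, a model limited to a single catalog and schema might use import options like the following (the catalog, schema, and JNDI names are placeholders):

```xml
<model name="presto">
  <!-- import only from one catalog to avoid duplicate table names -->
  <property name="importer.catalog" value="hive"/>
  <property name="importer.schemaPattern" value="sales"/>
  <!-- skip the per-catalog information_schema tables -->
  <property name="importer.excludeTables" value=".*information_schema.*"/>
  <source name="presto" translator-name="prestodb" connection-jndi-name="java:/prestoDS"/>
</model>
```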

Note

The PrestoDB JDBC driver uses the Joda-Time library to work with time/date/timestamp values. If you need to customize your server's time zone (using the setting -Duser.timezone via JAVA_OPTS), you cannot use a GMT/... ID, as Joda-Time does not recognize it. However, you can use the equivalent ETC/... ID.
redshift
The Redshift Translator, known by the type name redshift, is for use with the Redshift database. This translator is an extension of the PostgreSQL Translator and inherits its options.
sqlserver
This translator is for use with SQL Server 2000 or later. A SQL Server JDBC driver version 2.0 or later (or compatible e.g. JTDS 1.2 or later) must be used. The SQL Server DatabaseVersion property may be set to 2000, 2005, 2008, or 2012, but otherwise expects a standard version number, for example, 10.0.
Execution properties specific to SQL Server:
  • JtdsDriver - indicates that the open source JTDS driver is being used. Defaults to false.
sybase
This translator is for use with Sybase version 12.5 or later. If it is used in a dynamic VDB with no import properties specified (not recommended; see the importer properties above), exceptions can be thrown when retrieving system table information. Specify a schemaPattern or use excludeTables to exclude system tables if this occurs.
If the name in source metadata contains quoted identifiers (such as reserved words or words containing characters that would not otherwise be allowed) and you are using a jconnect Sybase driver, you must first configure the connection pool to enable quoted_identifier.

Example 12.2. Driver URL with SQLINITSTRING

jdbc:sybase:Tds:host.at.some.domain:5000/db_name?SQLINITSTRING=set quoted_identifier on
Execution properties specific to Sybase:
  • JtdsDriver - indicates that the open source JTDS driver is being used. Defaults to false.

    Important

    You must set the connection parameter JCONNECT_VERSION to 6 or later when using the Sybase data source. If you do not do so, you will encounter an exception.
sybaseiq
This translator is for use with Sybase IQ version 15.1 or later.
teiid
This translator is for use with Teiid 6.0 or later.
teradata
This translator is for use with Teradata V2R5.1 or later.
vertica
This translator is for use with Vertica 6 or later.

12.18.5. JDBC Translator: Usage

Using JBoss Data Virtualization SQL, the source may be queried as if the tables and procedures were local to the JBoss Data Virtualization system.

12.18.6. JDBC Translator: Native Queries

Both physical tables and procedures may optionally have native queries associated with them. No validation of the native query is performed; it is used to generate the source SQL.
For a physical table, setting the teiid_rel:native-query extension metadata to the desired query string will execute the native query as an inline view in the source query. This feature can only be used against sources that support inline views. The native query is used as is and is not treated as a parameterized string. For example, on a physical table y with nameInSource="x" and teiid_rel:native-query="select c from g", the JBoss Data Virtualization source query "SELECT c FROM y" would generate the SQL query "SELECT c FROM (select c from g) as x". Note that the column names in the native query must match the nameInSource of the physical table columns for the resulting SQL to be valid.
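Using the example names from the paragraph above, the DDL metadata form of that physical table might be sketched as:

```sql
CREATE FOREIGN TABLE y (
  -- column name must match the nameInSource columns of the native query
  c integer
) OPTIONS (NAMEINSOURCE 'x', "teiid_rel:native-query" 'select c from g');
```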
For physical procedures, you may also set the teiid_rel:native-query extension metadata to a desired query string with the added ability to positionally reference IN parameters (see Section 12.7, “Parameterizable Native Queries”).
A parameter reference has the form $integer, for example, $1. Note that one-based indexing is used and that only IN parameters may be referenced. $integer is reserved, but may be escaped with another $, for example, $$1.
By default, bind values will be used for parameter values. In some situations you might need to bind values directly into the resulting SQL. The teiid_rel:non-prepared extension metadata property may be set to false to turn off parameter binding. Note that this option must be used with caution, as inbound values may allow for SQL injection attacks if not properly validated.
The native query does not need to call a stored procedure. Any SQL that returns a result set positionally matching the result set expected by the physical stored procedure metadata will work. For example, on a stored procedure x with teiid_rel:native-query="select c from g where c1 = $1 and c2 = '$$1'", the JBoss Data Virtualization source query "CALL x(?)" would generate the SQL query "select c from g where c1 = ? and c2 = '$1'". Note that ? in this example will be replaced with the actual value bound to parameter 1.

12.18.7. JDBC Translator: Native Procedure

Warning

This feature is turned off by default because of the security risk it exposes: the ability to execute any command against the source. To enable this feature, override the translator property called "SupportsNativeQueries" and set it to true. See Section 12.6, “Override Execution Properties”.
The JDBC translator also provides a procedure named native that gives you the ability to execute any ad hoc native SQL command specific to the underlying source, directly against the source and without any JBoss Data Virtualization parsing or resolving. The metadata of this procedure's execution results is not known to JBoss Data Virtualization, and the results are returned as an object array. Users can use the ARRAYTABLE construct (Section 2.6.10, “Nested Tables: ARRAYTABLE”) to produce tabular output for client applications.

Example 12.3. Select Example

SELECT x.* FROM (call pm1.native('select * from g1')) w,
 ARRAYTABLE(w.tuple COLUMNS "e1" integer , "e2" string) AS x

Example 12.4. Insert Example

SELECT x.* FROM (call pm1.native('insert into g1 (e1,e2) values (?, ?)', 112, 'foo')) w,
 ARRAYTABLE(w.tuple COLUMNS "update_count" integer) AS x

Example 12.5. Update Example

SELECT x.* FROM (call pm1.native('update g1 set e2=? where e1 = ?','blah', 112)) w,
 ARRAYTABLE(w.tuple COLUMNS "update_count" integer) AS x

Example 12.6. Delete Example

SELECT x.* FROM (call pm1.native('delete from g1 where e1 = ?', 112)) w,
 ARRAYTABLE(w.tuple COLUMNS "update_count" integer) AS x

Important

By default, the name of the procedure that executes the queries directly is native; however, users can override the NativeQueryProcedureName execution property in the vdb.xml file to change it to any other procedure name. See Section 12.6, “Override Execution Properties”.
© 2024 Red Hat, Inc.