Chapter 4. Using Maven Spring Boot to create virtual databases
You can use Maven Spring Boot to create, build, and deploy a virtual database on OpenShift.
The method of using Fabric8 and Maven Spring Boot plugins to build and deploy virtual databases is deprecated in this release.
The Spring Boot Maven plugin converts the data virtualization library into an executable Spring Boot JAR file. The Fabric8 Maven plugin helps to build a container image that is based on the Spring Boot executable and optionally deploy that container to OpenShift. Both plugins are available from the public Red Hat Maven repository, and they are downloaded on demand at build time after you specify dependencies in your pom.xml
file. For information about specifying build dependencies, see Section 4.4, “Specifying project dependencies in the pom.xml file”.
Prerequisites
- You have the 2019-07 release of Red Hat Integration, and you are running Fuse 7.5.
- You have a DDL file for the virtual database that you want to create, or you know how to write SQL code and create DDL files.
- You have Maven 3.0 installed.
- You have JDK 8 (Java Platform, Standard Edition 8 Development Kit) or greater installed.
You can connect to one of the following types of data sources:
- Relational database
- MongoDB
- Salesforce
- REST
- OData
- OpenAPI
- Google Sheets
- You have administrator access to the configured data source.
4.1. Using Maven to create a Java shell project for your virtual database
Use Maven to generate a Java shell project that you can then modify.
Prerequisites
- You have Maven 3.0 installed.
- You have Java JDK 11 or greater installed.
Procedure
- On your local workstation, change to a directory where you want to create the Java project for your virtual database.
Run the following Maven command to generate a plain Java shell project:
mvn archetype:generate -DgroupId=<domainSuffix>.<domainName> -DartifactId=<virtualDbName> -DarchetypeArtifactId=maven-archetype-quickstart -DinteractiveMode=false
For example:
mvn archetype:generate -DgroupId=com.example -DartifactId=sampledb -DarchetypeArtifactId=maven-archetype-quickstart -DinteractiveMode=false
All of the artifacts that you need to create a Java project are saved to the project directory.
4.2. Creating and deploying secrets
Create and deploy secret objects to store values for your environment variables.
Although secrets exist primarily to protect sensitive data by obscuring the value of a property, you can use them to store the value of any property.
Prerequisites
- You have the login credentials and other information that are required to access the data source.
Procedure
Create a secrets file to contain the credentials for your data source, and save it locally as a .yml file. For example:

Sample secrets.yml file

apiVersion: v1
kind: Secret
metadata:
  name: postgresql
type: Opaque
stringData:
  database-user: bob
  database-name: sampledb
  database-password: bob_password
Deploy the secret object on OpenShift.
Log in to OpenShift, and open the project that you want to use for your virtual database. For example,
oc login --token=<token> --server=https://<server>
oc project <projectName>
Run the following command to deploy the secret file:
oc create -f ./secrets.yml
Set an environment variable to retrieve its value from the secret.
In the environment variable, use a valueFrom element with a secretKeyRef to specify that the variable retrieves its value from a key in the secret that you created in Step 1. For example, in the following excerpt, the SPRING_DATASOURCE_SAMPLEDB_PASSWORD variable retrieves its value from a reference to the database-password key of the postgresql secret:
- name: SPRING_DATASOURCE_SAMPLEDB_PASSWORD
  valueFrom:
    secretKeyRef:
      name: postgresql
      key: database-password
Additional resources
- For more information about how to use secrets on OpenShift, see Providing sensitive data to pods in the OpenShift documentation.
4.3. Secret objects for storing data source information
Use secret objects to securely store sensitive information about how your virtual database connects to its data sources.
For OpenShift to open a connection to a data source on behalf of your virtual database application, it must pass login credentials and other information to the data source. Typically, to maintain security, administrators limit access to database credentials and do not expose credentials to developers directly. To give developers indirect access to credentials, you can deploy secret objects on OpenShift to securely store and pass credentials.
Use secrets in combination with environment variables. Rather than specify static values directly in your configuration files, you can configure OpenShift to retrieve values for environment variables from secret objects. When a key in an environment variable refers to a secret object, to obtain the key value, OpenShift examines the secret to find a token that has a name that matches the key name. It extracts the token value and then passes it to the environment variable.
For example, the following environment variable is set to retrieve the value for the database-user key from a secret object that has the name postgresql.
- name: SPRING_DATASOURCE_SAMPLEDB_USERNAME
  valueFrom:
    secretKeyRef:
      name: postgresql
      key: database-user
When the data virtualization service needs to retrieve the value for the preceding environment variable, it accesses the secret object with the name postgresql and reads the value of the database-user key.
In the following secret object, the database-user token is assigned the value bob. OpenShift passes that value to the environment variable.
Sample secrets.yml file
apiVersion: v1
kind: Secret
metadata:
  name: postgresql
type: Opaque
stringData:
  database-user: bob
4.4. Specifying project dependencies in the pom.xml file
To provide your Java shell project with the details that are required to build the project, edit the pom.xml
file to specify project dependencies, plugins to run, and build profiles.
Some properties in pom.xml
are common to all data source types. For example, the Teiid Spring Boot starter and the Spring Boot Maven plugin are required to connect to both a PostgreSQL database and a Salesforce database. Other properties, such as the drivers that a data source requires, are specific to individual data sources.
Use the sample pom.xml
file in the Teiid OpenShift repository as the basis for your own file. The sample file contains settings for a PostgreSQL database, but the settings in the <dependencyManagement>
, <build>
, and <profiles>
elements apply to any data source. The build resource must be set to the Spring Boot Maven plugin. This plugin, spring-boot-maven-plugin
, converts the virtual database schema project into a Spring Boot executable Uber JAR file that becomes the basis for an OpenShift container image. The OpenShift profile element must be set to use the Fabric8 Maven plugin (<artifactId>fabric8-maven-plugin</artifactId>
), which helps to build a container image from the executable JAR, and optionally deploy it into OpenShift.
Modify the values in the <dependencies>
element as needed to enable connectivity to the data sources that you want to use.
Driver modules for most databases are included in the Red Hat Maven repository. At build time, the drivers are downloaded automatically based on the dependency statements in pom.xml
. For some proprietary data sources, drivers might not be publicly available. If a driver is not available in the repository, download the driver from a third-party site, and deploy it to your local Maven repository.
Prerequisites
- You have a Java shell project for your virtual database application.
- You are familiar with editing Maven pom.xml files.
- You have downloaded the sample pom.xml file from the Teiid OpenShift repository.
- If the driver for your database is not available from the public Maven repository, you have downloaded the driver and deployed it to your local Maven repository.
Procedure
- Replace the default pom.xml file that you created in your Java shell project with the file that you downloaded from the Teiid OpenShift repository.
- Edit the pom.xml file to specify the name of the OpenShift project in the <fabric8.namespace> element.
- Set the value of the properties version element to the data virtualization version that you are using. For example:

  <properties>
      <version.teiid.spring.boot>${version.teiid.spring-boot}</version.teiid.spring.boot>
  </properties>

  ${version.teiid.spring-boot} represents the build version of the code that is available in the Maven repository. Substitute the build version of the product that you are working with.
- Specify your data source dependencies in the <dependencies> element. If you want to connect to a PostgreSQL database, you can use the values in the <dependencies> element as they are.
Additional resources
- For information about changes that you must make to the <dependencies> element of the pom.xml file to support Google Sheets or Salesforce, see Section 4.13, “Connection settings for other data sources”.
4.5. Defining the structure for virtual databases in a DDL file
After you complete changes to the pom.xml
file, you’re ready to define the structure of your virtual database. You define a virtual database through a text-based DDL file. You can supply an existing DDL file, if you have one, or you can create one.
If you have a .vdb
or .xml
file from an earlier data-virtualization product that you want to reuse, you must convert the file into DDL format before you can use it. For more information about how to convert legacy virtual database files to DDL format, see Chapter 8, Migrating legacy virtual database files to DDL format.
If you want to create your own DDL file, you can use the sample DDL file in the Teiid OpenShift repository as a guide. Be mindful, however, that the entries in your DDL file are unique to your environment and must include settings that are specific to the data sources that you use.
The design of a DDL file can be complex and is beyond the scope of this documentation. For more information about using SQL in DDL files that support data virtualization, see the Teiid Reference Guide.
Prerequisites
- You have a DDL file for the virtual database that you want to create.
- You know how to write SQL code and you are familiar with creating DDL files.
Procedure
- Add your DDL file to the src/main/resources directory. You can create a new file from scratch, modify the sample DDL file, or use an existing file.
- Use SQL statements to specify the structural elements of your virtual database.
4.6. DDL files
DDL files contain SQL commands that describe and define the structure of the virtual database, such as its views or schema, procedures, functions, tables, records, columns, and other fields.
When you are ready to build your virtual database, the data virtualization service reads the information in the DDL file and uses that information to generate the virtual database image.
Structures that you define in the DDL file are converted to relational database tables. Translators in the data virtualization service import and convert data from your data sources, and use the data to create views in the virtual database.
A typical DDL file defines database elements by using statements similar to those in the following list:
The name of the virtual database. For example:
CREATE DATABASE customer OPTIONS (ANNOTATION 'Customer VDB');
USE DATABASE customer;
The name of the translator, or foreign data wrapper, that is needed to interpret data from the data source. For example,
CREATE FOREIGN DATA WRAPPER postgresql;
The name of the data source server, and the name of the resource adapter that provides the connections details for the external data source. For example,
CREATE SERVER sampledb TYPE 'NONE' FOREIGN DATA WRAPPER postgresql OPTIONS ("resource-name" 'sampledb');
Note: Several of the files that you use to configure your virtual database refer to the name of the data source that is defined in the CREATE SERVER statement of the DDL file. For example, the name of the data source appears in the Datasources.java file, in the application.properties file, and in the names of environment variables in the deploymentconfig.yml file. To wire the various configuration files together, it’s important to use the data source name consistently.

The names of the foreign schema from which you want to import metadata, and the virtual schema into which you want to import that metadata. For example:
CREATE SCHEMA accounts SERVER sampledb;
CREATE VIRTUAL SCHEMA portfolio;

SET SCHEMA accounts;
IMPORT FOREIGN SCHEMA public FROM SERVER sampledb INTO accounts
    OPTIONS("importer.useFullSchemaName" 'false');
Views in the virtual database and the mapping between data in the data source and in the virtual database view. For example,
CREATE VIEW CustomerZip(id bigint PRIMARY KEY, name string, ssn string, zip string)
    AS SELECT c.ID as id, c.NAME as name, c.SSN as ssn, a.ZIP as zip
    FROM accounts.CUSTOMER c LEFT OUTER JOIN accounts.ADDRESS a
    ON c.ID = a.CUSTOMER_ID;
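Putting the statements above together, a minimal DDL file for this example might look like the following sketch. The names (customer, sampledb, accounts, portfolio) are carried over from the preceding examples, and the SET SCHEMA portfolio statement before the view is an assumption about where the view belongs:

```sql
-- Hypothetical minimal DDL file assembled from the statements above.
CREATE DATABASE customer OPTIONS (ANNOTATION 'Customer VDB');
USE DATABASE customer;

CREATE FOREIGN DATA WRAPPER postgresql;
CREATE SERVER sampledb TYPE 'NONE' FOREIGN DATA WRAPPER postgresql
    OPTIONS ("resource-name" 'sampledb');

CREATE SCHEMA accounts SERVER sampledb;
CREATE VIRTUAL SCHEMA portfolio;

SET SCHEMA accounts;
IMPORT FOREIGN SCHEMA public FROM SERVER sampledb INTO accounts
    OPTIONS("importer.useFullSchemaName" 'false');

SET SCHEMA portfolio;
CREATE VIEW CustomerZip(id bigint PRIMARY KEY, name string, ssn string, zip string)
    AS SELECT c.ID as id, c.NAME as name, c.SSN as ssn, a.ZIP as zip
    FROM accounts.CUSTOMER c LEFT OUTER JOIN accounts.ADDRESS a
    ON c.ID = a.CUSTOMER_ID;
```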
4.7. Adding Java application and class files
In the application library of your virtual database project, create a Java application file, Application.java
, and a data source class file, DataSources.java
.
Application.java
is the main Spring Boot application file that bootstraps the virtual database application. DataSources.java
adds @Bean methods for each data source that you want to use as an input to your virtual database. The Java class serves as a resource adapter, which provides access to the data source.
When the virtual database starts, the virtualization service reads the metadata and generates an internal model from it. The service then uses that model to read and write to the virtual database. Entities within a data source become available in the virtual database for users to access as tables. For example, if you use a Salesforce database as a data source, then SObjects
in Salesforce become available as tables in the virtual database.
Prerequisites
- You have Java 11 or greater installed.
- You have a Java shell project that you generated with Maven.
Procedure
- Create the following Application.java file in your Java class folder (for example, src/main/java/com/example):

  package com.example;

  import org.springframework.boot.SpringApplication;
  import org.springframework.boot.autoconfigure.SpringBootApplication;

  @SpringBootApplication
  public class Application {
      public static void main(String[] args) {
          SpringApplication.run(Application.class, args);
      }
  }

  You can remove the default app.java file that Maven generates when you create the project.
- Create a Datasources.java file in the class folder, and add a bean method for each data source that you want to connect to your virtual database. For an example of a Datasources.java file that is designed to work with a PostgreSQL database, see Section 4.8, “Sample Datasources.java file”.
4.8. Sample Datasources.java file
The Datasources.java
file adds a class to represent a connection to a data source. The file also establishes a prefix in the ConfigurationProperties
argument (spring.datasource.sampledb
). This prefix must be used in the names of data source properties that you specify in the application.properties
file.
You can define multiple data sources in Datasources.java
by adding multiple classes, each with its own prefix designation. In each case you must add corresponding entries to the DDL file and to the properties or deployment configuration files.
To associate the Java bean with the data source that is defined in your DDL file, the bean name must be the same as the name of the SERVER
and resource-name
properties in the DDL file. For example, the following sample file establishes a connection to a PostgreSQL database called sampledb
, which is the name that is assigned in the DDL file to the data source SERVER
object and to its resource-name
definition.
package com.example;

import javax.sql.DataSource;

import org.springframework.boot.jdbc.DataSourceBuilder;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class DataSources {

    @ConfigurationProperties(prefix = "spring.datasource.sampledb") 1
    @Bean
    public DataSource sampledb() { 2
        return DataSourceBuilder.create().build();
    }
}
1. The prefix must match the prefix that you assign to properties that you define in the application.properties file.
2. The name sampledb in the prefix definition and in the method name must match the name in the SERVER and resource-name objects that are defined in the virtual database DDL file. The Spring Boot framework automatically associates the names of methods in the Datasources.java file with the names of data sources in the DDL file.
The preceding sample file is designed to work with a PostgreSQL database. For information about how to adapt the file for use with other data sources, see Section 4.13, “Connection settings for other data sources”.
4.9. Specifying application properties
You define static properties for your virtual database application in an application.properties
file in the /src/main/resources directory. Static properties are configuration settings that remain constant across different environments. After you deploy a virtual database on OpenShift, any modifications that you make to the application.properties
directory. Static properties are configuration settings that remain constant across different environments. After you deploy a virtual database on OpenShift, any modifications that you make to the application.properties
file are not effective unless you rebuild and redeploy your virtual database.
At minimum the application.properties
file must contain a value for the teiid.vdb-file
property, which names the DDL file that defines the structure of the virtual database. For example, teiid.vdb-file=customer-vdb.ddl
.
You can also use the application.properties
file to define other properties, such as data source properties, including their names, their drivers, and the URLs, user names, and passwords that are required to connect to them. Using static properties to assign values for these can be useful in a test environment. But if you deploy your virtual database in multiple OpenShift environments, it’s best to use environment variables to dynamically assign unique values for each environment. For more information about using environment variables in your virtual database configuration, see Section 4.12, “Setting the deployment configuration”.
If you define data source properties in the application.properties
file, you must prefix the configuration properties string that you specified in the Datasources.java
file. The prefix establishes a connection between the properties and the Java class. For example, if you establish the configuration properties prefix spring.datasource.sampledb
in the Datasources.java
file, then you must precede the names of the properties that you define in your application.properties
file with that string. For example,
spring.datasource.sampledb.username=<username> spring.datasource.sampledb.password=<password>
Prerequisites
- You have a DDL file in /src/main/resources that defines your virtual database structure.
- You have a Datasources.java file in your Java class folder that specifies an application prefix.
Procedure
- Add the file application.properties to the src/main/resources folder of your Java project.
- In the file, add the property teiid.vdb-file and set its value to the name of the DDL file in the src/main/resources folder of your Java project, for example, customer-vdb.ddl.
- (Optional) Add properties to specify connection information for your data source, such as its name, URL, login credentials, and drivers. For an example of an application.properties file for a PostgreSQL data source, see Section 4.10, “Sample application.properties file”.
4.10. Sample application.properties file
The following application.properties file includes settings for connecting to a PostgreSQL database. You might use a file similar to this for testing purposes, but for an OpenShift deployment, it’s best to specify data source properties in a deploymentconfig.yml file.

You can specify source-specific properties for other types of data sources. For more information, see Section 4.13, “Connection settings for other data sources”.
spring.datasource.sampledb.jdbc-url=jdbc:postgresql://localhost/sampledb 1 2
spring.datasource.sampledb.username=user 3
spring.datasource.sampledb.password=user
spring.datasource.sampledb.driver-class-name=org.postgresql.Driver 4
spring.datasource.sampledb.platform=sampledb 5

# VDB location
teiid.vdb-file=customer-vdb.ddl 6

# true to allow JDBC connections
teiid.jdbc-enable=true
#logging.level.org.teiid=DEBUG 7
1. The JDBC URL that the virtual database uses to connect to a local PostgreSQL database as its data source.
2. The prefix that is used in each of these properties matches the prefix that is defined in the Datasources.java file.
3. The user name and password values listed here are displayed in plain text. To secure these credentials in a production deployment on OpenShift, use environment variables in a deployment configuration file to reference the secret object that defines these values.
4. The driver that is required to connect to the data source. This driver is defined in the pom.xml file.
5. The name of the data source.
6. The name of the DDL file.
7. Uncomment this statement to enable debug logging. You can view the logs for a pod by running the following command:
oc logs <podname>
Logs are also available on the Logs tab of the Pod Overview page in the OpenShift web console.
4.11. Deployment configuration files (deploymentconfig.yml)
A deployment configuration file stores settings that govern how the Fabric8 Maven plugin builds and deploys the container image for your virtual database.
The deploymentconfig.yml
file can also define environment variables for the properties that are required to configure data sources for your virtual databases. The environment variables that you define in the deploymentconfig.yml
file map to properties in the application.properties
file. But unlike settings in the properties file, the settings that you define in the deploymentconfig.yml
file are dynamic. That is, if you add an environment variable or change its value, you do not have to rebuild the virtual database service to put the change into effect.
Environment variables and their corresponding properties have similar names, but they are formatted differently: dots in property names are converted to underscores, dashes are removed, and alphabetic characters are converted to uppercase.
Property in application.properties | Environment variable in deploymentconfig.yml
---|---
spring.datasource.sampledb.jdbc-url | SPRING_DATASOURCE_SAMPLEDB_JDBCURL
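The conversion rule above can be sketched in a few lines of Java. The class and method names here are illustrative only; Spring Boot applies this relaxed binding automatically, so you never write this code yourself:

```java
// Sketch of the name conversion that maps a property name to its
// environment-variable form: dots become underscores, dashes are
// dropped, and letters are upper-cased.
public class EnvVarName {
    public static String fromProperty(String property) {
        return property.replace(".", "_").replace("-", "").toUpperCase();
    }

    public static void main(String[] args) {
        // Prints SPRING_DATASOURCE_SAMPLEDB_JDBCURL
        System.out.println(fromProperty("spring.datasource.sampledb.jdbc-url"));
    }
}
```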
Because you commonly deploy virtual databases to multiple OpenShift environments, for example a staging and a production environment, you typically specify different data source properties in each environment. For example, the login credentials for accessing a data source in your staging environment probably differ from the credentials that you need to access the data source in the production environment. To define unique values in each environment, you can use environment variables.
The environment variables in deploymentconfig.yml
replace any static properties that you might set in the application.properties
file. If a property is defined in both files, the value in the deploymentconfig.yml
file takes precedence.
Instead of specifying static values for environment variables directly in the file, you can store the values for each deployment environment in secret objects that are unique to each environment. The value of each environment variable in the file contains only a key reference, which specifies the name of a secret object and the name of a token in the secret. The token stores the actual value. At runtime, environment variables retrieve their values from the tokens.
By using secrets to store the values of your environment variables, you can use a single version of the deploymentconfig.yml
across environments. The secret objects that you deploy in each environment must have the same name, but each secret object contains token values that are specific to its environment.
Additional resources
- For more information about using secrets, see Section 4.3, “Secret objects for storing data source information”.
-
For information about adding a
deploymentconfig.yml
file, see Section 4.12, “Setting the deployment configuration”.
4.12. Setting the deployment configuration
You set the deployment configuration by editing a deploymentconfig.yml file. You can define environment variables in the file for each data source that the virtual database uses.
file. You can define environment variables in the file for each data source that the virtual database uses.
Prerequisites
- You have a copy of the sample deploymentconfig.yml file from the Teiid OpenShift repository.
- You have information about the connection settings for your data sources.
- If you want to use secrets to store values for your environment variables, you have information about the name of the secret for your virtual database and the names of the tokens that you want to refer to in your environment variables.
Procedure
- In the /src/main/fabric8 folder of your Java project, create a deploymentconfig.yml file.
- Add environment variables and other settings that are consistent with your environment.
Additional resources
- For more information about the deploymentconfig.yml file, see Section 4.11, “Deployment configuration files (deploymentconfig.yml)”.
- For more information about secrets, see Section 4.3, “Secret objects for storing data source information”.
- For information about deployment configuration settings for other data sources, see Section 4.13, “Connection settings for other data sources”.
4.13. Connection settings for other data sources
To enable a virtual database to connect to a data source, you must provide connection details such as the name of the data source, the driver to use, the user credentials, and so forth. You specify these settings across several files.
The sample files in the Teiid OpenShift repository, or elsewhere in this documentation, provide configuration information that is consistent with using a PostgreSQL database as the data source for your virtual database. If you want to use a different data source, you must modify settings in the PostgreSQL versions of the following files:
- pom.xml file
- Specifies the dependencies for a data source, such as the drivers that are required to connect to the data source. At build time, if the driver for your database type is publicly available, Teiid Spring Boot downloads the required drivers automatically.
- application.properties file
- Defines static application properties that cannot be changed after you deploy the application to OpenShift, for example, the name of the DDL file.
- deploymentconfig.yml
- Defines application properties through dynamic environment variables so that you can specify values that correspond to a particular deployment environment.
- Datasources.java
- Specifies a Java class to represent the connection to the data source. The service name that you specify in the annotation and as the name of the method must match exactly the name of the SERVER that is listed in the DDL file.
- DDL file
- Defines the virtual database structure including specific mapping from the source schema to the virtual schema.
When the virtual database service starts, the data virtualization service scans the application’s packages for dependency annotations and uses the information to build the metadata to create the virtual database and deploy it to the OpenShift server.
Prerequisites
- You have reviewed the PostgreSQL versions of the files in the preceding list.
4.13.1. Settings to connect to Salesforce as a data source
If you want to use a Salesforce database as a data source for your virtual database, you must add some source-specific settings. You can use the PostgreSQL files in the Teiid OpenShift repository as a starting point, but you’ll have to modify the files to adapt them for use with Salesforce.
The following files contain information that you must modify to use a Salesforce database as a data source:
- pom.xml
- application.properties
- deploymentconfig.yml
- Datasources.java
- DDL file
pom.xml settings for using Salesforce as a data source
To support Salesforce as a data source, you must add the following dependencies in the pom.xml
file:
- teiid-spring-boot-starter
- spring-data-salesforce
For example:
<dependency>
    <groupId>org.teiid</groupId>
    <artifactId>teiid-spring-boot-starter</artifactId>
</dependency>
<dependency>
    <groupId>org.teiid.spring</groupId>
    <artifactId>spring-data-salesforce</artifactId>
</dependency>
application.properties settings to add to use Salesforce as a data source
You can specify values in application.properties
to configure Salesforce as a data source, as shown in the following table. Each of the properties uses the prefix spring.teiid.data.salesforce.
Property Name | Description | Default
---|---|---
url | Login URL |
requestTimeout | Request timeout | Integer.MAX_VALUE
connectTimeout | Connection timeout | Integer.MAX_VALUE
pollingInterval | Polling interval for bulk results | 500
clientId | OAuth2 client ID | N/A
clientSecret | OAuth2 client secret | N/A
refreshToken | OAuth2 refresh token | N/A
userName | User name | N/A
password | Password | N/A
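For example, an application.properties fragment for Salesforce might look like the following sketch. The angle-bracket placeholders are illustrative values that you replace with your own credentials:

```properties
# Hypothetical Salesforce connection properties; replace the
# placeholders with real values for your Salesforce organization.
spring.teiid.data.salesforce.userName=<sf-username>
spring.teiid.data.salesforce.password=<sf-password>
spring.teiid.data.salesforce.clientId=<oauth2-client-id>
spring.teiid.data.salesforce.clientSecret=<oauth2-client-secret>
```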
datasources.java file for connecting to Salesforce as a data source
The Datasources.java
file creates a class that Teiid can recognize as a Salesforce data source. The class acts as a resource adapter to enable the virtual database to access information in the data source.
package org.teiid.spring.example;

import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.teiid.spring.data.salesforce.SalesforceConfiguration;
import org.teiid.spring.data.salesforce.SalesforceConnectionFactory;

@Configuration
public class DataSources {

    @Bean
    public SalesforceConnectionFactory accounts(SalesforceConfiguration config) {
        return new SalesforceConnectionFactory(config);
    }

    @Bean
    @ConfigurationProperties("spring.teiid.data.salesforce")
    public SalesforceConfiguration salesforceConfig() {
        return new SalesforceConfiguration();
    }
}
The preceding class defines an accounts
bean. When you create the virtual database, the data virtualization service recognizes the class as a data source, reads its metadata, and generates an internal model from it. The data virtualization service can then read from and write to it.
DDL file changes to connect a virtual database to Salesforce as a data source
Update the name of the SERVER
object and the FOREIGN DATA WRAPPER
to reflect the data source. For example, assign the following values to objects in the DDL file:
- Set the name of the SERVER object to salesforce.
- Set the name of the FOREIGN DATA WRAPPER to salesforce.
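Applied to the DDL statements shown earlier, those two changes might look like the following sketch. This is a fragment only; the schema and view statements are unchanged from the PostgreSQL example:

```sql
-- Hypothetical DDL fragment for a Salesforce-backed virtual database;
-- only the wrapper and server names differ from the PostgreSQL example.
CREATE FOREIGN DATA WRAPPER salesforce;
CREATE SERVER salesforce TYPE 'NONE' FOREIGN DATA WRAPPER salesforce;
```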
4.13.2. Settings to connect to Google Sheets as a data source
If you want to use Google Sheets as a data source for your virtual database, you must add some source-specific settings. You can use the PostgreSQL files in the Teiid OpenShift repository as a starting point, but you’ll have to modify the files to adapt them for use with Google Sheets.
The following files contain information that you must modify to use Google Sheets as a data source:
- pom.xml
- application.properties
- deploymentconfig.yml
- Datasources.java
- DDL file
pom.xml settings for using Google Sheets as a data source
To support Google Sheets as a data source, you must add the following dependencies in the pom.xml
file:
- teiid-spring-boot-starter
- spring-data-google
For example:
<dependency>
    <groupId>org.teiid</groupId>
    <artifactId>teiid-spring-boot-starter</artifactId>
</dependency>
<dependency>
    <groupId>org.teiid.spring</groupId>
    <artifactId>spring-data-google</artifactId>
</dependency>
application.properties settings to add to use Google Sheets as a data source
You can specify values in application.properties
to configure Google Sheets as a data source, as shown in the following table. Each of the properties uses the prefix spring.teiid.data.google-sheets.
Property Name | Description | Default
---|---|---
refresh-token | OAuth2 refresh token | N/A
spread-sheet-name | Name of the Google spreadsheet | N/A
spread-sheet-id | Spreadsheet ID. Look in the URL of the spreadsheet. For more information, see https://developers.google.com/sheets/api/guides/concepts#spreadsheet_id | N/A
client-id | OAuth2 client ID | N/A
client-secret | OAuth2 client secret | N/A
Datasources.java file for connecting to Google Sheets as a data source
The Datasources.java
file creates a class that Teiid can recognize as a Google Sheets data source. The class acts as a resource adapter to enable the virtual database to access information in the data source.
@Configuration
public class DataSources {

    @Bean
    public SpreadsheetConnectionFactory accounts(SpreadSheetConfiguration config) {
        return new SpreadsheetConnectionFactory(config);
    }

    @Bean
    @ConfigurationProperties("spring.teiid.data.google.sheets")
    public SpreadSheetConfiguration sheetsConfig() {
        return new SpreadSheetConfiguration();
    }
}
The preceding class defines an accounts bean. When you create the virtual database, the data virtualization service recognizes the class as a data source, reads its metadata, and generates an internal model from it. The data virtualization service can then read from and write to it.
DDL file changes to connect a virtual database to Google Sheets as a data source
Update the name of the SERVER
object and the FOREIGN DATA WRAPPER
to reflect the data source. For example, assign the following values to objects in the DDL file:
- Set the name of the SERVER object to google-spreadsheet.
- Set the name of the FOREIGN DATA WRAPPER to google-spreadsheet.
4.14. Building a data virtualization project
After you complete the configuration tasks for your virtual database, you are ready to run Maven to build the project.
The Spring Boot Maven plugin creates a self-contained Uber JAR that includes all of the application code and dependencies in a single JAR file. The resulting JAR file serves as the basis for an OpenShift image. The OpenShift profile runs the Fabric8 Maven plugin, which packages the Uber JAR from the current build into an image that you can deploy to OpenShift.
After you make certain changes to the virtual database configuration, for example adding protocol services or routes for the virtual database, you must re-compile the project to update the deployed image.
Prerequisites
- You have completed all of the configuration tasks listed in Chapter 2, Virtual database creation.
Procedure
- Log in to OpenShift and run the following command:
mvn clean install -Popenshift -Dfabric8.namespace=`oc project -q`