DSS can connect to Google BigQuery through a JDBC driver developed by Simba.
- Java 8 is required
BigQuery uses wording that sometimes clashes with DSS concepts. Here are a few hints:
- A DSS Dataset corresponds to a BigQuery table
- What BigQuery calls a dataset is called a schema in DSS
- A DSS partition might not translate into a BigQuery table partition and vice versa (see below for more explanations)
DSS supports the following features on BigQuery:
- Creating external datasets targeting BigQuery projects
- Reading data across all datasets and tables
- Reading all BigQuery data types
- SQL notebook for interactive querying
- SQL query datasets
- SQL query recipes, with inputs and outputs in BigQuery
- Sync query recipes, with output in BigQuery and input in either Google Cloud Storage or BigQuery.
- All visual recipes (Group, Join, VStack, Window, Filter executed in BigQuery), with inputs and outputs in BigQuery
- Python code recipes with inputs and outputs in BigQuery if you’re using SQLExecutor2 to generate the results.
- Charts with DSS and In-Database engine modes.
- BigQuery native table partitioning by Date or by Range, and clustering.
If your data is not stored in BigQuery, you first need to upload it into a Google Cloud Storage (GCS) bucket. Once it is in GCS, you can use a Sync recipe to transfer it into BigQuery.
The following are not supported:
- BigQuery wildcard tables
- BigQuery BYTES, DATETIME, DATE, TIME, ARRAY, STRUCT data types
- INSERT / CREATE statements in SQL notebook
- SQL script recipes
- Code recipes (except Python with SQLExecutor2)
- Creation of BigQuery tables partitioned by ingestion time
Installing the JDBC driver
Certain older versions of the JDBC driver contain a bug that prevents any connection-related error message from displaying, making investigation of connection issues extremely difficult.
We recommend that you use a more recent version of the driver, which is unaffected.
- The JDBC Driver can be downloaded from https://cloud.google.com/bigquery/providers/simba-drivers/
- Choose the “JDBC 4.2-compatible” download (beware: do not choose ODBC but JDBC)
- Unzip the downloaded file
The JDBC driver is made of many JAR files, all of which are contained in the Zip file.
- In your DSS data directory, create a subdirectory of lib/jdbc named bigquery. If you are updating the driver from an old version and the directory already exists, remove all files inside it.
- Copy all the JAR files to lib/jdbc/bigquery.
- If the file slf4j-api-1.7.30.jar (or a later version) is present in lib/jdbc/bigquery, remove the file.
If you are updating your BigQuery driver to a newer version and the old version had been copied directly into lib/jdbc, carefully remove all the files of the old BigQuery driver from lib/jdbc, and then copy the files of the new driver into lib/jdbc/bigquery.
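The copy steps above can be sketched in Python. This is a minimal illustration under stated assumptions: the function name and directory layout are ours (the target path lib/jdbc/bigquery comes from the instructions above), not part of any DSS API:

```python
import shutil
from pathlib import Path

def install_bigquery_driver(unzipped_driver_dir: str, dss_datadir: str) -> Path:
    """Copy the Simba driver JARs into <DATADIR>/lib/jdbc/bigquery,
    removing any JARs from a previous driver version first.
    Illustrative helper, not a DSS API."""
    target = Path(dss_datadir) / "lib" / "jdbc" / "bigquery"
    target.mkdir(parents=True, exist_ok=True)
    # Updating from an older version: clear out the old JARs first.
    for old in target.glob("*.jar"):
        old.unlink()
    for jar in Path(unzipped_driver_dir).glob("*.jar"):
        # slf4j-api-*.jar conflicts with the copy DSS already ships, so skip it
        # (equivalent to copying everything and then removing it).
        if jar.name.startswith("slf4j-api-"):
            continue
        shutil.copy2(jar, target / jar.name)
    return target
```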
Connecting to BigQuery
DSS connects to BigQuery using Service Account-based authentication.
- You first need to create a Google Service Account
- Create a private key for this account, and download the corresponding JSON file.
- Upload the JSON file somewhere on the DSS server.
In the connection settings, in the Secret key field, enter the absolute path (on the DSS server) to the credentials JSON file.
Alternatively, you can directly enter the content of the JSON file in the Secret key field to avoid storing the file on the server. Keep in mind that in this latter case, any DSS administrator will be able to see the content of this private file.
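If you are unsure whether a given JSON file is a valid service-account key before pointing the connection at it, a quick sanity check can be scripted. The helper below is hypothetical (not a DSS API); the field names are the ones Google includes in every service-account key file:

```python
import json

# Fields present in every Google service-account key file.
REQUIRED_FIELDS = {"type", "project_id", "private_key", "client_email"}

def check_service_account_key(path: str) -> str:
    """Sanity-check the credentials JSON referenced in the Secret key
    field, and return the service account's email address."""
    with open(path) as f:
        key = json.load(f)
    missing = REQUIRED_FIELDS - key.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    if key["type"] != "service_account":
        raise ValueError("not a service-account key file")
    return key["client_email"]
```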
In the connection settings, you must also specify the location of the JDBC driver JAR files: enter lib/jdbc/bigquery in the Driver jars directory field.
Finally, we recommend that you add an Advanced JDBC property:
- Key: Timeout
- Value: 180
If you need to connect to BigQuery via a proxy, you must add advanced JDBC properties giving the driver your proxy settings (and also configure the global proxy for DSS in the “Settings” tab of the Administration page):
- The IP address or host name of your proxy server
- The listening port of your proxy server
- The user name, if needed, for the proxy server
- The password, if needed, for the proxy server
BigQuery native partitioning and clustering
By default, DSS does not create BigQuery tables that are partitioned at the database level, even when creating a DSS partitioned dataset.
To create a BigQuery table that is natively partitioned:
- Go to the settings for your Dataset and open the Advanced tab.
- Check the “Create BigQuery partitioned table” checkbox and indicate the column to use to partition the table. This column must be of type DATE, INT, TINYINT, SMALLINT or BIGINT (in the DSS semantics).
BigQuery partitioned tables can also be clustered by up to 4 clustering columns of type STRING, DATE, BOOLEAN, INT, TINYINT, SMALLINT or BIGINT (in the DSS semantics).
You must manually configure BigQuery native partitioning and clustering for each and every DSS dataset. DSS does not automatically propagate these settings when creating new datasets.
You can create a BigQuery partitioned table even for a DSS dataset that is not partitioned (in the DSS semantics).
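To make the Date and Range options above concrete, here is a small Python sketch of how BigQuery assigns a row to a native partition, per BigQuery's documented behavior for daily and integer-range partitioning (the helper names are hypothetical):

```python
from datetime import date

def date_partition_id(d: date) -> str:
    """Daily partition id used by BigQuery date partitioning,
    e.g. 2024-03-01 -> '20240301'."""
    return d.strftime("%Y%m%d")

def range_partition_id(value: int, start: int, end: int, interval: int) -> str:
    """Lower bound of the integer-range partition a value falls into.
    Per BigQuery's documented behavior, values outside [start, end)
    land in the special __UNPARTITIONED__ partition."""
    if value < start or value >= end:
        return "__UNPARTITIONED__"
    return str(start + ((value - start) // interval) * interval)
```

For example, with a range partition spec of start 0, end 100, interval 10, a row with value 15 lands in the partition whose lower bound is 10.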