BigQuery is Google's fully managed, petabyte-scale, low-cost analytics data warehouse. BigQuery is NoOps—there is no infrastructure to manage and you don't need a database administrator—so you can focus on analyzing data to find meaningful insights, use familiar SQL, and take advantage of our pay-as-you-go model.
- Read Prerequisites and How to run a sample first.
- Install dependencies:
With npm:
npm install
With yarn:
yarn install
Datasets: View the documentation or the source code.
Usage: node datasets.js --help
Commands:
create <datasetId> Creates a new dataset.
delete <datasetId> Deletes a dataset.
list Lists datasets.
Options:
--projectId, -p  The Project ID to use. Defaults to the value of the GCLOUD_PROJECT or GOOGLE_CLOUD_PROJECT environment variables. [string]
--help Show help [boolean]
Examples:
node datasets.js create my_dataset Creates a new dataset named "my_dataset".
node datasets.js delete my_dataset Deletes a dataset named "my_dataset".
node datasets.js list  Lists all datasets in the project specified by the GCLOUD_PROJECT or GOOGLE_CLOUD_PROJECT environment variables.
node datasets.js list --projectId=bigquery-public-data Lists all datasets in the "bigquery-public-data" project.
For more information, see https://cloud.google.com/bigquery/docs
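The datasets.js commands are built on the @google-cloud/bigquery client library. The sketch below shows roughly equivalent create/list/delete calls; `my_dataset` is a placeholder, the exact require style depends on the library version, and it assumes application default credentials are configured.

```js
// Sketch only: assumes `npm install @google-cloud/bigquery` and that
// GCLOUD_PROJECT / GOOGLE_APPLICATION_CREDENTIALS are already set.
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();

async function datasetLifecycle(datasetId) {
  // Create a new dataset.
  const [dataset] = await bigquery.createDataset(datasetId);
  console.log(`Created dataset ${dataset.id}`);

  // List all datasets in the current project.
  const [datasets] = await bigquery.getDatasets();
  datasets.forEach(d => console.log(d.id));

  // Delete the dataset again.
  await bigquery.dataset(datasetId).delete();
  console.log(`Deleted dataset ${datasetId}`);
}

datasetLifecycle('my_dataset').catch(console.error);
```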
Tables: View the documentation or the source code.
Usage: node tables.js --help
Commands:
create <datasetId> <tableId> <schema> Creates a new table.
list <datasetId> Lists all tables in a dataset.
delete <datasetId> <tableId> Deletes a table.
copy <srcDatasetId> <srcTableId> <destDatasetId> <destTableId>  Makes a copy of a table.
browse <datasetId> <tableId> Lists rows in a table.
import <datasetId> <tableId> <fileName> Imports data from a local file into a table.
import-gcs <datasetId> <tableId> <bucketName> <fileName>  Imports data from a Google Cloud Storage file into a table.
export <datasetId> <tableId> <bucketName> <fileName>  Exports a table from BigQuery to Google Cloud Storage.
insert <datasetId> <tableId> <json_or_file>  Inserts a JSON array (as a string or newline-delimited file) into a BigQuery table.
Options:
--projectId, -p  The Project ID to use. Defaults to the value of the GCLOUD_PROJECT or GOOGLE_CLOUD_PROJECT environment variables. [string]
--help Show help [boolean]
Examples:
node tables.js create my_dataset my_table "Name:string, Age:integer, Weight:float, IsMagic:boolean"  Creates a new table named "my_table" in "my_dataset".
node tables.js list my_dataset  Lists tables in "my_dataset".
node tables.js browse my_dataset my_table  Displays rows from "my_table" in "my_dataset".
node tables.js delete my_dataset my_table  Deletes "my_table" from "my_dataset".
node tables.js import my_dataset my_table ./data.csv  Imports a local file into a table.
node tables.js import-gcs my_dataset my_table my-bucket data.csv  Imports a GCS file into a table.
node tables.js export my_dataset my_table my-bucket my-file  Exports my_dataset:my_table to gs://my-bucket/my-file as raw CSV.
node tables.js export my_dataset my_table my-bucket my-file -f JSON --gzip  Exports my_dataset:my_table to gs://my-bucket/my-file as gzipped JSON.
node tables.js insert my_dataset my_table json_string  Inserts the JSON array represented by json_string into my_dataset:my_table.
node tables.js insert my_dataset my_table json_file  Inserts the JSON objects contained in json_file (one per line) into my_dataset:my_table.
node tables.js copy src_dataset src_table dest_dataset dest_table  Copies src_dataset:src_table to dest_dataset:dest_table.
For more information, see https://cloud.google.com/bigquery/docs
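As a rough guide to the underlying client-library calls, the sketch below covers create, import, insert, and browse. It is not a copy of tables.js: the dataset, table, and file names are placeholders, and a real CSV load usually needs extra options (such as skipping a header row).

```js
// Sketch only: placeholder names, assumes the dataset already exists.
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();

async function tableBasics() {
  const dataset = bigquery.dataset('my_dataset');

  // Create a table using the same schema shorthand the CLI accepts.
  const [table] = await dataset.createTable('my_table', {
    schema: 'Name:string, Age:integer, Weight:float, IsMagic:boolean',
  });

  // Load rows from a local CSV file via a load job.
  await table.load('./data.csv');

  // Stream-insert a single JSON row.
  await table.insert([{Name: 'Gandalf', Age: 2000, Weight: 70.5, IsMagic: true}]);

  // Browse a few rows.
  const [rows] = await table.getRows({maxResults: 10});
  rows.forEach(row => console.log(row));
}

tableBasics().catch(console.error);
```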
Queries: View the documentation or the source code.
Usage: node queries.js --help
Commands:
sync <sqlQuery> Run the specified synchronous query.
async <sqlQuery> Start the specified asynchronous query.
shakespeare Queries a public Shakespeare dataset.
Options:
--projectId, -p  The Project ID to use. Defaults to the value of the GCLOUD_PROJECT or GOOGLE_CLOUD_PROJECT environment variables. [string]
--help Show help [boolean]
Examples:
node queries.js sync "SELECT * FROM publicdata.samples.natality LIMIT 5;"  Synchronously queries the natality dataset.
node queries.js async "SELECT * FROM publicdata.samples.natality LIMIT 5;"  Queries the natality dataset as a job.
node queries.js shakespeare Queries a public Shakespeare dataset.
For more information, see https://cloud.google.com/bigquery/docs
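The same queries can be run directly from code. The sketch below (assuming a current version of @google-cloud/bigquery; the helper name is made up for illustration) shows both the simple query call and the job-based variant that the sync/async commands correspond to.

```js
// Sketch only: queries public sample tables referenced in the examples above.
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();

async function runQueries() {
  // Simple form: submit the query and wait for the rows.
  const [rows] = await bigquery.query({
    query: 'SELECT * FROM publicdata.samples.natality LIMIT 5',
  });
  rows.forEach(row => console.log(row));

  // Job-based form: start a query job, then fetch its results separately.
  const [job] = await bigquery.createQueryJob({
    query: 'SELECT word, word_count FROM `bigquery-public-data.samples.shakespeare` LIMIT 10',
  });
  const [jobRows] = await job.getQueryResults();
  console.log(`Job ${job.id} returned ${jobRows.length} rows`);
}

runQueries().catch(console.error);
```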
- Set the GCLOUD_PROJECT and GOOGLE_APPLICATION_CREDENTIALS environment variables. For example (the project ID and key file path below are placeholders for your own values), as shown next:
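export GCLOUD_PROJECT=your-project-id
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account-key.json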
- Run the tests:
With npm:
npm test
With yarn:
yarn test