query_exec {bigrquery}                                R Documentation

Run an asynchronous query and retrieve the results
This is a high-level function that inserts a query job
(with insert_query_job()), repeatedly checks the job's status (with
get_job()) until it is complete, and then retrieves the results
(with list_tabledata()); a sketch of this loop follows the usage below.
query_exec(query, project, destination_table = NULL, default_dataset = NULL,
page_size = 10000, max_pages = 10, warn = TRUE,
create_disposition = "CREATE_IF_NEEDED",
write_disposition = "WRITE_EMPTY", use_legacy_sql = TRUE,
quiet = getOption("bigrquery.quiet"), ...)
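
The description above amounts to a simple polling loop. The following
sketch shows the idea only: the fields on the job object and the
argument order of get_job() and list_tabledata() are assumptions based
on the BigQuery jobs API, not verified signatures, and error handling
is omitted.

## Illustrative sketch only; job fields and argument order are
## assumptions based on the BigQuery jobs API. No error handling.
job <- insert_query_job(query, project)
while (job$status$state != "DONE") {   # poll until the job completes
  Sys.sleep(1)
  job <- get_job(project, job$jobReference$jobId)
}
dest <- job$configuration$query$destinationTable  # where results land
list_tabledata(dest$projectId, dest$datasetId, dest$tableId)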
query
    SQL query string.

project
    The project name, a string.

destination_table
    (optional) Destination table for large queries, either as a string
    in the format used by BigQuery, or as a list with project_id,
    dataset_id, and table_id entries; see the sketch after this table.

default_dataset
    (optional) Default dataset for any table references in query,
    either as a string in the format used by BigQuery, or as a list
    with project_id and dataset_id entries.

page_size
    Number of items per page.

max_pages
    Maximum number of pages to retrieve. Use Inf to retrieve all pages.

warn
    If TRUE, warn when there are unretrieved pages.

create_disposition
    Behavior for table creation. Defaults to "CREATE_IF_NEEDED"; the
    only other supported value is "CREATE_NEVER".

write_disposition
    Behavior for writing data. Defaults to "WRITE_EMPTY"; other
    possible values are "WRITE_TRUNCATE" and "WRITE_APPEND".

use_legacy_sql
    (optional) Set to FALSE to enable BigQuery's standard SQL; see the
    sketch after this table.

quiet
    If FALSE, prints informative status messages.

...
    Additional arguments merged into the body of the request.
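
As an illustration of the list form of destination_table and of
use_legacy_sql described above (the project, dataset, and table names
here are placeholders, not values from this documentation):

sql <- "SELECT 17 AS answer"  # valid in both legacy and standard SQL
query_exec(sql, project = "my-project",
  destination_table = list(project_id = "my-project",
    dataset_id = "my_dataset", table_id = "results"),
  use_legacy_sql = FALSE)  # use BigQuery standard SQL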
Google documentation describing asynchronous queries: https://developers.google.com/bigquery/docs/queries#asyncqueries
Google documentation for handling large results: https://developers.google.com/bigquery/querying-data#largequeryresults
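
Per the large-results documentation linked above, a large query result
is typically routed through a destination table and then paged out in
full; a sketch, with placeholder project and table names:

sql <- "SELECT year, month FROM [publicdata:samples.natality]"
query_exec(sql, project = "my-project",
  destination_table = "my_dataset.big_results",  # dataset.table in project
  max_pages = Inf)  # retrieve every page of the result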
## Not run: 
project <- "fantastic-voyage-389" # put your project ID here
sql <- "SELECT year, month, day, weight_pounds FROM [publicdata:samples.natality] LIMIT 5"
query_exec(sql, project = project)

# Put the results in a table you own (which uses project by default)
query_exec(sql, project = project, destination_table = "my_dataset.results")

# Use a default dataset for the query
sql <- "SELECT year, month, day, weight_pounds FROM natality LIMIT 5"
query_exec(sql, project = project, default_dataset = "publicdata:samples")
## End(Not run)