| insert_upload_job {bigrquery} | R Documentation |
This sends all of the data inline in the HTTP request, so it is only suitable for relatively small datasets.
insert_upload_job(project, dataset, table, values, billing = project,
  create_disposition = "CREATE_IF_NEEDED",
  write_disposition = "WRITE_APPEND", ...)
project | The project name, a string. |
dataset | The name of the dataset to create, a string. |
table | The name of the table to insert values into, a string. |
values | A data frame of data to upload. |
billing | The project ID to use for billing. |
create_disposition | Behavior for table creation if the destination already exists. Defaults to "CREATE_IF_NEEDED". |
write_disposition | Behavior for writing data if the destination already exists. Defaults to "WRITE_APPEND". |
... | Additional arguments merged into the body of the request. |
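The two dispositions control what happens when the destination table already exists. As a sketch (assuming the "houston" dataset used in the examples below already exists), passing "WRITE_TRUNCATE" replaces the table contents rather than appending to them:

```r
library(bigrquery)

## Hypothetical project ID, reused from the examples below.
project <- "193487687779"

## "WRITE_TRUNCATE" deletes any existing rows in houston.mtcars
## before loading the new data; the default "WRITE_APPEND" would
## add the rows to whatever is already there.
job <- insert_upload_job(project, "houston", "mtcars", mtcars,
  write_disposition = "WRITE_TRUNCATE")
wait_for(job)
```

Because the upload runs as an asynchronous job, `wait_for()` blocks until BigQuery reports the load as complete.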
Google API documentation: https://developers.google.com/bigquery/loading-data-into-bigquery#loaddatapostrequest
Other jobs: get_job, insert_extract_job, insert_query_job, wait_for
## Not run:
list_datasets("193487687779")
list_tables("193487687779", "houston")
job <- insert_upload_job("193487687779", "houston", "mtcars", mtcars)
wait_for(job)
list_tables("193487687779", "houston")
delete_table("193487687779", "houston", "mtcars")
## End(Not run)