Sends files from a Google Cloud Storage (GCS) bucket to the Google Document AI v1beta2 API for asynchronous (offline) processing. The output is delivered to the same bucket as JSON files containing the OCRed text and additional information, including table-related data.

dai_async_tab(
  files,
  filetype = "pdf",
  dest_folder = NULL,
  bucket = Sys.getenv("GCS_DEFAULT_BUCKET"),
  proj_id = get_project_id(),
  loc = "eu",
  token = dai_token(),
  pps = 100
)



A vector or list of filepaths to files in a GCS bucket. Filepaths must include all parent bucket folder(s) except the bucket name.


Either "pdf", "gif", or "tiff". If files is a vector, all elements must be of the same type.


The name of the bucket subfolder where you want the JSON output.


The name of the GCS bucket. Not necessary if you have set a default bucket as a .Renviron variable named GCS_DEFAULT_BUCKET, as described in the package vignette.
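As a minimal sketch (the bucket name below is a placeholder), a default bucket can also be set for the current session with Sys.setenv(), which dai_async_tab() will then pick up through Sys.getenv("GCS_DEFAULT_BUCKET"):

```r
# Set a default bucket for the current R session only.
# For a permanent setting, add the line
#   GCS_DEFAULT_BUCKET=my-bucket
# to your .Renviron file instead ("my-bucket" is a placeholder).
Sys.setenv(GCS_DEFAULT_BUCKET = "my-bucket")
Sys.getenv("GCS_DEFAULT_BUCKET")
```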


A Google Cloud project id.


A two-letter region code; either "eu" or "us".


An access token generated by dai_auth() or another auth function.


An integer from 1 to 100 specifying the desired number of pages per shard in the JSON output.


A list of HTTP responses
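A quick way to check whether the individual submissions were accepted is to inspect the status codes of the returned responses. This is an illustrative sketch, assuming the httr package and a hypothetical `resps` object holding the list returned by dai_async_tab():

```r
library(httr)

# `resps` is assumed to be the list of responses returned by dai_async_tab()
codes <- vapply(resps, status_code, integer(1))

# A 200 status code means the request was accepted for processing
all(codes == 200)
```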


This function accesses a different API endpoint than the main dai_async() function, one that has less language support but returns table data in addition to parsed text (which dai_async() currently does not). This function may be deprecated if and when the v1 API endpoint incorporates table extraction. Use of this service requires a Google Cloud access token and some configuration of the .Renviron file; see the package vignettes for details. Note that this API endpoint does not require a Document AI processor id. The maximum PDF document length is 2,000 pages, and the maximum number of pages in active processing is 10,000. Also note that this function does not provide true batch processing; instead it successively submits individual requests at 10-second intervals.
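Because processing is asynchronous, the JSON output appears in the bucket only after the jobs complete. One way to check for finished output is to list the bucket contents with the googleCloudStorageR package (assumed here to be installed and authenticated); the "processed" folder name is a placeholder for whatever dest_folder you used:

```r
library(googleCloudStorageR)

# List the objects in the bucket and keep only the JSON output files;
# "processed" is a placeholder for the dest_folder you passed, if any.
contents <- gcs_list_objects(bucket = Sys.getenv("GCS_DEFAULT_BUCKET"))
jsons <- grep("^processed/.*\\.json$", contents$name, value = TRUE)
jsons
```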


if (FALSE) {
# With daiR configured on your system, several parameters are automatically
# provided, and you can pass simple calls, such as:
dai_async_tab("my_document.pdf")

# NB: Include all parent bucket folders (but not the bucket name) in the filepath:
dai_async_tab("for_processing/pdfs/my_document.pdf")

# Bulk process by passing a vector of filepaths in the files argument:
dai_async_tab(my_files)

# Specify a bucket subfolder for the json output:
dai_async_tab(my_files, dest_folder = "processed")
}