PutBigQueryBatch

Deprecation notice:

This processor is deprecated and may be removed in future releases.

Please consider using one of the following alternatives: PutBigQuery

Description:

Please be aware this processor is deprecated and may be removed in the near future. Use PutBigQuery instead. Batch loads the content of incoming flow files into a Google BigQuery table.
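
Conceptually, the processor wraps a BigQuery batch load job. The Java sketch below shows the equivalent operation through the google-cloud-bigquery client, purely for orientation; the dataset, table, and file names are hypothetical and this is not the processor's actual source:

    import com.google.cloud.bigquery.*;

    import java.io.OutputStream;
    import java.nio.channels.Channels;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    public class BatchLoadSketch {
        public static void main(String[] args) throws Exception {
            BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();

            // Destination table; "my_dataset" and "my_table" are placeholders.
            TableId tableId = TableId.of("my_dataset", "my_table");

            // Load configuration mirroring the processor's properties
            // (Load file type, Create/Write Disposition, Ignore Unknown
            // Values, Max Bad Records).
            WriteChannelConfiguration config =
                WriteChannelConfiguration.newBuilder(tableId)
                    .setFormatOptions(FormatOptions.json()) // NEWLINE_DELIMITED_JSON
                    .setCreateDisposition(JobInfo.CreateDisposition.CREATE_IF_NEEDED)
                    .setWriteDisposition(JobInfo.WriteDisposition.WRITE_EMPTY)
                    .setIgnoreUnknownValues(false)
                    .setMaxBadRecords(0)
                    .build();

            // Stream the content (the flow file, in the processor's case)
            // into the load job.
            TableDataWriteChannel writer = bigquery.writer(config);
            try (OutputStream out = Channels.newOutputStream(writer)) {
                Files.copy(Paths.get("records.ndjson"), out); // placeholder input
            }

            // Wait for completion; a job error corresponds to routing the
            // flow file to the failure relationship.
            Job job = writer.getJob().waitFor();
            if (job.getStatus().getError() != null) {
                System.err.println("Load failed: " + job.getStatus().getError());
            }
        }
    }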

Tags:

google, google cloud, bq, bigquery

Properties:

Each property below lists its display name, its API name, its default value and allowable values where applicable, and a description, including whether the property supports the NiFi Expression Language.

Project ID
  API Name: gcp-project-id
  Description: Google Cloud Project ID
  Supports Expression Language: true (will be evaluated using variable registry only)

GCP Credentials Provider Service
  API Name: GCP Credentials Provider Service
  Controller Service API: GCPCredentialsService
  Implementation: GCPCredentialsControllerService
  Description: The Controller Service used to obtain Google Cloud Platform credentials.

Number of retries
  API Name: gcp-retry-count
  Default Value: 6
  Description: How many retry attempts should be made before routing to the failure relationship.

Proxy host
  API Name: gcp-proxy-host
  Description: IP or hostname of the proxy to be used. For HTTPS proxy usage you might need to set the following properties in bootstrap.conf (see the example below): -Djdk.http.auth.tunneling.disabledSchemes= -Djdk.http.auth.proxying.disabledSchemes=
  Supports Expression Language: true (will be evaluated using variable registry only)

Proxy port
  API Name: gcp-proxy-port
  Description: Proxy port number
  Supports Expression Language: true (will be evaluated using variable registry only)

HTTP Proxy Username
  API Name: gcp-proxy-user-name
  Description: HTTP Proxy Username
  Supports Expression Language: true (will be evaluated using variable registry only)

HTTP Proxy Password
  API Name: gcp-proxy-user-password
  Description: HTTP Proxy Password
  Sensitive Property: true
  Supports Expression Language: true (will be evaluated using variable registry only)

Proxy Configuration Service
  API Name: proxy-configuration-service
  Controller Service API: ProxyConfigurationService
  Implementation: StandardProxyConfigurationService
  Description: Specifies the Proxy Configuration Controller Service used to proxy network requests. If set, it supersedes proxy settings configured per component. Supported proxies: HTTP + AuthN
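
For authenticated HTTPS proxying, the JDK restrictions mentioned under Proxy host can be lifted in NiFi's conf/bootstrap.conf, for example (the java.arg indices below are placeholders; choose numbers not already used in your file):

    java.arg.20=-Djdk.http.auth.tunneling.disabledSchemes=
    java.arg.21=-Djdk.http.auth.proxying.disabledSchemes=
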
Dataset
  API Name: bq.dataset
  Default Value: ${bq.dataset}
  Description: BigQuery dataset name (note: the dataset must already exist in GCP)
  Supports Expression Language: true (will be evaluated using flow file attributes and variable registry)

Table Name
  API Name: bq.table.name
  Default Value: ${bq.table.name}
  Description: BigQuery table name
  Supports Expression Language: true (will be evaluated using flow file attributes and variable registry)

Ignore Unknown Values
  API Name: bq.load.ignore_unknown
  Default Value: false
  Description: Sets whether BigQuery should allow extra values that are not represented in the table schema. If true, the extra values are ignored. If false, records with extra columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. By default, unknown values are not allowed.
  Supports Expression Language: true (will be evaluated using flow file attributes and variable registry)

Table Schema
  API Name: bq.table.schema
  Description: BigQuery schema in JSON format
  Supports Expression Language: true (will be evaluated using flow file attributes and variable registry)
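
The value is expected in BigQuery's usual JSON schema format: an array of field objects, each with a name, a type, and optionally a mode. A hypothetical two-field example:

    [
      { "name": "id",   "type": "INTEGER", "mode": "REQUIRED" },
      { "name": "name", "type": "STRING",  "mode": "NULLABLE" }
    ]
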
Read Timeout
  API Name: bq.readtimeout
  Default Value: 5 minutes
  Description: Load job timeout
  Supports Expression Language: true (will be evaluated using flow file attributes and variable registry)

Load file type
  API Name: bq.load.type
  Description: Data type of the file to be loaded. Possible values: AVRO, NEWLINE_DELIMITED_JSON, CSV.
  Supports Expression Language: true (will be evaluated using flow file attributes and variable registry)
Create Disposition
  API Name: bq.load.create_disposition
  Default Value: CREATE_IF_NEEDED
  Allowable Values:
    • CREATE_IF_NEEDED: Configures the job to create the table if it does not exist.
    • CREATE_NEVER: Configures the job to fail with a not-found error if the table does not exist.
  Description: Sets whether the job is allowed to create new tables.

Write Disposition
  API Name: bq.load.write_disposition
  Default Value: WRITE_EMPTY
  Allowable Values:
    • WRITE_EMPTY: Configures the job to fail with a duplicate error if the table already exists.
    • WRITE_APPEND: Configures the job to append data to the table if it already exists.
    • WRITE_TRUNCATE: Configures the job to overwrite the table data if the table already exists.
  Description: Sets the action that should occur if the destination table already exists.

Max Bad Records
  API Name: bq.load.max_badrecords
  Default Value: 0
  Description: Sets the maximum number of bad records that BigQuery can ignore when running the job. If the number of bad records exceeds this value, an invalid error is returned in the job result. By default, no bad records are ignored.
CSV Input - Allow Jagged Rows
  API Name: bq.csv.allow.jagged.rows
  Default Value: false
  Allowable Values: true, false
  Description: Sets whether BigQuery should accept rows that are missing trailing optional columns. If true, BigQuery treats missing trailing columns as null values. If false, records with missing trailing columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. By default, rows with missing trailing columns are considered bad records.

CSV Input - Allow Quoted New Lines
  API Name: bq.csv.allow.quoted.new.lines
  Default Value: false
  Allowable Values: true, false
  Description: Sets whether BigQuery should allow quoted data sections that contain newline characters in a CSV file. By default, quoted newlines are not allowed.

CSV Input - Character Set
  API Name: bq.csv.charset
  Default Value: UTF-8
  Allowable Values: UTF-8, ISO-8859-1
  Description: Sets the character encoding of the data.

CSV Input - Field Delimiter
  API Name: bq.csv.delimiter
  Default Value: ,
  Description: Sets the separator for fields in a CSV file. BigQuery converts the string to ISO-8859-1 encoding, and then uses the first byte of the encoded string to split the data in its raw, binary state. BigQuery also supports the escape sequence "\t" to specify a tab separator. The default value is a comma (',').
  Supports Expression Language: true (will be evaluated using flow file attributes and variable registry)

CSV Input - Quote
  API Name: bq.csv.quote
  Default Value: "
  Description: Sets the value that is used to quote data sections in a CSV file. BigQuery converts the string to ISO-8859-1 encoding, and then uses the first byte of the encoded string to split the data in its raw, binary state. The default value is a double quote ('"'). If your data does not contain quoted sections, set the property value to an empty string. If your data contains quoted newline characters, you must also set the Allow Quoted New Lines property to true.
  Supports Expression Language: true (will be evaluated using flow file attributes and variable registry)

CSV Input - Skip Leading Rows
  API Name: bq.csv.skip.leading.rows
  Default Value: 0
  Description: Sets the number of rows at the top of a CSV file that BigQuery will skip when reading the data. The default value is 0. This property is useful if you have header rows in the file that should be skipped.
  Supports Expression Language: true (will be evaluated using flow file attributes and variable registry)
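
These CSV properties correspond closely to the CsvOptions builder in the google-cloud-bigquery Java client; as a sketch, showing the default values listed above (illustrative, not the processor's literal source):

    import com.google.cloud.bigquery.CsvOptions;

    public class CsvOptionsSketch {
        // Mirrors the CSV Input properties with their documented defaults.
        static CsvOptions defaults() {
            return CsvOptions.newBuilder()
                    .setAllowJaggedRows(false)      // CSV Input - Allow Jagged Rows
                    .setAllowQuotedNewLines(false)  // CSV Input - Allow Quoted New Lines
                    .setEncoding("UTF-8")           // CSV Input - Character Set
                    .setFieldDelimiter(",")         // CSV Input - Field Delimiter
                    .setQuote("\"")                 // CSV Input - Quote
                    .setSkipLeadingRows(0)          // CSV Input - Skip Leading Rows
                    .build();
        }
    }
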
Avro Input - Use Logical Types
  API Name: bq.avro.use.logical.types
  Default Value: false
  Allowable Values: true, false
  Description: If the load file type is AVRO and this option is set to true, logical types are interpreted as their corresponding types (such as TIMESTAMP) instead of only their raw types (such as INTEGER).

Relationships:

success: FlowFiles are routed to this relationship after a successful Google BigQuery operation.
failure: FlowFiles are routed to this relationship if the Google BigQuery operation fails.

Reads Attributes:

None specified.

Writes Attributes:

bq.job.stat.creation_time: Time the load job was created
bq.job.stat.end_time: Time the load job ended
bq.job.stat.start_time: Time the load job started
bq.job.link: API link to the load job
bq.job.id: ID of the BigQuery job
bq.error.message: Load job error message
bq.error.reason: Load job error reason
bq.error.location: Load job error location
bq.records.count: Number of records successfully inserted
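
Downstream processors can read these attributes with the NiFi Expression Language, for example in a hypothetical attribute or log message template:

    Loaded ${bq.records.count} records in BigQuery job ${bq.job.id}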

State management:

This component does not store state.

Restricted:

This component is not restricted.

Input requirement:

This component requires an incoming relationship.

System Resource Considerations:

None specified.

See Also:

PutGCSObject, DeleteGCSObject