PutIceberg

Description:

This processor uses the Iceberg API to parse records and load them into Iceberg tables. Incoming data is parsed with the configured Record Reader Controller Service and ingested into an Iceberg table using the configured Catalog Service and the provided table information. The target Iceberg table must already exist, and its schema must match the incoming records: the Record Reader schema must contain every field of the Iceberg schema, while any additional field not present in the Iceberg schema is ignored. To avoid the 'small file problem', it is recommended to place a MergeRecord processor upstream.
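The schema rule above can be sketched as follows. This is an illustrative model only (not NiFi or Iceberg code), it assumes the default 'Fail on Unmatched Columns' behavior, and the field names are hypothetical.

```python
# Illustrative sketch of the schema-matching rule: every Iceberg field must
# be present in the record, and extra record fields are dropped.

def check_and_project(record: dict, iceberg_fields: set) -> dict:
    """Fail if any Iceberg field is missing; ignore fields Iceberg lacks."""
    missing = iceberg_fields - record.keys()
    if missing:
        raise ValueError(f"Record schema is missing Iceberg fields: {missing}")
    # Additional fields not present in the Iceberg schema are ignored
    return {k: v for k, v in record.items() if k in iceberg_fields}

# Hypothetical table fields and record
iceberg_fields = {"id", "name", "ts"}
record = {"id": 1, "name": "a", "ts": "2024-01-01", "extra": "dropped"}
print(check_and_project(record, iceberg_fields))  # 'extra' is ignored
```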


Tags:

iceberg, put, table, store, record, parse, orc, parquet, avro

Properties:

In the list below, the names of required properties appear in bold. Any other properties (not in bold) are considered optional. The list also indicates any default values, and whether a property supports the NiFi Expression Language.

Record Reader (API Name: record-reader)
    Controller Service API: RecordReaderFactory
    Implementations: CSVReader, JsonPathReader, AvroReader, CEFReader, Syslog5424Reader, JsonTreeReader, WindowsEventLogReader, XMLReader, SyslogReader, JASN1Reader, ReaderLookup, ParquetReader, GrokReader, ScriptedReader, YamlTreeReader, ExcelReader
    Specifies the Controller Service to use for parsing incoming data and determining the data's schema.

Catalog Service (API Name: catalog-service)
    Controller Service API: IcebergCatalogService
    Implementations: HiveCatalogService, HadoopCatalogService
    Specifies the Controller Service to use for handling references to the table's metadata files.

Catalog Namespace (API Name: catalog-namespace)
    The namespace of the catalog.
    Supports Expression Language: true (will be evaluated using flow file attributes and Environment variables)

Table Name (API Name: table-name)
    The name of the Iceberg table to write to.
    Supports Expression Language: true (will be evaluated using flow file attributes and Environment variables)
Unmatched Column Behavior (API Name: unmatched-column-behavior)
    Default Value: Fail on Unmatched Columns
    Allowable Values:
      • Ignore Unmatched Columns: Any column in the database that does not have a field in the document is assumed not to be required. No notification is logged.
      • Warn on Unmatched Columns: Any column in the database that does not have a field in the document is assumed not to be required. A warning is logged.
      • Fail on Unmatched Columns: The flow file fails if any column in the database does not have a field in the document. An error is logged.
    If an incoming record does not have a field mapping for all of the database table's columns, this property specifies how to handle the situation.
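The three behaviors above can be summarized in a short sketch. This is an illustrative model of the documented semantics, not the processor's actual implementation, and the column name used is hypothetical.

```python
# Illustrative sketch: how each Unmatched Column Behavior setting treats a
# table column that has no matching field in the incoming record.
import logging

def handle_unmatched(column: str, behavior: str) -> None:
    if behavior == "Ignore Unmatched Columns":
        pass  # column assumed not required; nothing is logged
    elif behavior == "Warn on Unmatched Columns":
        logging.warning("No field for column %r; assuming not required", column)
    elif behavior == "Fail on Unmatched Columns":
        raise ValueError(f"No field for required column {column!r}")
    else:
        raise ValueError(f"Unknown behavior: {behavior!r}")
```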
File Format (API Name: file-format)
    Allowable Values: AVRO, PARQUET, ORC
    File format to use when writing Iceberg data files. If not set, the 'write.format.default' table property is used; its default value is parquet.

Maximum File Size (API Name: maximum-file-size)
    The maximum size a data file can reach; if it is exceeded, a new file is generated with the remaining data. If not set, the 'write.target-file-size-bytes' table property is used; its default value is 512 MB.
    Supports Expression Language: true (will be evaluated using flow file attributes and Environment variables)

Kerberos User Service (API Name: kerberos-user-service)
    Controller Service API: KerberosUserService
    Implementations: KerberosPasswordUserService, KerberosKeytabUserService, KerberosTicketCacheUserService
    Specifies the Kerberos User Controller Service that should be used for authenticating with Kerberos.

Number of Commit Retries (API Name: number-of-commit-retries)
    Default Value: 10
    Number of times to retry a commit before failing.
    Supports Expression Language: true (will be evaluated using flow file attributes and Environment variables)

Minimum Commit Wait Time (API Name: minimum-commit-wait-time)
    Default Value: 100 ms
    Minimum time to wait before retrying a commit.
    Supports Expression Language: true (will be evaluated using flow file attributes and Environment variables)

Maximum Commit Wait Time (API Name: maximum-commit-wait-time)
    Default Value: 2 sec
    Maximum time to wait before retrying a commit.
    Supports Expression Language: true (will be evaluated using flow file attributes and Environment variables)

Maximum Commit Duration (API Name: maximum-commit-duration)
    Default Value: 30 sec
    Total retry timeout period for a commit.
    Supports Expression Language: true (will be evaluated using flow file attributes and Environment variables)
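The four commit-retry properties work together as a bounded retry loop. The sketch below is an illustrative model of that interaction, not Iceberg's actual retry code; the exponential-backoff shape and the parameter defaults (10 retries, 100 ms minimum wait, 2 s maximum wait, 30 s total duration) mirror the property defaults above.

```python
# Illustrative sketch: retry a commit up to `retries` times, waiting between
# attempts with exponential backoff clamped to [min_wait, max_wait], and give
# up once the total duration budget is spent.
import time

def commit_with_retries(commit, retries=10, min_wait=0.1, max_wait=2.0,
                        max_duration=30.0):
    deadline = time.monotonic() + max_duration
    for attempt in range(retries + 1):
        try:
            return commit()
        except Exception:
            if attempt == retries or time.monotonic() >= deadline:
                raise  # retry budget or time budget exhausted
            wait = min(max_wait, min_wait * (2 ** attempt))
            time.sleep(min(wait, max(0.0, deadline - time.monotonic())))
```

For example, a commit that fails twice with conflicts and then succeeds would sleep roughly min_wait and 2×min_wait before the third, successful attempt.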

Dynamic Properties:

Supports Sensitive Dynamic Properties: No

Dynamic Properties allow the user to specify both the name and value of a property.

Name: A custom key to add to the snapshot summary. The name must start with the 'snapshot-property.' prefix.
Value: A custom value to add to the snapshot summary.
Description: Adds an entry with the custom key and corresponding value to the snapshot summary. The key format must be 'snapshot-property.custom-key'.
Supports Expression Language: true (will be evaluated using flow file attributes and Environment variables)
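The key-format rule can be sketched as a small validation step. This is an illustrative model, assuming (not confirmed by this page) that the 'snapshot-property.' prefix is stripped before the entry lands in the snapshot summary; the key and value shown are hypothetical.

```python
# Illustrative sketch: validate a dynamic property name and derive the
# snapshot summary entry from it.
PREFIX = "snapshot-property."

def to_summary_entry(key: str, value: str):
    if not key.startswith(PREFIX):
        raise ValueError(f"Key must start with {PREFIX!r}: {key!r}")
    return key[len(PREFIX):], value  # assumption: prefix stripped in summary

print(to_summary_entry("snapshot-property.ingested-by", "nifi"))
# → ('ingested-by', 'nifi')
```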

Relationships:

failure: A FlowFile is routed to this relationship if the operation failed and retrying would also fail, such as on invalid data or an invalid schema.
success: A FlowFile is routed to this relationship after the data ingestion was successful.

Reads Attributes:

None specified.

Writes Attributes:

iceberg.record.count: The number of records in the FlowFile.

State management:

This component does not store state.

Restricted:

This component is not restricted.

System Resource Considerations:

None specified.