DeduplicateRecord

Description:

This processor de-duplicates individual records within a record set. It can operate on a per-file basis using an in-memory hashset or bloom filter. When configured with a distributed map cache, it de-duplicates records across multiple files.

Tags:

text, record, update, change, replace, modify, distinct, unique, filter, hash, dupe, duplicate, dedupe

Properties:

In the list below, the names of required properties appear in bold. Any other properties (not in bold) are considered optional. The table also indicates any default values, and whether a property supports the NiFi Expression Language.

Deduplication Strategy (deduplication-strategy)
  Default Value: Single File
  Allowable Values:
    • Single File
    • Multiple Files
  Description: The strategy used to detect and route duplicate records. The 'Single File' option detects duplicates within a single FlowFile in memory, whereas 'Multiple Files' detects duplicates spanning multiple FlowFiles using a distributed map cache.
Distributed Map Cache client (distributed-map-cache)
  Controller Service API: DistributedMapCacheClient
  Implementations: HBase_2_ClientMapCacheService, SimpleRedisDistributedMapCacheClientService, HazelcastMapCacheClient, CouchbaseMapCacheClient, RedisDistributedMapCacheClientService, DistributedMapCacheClientService, CassandraDistributedMapCache
  Description: This property is required when the deduplication strategy is set to 'Multiple Files'. For each record, the map cache atomically checks whether the cache key exists and, if it does not, sets it.

  This Property is only considered if the [Deduplication Strategy] Property has a value of "Multiple Files".
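The atomic check-then-set contract described above can be sketched as follows. The `MapCacheStub` class below is a hypothetical in-memory stand-in for a DistributedMapCacheClient (real NiFi clients talk to Redis, HBase, Hazelcast, etc.); only the semantics are illustrated.

```python
import hashlib

class MapCacheStub:
    """Hypothetical in-memory stand-in for a distributed map cache client."""

    def __init__(self):
        self._store = {}

    def contains_or_put(self, key, value, put_if_absent=True):
        """Atomically report whether `key` exists; optionally insert it."""
        if key in self._store:
            return True          # duplicate: the key was already cached
        if put_if_absent:
            self._store[key] = value
        return False             # first sighting of this key

cache = MapCacheStub()
key = hashlib.sha256(b"user-123~2024-01-01").hexdigest()
print(cache.contains_or_put(key, b""))  # False: not seen before
print(cache.contains_or_put(key, b""))  # True: now a duplicate
```

Whether the identifier is actually written to the cache is controlled by the 'Cache the Entry Identifier' property, mirrored here by the `put_if_absent` flag.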
Cache Identifier (cache-identifier)
  Description: An optional Expression Language field that overrides the record's computed cache key. This field has an additional attribute available: ${record.hash.value}, which contains the cache key derived from dynamic properties (if set) or record fields.
  Supports Expression Language: true (will be evaluated using flow file attributes and variable registry)

  This Property is only considered if the [Deduplication Strategy] Property has a value of "Multiple Files".
Cache the Entry Identifier (put-cache-identifier)
  Default Value: false
  Allowable Values:
    • true
    • false
  Description: For each record, check whether the cache identifier exists in the distributed map cache. If it does not exist and this property is true, put the identifier into the cache.

  This Property is only considered if the [Distributed Map Cache client] Property has a value specified.
Record Reader (record-reader)
  Controller Service API: RecordReaderFactory
  Implementations: GrokReader, JsonTreeReader, WindowsEventLogReader, ReaderLookup, ParquetReader, CSVReader, Syslog5424Reader, ExcelReader, CEFReader, XMLReader, ScriptedReader, SyslogReader, JsonPathReader, AvroReader, YamlTreeReader
  Description: Specifies the Controller Service to use for reading incoming data.
Record Writer (record-writer)
  Controller Service API: RecordSetWriterFactory
  Implementations: FreeFormTextRecordSetWriter, CSVRecordSetWriter, ParquetRecordSetWriter, RecordSetWriterLookup, ScriptedRecordSetWriter, XMLRecordSetWriter, JsonRecordSetWriter, AvroRecordSetWriter
  Description: Specifies the Controller Service to use for writing out the records.
Include Zero Record FlowFiles (include-zero-record-flowfiles)
  Default Value: true
  Allowable Values:
    • true
    • false
  Description: If a FlowFile sent to either the 'duplicate' or 'non-duplicate' relationship contains no records, a value of 'false' causes the FlowFile to be dropped; otherwise, the empty FlowFile is emitted.
Record Hashing Algorithm (record-hashing-algorithm)
  Default Value: SHA-256
  Allowable Values:
    • None: Do not use a hashing algorithm. The values of resolved RecordPaths are joined with a delimiter (~) to form the cache key. This may use significantly more storage depending on the size and shape of your data.
    • SHA-256: SHA-256 cryptographic hashing algorithm.
    • SHA-512: SHA-512 cryptographic hashing algorithm.
  Description: The algorithm used to hash the cache key.
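The difference between the 'None' and 'SHA-256' options can be sketched as below. The field values are hypothetical; the point is that the delimiter-joined key grows with the record, while a hashed key is fixed-size.

```python
import hashlib

# Resolved RecordPath values for one record (hypothetical fields).
values = ["alice@example.com", "2024-06-01"]

# 'None' strategy: join the raw resolved values with the '~' delimiter.
raw_key = "~".join(values)

# 'SHA-256' strategy: hash that joined string into a fixed-size key.
hashed_key = hashlib.sha256(raw_key.encode("utf-8")).hexdigest()

print(raw_key)          # alice@example.com~2024-06-01
print(len(hashed_key))  # 64 hex characters regardless of record size
```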
Filter Type (filter-type)
  Default Value: HashSet
  Allowable Values:
    • HashSet: Exactly matches previously seen records with 100% accuracy at the expense of more storage. The filter data is stored in a single cache entry in the distributed cache and is loaded entirely into memory during duplicate detection, so this option offers high performance and is preferred for small to medium data sets.
    • BloomFilter: A space-efficient, probabilistic data structure ideal for large data sets. False positive matches are possible, but false negatives are not; a query returns either "possibly in the set" or "definitely not in the set". Use this option when the FlowFile content is large and some residual duplication in the data is tolerable. Uses constant storage space regardless of the record set size.
  Description: The filter used to determine whether a record has been seen before, based on the matching RecordPath criteria. If HashSet is selected, a Java HashSet is used to deduplicate all encountered records. The BloomFilter option is less memory intensive but has a chance of false positives.

  This Property is only considered if the [Deduplication Strategy] Property has a value of "Single File".
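The single-file HashSet strategy amounts to the loop sketched below: each record's key is checked against an exact set, and the record is routed accordingly. The records and the `email` key field are illustrative, not part of the processor's API.

```python
# Per-FlowFile deduplication with an exact set, analogous to the
# HashSet filter type. Records and the key field are illustrative.
records = [
    {"id": 1, "email": "a@x.com"},
    {"id": 2, "email": "b@x.com"},
    {"id": 3, "email": "a@x.com"},  # duplicate of record 1 by email
]

seen = set()
non_duplicate, duplicate = [], []
for record in records:
    key = record["email"]
    (duplicate if key in seen else non_duplicate).append(record)
    seen.add(key)

print([r["id"] for r in non_duplicate])  # [1, 2]
print([r["id"] for r in duplicate])      # [3]
```

Swapping the `set` for a Bloom filter trades this exactness for constant memory, at the cost of occasionally routing a genuinely new record to 'duplicate' (a false positive).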
Filter Capacity Hint (filter-capacity-hint)
  Default Value: 25000
  Description: An estimate of the total number of unique records to be processed. The more accurate this estimate, the lower the BloomFilter's false positive rate.

  This Property is only considered if the [Filter Type] Property has a value of "BloomFilter".
Bloom Filter Certainty (bloom-filter-certainty)
  Default Value: 0.10
  Description: The desired false positive probability when using the BloomFilter type. A value of 0.05, for example, targets a five-percent probability that a match is a false positive. The closer this value is to 0, the more precise the result, at the expense of more storage space.
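The interplay between the capacity hint and the certainty can be illustrated with the standard Bloom filter sizing formulas (these are the textbook formulas, not a statement about NiFi's internal implementation): bits m = -n · ln(p) / (ln 2)², hash functions k = (m / n) · ln 2, where n is the capacity hint and p the false positive probability.

```python
import math

def bloom_sizing(n, p):
    """Standard Bloom filter sizing: bit count and hash count for
    n expected items at false positive probability p."""
    m = math.ceil(-n * math.log(p) / (math.log(2) ** 2))
    k = max(1, round((m / n) * math.log(2)))
    return m, k

# The processor defaults: 25,000 unique records at p = 0.10 need
# roughly 120,000 bits (~15 KB) and 3 hash functions.
bits, hashes = bloom_sizing(25_000, 0.10)
print(bits, hashes)
```

This shows why underestimating the capacity hint hurts: a filter sized for fewer records than it actually receives exceeds the requested false positive probability.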

Dynamic Properties:

Supports Sensitive Dynamic Properties: No

Dynamic Properties allow the user to specify both the name and value of a property.

Name: Name of the property.
Value: A valid RecordPath to the record field to be included in the cache key used for deduplication.
Description: A record's cache key is generated by combining the name of each dynamic property with its evaluated record value (as specified by the corresponding RecordPath).
Supports Expression Language: false
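The cache key composition can be sketched as below. RecordPath evaluation is mocked with plain dictionary lookups, and the exact name/value concatenation format is an assumption for illustration, not NiFi's literal wire format.

```python
import hashlib

# One record; the nested lookups stand in for RecordPath evaluation
# (e.g. /user/email and /ts).
record = {"user": {"email": "a@x.com"}, "ts": "2024-06-01"}

# Hypothetical dynamic properties: property name -> resolved value.
dynamic_props = {
    "email": record["user"]["email"],
    "ts": record["ts"],
}

# Combine each property name with its resolved value, then hash.
# The 'name=value' joined with '~' format is an illustrative assumption.
joined = "~".join(f"{name}={value}"
                  for name, value in sorted(dynamic_props.items()))
cache_key = hashlib.sha256(joined.encode("utf-8")).hexdigest()
print(joined)  # email=a@x.com~ts=2024-06-01
```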

Relationships:

duplicate: Records detected as duplicates are routed to this relationship.
non-duplicate: Records not found in the cache are routed to this relationship.
failure: If the processor is unable to communicate with the cache, the FlowFile is penalized and routed to this relationship.
original: The original input FlowFile is sent to this relationship unless a fatal error occurs.

Reads Attributes:

None specified.

Writes Attributes:

record.count: Number of records written to the destination FlowFile.

State management:

This component does not store state.

Restricted:

This component is not restricted.

Input requirement:

This component requires an incoming relationship.

System Resource Considerations:

MEMORY: The HashSet filter type grows memory use in proportion to the number of unique records processed. The BloomFilter type uses constant memory regardless of the number of records processed.
CPU: If a more advanced hashing algorithm is chosen, the time required to hash each record can increase substantially.

See Also:

DistributedMapCacheClientService, DistributedMapCacheServer, DetectDuplicate