ListFTP 2.0.0
- Bundle
- org.apache.nifi | nifi-standard-nar
- Description
- Performs a listing of the files residing on an FTP server. For each file that is found on the remote server, a new FlowFile will be created with the filename attribute set to the name of the file on the remote server. This can then be used in conjunction with FetchFTP in order to fetch those files.
- Tags
- files, ftp, ingest, input, list, remote, source
- Input Requirement
- FORBIDDEN
- Supports Sensitive Dynamic Properties
- false
-
Additional Details for ListFTP 2.0.0
ListFTP
ListFTP performs a listing of all files that it encounters in the configured directory of an FTP server. There are two common, broadly defined use cases.
Streaming Use Case
By default, the Processor will create a separate FlowFile for each file in the directory and add attributes for filename, path, etc. A common use case is to connect ListFTP to the FetchFTP processor. These two processors used in conjunction with one another provide the ability to easily monitor a directory and fetch the contents of any new file as it lands on the FTP server in an efficient streaming fashion.
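For a sense of what this pattern accomplishes, here is a minimal sketch of the same list-then-fetch steps using Python's standard ftplib library. It is a conceptual illustration only - the host, credentials, and directory are hypothetical placeholders, and ListFTP/FetchFTP additionally provide the state tracking, clustering, and failure handling that this sketch omits.

from ftplib import FTP

# Conceptual sketch of the ListFTP -> FetchFTP pattern: enumerate remote
# files first (the "listing"), then fetch each file's contents separately.
with FTP("ftp.example.com") as ftp:          # hypothetical host
    ftp.login(user="nifi", passwd="secret")  # hypothetical credentials
    ftp.cwd("/data/incoming")                # hypothetical directory

    # "ListFTP": collect the names of plain files, with their metadata facts
    listing = [(name, facts) for name, facts in ftp.mlsd()
               if facts.get("type") == "file"]

    # "FetchFTP": retrieve the contents of each listed file
    for name, facts in listing:
        with open(name, "wb") as out:
            ftp.retrbinary("RETR " + name, out.write)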
Batch Use Case
Another common use case is the desire to process all newly arriving files in a given directory, and to then perform some action only when all files have completed their processing. The above approach of streaming the data makes this difficult, because NiFi is inherently a streaming platform in that there is no “job” that has a beginning and an end. Data is simply picked up as it becomes available.
To solve this, the ListFTP Processor can optionally be configured with a Record Writer. When a Record Writer is configured, a single FlowFile will be created that will contain a Record for each file in the directory, instead of a separate FlowFile per file. With this pattern, in order to fetch the contents of each file, the records must be split up into individual FlowFiles and then fetched. So how does this help us?
We can still accomplish the desired use case of waiting until all files in the directory have been processed by splitting apart the FlowFile and processing all the data within a Process Group. Configuring the Process Group with a FlowFile Concurrency of “Single FlowFile per Node” means that only one FlowFile will be brought into the Process Group. Once that happens, the FlowFile can be split apart and each part processed. Configuring the Process Group with an Outbound Policy of “Batch Output” means that none of the FlowFiles will leave the Process Group until all have finished processing.
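As an aside, these two settings can also be applied through the NiFi REST API, where they appear as the flowfileConcurrency and flowfileOutboundPolicy fields of a Process Group. The fragment below is an assumption about the request shape and should be verified against the REST API documentation for your NiFi version; the id and revision values are placeholders.

{
  "revision": { "version": 0 },
  "component": {
    "id": "00000000-0000-0000-0000-000000000000",
    "flowfileConcurrency": "SINGLE_FLOWFILE_PER_NODE",
    "flowfileOutboundPolicy": "BATCH_OUTPUT"
  }
}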
In this flow, we perform a listing of a directory with ListFTP. The processor is configured with a Record Writer (in this case a CSV Writer, but any Record Writer can be used) so that only a single FlowFile is generated for the entire listing. That listing is then sent to the “Process Listing” Process Group. Only after the contents of the entire directory have been processed will data leave the “Process Listing” Process Group. At that point, when all data in the Process Group is ready to leave, each of the processed files will be sent to the “Post-Processing” Process Group. At the same time, the original listing is sent to the “Processing Complete Notification” Process Group. To accomplish this, the Process Group must be configured with a FlowFile Concurrency of “Single FlowFile per Node” and an Outbound Policy of “Batch Output.”
Within the “Process Listing” Process Group, the listing is received via the “Listing” Input Port. It is sent directly to the “Listing of Processed Data” Output Port so that when all processing completes, the original listing is sent out as well.
Next, the listing is broken apart into an individual FlowFile per record. Because we want to use FetchFTP to fetch the data, we need to get the file’s filename and path as FlowFile attributes. This can be done in a few different ways, but the easiest mechanism is to use the PartitionRecord processor. This Processor is configured with a Record Reader that is able to read the data written by ListFTP (in this case, a CSV Reader). The Processor is also configured with two additional user-defined properties:
path: /path
filename: /filename
As a result, each record that comes into the PartitionRecord processor will be split into an individual FlowFile (because the combination of the “path” and “filename” fields is unique for each Record), and the “filename” and “path” record fields will become attributes on the FlowFile. FetchFTP is configured to use a value of
${path}/${filename}
for the “Remote File” property, making use of these attributes. For example, a FlowFile with a path attribute of /data/incoming and a filename attribute of report.csv resolves to a Remote File of /data/incoming/report.csv. Finally, we process the data - in this example, simply by compressing it with GZIP - and send the output to the “Processed Data” Output Port. The data will queue up there until all data is ready to leave the Process Group, at which point it will be released.
Record Schema
When the Processor is configured to write the listing using a Record Writer, the Records will be written using the following schema (in Avro format):
{
  "type": "record",
  "name": "nifiRecord",
  "namespace": "org.apache.nifi",
  "fields": [
    { "name": "filename", "type": "string" },
    { "name": "path", "type": "string" },
    { "name": "directory", "type": "boolean" },
    { "name": "size", "type": "long" },
    { "name": "lastModified", "type": { "type": "long", "logicalType": "timestamp-millis" } },
    { "name": "permissions", "type": [ "null", "string" ] },
    { "name": "owner", "type": [ "null", "string" ] },
    { "name": "group", "type": [ "null", "string" ] }
  ]
}
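A single listing record conforming to this schema might look like the following (all field values here are hypothetical, for illustration only):

{
  "filename": "report.csv",
  "path": "/data/incoming",
  "directory": false,
  "size": 10240,
  "lastModified": 1704067200000,
  "permissions": "rw-r--r--",
  "owner": "1000",
  "group": "1000"
}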
-
Connection Mode
The FTP Connection Mode
- Display Name
- Connection Mode
- Description
- The FTP Connection Mode
- API Name
- Connection Mode
- Default Value
- Passive
- Allowable Values
-
- Active
- Passive
- Expression Language Scope
- Not Supported
- Sensitive
- false
- Required
- false
-
Connection Timeout
Amount of time to wait before timing out while creating a connection
- Display Name
- Connection Timeout
- Description
- Amount of time to wait before timing out while creating a connection
- API Name
- Connection Timeout
- Default Value
- 30 sec
- Expression Language Scope
- Not Supported
- Sensitive
- false
- Required
- true
-
Data Timeout
When transferring a file between the local and remote system, this value specifies how long is allowed to elapse without any data being transferred between systems
- Display Name
- Data Timeout
- Description
- When transferring a file between the local and remote system, this value specifies how long is allowed to elapse without any data being transferred between systems
- API Name
- Data Timeout
- Default Value
- 30 sec
- Expression Language Scope
- Not Supported
- Sensitive
- false
- Required
- true
-
Entity Tracking Initial Listing Target
Specify how initial listing should be handled. Used by 'Tracking Entities' strategy.
- Display Name
- Entity Tracking Initial Listing Target
- Description
- Specify how initial listing should be handled. Used by 'Tracking Entities' strategy.
- API Name
- et-initial-listing-target
- Default Value
- all
- Allowable Values
-
- Tracking Time Window
- All Available
- Expression Language Scope
- Not Supported
- Sensitive
- false
- Required
- false
-
Entity Tracking State Cache
Listed entities are stored in the specified cache storage so that this processor can resume listing across NiFi restarts or in case of a primary node change. The 'Tracking Entities' strategy requires tracking information for all entities listed within the last 'Tracking Time Window'. To support a large number of entities, the strategy uses a DistributedMapCache instead of managed state. The cache key format is 'ListedEntities::{processorId}(::{nodeId})'. If listed entities are tracked per node, the optional '::{nodeId}' part is added to manage state separately, e.g. cluster-wide cache key = 'ListedEntities::8dda2321-0164-1000-50fa-3042fe7d6a7b', per-node cache key = 'ListedEntities::8dda2321-0164-1000-50fa-3042fe7d6a7b::nifi-node3'. The stored cache content is a Gzipped JSON string. The cache key will be deleted when the target listing configuration is changed. Used by the 'Tracking Entities' strategy.
- Display Name
- Entity Tracking State Cache
- Description
- Listed entities are stored in the specified cache storage so that this processor can resume listing across NiFi restarts or in case of a primary node change. The 'Tracking Entities' strategy requires tracking information for all entities listed within the last 'Tracking Time Window'. To support a large number of entities, the strategy uses a DistributedMapCache instead of managed state. The cache key format is 'ListedEntities::{processorId}(::{nodeId})'. If listed entities are tracked per node, the optional '::{nodeId}' part is added to manage state separately, e.g. cluster-wide cache key = 'ListedEntities::8dda2321-0164-1000-50fa-3042fe7d6a7b', per-node cache key = 'ListedEntities::8dda2321-0164-1000-50fa-3042fe7d6a7b::nifi-node3'. The stored cache content is a Gzipped JSON string. The cache key will be deleted when the target listing configuration is changed. Used by the 'Tracking Entities' strategy.
- API Name
- et-state-cache
- Service Interface
- org.apache.nifi.distributed.cache.client.DistributedMapCacheClient
- Service Implementations
- Expression Language Scope
- Not Supported
- Sensitive
- false
- Required
- false
-
Entity Tracking Time Window
Specify how long this processor should track already-listed entities. The 'Tracking Entities' strategy can pick any entity whose timestamp is inside the specified time window. For example, if set to '30 minutes', any entity with a timestamp within the most recent 30 minutes will be a listing target when this processor runs. A listed entity is considered new/updated, and a FlowFile is emitted, if one of the following conditions is met: 1. it does not exist in the already-listed entities, 2. it has a newer timestamp than the cached entity, 3. it has a different size than the cached entity. If a cached entity's timestamp becomes older than the specified time window, that entity will be removed from the cached already-listed entities. Used by the 'Tracking Entities' strategy.
- Display Name
- Entity Tracking Time Window
- Description
- Specify how long this processor should track already-listed entities. The 'Tracking Entities' strategy can pick any entity whose timestamp is inside the specified time window. For example, if set to '30 minutes', any entity with a timestamp within the most recent 30 minutes will be a listing target when this processor runs. A listed entity is considered new/updated, and a FlowFile is emitted, if one of the following conditions is met: 1. it does not exist in the already-listed entities, 2. it has a newer timestamp than the cached entity, 3. it has a different size than the cached entity. If a cached entity's timestamp becomes older than the specified time window, that entity will be removed from the cached already-listed entities. Used by the 'Tracking Entities' strategy.
- API Name
- et-time-window
- Default Value
- 3 hours
- Expression Language Scope
- Environment variables defined at JVM level and system properties
- Sensitive
- false
- Required
- false
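The new/updated check described above reduces to a simple comparison against the cached entities. A minimal sketch in Python, where the ListedEntity shape (identifier, timestamp, size) is an assumption mirroring the description:

from dataclasses import dataclass

@dataclass
class ListedEntity:
    identifier: str  # e.g. the full path of the remote file (assumed key)
    timestamp: int   # last-modified time in milliseconds
    size: int        # size in bytes

def is_new_or_updated(entity: ListedEntity, cache: dict) -> bool:
    """Return True if a FlowFile should be emitted for this entity."""
    cached = cache.get(entity.identifier)
    if cached is None:
        return True  # 1. does not exist in the already-listed entities
    if entity.timestamp > cached.timestamp:
        return True  # 2. has a newer timestamp than the cached entity
    if entity.size != cached.size:
        return True  # 3. has a different size than the cached entity
    return False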
-
File Filter Regex
Provides a Java Regular Expression for filtering Filenames; if a filter is supplied, only files whose names match that Regular Expression will be fetched
- Display Name
- File Filter Regex
- Description
- Provides a Java Regular Expression for filtering Filenames; if a filter is supplied, only files whose names match that Regular Expression will be fetched
- API Name
- File Filter Regex
- Expression Language Scope
- Not Supported
- Sensitive
- false
- Required
- false
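For example, a hypothetical filter of (?i).*\.csv$ would limit the listing to files with a .csv extension; the same Java regular expression syntax applies to the Path Filter Regex property below.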
-
Follow symlink
If true, the processor will follow symbolic links, pulling files that are symbolic links and traversing symbolically linked subdirectories; otherwise, symbolic link files will not be read and symbolic link subdirectories will not be traversed
- Display Name
- Follow symlink
- Description
- If true, the processor will follow symbolic links, pulling files that are symbolic links and traversing symbolically linked subdirectories; otherwise, symbolic link files will not be read and symbolic link subdirectories will not be traversed
- API Name
- follow-symlink
- Default Value
- false
- Allowable Values
-
- true
- false
- Expression Language Scope
- Not Supported
- Sensitive
- false
- Required
- true
-
Use UTF-8 Encoding
Tells the client to use UTF-8 encoding when processing files and filenames. If set to true, the server must also support UTF-8 encoding.
- Display Name
- Use UTF-8 Encoding
- Description
- Tells the client to use UTF-8 encoding when processing files and filenames. If set to true, the server must also support UTF-8 encoding.
- API Name
- ftp-use-utf8
- Default Value
- false
- Allowable Values
-
- true
- false
- Expression Language Scope
- Not Supported
- Sensitive
- false
- Required
- true
-
Hostname
The fully qualified hostname or IP address of the remote system
- Display Name
- Hostname
- Description
- The fully qualified hostname or IP address of the remote system
- API Name
- Hostname
- Expression Language Scope
- Environment variables defined at JVM level and system properties
- Sensitive
- false
- Required
- true
-
Ignore Dotted Files
If true, files whose names begin with a dot (".") will be ignored
- Display Name
- Ignore Dotted Files
- Description
- If true, files whose names begin with a dot (".") will be ignored
- API Name
- Ignore Dotted Files
- Default Value
- true
- Allowable Values
-
- true
- false
- Expression Language Scope
- Not Supported
- Sensitive
- false
- Required
- true
-
Internal Buffer Size
Set the internal buffer size for buffered data streams
- Display Name
- Internal Buffer Size
- Description
- Set the internal buffer size for buffered data streams
- API Name
- Internal Buffer Size
- Default Value
- 16KB
- Expression Language Scope
- Not Supported
- Sensitive
- false
- Required
- false
-
Listing Strategy
Specify how to determine new/updated entities. See each strategy's description for details.
- Display Name
- Listing Strategy
- Description
- Specify how to determine new/updated entities. See each strategy's description for details.
- API Name
- listing-strategy
- Default Value
- timestamps
- Allowable Values
-
- Tracking Timestamps
- Tracking Entities
- No Tracking
- Time Window
- Expression Language Scope
- Not Supported
- Sensitive
- false
- Required
- true
-
Password
Password for the user account
- Display Name
- Password
- Description
- Password for the user account
- API Name
- Password
- Expression Language Scope
- Environment variables and FlowFile Attributes
- Sensitive
- true
- Required
- false
-
Path Filter Regex
When Search Recursively is true, then only subdirectories whose path matches the given Regular Expression will be scanned
- Display Name
- Path Filter Regex
- Description
- When Search Recursively is true, then only subdirectories whose path matches the given Regular Expression will be scanned
- API Name
- Path Filter Regex
- Expression Language Scope
- Not Supported
- Sensitive
- false
- Required
- false
-
Port
The port to connect to on the remote host to fetch the data from
- Display Name
- Port
- Description
- The port to connect to on the remote host to fetch the data from
- API Name
- Port
- Default Value
- 21
- Expression Language Scope
- Environment variables defined at JVM level and system properties
- Sensitive
- false
- Required
- true
-
Proxy Configuration Service
Specifies the Proxy Configuration Controller Service to proxy network requests. Supported proxies: SOCKS + AuthN, HTTP + AuthN
- Display Name
- Proxy Configuration Service
- Description
- Specifies the Proxy Configuration Controller Service to proxy network requests. Supported proxies: SOCKS + AuthN, HTTP + AuthN
- API Name
- proxy-configuration-service
- Service Interface
- org.apache.nifi.proxy.ProxyConfigurationService
- Service Implementations
- Expression Language Scope
- Not Supported
- Sensitive
- false
- Required
- false
-
Record Writer
Specifies the Record Writer to use for creating the listing. If not specified, one FlowFile will be created for each entity that is listed. If the Record Writer is specified, all entities will be written to a single FlowFile instead of adding attributes to individual FlowFiles.
- Display Name
- Record Writer
- Description
- Specifies the Record Writer to use for creating the listing. If not specified, one FlowFile will be created for each entity that is listed. If the Record Writer is specified, all entities will be written to a single FlowFile instead of adding attributes to individual FlowFiles.
- API Name
- record-writer
- Service Interface
- org.apache.nifi.serialization.RecordSetWriterFactory
- Service Implementations
- Expression Language Scope
- Not Supported
- Sensitive
- false
- Required
- false
-
Remote Path
The path on the remote system from which to pull or push files
- Display Name
- Remote Path
- Description
- The path on the remote system from which to pull or push files
- API Name
- Remote Path
- Default Value
- .
- Expression Language Scope
- Environment variables defined at JVM level and system properties
- Sensitive
- false
- Required
- false
-
Remote Poll Batch Size
This value specifies how many file paths to retrieve per batch when performing a listing of a directory on the remote system. In general it should not need to be modified, but when polling a remote system with a tremendous number of files, it can be critical. Setting the value too high can result in very poor performance, and setting it too low can cause the flow to be slower than normal.
- Display Name
- Remote Poll Batch Size
- Description
- This value specifies how many file paths to retrieve per batch when performing a listing of a directory on the remote system. In general it should not need to be modified, but when polling a remote system with a tremendous number of files, it can be critical. Setting the value too high can result in very poor performance, and setting it too low can cause the flow to be slower than normal.
- API Name
- Remote Poll Batch Size
- Default Value
- 5000
- Expression Language Scope
- Not Supported
- Sensitive
- false
- Required
- true
-
Search Recursively
If true, will pull files from arbitrarily nested subdirectories; otherwise, will not traverse subdirectories
- Display Name
- Search Recursively
- Description
- If true, will pull files from arbitrarily nested subdirectories; otherwise, will not traverse subdirectories
- API Name
- Search Recursively
- Default Value
- false
- Allowable Values
-
- true
- false
- Expression Language Scope
- Not Supported
- Sensitive
- false
- Required
- true
-
Target System Timestamp Precision
Specify the timestamp precision of the target system. Since this processor uses the timestamps of entities to decide which should be listed, it is crucial to use the right timestamp precision.
- Display Name
- Target System Timestamp Precision
- Description
- Specify the timestamp precision of the target system. Since this processor uses the timestamps of entities to decide which should be listed, it is crucial to use the right timestamp precision.
- API Name
- target-system-timestamp-precision
- Default Value
- auto-detect
- Allowable Values
-
- Auto Detect
- Milliseconds
- Seconds
- Minutes
- Expression Language Scope
- Not Supported
- Sensitive
- false
- Required
- true
-
Transfer Mode
The FTP Transfer Mode
- Display Name
- Transfer Mode
- Description
- The FTP Transfer Mode
- API Name
- Transfer Mode
- Default Value
- Binary
- Allowable Values
-
- Binary
- ASCII
- Expression Language Scope
- Not Supported
- Sensitive
- false
- Required
- false
-
Username
Username
- Display Name
- Username
- Description
- Username
- API Name
- Username
- Expression Language Scope
- Environment variables defined at JVM level and system properties
- Sensitive
- false
- Required
- true
State Management
Scopes | Description
---|---
CLUSTER | After performing a listing of files, the timestamp of the newest file is stored. This allows the Processor to list only files that have been added or modified after this date the next time that the Processor is run. State is stored across the cluster so that this Processor can be run on the Primary Node only, and if a new Primary Node is selected, the new node will not duplicate the data that was listed by the previous Primary Node.
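A minimal sketch of this timestamp-based tracking in Python, using the standard ftplib library. This illustrates the approach only - it is not NiFi's implementation, and in NiFi the stored timestamp lives in cluster state rather than a local variable.

from ftplib import FTP

def list_new_files(ftp: FTP, directory: str, last_seen: int):
    """Return files modified after last_seen, plus the new high-water mark."""
    new_files, newest = [], last_seen
    ftp.cwd(directory)
    for name, facts in ftp.mlsd():
        if facts.get("type") != "file":
            continue
        # The MLSD 'modify' fact is YYYYMMDDHHMMSS[.sss] in UTC; truncate
        # fractional seconds so the value compares numerically.
        modified = int(facts.get("modify", "0").split(".")[0])
        if modified > last_seen:
            new_files.append(name)
            newest = max(newest, modified)
    return new_files, newest

# On each run, only files newer than the stored timestamp are listed, and
# the stored timestamp is then advanced to the newest file seen.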
Relationships
Name | Description
---|---
success | All FlowFiles that are received are routed to success
Writes Attributes
Name | Description
---|---
ftp.remote.host | The hostname of the FTP Server
ftp.remote.port | The port that was connected to on the FTP Server
ftp.listing.user | The username of the user that performed the FTP Listing
file.owner | The numeric owner id of the source file
file.group | The numeric group id of the source file
file.permissions | The read/write/execute permissions of the source file
file.size | The number of bytes in the source file
file.lastModifiedTime | The timestamp of when the file in the filesystem was last modified, as 'yyyy-MM-dd'T'HH:mm:ssZ'
filename | The name of the file on the FTP Server
path | The fully qualified name of the directory on the FTP Server from which the file was pulled