REST API Transfer Job Configuration Options
Usage
To use the options listed below, add the option to the "transfer": { } block when creating or editing a job using the REST API. Each colon (:) in an option name indicates a nested property. For example, to use source:size_estimate:bytes, include it as:
POST v1/jobs

{
    "name": "test copy",
    "kind": "transfer",
    "transfer": {
        "transfer_type": "copy",
        "source": {
            "connection": {
                "id": "5dc531df34554edd96c31272262ad950"
            },
            "target": {
                "path": "/C/data"
            },
            "size_estimate": {
                "bytes": 10240
            }
        },
        "destination": {
            "connection": {
                "id": "bb44a17816004f2c9fa763a347d7ebbc"
            },
            "target": {
                "path": "/Documents/test"
            }
        }
    },
    "schedule": {
        "mode": "manual"
    }
}
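The same colon notation applies at any nesting depth. As a minimal sketch, the option performance:parallel_writes:requested (documented in the table below) would be expressed as nested objects inside the transfer block; the value 4 is only a placeholder:

"transfer": {
    "transfer_type": "copy",
    "performance": {
        "parallel_writes": {
            "requested": 4
        }
    }
}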
The example below shows how to use empty_containers with a setting of skip, in conjunction with an exclusion filter.
POST v1/jobs

{
    "name": "Copy Job Skip Empty Folders",
    "kind": "transfer",
    "transfer": {
        "audit_level": "trace",
        "transfer_type": "copy",
        "empty_containers": "skip",
        "filter": {
            "source": [{
                "action": "exclude",
                "rules": [{
                    "extensions": [
                        "wav",
                        "jpg"
                    ],
                    "type": "filter_extension"
                }],
                "type": "filter_rule"
            }]
        },
        "source": {
            "connection": { "id": "{{nfs_connection}}" },
            "target": {
                "path": "/EmptyTest_Source"
            }
        },
        "destination": {
            "connection": { "id": "{{cloud_connection}}" },
            "target": {
                "path": "/EmptyTest_Destination"
            }
        }
    },
    "schedule": {
        "mode": "manual"
    }
}
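Because the exclusion filter removes all wav and jpg files, folders left with no remaining content are treated as empty; with empty_containers set to skip, those folders (along with genuinely empty folders) are not created on the destination.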
Options
Key | Description | Default Value | Possible Other Values | Notes |
---|---|---|---|---|
transfer_type | The default transfer type to use for jobs when not specified | sync | copy, default, sync, publish | Taxonomy = Copy folder structure |
source:type |  |  | directory, file, control_file |  |
source:event_position |  |  | string | Not customer facing (read-only) |
source:size_estimate:count |  |  | long | Estimated number of files on the source; used to estimate job progress |
source:size_estimate:bytes |  |  | long | Estimated size of the source in bytes; used to estimate job progress |
source:connection |  |  | connection |  |
source:impersonate_as |  |  | AccountDefinition |  |
source:target:path |  |  | string |  |
source:target:uri |  |  | string |  |
source:target:item |  |  | PlatformItemID |  |
source:authenicate |  |  | true, false |  |
source:options |  |  | custom |  |
destination: |  |  | Same options as for source |  |
performance:retries | Unused (ideally this would be tied into recovery policies) | (null) | int | Recovery policy |
performance:parallel_writes:requested | The default number of parallel writes to use during transfer execution | 2 | int |  |
performance:parallel_writes:max |  |  | int | n/a |
performance:upload:bytes_per_second |  |  | long | Bandwidth throttling |
performance:upload:window |  |  | Array of | The window value is the start; the end is midnight, relative to the time the job is started (the window definition may change) |
performance:download:bytes_per_second |  |  | long |  |
performance:download:window |  |  | Array of |  |
audit_level | The default audit level to use for transfer jobs | info | none, trace, debug, info, warn, error |  |
failure_policy | The default failure policy to use for transfer jobs | continue | continue, halt |  |
rendition | The default rendition selection policy to use for transfer jobs | original | original, rendition |  |
batch_mode | The default batch mode usage policy to use for transfer jobs | always | none, initial, always |  |
permissions | The default permission preservation policy to use for transfer jobs | none | add, diff | none: permissions are not transferred; add: permissions are added only, and existing permissions on the destination are not touched; diff: permissions are reconciled/synced (if the destination has more permissions than the source, they are removed) |
preserve_owners | The default audit trail preservation option | false | true, false |  |
restricted_content | The default restricted content handling policy | convert | fail, warn, skip, convert | Restricted extensions (for example, dll for SharePoint) |
large_item | The default large file handling policy | fail | fail, skip | fail bubbles up to the audit log as a failure; skip only appears in the audit log if the audit level is set low enough, otherwise the skipped item is ignored |
item_overwrite | The default item overwrite policy | overwrite | fail, skip, overwrite |  |
segment_transform | The default flag indicating if segment transformation is enabled | true | true, false |  |
encode_invalid_characters | Encodes invalid characters instead of replacing them with an underscore | false | false, true | The UTF-8 bytes for invalid characters are converted to a hex string. Example: 123白雜.txt would be converted to 123E799BDE99B9CE8.txt. |
filter:source |  |  | complex with options for |  |
filter:destination |  |  | 1 |  |
tracking:detection | The default change tracking policy | native | none, native, crawl |  |
tracking:reset:on_increment | The default number of executions before resetting change tracking state | (null) | long |  |
tracking:reset:on_interval | The default interval before resetting change tracking state | (null) | {value: double, unit: d\|h\|m\|s\|ms\|us\|ns} |  |
conflict_resolution | The default conflict resolution policy to use for transfer jobs | copy | copy, latest, source, destination, failure |  |
delete_propagation | The default delete propagation policy to use for transfer jobs | ignore_both | mirror, ignore_source, ignore_destination, ignore_both |  |
duplicate_names | The default duplicate name resolution policy to use for transfer jobs | rename | warn, rename | For example, Google native docs can come back as duplicate files |
empty_containers | The default empty container policy to use for transfer jobs | create | create, skip | Applies to empty folders and to folders with all content filtered out |
versioning:preserve | The default version preservation policy to use for transfer jobs | native | none, native |  |
versioning:select | The default version selection policy to use for transfer jobs | all | all, latest, published, unpublished | latest is equivalent to versioning:preserve = none |
versioning:from_source | The default number of versions to maintain on the source platform | (null) | int | How many versions to maintain; versions can be deleted from the source |
versioning:from_destination | The default number of versions to maintain on the destination platform | (null) | int | Not all platforms support version deletes. When a specific transfer value is set and the destination platform doesn't support version deletes, DryvIQ uses the following logic to determine how it handles transferring the versions: if the file doesn't exist on the destination, DryvIQ respects the version limit and only transfers the set number of versions during the initial copy/migration. If the file exists on the destination, DryvIQ migrates all new versions of the file from the source to the destination, even if doing so exceeds the version limit. This ensures all new content is transferred. DryvIQ logs a warning to inform the user that the transfer took place and resulted in the version count being exceeded. |
lock_propagation | The default lock propagation option | ignore | ignore, mirror_owner, mirror_lock |  |
timestamps | The default timestamp preservation policy to use for transfer jobs | true | true, false |  |
trust_mode |  |  | true, false | When the file already exists on the destination, the files are assumed to be the same |
metadata_map:schemas |  |  | Array of { |  |
account_map |  |  | AccountMap |  |
group_map |  |  | GroupMap |  |
metadata_import |  |  | PropertyValueImportSpecification |  |
permissions_import |  |  | PermissionsImportSpecification |  |
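To show how several of these options combine, the sketch below is a hedged example job that applies throttling, conflict resolution, delete propagation, and change tracking settings together. The connection IDs, paths, and option values are placeholders; only keys documented in the table above are used.

POST v1/jobs

{
    "name": "Sync With Policies",
    "kind": "transfer",
    "transfer": {
        "transfer_type": "sync",
        "audit_level": "info",
        "failure_policy": "continue",
        "conflict_resolution": "latest",
        "delete_propagation": "mirror",
        "empty_containers": "skip",
        "performance": {
            "parallel_writes": {
                "requested": 4
            },
            "upload": {
                "bytes_per_second": 5242880
            }
        },
        "tracking": {
            "detection": "native",
            "reset": {
                "on_interval": { "value": 7, "unit": "d" }
            }
        },
        "source": {
            "connection": { "id": "{{source_connection}}" },
            "target": { "path": "/Shared/Projects" }
        },
        "destination": {
            "connection": { "id": "{{destination_connection}}" },
            "target": { "path": "/Projects" }
        }
    },
    "schedule": {
        "mode": "manual"
    }
}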