...
Key | Description | Default Value | Applicable to appSettings.json Only
---|---|---|---
Amazon | | |
amazon:version_buckets | Amazon Version Buckets. Turns on bucket versioning globally by default for all Amazon S3 connections. | false |
Azure | | |
azure:concurrent_file_upload_chunks | Azure Concurrent File Upload Chunks. Sets the number of concurrent chunks for the Azure chunked uploader. If you are setting the value for an individual connection, the parameter needs to be in the auth block when you PATCH the connection. | 3 |
Box (if using a service account, use the box-service prefix) | | |
box:concurrent_file_upload_chunks box-service:concurrent_file_upload_chunks | Box Concurrent File Upload Chunks. Sets the number of concurrent chunks for the Box chunked uploader. If you are setting the value for an individual connection, the parameter needs to be in the auth block when you PATCH the connection. | 3 |
box:suppress_notes_versions box-service:suppress_notes_versions | Determines if Box Notes versions should be transferred. By default, the value is true, which suppresses versions and transfers only the latest version. If set to false, all versions of the Box Note will transfer. | true |
box:suppress_notifications box-service:suppress_notifications | Box Suppress Notifications. Suppresses notifications globally by default for all Box connections. | true |
box:metadata_template box-service:metadata_template | Box Metadata Template. Default metadata template for all Box connections. | (null) |
Dropbox for Business | | |
dfb:concurrent_chunk_uploads | Dropbox for Business Concurrent File Upload Chunks. Sets the number of concurrent chunks for the Dropbox chunked uploader. If you are setting the value for an individual connection, the parameter needs to be in the auth block when you PATCH the connection. | 3 |
dfb-teams:concurrent_chunk_uploads | Dropbox for Teams Concurrent File Upload Chunks. Sets the number of concurrent chunks for the Dropbox chunked uploader. If you are setting the value for an individual connection, the parameter needs to be in the auth block when you PATCH the connection. | 3 |
Dropbox | | |
dropbox:concurrent_chunk_uploads | Dropbox Concurrent File Upload Chunks. Sets the number of concurrent chunks for the Dropbox chunked uploader. If you are setting the value for an individual connection, the parameter needs to be in the auth block when you PATCH the connection. | 3 |
File System | | |
fs:network_drives | FS Network Drives. A flag indicating if mapped network drives display in the root of a file system connection. | false |
fs:junction_points | FS Junction Points. A flag indicating if junction points should be followed. | true |
fs:simulate_libraries | FS Simulate Libraries. A flag indicating if libraries should be simulated using platform defaults. | true |
Google Workspace | | |
google-suite:suppress_notifications | Google/Google Workspace (formerly GSuite) Suppress Notifications. | true |
google-suite:new_account:password | Google/Google Workspace (GSuite) New Account Password. | Sky$ync1 |
google-suite:allow_shared_with_me | Google/Google Workspace (GSuite) Allow Shared with Me. | false |
google_suite:allow_file_discovery | Determines if files with shared links are searchable in Google Workspace (GSuite). | false |
google-suite:suppress_external_notifications | Prevents external share notifications from being triggered specifically for Google Workspace (GSuite). This prevents the retry that overrides the suppress_notifications setting. | false |
google-suite:add_owner_to_metadata | Allows a Google Drive migration to transfer only files created by the drive owner. | false |
ShareFile | | |
sharefile:suppress_notifications | ShareFile Suppress Notifications. | true |
sharefile:new_account:password | ShareFile New Account Password. | Sky$ync1 |
sharefile:max_segment_length | Maximum segment length for file names. | default = 180 (version 4.10.1.1627 forward; previous releases have a 100-character default and maximum); max = 256 |
Syncplicity | | |
syncplicity:base_uri | Syncplicity Base URI. | |
syncplicity:new_account:password | Syncplicity New Account Password. | Sky$ync1 |
Office 365 | | |
(office365/office365-oauth2):batch_item_limit | The default batch item limit in Office365. | 100 |
(office365/office365-oauth2):batch_max_size | The default batch maximum total size in Office365. | (null) |
(office365/office365-oauth2):batch_monitor_interval | The retry interval, in milliseconds, when monitoring Office365 batches for completion. | 1000 |
(office365/office365-oauth2/transfers):batch_monitor_max_retries | The maximum number of retries when monitoring transfer batches for completion. | 86400 | 24-hour batch timeout
(office365/office365-oauth2):force_csom_batch_validation | A flag that, when true, forces Office365 to leverage CSOM for batch validation. When false, the new Async Metadata Read API is used for batch validation. The default is currently true while the new API continues to be tested; this will change once that effort is complete, because leveraging the new API should put significantly less load on the CSOM rate limits. | true |
(office365/office365-oauth2):restricted_folders | Comma-delimited list of restricted folder names in Office365. | |
(office365/office365-oauth2):invalid_characters | A string list of invalid characters in Office365. | |
(office365/office365-oauth2):site_template_ids | Comma-delimited list of site template IDs to be returned when listing site collections. | "STS#0", "GROUP#0", "EHS#1" |
SharePoint | | |
(Specified SharePoint Connector):skip_batch_validation | The specified SharePoint connector will NOT perform a post-batch upload validation using the Microsoft Graph API. The results of the batch job will be used to determine if files were successful. File metadata sent in the batch job is assumed to be correct and is not validated. By skipping post-batch upload validation, DryvIQ makes fewer Graph API calls, which should reduce rate limiting and increase job throughput. | false |
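Keys flagged as applicable to appSettings.json can be set globally rather than per connection. As a minimal sketch of such an override (whether the colon-delimited keys are stored flat, as shown here, or nested into sections is an assumption about the file layout, not something this table confirms):

```json
{
  "amazon:version_buckets": false,
  "box:concurrent_file_upload_chunks": 5,
  "fs:network_drives": true
}
```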
|
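Several rows above note that a per-connection override (for example, a chunked-uploader concurrency value) must sit inside the auth block of the body sent when you PATCH the connection. The sketch below only illustrates that placement; the payload shape and key name are assumptions drawn from those notes, not a documented API contract.

```python
import json

# Hedged sketch: the "auth" block placement follows the table's notes on
# per-connection overrides; the surrounding payload shape is an assumption.
def build_connection_patch(chunk_count: int) -> dict:
    """Place the uploader setting inside the auth block, not at the top level."""
    return {
        "auth": {
            "concurrent_file_upload_chunks": chunk_count
        }
    }

body = json.dumps(build_connection_patch(5))
print(body)
```

The same shape applies to the Azure, Box, and Dropbox concurrency keys; only the key name inside the auth block changes.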
...