Directories and configuration options may still reference SkySync, the former name of the DryvIQ platform. This is expected.
Overview
This page describes how to set global configurations for the DryvIQ platform. You can configure variables in the database through the command-line interface or in the appSettings.json file. (The appSettings.json file is located at C:\Program Files\SkySync\appSettings.json unless a different directory was specified during installation.)
Transfer Performance Factors
It is important to understand the factors that influence transfer performance. These variables all have a significant effect, positive or negative, on the throughput that any migration will achieve. Please consult DryvIQ Consultative Services or Customer Support before adjusting configuration options.
Changes to the appSettings.json file will not take effect until the DryvIQ service is restarted.
Environment Overrides and Precedence
Below is the order of precedence, from highest to lowest, in which DryvIQ reads configuration once installation is complete and the database and service are online. If a setting is defined in two locations, the value from the higher-precedence source overrides the value from the lower one.
1. License
2. Database
3. Command line
4. Environment variables
5. appSettings.json
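The precedence list above can be sketched as a simple lookup: the value from the highest-precedence source that defines a key wins. This is an illustrative sketch only, not DryvIQ code; the source names and the `resolve` function are hypothetical.

```python
# Illustrative sketch (not DryvIQ code) of the precedence order described
# above: the value from the highest-precedence source that defines a key wins.

# Sources ordered from highest to lowest precedence.
PRECEDENCE = ["license", "database", "command_line", "environment", "appsettings"]

def resolve(key, sources):
    """Return the value for `key` from the highest-precedence source that defines it.

    `sources` maps a source name (see PRECEDENCE) to a dict of settings.
    """
    for source in PRECEDENCE:
        if key in sources.get(source, {}):
            return sources[source][key]
    return None

# The database value (higher precedence) overrides the appSettings.json value.
sources = {
    "database": {"performance:concurrent_transfers": 10},
    "appsettings": {"performance:concurrent_transfers": 6},
}
```

With these sources, `resolve("performance:concurrent_transfers", sources)` returns 10, because the database entry outranks the appSettings.json entry.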
All settings can be set through the command line (--settingname=value) or through environment variables (SKYSYNC_settingname=value).
When using environment variables, replace each colon (:) with a double underscore (__). For example, server:port can be set with an environment variable named SKYSYNC_server__port.
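The naming rule above can be expressed as a one-line transformation. The helper below is hypothetical (it is not part of the DryvIQ CLI) and simply illustrates the documented convention.

```python
# Hypothetical helper (not part of the DryvIQ CLI) showing the naming rule:
# prefix the setting name with SKYSYNC_ and replace each colon with "__".
def to_env_var(setting_name: str) -> str:
    return "SKYSYNC_" + setting_name.replace(":", "__")
```

For example, `to_env_var("server:port")` yields `SKYSYNC_server__port`.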
Configuration Options
See Transfer Job Configuration Options for more information about using the transfer block.
Key | Description | Default Value | Applicable to appSettings.json Only |
---|---|---|---|
Audit | |||
transfer_audit_purge_after | The number of days to retain the audit records. | 90 | |
Catalog | |||
catalog:aspects:max_values | The maximum number of property values that will be stored in the catalog Postgres database. | 500 | |
catalog:query_max_export | The maximum number of records to include in an export file. Anything above this number will be truncated. | 100,000 | |
Channels | |||
channels:limits:buffer | The buffer size for remote site web socket channel | 4096 | |
channels:limits:max_message | The maximum size of a message that can be sent/received through a remote site channel | 1048576 | |
channels:timeouts:connect_retry | The remote site channel connection retry interval | 00:01 | |
channels:timeouts:keep_alive | The remote site channel keep-alive interval | 00:01 | |
channels:timeouts:stale | The interval before a remote site channel is marked as stale and forcibly closed | 00:05 | |
channels:timeouts:response | The remote site channel response timeout interval | 00:00:30 | |
Connectors | |||
connectors:default_client_redirect | The default OAuth2 client redirect URI | | |
connectors:hide_authentication_details | Hides the authentication block from the | true | |
Data | |||
data:provider | The database provider (sqlite, npgsql, sqlserver, mysql, oracle) | npgsql (sqlserver in development) | true |
data:connection | The database connection string | true | |
data:timeout | The default database command timeout interval | 00:05 | true |
data:directory | The application data directory (used for licensing, data, logging) | %LOCALAPPDATA%\SkySync\v4 | true |
Deployment | |||
deployment:packageDirectory | The directory to look for setup packages when building agent and remote site bundles | (null) | true |
Jobs | |||
jobs:retention:duration:type | The default job retention duration type (days, number, none, all) | days | |
jobs:retention:duration:count | The default job retention count | 21 (type=days); 50 (type=number) | |
jobs:retention:purge_empty | A flag indicating whether empty job executions are purged by default | false | |
jobs:priority | The default job priority | 5 | |
jobs:default_stop_policy:on_success | The number of successful executions before terminating a job by default | (null) | |
jobs:default_stop_policy:on_failure | The number of failed executions before terminating a job by default | (null) | |
jobs:convention_stop_policy:on_success | The number of successful convention executions before terminating a convention job by default | (null) | |
jobs:convention_stop_policy:on_failure | The number of failed convention executions before terminating a convention job by default | (null) | |
jobs:default_schedule:mode | The default job schedule mode | auto | |
jobs:default_schedule:interval | The default job schedule interval | 15m | |
jobs:default_schedule:max_execution | The maximum amount of time that a job can run by default | (null) | |
jobs:convention_schedule:mode | The default convention job schedule mode | auto | |
jobs:convention_schedule:interval | The default convention job schedule interval | 6h | |
jobs:convention_schedule:max_execution | The maximum amount of time that a convention job can run by default | (null) | |
jobs:terminate_on_idle | The amount of time (in minutes) the DryvIQ worker node can be idle before it exits. To see the idle time for the scheduler, go to http://localhost:9090/v1/diagnostics/metrics?q=schedulers. The idle time returned in the diagnostics is in seconds. | (null) | |
jobs:monitoring:cancel_polling_interval | The interval to use when polling for jobs that require cancellation | 00:00:05 | |
LDAP | |||
ldap:server | The default LDAP server name if not configured | (null) | |
ldap:port | The default LDAP server port if not configured | (null) | |
ldap:dn | The default LDAP DN if not configured | (null) | |
ldap:user | The default LDAP user name used for authentication with the LDAP server | (null) | |
ldap:password | The default LDAP password used for authentication with the LDAP server | (null) | |
License | |||
license:activation_key | The license activation key | (null) | true |
license:service_uri | The license service URL | | true |
license:directory | The directory containing the activated license | "data:directory"\License | true |
license:agent_key | Activation Key to use for Agents | 7784e901-0000-0000-0000-df4cfde55fb4 | |
license:site_key | Activation Key to use for Remote Sites | 7784e901-0000-0000-0000-df4cfde55fb4 | |
Logging | |||
logging:level | The application audit level and the default audit level for transfer jobs. Levels are ordered from most to least verbose. | info | |
logging:remoteLevel | The log level for remote log collection (currently Amazon Kinesis) | off | |
logging:retention_days | The log retention duration in days | 21 | |
Manager | |||
manager:host:url | The manager URL (used for remote sites and agents) | (null) | true |
manager:client_id | The client ID used when authenticating with a manager node | (null) | true |
manager:client_secret | The client secret used when authenticating with a manager node | (null) | true |
manager:mode | The application type used when authenticating with a manager node (i.e., site or agent) | (null) | true |
manager:site:user_id | The user ID to execute operations as on a remote site node | (null) | true |
Metrics | |||
metrics:graphite:host | The graphite server name to use when publishing metrics | (null) | true |
metrics:graphite:port | The graphite server port (can be empty and will default based on server format) | (null) | true |
metrics:graphite:id | A node identifier to prefix all metric key names (useful to distinguish metrics coming from multiple nodes in a cluster) | (null) | true |
Net | |||
net:timeouts:default | The default timeout applied to most HTTP requests | 00:05 (5 minutes) | |
net:timeouts:activity | The sliding timeout applied to read/write HTTP requests | Value of "net:timeouts:default" or default (00:05) | |
net:fiddler:enable | Enable the FiddlerCore plugin to allow the embedded Fiddler to collect traces. You will still need to separately enable and disable trace collection as needed. | false | |
net:fiddler:output | Fiddler traces are output to the logger by default. If you want to output the Fiddler traces to the .saz file, you need to add | (null) | |
net:fiddler:collect | This is "false" by default. The Enable Capture toggle on the Performance tab in Settings updates this configuration to "true." Enabling capture through the CLI also updates this configuration value to "true." The configuration value is saved to the database, so it will be picked up by all nodes through the configuration watcher system job. Because the new configuration is saved to the database, if Fiddler capture is enabled, it will also stay enabled when the system restarts. There is another configuration value that is supposed to handle this scenario, | false | |
Notifications | |||
notification:sms:enabled | Allow sending notifications via SMS | false | |
notification:slack:enabled | Allow sending notifications via Slack | false | |
notification:msteams:enabled | Allow sending notifications via Microsoft Teams | false | |
notification:json:enabled | Allow sending notifications via generic webhook endpoint | false | |
Performance | |||
performance:retries | Unused (ideally this would be tied into our recovery policies) | (null) | |
performance:parallel_writes | The default number of parallel writes to use during transfer execution. The default value is 4, 8, or 12 based on the CPU logical processor count of the machine running the DryvIQ service. For example, if the machine has 2 logical processors, the default parallel writes value is 4. | Varies | |
performance:concurrent_transfers | The default number of jobs to run in parallel | 1 (SQLite); 6 (all others) | |
performance:throttle:upload | The default bandwidth limiter to apply on uploads | (null) | |
performance:throttle:download | The default bandwidth limiter to apply on downloads | (null) | |
Security | |||
security:tokens:lifetime:access_token | The lifetime of OAuth2 access tokens (in seconds) | 1 hour | true |
security:tokens:lifetime:refresh_token | The lifetime of OAuth2 refresh tokens (in seconds) | 14 days | true |
security:tokens:validation:clock_skew | The clock skew to use when validating times during OAuth2 token validation (in seconds) | 5 minutes | true |
Server | |||
server:slave | A flag indicating that a node is a slave in a DryvIQ cluster | false | true |
server:security | A flag indicating if security should be enabled for a DryvIQ node | true | true |
server:port | The port that the server will listen on for HTTP requests | 9090 | true |
server:ports:{0..N} | This allows you to expose the server over multiple ports | (null) | true |
server:certificate | The path to the SSL certificate | (null) | true |
server:certificate_password | The password for the server SSL certificate | (null) | true |
server:instance_id | Unique identifier of DryvIQ instance (check with engineering prior to use) | (null) | true |
server:gateway_url | The public-facing URL of the DryvIQ instance, used for security and for link generation in notification emails | | |
Transfers | |||
transfers:batch_mode | The default batch mode usage policy to use for transfer jobs (none, initial, always) | always | |
transfers:conflict_resolution | The default conflict resolution policy to use for transfer jobs (copy, latest, source, destination, failure) | copy | |
transfers:delete_propagation | The default delete propagation policy to use for transfer jobs (mirror, ignore_source, ignore_destination, ignore_both) | ignore_both | |
transfers:duplicate_names | The default duplicate name resolution policy to use for transfer jobs (warn, rename) | warn | |
transfers:empty_containers | The default empty container policy to use for transfer jobs (create, skip) | create | |
transfers:encode_invalid_characters | Encodes invalid characters instead of replacing them with an underscore. The UTF-8 bytes for invalid characters are converted to a hex string. Example: 123白雜.txt would be converted to 123E799BDE99B9CE8.txt | false | |
transfer:failure_policy | The default failure policy to use for transfer jobs (continue, halt) | continue | |
transfers:item_overwrite | The default item overwrite policy (fail, skip, overwrite) | overwrite | |
transfers:large_item | The default large file handling policy (fail, skip) | fail | |
transfers:lock_propagation | The default lock propagation option (ignore, mirror_owner, mirror_lock) | ignore | |
transfers:max_pending_batches | The maximum number of transfer batches that will be queued with a destination platform before DryvIQ starts waiting for the batches to complete | 1000 | |
transfers:permissions | The default permission preservation policy to use for transfer jobs (none, add, diff) | none | |
transfers:preserve_owners | The default audit trail preservation option | false | |
transfers:rendition | The default rendition selection policy to use for transfer jobs (original, rendition) | original | |
transfers:restricted_content | The default restricted content handling policy (fail, warn, skip, convert) | convert | |
transfers:skip_preserve_owners_on_failure | Allows skipping owner preservation if there was an error while uploading a file or folder with a mapped owner. If this option is set to true, after a failed attempt to set the owner, DryvIQ retries the upload without owner preservation. | false | |
transfers:segment_transform | The default flag indicating whether segment transformation is enabled | true | |
transfers:segment_truncate | Shortens a path or segment name when it exceeds the maximum number of characters allowed by the platform. Truncation removes the right-most characters until the path meets the length requirements. | false | |
transfers:timestamps | The default timestamp preservation policy to use for transfer jobs | true | |
transfers:tracking:detection | The default change tracking policy (none, native, crawl) | native | |
transfers:tracking:reset:on_increment | The default number of executions before resetting change tracking state. The initial job run is not included in the increment count. For example, if you set the option to 2, the reset won't be triggered until the third run. | (null) | |
transfers:tracking:reset:on_interval | The default interval before resetting change tracking state | (null) | |
transfers:tracking:reset:type | The reset type. Options are stats, soft, hard, and permissions; more than one can be set. If both the soft and hard options are used, a hard reset will be performed instead of a soft reset. Either transfers:tracking:reset:on_increment or transfers:tracking:reset:on_interval must be set in order for this option to take effect. Example: { "transfers": { "tracking": { "reset": { "on_increment": 5, "type": "stats,soft,permissions" } } } } | | |
transfers:transfer_type | The default transfer type to use for jobs when not specified | sync | |
transfers:versioning:from_destination | The default number of versions to maintain on the destination platform. Not all platforms support version deletes. When a specific transfer value is set and the destination platform doesn't support version deletes, DryvIQ uses the following logic to determine how it handles transferring the versions: If the file doesn't exist on the destination, DryvIQ respects the version limit and only transfers the set number of versions during the initial copy/migration. If the file exists on the destination, DryvIQ migrates all new versions of the file from the source to the destination, even if this results in exceeding the file version limit. This ensures all new content is transferred. DryvIQ logs a warning to inform the user that the transfer took place and resulted in the version count being exceeded. | (null) | |
transfers:versioning:from_source | The default number of versions to maintain on the source platform | (null) | |
transfers:versioning:preserve | The default version preservation policy to use for transfer jobs. none: Turns version detection off; DryvIQ will not preserve versions in the transfer. Only the latest version of the file will be transferred on the initial job run. Version detection will not be used on subsequent job runs, so new versions added to the file after the initial run will not be identified or transferred. This will overwrite versions that exist on the destination. native: Turns version preservation on; DryvIQ will preserve versions in the transfer. When this option is selected, use the "select" option to control which versions should transfer. This is the default "preserve" option. | native | |
transfers:versioning:select | The default version selection policy to use for transfer jobs (all, latest, published, unpublished). If you select "native" as the preserve option, use the "select" option to specify which versions should transfer. all: DryvIQ will transfer all versions of the file. However, if the number of versions exceeds the version limit of the destination platform, DryvIQ will transfer the most recent file version and continue to transfer versions in order until the version limit is reached. You can configure the number of versions you want to maintain on the source and/or destination using the from_source and from_destination configuration options. latest: DryvIQ will only transfer the latest version of the file. If DryvIQ identifies that a new version of the file was added after the initial job run, DryvIQ will transfer the newest version on subsequent job runs, even if there are many versions on the source. This doesn't overwrite versions that are already on the destination. published: If a platform supports the concept of publishing files, DryvIQ will only transfer the "published" versions of the file. (This option can also be used for Dropbox connections to ensure transfer of true file versions. See Transferring Versions with Dropbox below for more information.) unpublished: If a platform supports the concept of publishing files, DryvIQ will only transfer the "unpublished" versions of the file (a version that was removed from a previously published state). | all | |
Quartz | |||
quartz:checkin_interval | How often Quartz checks in with the database. A missed check-in sends all of the node's jobs into recovery on other nodes. | 00:01:00 | |
quartz:recovery_grace_period | How long to wait before running recovery jobs from a node with a missed check-in (see above) | 00:15:00 | |
quartz:max_thread_count | The maximum number of threads that can be created for concurrent jobs. This setting should stay internal since there should be no reason to go higher than 30. The UI is limited to 20. | 30 | |
Connector Specific Configurations
Key | Description | Default Value | Applicable to appSettings.json Only |
---|---|---|---|
Amazon | |||
amazon:version_buckets | Turn on bucket versioning globally by default for all Amazon S3 connections | false | |
Azure | |||
azure:concurrent_file_upload_chunks | Sets the number of concurrent chunks for the Azure chunked uploader. If you are setting the value for an individual connection, the parameter needs to be in the auth block when you PATCH the connection. | 3 | |
Box (if using a service account, use the box-service prefix) | |||
box:concurrent_file_upload_chunks box-service:concurrent_file_upload_chunks | Sets the number of concurrent chunks for the Box chunked uploader. If you are setting the value for an individual connection, the parameter needs to be in the auth block when you PATCH the connection. | 3 | |
box:suppress_notes_versions box-service:suppress_notes_versions | Determines whether Box Notes versions should be transferred. By default, the value is set to true, which suppresses versions and only transfers the latest version. If set to false, all versions of the Box Note will transfer. | true | |
box:suppress_notifications box-service:suppress_notifications | Suppress notifications globally by default for all Box connections | true | |
box:metadata_template box-service:metadata_template | The default metadata template for all Box connections | (null) | |
Dropbox for Business | |||
dfb:concurrent_chunk_uploads | Sets the number of concurrent chunks for the Dropbox for Business chunked uploader. If you are setting the value for an individual connection, the parameter needs to be in the auth block when you PATCH the connection. | 3 | |
dfb-teams:concurrent_chunk_uploads | Sets the number of concurrent chunks for the Dropbox for Teams chunked uploader. If you are setting the value for an individual connection, the parameter needs to be in the auth block when you PATCH the connection. | 3 | |
Dropbox | |||
dropbox:concurrent_chunk_uploads | Sets the number of concurrent chunks for the Dropbox chunked uploader. If you are setting the value for an individual connection, the parameter needs to be in the auth block when you PATCH the connection. | 3 | |
File System | |||
fs:network_drives | A flag indicating whether mapped network drives display in the root of a file system connection | false | |
fs:junction_points | A flag indicating whether junction points should be followed | true | |
fs:simulate_libraries | A flag indicating whether libraries should be simulated using platform defaults | true | |
Google Workspace | |||
google-suite:suppress_notifications | Suppress notifications for Google Workspace (formerly G Suite) connections | true | |
google-suite:new_account:password | The new account password for Google Workspace (formerly G Suite) | Sky$ync1 | |
google-suite:allow_shared_with_me | Allow "Shared with Me" items for Google Workspace (formerly G Suite) | false | |
google_suite:allow_file_discovery | Determines whether files with shared links are searchable in Google Workspace (G Suite) | false | |
google-suite:suppress_external_notifications | Prevents external share notifications from being triggered specifically for Google Workspace (G Suite). This will prevent the retry that overrides the suppress_notifications setting. | false | |
google-suite:add_owner_to_metadata | Allows a Google Drive migration to transfer only files created by the drive owner | false | |
ShareFile | |||
sharefile:suppress_notifications | Suppress notifications for ShareFile connections | true | |
sharefile:new_account:password | The new account password for ShareFile | Sky$ync1 | |
sharefile:max_segment_length | The maximum segment length for file names | Default = 180 (version 4.10.1.1627 and later; previous releases have a 100-character default and max); max = 256 | |
Syncplicity | |||
syncplicity:base_uri | The Syncplicity base URI | | |
syncplicity:new_account:password | The new account password for Syncplicity | Sky$ync1 | |
Office 365 | |||
(office365/office365-oauth2):item_crawler_mode | Indicates the crawl mode used with the Microsoft API | auto | |
(office365/office365-oauth2):batch_item_limit | The default batch item limit in Office365 | 100 | |
(office365/office365-oauth2):batch_max_size | The default batch max total size in Office365 | (null) | |
(office365/office365-oauth2):batch_monitor_interval | The retry interval when monitoring Office365 batches for completion (ms) | 1000 | |
(office365/office365-oauth2/transfers):batch_monitor_max_retries | The maximum number of retries when monitoring transfer batches for completion. At the default 1000 ms monitor interval, 86400 retries equates to a 24-hour batch timeout. | 86400 | |
(office365/office365-oauth2):force_csom_batch_validation | A flag that, when "true," forces Office365 to leverage CSOM for batch validation. When it is "false," DryvIQ leverages the new Async Metadata Read API for batch validation. The default is currently "true" while the new API continues to be evaluated. This will change once that effort is complete, because leveraging the new API should put significantly less load on the CSOM rate limits. | true | |
(office365/office365-oauth2):restricted_folders | Comma delimited list of restricted folder names in Office365 | ||
(office365/office365-oauth2):invalid_characters | A string list of invalid characters in Office365 | ||
(office365/office365-oauth2):site_template_ids | Comma delimited list of site template ids to be returned when listing site collections | "STS#0", "GROUP#0", "EHS#1" | |
SharePoint | |||
(Specified SharePoint Connector):skip_batch_validation | The specified SharePoint connector will NOT perform a post-batch upload validation using the Microsoft Graph API. The results of the batch job will be used to determine whether files were successful. File metadata sent in the batch job is assumed to be correct and is not validated. By skipping post-batch upload validation, DryvIQ makes fewer Graph API calls, which should reduce rate limiting and increase job throughput. | true | |
(Specified SharePoint Connector):use_csom_for_permissions | The specified SharePoint connector will use the CSOM API for getting/setting permissions. This ensures the full permission sets are passed. If this flag is not set, DryvIQ will use the Graph API, which has limitations on the permissions it will pass. For example, "Can download" permissions will not be provided, so DryvIQ will translate the user's permissions to "Can view." | false | |
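Several of the chunked-upload settings above note that, for an individual connection, the parameter belongs in the auth block when you PATCH the connection. A minimal sketch of such a PATCH body is shown below; the endpoint, surrounding fields, and the chunk count value are illustrative, not taken from this document.

```json
{
  "auth": {
    "concurrent_file_upload_chunks": 5
  }
}
```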
Usage Examples
Each of the override options above has its own method for setting a variable. The examples below increase the default concurrent transfer limit.
Database
To set the config option via the database, enter the following command with your desired option:
>skysync.exe config set performance:concurrent_transfers 10 --in-database
To clear the configuration option in the database, run the following:
>skysync.exe config clear --config-key performance:concurrent_transfers --in-database
Command-Line Interface
To set the config option via the command-line interface, enter the following with your desired option:
>skysync.exe --performance:concurrent_transfers=10
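Environment Variable
To set the configuration option through an environment variable, apply the naming rule described above (prefix with SKYSYNC_ and replace colons with double underscores). The example below uses POSIX shell syntax for illustration; on Windows, set the variable with `set`, `setx`, or through the service environment.

```shell
# Export the variable before starting the DryvIQ service so the process inherits it.
export SKYSYNC_performance__concurrent_transfers=10
```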
appSettings.json File
To set the configuration option via your local appSettings.json file, add the following snippet with your desired option:
{
  "performance": {
    "concurrent_transfers": 10
  }
}
To see this in full context of your local appSettings.json file with multiple options enabled:
{
  "data": {
    "provider": "sqlserver",
    "connection": "Server=server_location;Database=database_name;User ID=sa;Password=passW@rd!;",
    "embedded": "false",
    "native_encryption": "false"
  },
  "performance": {
    "parallel_writes": { "requested": 6 },
    "concurrent_transfers": 10
  }
}
Changes to the appSettings.json file will not take effect until the DryvIQ service is restarted.