Overview

Parallel writes is a configurable setting that controls how many web service requests an instance of DryvIQ, on a given node, will issue in parallel. It is very important to note that increasing the number of parallel writes does not always mean faster or better transfers. There is a long list of concepts that have to be taken into account. Please see DryvIQ Platform | Scalability and Performance Whitepaper, which covers these areas of consideration.

Currently, parallel writes for an individual job can only be set using the REST API. Global parallel writes can be set in the Performance Settings.

Parallel Writes and Memory Usage

A job in DryvIQ does not use a fixed amount of memory. Memory usage for individual jobs varies based on a number of factors; the most significant is the number of files and how those files are distributed (all in one folder, spread across sub-folders, etc.). To avoid excessive memory usage related to how content is distributed, DryvIQ recommends preserving the system default for Directory Item Limits | max_items_per_container. The main drivers of memory usage for a DryvIQ node are the number of concurrent jobs, the Parallel Writes Per Job setting for each job, and the memory impact of the specific jobs.
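As a rough illustration of that last point, the upper bound on simultaneous write requests from one node is the sum of each running job's parallel writes. The job names and values below are purely hypothetical:

```python
# Rough upper bound on simultaneous write requests from one DryvIQ node:
# the sum of parallel_writes across all concurrently running jobs.
# Job names and values here are illustrative, not actual defaults.
running_jobs = {
    "HR share migration": 8,
    "Finance archive": 4,
    "Legal hold copy": 4,
}

max_concurrent_writes = sum(running_jobs.values())
print(max_concurrent_writes)  # 16
```

Lowering either the number of concurrent jobs or any job's parallel writes value reduces this bound, which is why those are the mitigations discussed below.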

Addressing Memory Issues

If memory issues occur after increasing the Directory Item Limit or Parallel Writes Per Job, the only mitigations are reducing the number of concurrent jobs or breaking up the source content into multiple jobs. DryvIQ will keep using memory until it runs out (it will not self-limit) and will eventually reach the environment maximum. Reaching the environment maximum may result in a non-graceful termination of DryvIQ, which can cause jobs to re-transfer files, permissions, or metadata. Larger jobs stopped in this manner will enter recovery mode, consume all available memory again, and be stopped again, looping and causing a loss of throughput.

Default Parallel Write Settings

The default parallel writes value is 4, 8, or 12, based on the logical processor count of the machine running the DryvIQ service.

If the machine has 2 logical processors, the default parallel writes value is 4.

If the machine has 8 logical processors, the default parallel writes value is 8.

If the machine has 32 logical processors, the default parallel writes value is 12.
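Only three processor counts are documented, so the exact tier boundaries are not stated. The sketch below assumes plausible cutoffs (fewer than 8 logical processors maps to 4, 8 through 31 to 8, 32 or more to 12) purely for illustration:

```python
def default_parallel_writes(logical_processors: int) -> int:
    """Sketch of the default tiering; the cutoff points are an assumption.

    Only three data points are documented:
    2 processors -> 4, 8 processors -> 8, 32 processors -> 12.
    """
    if logical_processors < 8:
        return 4
    if logical_processors < 32:
        return 8
    return 12

print(default_parallel_writes(2))   # 4
print(default_parallel_writes(8))   # 8
print(default_parallel_writes(32))  # 12
```

For processor counts between the documented values, confirm the actual default on your own node rather than relying on this guess.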

Set Parallel Writes for a Job

Add the following to the transfer block of your job.

{
  "performance": {
    "parallel_writes": {
      "requested": 4
    }
  }
}

Example

{
  "name": "Test Parallel Writes",
  "kind": "transfer",
  "transfer": {
    "audit_level": "trace",
    "transfer_type": "copy",
    "performance": {
      "parallel_writes": {
        "requested": 4
      }
    },
    "source": {
      "connection": {
        "id": "{{cloud_connection}}"
      },
      "target": {
        "path": "/MASTER_TESTS/BASIC TRANSFER TESTS"
      }
    },
    "destination": {
      "connection": {
        "id": "{{cloud_connection}}"
      },
      "target": {
        "path": "/SAP/LB/Test_ParallelWrites"
      }
    }
  },
  "schedule": {
    "mode": "manual"
  }
}
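The snippet below builds the same job body programmatically and validates that it serializes to well-formed JSON before submission. The {{cloud_connection}} IDs are placeholders from the example above, and the submission endpoint noted in the trailing comment is assumed from this page's other REST calls:

```python
import json

# Build the example job body. The {{cloud_connection}} IDs are placeholders;
# substitute real connection IDs before submitting.
job = {
    "name": "Test Parallel Writes",
    "kind": "transfer",
    "transfer": {
        "audit_level": "trace",
        "transfer_type": "copy",
        "performance": {"parallel_writes": {"requested": 4}},
        "source": {
            "connection": {"id": "{{cloud_connection}}"},
            "target": {"path": "/MASTER_TESTS/BASIC TRANSFER TESTS"},
        },
        "destination": {
            "connection": {"id": "{{cloud_connection}}"},
            "target": {"path": "/SAP/LB/Test_ParallelWrites"},
        },
    },
    "schedule": {"mode": "manual"},
}

body = json.dumps(job)  # serializes cleanly, confirming the structure is valid JSON
print(json.loads(body)["transfer"]["performance"]["parallel_writes"]["requested"])  # 4

# To submit (endpoint assumed from this page's REST examples):
# POST {{url}}v1/jobs with Content-Type: application/json and `body` as the payload.
```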

To review your job and confirm the requested parallel writes value, use the following call.

GET {{url}}v1/jobs?include=all
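If you capture that response as JSON, you can pull out each job's requested value. The response shape below (a top-level "items" array of job objects) is an assumption for illustration and may differ from the actual API payload:

```python
import json

# Hypothetical excerpt of a GET {{url}}v1/jobs?include=all response.
# The "items" wrapper and field layout are assumptions for illustration.
response_text = json.dumps({
    "items": [
        {
            "name": "Test Parallel Writes",
            "transfer": {"performance": {"parallel_writes": {"requested": 4}}},
        }
    ]
})

for job in json.loads(response_text)["items"]:
    requested = job["transfer"]["performance"]["parallel_writes"]["requested"]
    print(f'{job["name"]}: {requested}')  # Test Parallel Writes: 4
```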

Update Parallel Writes on an Existing Job

The following body in a PATCH request to {{url}}v1/jobs/{{job}} will update the parallel_writes value to 8.

{
  "kind": "transfer",
  "transfer": {
    "performance": {
      "parallel_writes": {
        "requested": 8
      }
    }
  }
}
