NetApp Stage KB

How to change the process and concurrency limits of scanner and transferrer in Cloud Sync

Category:
cloud-sync
Specialty:
NAS

Applies to

Cloud Sync

Answer

  • The scanner and transferrer process and concurrency limits can be changed from the Cloud Manager UI or from the data broker command line.
    • Using the Cloud Manager UI:
      • On the Cloud Manager Sync page, click the "Manage Data Broker" tab
      • Click the Settings icon next to the Data Broker Group
      • Set the process and concurrency limits and click "Unify Configuration"
    • Using the data broker CLI:
      • The example below changes the number of files processed concurrently to 30 (the default is 50) per transferrer process.
      • Because the limit applies per core, a 4-core Cloud Sync instance would then process up to 120 files (30 per core) in total.
      • Access the broker(s) via SSH and execute the following (sudo may be required to execute the commands):

> pm2 stop all

> vi /opt/netapp/databroker/config/local.json

{
  "workers": {
    "transferrer": {
      "concurrency": 30
    }
  }
}

:wq!  (save and exit vi)

> pm2 start all
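As an alternative to editing the file interactively in vi, the edit itself can be scripted. The sketch below is a minimal example, assuming jq is available; it demonstrates the change on a hypothetical copy of the config in /tmp rather than the live /opt/netapp/databroker/config/local.json:

```shell
# Create a sample config mimicking local.json (hypothetical copy for
# demonstration; on a real broker you would operate on
# /opt/netapp/databroker/config/local.json instead).
cat > /tmp/local.json <<'EOF'
{
  "workers": {
    "transferrer": {
      "concurrency": 50
    }
  }
}
EOF

# Set workers.transferrer.concurrency to 30 with jq, writing to a temp
# file first so a failed edit never truncates the original.
jq '.workers.transferrer.concurrency = 30' /tmp/local.json > /tmp/local.json.new \
  && mv /tmp/local.json.new /tmp/local.json

cat /tmp/local.json
```

On a live data broker this edit would be wrapped between `pm2 stop all` and `pm2 start all`, as in the manual steps above.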

Additional Information

  • Remember that the concurrency limit you set applies per core: with 4 cores and a limit of 30, up to 120 files are processed at once.
  • This change must be made on each data broker separately.
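The per-core arithmetic above can be checked directly on a broker. A small sketch, assuming a Linux data broker where `nproc` reports the core count:

```shell
# Effective total concurrency = per-core limit * number of cores.
# With the per-core limit of 30 from this article, a 4-core instance
# works out to 120 concurrent files.
PER_CORE_LIMIT=30
CORES=$(nproc)
echo "Total concurrent files: $(( PER_CORE_LIMIT * CORES ))"
```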

 

NetApp provides no representations or warranties regarding the accuracy or reliability or serviceability of any information or recommendations provided in this publication or with respect to any results that may be obtained by the use of the information or observance of any recommendations provided herein. The information in this document is distributed AS IS and the use of this information or the implementation of any recommendations or techniques herein is a customer's responsibility and depends on the customer's ability to evaluate and integrate them into the customer's operational environment. This document and the information contained herein may be used solely in connection with the NetApp products discussed in this document.