Large-Scale Data Transfers over the Internet - Past Project

We studied the network and application tuning issues that need to be addressed and managed when performing large-scale data transfers over the Internet. Specifically, we focused on large-scale data transfer issues pertaining to Disaster Recovery (DR) data backup and retrieval operations. We performed experiments in a LAN and on a pilot testbed involving OARnet's mass storage site and a library site at Wright State University. The unique DR requirements of Wright State University motivated us to develop automated scripts, which helped us address questions such as:

(i) What is the optimum large-scale data throughput obtainable using popular file transfer applications such as FTP, SCP, and BBFTP?
(ii) What improvements in large-scale data throughput can be obtained by using specialized TCP stacks such as Reno, FAST, HighSpeed (HS), and BIC?
(iii) How many parallel TCP streams are required to attain optimum large-scale data throughput?
(iv) What are the effects of different TCP window sizes and application buffer sizes on large-scale data throughput?
(v) What is the impact of file size on large-scale data throughput?

The experimental results demonstrated that adequate network and application tuning, coupled with relevant network management, could significantly improve the service response times of DR data backup and retrieval operations.
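For illustration, the sketch below shows the kind of automated probe such scripts can perform: it pushes dummy data over a configurable number of parallel TCP streams with an explicit socket send-buffer size and reports the aggregate throughput. The host name, port, and parameter values are placeholders, and the actual project scripts drove file transfer applications such as FTP, SCP, and BBFTP rather than raw sockets.

#!/usr/bin/env python3
"""Minimal sketch of an automated throughput probe: send data over N
parallel TCP streams with an explicit send-buffer size and report the
aggregate rate. Host, port, and parameter values are placeholders."""

import socket
import threading
import time

REMOTE_HOST = "backup.example.edu"    # hypothetical mass-storage endpoint
REMOTE_PORT = 5001                    # hypothetical discard/sink service
BYTES_PER_STREAM = 256 * 1024 * 1024  # 256 MB per stream
SOCK_BUF_BYTES = 4 * 1024 * 1024      # candidate TCP send-buffer size
NUM_STREAMS = 4                       # candidate parallel-stream count

def push_one_stream() -> None:
    """Open one TCP connection and send BYTES_PER_STREAM bytes of dummy data."""
    chunk = b"\0" * (64 * 1024)
    sent = 0
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        # Request a larger send buffer before connecting so the TCP window
        # can grow accordingly; the kernel may clamp the value (e.g. to
        # net.core.wmem_max on Linux).
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, SOCK_BUF_BYTES)
        sock.connect((REMOTE_HOST, REMOTE_PORT))
        while sent < BYTES_PER_STREAM:
            sock.sendall(chunk)
            sent += len(chunk)

def main() -> None:
    threads = [threading.Thread(target=push_one_stream) for _ in range(NUM_STREAMS)]
    start = time.monotonic()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    elapsed = time.monotonic() - start
    total_bits = NUM_STREAMS * BYTES_PER_STREAM * 8
    print(f"{NUM_STREAMS} streams, {SOCK_BUF_BYTES} B buffers: "
          f"{total_bits / elapsed / 1e6:.1f} Mbit/s aggregate")

if __name__ == "__main__":
    main()

Sweeping such a probe over different buffer sizes and stream counts yields the throughput comparisons described above; the same loop structure applies when the inner step invokes an external transfer tool instead of raw sockets.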

Bulk File Transfer

Figure: Disaster Recovery Testbed Setup

Documents
Prasad Calyam, Phani Kumar Arava, Nathan Howes, Siddharth Samsi, Chris Butler, Jeff Jones, "Network Tuning and Monitoring for Disaster Recovery Data Backup and Retrieval", OSC Technical Whitepaper, 2005. (Presented at SUN SuperG Meeting)