Disk-to-Disk at 96 Gbps over the WAN for SC12

Tags: Project, Press release

During the Supercomputing 2012 (SC12) conference, November 12-16, an international team of high energy physicists, computer scientists, and network engineers led by the California Institute of Technology (Caltech), the University of Victoria (UVic), and the University of Michigan, together with Brookhaven National Laboratory, Vanderbilt University, and other partners, smashed their previous records for data transfers using the latest generation of wide area network circuits. Three 100 Gbps circuits were set up by CANARIE, BCNet, Internet2, StarLight, and CENIC, all terminating at the Caltech conference booth in Salt Lake City (Figure 1).

Figure 1: WAN diagram showing the three 100G circuits and their path to the conference show floor.

The UVic team focused on constructing an end site system that could read and write data to disk at 100 Gbps using a minimal set of hardware. UVic was able to produce disk reads above 95 Gbps (Figure 2) using only four IBM x3650 M4 servers, each populated with 16 OCZ Vertex 4 SSDs. An individual IBM server was able to read from disk at 38 Gbps and write stably at 24 Gbps.

Figure 2: Disk-to-disk throughput between UVic and the Caltech booth on the show floor.
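
For readers curious how per-server disk rates of this kind are typically measured, the following is a minimal sketch of an aggregate sequential-read benchmark run across several SSDs in parallel. The device paths, block size, and test duration are illustrative assumptions, not the UVic team's actual configuration or tooling.

```python
#!/usr/bin/env python3
"""Minimal sketch: aggregate sequential-read throughput across several SSDs.

The device paths, block size, and duration below are illustrative
assumptions, not the configuration used in the SC12 demonstration.
Running against raw block devices requires root.
"""
import os
import time
import threading

DEVICES = ["/dev/sdb", "/dev/sdc", "/dev/sdd"]  # hypothetical SSD block devices
BLOCK_SIZE = 4 * 1024 * 1024                    # 4 MiB sequential reads
DURATION = 10                                   # seconds per device

def read_device(path, totals, idx):
    """Read sequentially from one device and record the bytes moved."""
    fd = os.open(path, os.O_RDONLY)
    deadline = time.time() + DURATION
    moved = 0
    try:
        while time.time() < deadline:
            buf = os.read(fd, BLOCK_SIZE)
            if not buf:                          # reached end of device; wrap around
                os.lseek(fd, 0, os.SEEK_SET)
                continue
            moved += len(buf)
    finally:
        os.close(fd)
    totals[idx] = moved

totals = [0] * len(DEVICES)
threads = [threading.Thread(target=read_device, args=(d, totals, i))
           for i, d in enumerate(DEVICES)]
start = time.time()
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.time() - start

gbps = sum(totals) * 8 / elapsed / 1e9
print(f"aggregate read throughput: {gbps:.1f} Gbps across {len(DEVICES)} devices")
```

A production benchmark would use direct I/O (O_DIRECT with aligned buffers) or a tool such as fio so the page cache does not inflate the apparent read rate.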

Each IBM server used a single Mellanox ConnectX-3 Ethernet card connected to a Juniper MX480 with QSFP+ optics. The connection was carried out of the MX480 via a 100G LR4 CFP optic to a Ciena OME 6500 for optical transport over CANARIE and BCNet to Seattle. From Seattle the circuit was carried over the Internet2 100G SDN infrastructure to Salt Lake City. The established circuit was flawless, with zero packet drops, allowing for very high sustained throughput.
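
Sustaining rates like these over a wide area path requires TCP socket buffers sized to at least the bandwidth-delay product of the link. The sketch below illustrates the arithmetic and the corresponding socket options; the 40 Gbps per-stream rate and 30 ms round-trip time are assumed values for illustration, not measured figures from the demonstration.

```python
import socket

# Illustrative assumptions (not measured values from the SC12 demo):
RATE_BPS = 40e9      # target per-stream rate, e.g. one 40GE interface
RTT_S = 0.030        # assumed WAN round-trip time, ~30 ms

# Bandwidth-delay product: the bytes that must be in flight to keep the pipe full.
bdp_bytes = int(RATE_BPS / 8 * RTT_S)     # 40 Gbps * 30 ms = 150 MB
print(f"required window: {bdp_bytes / 1e6:.0f} MB")

# Request send/receive buffers of at least the BDP on a sender socket.
# (The kernel caps these at net.core.wmem_max / rmem_max, which must also
# be raised via sysctl on a stock Linux host.)
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, bdp_bytes)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, bdp_bytes)
```

Because a single loss event collapses such a large congestion window, the zero-packet-drop circuit described above is what makes sustained transfers near line rate practical.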

The combined team was able to achieve a total aggregate disk-to-disk throughput from all three sites peaking at 187 Gbps (sustained 175 Gbps) and a peak aggregate memory-to-memory throughput of 339 Gbps. Caltech achieved an 80 Gbps memory-to-memory transfer to a single server with two 40GE interfaces in Salt Lake City, with nearly 100% use of the servers' interfaces at both ends, and a transfer using Remote Direct Memory Access (RDMA) over Ethernet between Pasadena and Salt Lake City that sustained 75 Gbps with a CPU load on the servers of only 5%.
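
Aggregate figures like these are typically derived by sampling per-interface byte counters on the hosts. As a rough illustration (not the monitoring used for the record runs), a sketch that samples Linux's /proc/net/dev and reports per-interface and total Gbps might look like this; the interface names are placeholders.

```python
import time

INTERFACES = ["eth2", "eth3"]   # placeholder names for the 40GE ports
INTERVAL = 1.0                  # sampling period in seconds

def rx_tx_bytes():
    """Return {iface: (rx_bytes, tx_bytes)} parsed from /proc/net/dev."""
    counters = {}
    with open("/proc/net/dev") as f:
        for line in f.readlines()[2:]:          # skip the two header lines
            name, data = line.split(":", 1)
            fields = data.split()
            counters[name.strip()] = (int(fields[0]), int(fields[8]))
    return counters

before = rx_tx_bytes()
time.sleep(INTERVAL)
after = rx_tx_bytes()

total_bits = 0
for iface in INTERFACES:
    rx = after[iface][0] - before[iface][0]
    tx = after[iface][1] - before[iface][1]
    bits = (rx + tx) * 8
    total_bits += bits
    print(f"{iface}: {bits / INTERVAL / 1e9:.2f} Gbps")
print(f"aggregate: {total_bits / INTERVAL / 1e9:.2f} Gbps")
```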

Figure 3: Peak memory-to-memory throughput of 336 Gbps achieved on the morning of November 15.

Video interviews and animation describing the project
