Intercontinental OpenFlow demo at SC13

During the SuperComputing 2013 (SC13) conference, November 18-21, an international team of high energy physicists, computer scientists, and network engineers from the California Institute of Technology (Caltech), the University of Victoria (UVic), the University of Michigan, Vanderbilt University, the University of São Paulo, and the laboratories KIT, CERN, LBNL, and BNL performed a number of demonstrations using a variety of next-generation network equipment and servers.

Figure 1: Intercontinental OpenFlow testbed formed for SC13. Links: Victoria – Denver – São Paulo – Chicago – New York – Amsterdam – CERN (in Geneva).

This year UVic, with its partners BCNet and CANARIE, focused on demonstrating an intercontinental OpenFlow testbed spanning Victoria – Denver – São Paulo – Chicago – New York – Amsterdam – CERN (in Geneva), as shown in Figure 1. The westernmost node of the Canadian Software Defined Network testbed, at UVic, was connected with a 10G link to the Internet2 Advanced Layer 2 Services (AL2S) switch in Seattle. Two diverse paths were carried over Internet2, terminating on two separate switches on the SC13 show floor. Physics data was transferred over this network using OpenFlow link-layer multipath switching, which efficiently utilizes the multiple network paths while maintaining a loop-free topology. The multipath utilization is completely transparent to the end hosts and is accomplished using the Caltech-developed OLiMPS OpenFlow controller, which resided at CERN in Switzerland. This methodology allows automated in-network load balancing of flows to increase network utilization and efficiency. Moreover, network links can be added arbitrarily to increase capacity and redundancy without the traditional challenges of Layer 2 link aggregation using LACP.
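
The OLiMPS controller itself is not reproduced here, but the basic idea of flow-level multipath switching is easy to sketch: the controller pins each Layer 2 flow (for example, a source/destination MAC pair) to one of the available paths, so an individual flow's frames stay in order while the set of flows spreads across the links. Below is a minimal, hypothetical sketch using the open-source Ryu OpenFlow 1.3 framework; the uplink port numbers and the static hash-based path choice are illustrative assumptions only, whereas OLiMPS performs dynamic load balancing on top of this idea.

```python
# Minimal sketch of flow-pinned Layer 2 multipath forwarding with OpenFlow 1.3,
# assuming the Ryu framework. Not the OLiMPS controller: the uplink ports and
# the static hash-based path choice below are hypothetical simplifications.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, MAIN_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3
from ryu.lib.packet import packet, ethernet


class MultipathSketch(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    UPLINK_PORTS = [1, 2]  # hypothetical: one port per diverse WAN path

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def on_switch_connect(self, ev):
        # Table-miss rule: punt unmatched packets to the controller.
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser
        actions = [parser.OFPActionOutput(ofp.OFPP_CONTROLLER, ofp.OFPCML_NO_BUFFER)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=0,
                                      match=parser.OFPMatch(), instructions=inst))

    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def on_packet_in(self, ev):
        msg = ev.msg
        dp = msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser

        eth = packet.Packet(msg.data).get_protocol(ethernet.ethernet)
        if eth is None:
            return

        # Pin each MAC pair to one uplink so its frames stay in order, while
        # different flows spread across the available paths without a loop.
        out_port = self.UPLINK_PORTS[hash((eth.src, eth.dst)) % len(self.UPLINK_PORTS)]

        match = parser.OFPMatch(eth_src=eth.src, eth_dst=eth.dst)
        actions = [parser.OFPActionOutput(out_port)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=10,
                                      match=match, instructions=inst))

        # Forward the packet that triggered the rule.
        data = msg.data if msg.buffer_id == ofp.OFP_NO_BUFFER else None
        dp.send_msg(parser.OFPPacketOut(datapath=dp, buffer_id=msg.buffer_id,
                                        in_port=msg.match['in_port'],
                                        actions=actions, data=data))
```

Run with ryu-manager against an OpenFlow 1.3 switch, this would forward each MAC pair over one of the two uplinks while keeping the forwarding loop-free from the hosts' point of view.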

During the demonstration we were able to establish perfectly balanced network traffic across the multiple links of the testbed in a configuration that would normally cause a network loop. Figure 2 shows the results of transferring files from a single machine located on the show floor in Denver.

Figure 2: Perfect balancing across diverse Layer 2 paths between Victoria and Denver.

Disk-to-Disk at 96 Gbps over the WAN for SC12

Project Press release

During the SuperComputing 2012 (SC12) conference, November 12-16, an international team of high energy physicists, computer scientists, and network engineers led by the California Institute of Technology (Caltech), the University of Victoria (UVic), and the University of Michigan, together with Brookhaven National Lab, Vanderbilt, and other partners, smashed their previous records for data transfers using the latest generation of wide area network circuits. Three 100 Gbps circuits were set up by CANARIE, BCNet, Internet2, StarLight, and CENIC, all terminating at the Caltech booth in Salt Lake City (Figure 1).

Figure 1: WAN diagram showing the three 100G circuits and their paths to the conference show floor.

The UVic team focused on constructing an end-site system that could read and write data to disk at 100 Gbps using a minimal set of hardware. UVic was able to produce disk reads above 95 Gbps (Figure 2) using only four IBM x3650 M4 servers, each populated with 16 OCZ Vertex 4 SSDs. An individual IBM server was able to read from disk at 38 Gbps and write stably at 24 Gbps.

Figure 2: Disk-to-disk throughput between UVic and the Caltech booth on the show floor.
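
As a rough sanity check on those numbers: 95 Gbps is about 11.9 GB/s, so across the 64 SSDs in the four servers each drive only needs to sustain roughly 185 MB/s of sequential reads. The sketch below is not the tooling used for the demo and the device paths are hypothetical; it simply shows one way to probe aggregate sequential read throughput across a set of drives by reading them in parallel.

```python
# Rough parallel read-throughput probe. Hypothetical device paths; run as root
# against real block devices, or point it at large files to experiment safely.
import os
import time
from multiprocessing import Pool

DEVICES = ["/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde"]  # one entry per SSD
BLOCK_SIZE = 4 * 1024 * 1024   # 4 MiB sequential reads
READ_BYTES = 8 * 1024 ** 3     # read 8 GiB from each device


def read_device(path):
    """Sequentially read READ_BYTES from one device and return the bytes read."""
    done = 0
    with open(path, "rb", buffering=0) as f:
        while done < READ_BYTES:
            chunk = f.read(BLOCK_SIZE)
            if not chunk:
                break          # device or file smaller than READ_BYTES
            done += len(chunk)
    return done


if __name__ == "__main__":
    start = time.time()
    with Pool(len(DEVICES)) as pool:
        totals = pool.map(read_device, DEVICES)  # one reader process per drive
    elapsed = time.time() - start
    gbps = sum(totals) * 8 / elapsed / 1e9
    print(f"aggregate read throughput: {gbps:.1f} Gbps over {elapsed:.1f} s")
```

For meaningful numbers the page cache should be dropped first (for example via /proc/sys/vm/drop_caches), since cached reads would wildly overstate disk throughput.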

Each IBM server used a single Mellanox ConnectX-3 Ethernet card connected to a Juniper MX480 with QSFP+ optics. The connection was carried out of the MX480 via a 100G LR4 CFP optic to a Ciena OME 6500 for optical transport over CANARIE and BCNet to Seattle. From Seattle the circuit was carried over the Internet2 100G SDN infrastructure to Salt Lake City. The established circuit performed flawlessly, with zero packet drops, allowing very high sustained throughput.

The combined team achieved a total aggregate disk-to-disk throughput from all three sites peaking at 187 Gbps (175 Gbps sustained) and a peak aggregate memory-to-memory throughput of 339 Gbps. Caltech achieved an 80 Gbps memory-to-memory transfer to a single server with two 40GE interfaces at Salt Lake City, with nearly 100% utilization of the servers' interfaces at both ends, and a transfer using Remote Direct Memory Access (RDMA) over Ethernet between Pasadena and Salt Lake City that sustained 75 Gbps with a CPU load on the servers of only 5%.

Figure 3: Peak memory-to-memory throughput of 336 Gbps achieved on the morning of November 15.

Video interviews and animation describing the project