Utilizing 100G with one server at SC2014

 

The Canadian High Energy Physics network organization, HEPnet/Canada at the University of Victoria, and CANARIE, together with their colleagues at Caltech and the University of Michigan, are gearing up for a 100 GE demonstration using a single high-performance server in Victoria and another in the University of Michigan booth at SC2014.

High energy physics experiments, such as ATLAS and CMS at the LHC at the CERN Laboratory in Geneva, are collecting hundreds of petabytes of data. The data is distributed around the world using high-speed networks. As the volume of data increases, faster networks and storage systems need to be developed.

At the University of Victoria, this requires a Brocade MLXe-4 switch with a CFP2 module to establish a dedicated 100G circuit to the Supercomputing show floor. The path runs over BCNet and CANARIE to Pacific Wave in Seattle, where it is carried by a dedicated CenturyLink 100G path to the Caltech and University of Michigan booths via the SCinet (SC2014) network.

The high-performance data transfer nodes deployed by the Victoria and Michigan groups are Dell R920 servers populated with Fusion-io (SanDisk) solid-state storage cards; each connects to the MLXe-4 using 3x40GE NICs. An OpenDaylight controller employing custom multipath extensions will be set up on the show floor and used to control the MLXe-4 in Victoria.

100G WAN Transfer nodes

The figure shows the network path from the University of Victoria to the Caltech and University of Michigan booths on the SC2014 show floor.

Results

November 18: Work is ongoing to maximize the disk-to-disk transfer rate from the SanDisk Fusion ioMemory system.

November 17: Unexpected changes in switch behaviour led to issues on the link between UVic and SC2014 during Monday. These were identified and corrected in the evening. The plot shows the transfer from UVic (red) to the Michigan (yellow) booth.


November 15: The attached plot shows a perfectly stable 96 Gbps mem-to-mem transfer between one machine on the show floor and a single machine at UVic using only 9 TCP streams.
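The multi-stream technique behind this result can be sketched in miniature. The following is an illustrative localhost toy, not the tool used in the demonstration: it opens 9 parallel TCP streams, sends buffers held in memory, and counts the bytes received, which is the essence of a mem-to-mem transfer test.

```python
import socket
import threading

# Toy sketch of a parallel-stream memory-to-memory transfer over localhost
# (illustrative only; the real test ran over the 100G WAN path).
NUM_STREAMS = 9
CHUNK = 64 * 1024            # per-send buffer held in memory
CHUNKS_PER_STREAM = 16       # keep the demo small

def sink(conn, totals, idx):
    # Receive until the peer closes; just count bytes, never touch disk.
    total = 0
    while True:
        data = conn.recv(CHUNK)
        if not data:
            break
        total += len(data)
    totals[idx] = total
    conn.close()

server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(NUM_STREAMS)
port = server.getsockname()[1]

def send_stream():
    # One TCP stream pushing in-memory buffers to the receiver.
    s = socket.create_connection(("127.0.0.1", port))
    payload = b"\x00" * CHUNK
    for _ in range(CHUNKS_PER_STREAM):
        s.sendall(payload)
    s.close()

senders = [threading.Thread(target=send_stream) for _ in range(NUM_STREAMS)]
for t in senders:
    t.start()

totals = [0] * NUM_STREAMS
receivers = []
for i in range(NUM_STREAMS):
    conn, _ = server.accept()
    t = threading.Thread(target=sink, args=(conn, totals, i))
    t.start()
    receivers.append(t)

for t in senders + receivers:
    t.join()
server.close()

print(sum(totals))  # total bytes moved across all 9 streams
```

Splitting the load across several streams lets the aggregate rate exceed what a single TCP connection can sustain over a long, high-latency path.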

Plot: mem-to-mem transfers in both directions, November 15, 2014.

 

Transatlantic 100GE Demonstration with CANARIE


The Canadian High Energy Physics network organization, HEPnet/Canada at the University of Victoria, and CANARIE, together with their colleagues at Caltech, showed that researchers can fully exploit the new Advanced North Atlantic (ANA-100) 100 gigabit/second transatlantic link between New York City and Amsterdam, provided by CANARIE and its partners Internet2 (USA), NORDUnet (Nordic countries), ESnet (U.S. Department of Energy), and SURFnet (Netherlands).

High-performance networking testbeds at CERN in Geneva, Switzerland, and in Ottawa, Canada, were connected over the ANA-100 transatlantic circuit, and a series of memory-to-memory network transfers of increasing bandwidth was performed. The team reached a maximum sustained network throughput from CERN to Ottawa of 94 Gbps using only two specially configured Linux servers in each location. In addition, an aggregate bidirectional throughput of 127 Gbps was sustained over a period of 11 hours.
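To put the 11-hour bidirectional run in perspective, a quick back-of-the-envelope calculation (assuming decimal units, 1 Gbps = 10^9 bits/s) gives the total volume moved:

```python
# Rough data volume for the 11-hour run at 127 Gbps aggregate
# (illustrative arithmetic; decimal units assumed).
rate_gbps = 127
hours = 11

bits = rate_gbps * 1e9 * hours * 3600
terabytes = bits / 8 / 1e12
print(terabytes)
```

That works out to roughly 629 terabytes of data crossing the Atlantic in a single session.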


ANA-100 traffic while moving data from CERN to Ottawa on May 5, 2014. The rate peaks at a sustained 94 Gbps.

Each server used in the demonstration was configured with two Mellanox 40GE network cards. The transfers were performed using only three pairs of cards and 25 parallel TCP streams in total. All transfers were completed using the high-performance, open-source data transfer tool FDT, developed by Caltech and others.
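A quick sanity check on these numbers (illustrative arithmetic only; it assumes each card pair carries one 40GE link): three pairs of 40GE cards cap the path at 120 Gbps, comfortably above the 94 Gbps achieved, and dividing the sustained rate across the 25 streams gives the average per-stream throughput.

```python
# Sanity-check arithmetic for the transatlantic setup
# (assumes one 40GE link per card pair).
card_pairs = 3
link_gbps = 40
total_gbps = 94          # sustained CERN -> Ottawa throughput
streams = 25

capacity_gbps = card_pairs * link_gbps
per_stream_gbps = total_gbps / streams
print(capacity_gbps)      # 120
print(per_stream_gbps)    # 3.76
```

Each stream therefore averaged under 4 Gbps, a rate that well-tuned TCP can hold steady over a high-latency transatlantic path.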

Optical transport across the ANA-100 link and within Canada is provided by Ciena equipment. The 100G transatlantic demo was enabled by Juniper Networks, who loaned the group a 100G router, and by CERN, who gave the team access to their network and computing facilities. Scalar Decisions supplied the servers used in Ottawa.

Figure 1: ANA-100 CANARIE testbed located in Ottawa, Canada.
