Windows Server 2012 Beta with SMB 3.0 – Demo at Interop shows SMB Direct at 5.8 Gbytes/sec over Mellanox ConnectX-3 network adapters

The Interop conference happened this week in Las Vegas (see http://www.interop.com/lasvegas) and Mellanox showcased their high-speed ConnectX-3 network adapters during the event. They put together an interesting setup with Windows Server 2012 Beta and SMB 3.0 that delivered amazing remote file performance using SMB Direct (SMB over RDMA). The short story? 5.8 Gbytes per second from a single network port. Yes, that’s gigabytes, not gigabits. Roughly one DVD per second. Crazy, right?
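For anyone who wants to sanity-check that headline number, here is a quick back-of-the-envelope calculation (a sketch only, assuming a 4.7 GB single-layer DVD; it is not part of the measured results):

# Quick sanity check of the headline number (assumes a 4.7 GB single-layer DVD)
throughput_gbytes = 5.8                      # Gbytes/sec reported in the demo
throughput_gbits = throughput_gbytes * 8     # bytes to bits on the wire
dvd_size_gbytes = 4.7                        # single-layer DVD capacity (assumption)

print(f"{throughput_gbytes} Gbytes/sec is {throughput_gbits:.1f} Gbits/sec")
print(f"That is about {throughput_gbytes / dvd_size_gbytes:.1f} DVDs per second")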

It’s really not a complicated setup, with a single SMB server and a single SMB client connected over one network port. The unique thing here is the combination of Intel Romley motherboards (each with two 8-core CPUs), the faster PCIe Gen3 bus, four FusionIO ioDrive 2 drives rated at 1.5 Gbytes/sec each, and the latest Mellanox InfiniBand ConnectX-3 network adapters. Here's what the different configurations look like:

[Image: diagram of the test configurations]
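One way to put that number in perspective: the four FusionIO cards set the ceiling for what this box can deliver even locally. Here is a rough sketch of that math, using only the rated figures quoted above (actual local throughput will vary):

# Rough storage ceiling for this configuration, using the rated figures from the post
drives = 4
rated_gbytes_sec_each = 1.5                  # FusionIO ioDrive 2 rated throughput
local_ceiling = drives * rated_gbytes_sec_each

remote_result = 5.8                          # measured SMB Direct throughput
print(f"Aggregate local storage ceiling: {local_ceiling:.1f} Gbytes/sec")
print(f"The remote result is {remote_result / local_ceiling:.0%} of that ceiling")

In other words, the remote SMB client is getting essentially everything the local storage can deliver.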

To better compare the different networking technologies, I worked with Mellanox to gather information on traditional (non-RDMA) 10Gbps Ethernet, QDR InfiniBand (32 Gbps data rate) and FDR InfiniBand (54 Gbps data rate). All of this was done with the same network adapter, just using different cables. In the picture below you can see the back of the server, showing the four FusionIO cards and the ConnectX-3 card with two cables connected (the top connector using a QSFP to SFP+ adapter for the 10GbE SFP+ cable and the bottom one using the InfiniBand FDR cable with a QSFP connector). Both are passive copper cables, but fiber optic versions are also available.

[Photo: back of the server, showing the four FusionIO cards and the ConnectX-3 adapter with both cables connected]
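Before looking at the results, it helps to keep the theoretical wire speeds in mind. Treating the quoted data rates as the usable bit rates (a simplification that ignores protocol overhead), the best case for each link works out as follows:

# Best-case throughput per link, treating the quoted data rates as usable bit rates
links_gbits = {
    "10GbE (non-RDMA)": 10,
    "InfiniBand QDR":   32,
    "InfiniBand FDR":   54,
}
for name, gbits in links_gbits.items():
    print(f"{name}: {gbits} Gbits/sec = {gbits / 8:.2f} Gbytes/sec maximum")

So 10GbE tops out at about 1.25 Gbytes/sec no matter how efficient the stack is, while the 5.8 Gbytes/sec result lands within roughly 15% of FDR's theoretical maximum.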

The results in the table below speak for themselves. The remote throughput is nearly identical to the local throughput for 512KB IOs, at 5,792 Mbytes/sec. The results for smaller 8KB IOs are also impressive, showing over 340,000 IOPS on the remote system. Note that these are 8KB IOs, typically used by real workloads like OLTP systems, not the tiny 512-byte IOs so commonly used to produce large IOPS numbers that don't match common workloads. You also can’t miss how RDMA improves the % Privileged CPU utilization numbers, fulfilling the promise of low CPU utilization and a low number of cycles per byte. The comparison between traditional, non-RDMA 10GbE and InfiniBand FDR for the first workload shows the most impressive contrast: over 5 times the throughput at about half the CPU utilization.

[Table: throughput, IOPS and % Privileged CPU utilization for each configuration and workload]
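To connect the two workloads, here is the simple relationship between IO size, IOPS and throughput (rough numbers, base-2 units as in the table above):

# How IOPS and throughput relate for the two workloads (rough, base-2 units)
small_io_kb = 8
small_iops = 340_000
print(f"{small_iops:,} x {small_io_kb}KB IOs ~ {small_iops * small_io_kb / 1024:,.0f} Mbytes/sec")

large_io_kb = 512
large_throughput_mbytes = 5_792              # from the results table
print(f"{large_throughput_mbytes:,} Mbytes/sec / {large_io_kb}KB ~ "
      f"{large_throughput_mbytes * 1024 / large_io_kb:,.0f} IOPS")

Small IOs are limited by per-IO cost rather than raw bandwidth, which is why the 8KB workload is best read as an IOPS number rather than a Mbytes/sec number.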

Here is some of the output from Performance Monitor in each configuration, for anyone looking for the nasty details (you can click on the pictures to see a larger version).

Configuration \ Workload         | 512KB IOs, 8 threads, 8 outstanding | 8KB IOs, 16 threads, 16 outstanding
Non-RDMA (Ethernet, 10 Gbps)     | [screenshot]                        | [screenshot]
RDMA (InfiniBand QDR, 32 Gbps)   | [screenshot]                        | [screenshot]
RDMA (InfiniBand FDR, 54 Gbps)   | [screenshot]                        | [screenshot]
Local                            | [screenshot]                        | [screenshot]

Note: You’ll find slight differences in the bandwidth and IOPS numbers between the two tables. The first table (with the blue background) is more accurate, since it shows 60-second averages and uses base 2 for the bandwidth (multiples of 1024). The second table (with the Performance Monitor screenshots) shows instantaneous values using base 10 (multiples of 1000).
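If you're curious how far apart the two conventions actually are, here is the conversion for the headline number (just to illustrate the units, nothing more):

# Same measurement, two unit conventions (illustrates the note above)
mbytes_base2 = 5_792                         # first table: multiples of 1024
bytes_per_sec = mbytes_base2 * 1024 * 1024
mbytes_base10 = bytes_per_sec / 1_000_000    # second table: multiples of 1000
print(f"{mbytes_base2:,} Mbytes/sec (base 2) = {mbytes_base10:,.0f} Mbytes/sec (base 10)")

That difference of roughly 5% is entirely in the units, not in the measurement.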

If you want to try this scenario in your own lab, all you need is similarly configured machines and Windows Server 2012 Beta (available as a free download). For a complete list of required hardware for the InfiniBand configuration and step-by-step instructions on how to make this happen, see this blog post on Deploying Windows Server 2012 Beta with SMB Direct (SMB over RDMA) and the Mellanox ConnectX-2/ConnectX-3 using InfiniBand – Step by Step.

I also delivered a short presentation at the conference covering this demo. The presentation is attached to this blog post (see link to the PDF file below).

 

-----

Update on 5/7: Added picture of the server.
Update on 5/11: Presentation attached to this blog post.

Attachment: Interop - WS2012 and SMB Direct.pdf
Comments:

  • Would be interesting to see the performance of the Mellanox InfiniBand ConnectX-3 in 40Gbps Ethernet mode, with and especially without RDMA, in comparison to the benchmarks above.

  • @andreaserson: Unfortunately we did not have time to do 40GbE testing this time. The actual demo system is now running 10GbE and FDR (54Gb InfiniBand). Maybe in the next demo opportunity...

  • It's great demonstrating SMB 3.0 and InfiniBand, but the storage being FusionIO is not so good for a real production system.

  • @ash: We have seen great performance also with other storage configurations, including multiple RAID controllers, each connected to a tray of disks, and Storage Spaces using multiple SAS HBAs, each connected to a JBOD.