A blog by Jose Barreto, a member of the File Server team at Microsoft.
All messages posted to this blog are provided "AS IS" with no warranties, and confer no rights.
Information on unreleased products is subject to change without notice.
Dates related to unreleased products are estimates and are subject to change without notice.
The content of this site is personal opinion and might not represent the Microsoft Corporation view.
The information contained in this blog represents my view on the issues discussed as of the date of publication.
You should not consider older, out-of-date posts to reflect my current thoughts and opinions.
© Copyright 2004-2012 by Jose Barreto. All rights reserved.
The Interop conference happened this week in Las Vegas (see http://www.interop.com/lasvegas), and Mellanox showcased their high-speed ConnectX-3 network adapters during the event. They demonstrated an interesting setup with Windows Server 2012 Beta and SMB 3.0 that delivered amazing remote file performance using SMB Direct (SMB over RDMA). The short story? 5.8 Gbytes per second from a single network port. Yes, that’s gigabytes, not gigabits. Roughly one DVD per second. Crazy, right?
It’s really not a complicated setup: a single SMB server and a single SMB client connected over one network port. The unique thing here is the combination of Intel Romley motherboards (each with two 8-core CPUs), the faster PCIe Gen3 bus, four FusionIO ioDrive 2 drives rated at 1.5 Gbytes/sec each, and the latest Mellanox InfiniBand ConnectX-3 network adapters. Here's what the different configurations look like:
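As a back-of-the-envelope sanity check (a sketch using only the figures quoted in this post, not full vendor specs), the four local drives cap throughput at about 6 Gbytes/sec and the FDR link at about 6.75 Gbytes/sec, so the 5.8 Gbytes/sec result is running very close to the storage ceiling:

```python
# Rough bottleneck check for this demo setup. All figures come from the
# post itself: four FusionIO ioDrive 2 cards rated at 1.5 GB/s each, and
# an FDR InfiniBand link with a 54 Gbps data rate.

fusionio_cards = 4
per_card_gbytes = 1.5                               # GB/s per ioDrive 2 (rated)
storage_ceiling = fusionio_cards * per_card_gbytes  # 6.0 GB/s aggregate

fdr_data_rate_gbits = 54                            # FDR data rate, gigabits/s
network_ceiling = fdr_data_rate_gbits / 8           # 6.75 GB/s

measured = 5.8                                      # GB/s observed over SMB Direct

print(f"storage ceiling: {storage_ceiling} GB/s")
print(f"network ceiling: {network_ceiling} GB/s")
print(f"measured:        {measured} GB/s "
      f"({measured / storage_ceiling:.0%} of storage ceiling)")
```

In other words, the network is no longer the bottleneck here; the remote result is within a few percent of what the local drives can deliver at all.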
To better compare the different networking technologies, I worked with Mellanox to gather information on traditional (non-RDMA) 10Gbps Ethernet, QDR InfiniBand (32 Gbps data rate) and FDR InfiniBand (54 Gbps data rate). All done with the same network adapter, just using different cables. You can see in the picture below the back of the server showing the four FusionIO cards and the ConnectX-3 card with two cables connected (the top connector with a QSFP to SFP+ adapter for the 10GbE SFP+ cable and the bottom one using the InfiniBand FDR cable with a QSFP connector). Both are passive copper cables, but fiber optic versions are also available.
The results in the table below speak for themselves. The remote throughput is nearly identical to the local throughput for 512KB IOs, at 5,792 Mbytes/sec. The results for smaller 8KB IOs are also impressive, showing over 340,000 IOPS on the remote system. Note that these are 8KB IOs, typically used by real workloads like OLTP systems; they are not the tiny 512-byte IOs so commonly used to produce large IOPS numbers but that do not match common workloads. You also can’t miss how RDMA improves the numbers for % Privileged CPU utilization, fulfilling the promise of low CPU utilization and a low number of cycles per byte. The comparison between traditional, non-RDMA 10GbE and InfiniBand FDR for the first workload shows the most impressive contrast: over 5 times the throughput at about half the CPU utilization.
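To put the small-IO figure in perspective, a quick conversion (using just the numbers quoted above) shows that 340,000 8KB IOs per second is itself roughly 2.6 Gbytes/sec of payload:

```python
# Sanity-check that the 8KB IOPS number and the large-IO throughput
# describe the same class of hardware. Figures taken from the text above.

io_size_bytes = 8 * 1024            # 8KB IOs, typical of OLTP workloads
iops = 340_000                      # remote IOPS reported for 8KB IOs

throughput_bytes = iops * io_size_bytes
throughput_mbytes = throughput_bytes / (1024 ** 2)   # base-2 Mbytes/sec

print(f"{iops:,} x 8KB IOs = {throughput_mbytes:,.0f} Mbytes/sec")
```

That is less than the 512KB-IO throughput, as expected: with small IOs the per-operation overhead, not raw bandwidth, becomes the limiting factor.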
Here is some of the Performance Monitor output for each configuration, for anyone looking for the nitty-gritty details (you can click on the pictures to see a larger version).
Note: You’ll find slight differences in the bandwidth and IOPS numbers between the two tables. The first table (with the blue background) is more accurate, since it shows a 60-second average and it uses base 2 for the bandwidth (multiples of 1024). The second table (with the performance monitor screenshots) shows instant values with base 10 (multiples of 1000).
If you want to try this scenario in your own lab, all you need is similarly configured machines and Windows Server 2012 Beta (available as a free download). For a complete list of required hardware for the InfiniBand configuration and step-by-step instructions on how to make this happen, see this blog post on Deploying Windows Server 2012 Beta with SMB Direct (SMB over RDMA) and the Mellanox ConnectX-2/ConnectX-3 using InfiniBand – Step by Step.
I also delivered a short presentation at the conference covering this demo. The presentation is attached to this blog post (see link to the PDF file below).
Update on 5/7: Added picture of the server.
Update on 5/11: Presentation attached to this blog post.
It would be interesting to see the performance of the Mellanox InfiniBand ConnectX-3 in 40Gbps Ethernet mode, with and especially without RDMA, in comparison to the benchmarks above.
Unfortunately we did not have time to do 40GbE testing this time.
The actual demo system is now running 10GbE and FDR InfiniBand (54 Gbps).
Maybe in the next demo opportunity...
It's great to see SMB 3.0 and InfiniBand demonstrated, but the storage being Fusion-io is not so good for a real production system.
We have seen great performance also with other storage configurations, including multiple RAID controllers, each connected to a tray of disks, and Storage Spaces using multiple SAS HBAs, each connected to a JBOD.