Hi Folks –
I’m often asked how to deploy inexpensive, reliable, cluster-connected storage using Windows Storage Server 2012—without the cost of expensive RAID adapters, external RAID arrays, and SAN switch fabric. Fortunately, the answer is an easy one. Other than a couple of servers running Windows Storage Server 2012 Standard, all you need are SAS host bus adapters and a certified JBOD enclosure (just a bunch of disks). Everything else you need is built into the OS—namely, Storage Spaces and Failover Clustering, which you can use to implement highly available, clustered storage.
How it works:
With Storage Spaces, if you specify that you want mirrored copies of the data and plenty of disks are available, Windows Storage Server will utilize the disks and spread the copies around intelligently. If a drive goes out, Storage Spaces will automatically begin creating a new copy using other available disks to ensure you still have the desired number of copies. When you replace the disk, Storage Spaces will add it to the pool and begin using it again.
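As a concrete sketch of what that looks like in Windows PowerShell (the pool and space names here are illustrative, and the storage subsystem's friendly name varies by OS version), you can pool all available disks and carve out a two-way mirrored space like this:

```powershell
# Gather every disk that is eligible for pooling (not yet in a pool).
$disks = Get-PhysicalDisk -CanPool $true

# Create a pool across those disks. The subsystem name below is a
# wildcard guess; check Get-StorageSubSystem on your own server.
New-StoragePool -FriendlyName "Pool1" `
    -StorageSubSystemFriendlyName "*Storage Spaces*" `
    -PhysicalDisks $disks

# Two-way mirror: Storage Spaces keeps two copies of the data spread
# across disks, and rebuilds onto remaining disks if one fails.
New-VirtualDisk -StoragePoolFriendlyName "Pool1" `
    -FriendlyName "MirrorSpace1" `
    -ResiliencySettingName Mirror `
    -NumberOfDataCopies 2 `
    -UseMaximumSize
```

When a failed disk is replaced, adding it back with Add-PhysicalDisk returns it to the pool, as described above.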
Storage Spaces works well in a standalone system; however, you can also use it together with Failover Clustering, Cluster Shared Volumes (CSV), and one or more JBOD enclosures to create a Scale-Out File Server (SOFS). Here is a deployment view of application servers connecting to a highly available storage cluster.
To implement clustered Storage Spaces, you’ll need some hardware:
After you have all the parts, you can find instructions on how to configure a clustered Storage Space using Windows Server 2012 here. These instructions also show how to configure the storage using Windows PowerShell scripts. A great overview of storage pools and Storage Spaces can be found here, and this FAQ highlights the new features found in Windows Storage Server 2012 R2.
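The linked walkthroughs boil down to a handful of cmdlets. As a rough sketch, assuming two nodes already cabled to the same certified JBOD (the node, cluster, role, and share names below are all placeholders, as is the virtual disk's cluster resource name):

```powershell
# Validate and form the cluster from the two storage nodes.
New-Cluster -Name "StorCluster" -Node Node1, Node2

# A pool built on shared SAS disks can be clustered. After creating a
# mirrored virtual disk and volume on it, promote the disk to a CSV.
# The resource name follows the "Cluster Virtual Disk (...)" pattern;
# check Get-ClusterResource for the actual name on your cluster.
Add-ClusterSharedVolume -Name "Cluster Virtual Disk (MirrorSpace1)"

# Add the Scale-Out File Server role on top of the cluster.
Add-ClusterScaleOutFileServerRole -Name "SOFS1"

# Publish a continuously available share for the application servers.
New-SmbShare -Name "AppData" -Path "C:\ClusterStorage\Volume1" `
    -ContinuouslyAvailable $true -FullAccess "DOMAIN\AppServers"
```

This is only an outline of the sequence; the linked instructions cover the validation and quorum steps a production deployment needs.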
If you’re planning to deploy a clustered Storage Space, I can’t stress enough how important it is to use one of the Windows Server 2012-certified JBOD enclosures listed here, in the Windows Server Catalog. The reason you’ll want to use a certified enclosure is to ensure that:
As of the posting of this blog article, the list of certified enclosures on the market included:
DataON Storage: DNS-1640 2U 24-Bay 2.5" 6Gb/s SAS JBOD and DNS-1660 4U 60-Bay 3.5" 6Gb/s SAS JBOD
Fujitsu: PRIMERGY SX980 S2
Super Micro Computer: SuperChassis 847E26-RJBOD1
Of course, new vendors and models are being added to the catalog as they become certified, so if you’re reading this post after its publication date, don’t assume that the above vendors and models are your only options. Check the Windows Server Catalog for the most recent list of products.
These other resources may also be useful:
The speed and horsepower of modern CPUs, coupled with the continual drop in memory and hard disk drive costs, have made it possible to integrate storage into an industry-standard server without offloading the storage calculations to a dedicated processor that increases total cost.
When used together, Storage Spaces, Failover Clustering, and external JBODs are a great way to get reliable storage that is easy to manage and doesn’t require a mortgage to afford. Inexpensive, reliable storage built using Windows Storage Server is really a great recipe for your data storage needs!
Cheers,
Scott M. Johnson
Senior Program Manager, Windows Storage Server
What about the SAS Host Bus Adapter (HBA)? Does it have to be the LSI Syncro CS to enable load balancing? Would any certified HBA do if you're only looking for Active/Passive?
Any normal SAS HBA should work. LSI's standard SAS HBA adapters are the most popular ones.
I love the idea of JBOD + Storage Spaces + SOFS, but last time I checked, performance was terrible, especially for parity spaces. That disqualifies it when comparing to hardware RAID.
ArekD - There's a write-back cache for SSD. Not entirely sure how this works compared to other vendors, but usually you'd mirror the SSD tier to minimise any write penalties while remaining resilient, and this layer becomes your working set. The lower-level RAID 5 etc. may be slow, but it's masked by the SSD up front. You may need a considerable amount of SSD to maintain amazing performance, mind. Also, don't forget about the RAM cache as well, though I suspect this will be write-through. So RAM = read cache, SSD = write cache, and data is slowly dripped to spinning rust.
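For reference, the tiering and write-back cache behaviour described above maps onto Storage Spaces in Windows Server 2012 R2 roughly like this (pool and tier names, and the sizes, are illustrative):

```powershell
# Define SSD and HDD tiers within an existing pool that contains
# both media types.
$ssd = New-StorageTier -StoragePoolFriendlyName "Pool1" `
    -FriendlyName "SSDTier" -MediaType SSD
$hdd = New-StorageTier -StoragePoolFriendlyName "Pool1" `
    -FriendlyName "HDDTier" -MediaType HDD

# -WriteCacheSize carves out a mirrored SSD write-back cache in front
# of the space, absorbing write bursts before they reach the slower
# spinning tier; hot data is also promoted to the SSD tier over time.
New-VirtualDisk -StoragePoolFriendlyName "Pool1" `
    -FriendlyName "TieredSpace1" `
    -ResiliencySettingName Mirror `
    -StorageTiers $ssd, $hdd `
    -StorageTierSizes 200GB, 2TB `
    -WriteCacheSize 5GB
```

Note this applies to the R2 release; the original Windows Server 2012 does not have storage tiers.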
Do you know how to create the cluster with two JBODs and two cluster nodes? Any ideas?
Regarding the second image in your blog post as an example of a 4-node cluster connecting to 4 JBOD enclosures. I've seen variations of this image in both Microsoft and non-Microsoft presentations, including a couple at TechEd. However, while adding 4 HBAs to a server is easy enough, I haven't found a storage spaces enclosure that has dual controllers *and* accepts four connections per controller as pictured. Seems rather misleading, no?
Anonymous, regarding the 4-node cluster comment being misleading. You only had to pay a little attention to realize that on the 4-node example they are using the 60-bay DataON enclosures. Now click on the link provided for that DataON enclosure shortly below that and you will clearly see that those dual controllers have 4 SAS ports on them each.
What is the recommended hardware spec for the cluster nodes? Do we need to maximize memory and processing?
I would also be interested in the recommended server CPU and RAM specs.
You could significantly reduce cost and hardware count by dropping SOFS and just using Storage Spaces. What does SOFS add to this scenario?
Also interested in recommended CPU and RAM specs.
I have to agree with Ken here that SOFS doesn't seem to be doing anything and certainly is NOT what I would consider a SOFS if everyone has to connect to the SAME JBOD.
Overall, based on this post and how it explains WSS, I am very disappointed.
I would have much rather seen a true SOFS based on a HDFS type architecture or how VMware has done VSAN.
A high-speed 10+ Gbps interconnect between nodes, with each node using its own storage but clustering the data by writing it across multiple nodes. This would be a true SOFS, as with a DL380 or R720 with a SAS HBA instead of a RAID controller you can just keep scaling until you run out of money or 10Gb ports.
I have to design a large solution, but I currently don't have sufficient equipment to test it first. Depending on your kind comments, we might set up a test lab. The requirement is not a large IOPS number, but one single share as big as possible:
A 2-node Storage Spaces cluster with 4 JBODs, each with 80 × 6 TB disks (mirror spaces), compliant with:
•Up to 80 physical disks in a clustered storage pool (to allow time for the pool to fail over to other nodes)
•Up to four storage pools per cluster
•Up to 480 TB of capacity in a single storage pool
That gives us roughly 960TB formatted space in one CSV?
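One way to sanity-check the arithmetic in that question, assuming four pools of 80 disks each (the per-pool limit quoted above) and two-way mirroring:

```powershell
# 4 pools x 80 disks x 6 TB per disk = raw capacity across the cluster
$raw    = 4 * 80 * 6      # 1920 TB raw
# A two-way mirror stores every slab twice, halving usable capacity
$usable = $raw / 2        # 960 TB usable, before filesystem overhead
```

So the 960 TB figure is consistent with the stated limits, before accounting for NTFS/volume overhead.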
Next would be to set up a 2- or 4-node SOFS. And here follows the most crucial question:
Is it possible to extend one single CSV over two Storage Spaces clusters? That way we would get a roughly 1.92 PB volume. Can we add even more Storage Spaces clusters? If yes, what would be the limit?
Thank you very much for your help.
Great article. Can anyone recommend a JBOD that accepts SATA and SSD drives, where both servers in the cluster see the same disks? I need something cheaper than 1500.