Contoso Labs-Storage Purchasing (JBODs)


Contoso Labs Series - Table of Contents

Today we're looping back to our first decision point: Storage. We already made our commitment to Storage Spaces on JBODs, now we had to find the right parts and vendors to fit our needs and build a good solution.

JBODs

Our search had to start with the JBODs themselves. We wanted something that was tested and certified to work with Spaces, since we are treating this as a production service as much as possible. That meant that while we could have pieced something together with unnamed white-box equipment, support would be on our heads. While we have access to the developers and engineering groups who create Spaces, that is not the purpose of our exercise. We're trying to view the world through customers' eyes, and that means doing things the way a customer would. After discussing it with other groups inside Microsoft, we had our vendor: DataON.

DataON has been an early and enthusiastic supporter of Storage Spaces, and has multiple JBOD enclosures certified for use with Spaces, covering an array of options. We seriously considered all three and had to do some real math and pro/con evaluation to pick the right one.

DNS-1660

The DNS-1660 is an awesome bit of equipment: 60 individual 3.5" drive bays and dual controllers with 4 SAS ports each, all in a compact 4U package. Amazing or not, it had a few drawbacks for us. Foremost was that it was overkill. One enclosure with a mix of SSDs and HDDs could handle the storage needs of one of our SOFS clusters. Sounds great, but it leaves us vulnerable to an enclosure failure; you need 3 enclosures to ensure resiliency in the case of an entire JBOD failure. That would have been far too much storage, or a lot of needlessly empty disk slots. Another important factor we considered was that the top-loading drive bays make servicing in the datacenter difficult. It's too easy to cause an outage by pulling the entire JBOD out of the rack each time you need to swap a failed drive… and when you have 60 of them, drives fail a LOT. That's just reality.

DNS-1640

The DNS-1640 is a 2U unit designed for 24 individual 2.5" drives. This would have made for a tidy storage stack with our requisite 3-enclosure layout, but presented a different issue. The highest-capacity 2.5" SAS drive available right now is 1.2TB. In combination with the 6-8 SSDs we felt we needed for IOPS, this left our storage design dangerously low on overall capacity. While there's no plan to allow users to host large amounts of data, there needs to be some reasonable room for growth. We felt this was cutting it too thin; we needed more capacity.
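The capacity squeeze is simple arithmetic. A quick sketch, using only the numbers in this paragraph (24 bays, 1.2TB spinners, 6-8 bays reserved for SSDs), shows the raw HDD capacity left per enclosure:

```python
# Raw HDD capacity per DNS-1640 enclosure once SSD bays are set aside.
# Figures from the post: 24 x 2.5" bays, 1.2 TB largest 2.5" SAS drive,
# 6-8 SSDs wanted for IOPS.
BAYS = 24
HDD_TB = 1.2

for ssds in (6, 8):
    hdds = BAYS - ssds
    raw_tb = hdds * HDD_TB
    print(f"{ssds} SSDs -> {hdds} HDDs -> {raw_tb:.1f} TB raw per enclosure")
```

Roughly 19-22TB of raw spinning capacity per enclosure, before any mirroring overhead, which is why this felt thin next to a 3.5" option.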

DNS-1600

The DNS-1600 is a 4U unit designed for 24 individual 3.5" drives. This ended up being the best choice for our purposes. It allows the use of generous 4TB 3.5" nearline SAS drives for raw capacity, along with a good-sized pool of SSDs for IOPS. The front-load design of the drive bays eliminates our serviceability concerns. The main drawback of this choice is that 3 enclosures mean 12U of rack space consumed by "only" 72 drives, where the other two JBODs could get far more raw spindles into the same rack space. That isn't something to scoff at, since we have limited rack space to work with, but after weighing all the options, this was the best place to make sacrifices.
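The rack-density tradeoff can be tallied straight from the bay counts and form factors above, assuming the same 3-enclosure resiliency layout for each option:

```python
# Rack-density comparison for the three DataON options, using the bay counts
# and rack units from the post, in the 3-enclosure layout needed to survive
# a whole-JBOD failure.
enclosures = {
    # name: (drive bays, rack units per enclosure)
    "DNS-1660": (60, 4),
    "DNS-1640": (24, 2),
    "DNS-1600": (24, 4),
}

N = 3  # enclosures per SOFS cluster for JBOD-level resiliency
for name, (bays, ru) in enclosures.items():
    print(f"{name}: {N * bays} drives in {N * ru}U")
```

The DNS-1600 layout lands at 72 drives in 12U, versus 180 drives in the same 12U for the DNS-1660, which is the density sacrifice the paragraph above accepts.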

Since the time these choices were made, Dell has announced support for Storage Spaces in some of its storage products. This is good news for customers who use Dell storage and server products, and for the Spaces ecosystem in general, but it came too late to play a part in our deliberations.

Next up: We'll discuss the servers that these JBODs will be connected to.

Comments
  • Great series, thank you for sharing.
    Is there a link with all posts of this series? Would be nice to have these link in each post to easily jump back and forth.

  • Hi Eole210,

    I just added a link at the top of this article to the Table of Contents post. That post will be updated every time a new one of these goes out. You can also always use this URL: http://aka.ms/CLabs

  • Thank you Carmen!

  • I am watching these posts with interest because we will be designing a greenfield HyperV cloud in the near future.

    What is the breakdown of SSD vs HDD in the JBOD?

  • @Tristan

    Each JBOD contains six 400GB SSDs, and eighteen 4TB HDDs. There will be posts in the near future that specifically cover the design of the Storage Spaces, VDisks, CSVs, and Shares.

  • Only 6 SSDs per JBOD cabinet? It'll be interesting to see how you're designing your storage spaces. :)

  • @Martin Yes, we had some long discussions and some IO testing to nail down what we're doing. We'll cover that, probably in a handful of posts. Short version: 1 pool, 6 VDisks, 3-way mirrors, 6 columns. It's holding to our expectations pretty well. :)
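Putting the numbers from this thread together (three JBODs, each with six 400GB SSDs and eighteen 4TB HDDs, 3-way mirroring), a back-of-the-envelope sketch of raw and usable capacity looks like this. It ignores pool metadata, write-back cache reservations, and tiering behavior, so real usable space will be somewhat lower:

```python
# Back-of-the-envelope capacity math for the layout described in the comments:
# 3 JBODs, each with 18 x 4 TB HDDs and 6 x 400 GB SSDs, 3-way mirror.
# A 3-way mirror keeps three copies of every slab, so usable space is roughly
# one third of raw (ignoring metadata and write-back cache overhead).
JBODS = 3
HDDS_PER_JBOD, HDD_TB = 18, 4.0
SSDS_PER_JBOD, SSD_TB = 6, 0.4
MIRROR_COPIES = 3

raw_hdd_tb = JBODS * HDDS_PER_JBOD * HDD_TB   # 216 TB raw HDD
raw_ssd_tb = JBODS * SSDS_PER_JBOD * SSD_TB   # ~7.2 TB raw SSD
usable_hdd_tb = raw_hdd_tb / MIRROR_COPIES    # ~72 TB usable

print(f"Raw HDD: {raw_hdd_tb:.0f} TB, raw SSD: {raw_ssd_tb:.1f} TB")
print(f"Usable HDD (3-way mirror): {usable_hdd_tb:.0f} TB")
```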

  • Hi Carmen

    Do you know when you will be publishing information regarding your Storage Spaces configuration and the performance you are seeing?

    I am currently designing a Storage Spaces solution for a cloud provider, and I am looking at getting 4 x Dell PowerVault MD3060e JBODs, each with 8 x 400GB SSDs and 52 x 4TB nearline SAS drives. I am wondering what kind of performance I could expect from this setup.

    Will you also publish information regarding Write Back Cache Sizes, Column size etc?

    Thanks

    John.