Well, dealing with the creative responses of my co-workers after the first video was so much fun -- comments like "The man may excel at computational data storage and analysis, but he needs to learn a thing or two about whiteboard real estate" -- so why stop? The answer is: we already taped the second one, and the marketing team really doesn't understand the concept of sunk costs.
In the new video we spend some time talking about the benefits of a replication-based storage model over backups for protecting your data.
Basically, the issue comes down to a couple of key benefits of any replication-based strategy:
1) Since you are replicating continuously, you don't have a discontinuous, intrusive process that has to run on a regular schedule. More importantly, with replication you only need to copy each piece of content ONCE, while with a backup model you have to copy each piece every time you do a full backup. That means the cost and complexity of backup become unsupportable once mailboxes get very large. Replication copies each mail only once, as it arrives, so the cost and complexity are proportional to delivery rates, NOT to how long each mail is kept. (The sketch after this list puts rough numbers on the difference.)
2) Because the copies are fully up to date at all times and are true, validated replicas, the time it takes to restore (i.e. get a copy up and running) is much shorter in a replication model.
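To put rough numbers on that first point, here is a minimal back-of-the-envelope sketch. All of the figures in it -- a 10 GB mailbox, steady mail delivery, weekly full backups, two replicas -- are assumptions I picked for illustration, not measurements:

```python
GB = 1024 ** 3

mailbox_size = 10 * GB               # mailbox size at the end of the year
daily_delivery = mailbox_size / 365  # assume mail arrives at a steady rate
weeks = 52

# Weekly full backup: every piece of content is re-copied on every pass,
# so cost grows with how much mail you KEEP. The mailbox grows linearly
# from empty to mailbox_size over the year.
backup_bytes = sum(mailbox_size * w / weeks for w in range(1, weeks + 1))

# Continuous replication: each message is shipped to each copy exactly
# once, so cost tracks the DELIVERY rate, not the retention window.
replicas = 2
replication_bytes = daily_delivery * 365 * replicas

print(f"weekly full backups : {backup_bytes / GB:6.1f} GB copied per year")
print(f"replication (x{replicas})    : {replication_bytes / GB:6.1f} GB copied per year")
```

Under these assumptions the weekly fulls copy roughly 265 GB over the year while replication ships about 20 GB, and the gap only widens as retention grows.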
There are more benefits of replication that I haven't covered here: being much more resilient to logical corruption in the storage stack, because the write paths on the primary and the replicas are so different, and the ability to spread the replicas across a continent for a true disaster recovery benefit. But we can go into more detail on those another time.
At this point, I suppose some might be wondering if classic backups are good for anything. Well, some people might very well think that; I couldn't possibly comment. At least right now... But I'd like to hear what you think!
Some good questions were raised in response to my first blog post, which I'll answer here.
First, yes, Exchange 2010 should feel a little snappier than Exchange 2007 on similar hardware, and especially snappier than 2003. But if the hardware was properly sized for the original version, the snappiness is mostly a small side effect of the overall IO efficiency improvements. Between 2007 and 2010 we have seen a 2-4x reduction in the number of IOs necessary to support a given user profile. To get the full benefits of the new version you do need to design your hardware deployment properly, following both our guidance and your storage vendor's. The small sizing sketch below shows what a reduction like that can mean in spindle counts.
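This is only a toy calculation: the per-user IO profile, per-spindle IOPS figure, and user count are all made-up illustrative numbers, not official sizing guidance from us or any storage vendor.

```python
import math

users = 5000
iops_per_user_2007 = 0.32                 # assumed Exchange 2007 profile
io_reduction = 3.0                        # midpoint of the 2-4x range
iops_per_user_2010 = iops_per_user_2007 / io_reduction

disk_iops = 150                           # assumed random IOPS per spindle

for version, per_user in (("2007", iops_per_user_2007),
                          ("2010", iops_per_user_2010)):
    total = users * per_user
    spindles = math.ceil(total / disk_iops)
    print(f"Exchange {version}: {total:7.1f} IOPS -> {spindles:3d} spindles")
```

Same users, same hardware class, roughly a third of the spindles -- which is why proper sizing for the new version matters so much.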
Ok, so how did we do it? Getting such a large change in the performance characteristics of the system required a lot of changes, but the conceptual core of them -- the 'theme', as it were -- was finding ways to take small random IOs and make them bigger and more sequential. Disk drives are so dense that the time it takes the head to read 64kB off the platter, compared to reading 4kB, is small next to the time it takes to move to the right track and then wait for the right sector to rotate under the head (this is especially true if you are using green drives, which use much less power because they rotate more slowly). Since that is the case, if you can combine sixteen 4kB IOs into one big 64kB IO, you get close to a 16x IO improvement; the sketch below walks through that arithmetic. The biggest changes we made were all aimed at getting this win.
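Here is the arithmetic behind that 16x claim as a minimal sketch. The drive parameters are assumptions I picked for illustration, though they are in the right ballpark for a 7200 RPM drive:

```python
seek_ms = 8.0                        # assumed average seek time
rpm = 7200
rotation_ms = (60_000 / rpm) / 2     # average rotational latency: half a turn
transfer_mb_per_s = 100.0            # assumed sustained media transfer rate

def io_time_ms(size_kb: float) -> float:
    """Positioning cost plus the (comparatively tiny) media transfer."""
    transfer_ms = size_kb / 1024 / transfer_mb_per_s * 1000
    return seek_ms + rotation_ms + transfer_ms

sixteen_random_4k = 16 * io_time_ms(4)   # sixteen separate random IOs
one_sequential_64k = io_time_ms(64)      # the same data in a single IO

print(f"16 x 4kB random IOs: {sixteen_random_4k:6.1f} ms")
print(f" 1 x 64kB IO       : {one_sequential_64k:6.1f} ms "
      f"({sixteen_random_4k / one_sequential_64k:.1f}x faster)")
```

The positioning cost (seek plus rotation) dominates either way, so paying it once instead of sixteen times is nearly a 16x win -- and the slower the platter spins, the bigger the win.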
We also picked up a couple of wins by improving our cache efficiency.
This webcast goes into more detail about the Exchange 2010 storage changes -- https://msevents.microsoft.com/CUI/WebCastEventDetails.aspx?EventID=1032418921
Now, what about virtual storage?
I have been accused of being a bit of an anti-virtualization bigot. But the truth is I am a huge fan, and I saw the potential benefits first hand when I spent some time working in Microsoft's IT department. Most companies have many LOB applications that each consume relatively small chunks of storage and CPU, yet in a dedicated model there is a minimum practical amount of storage that can be deployed per application on its own hardware. So our own IT group used to have thousands of applications with storage utilization rates at less than 20%. By creating a central storage utility shared across many applications (the disk drives that the servers connect to are 'virtual'), it is possible to get much higher average utilization rates while still providing room for spikes in load and for growth. The cost savings can be dramatic. The toy model below shows the shape of the win.
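In this sketch the application count, minimum dedicated allocation, per-app usage range, and headroom factor are all invented for illustration; only the <20% utilization outcome comes from the story above.

```python
import random

random.seed(42)

apps = 1000
min_dedicated_gb = 500    # assumed smallest practical dedicated allocation
# Each application actually uses far less than it must be allocated.
usage_gb = [random.uniform(20, 120) for _ in range(apps)]

dedicated_capacity = apps * min_dedicated_gb
dedicated_util = sum(usage_gb) / dedicated_capacity

# Shared pool: size for aggregate demand plus headroom for spikes/growth.
headroom = 1.3
pool_capacity = sum(usage_gb) * headroom
pool_util = sum(usage_gb) / pool_capacity

print(f"dedicated disks: {dedicated_capacity / 1024:6.1f} TB "
      f"at {dedicated_util:.0%} utilization")
print(f"shared pool    : {pool_capacity / 1024:6.1f} TB "
      f"at {pool_util:.0%} utilization")
```

Pooling works here because the per-app spikes don't all happen at once, so the shared utility can run far closer to full while buying a fraction of the raw capacity.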
However, with most Exchange deployments the amount of data involved is so large that getting very good utilization factors isn't usually a problem, and the SANs are often used in a very dedicated model for Exchange anyway. Without big capacity utilization wins, it is difficult to overcome the large per-spindle and per-bit cost overheads associated with these approaches, and the complexity of the systems can be significant.
Typically with a SAN deployment you would design around a RAID-1 configuration for your primary system, some sort of backup to disk using snaps with an offload to tape, and a redundant site if you were concerned about geo-scaling. In the JBOD approach, the model is to map a single drive to a single database; then you choose the number of replicas you want of each database and the number of physical locations you want spread across those copies. When you lose a spindle, its load is transferred to another spindle on another system. In addition to the reduced complexity of not having a shared storage fabric, you get added availability benefits because the full hardware stack is protected by application-level replication. A rough sketch of the placement idea follows.
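To make the JBOD model concrete, here is a minimal placement sketch. This is not Exchange's actual copy-distribution logic; the server names, sites, and round-robin rule are all mine, invented for illustration:

```python
# A set of mailbox servers, two per site; each copy of a database gets
# its own dedicated spindle on whichever server hosts it.
servers = [
    {"name": "srv-a", "site": "east"},
    {"name": "srv-b", "site": "east"},
    {"name": "srv-c", "site": "west"},
    {"name": "srv-d", "site": "west"},
]

def place_copies(db_index: int, copies: int) -> list[dict]:
    """Round-robin placement: consecutive copies of a database land on
    different servers (hence different spindles), and starting each
    database at a different server spreads the load around."""
    return [servers[(db_index + i) % len(servers)] for i in range(copies)]

for db in range(4):
    layout = place_copies(db, copies=3)
    names = ", ".join(s["name"] for s in layout)
    sites = {s["site"] for s in layout}
    print(f"DB{db}: copies on {names} ({len(sites)} sites)")
```

With a layout like this, losing the spindle behind one copy just shifts service to a surviving copy on another server -- which is the availability point above: the whole hardware stack, not just the disk, is protected by the replication.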
This webcast goes into more detail about the Exchange 2010 High Availability changes -- https://msevents.microsoft.com/CUI/WebCastEventDetails.aspx?EventID=1032416677