Microsoft Enterprise Platforms Support: Windows Server Core Team
What do you suggest using instead of mount points when there is no logical place to divide the data into multiple volumes? We have many clients for which we need to load and process data, but each client's needs are variable, and often unknown at the outset. One client project could make up 90% of the data for all clients, but we don't know that when the project starts, and since the data distribution within that project is also variable, there is no obvious place to segment the data.
One question I've had is whether it is supported to create a spanned volume consisting of multiple VHDs (on a CSV) in a clustered VM, but I've not been able to find anything that says whether it is supported, or what the implications might be if you do. That approach also sounds messy. Maybe the real answer doesn't come until ReFS? 2 TB is just not enough these days.
Very informative article. It gave me a very good insight.
2 TB these days isn't all that much, you're right. Remember the olden days when a 100 GB drive was considered huge? Ah, the good ole days =)
You're also right that mount points aren't always possible to get set up and running in an existing datacenter, or in situations where space consumption can't be quantified until the space starts being used. It's always in everyone's best interest to try to get some sort of scoping or usage metrics, but it's not always possible.
As for specific configurations, I can't really help in depth since I'm not aware of all the possible OEM solutions or configurations, but typically I see people run with volumes presented from a SAN to specific departments or customers, which can then be dynamically grown on the back end as more space is needed. This can still lead to a massive volume that takes a long time for chkdsk to run through, but it's at least better than the alternative.
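For what it's worth, growing the LUN on the SAN side is only half the job; the NTFS volume still has to be extended inside Windows. A minimal diskpart sketch (the volume number is hypothetical; run `list volume` first to find the right one):

```
REM Run inside an interactive diskpart session
rescan                REM pick up the new LUN size from the storage bus
list volume           REM identify the volume to grow
select volume 3       REM hypothetical volume number
extend                REM grow the volume into the newly available space
```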
Using mount points is a general suggestion to help keep customers from carving out a single 2+ TB volume for use just because they can.
Doing that can lead to much gnashing of teeth and lots of downtime when a major file system issue is hit and chkdsk needs to run its full course.
As for the spanned VHD question, I think I know what you're asking. :)
A VM with, say, 4 VHDs (1 OS, 3 data), where you have booted into the OS and created a spanned set from those 3 data VHDs, will be completely fine to store on a CSV volume. The cluster has no clue what's *in* the VHDs, and doesn't care. The spanned volume and dynamic disk information lives within the VHD files themselves and is only valid to the OS that bootstraps the file system in those VHDs.
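To make that concrete, the spanned set is built entirely inside the guest OS, along these lines with diskpart (disk numbers and the drive letter are hypothetical, assuming the three data VHDs show up as disks 1-3 in the guest):

```
REM Run inside the guest OS, not on the Hyper-V host
select disk 1
convert dynamic
select disk 2
convert dynamic
select disk 3
convert dynamic
REM Create a simple volume on the first disk, then extend it
REM onto the other dynamic disks to make it a spanned volume
create volume simple disk=1
extend disk=2
extend disk=3
format fs=ntfs quick
assign letter=E
```

All of that dynamic disk metadata is stored inside the VHDs, which is why the host and the cluster never see it.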
Now, if I'm wrong, and that's not what you were asking, just clarify for me and I'll be glad to respond.
When following best practices for mount points:
• Try to use the root (host) volume exclusively for mount points. The root volume is the volume that hosts the mount points. This practice greatly reduces the time required to restore access to the mounted volumes if you have to run the Chkdsk.exe tool. It also reduces the time required to restore the host volume from backup.
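As a sketch of that practice on 2012, where a small root volume M: holds nothing but mount folders (the path and the disk/partition numbers here are hypothetical):

```powershell
# Create the mount folder on the root (host) volume
New-Item -ItemType Directory -Path 'M:\Data1'

# Surface the data volume under that folder instead of
# giving it its own drive letter
Add-PartitionAccessPath -DiskNumber 2 -PartitionNumber 2 -AccessPath 'M:\Data1'
```

On older builds the same thing can be done with mountvol.exe or diskpart's `assign mount=` option.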
The question I have is: when a root volume is flagged dirty, does chkdsk run only on the root volume, or does it run on all of the mount points under the root volume?
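For what it's worth, chkdsk operates on one volume at a time, and its volume argument accepts a drive letter, a volume name, or a mount-folder path, so the host volume and each mounted volume are checked as separate operations (paths here are hypothetical):

```
REM Check only the root (host) volume
chkdsk M: /f

REM Check one specific mounted volume via its mount folder
chkdsk M:\Data1 /f
```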
Is the information in the article still valid for 2012 R2?
Sorry for the 6 month delay in responding Lionel!
Yes, these steps should work for 2012 - they have worked in my testing anyway.
If you find differently, let me know and I'll see what I can dig up.