Anthony Bartolo
Pierre Roman
Okay, I am asking for a show of hands: How many of you remember 100MB hard drives? 80? 40? While I remember smaller, my first hard drive was a 20 Megabyte Seagate drive. Note that I didn’t say Gigabytes…
Way back then the term "terabyte" may already have been coined, but only as a very theoretical one; in the mid-80s most of us did not even have hard drives – we were happy enough if we had dual floppy drives to run our programs AND store our data. We never thought that we could ever fill a gigabyte of storage, but were happier with hard drives than with floppies because they were less fragile (especially with so many magnets about).
Now of course we are in a much more enlightened age, where most of us need hundreds of gigabytes, if not more. With storage requirements growing exponentially, the 2TB drives that we used to think were beyond the needs of all but the largest companies are now available to consumers, and corporations need to put several of those massive drives into SAN arrays to support their ever-growing database servers.
As our enterprise requirements grow, so must the technologies that we rely on. That is why we were so proud to announce the new VHDX file format, Microsoft's next-generation virtual hard disk file, which has by far the largest capacity of any virtualization technology on the market – a whopping 64 terabytes.
Since Microsoft made this announcement a few months ago, several IT Pros have asked me 'Why on earth would I ever need a single drive to be that big?' A fair question that reminds me of the old quote often attributed to Bill Gates, that none of us would ever need more than 640KB of RAM in our computers. The truth is that big data is becoming the rule, not the exception.
Now let's be clear… most of us likely won't need 64TB on a single volume any time soon. Rather than questioning the new limit, though, let's look at the previous one – 2TB. Over the last couple of years I have come across several companies who did not think they could virtualize their database servers because their databases had grown to 2.2TB.
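As a quick aside on where that familiar 2TB ceiling comes from: in traditional schemes such as MBR partitioning, storage is addressed as 512-byte sectors using a 32-bit sector number, and the arithmetic works out to exactly 2 TiB. The snippet below is just an illustrative back-of-the-envelope calculation, not code from any Microsoft tool:

```python
# Illustrative arithmetic: why "2TB" keeps showing up as a ceiling.
# A 32-bit sector address can reference 2**32 sectors; at the
# traditional 512 bytes per sector, that caps the addressable size.
SECTOR_SIZE_BYTES = 512          # traditional sector size
MAX_SECTORS = 2 ** 32            # largest count a 32-bit address allows

limit_bytes = SECTOR_SIZE_BYTES * MAX_SECTORS
limit_tib = limit_bytes / 2 ** 40    # convert bytes to TiB

print(limit_tib)  # 2.0 -- the familiar 2TB ceiling
```

VHDX sidesteps this class of limit with much larger internal addressing, which is how it reaches 64TB per virtual disk.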
Earlier this week I got an e-mail from a customer asking for help with a virtual-to-physical migration. Given who he had reached out to, it was an obvious cry for help.
'Mitch, we have our database running on a virtual machine, and it is running great, but we are about to outgrow our 2TB limitation on the drive, and we have to migrate onto physical storage. We simply don't have any other choice.'
As a Technical Evangelist my job is to win hearts and minds, as well as to educate people about new technologies (and about new ways to use the existing technologies they have already invested in). So when I read this request I had several alternative solutions for them that would allow them to keep their virtual machine while bursting through that 2TB 'limit'.
With all of these options available to us, the sky truly is the limit for our virtualization environments… Whether you opt for a VHDX file, a Storage Pool, or a software- or hardware-based SAN, Hyper-V on Windows Server 2012 has you covered. And if none of these is quite right for you, then migrating your servers into an Azure VM in the cloud offers yet more options for a dynamic environment, without the capital expenses required for on-premises solutions.
Knowing all of this, there really is no longer any reason to do a V2P migration, although of course there are tools that can do that. There is also no longer a good reason to invest in third-party virtualization platforms that limit your virtual hard disks to 2TB.
Adaptable storage the way you want it… just one more reason to pick Windows Server 2012!
One other use case for the 64TB size: Windows Server Backup now uses the VHDX format, so you can do backups bigger than 2TB in total. A simple example: with 25 VMs at 100GB of storage each on a Hyper-V host, you already need that capability.
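The backup example above is worth checking with quick arithmetic (sizes treated as decimal, so 1 TB = 1000 GB):

```python
# Back-of-the-envelope check of the backup example: 25 VMs at 100GB
# each already exceeds the old 2TB backup ceiling.
vms = 25
gb_per_vm = 100

total_tb = vms * gb_per_vm / 1000    # total storage in TB
print(total_tb)                       # 2.5
print(total_tb > 2)                   # True -- past the old 2TB limit
```

Even this modest host is half a terabyte past the old limit, which is why the larger VHDX-backed backup target matters well before anyone needs a single 64TB data volume.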