• Q&A With Fabian Uhse, Program Manager for Work Folders in Windows Server 2012 R2

    Hi Folks –

    A few months back I wrote a blog article on Work Folders, one of the new “hero” features in Windows Server 2012 R2 and Windows Storage Server 2012 R2. In this post, I’ll share the perspective of Fabian Uhse, a Program Manager for Work Folders at Microsoft.

    Fabian Uhse

    Q: Why did Microsoft develop Work Folders? What customer pains did you set out to solve?

    Put simply, we set out to make it easier for people to access their files. When we looked at all the ways that people access data on a file server, we realized that more and more people are transitioning to a sync-based approach, which offers benefits such as offline access. We already had a sync-based approach for accessing data on SharePoint Server, and wanted to provide modern, sync-based access to file server data as well.
     

    Q: What exactly is Work Folders, and how does the feature work?

    Work Folders gives users access to files on a company file server while allowing organizations to maintain control over that data. Here’s an overview:

    • It works by syncing the data between end-user devices and the file server, which means that IT admins can use all the same familiar file server management tools to manage and secure that data. 
       
    • From a technical perspective, there are two parts to Work Folders: on the server, it’s a sub-role under File and Storage Services that can be enabled on any file server running Windows Server 2012 R2 or Windows Storage Server 2012 R2. 
       
    • On user devices, the Work Folders “client” connects to the server and synchronizes files in the background, without the user needing to do anything else. All the user needs to think about is his or her local files.
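    The background synchronization described above can be pictured as a last-writer-wins reconciliation between the device and the server. Here is a minimal Python sketch of that idea—the dictionary layout and timestamp-based merge are my own simplification, not the actual Work Folders sync protocol, which also handles change batching, conflict resolution, and partial transfers:

```python
def sync(local: dict, server: dict) -> None:
    """Two-way sync of {path: (mtime, contents)} maps: the newer copy wins."""
    for path in set(local) | set(server):
        l, s = local.get(path), server.get(path)
        if l is None:            # file exists only on the server -> download
            local[path] = s
        elif s is None:          # file exists only on the device -> upload
            server[path] = l
        elif l[0] > s[0]:        # device copy is newer -> upload
            server[path] = l
        elif s[0] > l[0]:        # server copy is newer -> download
            local[path] = s

local = {"report.docx": (100, "v1")}
server = {"report.docx": (200, "v2"), "budget.xlsx": (150, "v1")}
sync(local, server)
# both sides now hold the newest copy of every file
```

    After a pass like this, the user only ever works with the local files; the client repeats the reconciliation in the background.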
       

    Q: How does Work Folders compare to other Microsoft sync technologies?

    Here’s how I suggest people look at the different sync options we provide:

    • First of all, there’s OneDrive—formerly called SkyDrive. OneDrive is a consumer solution, so it’s really not appropriate for business data. 
    • Second, there’s OneDrive for Business—formerly called SkyDrive Pro. You can think of OneDrive for Business as an almost-parallel technology to Work Folders: it offers a rich collaboration solution for SharePoint-based data, so if you’re looking for a file sync solution for data that’s already stored on SharePoint, OneDrive for Business is all you need. The two can exist side by side; Work Folders is a feature of Windows Server 2012 R2 that syncs data sitting on a file server, whereas OneDrive for Business syncs data stored on SharePoint—whether deployed on-premises or as part of Office 365. By offering Work Folders in parallel with OneDrive for Business, we’re bringing new and improved sync capabilities to organizations that have opted for file servers over SharePoint.
       
    • Third, there’s Offline Files. You can think of Offline Files and Folder Redirection as predecessors to Work Folders. Both of them use the SMB protocol to provide users with access to their data and require a VPN connection to the corporate network. In comparison, Work Folders has a new and more efficient sync protocol and does not require a VPN connection.
       

    Q: How does Work Folders work with Offline Files? Why would someone use both?

    We don’t recommend deploying Work Folders and Offline Files at the same time on the same device because it presents a user education problem and potential for lost work. Here’s why:

    • On a device where both are deployed and Offline Files and Folder Redirection are redirecting “My Documents”, the user will see his or her data in “My Documents” as expected.
       
    • If the admin chose to enable Work Folders on this same data set, the user may also see the contents of My Documents in the Work Folders folder. As administrators, we know that Offline Files with Folder Redirection and Work Folders are both pointing to the same data set. 
       
    • However, the user might think one copy is redundant and delete it from the device to free up space. Because that change on the local device would sync with the server, the data would be deleted from the other location on the user’s device as well.
       

    That said, Work Folders and Offline Files can work together in parallel to share the same files—they’re just two different sharing mechanisms, with different protocols. So if you’re looking to provide data access for devices that can’t take advantage of Work Folders, you can grant them access via SMB—so that those devices can continue to use Offline Files.
     

    Q: Which platforms and/or devices does the Work Folders client support?

    Support for Work Folders is an integrated component of Windows 8.1, including Windows RT. We’re currently working on releasing a download for Windows 7, which will bring this functionality to the large number of devices still running Windows 7 in existing environments. Stay tuned for more information on client platform support as this work continues.
     

    Q: What opportunity does Work Folders present for Microsoft partners who help companies deploy Windows Server 2012 R2?

    Work Folders presents an excellent opportunity for Microsoft partners to promote the adoption and use of Windows Server 2012 R2. Here’s why:

    • One of the biggest pain-points for IT departments is how to address the BYOD (“Bring Your Own Device”) movement that is happening in companies around the globe. 
       
    • With Work Folders, we help solve this problem via sync and appropriate clients for user devices.
       
    • Our vision is to support all the devices people might want to use for accessing and working with corporate data, while at the same time giving IT a set of controls ensuring compliance with corporate data governance policies.
       

    Q: How large is the opportunity for adoption of Work Folders?

    It’s important that Microsoft—and its partners—do not underestimate just how many organizations continue to rely on file servers for information-worker files:

    • We see exabytes of new storage being deployed on file servers every year. 
       
    • Most organizations have also customized their file server environments to meet their exact needs, utilizing the vast ecosystem of file server tools and utilities that exists today. 
       
    • Work Folders integrates seamlessly into this environment and protects all the investments in it that companies have made, while at the same time allowing IT to continue to use Windows-based file servers for the very same reasons they chose them in the first place—including dependability, cost efficiency, and scalability, to mention just a few.
       

    Q: What’s involved in deploying Work Folders? Is it more complex with some customers than with others?

    There are a few steps involved in deploying Work Folders; none of them is especially difficult, and most organizations already have the necessary infrastructure. Here’s what I recommend:

    • Start with an evaluation setup: A server running Windows Server 2012 R2 with Work Folders configured and two clients running Windows 8.1 as physical or virtual machines.
       
    • After these VMs or physical machines have been created, the part that takes the most time is already over—setting up Work Folders and a successful sync partnership should only take a matter of minutes.
       

    We have a great deployment guide that helps you set up the evaluation and introduces how to set up Work Folders in a production environment. Some organizations’ IT environments are more complex than others and thus require a bit more planning; however, we have great documentation and a vast partner and consulting network to help even companies with complex IT infrastructures quickly and efficiently deploy an optimal Work Folders solution.
     

    Closing Thoughts and Additional Resources

    My thanks to Fabian for taking the time to share his perspective on Work Folders. If you’d like to learn more about this cool new feature, here are some additional resources:


    Cheers,
    Scott M. Johnson
    Senior Program Manager
    Windows Storage Server
    @supersquatchy

  • Dell PowerVault NX Now Available with Windows Storage Server 2012 R2!

    Hi Folks –

    I’m pleased to announce that Dell is now offering Windows Storage Server 2012 R2 on its PowerVault NX Windows NAS Series. PowerVault NX Windows NAS Series appliances come in three base models—all based on 12th generation Dell PowerEdge hardware.

     

    1) PowerVault NX3200 – a 2U NAS appliance based on Dell’s PowerEdge R720xd rack server. With up to two Intel Xeon processors and up to 48 terabytes of raw disk space, the NX3200 is an ideal storage solution for small and midsized companies.

    Key specs on the NX3200 include:

    • Dual-socket, 2U rack mount NAS server
    • Intel Xeon processor E5-2600 product family
    • Up to 32GB memory
    • SATA, NL-SAS, or SAS drive options
    • Up to 12 hot-swap 3.5” drives for data; two 2.5” drives for OS
    • Up to 48TB raw capacity when using 4TB NL-SAS drives

     

    2) PowerVault NX3300 – a 1U gateway appliance based on Dell’s PowerEdge R620 rack server. Two NX3300s are typically deployed as a clustered frontend to a Dell Compellent, EqualLogic or PowerVault MD3 primary storage array—ideal for large companies that require high capacity and availability.


    Key specs on the NX3300 include:

    • Dual-socket, 1U rack mount NAS server and gateway
    • Intel Xeon processor E5-2600 product family
    • Up to 32GB memory
    • Two or four hot-swap 2.5” NL-SAS or SAS drives for the operating system (depending on the level of redundancy desired)

    3) PowerVault NX400 – a 1U NAS appliance based on Dell’s PowerEdge R320 server platform. It pairs an Intel Xeon E5-2400 series processor with up to 16 terabytes of raw disk capacity, making it a great entry-level NAS solution as well as an ideal one for remote offices and branch offices.


    Key specs on the NX400 include:

    • Single-socket, 1U rack mount NAS server
    • Intel Xeon processor E5-2400 product family
    • Up to 16GB memory
    • Up to four hot-swap 3.5” NL-SAS or SAS drives for data and OS (OS partition is 120GB)
    • Up to 16TB raw capacity

    The NX3200 and NX3300 ship with Windows Storage Server 2012 R2 Standard preinstalled, and the NX400 comes with either Windows Storage Server 2012 R2 Standard or Workgroup.


    Easily Expandable, Highly Functional and Versatile

    All three PowerVault NX Windows NAS Series models can be easily expanded. Options include:

    • Dell PowerVault RBOD and JBOD expansion arrays (for the NX400 and NX3200)
    • Dell Compellent, EqualLogic, and PowerVault MD3 expansion arrays (for the NX3300)

    To help you understand how its PowerVault NX models can be deployed and expanded, Dell created this PowerVault NX Windows NAS Series Configuration Guide. It provides recommended deployment models and accompanying diagrams for small and midsized businesses, large businesses, and remote/branch office scenarios. The guide also provides a list of recommended solution components for each deployment scenario—including client-side switches, SAN switches, host bus adapters (HBAs), and external expansion arrays. Finally, it includes full specifications on each model.

      


     

    Here’s a deployment diagram from the NX400 guide for a remote office/branch office scenario.


    Note how the solution employs several built-in features in Windows Storage Server 2012 R2 Standard to deliver a highly functional, robust, and efficient solution.


    Additional Resources

    Of course, there are many more great new features and capabilities in Windows Storage Server 2012 R2 than the ones above. If you’re interested in learning more about all the new storage-related innovations in Windows Storage Server 2012 R2 (and by extension, in Windows Server 2012 R2), here are some additional resources.

    Finally, you may also want to check out this blog post by Oliver Kaven, Senior Product Marketing Manager for Dell's Enterprise Storage Group. In it, he discusses several of what he considers to be the “most notable” new and improved features in Windows Storage Server 2012 R2.


    Closing Thoughts

    Dell remains a leader in bringing all the new storage innovations from Microsoft to its customer base. I urge you to check out the PowerVault NX Windows NAS Series, which, as Dell puts it, “combines the simplicity of Windows Server together with the power of Dell hardware—all in a ready-to-deploy NAS appliance.”

    Cheers,
    Scott M. Johnson
    Senior Program Manager
    Windows Storage Server
    @supersquatchy

  • Q&A With Matthias Wollnik, Senior Program Manager for Data Deduplication in Windows Server 2012 R2

    Hi Folks –


    Back in December, I wrote a blog article on Data Deduplication, which was first introduced in Windows Server 2012 and Windows Storage Server 2012. It’s been improved in Windows Server 2012 R2 and Windows Storage Server 2012 R2, and it is quickly becoming a very popular feature. In this post, I’ll share the perspective of Matthias Wollnik, a Senior Program Manager for Data Deduplication at Microsoft:


    Q: Why did Microsoft develop Data Deduplication? What customer pain(s) did you set out to solve?

    When we looked at what we could do to best serve our customers’ storage needs, we saw that:

    • The amount of data being created is continuing to grow very rapidly—to the point that companies are facing increased storage costs even as the average cost-per-gigabyte continues to fall.

    • Many companies are turning to deduplication to ease these pains, but until now those solutions have been fairly expensive.

    Based on these factors, we saw an opportunity to deliver new customer value by building Data Deduplication into Windows Server—in a way that would both help companies save money on storage and reduce related storage management costs.

    Q: Where can I get Data Deduplication and what does it cost?

    Data Deduplication is built into Windows Server 2012 R2 Standard, Windows Server 2012 R2 Datacenter, and Windows Storage Server 2012 R2 Standard—as well as the 2012 (pre-R2) versions of those editions. It’s a configurable feature under the File and Storage Services role and can be managed via Server Manager or Windows PowerShell. And because it’s built-in and ready to use, there’s no additional cost beyond that of the operating system.

    Q: Are there any system requirements for using Data Deduplication?

    Unlike some other solutions, Data Deduplication does not require any additional hardware. System requirements and considerations include the following:

    • It uses at most half of the available server memory.
    • We recommend 2 GB of usable working memory for each terabyte of data in a volume. So if you want to deduplicate a one-terabyte volume, you’ll want at least 4 GB of total server memory.
    • Data Deduplication is also compute-intensive, which is why using it for live VDI workloads requires the storage and compute nodes to be connected remotely via SMB.
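    The sizing guidance above can be expressed as a quick back-of-the-envelope calculation. This is just a sketch—the function name and defaults are mine, derived from the two rules of thumb (2 GB of working memory per terabyte of data, and a ceiling of half of server memory):

```python
def dedup_min_server_memory_gb(volume_tb: float,
                               gb_per_tb: float = 2.0,
                               max_memory_fraction: float = 0.5) -> float:
    """Minimum total server memory so deduplication's recommended working
    set (gb_per_tb * volume size) fits within its share of server memory."""
    working_set_gb = volume_tb * gb_per_tb
    return working_set_gb / max_memory_fraction

print(dedup_min_server_memory_gb(1))  # 1 TB volume -> 4.0 GB total memory
```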

    Because it works entirely at the file server level, the clients that connect to the server can be running any operating system. Data Deduplication doesn’t care if you’re using SMB3, SMB2.1, or NFS file protocols to access a share where the data is stored, or if it’s just local data that isn’t exposed for remote access.

    Q: What are the recommended use cases for Data Deduplication?

    Data Deduplication is recommended for—and delivers significant results on—home directory shares, group file and collaboration shares, software deployment shares, and VHD libraries. There are several things to consider when determining whether to use Data Deduplication:

    • It is designed for NTFS data volumes. 
    • It works on any “cold” files that are not currently in use. 
    • It can also be used to optimize virtual disks for running VDI workloads—provided that the storage and compute nodes for the VDI infrastructure are connected remotely via the SMB protocol. (Note: This capability is new for Windows Server 2012 R2 and Windows Storage Server 2012 R2. Everything else discussed in this Q&A applies to pre-R2 versions as well.)
    • It does not support boot or system drives.
    • Microsoft does not recommend or support using it on SQL Server and Exchange Server files, which, even if cold, will not benefit much from deduplication.

    Q: Microsoft says Data Deduplication can reduce required disk space by up to 90 percent. What kind of results can a company expect?

    The amount of disk space you’ll save depends on the type of data being stored:

    • From both Microsoft internal testing and that performed by ESG Lab, Data Deduplication has shown a savings of 25-60 percent for general file shares and 90 percent for operating system VHDs.
    • You can use the Deduplication Evaluation Tool to determine the expected savings that you would get if deduplication were to be enabled on a particular local or remote folder. (Note: More information on this tool can be found here and here.)

    Q: How does Data Deduplication work?

    Data Deduplication reduces the amount of physical disk space required to store a given amount of logical data. During the deduplication process, it:

    • Examines and segments files into variable-sized chunks.
    • Identifies duplicate chunks that appear in more than one file.
    • Maintains a single copy of each chunk in a compressed format in a central repository.
    • Replaces each deduplicated file with a much-smaller reference that indicates which chunks are used by the file.

    When a deduplicated file is read, a filter in the read-path reassembles the file in a manner that is transparent to the calling application or user.
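    The four steps above can be sketched with a toy chunk store. This is a deliberately simplified illustration—fixed-size chunks, no compression, and hypothetical class names—whereas the real feature uses variable-sized, content-defined chunks and a compressed on-disk chunk store:

```python
import hashlib

def chunks(data: bytes, size: int = 4):
    """Toy fixed-size chunking (the real feature picks variable-sized
    chunk boundaries based on content)."""
    for i in range(0, len(data), size):
        yield data[i:i + size]

class ChunkStore:
    """Single-instance chunk repository plus per-file reference lists."""
    def __init__(self):
        self.store = {}   # chunk hash -> chunk bytes (one copy of each)
        self.files = {}   # file name  -> ordered list of chunk hashes

    def optimize(self, name: str, data: bytes) -> None:
        refs = []
        for c in chunks(data):
            h = hashlib.sha256(c).hexdigest()
            self.store.setdefault(h, c)   # duplicate chunks stored once
            refs.append(h)
        self.files[name] = refs           # file becomes a small reference list

    def read(self, name: str) -> bytes:
        # transparent reassembly, like the read-path filter
        return b"".join(self.store[h] for h in self.files[name])

cs = ChunkStore()
cs.optimize("a.txt", b"AAAABBBBAAAA")
cs.optimize("b.txt", b"BBBBAAAA")
print(cs.read("a.txt") == b"AAAABBBBAAAA")  # True: reads are transparent
print(len(cs.store))                        # 2: only "AAAA" and "BBBB" stored
```

    Even in this toy, two files totaling 20 bytes are backed by just two unique chunks, which is the essence of the space savings.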

    Q: What opportunity does Data Deduplication present for Microsoft partners who help companies deploy Windows Server?

    Data Deduplication is a great thing for partners to set up for customers, for two reasons:

    • It can help them save significantly on storage costs.
    • It’s low-touch—just turn it on, optionally adjust the default configuration parameters such as file age, and walk away.

    That said, the larger opportunity for Microsoft partners is that Data Deduplication is a great enabler for new VDI solutions, as supported by the ability to deduplicate running VDI workloads that we added in R2. Because it greatly reduces the amount of storage that’s required for VDI VHDs, it makes new VDI solutions a lot more cost-competitive.

    Q: Can you provide an example of how much a company might save in a VDI scenario?

    Let’s assume that you want to deploy 100 VDI VMs at 40 GB per desktop, and that for performance and reliability considerations you want to use mirrored, high-performance, dual-port SAS2 SSD drives:

    • Without Data Deduplication, you would need 8 TB of storage, which is 40GB x 100 VMs x 2 copies of the data. Given that a one-terabyte SAS2 SSD drive costs upwards of $2000 today, that’s at least $16,000 in SSD costs, not counting the cost of the storage server or appliance itself.
    • With Data Deduplication, which provides up to a 90 percent reduction in required disk space for VDI VHDs, you could reduce the total required disk space to about one terabyte and just pay a few thousand dollars for the SSDs.
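    The arithmetic in this example can be verified with a short calculation (the function and its parameters are illustrative, not part of any Microsoft tooling):

```python
def vdi_storage(vms=100, gb_per_vm=40, copies=2,
                dedup_savings=0.90, usd_per_tb_ssd=2000):
    raw_tb = vms * gb_per_vm * copies / 1000   # mirrored capacity, in TB
    dedup_tb = raw_tb * (1 - dedup_savings)    # after up-to-90% savings
    return raw_tb, raw_tb * usd_per_tb_ssd, dedup_tb, dedup_tb * usd_per_tb_ssd

raw_tb, raw_cost, dedup_tb, dedup_cost = vdi_storage()
print(raw_tb, raw_cost)      # 8.0 TB and $16,000 without deduplication
print(dedup_tb, dedup_cost)  # ~0.8 TB and ~$1,600 with deduplication
```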


    Thanks to Matthias for taking the time to discuss some of the most commonly asked questions about Data Deduplication. For more information on Data Deduplication, see my blog here; for more on how to deploy it for VDI storage, see Matthias’s blog articles here and here.

    Cheers,
    Scott M. Johnson
    Senior Program Manager
    Windows Storage Server
    @supersquatchy

  • Windows Server 2012 R2 – What’s New for Deployment and Management?

    Hi Folks –

    Over the past few months, I’ve covered eight of the items on my list of Top 10 New and Improved Features in Windows Storage Server 2012 R2. Here are the links to the entire blog series:

    This sixth and final post in the series covers the last two items on my list, which are improvements we’ve made to further ease deployment and management. Unless otherwise noted, features discussed in this post are applicable to all editions of Windows Server 2012 R2 and both editions of Windows Storage Server 2012 R2 (Standard and Workgroup).

    OEM Appliance OOBE

    In August 2012, I blogged about how the OEM Appliance OOBE (out-of-the-box experience) deployment tool, originally developed for Windows Storage Server, was being included in certain editions of Windows Server 2012—as well as both editions of Windows Storage Server 2012. At that time, it supported the deployment of standalone servers or 2-node failover clusters. This included selecting a language, keyboard and regional settings, server names, passwords, domain join, shared storage, and cluster creation. Clustered appliances built using the OEM Appliance OOBE can be deployed in about 30 minutes from a single pane of glass (i.e., a single application) running on any one of the cluster nodes.

    In May 2013, we shipped an update to the OEM Appliance OOBE to support 4-node deployments on Windows Server 2012. This update was developed in partnership with Dell’s PowerEdge VRTX Cluster-in-a-Box (CiB) team, and I blogged about how it enabled Dell to deploy their 4-node cluster in about 45 minutes. The updated code was included in Windows Server 2012 R2, so the 4-node Windows package update (KB2769588) is not needed for Windows Server 2012 R2 deployments.

    The OEM Appliance OOBE is in Windows Server 2012 R2 Standard and Datacenter, as well as Windows Storage Server 2012 Standard and Workgroup. It supports the deployment of both storage server clusters and Hyper-V host clusters. It was designed for CiB appliances that are pre-connected via the appliance’s mid-plane, and it also works great on discrete servers that are cabled together.

    Here are some additional resources on the OEM Appliance OOBE:

    New and Improved Functionality

    New and improved functionality for the OEM Appliance OOBE in Windows Storage Server 2012 R2 includes:

    • Support for 4-node clusters. OEMs can now deliver intuitive cluster setup experiences for clusters with 2, 3, or 4 nodes. Five or more nodes should work, since the OOBE framework supports an index value; however, this has not been tested by Microsoft and is not officially supported.

    • Initial Configuration Tasks (ICT). The OEM Appliance OOBE includes a WPF-based application called Initial Configuration Tasks. This launch pad of tasks includes everything an administrator needs to configure the machine for an environment, including setting up the storage and creating the cluster.

    • Index-based tasks. An index-based task control enables OEMs to configure any number of tasks based on the index value supplied. So if you want to deploy an eight-node cluster, the ICT application will show eight task links that enable you to configure each node separately.

    • Windows Welcome. The screen you see when you boot a new Windows server for the first time is called Windows Welcome. The OEM OOBE records all of the selections made by the user, including the acceptance of the EULA (End User License Agreement), region, language, and keyboard settings. After Windows Welcome is completed, an XML file with the settings is created and copied to the registry of all the cluster nodes that were discovered by the IP address discovery process. The nodes are expecting this XML file and, when it arrives, the OOBE replays the selections on each node so you don’t have to log in to each one and repeat your choices.

    • IP address discovery. As soon as you finish Windows Welcome, the first node of the cluster tries to find the IP addresses of all the other nodes on the internal heartbeat network, expecting to find as many nodes as indicated in the registry key. If an incorrect number of nodes is discovered, the wizard notifies the administrator, who should check that all the cables are plugged in and all the nodes of the cluster are powered up, and then retry discovery. All of the auto-configured IP addresses assigned by APIPA are discovered by the node you are working on and inserted in the registry so the OOBE can target the machines for automatic configuration.

    • Domain controller setup. A simple domain controller setup process allows administrators to use an existing domain or to create a fresh domain controller in a local Hyper-V virtual machine that can be used to support the cluster. The wizard enables the administrator to specify a root domain name, static IP address of the domain controller, cluster management name, machine names, and passwords for all nodes in the cluster.

    The Experience

    Here are a few screen shots that illustrate the process:

    When you run the OEM Appliance OOBE, you’ll see a link to the Domain Join wizard. After you provide the necessary inputs for the wizard, you’ll see this very long confirmation page, which provides status as the wizard executes. The entire automated process of creating the domain controller, promoting it, joining all the cluster nodes to the domain, and setting local and domain credentials takes about 15 minutes, which saves me about 2 hours on a 4-node cluster.


    After a quick reboot, the administrator configures and formats the shared storage, and then prepares to build the new cluster. First, the OOBE validates that each of the nodes was set up correctly and that the domain controller is running:


    After the cluster validation test completes, the cluster can be created:


    The OOBE will then automatically:

    • Prepare the domain controller’s CSV volume.
    • Migrate the VM storage and create the file server instance.
    • Update the machine pool (which is a list of all the servers that are monitored and managed via Server Manager).

    Microsoft recommends that administrators set up a live replica of the DC VM, perform regular backups, and consider running a failover instance of Active Directory on dedicated hardware to prevent service interruption.

    The OEM Appliance OOBE enables our storage partners to deliver a setup experience that automatically handles all the mundane tasks involved in setting up a cluster, while you enjoy a cup of coffee. And you won’t be sitting around long enough for that cup of coffee to cool; within 30 minutes, your cluster can be configured, running, and delivering continuously available services.

    Storage Management API

    Manufacturers building systems running Windows Server can use the Windows Storage Management API to manage a wide range of storage configurations, from single-disk desktops to massive external storage arrays. Similarly, storage subsystem manufacturers can support Windows-based storage management for their products by implementing a Storage Management Provider (SMP). For more information, see How to Implement a Storage Management Provider.

    The Windows Storage Management API supersedes the Virtual Disk Service API beginning with Windows Server 2012. This new API is designed for WMI developers who use C/C++, Visual Basic, or other scripting languages that can handle ActiveX objects.

    After a Storage Management Provider is installed on the server, Windows Server Manager wizards can carve up LUNs from raw disk space, create storage pools, and create virtual disks that can be formatted using the NTFS or ReFS file systems. The same three storage wizards can be run from inside the OEM Appliance OOBE as well.

    SM-API Improvements for Windows Server 2012 R2 include:

    • Up to 10x faster enumerations of physical disk resources.
    • Cluster-awareness.
    • Remote management for new Storage Spaces features like tiering and caching.


    PowerShell

    Windows PowerShell is a great way to manage storage on Windows Server 2012. Here is a list of all the PowerShell commands you can use: http://technet.microsoft.com/en-us/library/hh848705.aspx

    You should also check out Roiy Zysman’s blog on storage management, which links to this awesome laminate-worthy Storage and File Services PowerShell Cmdlets Quick Reference Card for Windows Server 2012 R2. It provides examples of commonly used PowerShell scripts for Storage, Data Deduplication, iSCSI Target Server, iSCSI Initiator, Failover Clusters, Shares, Work Folders, File Server Resource Manager, DFS Namespaces, and DFS Replication.

    Final Thoughts

    The deployment and storage management experiences for Windows Server 2012 R2 have taken a huge step forward. I encourage you to try out the new features and start building your own library of storage management commands. After you have a perfect set of scripts for a given deployment, you can whip up even the most complex cluster, share, and storage configurations from the beach while you enjoy your beverage and figure out what to do with that cute little paper umbrella that’s in your drink.


    Cheers,
    Scott M. Johnson
    Senior Program Manager
    Windows Storage Server
    @supersquatchy

  • New Storage Paper: Maximizing Storage Efficiency on Windows Server 2012 R2 and Windows Storage Server 2012 R2



    Hi Folks –

    I’d like to highlight a new storage paper: Maximizing Storage Efficiency on Windows Server 2012 R2 and Windows Storage Server 2012 R2, which you can download here. This paper highlights storage features that enable you to get more out of your existing storage capacity—and to easily scale that capacity when needed. Here is a brief summary of the topics covered:

    The Need for More Efficient Storage

    For most companies, data volumes are continuing to grow at a rapid pace. Many of those companies are looking for ways to minimize storage-related costs. But where can you get the technologies needed to maximize storage efficiency, how do they work, how are they deployed, and what do they cost?

    With Windows Server 2012 R2, All You Need to Maximize Storage Efficiency Is In-the-Box

    Windows Server 2012 R2 integrates SAN features with the power and familiarity of Windows Server, enabling you to easily scale up to meet growing storage needs on low-cost, industry-standard hardware. Even better, it provides everything that you need for highly efficient storage that can scale to support the largest workloads. Key technologies that contribute to this capability include:

    • Data Deduplication, which can reduce the amount of disk space required for common storage workloads by 30-90 percent based on Microsoft internal testing. 
    • Storage Spaces, Microsoft’s implementation of storage virtualization, which enhances storage scalability through the abstraction of logical storage from physical storage.
    • Thin Provisioning and Trim, which enable you to deploy only the disk space you need today, expand dynamically when needed, and automatically reclaim storage that is no longer needed.
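    The thin provisioning and trim behavior can be illustrated with a toy model—the class and method names here are my own, not a Windows API:

```python
class ThinVolume:
    """Toy thin-provisioned volume: the logical size is fixed up front,
    but physical blocks are allocated only on first write and released
    again when trimmed."""
    def __init__(self, logical_blocks: int):
        self.logical_blocks = logical_blocks
        self.allocated = {}                 # block index -> data

    def write(self, block: int, data: bytes) -> None:
        assert 0 <= block < self.logical_blocks
        self.allocated[block] = data        # physical space on demand

    def trim(self, block: int) -> None:
        self.allocated.pop(block, None)     # reclaim space no longer needed

    def physical_usage(self) -> int:
        return len(self.allocated)

vol = ThinVolume(logical_blocks=1_000_000)  # large logical size up front...
vol.write(0, b"data")
vol.write(1, b"more")
print(vol.physical_usage())                 # 2: ...but tiny physical footprint
vol.trim(1)
print(vol.physical_usage())                 # 1: trim reclaimed a block
```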

    The paper then dives into each of these technologies in greater detail.

    Benefits of Improved Storage Efficiency

    The benefits of using Windows Server 2012 R2 to maximize storage efficiency include:

    • Reduced/deferred storage costs. With the technologies in Windows Server 2012 R2, you can store more logical data in less physical disk space, purchase only the physical storage you need today, expand dynamically as needed, and avoid operating costs associated with supporting unused disk capacity until it is actually needed.
    • Low acquisition costs. All features needed to maximize storage efficiency are included in-the-box with Windows Server 2012 R2 (Standard and Datacenter) and Windows Storage Server 2012 R2 Standard, and can be used without any additional hardware, software, or licensing fees.
    • Fast, easy deployment. With Windows Server 2012 R2, there’s no new hardware or software to deploy. Technologies for maximizing storage efficiency are built into the operating system and can be turned on and configured in just a few minutes.
    • Ease of management. Server Manager in Windows Server 2012 R2 can provide a single view of all your storage, across your datacenter. System administration tasks can also be performed using Windows PowerShell, enabling you to automate provisioning of new storage spaces and virtual disks whether you have one server or multiple servers.

    Deployment Scenarios

    The technologies for maximizing storage efficiency in Windows Server 2012 R2 are designed to work equally well in your main datacenter or in a remote office/branch office (ROBO) scenario. In the ROBO scenario, two additional features built into Windows Server 2012 R2 are often useful: BranchCache and DFS Replication. Both of these are discussed in the paper, along with high-level deployment diagrams.

    SAN Performance at a Fraction of the Cost

    The Enterprise Strategy Group (ESG) tested the Microsoft Storage Stack against a traditional SAN, concluding that the Microsoft Storage Stack provides comparable performance at a fraction of the cost. The paper contains a summary of these results, and the entire report can be found here.

    Final Thoughts

    If you’re not already thinking about adopting a highly efficient storage infrastructure based on industry-standard hardware, you may want to start. This paper provides a concise overview of how Windows Server 2012 R2 and Windows Storage Server 2012 R2 can help you save on storage costs by using the latest Microsoft storage technologies to increase storage efficiency. You’ll not only save on disk space, but you’ll be able to achieve performance similar to a traditional SAN at a fraction of the cost.

    The entire Maximizing Storage Efficiency paper can be downloaded here. I encourage you to read the paper and learn how you can use Windows Server to create a lean and mean storage infrastructure!

    Cheers,
    Scott M. Johnson
    Senior Program Manager
    Windows Storage Server
    @supersquatchy