Geo Replication in Hybrid Cloud Storage
According to the Gartner Group, enterprise IT’s biggest challenge today is double-digit data growth. In fact, data is growing in enterprise storage environments at an alarming average of 40% per year, as estimated by IDC. Other industry experts place the number even higher. “Data is growing in enterprise IT departments at 50% per year,” says Tam Dell'Oro, founder and president of Dell'Oro Group, a research and consulting firm.
While this is going on, the storage hardware industry giants continue to claim that the best way to deal with this issue is to keep purchasing more and more expensive Storage Area Network (SAN) arrays. This is like a car dealer telling a customer that if they can’t fit enough stuff in their recently purchased car, they just need to buy another of the exact same car to help carry all the stuff around.
In addition, this solution no longer resonates with enterprise IT departments because:
They are already spending a very high percentage of their total IT budget on SANs
Too many of them have contracted a severe case of “Storage Paranoia”, i.e. they know they will run out of storage at some point, but don’t know exactly when
They are also experiencing “Storage Catch-22” - how do you know exactly how much new SAN capacity to buy? If you buy too little, you run out and have a storage crisis on your hands. If you buy too much, you are accused of wasting resources and budget dollars. Either way, it’s a losing proposition for the IT person in charge
Infinity Pharmaceuticals, a rapidly growing drug development company, found themselves in this exact situation. “Our amount of data is growing at a remarkably rapid pace,” says Zac Saunderson, Network Engineer at Infinity Pharmaceuticals. “In our industry, we produce data at an even greater rate as we get closer to releasing product lines.” In the past, Infinity stored most of its data on a SAN in its datacenter, with some data also stored locally on file servers. Keeping up with the demands of high data growth was sapping the time and resources of their entire IT staff. As a result, they decided to take a fresh look at their IT infrastructure to discover whether new technologies were available that could help them operate more efficiently. What they found was hybrid cloud storage from StorSimple.
In 2011, Infinity decided to start using cloud technologies—specifically Microsoft Azure—for several business functions. As part of the move to the cloud, the company adopted StorSimple hybrid cloud storage. By storing infrequently accessed data in the cloud, Infinity was able to greatly reduce the amount of data that needed to be stored on its SANs, freeing up SAN space for more important IT projects. StorSimple also combines local enterprise storage with Azure for cloud economics and disaster recovery, reducing the overall cost and complexity of traditional storage and data protection. In 2014, Infinity took part in an early adopter program for the latest generation of the solution, the StorSimple 8000 series hybrid cloud storage arrays. “We believe that there’s nothing else in the market that’s as easy to use with as much control as StorSimple,” says Saunderson. “We’re continually finding new ways to use our StorSimple devices.” One key result of the program, Saunderson said, was this: “We don’t have to deal with tapes, augment our SAN, or seek outside help with storage management, so we estimate we will save about $65,000 in just the first year of using StorSimple. It’s heartening to have such a reliable, complementary way to control data growth and keep storage costs from skyrocketing.”
Other StorSimple customers have experienced similar results in the battle against runaway data growth. MedPlast, a rapidly growing medical device manufacturer, was suffering from acute data growth symptoms. In June 2011, MedPlast switched its data-storage environment to StorSimple hybrid cloud storage. “The concept of essentially bottomless data storage was very appealing to us,” says MedPlast IT Director, Dan Streufert. With StorSimple, MedPlast can use three tiers of storage for different classes of data: solid state disks for the most frequently accessed data; hard disk drives for less frequently accessed data; and cloud storage for seldom used and inactive data as well as off-site data protection. Once MedPlast chooses a few settings, StorSimple automatically tiers data to the selected locations, including the cloud, without the IT staff having to worry about it.
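The tiering behavior described above is easy to picture in code. Here is a minimal Python sketch of an access-frequency-based placement policy; the tier names, thresholds and Block structure are hypothetical illustrations, not StorSimple's actual implementation.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical tier names, fastest/most expensive first.
SSD, HDD, CLOUD = "ssd", "hdd", "cloud"

@dataclass
class Block:
    block_id: str
    last_access: datetime
    access_count: int

def choose_tier(block: Block, now: datetime) -> str:
    """Place hot data on SSD, warm data on HDD, and cold/dormant data in the cloud.

    The 7-day and 90-day thresholds are invented for illustration; a real
    array tunes placement continuously from observed I/O patterns.
    """
    age = now - block.last_access
    if age < timedelta(days=7) and block.access_count > 100:
        return SSD
    if age < timedelta(days=90):
        return HDD
    return CLOUD  # dormant data is also the natural candidate for off-site protection
```

In a real array the policy runs continuously and transparently; the point of the sketch is simply that placement is driven by how recently and how often data is touched, with the cloud tier absorbing the long tail of dormant data.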
The company began an upgrade of its data-storage environment in August 2013 by participating in a technology adoption program for the new Microsoft Azure StorSimple 8000 series hybrid storage arrays. “We were very interested in this new version of StorSimple because of its tight integration with Microsoft Azure,” says Streufert. “We found the combination of a new centralized management portal, enhanced alerting features, automated archiving, and new virtual appliances very compelling.”
MedPlast uses StorSimple to store almost 95 percent of its data, including all of its departmental databases. The company uses deduplication and compression features in StorSimple to help minimize storage consumption. And by using the cloud snapshot feature to automate off-site data protection, Streufert’s IT team eliminated the need to manage tape backups and streamlined their disaster recovery program. “When we store databases on a StorSimple array, we can snapshot it to the Microsoft Azure cloud and simplify the backup and recovery procedures for those databases,” says Streufert.
Walbridge, a large global construction company, first heard about StorSimple hybrid cloud storage while attending a Microsoft regional seminar on new cloud technologies and unique solutions for enterprise storage. Microsoft recommended the StorSimple hybrid cloud storage solution to solve the company’s storage problems. After comparing this advanced solution to more conventional storage arrays, the company was convinced that StorSimple was the right solution.
As Cynthia Weaver, AVP of Information Technology, puts it, “We needed to find a cost-effective new technology that could help us cope with the explosive data growth that we are experiencing. Storage space that used to last us 5 years now only lasts for 3 years. With StorSimple and Microsoft Azure hybrid cloud storage, we found exactly what we were looking for.” The StorSimple solution has saved Walbridge 65% in overall storage costs over the SAN alternative and has turned out to be a much more efficient and cost-effective way to handle the company’s massive data growth.
After deploying the StorSimple solution, Walbridge’s backup and restore functions are 50% faster than before, and the administrative overhead of data protection has been cut in half. The solution also provides a much faster way to recover deleted files. For example, to recover a lost file, all that is needed is the date and time the file was deleted, and the file is restored in under 2 minutes, compared to 3 days using the old tape system. In addition, individual database restore times have dropped by 80%.
Clearly, there now exists a much more efficient and cost-effective way to handle runaway data growth in the enterprise.
Today we are excited to announce availability of the StorSimple Adapter for SharePoint (SASP) for the StorSimple solution. The StorSimple Adapter for SharePoint brings the hybrid cloud storage capabilities of Microsoft Azure StorSimple to on-premises SharePoint farm deployments. StorSimple's distinctive hybrid storage features, such as dynamic capacity expansion to Azure storage, automated off-site data protection with cloud snapshots, and inline deduplication and compression, combine with SharePoint's rich collaboration and management offerings. The result is a compelling solution from Microsoft that is fully supported and ready to deploy into datacenters today.
The solution uses SharePoint Remote BLOB Storage to seamlessly offload BLOB content stored in SharePoint's SQL Server-based content databases onto iSCSI volumes stored and managed by the StorSimple appliance. This relieves I/O load on SQL Server, improving end-user responsiveness, and gives user data the benefit of the data protection and disaster recovery features that are the hallmark of StorSimple. The collaborative, content-driven nature of SharePoint benefits from this solution. In partnership with Kroll Ontrack, IT administrators can search and restore individual SharePoint items without conducting a full database restore by using Ontrack PowerControls (version 7.2). All these capabilities are available and supported on SharePoint 2010 and SharePoint 2013.
For more information about the StorSimple Adapter for SharePoint please visit the documentation page.
Woot! It's a huge day for us at StorSimple/Microsoft! We are announcing our new Microsoft Azure StorSimple 8000 series arrays, the StorSimple 8100 & 8600, that integrate with two new Azure services, the Microsoft Azure StorSimple Manager and the Microsoft Azure StorSimple Virtual Appliance. The 8100 and 8600 are hybrid storage arrays in every sense of the word with automated data tiering between SSDs and HDDs - but with an additional cloud storage tier with the lowest online storage cost and cloud-scaling capacity. If you are tired of buying and installing storage and rebalancing workloads all the time, you really need to check out StorSimple. The same is true for managing data protection, because off-site data protection is completely automated with StorSimple. And if you can't test DR because it's too disruptive and takes too long, you need to look into non-disruptive, thin recovery with StorSimple. Then there is managing double digit data growth - that too is something StorSimple solutions excel at. Check out what our customers have to say about this revolutionary storage technology by looking at our customer case studies.
If hybrid cloud storage had existed years ago, we wouldn't have needed the complex storage and data management technologies and processes that we have today. When you integrate off-site cloud storage with local enterprise storage, many of the difficult storage management problems disappear. A comparison of non-cloud hybrid storage and StorSimple hybrid storage is summarized in the graphic below. The addition of an integrated off-site cloud tier gives StorSimple the advantages of automated data protection, archiving and online capacity for dormant, inactive files.
The big shift to a data-centric ROI
StorSimple has been reducing the cost of storage for customers for several years. With the introduction today of Microsoft Azure StorSimple solutions, a new vision of hybrid cloud storage is taking shape - one where cloud services and data mobility are front and center. The new economic equation of Microsoft Azure StorSimple features expanded access to enterprise data by cloud-resident applications as well as the greater management efficiencies customers appreciate.
The new StorSimple Manager is an Azure management portal that controls all functions of all StorSimple 8000 series arrays across the enterprise. It provides a single, consolidated management point that uses the Internet as a control plane to configure all parameters of StorSimple 8000 series arrays and for displaying up-to-the-minute status information in a comprehensive dashboard. There is never a need to visit a remote site to manage removable data protection media or upgrade storage capacity because data protection and capacity expansion are done automatically by the array using cloud storage. Data management tasks, such as configuring data retention policies, are accomplished from the StorSimple Manager for all sites, enabling centralized compliance with corporate standards. The StorSimple Manager's reach is powerful and a great example of how hybrid management concepts bring not only real cost savings, but more accurate control of infrastructure equipment and processes.
Overcoming location limitations to leverage data
The location where data is stored is typically determined by the performance requirements of the most latency-sensitive applications. This is certainly understandable, but it also restricts data from being effectively leveraged by systems and applications in other locations such as cloud datacenters. Discussions of using enterprise data in the cloud have often been in the context of cloud bursting, but there are many other scenarios where the StorSimple Virtual Appliance will provide access to enterprise data in Azure.
Data mobility with the 8000 series arrays and the Virtual Appliance starts with the data that has been uploaded to Azure by StorSimple cloud snapshots. Cloud snapshots are like snapshots in other storage products, except they are stored off-site in the cloud without the capacity limitations that can compromise data retention with on-premises disk-based backup systems. Cloud snapshot jobs not only effectively back up data on the StorSimple array, they also synchronize the volume on-premises with the virtual volume in Azure. Cloud snapshots contain deduped data that requires less storage capacity in the cloud, but needs to be rehydrated to be used again. The StorSimple Virtual Appliance rehydrates virtual volumes in Azure, making them available to VMs and applications there. This allows customers to use the data they have in Azure in the scenarios described below.
There are two ways to transition virtual volumes in Azure for use by VMs there: failover and cloning. Both processes are managed quickly and easily from within the StorSimple Manager, which identifies the volume that will be transitioned and the host (or VM) that will then be granted access to it. Failover takes a virtual volume in the cloud and directly assigns it for use by a VM in order to continue processing the same application that had been using it previously. For instance, recovery from a disaster that halts operations on-premises could be accomplished by failing over affected volumes to VMs in the cloud. If, or when, it is desired to move the volume back on-premises at a later time, another "inverse" failover operation can be performed to re-assign the virtual volume to a host (which can be a guest VM) on-premises.
Cloning is similar to failover, except that a second, identical virtual volume is created that is assigned to a VM in Azure or a host/guest VM on-premises. The creation of clone volumes from source virtual volumes allows IT teams to set up temporary VMs and applications in an IaaS "sandbox" for specific objectives such as searching historical data or for application experiments using real, enterprise data. Cloned volumes are not kept in sync with source volumes, but there are many applications that don't need real-time data as much as they need real data. For instance, data analytics and discovery applications can be run against day-old data and identify the exact same trends or find the exact same content as they would if running against live data. Using cloned volumes in the cloud with temporary systems and networks is much cheaper and less disruptive than trying to do the same thing on-premises.
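For readers who think in scripts, the contrast between the two transitions can be sketched with a toy in-memory model. The classes and method names below are hypothetical stand-ins, not the StorSimple Manager interface; they only illustrate that failover reassigns the same volume while cloning produces an independent copy.

```python
from dataclasses import dataclass, field

@dataclass
class VirtualVolume:
    name: str
    assigned_host: str = ""   # VM or on-premises host currently granted access

@dataclass
class CloudInventory:
    """Toy stand-in for the virtual volumes kept in Azure; not a real API."""
    volumes: dict = field(default_factory=dict)

    def failover(self, volume_name: str, target_vm: str) -> VirtualVolume:
        """Reassign the existing volume so the same application continues on a cloud VM."""
        volume = self.volumes[volume_name]
        volume.assigned_host = target_vm       # same data, new owner
        return volume

    def clone(self, volume_name: str, clone_name: str, target_vm: str) -> VirtualVolume:
        """Create an independent copy for a sandbox VM; not kept in sync afterwards."""
        source = self.volumes[volume_name]     # verify the source exists; it stays untouched
        copy = VirtualVolume(name=clone_name, assigned_host=target_vm)
        self.volumes[clone_name] = copy
        return copy
```

Under those assumptions, inventory.failover("finance-db", "azure-vm-01") changes only who can mount the original volume, while inventory.clone("finance-db", "finance-db-sandbox", "azure-vm-02") leaves the source volume alone and hands the sandbox VM its own copy.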
IT teams will be happy to know that the environment for accessing data in Azure using the Virtual Appliance mirrors on-premises SAN connections. The Virtual Appliance connects to Azure VMs over a virtual iSCSI Ethernet network, using the same platform volume and storage management tools (such as Windows Disk Management) and iSCSI initiators that are used on-premises. That means many of the same system management skills used on-premises can be applied in Azure to do the same things there.
We are looking forward to seeing how our customers use the Virtual Appliance in the months and years to come. There is a lot of potential to do great things together to build storage that is not only cheaper to own and operate, but also unlocks more of the value inherent in corporate data.
Check out our new 8000 series solutions at http://www.microsoft.com/storsimple
As you are probably aware, Microsoft is becoming a "devices & services" company. Don't assume that "devices" refers just to phones, tablets and consumer services; devices can also be data center infrastructure products. The same way that client/server architectures reshaped enterprise computing in the 90s, device and service designs will change the future of enterprise IT. The broad interest in Software Defined Networking (SDN) shows how eager customers and vendors are to integrate on-premises devices (both real and virtual) with centralized management services. Why? To respond faster to changes and to increase the utilization of the physical infrastructure.
The hybrid cloud storage solution from Microsoft is an excellent example of how on-premises devices and cloud storage services can be integrated to achieve a more flexible IT infrastructure. Last year Microsoft acquired StorSimple and with it, enterprise storage devices that help customers manage unstructured data growth - something that is an unsolved problem for most IT organizations. The StorSimple device automates time consuming storage processes and transparently uploads data to Windows Azure Storage services, allowing the IT team to focus on other problems.
The device part of this solution is an on-premises iSCSI storage array with a cloud-connected back end that accesses objects in Windows Azure Storage services. Changed data is encrypted and backed up automatically to an off-site Windows Azure data center, where 3 copies are made for redundant protection. If a disaster strikes the customer's data center, StorSimple devices have the intelligence to perform deterministic recoveries that eliminate unnecessary data downloads that waste time. Unlike dedupe appliances that have data retention limitations based on their internal capacity, the storage services of Windows Azure can be used to retain as much backup data as desired. The StorSimple device manages storage capacity by relocating stale data to Windows Azure Storage. In most cases, the data selected for relocation is already in the cloud because it had been previously backed up there, which means capacity relief is a matter of redirecting data pointers in the StorSimple device. Compared to adding storage capacity to traditional enterprise storage arrays, the device and service architecture is a significant improvement.
There will undoubtedly be other hybrid management architectures that integrate enterprise-class devices with cloud-based services. The Windows Azure Hyper-V Recovery Manager, for example, shows how Microsoft is integrating (virtual) devices with Azure services to create a DR solution for Windows Server 2012 customers.
Next week's VMworld conference in San Francisco will be full of new announcements about VMware's software-defined data center (SDDC) initiative. If you find yourself wondering what all this talk about "data planes" and "control planes" is about, you might try translating that inflated jargon into something a little more concrete - such as devices and services.
Here's a whiteboard video I made introducing the Microsoft hybrid cloud storage solution. It's based on the technologies Microsoft acquired with StorSimple and their integration with Windows Azure Storage. Here's a link to our web page at Microsoft if you want to find out more.
Hybrid cloud storage uses object storage in the cloud. Data on-premises is uploaded to cloud object storage until it is accessed again. Many people have first-hand experience with cloud object storage through file sharing apps/sites like Dropbox, SkyDrive, YouSendIt, etc., but cloud object storage is also used very effectively in enterprise-level hybrid cloud storage (HCS) like the Microsoft HCS solution. In this case, a StorSimple CiS system, which is iSCSI-based, integrates with Windows Azure Storage, which is object-based.
Obviously, a data translation process turns block data into objects. This happens during a data deduplication process, when incoming data exits the input queue in a CiS system. The deduped block objects in a CiS system are called fingerprints and have object properties such as being content-addressable and immutable. From that point on, block data is managed as objects, whether it is on-premises or in the cloud. In other words, the Microsoft HCS solution is a hybrid object store for block data. There are a lot of benefits to working this way.
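The fingerprint idea is essentially content-addressable storage, which can be shown in a few lines of Python. This is a conceptual sketch only; the chunk size, hash function and object layout are assumptions made for illustration, not CiS internals.

```python
import hashlib

CHUNK_SIZE = 64 * 1024   # illustrative chunk size; not the real system's value

def fingerprint(chunk: bytes) -> str:
    """A fingerprint is derived from the chunk's content, so identical chunks
    always map to the same immutable, content-addressable object."""
    return hashlib.sha256(chunk).hexdigest()

def dedupe(volume_data: bytes, object_store: dict) -> list:
    """Split incoming block data into chunks, store each unique chunk once,
    and return the ordered list of fingerprints that reassembles the volume."""
    recipe = []
    for offset in range(0, len(volume_data), CHUNK_SIZE):
        chunk = volume_data[offset:offset + CHUNK_SIZE]
        fp = fingerprint(chunk)
        if fp not in object_store:    # only previously unseen content is stored (or uploaded)
            object_store[fp] = chunk
        recipe.append(fp)
    return recipe
```

Because a fingerprint identifies the same content wherever the object lives, the recipe that reassembles a volume works unchanged whether the chunks sit on-premises or in cloud object storage.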
When you look at solving the most vexing problems in your storage environment, especially data growth, backup data protection and the inability to test DR plans, the benefits of a hybrid object storage architecture are indeed powerful.
Last Sunday Gretchen and I were cruising Hwy 1 south of Half Moon Bay looking for migrating whales and loving the splendor of this coastline when we came across this group of kite boarders at Waddell Creek. As you can see, it was a windy day and the boarders were catching great waves and getting pretty good air. The beginning of the video shows one pretty good flight that was close to shore. The video was taken with my Windows 8 Phone - a Nokia 820 - and has an audio track made with Sony Acid Studio.
David Isenberg wrote his famous and controversial paper, The Rise of the Stupid Network, in 1997. It's a short and historically interesting read. If you have never read it, follow the link now. It will take you less than 10 minutes. If you want the CliffsNotes version, the gist of his paper is copied below:
A new network "philosophy and architecture" is replacing the vision of an Intelligent Network. The vision is one in which the public communications network would be engineered for "always-on" use, not intermittence and scarcity. It would be engineered for intelligence at the end-user's device, not in the network. And the network would be engineered simply to "Deliver the Bits, Stupid," not for fancy network routing or "smart" number translation.
Fundamentally, it would be a Stupid Network.
I've thought about corollaries in storage for many years. Networks and storage are much different. Storage is much more tightly coupled with data management in a way that networks will never be. Data management takes intelligence to make sure everything gets put in its optimal place, where it can be accessed again while complying with corporate governance, legal requirements and workers' expectations. Networks don't really have these sorts of long-term consequences, so apples-to-apples comparisons aren't very useful.
But that doesn't mean there wouldn't be ways to eliminate unnecessary aspects of storage and lower costs enormously. As soon as data protection and management could be done without needing specialized storage equipment to do the job, that equipment would be eliminated. Cloud storage changes things radically for the storage industry, especially inventions like StorSimple's cloud-integrated storage (CiS) and a solution like Microsoft's hybrid cloud storage. But StorSimple was a startup and Microsoft isn't a storage company, so it wouldn't start becoming obvious that sweeping changes were afoot until a major storage vendor came along to make it happen.
That's where EMC's ViPR software comes in. EMC refers to it as software-defined storage, which was predictable, but necessary for them. FWIW, Greg Schulz does a great job going through what was announced on his StorageIO blog.
One of the things ViPR does is provide an out-of-band virtualization layer, described in Greg's blog, that opens the door to using less-expensive, stupid storage and protecting the data on it with some other global, intelligent system. This sort of design has never been very successful, and it will be interesting to see if EMC can make it work this time.
The aspects of ViPR that are most interesting are its cloud elements - those that are expected initially and those that have been strongly hinted at: a virtual storage appliance (VSA) that runs in the cloud, the use of cloud object storage services, and cloud APIs.
If EMC wants their technology to run in the cloud, and it's clear they do, they need all three of these things. For instance, consider remote replication to the cloud - how would the data replicated to the cloud be stored there? To a piece of hardware? No. Using storage network/device commands? No. To what target? The backend of a hypothetical EMC VSA in the cloud uses object storage services and cloud APIs. There is no other way to do it. They could have a VSA that uses iSCSI to a facility like EBS, but that would be like putting the contents of a container ship on rowboats. So, a VSA that accesses object storage services using cloud APIs is the only way. It is a clear signal that ViPR will be their version of CiS. They probably won't call it that, but that's beside the point.
The important question is what happens to data protection after ViPR is made fully cloud-capable. Once you start using cloud services for data protection, there are a few things that immediately become obvious:
Those are all things that hybrid cloud storage from Microsoft does today by the way, but that's beside the point too. What's interesting is what will happen to EMC's sizeable data protection business - how will that be converted to cloud solutions and what value can they add that enhances cloud storage services? The technologies they have available for hybrid cloud data protection are already mostly in place and there will undoubtedly be a transformation for Data Domain products in the years to come, but these are the sorts of things they need to figure out over time.
It's going to be a slow transition for the storage industry, but EMC has done what it usually does - it made the first bold move and is laying the groundwork for what's to come. It will be interesting to watch how the rest of the storage industry responds.
I am genuinely excited by the surprising news this morning about Dell's acquisition of Enstratius. I don't have any firsthand knowledge of the deal or Enstratius' technology, but the company has an excellent reputation and its CTO, George Reese, has been providing provocative thought leadership about cloud computing for a long time. Congratulations to both Dell and Enstratius for making this milestone decision.
What I know is that Enstratius develops cloud management technology that allows customers to manage cloud installations that span cloud boundaries, whether those clouds are private or public. They have built up a broad list of cloud partners that includes Windows Azure, Rackspace, AWS and others, which undoubtedly made them compelling to a company like Dell that wants to give their customers a lot of viable options. Giving customers management tools that span different cloud vendors is great for customers who are concerned about being locked in by one of the major cloud platforms. In the long run, it will make all cloud service providers work harder to attract and keep customers - and that kind of competitive pressure makes for better industries and markets. Putting the technology within Dell should make it much more broadly available, assuming Dell will invest more in Enstratius. I suspect Dell will want to accelerate the business they acquired, just as Microsoft is accelerating our StorSimple business. It's a great recipe for success - these kinds of deals can work extremely well, something I have witnessed firsthand a couple of times in my career. It's not that Enstratius couldn't have grown themselves over time, but this acquisition will compress that time by a few years.
I believe this deal is considered to be strategic for Dell - something they can build a business around - as opposed to the kind of deal that fills a spot in the company's product line. That should be energizing for the people at Enstratius, who will find themselves travelling even more than they were previously - something they might not have thought possible. Get ready George, you just went worldwide in a way that is difficult to imagine.
Last week Pivotal came out of stealth mode and announced itself to the world. It got a fair amount of attention because Paul Maritz is the guy there and because the company is being built on jettisoned technology assets that had previously been acquired by EMC and VMware. It was also interesting that General Electric was involved by making an investment in the company. Today it was reported that VMware was also selling off WaveMaker, but not to Pivotal, which raises the question - why not? I suspect it's because the ROI for VMware is deemed to be better selling it to somebody else as opposed to adding another ingredient to Pivotal's melting pot.
The question you always need to ask about any new venture is when it is going to start making money, how much it will be able to make, and how it will take its products and services to market. Pivotal is no exception, unless you count the fact that they have a lot of overhead in people trying to put all these disparate pieces together. Some see the amassed talent as great talent, but a CFO sees it as a whopping huge payroll that isn't being offset by anything right now. There are no existing product lines to leverage and there isn't even a new product line that can carry the company through its development efforts. In other words, it's a science project made of plausible components that could possibly work.
So I'm sticking my neck out, but not very far, by saying I don't think there is a light at the end of the tunnel for Pivotal. Just because there are a lot of smart people, it doesn't mean there is a business. You may ask "What about GE, doesn't their endorsement mean something?" Yes, it certainly does. GE is making a big push to be a software company that tackles large-scale data processing for various verticals they have a vested interest in, such as health care (those Hugo Weaving ads are fantastic), manufacturing, energy and others I can't think of right now. Getting visibility for their business is probably well worth the effort, and I'm not talking about publicity. As an investor in Pivotal, they will get to see more technology from more ecosystem companies than they might have otherwise. It's a good move for a company that wants to increase its software business.
I could be wrong, but that's how I see it.
Technology marketing can resemble a shell game where customers are left to guess what's going on until a vendor releases products or services that can be acquired and used. Even then, there is still a lot of guesswork on the part of the consumer as product directions and roadmaps are often opaque. There are many good business reasons why technology vendors are inclined not to disclose what they are working on, including the fact that sometimes plans don't work as expected. Customers and vendors alike are better off with solutions that are available and work, as opposed to the next big thing that does neither.
But, nobody in technology ever wants to be uninformed, especially where the major trends are concerned. Vendors feel the need to be trend-setters and to dictate what the trends are. That's how we define leadership in our industry - the ability to envision technology's future, to articulate that vision and then execute the vision with products and services. Sometimes the magic works and sometimes it doesn't.
So there is a schism between practical business based on things that work and future business based on yet unfulfilled vision. Sometimes there are separate technology threads that are so broad there is nothing that can keep them from colliding. That is the case today with cloud technology - specifically hybrid cloud - and big data. Hybrid cloud is all about balancing resource costs, computing capabilities and management between on-premises data centers and cloud data centers. Everything we know about running data centers is part of the vision for hybrid cloud and there will likely be unexpected efficiencies to be gained and problems to overcome as this technology unfolds.
Big data is all about making better use of information - particularly the relationships between different data. One of the confusing elements of big data is that it has two fundamentally different branches - the first where the relationships between data are unknown and unpredictable and are discovered through a process that is akin to information refining - and the other where the relationships between data are either known or predictable and can be programmed specifically to achieve an instantaneous result. The unpredictable/unknown/refining stuff is part of the discussion surrounding Hadoop and MapReduce algorithms. The predictable/programmable/instantaneous stuff is part of the discussion surrounding in-memory database (IMDB) technology.
If you look at the world like I do - from a storage perspective - this is all a bit puzzling. Hybrid cloud computing requires a way to make data portable between the on-premises data center and the cloud. Data portability is mostly seen today as moving VMDKs or VHDs for virtual machines between earth and sky. Depending on the size of the data sets involved and the bandwidth available, this could take a short time or a long time. There will undoubtedly be some very interesting technologies developed in the next several years that narrow the time gap between on-premises and cloud production services. The size of the data sets in big data is in some ways the worst case scenario for hybrid cloud. There is lots of data in one place that has to be moved through a cloud connection so it can be processed in another.
The real-time needs of the predictable/programmable/instantaneous type of big data make it seem very unlikely that hybrid cloud data portability will ever be possible. If you need real-time access to data, can you afford to wait hours, days or even weeks for a full data upload? In contrast, the unpredictable/unknown/refining type of big data can very likely be fulfilled by hybrid cloud data portability. After all, if you are going to be refining data over the course of days, weeks or even months, then data synchronization latencies are mostly insignificant.
Then there is the whole problem of disaster protection and backup for big data. If the business becomes dependent on big data processes - especially the predictable/programmable/instantaneous type - there has to be a way of making it work again when Humpty Dumpty has a great fall. This is where the brute-force analysis of hybrid data portability falls apart - there will obviously be delta-based replication technology of some sort that incrementally updates data to a cloud recovery repository. Customers are clearly looking for this sort of hybrid cloud data protection for their data - and will want it for big data too.
We have a scenario today that is ripe for hype - and technology vendors are creating positions they hope will influence the market to not only buy products, but to design architectures that will dictate future purchases for many years. There is a great deal at stake for vendors and customers alike.
By Mark Weiner, Director of Product Marketing, Microsoft
Cloud storage is one of the hottest segments of cloud computing. Enterprise customers are increasingly looking at cloud storage as a way to capture the benefits of cloud economics and agility without giving up performance, while also improving data protection capabilities. The Microsoft-StorSimple answer is to take a hybrid approach that lets you extend existing on-premises storage investments while getting the strong benefits of the cloud.
When comparing cloud storage providers (CSPs) to create a hybrid cloud storage solution, you should look carefully at both the storage service offering and the on-premises storage component. In the remainder of this blog, I’ll share a few guidelines and differentiators that buyers should look at when evaluating this on-premises storage component.
The first thing to note: CSPs’ on-premises storage components vary widely in architecture, functionality, cloud integration and form factors. There are huge differences between what are called cloud storage gateways and cloud-integrated storage. Gateways provide an access point for on-premises-to-cloud data transfers, whereas cloud-integrated storage is an automated solution to some of IT’s biggest problems, such as managing data growth, backup and recovery, archiving data and disaster preparation.
Gateways – whether offered by cloud providers or standalone vendors – serve two basic functions: 1) caching most frequently used data on-premises, and 2) translating from local to cloud storage protocols and data formats. Some gateways are configured to serve as a proxy in front of existing storage infrastructure (DAS/NAS/SAN) for the sole purpose of copying data to the cloud, while risking the performance, reliability and availability levels of on-premises data access.
Cloud-integrated storage (CiS), like Microsoft’s recent StorSimple acquisition, is a full-featured on-premises storage system that integrates with the cloud (i.e. Windows Azure Storage). Enterprises can create a complete hybrid cloud storage architecture where they store their data locally and protect it with snapshots on-premises and in the cloud, and where dormant data can be seamlessly tiered to the cloud to make room for new data locally. This way, CiS gives IT an automated “safety valve” to prevent running out of disk capacity, while also providing rapid disaster recovery via cloud snapshots.
Let’s go a little deeper into this comparison of gateways vs. CiS, and what each solution provides. A key buying trigger for many IT infrastructure teams is storage infrastructure sprawl – too many devices for data management. Gateways do eliminate the need for tape backups. But what they don’t do is also eliminate the need for related backup software + support licenses, as well as the even bigger cost + time requirement of primary storage as data keeps growing at 50-60%+ per year.
The next issue is complexity. Cloud storage is supposed to *reduce* the complexity of storage infrastructure (along with overall costs). Yet a hybrid cloud storage solution built with software gateways can actually *increase* infrastructure complexity at even a modest degree of scale. That’s due to the limitations of software-based gateways with volume sizes (e.g. some maxed at 1TB/stored volume), the requirement for virtual machines (VMs) to host the gateways, limits on gateways per VM (e.g. 12 gateways per VM), lack of integration w/ backup software, etc.
Lastly, buyers should also carefully examine the reliability of different hybrid cloud storage solutions and their on-premises components. Is the on-premises component an enterprise-level offering, with key storage features like full hardware redundancy, local snapshot capability, non-disruptive upgrades, etc. – or is it software-only and itself reliant on virtual machines, underlying servers and other layers of potential failure?
Net-net: as you migrate your traditional storage infrastructure to integrate with the cloud – most likely a hybrid cloud storage architecture – look closely at the on-premises system that will be deployed and make sure it fully enables both the savings and data protection goals you have for your cloud storage project + strategy.
If you are involved with managing your company’s storage infrastructure, you might be tired of hearing about how your company can use IaaS to improve software development. It might sound promising, but as a storage person, IaaS won’t help you solve your worst storage problems, such as backup and data growth.
It’s probably not clear how enterprise cloud storage, like Windows Azure Storage, with its longer-than-local latencies and less-than-local bandwidth can be used to manage storage. After all, storage management typically involves transferring a lot of data in as short a period of time as possible. It’s clear that if enterprise cloud storage is going to help solve your data center storage problems, a number of things in the equation need to change. But what would those things be?
For starters, there has to be a way to lighten the workload of daily data protection so you are uploading less data. Another necessity is to make cloud storage available to systems and applications in a way that aligns better with its performance characteristics. This means finding ways to integrate enterprise cloud storage as something other than a long-distance storage container on the other side of a “cloud chasm” the way cloud gateway products do. Two ideas for reducing the volume of daily data uploads are to work only with changed data (also called deltas) and to use data reduction technologies like deduplication and compression. Limiting uploads to deltas can work for backup, but is problematic on the restore side if you have to download hundreds or even thousands of virtual backup tapes to achieve a full restore. Restores are always much more difficult than backups due to the many-to-one relationship of the media involved, where many tapes are used and far more data is processed than necessary to create a final restored image. Data reduction can certainly help, but these techniques are only effective up to the point where the time needed to upload the reduced data exceeds the backup window. So lightening the workload generates incremental benefits, but it is only effective up to a point.
Sometimes it helps to look at things from the other end of the telescope. So instead of thinking about longer latencies, think about how SSDs are being used in the hybrid storage model (not to be confused with hybrid cloud) where the most active data is stored on SSDs and the rest of the data is stored on rotating disks. Now add enterprise cloud storage to the mix and consider using it for the opposite end of the activity spectrum – storing dormant, unstructured data. Most companies have a large amount of this stuff, filling up their storage arrays, getting backed up unnecessarily and lengthening recovery times during restores. What would happen if this dormant data were no longer on-premises and didn’t need to be backed up any longer? Offloading dormant data to enterprise cloud storage lightens the backup load and helps you deal with data growth. It’s not enough by itself, but it’s a big step in the right direction.
Another assumption that needs to be challenged is that backup is the only technology that can protect data from a disaster. It’s the best choice we’ve had, but that doesn’t mean something new could be better. For instance, an alternative to backup is snapshot technology, which is widely used to periodically capture deltas and is much faster and easier to use for restoring data. The fatal shortcoming of snapshots has always been that they reside on the array alongside live data - and if the array fails or is destroyed the snapshots will be lost too. For that reason, on-premises snapshots are inadequate for disaster protection.
But what if on-premises storage could take daily snapshots and upload them to enterprise cloud storage and what if those cloud snapshots could be mounted the same as on-array snapshots for restoring data? This certainly satisfies the off-site requirements for disaster recovery protection and is a scenario where uploading deltas every day can be very successful. All that’s needed is a way to know which files would need to be downloaded for a full restore.
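A rough sketch of what that could look like: a metadata index that maps each file to the cloud objects that contain it, so a restore downloads only what is needed. The structures and names below are illustrative assumptions, not the actual CiS metadata format.

```python
from typing import Callable, Dict, List

def restore_file(path: str,
                 snapshot_index: Dict[str, List[str]],
                 download_chunk: Callable[[str], bytes]) -> bytes:
    """Restore one file from a cloud snapshot by fetching only the chunks
    that belong to it, instead of downloading a full backup image.

    snapshot_index maps file paths to ordered lists of chunk fingerprints,
    and download_chunk fetches a single deduplicated chunk from cloud
    storage; both are hypothetical stand-ins for the appliance's metadata system.
    """
    fingerprints = snapshot_index[path]               # which cloud objects hold this file
    return b"".join(download_chunk(fp) for fp in fingerprints)
```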
This is what Hybrid Cloud Storage from Microsoft is all about. It takes the Cloud-integrated Storage technology acquired with StorSimple and combines it with Windows Azure Storage. It puts enterprise cloud storage technology in your data center, where it filters dormant data and uploads it to the cloud, as well as creating daily snapshots that are also uploaded to the cloud. That’s a whole different approach to managing backup and data growth. The cloud is not a disk drive “over there” somewhere; it is right next to you, helping to solve your most vexing storage problems.
You might be thinking “how do I locate data after it has been uploaded to the cloud and how do I mount and restore it?” The answer is metadata, a topic that will be discussed in my next blog post.
I just finished reading a good post on Wikibon by the "other" Scott Lowe, where he discusses the differences between vSphere and Hyper-V and what the adoption rate of Hyper-V will likely be. He delves into the cost comparisons being made by both VMware and Microsoft as well as broadly touching on the differences in their management approaches. I encourage people to read his article - it's not that long - but the takeaway is that he thinks Hyper-V is catching up to vSphere in functionality, that vSphere is better suited for enterprise installations and that pricing pressure will impact the margins of both companies.
Smart CIOs and IT leaders will try to find ways to run both hypervisors in their data centers. For starters, the only way to future-proof your data center against vendor lock-in is to have skills in competing technologies that can replace each other. Moreover, shrewd negotiators know the most powerful word in their vocabulary is "no". But no doesn't mean no if you can't actually implement the changes you decide to make, and that takes commitment, which has costs associated with it.
Freedom from lock-in isn't free, and neither is negotiating leverage. It's prudent to avoid painting your organization into a corner and getting stuck in a place where the only way out is long and expensive.
As the saying goes, "when you’re up to your neck in alligators, it’s easy to forget that the initial objective was to drain the swamp."
IT professionals are plenty familiar with compelling interruptions that need to be dealt with quickly but keep them from getting high-priority work done. That's one of the reasons IT leaders are looking for SaaS solutions - to decrease the potential for technology alligators to delay the projects they are being measured on.
A big advantage of SaaS is offloading the infrastructure needed to run applications in-house. In a traditional data center, application workload changes may require infrastructure changes that have secondary impacts that disrupt productivity on other applications and systems. SaaS circumvents both primary and secondary infrastructure impacts by isolating the application and its infrastructure on an external site. The SaaS provider keeps customers up to date on the newest capabilities and also manages the bug fixes, workloads and all the infrastructure elements needed. That leaves a lot more time for the IT team to focus on delivering the technology solutions that business leaders want. SaaS is a terrific solution, but there are many applications - especially line-of-business applications - that are not available or do not otherwise fit the SaaS model. The whole concept of Hybrid IT is based on this reality. SaaS works great for some things but not others.
Hybrid Cloud Storage is similar to SaaS in several ways. It is an infrastructure enabler that transfers time-consuming management tasks, processes and their secondary impacts to the cloud. Like SaaS, there are some applications Hybrid Cloud Storage is not a good solution for, such as low-latency transaction processing, but there are many where it works extremely well.
Managing storage capacity growth is a great example of a time-consuming storage management process that data center managers know well. As application workloads scale, the available storage capacity is consumed, threatening the ability to meet service levels. Storage administration is largely an exercise in planning and implementing the response to this endless cycle. Sometimes it involves upgrading the capacity in arrays and re-balancing workloads, sometimes it involves migrating workloads with virtualization technologies, sometimes it involves acquiring additional arrays and sometimes it involves all of these. My friends at 3PAR used to call this Storage Tetris. The process becomes increasingly difficult over time until there are few options left but to acquire additional arrays, along with the associated costs of data center footprint/power and data protection they impose. When you couple this dynamic with the limited life-span of most storage products, it's easy to see why storage consumes such a large part of the IT team's attention and budget.
Hybrid Cloud Storage circumvents this capacity-growth cycle by uploading dormant data to the cloud and freeing capacity on-premises for new, active data and workloads. This is a fundamentally different approach than traditional storage, where data consumes primary storage capacity regardless of whether it is being used or not. Dormant, unstructured data is very difficult to manage with traditional storage, but is automatically and transparently managed by Hybrid Cloud Storage. At some point the cloud storage containers used with Hybrid Cloud Storage run out of capacity too, but they have scaling limits that are many times larger than on-premises arrays. That means capacity management in the cloud is done far less frequently and, because it happens in the cloud, it does not have secondary impacts on other applications and systems on-premises. Tetris is much easier when there is a lot of space to work with and when you don't have to worry about upsetting other workloads in the mix.
Putting data on Hybrid Cloud Storage also transfers the associated costs of footprint and power to the cloud. For some corporate data centers, this isn't that big a deal, but for others it's critical to get more done with the facility limitations they have. Also, when you consider the additional data protection hardware that is typically needed to backup the data that is stored on new arrays, the ability to move backup data to the cloud is also an important secondary benefit of Hybrid Cloud Storage (as opposed to being a secondary problem).
I will have other blog posts soon that discuss the significant changes that Hybrid Cloud Storage brings to data protection in much greater detail.
The excitement around software-defined networking (SDN) this year has had a domino effect on the rest of the IT infrastructure industry and spawned many discussions about the future of the industry, including the implications for companies like Cisco, EMC and VMware. A couple of days ago, Christos Karamanolis from VMware published a blog post saying he thinks 2013 will be the year for software-defined storage (SDS). That got me thinking.
I don't know about 2013 being the year for SDS, but I suspect 2013 will be the year of SDN and SDS hype and confusion. It's bad enough having one marketing battle royale (SDN), but having two of them at the same time will drive many of us crazy. I shudder to think where the whole thing will stop - SD Zombies?
Here's how I see things shaping up for SDS next year:
On December 5th Microsoft announced a pricing reduction for Windows Azure Storage. One of the more noticeable aspects of the announcement was the breakdown of storage costs between Geo Replicated Storage and Locally Redundant Storage. To summarize, Geo Replicated Storage costs approximately 28% more for the additional service of replicating your data to a remote secondary Azure data center. When you understand the details of how Azure Storage works, this means there are six copies of data stored - three locally and three remotely. This is an extremely robust design where an awful lot has to go wrong to lose data, and it is part of the reason why Windows Azure Storage has such an excellent track record.
If you are considering a Hybrid Cloud Storage solution using StorSimple Cloud-integrated Storage (CiS) and Windows Azure Storage, my advice is that you plan to use Geo Replicated Storage. The additional 28% price premium for Geo Replication is a small amount to pay for remote replication with automated failover. If you compare the cost for Azure's Geo Replication with other forms of data replication that conservatively double the cost of storage, it is an incredible bargain.
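To make the comparison concrete, here is the arithmetic with an illustrative, hypothetical local price; the only figures taken from the discussion above are the roughly 28% geo-replication premium and the assumption that conventional replication at least doubles storage cost.

```python
# Illustrative numbers only; check current Azure pricing for real figures.
lrs_cost_per_gb = 0.095      # hypothetical Locally Redundant Storage price, $/GB/month
geo_premium = 0.28           # ~28% premium cited in the pricing announcement

grs_cost_per_gb = lrs_cost_per_gb * (1 + geo_premium)   # geo-replicated price per GB
conventional = lrs_cost_per_gb * 2                       # replication that "doubles the cost"

print(f"GRS: ${grs_cost_per_gb:.3f}/GB/month vs conventional replication: ${conventional:.3f}/GB/month")
```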
So here is how the connections and data flows work with a CiS hybrid cloud storage deployment. Thanks to Avkash Chauhan for posting about this previously on his blog - the graphic below came from there.
When data is uploaded by the on-premises CiS solution to Azure Storage, three copies of the data are written to separate domains within the primary data center and an acknowledgement is sent to the CiS on-premises. Some time afterwards, which can be several minutes later, the data is replicated to the secondary data center and another three copies are written to three different domains there. This is done transparently in the background, without involving the CiS system in any way.
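Sketched in Python, the flow looks roughly like this. The function and variable names are invented for illustration, but the ordering - three local copies, acknowledgement, then asynchronous replication of three more copies - follows the description above.

```python
import threading

def geo_replicated_write(data: bytes, primary: list, secondary: list) -> str:
    """Illustrative (not actual Azure) write path for geo-replicated storage.

    'primary' and 'secondary' are lists standing in for storage domains in the
    two data centers. The upload is acknowledged to CiS as soon as three copies
    exist in the primary data center; the secondary copies are written later,
    in the background, without involving the appliance.
    """
    for domain in primary[:3]:             # three copies across separate domains
        domain.append(data)
    ack = "acknowledged to CiS"            # durable from the appliance's point of view

    def replicate_later():                 # asynchronous geo-replication, Azure-side only
        for domain in secondary[:3]:
            domain.append(data)

    threading.Timer(1.0, replicate_later).start()   # "several minutes" in reality
    return ack
```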
With CiS-powered Hybrid Cloud Storage, uploads to Windows Azure Storage occur when nightly CiS Cloud Snapshots are taken, but they also happen when inactive data is tiered to Azure Storage. Under normal conditions, the amount of traffic between CiS on-premises and Azure Storage is negligible. Exceptions occur during the initial Cloud Snapshot for a volume, when the entire volume's data is snapped, or during DR scenarios, when a lot of data may need to be downloaded from Azure Storage to CiS. If you are concerned about the amount of bandwidth that might be consumed by Hybrid Cloud Storage traffic, CiS provides scheduled bandwidth throttling. Many of our customers use it to ensure they have all the bandwidth they need for other production applications. Geo Replication between Azure data centers does not consume bandwidth between the customer site and the primary Azure data center, so there is no need to avoid Geo Replication in order to conserve bandwidth.
When you think about the economics of cloud storage, make sure to include the incredible value of Geo Replication.
The term "hybrid cloud" has been defined many different ways. At Microsoft, hybrid cloud refers to data center functionality that spans on-premises and cloud service boundaries. At least that's how I understand it now, after having been part of the company for a few weeks. To clarify my perspective, my appreciation of the cloud is slotted narrowly into IaaS functionality and the things that are likely to appeal to data center types. In this context, hybrid cloud services will augment the things that customers are already doing on-premises, with the cloud offloading tasks and workloads that are under-served on-premises. Where data center operations are concerned, the cloud represents a new kind of enterprise plug-in. If you think this sounds like poppycock, keep reading, because I'll tell you how it is already being used this way every day by a growing number of companies.
One of the misunderstandings people have about enterprise cloud storage is that it must be similar to consumer file sharing apps like Dropbox, Box or Microsoft's own SkyDrive. To begin with, much of enterprise storage works with block protocols, and if you are going to offload enterprise storage you need to provide block-level functionality. As for file sharing, data center managers are not looking to share corporate data as much as they are looking to secure it. BTW, I fully expect to get comments here about the great virtues of file sharing for enterprises. Rest assured, there are probably few companies who use the cloud for file sharing as much as Microsoft does internally with SkyDrive and SharePoint, but that's not what I'm discussing here in this post.
StorSimple developed technology called Cloud-integrated Storage (CiS) that is implemented as a SAN appliance that acts like a hybrid cloud storage plug-in for enterprise storage. CiS packages and indexes blocks along with accompanying metadata and stores them in the cloud. These block packages may be generated by snapshots or as archives that need to be stored for an extended period of time or as dormant unstructured data that is no longer being accessed and can be vacated to reclaim on-premises storage capacity. Different customers use this technology every day because their backup systems are under-serving them, their archiving processes are too cumbersome and they don't want to use tier 1 storage for data that is no longer active. The thing that is a little bit hard for some to understand about CiS is that the data transfers to the cloud are all automated, requiring no effort on the part of system and storage administrators.
The other key to understanding the plug-in nature of CiS is that the ability to access and download data from cloud storage is also transparent because data in the cloud is either viewable in an online file system or mountable as a snapshot the same way local snapshots are mounted for restoring older versions of files. I'll explain how that all works in future blog posts, but for now I'll say it's a function of the metadata system in CiS.
Cloud-integrated Storage really is different. It breaks the mold for enterprise storage by seamlessly integrating on-premises enterprise storage with Windows Azure Storage services and the incredibly valuable Geo Redundant Storage it provides. CiS doesn't do everything you might want it to, but the things it does well are revolutionary.
Note: this blog is the short version of a white paper that was published by StorSimple. Click this link to see the PDF version of the original white paper.