Cloud Power IT Insights

  • Enabling Supercomputing in the Cloud

    One of the capabilities delivered by true cloud computing solutions is the power to dynamically scale up and down based on the demands that applications and users place on the service.  This is also one of the key benefits businesses can realize when investing in cloud computing as an infrastructure and platform choice: it minimizes the time and costs tied up in idle IT resources that may only be called upon during an occasional usage spike.

     

    Related to the scalability of cloud platforms, such as Windows Azure, there’s an interesting article by ZDNet’s Mary Jo Foley, who recently spoke with Microsoft’s Bill Hilf (General Manager of the Technical Computing Group), touching on the topic of providing supercomputing resources which can scale to manage and analyze enormous amounts of data.  In the article Hilf talks about Windows Azure eventually including high-performance computing projects like Microsoft’s Dryad, a set of components developed by Microsoft Research which simplifies the running of complex data-analysis applications.  Bill also posted an interesting, related piece late last year here on the big challenges that technical computing in the cloud can help address for the scientific research, finance and manufacturing segments.

     

    While the ongoing evolution of cloud computing will continue to advance its capabilities rapidly in the near term, there are certainly businesses already benefitting from adopting cloud solutions today to address their Infrastructure (IaaS), Platform (PaaS) and Software (SaaS) as a Service needs, such as T-Mobile and Xerox.  If you’re interested in finding out what cloud computing can offer your business and how other companies are already using Microsoft’s enterprise cloud offerings, please check out our Cloud Power site here.

     

    Thanks – larry

  • Thought Leaders in the Cloud: Talking with Reuven Cohen, Founder and CTO of Enomaly

    Reuven Cohen is the founder and CTO of Toronto-based Enomaly Inc. Founded in 2004, Enomaly develops cloud computing products and solutions, with a focus on service providers. The company's products include Enomaly ECP, a complete revenue-generating cloud platform that enables telcos and hosting providers to deliver infrastructure-on-demand (IaaS) cloud computing services to their customers. Reuven is also the founder of CloudCamp, which takes place in cities worldwide, and of the Cloud Interoperability Forum. He has consulted with the US, UK, Canadian, and Japanese governments on their cloud strategies.

    In this interview, we cover:

    -Cloud spot pricing.

    -The places for commoditization and differentiation in cloud computing.

    -Why datacenter location matters more than ever. People think that with cloud it doesn't, but the opposite is true: the cloud will allow ultra-localization.

    -The emergence of many spot markets for cloud, some private and some public.

    Robert Duffner: Could you take a moment to introduce yourself?

    Reuven Cohen: Sure. I'm the co-founder and CTO at Enomaly Inc., here in Toronto, Ontario. I'm also the instigator of several other cloud-related activities, including Cloud Camp, which is a series of advocacy events held around the globe, in something like 150 locations at this point.

    In very broad terms, I am involved with advocating the use and adoption of cloud computing, and I've been very involved in the cloud world for the last several years. The first version of our software was created in 2004, which predates things like Amazon EC2 by a number of years. We also helped define the U.S. Federal definition for cloud computing.

    Most recently, we've launched SpotCloud, a spot market for cloud computing.

    Robert: You've done a lot of thinking about cloud compute as a commodity. What are your current thoughts on this subject?

    Reuven: Most providers of cloud computing resources don't want to be treated as commodity brokers. It's important to make the distinction that, while there is an opportunity around applying a sort of commodity economic model, that doesn't mean that all offerings are necessarily the same.

    I've been trying to create a method in which you can commoditize certain computing resources, while not commoditizing the providers of those resources. That's the challenge we face.

    Robert: Do you think that offerings in the cloud market are inherently something that's identical, no matter who produces them?

    Reuven: The answer is yes and no. We fragment the cloud computing market into a few different pieces. First, at the top, there's software as a service, and to take advantage of software as a service offerings, you need to have a platform. So typically, you're building scalable systems on a platform provided as a service.

    Underneath that, you need the actual interface to things like storage, networking, and CPU. Finally, you need an infrastructure that can be provided in an autonomous, easily managed way, so you need an infrastructure as a service. Those are, very generally speaking, the three main parts.

    One key issue is the lack of standardization in any of those three parts at this point, although there are similarities. I have focused on the bottom-most layer, and my company provides infrastructure-as-a-service software to hosting providers.

    The most basic capabilities required by all infrastructure-as-a-service providers are the ability to start a machine or a virtual machine, the ability to stop a virtual machine, and the ability to handle networking requirements.

    When we started to look at commoditizing these functions, we focused on the idea that any infrastructure-as-a-service provider is going to have those three basic requirements, and we created a marketplace that commoditizes those three things.
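    Those three commoditized operations can be sketched as a minimal provider interface. This is purely an illustration of the idea described above, not Enomaly's actual API; all names here are hypothetical, and the in-memory provider exists only so the interface can be exercised.

```python
from abc import ABC, abstractmethod
import itertools


class IaaSProvider(ABC):
    """The three capabilities common to all IaaS providers, per the
    interview: start a machine, stop a machine, and handle networking."""

    @abstractmethod
    def start_machine(self, image_id: str) -> str:
        """Boot a (virtual) machine from a disk image; return an instance id."""

    @abstractmethod
    def stop_machine(self, instance_id: str) -> None:
        """Stop a running machine."""

    @abstractmethod
    def configure_network(self, instance_id: str) -> str:
        """Attach networking and return the machine's reachable address."""


class FakeProvider(IaaSProvider):
    """In-memory stand-in used only to illustrate the interface."""

    def __init__(self):
        self._ids = itertools.count(1)
        self.running = {}  # instance_id -> image_id

    def start_machine(self, image_id):
        instance_id = f"vm-{next(self._ids)}"
        self.running[instance_id] = image_id
        return instance_id

    def stop_machine(self, instance_id):
        del self.running[instance_id]

    def configure_network(self, instance_id):
        # Hand out a fake private address derived from the instance id.
        return f"10.0.0.{instance_id.split('-')[1]}"
```

    Any hosting provider that can fill in these three operations can, in principle, participate in a marketplace built on top of them.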

    So we provide the ability to find a cloud provider anywhere in the world based on price and geographic location. That location could be as broad as a whole continent or as narrow as a specific city. For example, we could choose Seoul and deliver a raw disk image that could run on any virtual environment or even actually potentially on a physical environment. We commoditized it based on those criteria.

    Robert: You have also said, "To avoid directly competing with regular retail sales of cloud services, spot cloud uses an opaque sales model." Can you take a minute to explain what that means?

    Reuven: To understand that, I should probably start with a bit of background on how the spot cloud product came to be. As I mentioned previously, we were one of the first infrastructure-as-a-service companies out there. We created the first version in 2004 and adapted over the years based on the emergence of cloud computing.

    Our current customers are generally public cloud-service providers, and most of them are outside of North America. Many of them have just built clouds and hoped that customers would adopt their platforms, but the reality is that they actually need to market their platforms, services, and so forth.

    That created a dilemma for us, in the sense that, in order to be successful, we need our customers to grow, and grow quickly. In order to do that, they need to increase their utilization levels, and that doesn't always happen. We were seeing fluctuations in utilization levels based on factors such as time of day, how successfully they were selling services in various parts of the world, and so on.

    We needed a way to help our customers increase their utilization rates, making them successful, which ultimately benefits us as well, of course. We had to avoid cannibalizing their retail sales, and in looking at various models, one that really jumped out at us was the concept of an opaque market.

    In this model, the buyer specifies what they want to buy, in terms of a quality rating, an amount of RAM, a number of processors, and so on, but they don't actually know who they're buying from until after they've agreed to buy.

    And that ability to avoid cannibalizing your existing retail sales makes the model attractive to service providers, who are understandably not keen to discount something they could otherwise sell at a higher margin.
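    The opaque matching Reuven describes — commit on specs and price, learn the seller only afterward — can be sketched as follows. This is a hypothetical illustration of the concept, not SpotCloud's actual matching logic; all field names and ratings here are assumptions.

```python
from dataclasses import dataclass


@dataclass
class Offer:
    provider: str  # withheld from the buyer until after purchase
    region: str
    ram_gb: int
    cpus: int
    quality: int   # e.g. a 1-5 provider quality rating
    price: float   # price per instance-hour


def match_opaque(offers, region, ram_gb, cpus, min_quality, max_price):
    """Return the cheapest qualifying offer WITHOUT the provider name,
    mimicking an opaque spot market: the buyer commits on quality,
    specs, and price, not on brand."""
    qualifying = [
        o for o in offers
        if o.region == region
        and o.ram_gb >= ram_gb
        and o.cpus >= cpus
        and o.quality >= min_quality
        and o.price <= max_price
    ]
    if not qualifying:
        return None
    best = min(qualifying, key=lambda o: o.price)
    # Identity is withheld; only specs and price are quoted back.
    return {"region": best.region, "ram_gb": best.ram_gb,
            "cpus": best.cpus, "quality": best.quality,
            "price": best.price}
```

    Because the quote never names the seller, a provider's discounted excess capacity never competes head-to-head with its own branded retail offering.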

    Robert: You recently tweeted that it looks like the top one percent of spot cloud buyers represent 99 percent of the capacity purchased. Can you expand on that?

    Reuven: I freely admit that I'm learning a lot in this whole process. First of all, I never expected the amount of interest in this platform that we've received. It's been astounding. We've had so much interest from both the buy and sell sides that it's been spectacular.

    And in a sense, we're the first to ever really try this opaque spot market for excess compute capacity, so we don't really have a lot to base the actual business model on. We're learning as we go, and we're also getting a lot of market research done in terms of people who are signing up for it on both the buy and sell sides.

    There appear to be some really interesting use cases on the buy side. People who want to buy capacity often need very large amounts of testing for a very short time, but they're very concerned about where that testing happens.

    Consider the case where I need to test a platform or application that I am going to be launching next week. If I get a million users from a particular city, let's say Paris, I want to know how that application performs for a million users from Paris. Although we have a large number of people signing up on the buy side, there are likely to be a few in particular who utilize significantly more capacity than the others.

    Robert: So infrastructure tends to gravitate toward being a commodity, whereas solutions are differentiated specialty items. Platform as a service is probably somewhere in between. Where do you see platform and software as a service fitting in the future where a significant amount of raw compute is available as a commodity?

    Reuven: I was talking to one of our partners a while back, and I asked him what he thinks the future holds for infrastructure as a service. His answer was, "Platform as a service." The value is in the implementation of the infrastructure, and the infrastructure itself becomes a commodity, because that's what it's there to do.

    Where some people look for globalization, I see the opposite. I look at ultra-localization, or regionalization, which is the ability to adapt to the constraints and fluctuations on the ground in particular places. If I am having a lot of sales success in Tokyo, it makes little sense for me to scale my infrastructure in London. It makes sense for me to scale my infrastructure where my customers are.

    If my customers have a better experience on my platform or my application, I'm going to have a happier customer, better sales, more return users, and a more successful company. And that's the opportunity that this cloud of clouds, or this regionalization of compute resources, is really enabling.

    The reality of today is that we've got one-size-fits-all clouds, but that's not where we're going. We can't just blindly scale for everyone, anywhere.

    Robert: When I'm out meeting customers, we talk about infrastructure, platform, and software as a service. I'm seeing the lines blurring between infrastructure and platform as a service. Heroku, which offers a platform as a service, but built on Amazon infrastructure, is a really good example of that.

    Reuven: That's true, and the fact that Microsoft has been building data centers around the globe is a good indication of where things are moving. We're moving to a network-based world where unfortunately, the desktop is less important than the app. The Internet is the platform, and the location matters. I think you're right that there is a blurring of where the underlying infrastructure is and where the platform is.

    Ultimately, I think that when we talk about cloud computing, we're really talking about the Internet as the operating environment.

    Robert: Last February, James Urquhart wrote a post called "Cloud Computing and Commodity," where he states that commoditization will happen at a granular level. He says, "technologies will standardize various components and features, but services themselves can and will be differentiated and often add significant value over those base capabilities." What are your thoughts on that?

    Reuven: I completely agree. I think he's saying that it's the application that matters, not the infrastructure. If you look at the companies that are most successful today, they are the ones who are able to adapt quickly. They are the ones who are able to take mountains of data and transform them into information, because it's not the data that matters, it's the information. That transformation of data requires an adequate amount of computing resources to actually work on that transformation.

    The cloud provides the basic engine that enables anyone with a credit card to compete with the largest companies on the globe. Anyone with a really good idea can scale quickly and efficiently, and that is revolutionary in a lot of ways.

    Robert: Some data centers meet certain levels of regulatory compliance, and others don't. Because some applications are governed by very specific regulations, I run into a lot of examples where customers are fundamentally prohibited from putting certain kinds of data in a public cloud. How does that factor into a spot market?

    Reuven: As I mentioned before, we're learning as we go here, and a lot of the specifications for clouds being built are the result of requests from end customers. One interesting possibility is a sort of private exchange, or private spot market, for companies that are all governed by similar regulatory controls.

    Those requirements could be based on an industry vertical, or they could be based on geography, for example. The European Union has introduced requirements that compute capacity on exchanges must be located within the EU.

    I think we're going to see rapid evolution in the area of specialized requirements such as these, and that's going to be quite an exciting part of this new opportunity.

    Robert: One of our major data centers is located in San Antonio, and one of the customers there is the taxing authority for their country, i.e., the equivalent of our IRS. Starting this year, they're going to automate the electronic storage of invoices for tax purposes. So, for example, if you go to Wal-Mart and purchase office supplies, all those receipts get stored in the cloud. It blows me away that they're storing them in the United States.

    As far as I know, they are the only government that's doing this, but I wonder what it could mean, as a case study to other governments. They might see the need to look at getting past regulations to deliver innovative solutions to their citizens.

    Reuven: It shows that their government is treating their tax IT system the same way a business would. So when you look at the opportunities in business, you'll see a French company hosting their data in the United States, not because of where the data sits but because they can get the best price, the best bandwidth, and a deal that works best for them as a business.

    The problem for a lot of governments is that they're constrained by the regulatory controls of their own country. Particularly in countries that may be referred to as having fast-growing or emerging economies, they don't have that infrastructure in place. They've got one of two options: either to build infrastructure at great cost or go find the infrastructure somewhere else and hopefully not put their mission-critical data in there.

    In Canada, we actually have the exact opposite problem, although we do have a much more developed infrastructure. Basically the Canadian government says you can't host Canadian government data, websites, and so on outside the Canadian geography.

    That's also an opportunity for cloud providers, and that's why Microsoft is spending billions of dollars on data centers all over the world. You might build cloud data points in Mexico, Japan, and Korea so you can serve those local populations better.

    Robert: Do you see certain aspects of cloud, such as storage for example being commoditized faster than things like compute?

    Reuven: Well, storage has always been more easily commoditized because of the file system. The big differentiation we have today is the object-based approach versus that traditional approach that I always call the POSIX style.

    They both solve different problems, but POSIX is a perfect example of commoditization: we've got a general, standard way to interact with a file system. It certainly is easier in some regards, because it is lower in the stack, and I think the lower in the stack you get, the easier it is to commoditize. The object-based approach, by contrast, was popularized by things like Amazon S3.
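    The distinction between the two storage models can be illustrated with a small sketch: POSIX-style access works through a hierarchy of paths and open file handles, while an object store exposes a flat namespace of keys holding whole blobs. A plain dictionary stands in for the object store here; the `put`/`get` helpers are illustrative, not any real S3 API.

```python
import os
import tempfile

# POSIX style: a hierarchy of directories and files, addressed by
# path, with open handles and in-place reads and writes.
root = tempfile.mkdtemp()
path = os.path.join(root, "reports", "q1.txt")
os.makedirs(os.path.dirname(path))
with open(path, "w") as f:
    f.write("posix data")

# Object style (S3-like): a flat namespace of (bucket, key) pairs
# mapping to whole blobs; you PUT and GET entire objects rather than
# seeking within them.
object_store = {}

def put(bucket, key, data):
    """Whole-object write."""
    object_store[(bucket, key)] = data

def get(bucket, key):
    """Whole-object read."""
    return object_store[(bucket, key)]

put("reports", "q1.txt", b"object data")
```

    The decades-old, standardized POSIX interface is what makes file storage easy to commoditize; the object model trades that familiarity for simpler scaling.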

    Robert: Obviously, it's a lot harder to take existing applications and move them to the cloud versus architecting a brand new application for the cloud. At the same time, the specific operating system and type of hardware that you're running are becoming less important.

    How do you think that's going to impact how development languages are going to play into cloud application development? I'm anxious to see if that's going to change the adoption of certain languages that are used to write cloud applications. Do you have any thoughts on that?

    Reuven: That inevitably leads to the question of lock in, and specifically where and how you are going to be locked in. The answer isn't a simple one. Regardless of the platform, at some point you're always going to have to choose a programming language, a development environment, and a number of other things.

    The question is what that means down the road, when the technologies inevitably evolve. Generally, my rule of thumb is that the value is going to be in your information and how easily and readily you can work with that information.

    Developers making those sorts of decisions must always ask themselves how easily they can take their information, move it, and work on it somewhere else. Where possible, you should develop in such a way that you don't care about any particular machine, whether it's physical or virtual. You should build applications to consider that the underlying architecture may come and go.

    Likewise, you should build fault tolerance into applications that takes into consideration that you may lose a node, or even part of the world. That shouldn't affect the overall availability of the application. The Internet should be applied as an architectural model, and the cloud is a metaphor for the Internet in a lot of ways.
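    The "nodes may come and go" advice above can be sketched as a simple failover loop: the application issues the same request against a list of interchangeable replicas and moves on when one is unreachable. This is a minimal illustration of the principle, not a production pattern; the node functions are hypothetical stand-ins for real service endpoints.

```python
def call_with_failover(replicas, request):
    """Try the same request against interchangeable nodes, moving on
    when one fails. The application never depends on any particular
    machine (physical or virtual) being alive."""
    last_error = None
    for node in replicas:
        try:
            return node(request)
        except ConnectionError as err:
            last_error = err  # node (or even a whole region) lost; try the next
    raise RuntimeError("all replicas unavailable") from last_error


# Illustration: one node is down, another answers.
def dead_node(request):
    raise ConnectionError("node unreachable")

def live_node(request):
    return f"handled {request}"
```

    Losing `dead_node` does not affect overall availability, which is exactly the property the interview describes.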

    Robert: Where are you seeing the most uptake or developer interest with regard to cloud apps; is it Ruby, Java, Python?

    Reuven: Programming has always been a personal relationship, and it goes through phases. Ultimately, most programming languages do the same thing, just in slightly different ways. For example, we're a big Python shop. I like Python as a CTO because I can read it and understand it. I don't do much programming anymore, and so I just look at it occasionally.

    I don't think you should be constrained by the popularity of any given particular programming language. You should choose what works best with your brain.

    Robert: Do you have any closing thoughts?

    Reuven: I think we are on the verge of the transformation to really treating computing resources like a commodity, the same way we treat energy. Eventually, we will have the ability to do things like futures, derivatives, and buying and selling things that may not exist today.

    But you can't do that until you have a spot market, so the first step in this commoditization of computing is to be able to sell what is available right now, and that's the spot market. The next step is going to involve things like futures and derivatives. I think that that's going to happen in the short term, and there's a lot of interest in this from a whole variety of sectors.

    I believe that computing resources are going to be the next big commodity market. All the signs I see today are pointing in that general direction, so it's a pretty exciting place to be.

    Robert: Thanks, Reuven. This has been a great talk.

    Reuven: Thank you.


  • Hybrid Organization with Office 365 and SharePoint Online

    21apps is a Microsoft BizSpark Partner and a SharePoint business value, solutions and platform consultancy from the United Kingdom.  In this guest blog, Ant Clay, Chief Strategy Officer of 21apps, shares his excitement around the launch of Office 365, specifically on how SharePoint Online and Office 365 should be at the heart of your company's business value agenda.

    --

    "Microsoft Office 365 will be the Poster-Child platform for a new world of Hybrid Organisations."

     

    So what is this Hybrid Organisation that I speak of and why does Microsoft Office 365 have such great synergies with it?

    Like an ever increasing mix of start-ups, 21apps doesn't own any physical server infrastructure; actually we don't have a physical office either! Yet our success as a business, like many others, is built upon being fully engaged across our internal employees and externally with our clients, partners and alliances across the globe. We collaborate, undertake business development, market ourselves, manage our business affairs, communicate with our partners and delight our customers without the overhead of traditional physical constraints and financial investments.

    We're making a pragmatic investment in technology that delivers business value and organisational efficiencies to 21apps and our clients. We have made a significant shift in our mindset in order to compete in this world of increasing competition and economic pressures, and feel we are very much a Hybrid Organisation, as described in detail by Microsoft at www.microsoft.com/uk/about/hybrid-organisation.mspx. Go on, have a read and then come back here and read on!

    Based on this thinking, we are working closely with our clients in adapting their business models, technology investments and cultural shifts to accommodate the future. This future, for us, is based upon the poster-child platform, Microsoft Office 365.

    Until now, working effectively as a Hybrid Organisation, unleashing your employees from the constraints and costs of a physical office, took considerable cultural change, financial investment, infrastructure and time, and for the most part required a heap of disjointed technologies. But with Microsoft's investment in the cloud and the release of Microsoft Office 365, that vision seems poised to be delivered: freeing organisations from physical and organisational constraints both inside and outside the enterprise, and enabling business advantage for those willing to make the move, all for a very palatable financial investment.

     

    Organisational Shift

    So what has changed, where did the shift come from? We are in agreement with the "wisdom of the crowds" that work has switched from a physical place (the office) to an activity we personally are responsible for.

            Work is no longer somewhere we go to Monday through to Friday!

    There are a plethora of reasons for this cultural change; here are a few key contributing factors that we think have brought us to the brink of an office work revolution:

    * Economic Pressure - Forcing all organisations, large and small across all sectors to re-evaluate their investments in technology and strive to deliver clearer business value and a return on investment for their stakeholders

    * Cultural Shift - The "on-line world" has blurred the boundaries between personal and working life, and "Generation Z" has significantly forced the pace of cultural and technological change in the workplace.

    * Physical Constraints - Resources are limited, office space is at a premium, and more and more employees are questioning the value of a long commute to a physical place of work; so a new vision of working life has come to the fore, one that offers flexibility, trust, loyalty and respect, and where value is measured on outcomes rather than physical presence.

    With this emerging desire to deliver more value for less, adopt changing business models and embrace fundamental changes in our outlook on the working environment, Microsoft Office 365 arrives fully enabling the rich, integrated information worker functionality of Microsoft Office, Exchange, SharePoint and Lync through its cloud-based infrastructure and subscription models, and man, are we ready to embrace it!

     

    The Office 365 Value Proposition

    Microsoft Office 365 is surely a game changer! Let's first dissect the fundamentals a little bit:

    * It's in the Cloud - You get to have the core information worker functionality you need to do business, access anywhere, with your phone, desktop, laptop, browser and web connectivity

    * It's subscription based - You can invest in an appropriate level of technology for the employees you have and the roles they perform, scaling up as your business grows

    * You get familiar Office productivity tools - Microsoft Office on subscription, but only if you need it! And you have the power of Exchange, SharePoint and Lync all integrated into one platform

    * Always On, 365 x 24 x 7 - All of this rich business functionality is available and supported 365 days a year, across the globe, for your employees and for your partners, clients and external stakeholders.

    I don't see any competitive offerings that provide such a compelling level of choice from a deployment, functional and investment perspective.  We are definitely all in!

    How are we using it and what business value is it delivering? We're on the beta program and not all functionality and integration has been rolled out just yet. But our experience to date tells us that what we have in Office 365 gives us the tools we need at a price we can afford. With what's coming down the line early next year Office 365 will simply be awesome for us.

     

    Reduce your Bottom Line

    Most organisations have Microsoft Office client licenses; actually, most organisations probably have too many! One of the greatest innovations Microsoft has brought out with Office 365 is the ability to have a subscription license for its desktop products. For a small consulting organisation like ours, with a fluid workforce, the potential to scale our licensing for growth or long-term projects is fantastic; for larger organisations, the ability to license employees with the relevant level of technology (i.e. full package or kiosk worker) will make a real, tangible difference to the financial bottom line.

     

    Focus on your business, not your infrastructure:

    We've been using versions of hosted Exchange as 21apps' email solution of choice since we were founded. Why would we waste money on servers and all the hassle of managing them when we've got work to do?

    Having presence information baked into the email client via Lync, and the ability to move from email message to IM conversation to online meeting, will be fantastic for collaborating, engaging and consulting with all our clients, stakeholders and associates. The new administration interface makes it a great deal easier for us to loosely manage Exchange and easily create the distribution groups we use for sales activities and support email aliases.

    The full features of Office 365 Exchange exceed the needs of our small organisation but are perfect for most enterprises; we will definitely be looking to exploit the unified messaging features when they integrate with our mobile provider.

     

    Don't go to work, do your work:

    Across all the products within Office 365, access anywhere really drives end-user productivity. Microsoft SharePoint Online is most definitely our sweet spot, both for internal use and for delivering business value to our clients. Office 365's predecessor, BPOS-S, provided basic SharePoint capabilities, but wasn't quite on par with the on-premise version.

    With Office 365 and the new version of SharePoint Online, the game has changed significantly. Full MySite capabilities, Enterprise Search, rich Managed Metadata across site collections within a tenancy, Sandboxed Solutions and more mean that we can manage our important knowledge and processes effectively within the platform, and also gain significant functional capability to develop rich applications that support our consulting engagements.

    A fantastic new feature that we are really excited about is the ability to easily create a site collection level extranet environment!  This ability to quickly reach out to our clients and associates to share and collaborate with them in an ad-hoc manner will be of significant value to us, and is a key tenet of the Hybrid Organisation.

     

    Engage with your business world, not just your head office world:

    For us, instant messaging has always been via a mix of tools: MSN, Skype and the internal implementation of OCS (part of BPOS).  For most organisations, supporting that mix of tools and managing the appropriate interactions between employees, clients, associates and personal connections is not tenable.

    The instant messaging federation capabilities of Microsoft Lync Online mean that all of these communication channels can be consolidated into a single application, enabling seamless presence and communication with our partners, the people we work with, and a range of other instant messaging clients. Add to this the great Live Meeting capabilities being integrated into Lync Online, which will allow us to collaborate and deliver work in a joined-up way across the globe.

     

    Conclusion

    It's clear that the functionality and integration offered across Office, Lync Online, SharePoint Online and Exchange Online within the Microsoft Office 365 platform can fundamentally change the way you work within your organisation and the way that you do business with your clients.

    With the economic challenges we all face, global competition, and increases in regulation and governance, gaining a business advantage and delivering value and differentiation over your competitors is becoming an ever increasing challenge. Making the shift in working practices towards a Hybrid Organisational model, approaching work as an activity and not a place, and adopting integrated cloud-based productivity tools such as those within Microsoft Office 365 is the only way forward for organisations to be successful.

    If you'd like to learn more about the value 21apps can provide to your organisation, visit our site (www.21apps.com), read our blog (www.21apps.com/blog) or follow us on Twitter @21apps.

     

    Cheers,

    Ant Clay

    Chief Strategy Officer @21apps

     


  • Exchange Online in Office 365

    This is a repost from the Office 365 blog from Jon Orton, product manager for Exchange Online.

    --

    I'm Jon Orton, product manager on the Exchange team here at Microsoft. Welcome to our product insight series on Exchange Online. For our first post, I want to cover two things: an overview of Exchange Online and a look at some of the cool features coming to Exchange Online with Office 365.

    Quick Overview of Exchange Online

    You've probably heard of Exchange Server, and with Exchange Online, we've taken the capabilities of Exchange Server and offered them as a service hosted by Microsoft. With Exchange Online, you run your email on Microsoft's geo-redundant servers, protected by built-in antivirus and anti-spam filters and a 99.9%, financially-backed uptime guarantee. You get enterprise-grade reliability and high availability with IT-level phone support in your native language. 

    With Exchange Online, users can access their mailboxes from wherever they go, with full support for Outlook, a premium web browser experience, and access from a wide range of mobile phones. Users get 25 gigabyte mailboxes and enjoy familiar Exchange capabilities, including robust calendaring, conference rooms, and shared contacts.

    Exchange Online in Office 365

    In Office 365, Exchange Online adds the capabilities of Exchange Server 2010 to the benefits described above. Here are just a few of the new features to look forward to:

    * Compliance and archiving: Exchange Online provides the robust archiving and eDiscovery capabilities of Exchange Server 2010, with built-in personal e-mail archives, multi-mailbox search, retention policies, transport rules, and optional legal hold to preserve email.

    * Management tools: The web-based Exchange Control Panel from Exchange Server 2010 is available in the cloud, so you can manage policies, security, and user accounts. You can also use PowerShell to manage all aspects of your hosted Exchange environment remotely across the Internet.

    * Role-based access control: You can delegate permissions to responsible users based on job function, without giving them access to the entire management interface. This means tasks such as performing multi-mailbox searches no longer have to be the sole responsibility of IT.

    * Enhanced web experience: The premium Outlook Web App experience is available in Internet Explorer, Firefox, and Safari. Instant messaging integration allows users to chat from right within OWA.

    * Coexistence/migration: You can move users to Exchange Online over a weekend with new lightweight, cloud-based migration tools. Or, you can connect your Exchange 2003/2007/2010 environment to the cloud and enjoy rich coexistence, which lets you share calendar free/busy data between cloud and on-premises users, and migrate at whatever pace you want.

    These are just a few of the exciting features that are coming to Exchange Online with Office 365.

     

    In upcoming posts, we'll talk about:

    - How to choose between on-premises, online, or hybrid deployment options

    - Migration from various Microsoft and non-Microsoft email solutions to Exchange Online

    - Mobility, web access, hosted voicemail with Unified Messaging, and much more!

     

    Let us know what you'd like to cover by leaving a comment or joining us on Facebook, Twitter or LinkedIn.

    -Jon, Exchange Online Product Manager

  • Windows Azure Platform Lowers the Barrier to Innovation

    While many customers are still in the education and evaluation phase when looking at ways that cloud computing can deliver benefits to their companies, there are certainly examples of how prominent businesses are already taking advantage of the new platform.  Some of these real-world examples are featured today, with companies such as T-Mobile USA and Xerox highlighting the work they’re doing to deliver value to their customers using the Windows Azure platform.

    With the one-year anniversary of the availability of the Windows Azure platform, these businesses and others have begun to experience the benefits of cloud computing delivered by the platform, such as improved scalability, rapid application development and deployment, and reduced hardware and software management overhead.  In turn, the Windows Azure platform helps provide these organizations with more time to spend on innovation, creating new features, and delivering improved satisfaction to their customers.

    T-Mobile USA was able to create and launch their Family Room social networking mobile application in just six weeks, built on Windows Azure and benefiting from the integrated development experience Microsoft Visual Studio 2010 delivers.  There's also a customer case study and video available here which summarizes T-Mobile USA's experience developing on Windows Azure.

    Xerox also took advantage of the benefits offered by Windows Azure and SQL Azure to create their Xerox Cloud Print solution, which allows end users to route a print job to any available public printer directly from their mobile devices.  Xerox was able to develop this service in just four months.  In the customer spotlight piece, Eugene Shustef, Chief Engineer of Global Document Outsourcing at Xerox, says, “For a very small investment, we can try a new project and see if it works, close it down tomorrow, or ramp it up immediately.  The Windows Azure platform enables us to do that…Cloud computing lowers the barrier to innovation.”  A video and case study providing an overview of the Xerox Cloud Print solution development is available here.

    These are just a couple of examples of customers delivering real solutions using the Windows Azure platform.  You can find other customer examples here. For general information on Microsoft’s enterprise cloud offerings, the Cloud Power site is a good place to start. 

    Thanks for your time and if you have questions or comments I look forward to hearing them and will try to respond when possible.

    Thanks - larry

     

  • An Independent Perspective on Private Cloud

    I recently saw a great article by Greg Shields on TechTarget, called Defining the Hyper-V Cloud. (You'll need to register [free] to read it, but TechTarget is an excellent third-party source for IT insight, so it's well worth the effort.) In it he not only describes the Hyper-V Cloud programs, but has a good take on how CIOs and IT managers need to regard cloud computing and private cloud computing in particular.

    Over the last two years, cloud computing implementations have necessarily fragmented the concept into several specific categories. The private cloud concept is heavily based on virtualization, and many IT managers wrongly dismiss it as simply another virtualization perspective. But as Shields points out, virtualization is merely one of the key mechanisms that define a private cloud; merely virtualizing your data center won't get you there. A true private cloud provides a virtualized computing resource that offers end users near-immediate response to new business requirements via optimized authentication, self-service technology, and often access to off-premises computing resources (frequently without the users' knowledge). Implementing such an engine is several steps beyond simple virtualization, and IT managers need to explore these new areas not only to learn how to build a private cloud infrastructure, but also to get a handle on the exciting new possibilities this computing model can offer their users and businesses.

    The Hyper-V Cloud programs that Shields cites are an excellent place to start. Utilizing these programs will allow you to access technical resources with which to begin building your own private cloud pilots, check pre-validated configurations from the numerous OEM partners who've become part of Hyper-V Cloud, and even locate a private cloud partner or consultant in your area. Leverage these programs and you'll get first-hand information on how Microsoft customers and partners are deploying private cloud, with direct views into technical assessments, proofs-of-concept and actual deployments. You'll also get a much better understanding of how to extend your private cloud to a fully-powered public cloud resource, like Windows Azure.

    Cloud computing is the future of enterprise IT - that's a given across the industry. Programs like Hyper-V Cloud are a great way to get your head wrapped around these concepts early and make the best use of this new model as soon as possible.

  • Thought Leaders in the Cloud: Talking with Randy Bias, Cloud Computing Pioneer and Expert

    Randy Bias is a cloud computing pioneer and recognized expert in the field.  He has driven innovations in infrastructure, IT, Operations, and 24×7 service delivery since 1990. He was the technical visionary on the executive team of GoGrid, a major cloud computing provider. Prior to GoGrid, he built the world's first multi-cloud, multi-platform cloud management framework at CloudScale Networks, Inc.

    In this interview, we discuss:

    -Cloud isn't all about elasticity.  Internal datacenters run about 100 servers for each admin.  The large cloud providers can manage 10,000 servers per admin.

    -Users can procure cloud resources on an elastic basis, but like power production, the underlying resource isn't elastic, it's just built above demand.

    -Just doing automation inside of your datacenter, and calling it private cloud, isn't going to work in the long term.

    -Laws and regulations are not keeping pace with cloud innovations.

    -Startups aren't building datacenters.  In the early days, companies built their own power generation, but not any more.  Buying compute instead of building compute is evolving the same way.

    -The benefit of cloud isn't in outsourcing the mess you have in your datacenter.  It's about using compute on-demand to do processing that you're not doing today.

    Robert Duffner: Could you take a minute to introduce yourself and your experience with cloud computing, and then tell us about Cloudscaling as well?

    Randy Bias: I'm the CEO of Cloudscaling. Before this, I was VP of Technology Strategy at GoGrid, which was the second to market infrastructure-as-a-service provider in the United States. Prior to that, I worked on a startup, building a cloud management system very similar to RightScale's. 

    I was interested very early in cloud technology and I also started blogging on cloud early in 2007. Prior to cloud I had already amassed a lot of experience building tier-one Internet service providers (ISPs), managed security service providers (MSSPs), and even early pre-cloud technology solutions at Grand Central Communications.

    Cloudscaling was started about a year and a half ago, after I left GoGrid. Our focus is on helping telcos and service providers build infrastructure clouds along the same model of the early cloud pioneers and thought leaders like Amazon, Google, Microsoft, and Yahoo.

    Robert: On your blog, you recently stated that elasticity is not cloud computing. Many people see elasticity as the key feature that differentiates the cloud from hosting. Can you elaborate on your notion that elasticity is really a side effect of something else?

    Randy: We look at cloud and cloud computing as two different things, which is a different perspective from that of most folks. I think cloud is the bigger megatrend toward a hyper-connected "Internet of things." We think of cloud computing as the underlying foundational technologies, approaches, architectures, and operational models that allow us to actually build scalable clouds that can deliver utility cloud services.

    Cloud computing is a new way of doing IT, much in the same way that enterprise computing was a new way of doing IT compared to mainframe computing. There is a clear progression from mainframe to enterprise computing, and then from enterprise computing to cloud computing. A lot of the technologies, architectures, and operational approaches in cloud computing were pioneered by Amazon, Microsoft, Google, and other folks that work at a very, very large scale.

    In order to get to a scale where somebody like Google can manage 10,000 servers with a single head count, they had to come up with whole new ways of thinking about IT. In a typical enterprise data center, it's impossible to manage 10,000 servers with a single head count.  There are a number of key reasons this is so.

    As one example, a typical enterprise data center is heterogeneous. There are many different vendors and technologies for storage, networking, and servers. If we look at somebody like Google, they stated publicly that they have somewhere around five hardware configurations for a million servers. You just can't get any more homogeneous than that. So all of these big web operators have had to really change the IT game.

    This highlights how we think of cloud computing as something fundamentally new. One of the side effects of large cloud providers being able to run their infrastructures on a very cost effective basis at large scale is that it enables a true utility business model.

    The cost of storage, network, and computing will effectively be driven toward zero over time. Consumers have the elastic capability to use the service on a metered basis like phone or electric service, even though the actual underlying infrastructure itself is not elastic.

    It's just like an electric utility. The electricity system isn't elastic; it has a fixed capacity. There's only so much electricity in the power grid. That's why we occasionally get brown-outs or even black-outs when the system becomes overloaded. The system itself is not elastic; the usage is.

    Robert: That's actually a great analogy, Randy. You mentioned that public cloud is at a tipping point. There are obvious reasons for organizations wanting to go down a private cloud path first. Are you sensing that many organizations will go to the public cloud first? And then re-evaluate to see what makes sense to try internally?

    Randy: In our experience, a typical large enterprise is bifurcated. There is a centralized IT team focused on building internal systems that you could call private cloud as an alternative to the public cloud services. On the other side are app developers in the various lines of business, who are trying to get going and accomplish something today. Those two constituencies are taking different approaches.

    The app developers focus on how to get what they need now, which tends to push them toward public services. The centralized IT departments see this competitive pressure from public services and try to build their own solutions internally.

    We should remember that we're looking at a long term trend, and that it isn't a zero-sum game. Both of these constituencies have needs that are real, and we've got to figure out how to serve both of them.

    We have a nuanced position on this, in the sense that we are neither pro-public cloud nor pro-private cloud. However, we generally take the stance that probably in the long term, the majority of enterprise IT spending and capacity will move to the public cloud. That might be on a 10 to 20 year time-frame.

    If you're going to build a private cloud that will be competitive, you're going to have to take the same approach as Amazon, Google, Microsoft, Yahoo, or any of the big web operators. If you just try to put an automation layer on top of your current systems, you won't ultimately be successful.

    We know the history of trying to do large-scale automation inside our data centers over the past 20 or 30 years. It's been messy, and there's no reason to think that's going to change. You've got to buy into that idea of a whole new way of doing IT. Just adding automation inside your data center and calling it a private cloud won't get you there.

    Robert: Some of the people that we've spoken to have expressed the notion that clouds only work at sufficient scale. When we talk about Azure and the cloud in the context of ideal workloads or ideal scenarios, we always talk about this idea of on-and-off batch processing that requires intensive compute, or a site that's growing rapidly, and then of course your predictable and unpredictable bursting scenarios. In your experience, is there some minimum size that makes sense for cloud implementation?

    Randy: For infrastructure clouds, there probably is a minimum size, but I think it's a lot lower than most people think. It's about really looking at the techniques that the public cloud providers have pioneered.

    I see a lot of people saying, "Hey, we're going to provide virtual machines on demand. That is a cloud," to which I respond, "No, that's virtual machines on demand." Part of the cloud computing revolution is that providers like Amazon and Google do IT differently, like running huge numbers of servers with much lower head count.

    Inside most enterprises, IT can currently manage around 100 servers per admin. So when you move from 100:1 to, say, 1,000:1, labor opex moves from $75 a month per server to $7.50 per month. And when you get to 10,000:1, it's a mere $0.75 a month.

    These are order of magnitude changes in operational costs, or in capital expenditures, or in the overall cost structure. Now what size do you have to be to get these economies? The answer is ... not as big as you think.
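    The arithmetic behind those figures can be sketched in a few lines. The monthly admin cost below is a hypothetical, illustrative number chosen so that 100 servers per admin works out to the $75 per server per month quoted above:

```python
# Labor opex per server as the servers-per-admin ratio scales up.
# ADMIN_COST_PER_MONTH is an assumed fully loaded cost, picked so
# that 100:1 yields the $75/server/month figure quoted above.
ADMIN_COST_PER_MONTH = 7500.0

def labor_opex_per_server(servers_per_admin):
    """Monthly labor cost attributed to each server."""
    return ADMIN_COST_PER_MONTH / servers_per_admin

for ratio in (100, 1000, 10000):
    print(f"{ratio:>6} servers/admin -> "
          f"${labor_opex_per_server(ratio):.2f}/server/month")
# -> $75.00, $7.50, and $0.75 per server per month
```

    Each tenfold jump in the ratio cuts per-server labor opex tenfold, which is exactly the order-of-magnitude change in cost structure described here.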

    When some people consider economies of scale, they believe it means the ability to buy server hardware cheaply enough. But that's not really very difficult.  You can go direct to Taiwanese manufacturers and get inexpensive commodity hardware that is very reliable.  This hardware has the same components as the hardware you could get from IBM, Dell, or HP today, and it is built by the same companies that build those enterprise vendors' hardware.

    For hardware manufacturers, especially the original Taiwanese vendors, there is only so much of a discount they can provide, so Amazon doesn't have significantly more buying power than anybody who's got a few million bucks in their pocket.

    Economies of scale also come from more subtle places, such as the ability to build a rock-star cloud engineering team.  For example, the Amazon Web Services cloud engineering team iterates at a rapid pace, and they have designed their software so they can actually manage a very large data system efficiently at scale.

    You could do that with a smaller team and fewer resources, but you've got to be really committed to doing it. Also, finding that kind of talent is very difficult.

    Robert: You've also talked about how cloud is fundamentally different from grid and HPC. How do you see that evolving? Do you see them remaining very separate, for separate uses and disciplines? Or do you see the lines blurring as time goes on?

    Randy: I think those lines will blur for certain. As I say in the blog post, I view cloud more as high scalability computing than as high performance computing. That actually means that the non-HPC use cases at the lower end of the grid market already make sense on public clouds today. If you run the numbers and the cost economics make sense, you should embrace cloud-based grid processing today.

    Amazon is building out workload-specific portions of their cloud for high performance computing. Still, at the very top of the current layer of grid use cases, those that are true HPC, the cost economics for cloud are probably never going to make sense. That may be the case, for example, for a large research institution like CERN or some other large HPC consumer that really needs very low-latency infrastructure for MPI problems.

    Robert: It seems that a lot of issues around the cloud are less associated with technical challenges than they are about law, policy, and even psychology. I'm thinking here about issues of trust from the public sector, for example. Many end customers also currently need to have the data center physically located in their country. How do you see the legal and policy issues evolving to keep up with the technical capabilities of the cloud?

    Randy: It's always hard to predict the future, but some of the laws really need to get updated as far as how we think about data and data privacy. For example, there are regulatory compliance issues that come up regularly when I talk to people in the EU. Every single EU member country has different laws about protecting data and providing data privacy for your users. Yet at the same time, some of that is largely prescriptive rather than requirements-based, like stating that data can't reside outside of a specific country.

    I don't know that that makes as much sense as specifying that you need to protect the data in such a way that you never leave it on the disk or move it over the network in such a way that it can be picked up by an unauthorized party. I think the security, compliance, and regulatory laws really need to be updated to reflect the reality of the cloud, and that will probably happen over time. In the short term, I think we're stuck in a kind of fear, uncertainty, and doubt cycle with cloud security.

    Previously, I spent about seven years as a full-time security person.  What I found is that there is always a fairly large disconnect between proper security measures and compliance. Compliance is the codification of a desired security posture into law.
    But because data and IT are always changing and moving forward, while political systems take years to formulate laws, there's always a gap between the best practices in security and the current compliance and regulatory environment.

    Robert: Now, you mentioned a big cloud project your company did in South Korea. What are some of the issues that are different for cloud computing with customers outside the United States?

    Randy: I think one of the first things is that most folks outside the U.S. are really at the beginning of the adoption cycle, whereas inside the U.S., folks are pretty far along, and they've got more fully formulated strategies. And the second thing is that in many of these markets, since the hype cycle hasn't picked up yet, there are still a lot of questions around whether the business model actually works.

    So for example, in South Korea, the dedicated hosting and web hosting business is very small, because most of the businesses there have preferred to purchase the hardware. It's a culture where people want to own everything that they are purchasing for their infrastructure. So will a public cloud catch on? Will virtualization on demand catch on? I don't know.

    I think it'll be about cost economics, business drivers, and educating the market. So I think you're going to find that similar kinds of issues play out in different regions, depending on what the particulars are there. We're starting to work with folks in Africa and the Middle East, and in many cases, hosting hasn't caught on in any way in those regions.

    At the same time, the business models of Infrastructure as a Service providers in the U.S. don't really work unless you run them at 70 to 80% capacity. It's not like running an enterprise system, where you can build up a bunch of extra capacity and leave it unused until somebody comes along to use it.

    Robert: I almost liken it to when the long-distance companies, because of the breakup of the Bells, started to offer people long distance plans. You had to get your head around what your call volume was going to look like. It was the same when cell phones came out. You didn't know what you didn't know until you actually started generating some usage.

    Randy: I think the providers will have options about how they do the pricing, but the reality is that when you are a service provider in the market, you are relatively undifferentiated. And one of the ways in which you try to achieve differentiation is through packaging and pricing. You see this with telecommunications providers today.

    So we're going to see that play out over the next several years. There will be a lot of attempts at packaging and pricing services to address consumers' usage patterns. I liken it to that experience where you get that sticker shock because you went over your wireless minutes for that month, and then you realize that you need plan B or C, and then you start to use that.

    Or when you, as a business, realize that you need an all-you-can-eat plan for all of your employees, or whatever plan now works for your business model. Service providers will then come up with a plethora of different pricing and packaging options to try to serve those folks, and the ones that do will be more successful.

    Robert: In a recent interview I did with New Zealand's Chris Auld, he said that cloud computing is a model for the procurement of computing resources. In other words it's not a technological innovation as much as a business innovation, in the sense that it changes how you procure computing. What are your thoughts on his point?

    Randy: I am adamantly opposed to that viewpoint. Consider the national power grid; is it a business model or a technology? The answer is that it's a technology. It's a business infrastructure, and there happens to be a business model on top of it with a utility billing model.

    The utility billing model can be applied to anything. We see it in telecommunications, we see it in IT, we see it with all kinds of resources that are used by businesses and consumers today.
    We all want to know, what is cloud computing? Is it something new? Is it something disruptive? Does it change the game?

    Yes, it's something new. Yes, it's something disruptive. Yes, it's changed the game.

    The utility billing model itself has not changed the game.  Neither has the utility billing model as applied to IT, because that has been around for a long time as well. People were talking about and delivering utility computing services ten years ago, but it never went anywhere.

    What has changed the game is the way that Google, Amazon, Microsoft, and Yahoo use IT to run large scale infrastructure.  As a side effect, because we've figured out how to do this very cost effectively at a massive scale, the utility billing model and the utility model for delivering IT services now actually works. Before, you couldn't actually deliver an on-demand IT service in a way that was more cost effective than you could build inside your own enterprise.

    Those utility computing models didn't work before, but now we can operate at scale, and we have ways to be extremely cost-efficient across the board. If we can continue to build on that and improve it over time, we're obviously going to provide a less expensive way to provide IT services over the long run.

    It's really not about the business model. It really is about enabling a new way of doing IT, a new way of computing that allows us to operate at scale, and then providing a utility billing model on top of that.

    Robert: Clearly, we're seeing a lot of immediate benefit to startups, for the obvious reason that they don't need to procure all of that hardware. Are you seeing the same thing as well?

    Randy: I've been more interested in talking about enterprise usage of public services lately, but it seems that startups are well into the mature stage, where nobody building a new startup ever goes out and builds their own infrastructure anymore. It just doesn't make any sense.

    When folks were first starting to use electricity to automate manufacturing, textiles, and so on, larger businesses were able either to build a power plant, or to put their facility near some source of power, such as a hydroelectric water mill. Smaller businesses couldn't.

    Then when we built a national power grid, suddenly everybody could get electricity for the same cost, and it became very difficult to procure and use electricity for a competitive advantage. We're seeing the same thing here, in the sense that access to computing resources is leveling the playing field. Small businesses and start ups actually have access to the same kinds of resources that very large businesses do now. I think that that really changes the game over the long term.

    You will know we crossed a tipping point when two guys and their dog in a third world country can build the infrastructure to support the next Facebook with a credit card and a little bit of effort.

    Robert: Those are all of the prepared questions I had. Is there anything else you'd like to talk about?

    Randy: There are a few things that I'd like to add, since I have the opportunity. The first thing reaches back to the point I made before, likening the way cloud is replacing enterprise computing to the way client-server or enterprise computing replaced mainframes. What drove the adoption of client-server (enterprise) computing?

    It really wasn't about moving or replacing mainframe applications, but about new applications. And when you look at what's going on today, it's all new applications. It's all things that you couldn't do before, because you didn't have the ability to turn on 10,000 servers for an hour for $100 and use them for something.
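    The unit economics implied by that example are easy to check; the per-server-hour rate below is simply the quoted figures divided out, not an actual provider price:

```python
# Cost of an on-demand compute burst: N servers for H hours at a rate.
# 10,000 servers for one hour at $100 total implies $0.01 per
# server-hour (derived from the figures above, not a quoted price).
def burst_cost(servers, hours, rate_per_server_hour):
    """Total cost of turning on a fleet for a short burst."""
    return servers * hours * rate_per_server_hour

implied_rate = 100 / (10_000 * 1)  # $0.01 per server-hour
print(f"${burst_cost(10_000, 1, implied_rate):.2f}")  # -> $100.00
```

    At that kind of rate, capacity that once required a capital purchase becomes a line item small enough to experiment with.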

    If you look at the way that enterprises are using cloud today, you see use cases like financial services businesses crunching end-of-day trading data, or pharmaceutical companies doing very large sets of calculations overnight, where they didn't have that capability before.
    There's a weird fixation in a lot of the cloud community on enterprise or private cloud systems. They're trying to say that cloud computing is about outsourcing existing workloads and capacity to somebody who may not have the same kind of cost efficiencies that Amazon or Google has.

    If you just outsource the mess in your data center to someone else who has the same operational cost economics, it can't really benefit you from a cost perspective. What has made Amazon and others wildly successful in this area is the ability to leverage this new way of doing IT in ways that either level the playing field or otherwise create new revenue opportunities. It's not about bottom line cost optimization.

    If we just continue doing IT the way we already do it today, I think we're going to miss the greater opportunity. Instead, ask your developers, "What can you do for the business if I give you an infinite amount of compute, storage, and network that you can turn on for as little as five minutes at a time?" That's really the opportunity.

    Robert: That's excellent, Randy. I really appreciate your time.

    Randy: Thanks Robert.


  • Behind the Cloud scene with Microsoft IT

    I’ve blogged a couple of times in the past (here and here) about Tony Scott, Microsoft’s CIO, the areas that he focuses on, and what’s happening within Microsoft related to cloud computing.  While Microsoft is a world leader in delivering software solutions to businesses of all sizes and to consumers, we’re also a big consumer of our own technologies, delivering IT services to our employees around the globe.  The previous pieces I’ve highlighted with Tony talk about ‘dogfooding’, or the practice of using our own technologies, often prior to their general availability to customers.

     

    Along these lines, I came across a collection of cloud computing content listed on the Microsoft IT Showcase site, with videos hosted on the TechNet Edge site.  These include an interview with Tony titled “What Does the Cloud Mean to the CIO”, in which he discusses conversations he’s had with other CIOs and their interest in utilizing the cloud for Platform as a Service solutions.  One example he provides is how a customer was able to use Windows Azure to quickly integrate a business they were acquiring so the two could share information.  Windows Azure provided the scalability and platform to quickly develop a solution to merge and share their information, much faster than their previous isolated infrastructure silos would have allowed.

     


     

    Some of the other videos available at the site provide information on Microsoft IT and a broad set of other cloud topics, including “How Microsoft IT is Integrating Operations with the Cloud” and “Cloud Computing: What Customers are Discussing with Microsoft IT”, as a couple of examples.  I urge you to check out these videos and some of the related videos at the site to get a behind the scenes look at Microsoft IT and cloud computing.

     

    If you have questions or want more information on Microsoft and cloud computing in the enterprise, I urge you to check out the Cloud Power site to get started.  Let me know if you have comments or questions and I’ll work to get back to you as quickly as possible.


    Thanks - larry

  • Environmental Benefits of Cloud Computing

Many of the conversations around the benefits of cloud computing focus on reducing hardware and energy costs, gaining operational efficiencies, or reducing time to deployment/market for applications.  There are other, often overlooked benefits, however, related to the environmental impact of cloud computing.

A study (located here) commissioned by Microsoft and conducted by Accenture and WSP Environment & Energy demonstrates cloud computing’s potential to operate business applications more efficiently, resulting in a potentially lower environmental impact.  The key drivers of emission reductions include:

    - Dynamic Provisioning – Over-provisioning of servers at the cloud's operational scale can be very expensive. Cloud operators can quickly match server capacity to demand shifts.

    - Multi-Tenancy – Major cloud providers have the ability to serve millions of users at thousands of companies simultaneously on one massive shared infrastructure.

    - Server Utilization – Cloud computing can help drive energy savings by improving server utilization, which is the measurement of the portion of a server's capacity that an application actively uses.

    - Datacenter Efficiency - The way facilities are physically constructed, equipped with IT and supporting infrastructure, and managed has a major impact on the energy use for a given amount of computing power.

The study included three applications: Microsoft Exchange, Microsoft SharePoint, and Microsoft Dynamics CRM.  The average carbon reductions across these applications, broken out by deployment size, are impressive, as shown in the graphs below:

    Carbon Footprint by Deployment

We often have a tendency to put our heads down and focus only on the ‘business’ aspects of what new technologies can deliver.  It can be refreshing to think about the broader implications of cloud computing and related technologies, including how they benefit the environment or society.

    I urge you to check out the lifecycle analysis study to consider some of these broader benefits.  If you’d like more information on what Microsoft has to offer in cloud computing, be sure to investigate the Cloud Power site as a starting point.  If you have questions or comments, please post them and I’ll work to respond as quickly as possible.

    Thanks - larry

  • Thought Leaders in the Cloud: Talking with Aron Pilhofer, Editor of Interactive News Technologies at The New York Times

    Aron Pilhofer acts as editor of interactive news technologies at The New York Times, overseeing a news-focused team of journalist/developers who build dynamic, data-driven applications to enhance the Times' reporting online. He joined The Times in 2005. Previously, he was at the Center for Public Integrity in Washington, and before that at Investigative Reporters and Editors (IRE.org).

    In this interview, we discuss:

    -The purpose served by DocumentCloud

-The lack of technology in newsrooms, and how the cloud is making information more attainable and processable by journalists

    -How the elastic capabilities of cloud computing match with the event-based spikes in demand around news

-How a "document dump" may cause thousands of documents to appear at one time, and how such dumps are better processed using the elasticity of the cloud

-Use of "CloudCrowd", a Ruby-based MapReduce library

    Robert Duffner: Could you take a moment to introduce yourself and to give us some background on DocumentCloud?

Aron Pilhofer: Sure. I wear a couple of different hats. At The New York Times, I'm editor of interactive news, which is a team of developers in the newsroom who are journalists. What we do is both editorial and data-driven. We operate like a news desk, but we're also a technology team.

    My other day job is on DocumentCloud, which is a nonprofit funded by the Knight Foundation. I proposed a grant to fund it with Eric Umansky and Scott Cline. We were awarded the grant, and we're entering our second year right now.

    The goal of the project is to improve journalism by creating a site that allows journalists to analyze, upload, share, and search public source documents that would be otherwise extremely difficult to find or analyze.

Robert: There's an old issue in journalism that journalists often cite documents that aren't available to the reader. DocumentCloud lets the journalist post those source documents in a public place, so the reader can go back to the source, just as a journalist could.

    As far back as the '20s, though, guys like Walter Lippmann argued that the public just isn't that interested in the details. Do you find that people aside from journalists are benefiting from DocumentCloud?

    Aron: Actually, no. Let me just explain a little bit what DocumentCloud is, how it started, and why the answer is no. There's DocumentCloud the software, which is one part of what we're building. It sort of sits on top of OpenCalais, which is an open API that does entity extraction and semantic markup.

    Think of it as a set of tools we're providing to journalists to give them the ability to treat unstructured text more like structured data, so they can find links between documents that they could not have found through traditional means.

As an example, think of a case where you send through a document that includes a reference to the CIA. CIA is meaningful to a human being. You and I can look at that and go "Oh, that probably means the Central Intelligence Agency." Or, in another context, it might be the Culinary Institute of America. The distinction is less clear to a traditional text search.

    The Calais engine allows us, in an automated way, to go through and say "OK, that's the Central Intelligence Agency. And by the way, here's this other document also about the Central Intelligence Agency, and both of them reference the same individual that you are curious about." So, that's an example of some of the tools we're building with journalism in mind. That's DocumentCloud the software.
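The linking Aron describes can be sketched as an inverted index from extracted entities to documents. This is a hedged, minimal illustration: the toy extractor below is a stand-in for a service like OpenCalais, and every function name and document is hypothetical, not DocumentCloud's actual API.

```python
# Hypothetical sketch: linking documents that mention the same entity.
# extract_entities() stands in for an entity-extraction service such as
# OpenCalais; the aliases, documents, and names here are illustrative only.
from collections import defaultdict

def extract_entities(text):
    # Toy extractor: resolve a few known aliases to canonical entities.
    aliases = {
        "CIA": "Central Intelligence Agency",
        "the Agency": "Central Intelligence Agency",
        "DOJ": "Department of Justice",
    }
    return {canonical for alias, canonical in aliases.items() if alias in text}

def build_entity_index(documents):
    # Map each canonical entity to the set of documents mentioning it.
    index = defaultdict(set)
    for doc_id, text in documents.items():
        for entity in extract_entities(text):
            index[entity].add(doc_id)
    return index

documents = {
    "memo-1": "A CIA briefing on the program...",
    "memo-2": "Officials at the Agency declined to comment.",
    "memo-3": "A DOJ filing in the same case...",
}

index = build_entity_index(documents)
# memo-1 and memo-2 are linked even though one says "CIA" and the
# other says "the Agency" -- a plain text search would miss that.
print(sorted(index["Central Intelligence Agency"]))  # ['memo-1', 'memo-2']
```

A real extractor returns disambiguated entities with confidence scores, but the index structure is the same.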

    Then there's DocumentCloud the community, which is the other piece of what we're trying to put together. Right now, it includes about 150 journalists and journalism organizations, with that number growing by leaps and bounds. They're joining the community to use this tool to improve their reporting.

    In order to join that community, you pretty much need to be a journalist, by our definition. That is, you must be someone whose job, either paid or unpaid, involves the acquisition, analysis, and ultimately publishing of public source documents to benefit the public. Normally that means government documents, and a lot of those documents are acquired through FOIA, or they might exist on some other site.

    Having said all that, we have been approached by any number of non-journalism organizations, such as law firms. We've gotten the sense that there is a need out there for sort of a lightweight document management tool, and we may explore that as a potential revenue generator, but that isn't really our main focus.

    Robert: You talked about this idea of document management. One of the reasons that self publishing has been so popular has been the ease by which you can actually publish to a platform. Can you talk a little bit about how DocumentCloud removes some of the impediments traditionally associated with IT departments?

Aron: The genesis of DocumentCloud was a piece of software we developed at the Times called DocumentViewer, which is a really straightforward piece of software. It will take a PDF, a Word document, or pretty much anything OpenOffice can open; break it up; extract the text; make it searchable; and then publish it to the web in an attractive way.

    Our thinking going in was that most news organizations, even the smallish ones, would want something similar. So our original conception was that DocumentCloud would be sort of the hub. We would want your metadata, but generally speaking, we thought that all the member organizations would want this sort of viewer to be on their hardware, behind their firewalls.

    We could not have been more wrong about that, for both good reasons and unfortunate ones. My perception is that newsrooms lack fundamental technology to deal with documents, and that is sort of scary. The traditional way that newsrooms deal with big document dumps is to split them up and have people sit down with yellow legal pads and pens and highlighters.

    That is the highest technology, really, that most newsrooms currently employ. A lot of newsrooms don't have access to the simplest things, like OCR. That's surprising to a lot of people, but it's true, and in this little area of public source documents, we think we can help.

    That's why we pivoted early on away from thinking about DocumentCloud as a federated thing running on hundreds of websites, to a vision where fundamentally it all goes through us. For the most part, we actually host the documents on behalf of news organizations.

The way we made that simple and scalable was to make the entire DocumentViewer portion static. What you see on the website is just HTML, JavaScript, CSS, and JSON. The only live portion is search, which is provided dynamically as a service.

    All a news organization has to do is get a little embed code from us, which they can embed anywhere they want in their CMS. They can put it within a blank page on their own site, in a blog post, or whatever. It's really simple and really straightforward.

    Robert: Some of these technologies like DocumentCloud coming out are pretty exciting. Can you talk a little bit about some other ways that the cloud might be fundamentally shaping journalism practices?

    Aron: My team here at the Times couldn't do what we do without the cloud. We run everything off of Amazon. On an election night, we can suddenly go from four or five servers to 22 servers to handle all that traffic. A day later, we can just spin back down to five servers. There's no way you could do that in a traditional IT environment.

    Robert: One concern that governments and corporations have about the cloud is where data is stored. Typically they want or need the data to be stored within their country's borders. But what's a drawback for some companies in this scenario actually looks like an advantage for journalism. Is one benefit of the cloud that it's possible to store any potentially embarrassing government documents out of the reach of that government?

    [laughter]

    Aron: That thought certainly has occurred to me, and I don't know that it's been adjudicated anywhere, really. To flip that idea on its head, consider that in the UK, there's this notion of Crown copyright, where the public doesn't really own public documents and data.

It's sort of bizarre. For example, postal codes are copyrighted under Crown copyright, and you have to pay a huge amount of money to get boundaries of postal codes in the UK. I don't know what would happen if somebody were to make that data publicly available on a server in the US. If there were some assertion of Crown copyright, would that even apply jurisdictionally to where that data is hosted?

    It's a really good question, and I'm not sure I want to find out, because this is sort of new territory for everybody. We're pretty cautious about what we put up on the cloud and what we don't.

    Robert: Looking at DocumentCloud, what was it that required something new to be built? I mean Microsoft has Office 365 with SkyDrive. Google obviously offers Google Docs. There's also Scribd. What did you need that you didn't find in these existing resources?

    Aron: We looked at all those options early on, and while in 2007, this field obviously wasn't quite as crowded as it is now, none of them did what we wanted DocumentViewer to do. DocumentViewer is more than just a way of putting a document online.

    For example, it also allows you to do annotations, which is kind of key from a journalistic standpoint. There's what we have come to refer to as kind of a journalistic layer on top of a document.

    A reporter can go into DocumentViewer, highlight a key paragraph, click "Drag," and create an annotation. He or she can actually write a couple of paragraphs to identify the significance of a particular sentence, phrase, or paragraph and deep link into it.

That allows you to add a narrative to what is effectively a piece of raw data, and say to the reader, "OK, here's the document that we're basing our reporting on. But more than that, here are the key paragraphs, and here's why they're key. Here's really what this means."

    Scribd didn't do that. Docstoc didn't do that. There was really no technology we could find that did it in a way that we thought accomplished our goals. We also wanted something that wasn't Flash-based, which Scribd at that time was.

    Robert: That makes a lot of sense, particularly to support standards, when you consider all of the form factors that you can use to access the Web. I imagine various reporters want to use something like an iPad, a mobile phone, you name it.

    Aron: Right. It's not the world's greatest experience, but you can actually use DocumentViewer on an iPhone. This is not an anti-Flash rant, or anything like that. It's just we felt that the right technology for this was to stick to web standards, and what we've come to refer to as HTML5.

    Robert: On your blog, you've talked about how to use Amazon EC2 behind the scenes. Can you explain how the elasticity of the cloud, scaling up and down on demand, gets put to use by DocumentCloud?

Aron: Sure. It's a big challenge. Document processing is a very CPU-intensive process, so we needed to be able to scale up rapidly when there's a big document upload, and we did two things. One is that we have built and released a fairly lightweight parallel processing library we call CloudCrowd. DocumentCloud has actually released a number of open source libraries. We haven't released the entire project, but that will come soon.

    But the first piece was CloudCrowd, and that was sort of a lightweight, Ruby-based parallel processing library, which allows us to quickly add additional processing nodes if we get a 3,000 document dump from AP, which actually happened last week.

    Relatively easily, we can add two, three, four, or 100 servers to the processing pool and split that job up. It's basically a MapReduce project at that point. So that's how the elasticity helps us on DocumentCloud. The front end isn't as much of an issue, because once the documents are actually rendered, it's 100% static content. We just serve those off of S3.
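The split-process-merge pattern Aron describes can be sketched in a few lines. This is purely illustrative: CloudCrowd itself is Ruby-based, and the function names and job shape below are assumptions, not its API.

```python
# A minimal sketch of the pattern described above, in the spirit of a
# lightweight MapReduce library like CloudCrowd (which is Ruby-based;
# this Python version is illustrative only, not CloudCrowd's API).
from concurrent.futures import ThreadPoolExecutor

def process_document(doc):
    # The "map" step: a stand-in for CPU-heavy per-document work such as
    # OCR, text extraction, and rendering page images.
    return {"id": doc["id"], "pages": doc["pages"], "status": "rendered"}

def merge_results(results):
    # The "reduce" step: combine per-document results into one job report.
    return {
        "documents": len(results),
        "pages": sum(r["pages"] for r in results),
    }

def run_job(documents, workers=4):
    # Each worker pulls documents from a shared pool; adding workers (or, in
    # the cloud, whole machines) shrinks wall-clock time for a big dump.
    # Real CPU-bound work would use processes or separate nodes, not threads.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(process_document, documents))
    return merge_results(results)

dump = [{"id": i, "pages": 10} for i in range(3000)]
print(run_job(dump))  # {'documents': 3000, 'pages': 30000}
```

Because each document is independent, the pool can grow from four workers to a hundred without changing the job code, which is exactly what makes elastic scaling useful here.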

    Robert: Can you talk about how much you're processing and what you expect that to grow to?

    Aron: It fluctuates, obviously, and it's pretty spiky, which is why we couldn't really do this in a traditional environment. If you're building a data center, you have to size it to the biggest spike you expect to have, which means you've got a lot of time where you're sitting and idling with unused resources. Because we don't need to worry about that, we can spin up 10, 20, or whatever at a time.

I think the most we've ever processed in a day is a few thousand documents. And then there are certain days where it's just a few dozen. We opened our beta this summer, and I think we're over 400,000 pages now, closing in on 500,000.

    Robert: You mentioned already that DocumentCloud uses open source, and is itself open source.

Aron: Actually, it's MapReduce, but we don't use Hadoop. Our version of Hadoop is CloudCrowd. Think of the old Apple ad: CloudCrowd is Hadoop for the rest of us. It's a much simpler Ruby-based MapReduce library for doing parallel processing.

    Robert: We definitely sense that investigative journalism is being cut from a lot of news organizations, because it's expensive and time-consuming. At the same time, computer assisted reporting, which includes things like web scraping and data mining, is on the rise and has actually led to Pulitzer Prize winning stories. Do you think that technology offers new hope to investigative journalism?

    Aron: Certainly, and DocumentCloud, I think, is an example of how technology can be brought to bear on that. As I said before, most journalists do serious document reporting and analysis as a very analog process, and I think that the document piece is just one tiny fragment.

    Part of what a lot of computer assisted reporting folks are doing these days in newsrooms is acquiring the data and making it searchable, so it's easier for non-technical journalists to work with. I think the smart application of technology in newsrooms can be a force multiplier for shrinking staff.

    The Times obviously has made a significant commitment to investigative reporting, which not every news organization has. Anyone who reads a newspaper knows the industry is struggling, which is a very good reason why newspaper staffs are shrinking. The way I see it is that technology can help overcome some inefficiencies, which can help preserve journalistic quality.

    Robert: Hey, thanks so much for your time. I greatly appreciate it.

    Aron: You bet.


  • Azure customer NVoicePay featured in InformationWeek

    This week longtime IT journalist Charlie Babcock of InformationWeek posted a story called "4 Companies Getting Real Results from Cloud Computing."  It's a good article that profiles organizations that "are way past testing the cloud.  They've seen the shortcomings, and are still looking ahead to what's next."  One of the companies showcased is NVoicePay, a Portland, Oregon–based company that provides a business-to-business payment network for midsized businesses.  They are using the Windows Azure platform, including SQL Azure, AppFabric and a range of Microsoft infrastructure and development tools to give their customers an easy, safe way to receive vendor invoices and submit payments, eliminating the need for paper-based processes.

    As the article says: CTO Shaun McAravey predicts NVoicePay will be handling $250 million in annual invoice payments by the end of this year. To grow, NVoicePay needs to develop payment apps tuned to more industries. Using cloud infrastructure lets McAravey and his team focus on development rather than infrastructure. "I don't want to manage servers," he says. "I want to build a whole class of payment applications [for different vertical markets] and push them out into the cloud."

    You can read a case study about NVoicePay's use of the Windows Azure platform here.  Below is a video.

  • Flickr using Windows Azure and other cloud happenings

There were a couple of interesting cloud-related stories I saw that I wanted to highlight as the week wraps up.

Recently, Yahoo needed a cloud service provider that could help them quickly release their newest Flickr app for Windows 7 and Windows Phone 7. They wanted a dynamic platform that would help engage users across a wide spectrum of connected devices. As reported in ReadWriteWeb, Yahoo found what they were looking for in Windows Azure.  In the article, Marcus Spiering, a product manager at Yahoo responsible for Flickr mobile products, says, “Azure allowed us to build an app quickly and do it with quality.” With the capability to manage various complexities that can arise from the way data and metadata get handled, Windows Azure proved to be the right platform to help Yahoo bring their highly anticipated app to market. Be sure to check out the video from the story below for more:


    On a similar note, there was also a post from the Windows Azure AppFabric team on their blog about a Gartner paper saying that “continuing strategic investment in Windows Azure is moving Microsoft toward a leadership position in the cloud platform market.” For more read what the Windows Azure team had to say here.

    Also, for a look at the differences between Microsoft and Google’s enterprise cloud offerings, here’s an interview that PCMag.com conducted with Tom Rizzo, Senior Director of Microsoft Online Services. PCMag.com’s Samara Lynn leaves no stone unturned as they discuss everything from Office 365 v. Google Apps to Microsoft’s latest cloud campaign.

    We’d love to hear your thoughts on these stories and more, so feel free to comment below.  As always if you’re looking for more information on Microsoft’s commercial cloud offerings, be sure to check out the Cloud Power site at this link.

    Have a great weekend - Larry

  • Thought Leaders in the Cloud: Talking with Chris Auld, CTO at Intergen Limited and Windows Azure MVP

Chris Auld is a Microsoft MVP, the CTO at Intergen Limited, and a director of the locum jobs startup MedRecruit. Trained as an attorney, Chris chose to pursue a career with emerging technologies instead of practicing law. He is widely known for his evangelical, arm-waving style, as well as for his enthusiasm and drive.

    In this interview we discuss:

    -Cloud computing as a business, rather than technological, innovation

    -Scenarios that utilize the cloud's elastic capabilities

    -The red herring of security vs. the real issue of sovereignty

    -Laws are unlikely to catch up, so hybrid clouds, with things like the Azure appliance, will become the way this is navigated

    -A key challenge in porting apps to the cloud is that their data tier was architected for vertical scaling, and the cloud provides horizontal data scaling

    -The success of the cloud is "just math", as you're paying for average usage. With on-premises you're paying for peak usage

    -Azure stands out as a "platform that is designed to give you the building blocks to build elastic, massive-scale applications"

    Robert Duffner: Chris, could you take a moment to introduce yourself?

Chris Auld: I am the Chief Technology Officer at a company called Intergen; we're a reasonably large Microsoft Gold partner based out of Australia and New Zealand. I've got a pretty long background with Microsoft technologies, and most recently, I have been focused quite significantly on the Windows Azure platform.

I'm one of about 25 Windows Azure MVPs worldwide, with my particular focus being Azure architecture. MVPs are members of the community who have a lot to say about Microsoft technology and who provide support and guidance in the community. I've done a significant amount of presenting and training delivery on Windows Azure around the globe.

    For example, I'm in New Zealand this week, and I will be in Australia the week after next to do some Azure training courses. Last week, I was at TechEd Europe in Berlin, and at the Oredev Conference in Malmo, Sweden, delivering talks on Windows Azure architecture.

    Robert: You've said that cloud computing isn't a technological innovation as much as a business one, and that it's really a new model to procure computing. Can you expand a little bit about that?

Chris: The architectural patterns and implementation approaches that we take with Windows Azure applications are the same ones we've implemented for many, many years. And the thinking around the scale-out architectures we're building today is the same as the thinking behind what I was building back in the 'dot com' timeframe with classic ASP.

    Where cloud computing is really unique is that it offers a very different way for us to be able to procure computing power. And in particular, to be able to procure computing power on an elastic basis. So there are significant new opportunities that are opened up by virtue of being able to buy very large amounts of computing resources for very short periods of time, for example.

    Robert: You've also said that the cloud's unique selling proposition is elasticity. What are some of the scenarios that have highly elastic needs?

    Chris: The canonical one that I always use is selling tickets to sporting events. Typically, your website may be selling a handful of tickets each and every day, but when a very popular event goes on sale, you can expect to sell hundreds of thousands of tickets over a time period as short as, say, five to ten minutes. We see similar patterns in other business scenarios as well.

Another good example would be the ability to use the cloud to spin up a supercomputer for a temporary load. Maybe you're a mining company or a minerals exploration company, and you get some seismic data that you need to analyze rapidly.

Being able to spin up a supercomputer for a couple of days and then turn it back off again is really valuable, because it means that you don't have the cost of carrying all of that capital on your balance sheet when you don't actually need to use it.
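The elasticity argument, summed up earlier as "just math" (paying for average usage rather than peak usage), can be made concrete with some back-of-the-envelope arithmetic. All the numbers below are invented for illustration; only the ratio matters.

```python
# Illustrative numbers only: the point is the gap between peak and average.
hours_per_month = 730
cost_per_server_hour = 0.50        # hypothetical hourly rate

peak_servers = 100                 # capacity needed during a short spike
baseline_servers = 5               # capacity needed the rest of the time
spike_hours = 10                   # e.g. a popular ticket on-sale event

# On-premises: you must own enough hardware for the peak, all month long,
# even though most of it sits idle outside the spike.
on_prem = peak_servers * hours_per_month * cost_per_server_hour

# Elastic cloud: pay for the baseline most of the time, and for the peak
# only during the spike itself.
cloud = (baseline_servers * (hours_per_month - spike_hours)
         + peak_servers * spike_hours) * cost_per_server_hour

print(f"on-prem: ${on_prem:,.0f}  cloud: ${cloud:,.0f}")
# on-prem: $36,500  cloud: $2,300
```

With this (hypothetical) load shape, provisioning for the peak costs more than fifteen times as much as paying for actual usage, which is the ticket-selling scenario above in numbers.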

    Robert: Background-wise, you come into technology with a law degree. As you look at the cloud, where the technology really is outpacing legislation, how do you think your law background informs the way you view the cloud?

    Chris: Some of the legal stuff around the cloud remains somewhat intractable. I obviously do a lot of presenting around this stuff, and I usually start by asking people in the audience how many of them are concerned about cloud security, and it typically is everybody.

    I'm not particularly concerned about cloud security, because there's really nobody I would trust more with my data than a really large, multinational technology company like Microsoft or some of the other major cloud vendors. The more interesting thing, in terms of the legal stuff, is data sovereignty. That's really thinking about what laws apply when we start working with cloud computing.

    If my app is in Singapore, but the Singaporean datacenter is owned by a Belgian company that happens to have a sales office in Reno, what laws apply to my data? What privacy law applies? What competition law applies? What legal jurisdiction applies? Who can get search warrants to look at my data and so forth?

    Those are some very hard problems, and in fact, my law degree doesn't particularly help me solve them. Indeed, the law in general really struggles to answer those sorts of questions at the moment. Those legal and sovereignty questions may be the hardest questions in cloud computing.

    Robert: In Switzerland, customer financial information has to reside in the country, and moreover, only Swiss citizens can actually look at that data. So unless you have Swiss citizens in your call centers in Dublin or Mumbai, you start to see challenges.

    Chris: That, in some ways, determines who can actually run your data center, who can be operating your servers. Some of those laws can become quite pervasive.

    Robert: At some point, that is just going to become technologically untenable. Do you have any thoughts on that? Do you think that eventually there'll be a lot of pressure to change laws?

Chris: I think there will. Technology is outpacing the law already, and we see it across many areas. For instance, in New Zealand we have things called "name suppression orders," and there's been a whole load of issues with suppression orders. What happens with bloggers? What happens depending on where the data happens to be housed, and so forth?

So technology is massively outpacing the law at the moment. If you think about how we might traditionally handle these sorts of complex, multi-jurisdiction, conflict-of-laws issues, we'd sit down and put together a multilateral treaty or some sort of international treaty.

    But of course, in the IT industry, we move at the sort of pace where we're shipping new functionality every couple of weeks. And specifically, cloud computing vendors are shipping new releases of their technology and platform every few months. An international treaty can take many years to negotiate.

    Can you imagine the sorts of negotiations that would need to occur for various jurisdictions around the world to be prepared to cede legal sovereignty for information that might be domiciled within their country? I don't have any degree of optimism that the law will actually catch up. I think the approach that needs to be taken is this idea of a hybrid approach. You need to have a broad range of options as to what cloud computing means for you.

Cloud computing, for some customers, does mean a true public cloud, with massive-scale, highly elastic workloads. For other customers, it means a private cloud, where they are a large organization, particularly a government entity, and they want to have a private cloud.

    For other customers, the cloud's just not suitable at all, particularly if they need absolute control over their data. One of the benefits of working with some of the Windows Azure stuff that we find is it's actually pretty easy to work across all of those scenarios.

To take the Microsoft Windows Azure cloud offering as an example, the option is forthcoming to drop in something like a Windows Azure appliance, which will let you run the same apps in your private cloud as in the public cloud. To me, that's particularly beneficial for large corporations and federal government, where they may sell it to other government departments.

    At the end of the day, we're working with standard Windows technologies, which we've worked with for a long time, but we can pick up and deploy into on-premises environments just as easily.

    Robert: That's a good segue, because we did announce an Azure platform appliance, primarily to give customers an on-premises solution. Where do you think this is going, Chris? Do you think this is just a short-term issue, and that once trust and legal issues are worked out, everything will go to the public cloud? Or do you think customers are always going to need private cloud options?

    Chris: I think customers are always going to be interested in private cloud options, particularly in things like the public sector. And I think we need to draw a strong distinction between what is really a true cloud computing offering and what's really just virtualization in drag. To me, true cloud computing offerings require a pretty significant scale. People who look at the Windows Azure appliance need to know that it's going to be a large-scale investment and a large-scale deployment.

    If you think back to what we discussed at the start, one of the key reasons you want that large scale is because you want to have, effectively, spare computing capacity that you can tap into elastically. By having a large-scale deployment shared by many, many people, the cost to carry that additional capacity is shared across all of those customers. Some of the key scenarios where I see the Windows Azure appliance really working well are things like government.

    For example, you may have a national government that chooses to deploy a Windows Azure appliance, and then sells that Windows Azure appliance to other government agencies within that national government. And based on the fact that they are selling it and actually applying a true pricing model and ideally, maybe applying some sort of differential pricing, they can encourage those government agencies to move their load around based on the price.

So if it's more expensive to run computing workloads during the day than at night, you'd expect organizations such as a meteorological office or a big university that want to use the cloud for number crunching to move their loads into off-peak times.

To me, one of the key things that we need to see from true private clouds is massive scale. And by massive scale I mean at least one order of magnitude larger than the largest elastic workload; that's my rule of thumb.

You also need to have a suitable pricing system. I think there'd be an internal marketplace in which people would buy that computing power. If you buy a private cloud and then apply it as an overhead charge across all of the departments in your business or government, it's simply not going to work, because it won't economically drive the sort of behavior that will optimize your usage of computing.

    Robert: That's a very good point. James Urquhart recently put up a post entitled "Moving to Versus Building for Cloud Computing," where he says that many applications can't just be ported over to the cloud. That post really holds up Netflix as an organization that's completely architected around public cloud services. What's your advice to organizations that have lots of legacy applications on how to be competitive against startups that can fully embrace the cloud from day one?

    Chris: Moving to the cloud is very hard, because historically, people have not typically architected their applications for aggressive scale-out scenarios. Typically, people would have thought of scaling out in the application tier. But often, they will not have thought of scaling out in the data tier, and that's actually something that's really important to all of the cloud platforms and Windows Azure in particular.

    I think organizations that are looking at how they mature their current on-premises systems need to really take a hard look at the data tier. And looking at that, they need to ask how they can partition their data tier. How can they get their data tier to scale out horizontally, rather than the on-premises approach, which is just buying a bigger SQL server?

    When you think about scaling the database tier on premises, you just buy a bigger box. If you think about scaling a database tier in Windows Azure, you really are all about taking SQL Azure and partitioning your database.

    So to me, most of the focus needs to be around the data tier for these applications. If people can solve the data tier, it's going to massively reduce the impact of trying to migrate into one of the clouds.
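    The partitioning Chris describes can be sketched as simple hash-based sharding: route each row to one of several databases by a stable hash of its partition key. The shard names here are invented, and in practice each entry would be a connection string to a separate SQL Azure database; this only illustrates the horizontal scale-out idea, not any specific Azure feature.

```python
import hashlib

# Invented shard list; in a real system each entry would be a
# connection string for a separate partitioned database.
SHARDS = ["customers_db_0", "customers_db_1",
          "customers_db_2", "customers_db_3"]

def shard_for(key: str) -> str:
    """Map a partition key to a shard with a stable hash.

    A cryptographic hash (rather than Python's process-randomized
    hash()) keeps the mapping identical across processes and
    restarts, so every caller routes the same key to the same shard.
    """
    digest = hashlib.md5(key.encode("utf-8")).digest()
    index = int.from_bytes(digest[:4], "big") % len(SHARDS)
    return SHARDS[index]

# All reads and writes for "customer:42" always go to one database,
# so each shard holds a disjoint slice of the data.
print(shard_for("customer:42"))
```

    The hard part, which is why Chris calls this out, is that the application has to be designed around the partition key: queries that span shards can no longer be expressed as a single SQL statement against one big server.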

    Robert: From a different perspective, how should startups be looking at cloud computing and the way to enter and disrupt the industry with established players?

    Chris: For startups, the cloud is a total no-brainer. You've basically cloistered yourselves in a Silicon Valley garage and lived on pizza and caffeine for six months building your app. You need to hold onto your equity as tightly as possible, and the last thing you want to do is spend a whole lot of capital on hardware, for two major reasons.

    The first is that, if you're going to buy all that equipment, you've got to go and find some venture capital. And those guys are going to take a pretty penny off you in terms of your equity to give you the money to go and buy the hardware.

    The second thing is that lots of startups fail. The last thing you want when you have a failed startup is to be left carrying a whole lot of hardware that you then have to get rid of to recover your cash so you can go and do your next startup. The beauty of the cloud is it's basically a scale-fast, fail-fast model. So if your startup's a dog, you can fail fast. It doesn't cost you the earth, and you don't have all that hardware hanging around.

    If your startup's a wild success, and you need to add massive amounts of computing power fast, traditional infrastructures can be impossible to scale fast enough to meet the demand: you can't buy and ship the servers fast enough! That situation can suddenly turn your wildly successful startup into a complete disaster. The beauty of the cloud is that, without paying any capital costs up front, you have an effectively infinite amount of computing capacity that you can turn on as needed.

    Robert: In his "Cloudonomics" work, Joe Weinman basically says there's no way that building on premises for peak usage can compare with pay per use for your average capacity. How much of the cloud adoption you're seeing is just for cost savings versus business agility? Or even building new kinds of solutions that just wouldn't be feasible without cloud capabilities?

    Chris: "Cloudonomics" is based on the idea that building for the peak loads on premises is too expensive. It's not merely that we can save money by doing this in the cloud; it's that we can only do it by building it in the cloud, because it's just so economically infeasible to do it on premises.

    It is economically infeasible to carry the hardware you need for those peak loads if you've got to have it running 365 days a year. The cloud allows us at a business level to solve problems that we haven't been able to solve in the past.
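    Weinman's point can be shown with back-of-the-envelope arithmetic. All the numbers below are hypothetical: a workload that averages 100 servers but peaks at 1,000 means on-premises hardware sized for the peak sits mostly idle, while pay-per-use billing tracks the average.

```python
# Hypothetical workload: averages 100 servers but peaks at 1,000.
AVG_SERVERS = 100
PEAK_SERVERS = 1_000
HOURS_PER_YEAR = 24 * 365

ON_PREM_COST_PER_SERVER_YEAR = 3_000  # invented all-in annual cost
CLOUD_COST_PER_SERVER_HOUR = 0.50     # invented hourly rate

# On premises you must buy for the peak and carry it all year.
on_prem = PEAK_SERVERS * ON_PREM_COST_PER_SERVER_YEAR

# Pay-per-use bills only the hours you actually consume, which
# over a year works out to the average utilization.
cloud = AVG_SERVERS * HOURS_PER_YEAR * CLOUD_COST_PER_SERVER_HOUR

print(f"on-prem (peak-sized): ${on_prem:,}")
print(f"cloud (pay-per-use):  ${cloud:,.0f}")
```

    With these invented rates the peak-sized on-premises build costs several times the pay-per-use bill, and the gap widens the spikier the workload gets, which is the core of the Cloudonomics argument.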

    Robert: If you could take your MVP hat off for a second, I imagine that you must have looked at other cloud offerings. You probably have some opinions where you think Azure stands out, and then where other offerings stand out. Can you comment more on that?

    Chris: Azure really stands out as a platform-as-a-service offering. The thing that you have to think about with Windows Azure is that you're not just buying virtual machines. You're really buying an entire platform that is designed to give you the building blocks to build elastic, massive-scale applications.

    Contrast that with something like Amazon's cloud services offering. Those guys are really mature, and they've been doing it a long time. It probably wouldn't be wrong to call them the market leaders and the innovators. It seems odd for an online bookstore to be the key innovators in cloud computing, but literally I think they just woke up one morning, and said, "Hey we're really good at building these massive scale websites. Why don't we put it in a bottle and sell it?"

    But Amazon doesn't really have that platform offering. If we think about building these massive scale applications, they maybe haven't taken it to the next level, in terms of being willing to build in things like the load balancer, recovery capabilities, and other features that you get with Windows Azure. One real strength of Amazon, though, is that they really get the economic stuff.

    Arguably, they're probably innovating more slowly in terms of technology than they are on the business side of things. Amazon offers things like spot pricing, which I love, because it sends economic price signals to encourage people to change their behavior. At the end of the day, that's what's going to drive Green IT: proper economic price signals driving behavior.

    Amazon also has reserved instances. These things mean that we can start to look at computing far more like we might look at say, the electricity market. Amazon is really probably the market leader in infrastructure as a service, in the sense of really renting raw capacity by the hour.

    Robert: In a recent interview, Accenture's Jimmy Harris said, "Cloud changes the role of IT, from a purveyor of service, to being an integrator of service." One potential challenge I see for IT is increased finger pointing. If an organization is accessing its SaaS solution through the Internet, and the SaaS solution is hosted on a public cloud, you could see finger pointing between the ISPs, the SaaS provider, and the cloud provider.

    Chris: I've been presenting pretty often for audiences like CIOs, and invariably at the end of my presentation, one of them will put up their hand and very boldly ask, "Why then should I trust Microsoft to run my application?" And of course, the answer to that is, there's probably nobody I'd trust more than Microsoft to run my application. These guys are running enormous data centers, and they have the smartest possible people running them, because the smartest possible people really want to run the enormous data centers.

    But I think there's still a mindset that there are benefits in being able to walk down the hallway and put a boot up someone's ass if something's broken. And you kind of lose that with the cloud, and to a degree you also lose some of the high-touch service level agreements that you might see with a typical outsourced provider.

    Because to a typical outsource provider, a large enterprise workload is a very significant customer, so they're often prepared at the sale time to actually enter into detailed negotiations about service level agreements.

    When you look at cloud computing, on the other hand, even large enterprise workloads are often just a drop in the ocean for the provider; remember my order-of-magnitude rule of thumb. But at the end of the day, what really matters is whether your application is up and running. And again, I come back to reinforce the point that these providers run at a massive scale, with very high levels of redundancy and reliability.

    There is nobody I would feel more confident in running my technology than a large cloud provider, even though I may not be able to walk down the corridor and kick someone when it stops working.

    Robert: Well Chris, thanks for your time.

    Chris: Thanks Robert. Always a pleasure.


  • Office 365 for education

    Today we are pleased to announce that more students than ever are using Live@edu, the world's leading cloud suite for education.  Live@edu is used by more than 15M students worldwide, up from 11M just three months ago.  In addition, we are sharing more information about Office 365 for education, our next-generation cloud productivity service for schools and universities. Building upon our success with Live@edu, Office 365 for education delivers even more capabilities for students, enabling us to deliver on our commitment to schools and universities like never before. With the release of Office 365 later this year, students will have access to Lync Online free of charge. This means easy collaboration on assignments and instant team meetings, as well as IM, voice, and even video chat with the click of a button. Additionally, presence information shows up throughout the service, so students can see at a glance if a colleague is available and get things done more quickly.

     

    That's not all. Students will now also have access to SharePoint Online; you guessed it, for free! SharePoint Online allows students to securely upload, share and collaborate on documents, including in-place editing access (with Office Web Apps) from anywhere with internet access, whether that is the dorm, library, on the go or at home for the holidays. And as the world of social networking becomes increasingly prevalent in our lives, SharePoint Online delivers MySites. Using their personal sites, students can organize, track and easily share classroom and course information, interests, expertise and most importantly, keep in touch with the lives of their classmates.

     

    But it doesn't end there. We designed Office, SharePoint Online, Exchange Online and Lync Online to work together, but not all schools have the ability to deliver this powerful platform to their students. So today we are also announcing the availability of the Office Desktop software for just $2 per student per month. This completes our productivity picture for institutions, providing students the tools they need to be successful, both today in their studies and later in the workplace.

     

    In the following video, Jon Perera,  General Manager of Education Strategy, discusses this evolution and provides more details about today's announcement on Office 365 for education.

     

     

    -Allen Filush, Office 365 Product Manager

     

    Microsoft Resources

    Office 365 for Education

    Office 365 Virtual Pressroom

    Live@edu

  • Virtualization alone does not a cloud solution make

    The following is a guest post from David Greschler, Director Virtualization and Cloud Strategy at Microsoft

    2010 saw a rapid rise in the number of businesses considering cloud computing solutions as a way to reduce costs, gain efficiencies, and realize a variety of other benefits.  Unfortunately, at the same time, some vendors have attached the term ‘cloud computing’ to their wares in an effort to ride that wave of popularity without truly having the breadth of technologies to merit the moniker.  Looking at the Wikipedia definition of cloud computing, it is clear that a broad set of technologies is needed to make up a proper cloud computing solution:

    Cloud computing is Internet-based computing, whereby shared servers provide resources, software, and data to computers and other devices on demand, as with the electricity grid. Cloud computing is a natural evolution of the widespread adoption of virtualization, service-oriented architecture and utility computing. Details are abstracted from consumers, who no longer have need for expertise in, or control over, the technology infrastructure "in the cloud" that supports them.

    In support of our Cloud Power campaign we’ve delivered a lighthearted and playful message, “Virtualization alone does not a cloud solution make”, which is intentionally incisive — designed to cut through the clutter and confusion that surrounds cloud computing. By making some clear assertions about what cloud computing is, we hope to turn the conversation away from hype and toward real-world guidance for businesses.

    We chose to focus our message on virtualization— one of the bigger areas of confusion in the industry’s conversation about cloud computing. Virtualization can certainly be used as a stepping stone to the cloud, but it is really just one component of cloud computing. Some of the attributes most commonly associated with cloud computing—elasticity, metered usage and a self-service provisioning model—are clearly not enabled by virtualization alone. 

    Clarifying the limitations of virtualization doesn’t in any way diminish its importance or its value—to the contrary, our continued investments in the Windows Server platform (Windows Server, Hyper-V and System Center) show our commitment to providing world-class virtualization technology, both as a solution in its own right and as a bridge to the future. Microsoft virtualization customers like CH2M Hill and Lionbridge are realizing impressive, meaningful benefits today, but just as importantly they are positioned to embrace broader cloud solutions tomorrow—on their own terms, and on top of their existing investments.

    No doubt about it, virtualization provides customers a strong step forward, but if we stopped there we’d be cutting the journey artificially short.

    As customers look at moving beyond running their own infrastructure and focusing more on their core business goals, Microsoft provides Windows Azure, a platform for developing, deploying and running applications in the cloud, as a complement to the Windows Server platform offerings, helping customers realize the full benefits of running IT as a Service.  Through the combination of these platforms, Microsoft offers a comprehensive set of cloud computing solutions today—spanning the infrastructure (IaaS), platform (PaaS), and software (SaaS) layers, with a common framework of tools between them. Customers will be able to choose between, or blend a combination of, private and public cloud models depending on what best suits their needs.  For any infrastructure model they choose, they will have access to System Center’s complete, integrated monitoring and management.

    In the end, organizations that implement our cloud computing solutions can realize the benefits of reduced IT complexity, hardware costs, and software maintenance costs, while having more time to commit to innovating and moving their business forward.

    You can learn more about the advantages of Microsoft’s approach vs. VMware by downloading our whitepaper, “How Microsoft is Writing the Future of Cloud Computing.”

     

  • Modular Datacenters and Dogfooding the Cloud at Microsoft

    Welcome to 2011!!  Hope that the New Year is starting off on the right foot for you.

    As I began digging out of emails after the holiday break I came across a couple interesting items that peel back the skin on some of the internal activities happening here at Microsoft that I wanted to share with you.

    The first was a blog post from the Microsoft Global Foundation Services team titled “Shedding Light on Our New Cloud Farms”.  The post talks about the work going on building out their next-generation, modular datacenter facilities.  I found the information on ITPACs (IT Pre-Assembled Components) very interesting: these are pre-manufactured, fully assembled modules, built with a focus on sustainable materials such as steel and aluminum, that can house as few as 400 servers or as many as 2,000, significantly increasing flexibility and scalability.  Don’t miss the ITPAC video at this page.  Additionally, the self-running PowerPoint video here shows the actual Modular Datacenter PACs at the Quincy, WA facility.  If you’re interested in this type of behind-the-scenes info on Microsoft’s own cloud farms, I urge you to bookmark the Global Foundation Services blog or subscribe to their RSS feed.

    The other piece I wanted to highlight is a story on InfoWorld by Eric Knorr titled “Microsoft CIO: We’re dog-fooding the cloud”, which is an interview with Microsoft’s CIO Tony Scott.  At Microsoft, we refer to “dogfooding” as what we do when we’re testing pre-released versions of our software and services with our internal employees.  Tony discusses dogfooding in the interview with Eric, touching on topics like Windows Azure, Office 365, and the private cloud.  I urge you to give the article a read, and also check out my previous post from November on the topic here, which included an interview with Tony on CIO.com.  Here’s a video I included in that post as well:

    You can find more information on Microsoft’s business related cloud offerings at the Cloud Power site. 

    As always, let me know if you have any comments or questions on this post and I’ll try to get back to you as quickly as possible.

    Thanks for your time today - Larry

  • Office 365 for enterprises: Part 5 - IT Control and Efficiency

    Control in enterprise IT is a balancing act. Automation and outsourcing save time and money, but if done wrong, can give up too much and make it harder to make technology work for your business. Office 365 is designed to provide the right balance, enabling you to get the value and streamlined management of the cloud while keeping control of the things that matter to your business. In essence, it allows you to offload the tedious, repetitive, time-consuming aspects of IT management such as server maintenance while keeping or even enhancing control over features, policies, and access.

     

    One way that Office 365 keeps you in control is through effective communication and transparency. Today with BPOS, and continuing with Office 365, we have a service health portal, which provides up-to-date information on service availability. We've been listening, and we've heard loud and clear that customers want full transparency into the status of their service. This information will also be available through RSS feeds and other types of communication. Additionally, 24/7/365 IT-level support for enterprises provides you with access to the expertise you need to resolve issues, while you stay in control of the user relationship.

     

    In terms of managing the technology itself, Office 365 provides a simplified, streamlined, Web-based management tool. You retain control over user management and service configuration so you can tailor services to the way your company does business. You can even automate management tasks and reporting using Remote PowerShell. Finally, role-based access control enables you to delegate specific capabilities to specialist users, for example, enabling a compliance officer to perform multi-mailbox search so IT staff doesn't have to.

     

    These are just a few of the ways Office 365 for enterprises is going to help your business run smoother and make your life easier. But let's also hear from you! What traditional IT tasks are you most excited to offload? Software updates and patches? Hardware upgrades? Let us know in the comments below.

     

    In our next post we will wrap up the Office 365 for enterprises series by covering how we provide robust security and reliability for your enterprise. For more from this series, visit the Office 365 blog.

     

    -Allen, Office 365 Product Manager

  • Consumerization of IT and Cloud Computing

    The Consumerization of IT, generally defined as the development of technologies for consumers that make their way into the enterprise, has been a trendy topic over the last couple of years. With the International CES consumer technology tradeshow happening in Las Vegas right now, I've been thinking about the topic.  I also came across an interesting article from December 28 on CIO Update about it titled "Time to Embrace the Consumerization of IT".  The story touches on a variety of topics and talks about how cloud computing, along with mobile technologies, is a key part of accelerating the adoption of consumer technologies in IT.

    A couple of months ago I received my Windows Phone 7 mobile device and have been having a great time exploring it.  One of my favorite features is the integration between Microsoft OneNote in Office Mobile on the phone, Office Web Apps on SkyDrive online, and Office 2010 on my desktop.  When I make notes in OneNote on my Windows Phone, I can then view them online from a web browser by logging into my SkyDrive account.  When I get to my network-connected computer, the notes are seamlessly synced to OneNote in Office 2010, where I can see all the updates and also make edits or additions.  Of course, all of this works in the reverse direction as well, or in any combination.  To me this anywhere access is super powerful and just the beginning of what will be possible.

    What makes this magic possible?  Cloud computing.  It's the glue that brings my mobile, desktop and online experiences together, providing me with access anywhere and anytime I have an internet connection.  This also brings together technologies from Microsoft, such as the Windows Live and SkyDrive cloud infrastructure, which were previously targeted primarily at consumers.  These technologies are now providing me with business services, as they've evolved from consumer offerings into business-enabled services.

    Other examples of the consumerization of IT include IM (instant messaging), which has primarily been a consumer technology in the form of Windows Live Messenger from Microsoft.  Now, however, with IM being part of Microsoft Lync and the Office 365 offerings, I couldn't imagine living without it in the business and enterprise environment.  The cloud forms the foundation for the Microsoft Lync Online offerings as well.

    To me it's personally exciting to see the melding of these capabilities and to begin thinking about the possibilities.  We're really just beginning to scratch the surface of what will be possible in an anywhere, anytime connected world.  Many of these scenarios are enabled through cloud computing services on the backend.

    For more on Microsoft cloud offerings be sure to check out the Cloud Power site.

  • Thought Leaders in the Cloud: Talking with Barton George, Cloud Computing Evangelist at Dell

    Barton George joined Dell in 2009 as the company's cloud computing evangelist. He acts as Dell's ambassador to the cloud computing community and works with analysts and the press. He is responsible for messaging as well as blogging and tweeting on cloud topics. Prior to joining Dell, Barton spent 13 years at Sun Microsystems in a variety of roles that ranged from manufacturing to product and corporate marketing. He spent his last three years with Sun as an open source evangelist, blogger, and driver of Sun's GNU/Linux strategy and relationships.

    In this interview, we discuss:

    • Just do it - While some people are hung up arguing about what the cloud is, others are just using it to get stuff done
    • Evolving to the cloud - Most organizations don't have the luxury to start from scratch
    • Cloud security - People were opposed to entering their credit card numbers in the early days of the internet; now it's common. Cloud security perceptions will follow a similar trajectory
    • Cost isn't king - For many organizations, it's the "try something quickly and fail fast" agility that's drawing people to the cloud, not just cost savings
    • The datacenter ecosystem - The benefits of looking at the datacenter as a holistic system and not individual pieces

    Robert Duffner: Could you take a minute to introduce yourself and tell us a little bit about your experience with cloud computing?

    Barton George: I joined Dell a little over a year ago as cloud evangelist, and I work with the press, analysts, and customers talking about what Dell is doing in the cloud. I also act as an ambassador for Dell to the cloud computing community. So I go out to different events, and I do a lot of blogging and tweeting.

    I got involved with the cloud when I was at a small company right before Dell called Lombardi, which has since been purchased by IBM. Lombardi was a business process management company that had a cloud-based software service called Blueprint.

    Before that, I was with Sun for 13 years, doing a whole range of things from operations management to hardware and software product management. Eventually, I became Sun's open source evangelist and Linux strategist.

    Robert: You once observed that if you asked 10 people to define cloud, you'd get 15 answers. [laughs] How would you define it?

    Barton: We talk about it as a style of computing where dynamically scalable and often virtualized resources are provided as a service. To simplify that even further, we talk about it as IT provided as a service. We define it that broadly to avoid long-winded discussions akin to how many angels can dance on the head of a pin. [laughs]

    You can really spend an unlimited amount of time arguing over what the true definition of cloud is, what the actual characteristics are, and the difference between a private and a public cloud. I think you do need a certain amount of language agreement so that you can move forward, but at a certain point there are diminishing returns. You need to just move forward and start working on it, and worry less about how you're defining it.

    Robert: There are a lot of granular definitions you can put into it, but I think you're right. And that's how we look at it here at Microsoft, as well. It's fundamentally about delivering IT as a service. You predict that traditional, dedicated physical servers and virtual servers will give way to private clouds. What's led you to that opinion?

    Barton: I'd say that there's going to be a transition, but I wouldn't say that those old models are going to go away. We actually talk about a portfolio of compute models that will exist side by side. So you'll have traditional compute, you'll have virtualized compute, you'll have private cloud, and you'll have public cloud.

    What's going to shift over time is the distribution between those four big buckets. Right now, for most large enterprises, there is a more or less equal distribution between traditional and virtualized compute models. There really isn't much private cloud right now, and there's a little bit of flirting with the public cloud. The public cloud stuff comes in the form of two main buckets: sanctioned and unsanctioned.

    "Sanctioned" includes things like Salesforce, payroll, HR, and those types of applications. The "unsanctioned" bucket consists of people in the business units who have decided to go around their IT departments to get things done faster or with less red tape.

    Looking ahead, you're going to have some traditional usage models for quite a while, because some of that stuff is cemented to the floor, and it just doesn't make sense to try and rewrite it or adapt it for virtualized servers or the cloud.

    But what you're going to see is that a lot of these virtualized offerings are going to be evolved into the private cloud. Starting with a virtualized base, people are going to layer on capabilities such as dynamic resource allocation, metering, monitoring, and billing.

    And slowly but surely, you'll see that there's an evolution from virtualization to private cloud. And it's less important to make sure you can tick off all the boxes to satisfy some definition of the private cloud than it is to make continual progress at each step along the way, in terms of greater efficiency, agility, and responsiveness to the business.

    In three to five years, the majority of folks will be in the private cloud space, with still pretty healthy amounts in the public and virtualized spaces, as well.

    Robert: As you know, Dell's Data Center Solutions Group provides hardware to cloud providers like Microsoft and helps organizations build their own private clouds. How do you see organizations deciding today between using an existing cloud or building their own?

    Barton: Once again, there is a portfolio approach, rather than an either-or proposition. One consideration is the size of the organization. For example, it's not unusual for a startup to use entirely cloud-based services. More generally, decisions about what parts a business keeps inside are often driven by keeping sensitive data and functionality that is core to the business in the private cloud. Public cloud is more often used for things that are more public facing and less core to the business.

    We believe that the IT department needs to remake itself into a service provider. And as a service provider, they're going to be looking at this portfolio of options, and they're going to be doing "make or buy" decisions. In some cases, the decision will be to make it internally, say, in the case of private cloud. Other times, it will be a buy decision, which will imply outsourcing it to the public cloud.

    The other thing I'd say is that we believe there are two approaches to getting to the cloud: one is evolutionary and the other one is revolutionary. The evolutionary model is what I was just talking about, where you've made a big investment in infrastructure and enterprise apps, so it makes sense to evolve toward the private cloud.

    There are also going to be people who have opportunities to start from ground zero. They are more likely to take a revolutionary approach, since they're not burdened with legacy infrastructure or software architecture. Microsoft Azure is a good example. We consider you guys a revolutionary customer, because you're starting from the ground up. You're building applications that are designed for the cloud, designed to scale right from the very beginning.

    Some organizations will primarily follow one model, and some will follow the other. I would say that right now, 95% of large enterprises are taking the evolutionary approach, and only 5% are taking a revolutionary approach.

    People like Microsoft Azure and Facebook that are focused on large scale-out solutions with a revolutionary approach are in a small minority. Over time, though, we're going to see more and more of the revolutionary approach, as older infrastructure is retired.

    Robert: Let me switch gears here a little bit. You guys just announced the acquisition of Boomi. Is there anything you can share about that?

    Barton: I don't know any more than what I've read in the press, although I do know that the Boomi acquisition is targeted to small and medium-sized businesses. We target that other 95% on the evolutionary side with what we call Virtual Integrated System. That's the idea of starting with the already existing virtualized infrastructure and building capabilities on top of it.

    Robert: The White House recently rolled out Cloud Security Guidelines. At Microsoft and Dell, we've certainly spent a lot of time dealing with technology barriers. How much of the resistance has to do with regulation, policy, and just plain fear? And how much do things like cloud security guidelines and accreditation do to alleviate these types of concerns?

    Barton: To address those issues, I think you have to look at specific customer segments. For example, HIPAA regulations preclude the use of public cloud in the medical field. Government also has certain rules and regulations that won't let them use public clouds for certain things. But as they put security guidelines in place, that's going to, hopefully, make it possible for the government to expand its use of public cloud.

    I know that Homeland Security uses the public cloud for their public-facing things, although obviously, a lot of the top secret stuff that they're doing is not shared out on the public cloud. If you compare cloud computing to a baseball game, I think we're maybe in the bottom of the second inning. There's still quite a bit that's going to happen.

    One of the key areas where we will make a lot of progress in the next several years is security, and I think people are going to start feeling more and more comfortable.

    I liken it to when the Internet first entered broad use, and people said, "I would never put my credit card out on the Internet. Anyone could take it and start charging up a big bill." Now, the majority of us don't think twice about buying something off of the web with our credit cards, and I think we're going to see analogous change in the use of the cloud.

    Robert: Regardless of whether you have a public or private cloud, what are your thoughts on infrastructure as a service and platform as a service? What do you see as key scenarios for each of those kinds of clouds?

    Barton: I think infrastructure as a service is a great way to get power, particularly for certain things that you don't need all the time. For example, I was meeting with a customer just the other day. They have a web site that lets you upload a picture of your room and try all kinds of paint colors on it. The site renders it all for you.

    They just need capacity for a short period of time, so it's a good example of something that's well suited to the public cloud. They use those resources briefly and then release them, so it makes excellent sense for them.

    There's also a game company we've heard about that does initial testing of some of their games on Amazon. They don't know if it's going to be a hit or not, but rather than using their own resources, they can test on the public cloud, and if it seems to take off, they then can pull it back in and do it on their own.

    I think the same thing happens with platform as a service. Whether you have the platform internal or external, it allows developers to get access to resources and develop quickly. It allows them to use resources and then release them when they're not needed, and only pay for what they use.

    Robert: In an article titled "Cloud Computing: the Way Forward for Business?," Gartner was quoted as predicting that cloud computing will become mainstream in two to five years, due mainly to cost pressures. When organizations look past the cost, though, what are some of the opportunities you think cloud providers should really be focusing on?

    Barton: I think it's more about agility than cost, and that ability to succeed or fail quickly. To go back to that example of the game company, it gives them an inexpensive testing environment they can get up and going easily. They can test it without having to set up something in their own environment that might take a lot more time. A lot of the opportunity is about agility when companies develop and launch new business services.

    The amount of time that it takes to provision an app going forward should, hopefully, decrease with the cloud, providing faster time to revenue and the ability to experiment with less of a downside.

    Robert: Gartner also recently said that many companies are confused about the benefits, pitfalls, and demands of cloud computing. What are some of the biggest misconceptions that you still run into?

    Barton: Gartner themselves put cloud at the very top of the hype cycle for emerging technologies last year, and then six weeks later, they turned around and named it the number one technology for 2010. There are a lot of misconceptions because people have seen the buzz and want to sprinkle the cloud pixie dust on what they offer.

    This is true both for vendors, who want to rename things as cloud, and for internal IT staff who, when asked about the cloud by their CIO, say, "Oh, yes. We've been doing that for years."

    I do think people should be wary of security, and there are examples where regulations will prohibit you from using the cloud. At the same time, you also have to look at how secure your existing environment is. You may not be starting from a perfectly secure environment, and the cloud may be more secure than what you have in your own environment.

    Robert: Those are the prepared questions I have. Is there anything interesting that you'd like to add?

    Barton: Cloud computing is a very exciting place to be right now, whether you're a customer, an IT organization, or a vendor. As I mentioned before, we are in the very early days of this technology, and we're going to see a lot happening going forward.

    In much the same way that we really focused on distinctions between Internet, intranet, and extranet in the early days of those technologies, there is perhaps an artificial level of distinction between virtualization, private cloud, and public cloud. As we move forward, these differences are going to melt away, to a large extent.

    That doesn't mean that we're not going to still have private cloud or public cloud, but we will think of them as less distinct from one another. It's similar to the way that today, we keep certain things inside our firewalls on the Internet, but we don't make a huge deal of it or regard those resources inside or outside as being all that distinct from each other.

    I think that in general, as the principles of cloud grab hold, the whole concept of cloud computing as a separate and distinct entity is going to go away, and it will just become computing as we know it.

    I see cloud computing as giving IT a shot in the arm and allowing it to increase in a stair-step fashion, driving what IT's always been trying to drive, which is greater responsiveness to the business while at the same time driving greater efficiencies.

    Robert: One big trend that we believe is going to fuel the advance of cloud computing is the innovation happening at the data center level. It's one thing to go and either build a cloud operating system or try to deploy one internally, but it's another thing to really take advantage of all the innovations that come with being able to manage the hardware, network connections, load balancers, and all the components that make up a data center. Can you comment a little bit about how you see Dell playing into this new future?

    Barton: That's really an area where we excel, and that's actually why our Data Center Solutions Group was formed. We started four or five years ago when we noticed that some of our customers, rather than buying our PowerEdge servers, were all of a sudden looking at these second-tier, specialized players like Verari or Rackable. Those providers had popped up and identified the needs of these new hyperscale providers that were really taking the whole idea of scale-out and putting it on steroids.

    Dell had focused on scale starting back in 2004, but this was at a whole other level, and it required us to rethink the way we approach the problem. We took a step back and realized that if we want to compete in this space of revolutionary cloud building, we needed to take a custom approach.

    That's where we started working with people like Microsoft Azure, Facebook, and others, sitting down with customers and focusing on the applications they are trying to run and the problems they are trying to solve, rather than starting with talking about what box they need to buy. And then we work together with that customer to design a system.

    We learned early on that customers saw the system as distinct from the data center environment. Their orientation was to say, "Don't worry about the data center environment. That's where we have our expertise. You just deliver great systems and the two will work together." But what we found is if you really want to gain maximum efficiencies, you need to look at the whole data center as one giant ecosystem.

    For example, with one customer, we have decided to remove all the fans from the systems and the rack itself and put gigantic fans in the data center, so that the data center becomes the computer in and of itself. We have made some great strides by thinking of it in that kind of holistic way. Innovation at the data center level is crucial to overall excellence in this area.

    We've been working with key partners to deliver this modular data center idea to a greater number of people, so this revolutionary view of the data center can take shape more quickly. And then, because they're modular, like giant Lego blocks, you can expand these sites quickly. But once again, the whole thing has to be looked at as an ecosystem.

    Robert: Thanks a lot for your time. This has been a great conversation.

    Barton: Thank you.


  • Office 365 for enterprises: Part 4 - Works With What You Know

    Sometimes people refer to the cloud as a "disruptive" technology. But that's really the wrong word to describe Office 365. Constructive? Yes. Practical? That, too. Microsoft understands that businesses don't care for surprises. We have gone to great lengths to make Office 365 familiar and easy to use.

    New capabilities, no matter how powerful in theory, don't deliver value unless users use them. That's why Office 365 is all about lighting up Office - the desktop and Web-based productivity tools that users rely on every day. For example, it takes little to no training for users to take advantage of the presence information in Office that becomes available with Office 365. It just works, immediately, easily enabling more efficient collaboration and decision-making.

    Now let's look at how Office 365 doesn't disrupt your datacenter. First of all, you can choose a plan that enables coexistence. That means you can keep your on-premises Exchange Server and Lync Server and integrate them with Office 365 services. Therefore if some users are on Exchange Server and others are using Exchange Online, they can share free/busy information-no silos, no disconnect. If you want to take advantage of the PBX replacement scenario available with Lync Server, you can do that while using Exchange in the cloud. Coexistence* gives you the flexibility to migrate to the cloud at your own pace, keep certain users on-premises for compliance or other reasons, and get the right mix of features and architecture for your business.

    Notice we said "choose a plan." That's also an important point-there's a range of plans available to enterprises, and you can mix and match them as you need. If some users need Office Web Apps and others don't, you can do that. If you have some kiosk workers who don't have dedicated PCs and just need lightweight, Web-based access, you can do that. If you need coexistence, you can do that, too. It's your choice and you can get what works for your business.

    Of course, one of the benefits of cloud-based productivity services is that you get the latest features without having to do much. But, our approach to this style of innovation is business-focused-that is, surprisingly surprise-free. We plan to update the services frequently with deliberate, responsible, fully-tested capabilities and upgrades. It's innovation that is customer-centered rather than for its own sake.

    Hopefully, this post has given you some insight into how Microsoft takes a measured, customer-centric approach to productivity in the cloud. In the next post, we'll talk about how we deliver enterprise-class security and reliability so you don't have to.

    * Check out the Exchange Server Deployment Assistant, which provides customized instructions for your business about how to configure rich coexistence

    Thanks!

    Allen Filush, Office 365 Product Manager

  • Public Sector Customers Across Globe Choose Microsoft Cloud Offerings

    It's been an interesting last couple of weeks for Microsoft and Public Sector customers around the globe related to cloud technologies.  At the beginning of December there was the news that Microsoft's Cloud Infrastructure received FISMA certification, which means customers of any size can benefit from highly-focused testing and monitoring, automated patch delivery, cost-saving economies of scale, and ongoing security improvements delivered by this infrastructure.

    Then last week there was the news that the USDA had been looking for a flexible, reliable and secure cloud solution for their messaging and collaboration needs and in the end selected Microsoft Cloud offerings, including Exchange Online, SharePoint Online, Office Communications Online and Office Live Meeting.  They'll be moving 120,000 people to the solutions and in the process consolidating 21 different messaging and collaboration systems.

    As part of the story, there was a great analogy in a quote from Chris Smith, chief information officer for the USDA, which was:

    “Basically, the car we owned was getting ready for a major engine overhaul,” he said. “All our servers were at least three years old. We’re going from owning the car and paying for the tires, the oil, and the upkeep to basically buying a Zip car that’s wherever we need it, whenever we need it.”

    USDA decided to go with Microsoft’s cloud offering after years of watching the online messaging and collaboration space mature, Smith said. “We really ended up in the right place at the right time,” he said.

    Finally there was a piece of news this week from Europe where Transport for London (TfL) will be moving their Trackernet data feed to the Microsoft Windows Azure platform.  Trackernet is described as an innovative new real-time display of the status of the London Underground 'Tube' network.

    Mark Taylor, Director of DPE at Microsoft UK, said "TfL asked for a system able to handle in excess of seven million requests per day, as well as being able to scale to handle unpredictable events like snow days."

    Michael Gilbert, TfL's Director of Information Management, had this to say about the offering: "TfL, with the help of Microsoft, has created a strategic, scalable technical platform that will aid us in making real-time data sets available; the first of which is Trackernet data."

    It's great to see government agencies and other public sector groups beginning to take advantage of the benefits that cloud computing offers, such as reduced hardware costs and greater scalability and flexibility, as well as the proven, enterprise-ready software and support solutions that Microsoft brings to the table.

  • Three Steps CIOs Should Take Now While Planning for a Private Cloud

    Many people in IT-related roles today are aware that moving to a private cloud represents a paradigm shift up and down the IT stack, from how IT pros will deploy services to how end users will consume them. And while the cloud's benefits are attractive, as with any shift in IT infrastructure, CIOs and IT strategists need to do some careful planning to achieve those benefits. Here are three steps that IT managers can take to get ready for cloud computing:

    Identify the Low-Hanging Fruit

    Practically every organization will have one or more applications that are obvious cloud candidates. Sit down with your IT workers and identify these applications. Then talk about the most appropriate cloud infrastructure for these applications. Remember that cloud computing isn't a whole-hog architecture. You don't need to wait until your entire datacenter has migrated into a slick private cloud infrastructure before deploying applications. You can build a smaller private cloud infrastructure first; or rent some private cloud infrastructure from a third party; or deploy an application on Windows Azure; or even simply access an application as a cloud service, like Office 365. No need to boil the ocean. The important thing is to let your IT workers and your users experience cloud computing for themselves, which will pay dividends later on when you begin bigger or more difficult cloud projects.

    Develop Your IT Workers

    The cloud's benefits are obvious to many CIOs and high-level IT managers - cost, ease of deployment, management automation, easy scalability and more. But to IT workers dodging daily bullets in the trenches of your datacenter, those aren't everyday benefits. Making sure those workers are trained in implementing a private cloud solves two problems: First, it will help allay concern that the cloud will make them obsolete. Once IT workers see that the building blocks of private cloud infrastructures are similar to the platforms they're already managing in the datacenter - Windows Server 2008 R2 & Hyper-V, Active Directory Federated Identity, the System Center management suite - they'll realize that many of their existing skills are applicable to deploying and managing a private cloud infrastructure, while they'll also have the opportunity to learn new skills to help develop their careers. That will help get them more excited about this paradigm shift, and you'll benefit from their passion and innovation instead of becoming bogged down managing unnecessary fears.

    Second, with some skilled private cloud implementers on your staff, you'll be able to get an accurate idea of how much work and new infrastructure a private cloud project might require. While your trained IT staff is working to prep the infrastructure you've already got, you can manage procurement to make sure that the additional pieces they need are available when they need them. Microsoft will be releasing quite a bit of cloud training over the coming months. But to get started right away, check the Microsoft cloud pages, TechNet's cloud resources and especially the Hyper-V Cloud Fast Track pages for in-depth technical guidance. There's much more to come, so check back often.

    Design the Capabilities of Your Private Cloud

    Once you've got some private cloud expertise on your staff, you should be ready for the third step. Implementing a cloud solution around low-hanging fruit applications should be relatively easy. Now examine the rest of your software portfolio. Discuss with your front-line business managers and workers where these applications need to go in the future. Then act as a bridge between those requests and your IT staff. A private cloud is a powerful resource with many possibilities, not all of which might be required for your business.

    For example, does your company require fast global reach? That might necessitate the ability to access a public cloud - which means your IT staff needs to examine what's needed to bridge the gap between your private cloud and Windows Azure, including Azure-capable applications and Active Directory Federated Identity management. Is there benefit to allowing your users access to self-service features to provision new applications, servers and other IT resources? Again, if yes, then your IT staff needs to examine the identity management features needed to enable these features as well as the Virtual Machine Manager Self-Service Portal 2.0. Would access to instant additional private cloud resources via a third-party hoster be beneficial? This is an exciting and burgeoning field for hosters and large IT consultancies, so start looking around now for companies with whom to partner on these requirements.

    Properly implemented, a private cloud is a powerful and highly flexible infrastructure fabric. Timely and thorough planning can help ensure that your organization gets the most out of such a resource.

  • Thought Leaders in the Cloud: Talking with Maarten Balliauw, Technical Consultant at RealDolmen and Windows Azure Expert

    Maarten Balliauw is an Azure consultant, blogger, and speaker, as well as an ASP.NET MVP. He works as a technical consultant at RealDolmen. Maarten is a project manager and developer for the PHPExcel project on CodePlex, as well as a board member for the Azure User Group of Belgium (AZUG.BE).

    In this interview we discuss:

    • Take off your hosting colored glasses – architecting for the cloud yields benefits far beyond using it as just a mega-host
    • Strange bedfellows – PHP on Azure and why it makes sense
    • Average is the new peak – The cloud lets you pay for average compute, while on-premises makes you pay for peak capacity
    • Datacenter API – It makes sense for cloud providers to expose APIs for many of the same reasons it makes sense for an OS to expose APIs
    • Data on the ServiceBus – Transferring data securely between your datacenter and your public cloud applications

    Robert Duffner: Could you tell us a little bit about yourself to get us started?

    Maarten Balliauw: I work for a company named RealDolmen, based in Belgium. Historically, I've worked with PHP and ASP.NET, and since joining RealDolmen, I have been able to work with both technologies. While my day to day job is mostly in ASP.NET, I do keep track of PHP, and I find it really interesting to provide interconnections between the technologies.

    And Azure is also a great platform to work with, both from an ASP.NET perspective and a PHP perspective. Also, in that area, I'm trying to combine both technologies to get the best results.

    Robert: You recently spoke at REMIX 10, and in your presentation, you talked about when to use the cloud and when not to use the cloud. What's the guidance that you give people?

    Maarten: The big problem with cloud computing at this time is that people are looking at it from a perspective based on an old technology, namely classic hosting. If you look at cloud computing with fresh eyes, you will see that it is really an excellent opportunity, and that no matter what situation you are in, your solution will always be more reliable and a lot cheaper if you do it in the cloud.

    Still, not every existing application can be ported to the cloud at this time. One important metric to use in choosing between cloud and non-cloud deployments is your security requirements. Do you care about keeping your data and applications on premises versus in the cloud?

    Robert: We've obviously engineered Azure to be an open platform that runs a lot more than just .NET, including dynamic languages like Python and PHP that you mentioned, but also languages like Java. But you talk quite a bit about PHP on Azure. From your perspective, why would anyone want to do that when there are so many options for PHP hosting today?

    Maarten: You can ask the same question about ASP.NET hosting. There are so many options to host your .NET applications somewhere: on a dedicated server, on a virtual private server, on a cloud somewhere. So I think the same question applies to PHP, Java, Ruby, and whatever language you're using.

    Azure provides some quite interesting things with regard to PHP that other vendors don't have. For example, the service bus enables you to securely connect between your cloud application and your application that's sitting in your own data center. You can leverage that feature from .NET as well as from PHP. So it's really interesting to have an application sitting in the cloud, calling back to your on premises application, without having to open any firewall at all.

    Robert: In your talk, you also point to the Azure solution accelerators for Memcached, MySQL, MediaWiki, and Tomcat. In your experience, are most people even aware that these kinds of things run on Azure?

    Maarten: I'm not sure, because, traditionally, the Microsoft ecosystem is quite simple. There's Microsoft as a central authority offering services, and then there are other vendors adding additional functionality, bringing some other alternative components. In the PHP world, for example, there is no such thing as a central authority, so information is really distributed across different communities, companies, and user groups.

    I think some part of the message from Azure is already out there in all these communities and all these small technology silos in the PHP world, but not everything has come through. So I'm not sure if everyone is aware that all these things can actually run on Azure, if you're using PHP or MySQL or whatever application that you want to host on Azure.

    Robert: In another talk, you mentioned "Turtle Door to Door Deliveries," and how they estimated needing six dedicated servers for peak load, but because the load is fairly elastic, they saw savings with Azure. Can you talk a little bit more about that example?

    Maarten: That was actually a fictitious door-to-door delivery company, like DHL, UPS, or FedEx, which we created for that talk. They knew the load was going to be around six dedicated servers for peaks, but there were times of the day where only one or two servers would be needed. And when you're using cloud computing, you can actually scale dynamically and, for example, during the office hours have four instances on Azure, and during the evening have six instances, and then in the night scale back to two instances.

    And if you take an average of the number of instances that you're actually hosting, you will see that you're not actually paying for six instances, but only for three or, at most, four. That means you have the extra capacity of two machines without paying for it.
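    The saving described here is easy to check with a quick back-of-the-envelope calculation. The schedule below is hypothetical, loosely following the numbers in the example (four instances during office hours, six in the evening, two at night):

    ```python
    # Hypothetical daily scaling schedule from the example:
    # (hours in the block, instances running during that block)
    schedule = [
        (8, 2),   # midnight-08:00: two instances
        (10, 4),  # 08:00-18:00 (office hours): four instances
        (6, 6),   # 18:00-midnight (evening peak): six instances
    ]

    total_hours = sum(hours for hours, _ in schedule)
    instance_hours = sum(hours * n for hours, n in schedule)
    average = instance_hours / total_hours   # what you actually pay for
    peak = max(n for _, n in schedule)       # what you'd provision on-premises

    print(f"peak capacity: {peak} instances")
    print(f"average paid:  {average:.2f} instances")
    ```

    With these assumed numbers, the average comes out just under four instances against a peak of six, which matches the three-to-four figure in the example.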

    We have done a project kind of like that, for a company in Belgium, doing timing on sports events. Most of these events have a maximum of 1,000 or 2,000 participants, but there are several per year with 30,000 participants.

    We used the same technique for that project as in the example in the talk. We scaled them up to 18 instances during peaks, and at night, for example, we scaled them back to two instances. They actually had the capacity of 18 instances during those peak moments, but on average, they only had to pay for seven instances.

    Robert: One important characteristic of clouds is the ability to control them remotely through an API. Amazon Web Services has an API that lets you control instances, and you recently wrote a blog post showing a PowerShell script that makes an app auto-scale out to Azure when it gets overloaded. What are some of the important use cases for cloud APIs?

    Maarten: If you have a situation where you need features offered by a specific cloud, then you would need those cloud APIs. For example, if you look at the PHP world, there's an initiative, the Simple Cloud API, which is one interface to talk to storage on different clouds, like Amazon, Azure, and Rackspace. They actually provide the common denominator of all these APIs, and you're not getting all the features of the exact cloud that you are using and testing.

    I think the same analogy goes for why you would need cloud APIs. They're just a direct means of interacting with the cloud platform, not only with the computer or the server that you're hosting your application on but, really, a means of communicating with the platform.

    For example, if you look at diagnostics, getting logs and performance counters of your Azure application on a normal server, you would log in through a remote desktop and have a terminal to look at the statistics and how your application is performing and things like that. But if you have a lot of instances running on Azure, it would be difficult to log in to each machine separately.

    So what you can do then is use the diagnostics API, which lets you actually gather all this diagnostic data in one location. You have one entry point into all the diagnostics of your application, whether it's running on one server or 1,000 servers.

    Robert: That's a great example. You also wrote an article titled "Cost Architecting for Windows Azure," talking about the need to architect specifically for the cloud to get all the advantages. Can you talk a little bit about things people should keep in mind when architecting for the cloud?

    Maarten: You need to take costs into account when you're architecting a cloud application. If you take advantage of all the specific billing situations that these platforms have to offer, you can actually reduce costs and get a highly cost effective architecture out there.

    For example, don't host your data in a data center in the northern US and your application in a data center in the southern US, because you will pay for the traffic between both data centers. If your storage is in the same data center as your hosted application, you won't be paying for internal traffic.

    There are a lot of factors that can help you really tune your application. For example, consider the case where you are checking a message queue, and you also have a process processing messages in this queue. Typically in this case, you would poll the queue for new messages, and if no messages are there, you would continue polling until there's a message. Then you process the message, and then you start polling again. You may be polling the queue a few times a second.

    Every time you poll the queue is a transaction. Even though transactions are not that expensive on Windows Azure, if you have a lot of transactions, it can cost you substantial money. Therefore, if you poll less frequently, or with a back-off mechanism that causes you to wait a little longer when there are no messages, you can drastically reduce your costs.
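    A rough simulation makes the point concrete. This is a sketch, not real Azure queue code: it simply counts how many billable empty polls occur during an idle hour, with and without an exponential back-off (the delay values and the cap are made-up assumptions):

    ```python
    def polls_in_idle_window(seconds, base_delay=0.5, max_delay=30.0, backoff=True):
        """Count billable (empty) polls during an idle window of `seconds`."""
        elapsed, delay, polls = 0.0, base_delay, 0
        while elapsed < seconds:
            polls += 1                             # each poll is one billable transaction
            elapsed += delay                       # wait out the current delay
            if backoff:
                delay = min(delay * 2, max_delay)  # double the wait, up to a cap
        return polls

    # One idle hour, polling twice a second vs. backing off:
    print(polls_in_idle_window(3600, backoff=False))  # 7200 transactions
    print(polls_in_idle_window(3600, backoff=True))   # 125 transactions
    ```

    The exact counts depend on the assumed delays, but the ratio is the point: the back-off version generates under 2% of the transactions. A real worker would also reset the delay to its base value as soon as a message arrives.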

    Robert: One of the key challenges for enterprise cloud applications is that they often can't run as a separate island. Can you talk about some of the solutions that exist when identity and data need to be shared between your data center and the cloud?

    Maarten: Actually, the Windows Azure AppFabric component is completely focused on this problem. It offers the service bus, which you can use to have an intermediate party between two applications. Say you have an application in the cloud and another one in your own data center, and you don't want to open a direct firewall connection between both applications. You can actually relay that traffic through the Windows Azure service bus and have both applications make outgoing calls to the service bus, and the service bus will route traffic between both applications.
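    The relay pattern described here can be illustrated with a toy in-process model. This is not the actual Windows Azure AppFabric API; it's just a sketch of the idea that both parties connect out to a broker and never accept inbound connections:

    ```python
    import queue

    class Relay:
        """Toy stand-in for a relay such as the Azure service bus: each
        endpoint registers with an *outbound* call, so neither side needs
        an inbound firewall rule, and the relay routes messages between them."""

        def __init__(self):
            self._inboxes = {}

        def connect(self, name):
            # An endpoint registers itself by calling out to the relay.
            self._inboxes[name] = queue.Queue()

        def send(self, to, message):
            # The sender likewise only makes an outbound call to the relay.
            self._inboxes[to].put(message)

        def receive(self, name):
            return self._inboxes[name].get_nowait()

    relay = Relay()
    relay.connect("cloud-app")
    relay.connect("on-prem-app")
    relay.send("on-prem-app", "order #42 received")
    print(relay.receive("on-prem-app"))  # -> order #42 received
    ```

    A real relay service adds authentication, buffering, and connection management, but the firewall-friendly topology is the same: all connections point outward from the applications toward the relay.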

    You can also integrate authentication with a cloud application. For example, if you have a cloud application where you want your corporate accounts to be able to log in, you can leverage the Access Control service. That will allow you to integrate your cloud application with your Active Directory, through the AppFabric ACS and an Active Directory federation server.

    If you have an application sitting in your own data center but you need a lot of storage, you can actually just host the application in your data center and have a blob storage account on Azure to host your data. So there are a lot of integrations that are possible between both worlds.

    Robert: You manage a few open source projects on CodePlex. Can you talk a little bit about the opportunity for open source on platforms like Azure or those operated by other cloud providers?

    Maarten: Consider, for example, SugarCRM, which is an open source CRM system that you can just download, install, modify, view the code, and things like that. What they actually did was take that application and also host it in a cloud. They still offer their product as an open source application that you can download and use freely. But they also have a hosted version, which they use to drive some revenue and to be able to pay some contributors.

    They didn't have to invest in a data center. They just said, "Well, we want to host this application," and they hosted it in a cloud somewhere. They had a low entry cost, and as customers start coming, they will just increase capacity.

    Normally, if a lot of customers come, the revenue will also increase. And by doing that, they actually now have a revenue model for their open source project, which is still free if you want to download it and use it on your own server. But by using the hosted service that's hosted somewhere on a cloud, they actually get revenue to pay contributors and to make the product better.

    Robert: Maarten, I appreciate your time. Thanks for talking to the Windows Azure community.

    Maarten: Sure. Thank you.


  • From 2010 to 2011: Walking the Cloud Talk

    Contributed Article By David Greschler, Director, Virtualization and Cloud Strategy, Server and Tools Business at Microsoft
    Original post from VMBlog.com

    2010 was the year of the cloud. We saw some massive changes across the industry as IT decision makers and technology vendors wrestled with the shift to cloud computing. In particular, the industry had to grapple with many differing - and often conflicting - definitions of cloud computing. Certainly virtualization was often part of the discussion; however, 2010 brought a broader understanding that virtualization was no longer the end of the road, but instead a helpful stepping stone to the agile, responsive world of cloud computing.

    With an understanding of the cloud possibilities established, I believe 2011 is the year that IT departments will really begin to develop their cloud plans for implementation. Gartner has estimated that worldwide cloud services revenue (including public and private services) will reach $148.8 billion in 2014.

    As I see it, virtualization experts are poised to help their companies make that shift from virtualization to cloud computing and shape the cloud computing strategy that matches their needs.  To that end, here are some specific things IT pros - especially virtualization experts - should consider when planning for their own careers and company cloud implementation in 2011 and beyond:

    • One size can't and won't fit all: 2011 will see organizations begin implementing clouds of all shapes and sizes, be they public, private or a combination of the two. Microsoft has already worked with customers on various types of cloud deployments:
    • Some, like Aer Lingus, the European airline that developed an online trip-planning application, are using the Windows Azure platform to create and run applications that scale better and provide a better customer experience.
    • Others, such as translation specialist Lionbridge, have deployed a private cloud solution using Hyper-V and System Center, enabling them to take their existing IT investments in virtualization to the next level.

    The more robust cloud systems become, the more possible solutions - and therefore combinations of solutions - will be available to IT users.

    • You don't have to start over: In 2010, companies started to realize that moving to the cloud doesn't mean starting from scratch. In fact, you can build private cloud solutions on top of existing datacenter investments - Windows Server, Hyper-V, and System Center comprise fully integrated server, virtualization, and management solutions. Microsoft's recently announced Hyper-V Cloud programs and initiatives help IT pros deploy private clouds that build on existing investments. You can also use the same management, identity and application platform across both private and public clouds. For instance, did you know that many native .NET apps can run both locally and in our public cloud, Windows Azure? And with our recently announced VM Role, you will be able to move many Hyper-V virtual machines into the Windows Azure public cloud. In 2011, I predict IT managers will increasingly take advantage of these sorts of use-what-you-have options to ease the addition of cloud computing to their infrastructures.
    • End-to-end management will become a core requirement: In 2010, IT decision makers realized the complexities of adding a cloud component to their existing desktop-to-datacenter environment. More than ever, IT started demanding tools that provide a common management framework and identity model across applications and IT systems, a need that Microsoft's System Center offering addresses. That's part of the reason the System Center product line has become a billion-dollar business. To my point about cloud diversity, as we move into 2011, customers are going to be even more vocal about wanting a single management console that shows them where all their workloads are running, whether physical, virtual, private cloud, hosted or part of a public cloud.
    • Cost and value will continue to be important: Customers are taking a critical look at the value they're getting for their money:
    • Engineering firm CH2M HILL estimates it will save $3 million over the next three to five years by virtualizing field servers.
    • German household appliance manufacturer Miele & Cie has saved an estimated U.S. $1.8 million to date by migrating to a Microsoft solution based on Windows Server 2008 R2 Datacenter with Hyper-V technology - and with the Microsoft System Center data center products, gained a single suite of server management tools that provides visibility into the physical server, the operating system, the hypervisor, and the application layers.

    Virtualization alone might save money, but ultimately IT staff will make decisions based on what they need to get the job done right - and in a number of instances, public clouds will be the best fit.

    • Start small - build for the future: I think one of the biggest changes of mentality will be in how organizations take that very first step toward cloud computing. In 2011, IT pros and decision makers will realize that cloud computing doesn't have to be all or nothing, and that small projects can provide insightful learning opportunities. Customers like Umbraco, which makes an open-source content management system for websites, used Windows Azure to simplify its solution and expand its customer base with an accelerator that allows existing Umbraco implementations to run in the cloud. Based on those lessons, the company plans to move fully to the cloud. My advice for 2011: identify one or two low-risk applications that can serve as starter cloud projects, just as test/dev apps were the first workloads people moved when trying out virtualization.

    So what does all this mean for the IT professional? In the coming years, your job will change, possibly even more than it already has in 2010. You're still going to be worried about workloads and your employers will still look to you to keep things running. But the interesting, strategic and exciting part is that many of you will start thinking of yourselves as "cloud architects."

    My wish - and I'll put this in the "optimistic prediction" category - is that 2011 will be the year IT pros take all the cloud talk from 2010 and create their own cloud plans: walking the cloud walk.

    About the Author

    David Greschler is director of virtualization and cloud strategy within Microsoft's Server and Tools Business. Greschler is focused on virtualization solutions and systems management tools for the desktop and datacenter.

    Greschler came to Microsoft with the July 2006 acquisition of Softricity. Prior to joining Microsoft, Greschler was co-founder of Softricity, developer of SoftGrid and the originator and leading vendor in the application virtualization industry. With more than 20 years of pioneering experience in the computer field, Greschler has held various positions at the MIT Media Lab and The Computer Museum, and holds numerous virtualization patents. Greschler holds a bachelor's degree from Brandeis University.

    Founded in 1975, Microsoft (Nasdaq "MSFT") is the worldwide leader in software, services and solutions that help people and businesses realize their full potential.

  • Welcome to the Cloud, USDA!

    The past few months have marked a transformative time in government IT, with the State of California, the State of Minnesota and New York City all embracing Microsoft Online Services.  And now that momentum is carrying into the federal sector, with groundbreaking news from the United States Department of Agriculture (USDA), which announced it is migrating 120,000 users to Exchange Online, SharePoint Online and Office Communications Online.  Curt Kolcun, Vice President, Microsoft U.S. Public Sector, also discusses the move in his blog post.

    The good folks at the USDA bring Americans things like the food pyramid, the national forests, the national school lunch program, and organic certification, to name just a few services -- ones that also help keep our food supply safe.  And now, with Microsoft Online Services, they are setting the bar for cloud services in the federal government.

    USDA is the first cabinet-level agency to announce its move to cloud services, and the first federal government agency (period) to deploy a full cloud messaging and collaboration service. While other agencies have just announced their plans or deployed e-mail solutions in sub-agencies, USDA will consolidate 120,000 people across 21 e-mail systems in a highly distributed workforce.  So there is no doubt that other government organizations will be watching an immense enterprise like the USDA as a bellwether for cloud adoption.

    USDA chose Microsoft because it wanted enterprise-grade features that would work with many of the systems and applications they already use and know.  Some of the specific capabilities USDA officials cited are global address lists, full calendar synchronization, integrated voice mail and email, delegated administration, read receipts, distribution lists, offline access and robust security and privacy.

    USDA expects to reduce costs as well as streamline and improve operations with this move. USDA employees will be able to collaborate more easily with colleagues across the nation, and they will get the latest innovations while still using existing devices and applications.  And, with the release of Office 365 in 2011, USDA will also be one of the very first government entities to enjoy the new service.

    So, welcome to the cloud, USDA.  We're glad to have you!

    -Allen Filush, Office 365 Product Manager