• CDW Talks Cloud Computing - Guest blog post

    I wanted to share a guest blog post from Derrek Kim, a Technology Specialist with CDW, on cloud strategies and the role of virtualization in cloud computing.  In the post Derrek shares some perspectives on cloud computing, including both public and private cloud deployments and the results of CDW's recent Cloud Computing Tracking Poll.  Give the post below a read and let me know if you have questions or thoughts in the comments section.  If you're looking for more information on Microsoft's business cloud offerings check out the Cloud Power site also.  Thanks - Larry

    CDW talks Cloud Computing

    It is an interesting time for the IT industry right now; a pivotal point in IT history that has a profound effect on our lifestyle.  The “Cloud” has reached mainstream status, infiltrating everyday life with offerings from email to minute-by-minute status updates on aspects of our personal lives.  Cloud services are increasingly changing how we deploy and use technology.  Many customers I speak with either say it is just another XaaS (substitute X to make your favorite offering as a Service) or believe it simply means running servers in a 3rd-party datacenter.  Though a vast number of IT professionals are still digesting the cloud’s potential, those on the leading edge, or those with a mature IT services model as a pervasive part of their operational framework, understand that the Cloud promises to be the most compelling IT story yet. 

    Currently, the cloud can primarily be divided into two general types: public and private (although there are others, such as community and hybrid).  The public cloud is a bit more mature and is understood as a collection of capabilities that come together to offer just what we ask of it: an application, a specific service, a server, or even an entire infrastructure.  Additionally, the public cloud’s pay-for-what-you-use model is exceptionally cost effective, eliminating upfront capital investments.  Most importantly, you can focus more time on adding business and operational value to your organization’s strategic initiatives rather than chasing down the reason a network port just went offline, for example.

    Cloud’s potential impact on driving efficiencies and cost savings is not yet well defined.  CDW’s recent Cloud Computing Tracking Poll surveyed 1,200 IT professionals from eight business segments to gauge organizations’ progress toward cloud adoption as well as their future plans for moving to cloud solutions. The results are surprising and illustrate how quickly the cloud is being adopted.  For instance, large businesses and higher education institutions lead cloud adoption at 37% and 34%, respectively. 

    Overall Status of Cloud Computing in Organizations 

    It is interesting to note that 50% of the respondents indicate they have not put together a cloud strategy, providing them with an opportunity to take advantage of the many flavors of cloud offerings including SaaS, IaaS or PaaS.

    Business and IT leaders will dictate their rate of progression to the cloud based on organizational goals. In fact, many organizations have already moved one of their most critical applications to the cloud: email. This is a good indication of an organization’s willingness to enable seamless remote access to a critical application running in the cloud.

    There are many services beyond email that can benefit from moving to the cloud. Expect to see many other offerings from major software vendors in the near future.  While the public cloud has dominated the offerings from major providers, we shouldn’t overlook the private and hybrid cloud approaches.

    Private cloud promises much of the same value as the public cloud; we can start to liberate ourselves from the nitty gritty details of IT and start to think (and act) at a higher level.  We want to deploy line of business applications, not hard drives, network cards, servers, operating systems, etc.  Private cloud enables us to take advantage of the benefits of the cloud while knowing that our data and security are still completely within our control or domain.  This does not mean, however, that a private cloud has to be on-premises; it can be housed in a 3rd-party datacenter with proper physical security controls.  Private cloud, no matter the location, is an assembly of technologies, processes, automations and flexibility rolled into one package. 

    For many there is an erroneous understanding that virtualization = private cloud.  While virtualization is an important enabler of private cloud, it is just one part of the overall story.  Private cloud provides for ease of deployment and advanced systems management. Virtualization lets us build a virtual machine by selecting the processor, network and storage; following those decisions, it is still necessary to install the operating system and any required applications.  With a private cloud solution, however, a user can provision a pre-staged, fully operational service in just minutes with only a few mouse clicks.  This may include multiple virtual servers, network load balancers, storage provisioning and application configuration – not just a single virtual guest server.  Through the combination of process and automation, a private cloud streamlines virtualization, allowing VMs to be fully managed, not simply run as separate servers operating on a consolidated host.
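    The contrast between raw virtualization and private cloud provisioning can be sketched in a few lines of Python.  This is purely illustrative – the class and method names below are hypothetical, not any vendor's API – but it captures the idea of a pre-staged service template that deploys an entire application stack in one request rather than one VM at a time:

```python
from dataclasses import dataclass, field

@dataclass
class VirtualMachine:
    name: str
    cpus: int
    memory_gb: int
    os_installed: bool = False  # with bare virtualization, OS install is a separate manual step

@dataclass
class ServiceTemplate:
    """A pre-staged private cloud service: VMs, load balancer, and storage as one unit."""
    name: str
    vm_specs: list = field(default_factory=list)   # (cpus, memory_gb) for each tier VM
    load_balanced: bool = False
    storage_gb: int = 0

    def provision(self):
        """One request deploys the whole service, fully configured and ready to run."""
        vms = [VirtualMachine(f"{self.name}-vm{i}", cpus, mem, os_installed=True)
               for i, (cpus, mem) in enumerate(self.vm_specs)]
        return {"vms": vms, "load_balancer": self.load_balanced,
                "storage_gb": self.storage_gb}

# A single provisioning call stands up the entire line-of-business stack:
# two load-balanced web VMs plus a larger database VM with attached storage.
template = ServiceTemplate("crm", vm_specs=[(4, 8), (4, 8), (8, 16)],
                           load_balanced=True, storage_gb=500)
service = template.provision()
```

The point of the sketch is the single `provision()` call: the operating system, load balancing and storage arrive pre-staged, which is exactly what distinguishes a private cloud service from a stack of individually built virtual guests.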

    Building a private cloud requires deep knowledge of server, storage and networking coupled with an understanding of the business workflow and process automation.  For large organizations this means coordinating activities across many teams or groups.  In small organizations, it can raise the required knowledge to levels that may not currently exist inside the company.  It is essential to have specialists with deep technical knowledge who understand the needs of the organization when implementing a private cloud solution.  When planning, keep the following considerations in mind:

    1. Identify the business goals – Work with stakeholders to identify how a private cloud fits into the overall organizational strategy.
    2. Support and Maintenance – Anytime a new technology is brought into operation you must consider support resources and costs.  Incidents rarely happen when we are ready and available to take on a new task.  Formalizing a support process will ensure incidents are addressed in a timely manner.
    3. Avoid shelf-ware – It is easy to fall into the lure of “Deluxe,” “Pro,” and “Plus” technology packaging schemes. Understand your organization’s needs and requirements to avoid over-purchasing. 
    4. Business process analysis – Private cloud helps streamline the manner in which services are deployed and maintained in the datacenter.  Determine what functionality can be moved away from high cost sys-admin personnel to other personnel in the organization.  Your investments should not be to simply make it easier on IT, but to enable the business to be more nimble and agile.
    5. Leverage a partner – Technology providers can bring perspective and knowledge that is often difficult to know unless you specialize in that particular technology/solution.  It’s not really a cloud if you have to start from square one.  Your time is valuable too.  When creating solutions from scratch, many times the best of breed philosophy comes into play.  Though there are some advantages to a best-of-breed approach, it is important to consider how this may impact the level of support you receive from each of the chosen vendors.  A technology partner should be able to help identify reference architectures which have been deployed and have the deep vendor relationships necessary to escalate issues to knowledgeable resources on your behalf. 

    Ultimately, the goal of cloud computing is the ability to be more efficient in IT operations.  It brings technology together so it is cheaper, faster and easier to deploy.   Not too long ago the industry disputed the details of each component to determine which one would work best in any given architecture.   Today, there is less concern, as functionality has become so evenly matched between suppliers.  When the technology just works, we don’t need to question and examine each little detail.  The commoditization of computers and components has simplified the industry a great deal.  Virtualization is fast becoming a commodity offering as well.  From a performance and reliability standpoint, there is little difference between any of the major Hypervisors. 

    As technology evolves, it is the natural course that technologies which become commonplace get adopted by every vendor.  We are moving past differentiation in many hardware and software technologies to differentiation solely in how these almost identical technologies are implemented and managed.  The real value of the private cloud is that it promises to provide the glue to bind together many of the various common hardware and software technologies. It allows a large part of the functionality to be managed as a whole rather than as numerous individual technologies.

    In Q1 of CY 2011, Microsoft announced the release schedule for an overall update to its System Center family of products.  This release of System Center Virtual Machine Manager 2012 will include some very promising capabilities for private cloud management.  The most compelling part of the story is the commitment of support for many of the major technology vendors.  Regardless of which brand of storage, virtualization and servers is being used, everything needed to support a cloud environment will be manageable through a single console.  This unification is game changing.  Coupled with Service Desk functionality and datacenter process automation enabled by new orchestration tools, this vision is a fresh approach to what has been coming from Redmond for the past couple of years.  We are excited to see where this takes us.

    Thanks for your time - Derrek

  • Thought Leaders in the Cloud: Talking with Rob Gillen, Oak Ridge National Lab Cloud Computing Researcher

    Rob Gillen is researching cloud computing technology for the government at Oak Ridge National Laboratory. He also works for Planet Technologies, which recently launched a new cloud practice to assist government and public sector organizations with cloud computing. He has a great blog on cloud computing that goes back seven years, and he also has a lot of presentations and talks up on the web. Rob is also a Windows Azure MVP (Most Valuable Professional).

    In this interview we cover:

    -The pros and cons of infrastructure-as-a-service

    -Maximizing data throughput in the cloud

    -Cloud adoption in computational science

    -The benefits of containerized computing

    -Architecting for the cloud versus architecting for on-premises

    Robert Duffner: Could you take a moment to introduce yourself?

    Rob Gillen: I am a solutions architect for Planet Technologies and I work in the Computer Science and Mathematics division here at Oak Ridge National Laboratory, and I'm doing work focused on scientific and technical workloads.

    Robert: To jump right in, what do you see as advantages and disadvantages for infrastructure and platform-as-a-service, and do you see those distinctions going away?

    Rob: Each of those aspects of the technology has different advantages. For many people, the infrastructure-as-a-service approach is simpler to start using, because your existing codes run more or less unmodified. Most of those services or offerings don't have requirements with regard to a particular OS.

    As vendors introduce more technically focused offerings, such as unique network interconnects, people are able to deploy cloud-based assets that are increasingly similar to their on-premises assets.

    We have seen some interesting pickup in platform-as-a-service offerings, particularly from the lower end of scientific computing, among people who have not traditionally been HPC users but maybe have been doing a lot of computing on their local machines and have become machine bound. We've seen tools written and developed that can extend their problems and algorithms directly into the cloud using the APIs that are  inherent in platform-as-a-service offerings.

    As far as the distinctions going away, I think the days of a particular vendor only offering one or the other will be over soon. If you look at some of the vendors, there's a lot of cross-play across their offerings. Still, I think the distinctions will continue to live on to some degree. Additionally, I don't think that platform-as-a-service offerings will be going away any time soon.

    For example, Amazon’s Elastic Compute Cloud service is very much an infrastructure-as-a-service play. However, if you look at their Elastic MapReduce product or their Elastic Beanstalk product, both of those are very much platform-as-a-service.

    When we compare offerings from our perspective as computational researchers, as you start with the infrastructure offerings, you have a great deal of control from a programmatic standpoint and an infrastructure details standpoint, but you give up a lot of the “magic” traditionally associated with clouds. As you move along the cloud spectrum toward platform as a service, you give up some control, but you gain a lot of magic, in the sense that there are a lot of things you don't have to worry about. So depending on the type of computation you're doing, they have different value to you.

    To summarize, I think that individual technologies will continue to grow, but the distinctions at the vendor level will fade over time.

    Robert: It seems that, in the current state of the market, infrastructure-as-a-service is better suited to migrate existing applications, and platform-as-a-service is really architecting a whole new type of cloud-based applications. Would you agree with that?

    Rob: Mostly, yes. Infrastructure-as-a-service is definitely easier for migrating, although I would want to clarify the second half of your statement. I think it depends on the type of problem you're trying to solve. The platform-as-a-service offerings from any vendor are generally very interesting, but they have constraints, and depending on the type of problem you're trying to solve, those constraints may or may not be acceptable to you.

    So, I agree with you, with the caveat that it's not a blanket statement that green-field implementations should always look at platform as a service first – you have to evaluate the suitability of the platform to the problem you are trying to solve.

    Robert: You've interacted with government agencies that are looking at the cloud, and you've blogged about your company's launch of GovCloud. What are some of the key differences between government and other uses of the cloud?

    Rob: One of the biggest things comes down simply to data privacy and data security. The first thing every customer we talk to about cloud brings up, both inside and outside the government space, is data privacy. While there’s some good reasoning behind that, the reality is that cloud computing vendors often do better there than what the customers can provide themselves, particularly in the private sector. For many of those customers, moving to the cloud gives them increased data security and data privacy.

    In some areas of the government, that would also be true (especially in some of the smaller state and local government offices) – cloud vendors might actually have a more secure platform than what they're currently using. But most often there are policy and legal issues that will prevent them from moving into the cloud, even if they want to.

    I think some of the major vendors have recently been certified for a base level or what we would call low-security data, allowing public sector customers to put generally available data in the cloud. But anything with any significant sensitivity can't be moved there yet by policy, regardless of the actual appropriateness of the implementation.

    That's a major consideration today – which is unfortunate – because as it stands, the federal government has many tasks that could benefit from a cloud computing infrastructure. I get excited when I see progress being made toward breaking down some of those barriers. Certainly, some of those barriers should not and will not go away, but there are some that should, and hopefully they will.

    Robert: You did a series of blog posts on maximizing data throughput in the cloud. What led you down that path? And was there a scenario where you needed to maximize a file transfer throughput?

    Rob: One of the aspects where we think cloud computing can be valuable for scientific problems is in post-processing or post-analysis of work or datasets that were generated on supercomputers.

    We took a large selection of climate data generated on Jaguar, which is one of the supercomputers here at Oak Ridge, and we modeled the process of taking that data and moving it into the cloud for post-processing. We looked at different ways to get the data there faster while making sure that data integrity remained high.
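    As a rough illustration of that approach, the Python sketch below (with hypothetical function names – this is not the actual tooling used at Oak Ridge) splits a payload into chunks, "uploads" them in parallel, and verifies integrity with a checksum on reassembly:

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

def transfer_chunk(chunk: bytes) -> bytes:
    """Stand-in for uploading one chunk to cloud storage; here it just echoes the data."""
    return chunk

def parallel_upload(data: bytes, chunk_size: int = 64, workers: int = 4) -> bytes:
    """Transfer data as parallel chunks, then verify end-to-end integrity."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # map() preserves input order, so the chunks reassemble correctly
        results = list(pool.map(transfer_chunk, chunks))
    reassembled = b"".join(results)
    # Integrity check: the received payload must hash to the same digest as the source
    if hashlib.md5(reassembled).hexdigest() != hashlib.md5(data).hexdigest():
        raise IOError("checksum mismatch – transfer corrupted")
    return reassembled

payload = b"climate model output staged from the supercomputer"
received = parallel_upload(payload, chunk_size=8)
```

Real transfers would replace `transfer_chunk` with calls to a cloud storage API, but the shape is the same: parallelism to raise throughput, plus a checksum to keep data integrity high.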

    We also worked through problems around data publishing, so that once it’s in the cloud, we can make it available in formats that are consumable by others, both within and outside the particular research domain. We're working through the challenge that many scientific domains use domain-specific file formats. For example, climatology folks often use file formats like NetCDF and HDF5. They have particular reasons for using those, but they are not necessarily widely used in other disciplines. Taking that same data and making it available to a wide set of people is difficult if it remains in those native formats.

    Therefore, we're looking at how to leverage the infrastructure in the platforms provided by the cloud, whatever data structures they use, to actually serve that data up and make it available to a new and broader audience than has previously been possible.

    That was the main problem set that we were working on, and we found some interesting results. With a number of the major providers, we came up with ways to improve data transfer, and it's only getting better as Microsoft, Amazon, and other vendors continue to improve their offerings and make them more attractive for use in the scientific domain.

    Robert: Data centers are pretty opaque, in the sense that you don't have a lot of visibility into how the technology is implemented. Have you seen instances where cloud performance changes significantly from day to day? And if so, what's your guidance to app developers?

    Rob: That issue probably represents the biggest hesitation on the part of the scientists I'm working with, in terms of using the cloud. I'm working in a space where we have some of the biggest and brightest minds when it comes to computational science, and the notion of asking them to use this black box is somewhat laughable to them.

    That is why I don't expect, in the near term at least, that we’ll see cloud computing replace specifically tuned hardware like Jaguar, Kraken, or other supercomputers. At the same time, there is a lot of scientific work being done that is not as execution-time-critical as other work. Often, these codes do not benefit from the specialized hardware available in these machines.

    There are certain types of simulations that are time-sensitive and communication heavy, meaning for each step of compute that is performed, a comparatively significant amount of communication between nodes is required. In cases like this, some of the general cloud platforms aren’t as good a fit.

    I think it's interesting to see some of the cloud vendors realizing that fact and developing platforms that cater to that style of code, as illustrated by some of the cluster computing instances by Amazon and others. That’s important in these cases, since general-purpose cloud infrastructures can introduce unacceptable inconsistencies.

    We've also seen a lot of papers published by people doing assessments of infrastructure-as-a-service providers, where they'll look and see that their computational ability changes drastically from day to day or from node to node. Most often, that's attributed to the noisy neighbor problem. When this research is done in smaller scale projects, by university students or others on constrained budgets, they tend to use the small or medium instances offered by whatever cloud vendor is available. In such cases, people are actually competing for resources with others on the same box. In fact, depending on the intensity of their algorithms and the configuration they have selected, they could even be competing with their own workloads, since the cloud provider’s resource allocation algorithm may have placed several of their instances on the same physical node.
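    A minimal way to quantify that kind of variability is to time a fixed compute kernel repeatedly and report the coefficient of variation, as in the toy Python benchmark below.  Run locally it will usually show modest variance; on a small shared cloud instance, the same measurement can show a much larger spread, which is the noisy-neighbor effect the papers describe:

```python
import statistics
import time

def workload() -> float:
    """Time a fixed compute kernel; on a shared node its runtime varies with neighbors."""
    start = time.perf_counter()
    total = 0
    for i in range(200_000):
        total += i * i
    return time.perf_counter() - start

# Run the identical kernel many times and summarize the spread
runs = [workload() for _ in range(20)]
mean = statistics.mean(runs)
cv = statistics.stdev(runs) / mean  # coefficient of variation: higher = less consistent
print(f"mean {mean * 1000:.2f} ms, variability {cv:.1%}")
```

Repeating the same measurement across different instances (or days) and comparing the coefficients of variation is essentially what the benchmarking papers do at larger scale.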

    As people in the scientific space become more comfortable with using the largest available node, they're more likely to have guaranteed full access to the physical box and the underlying substrate. This will improve the consistency of their results. There are still shared assets that, depending on usage patterns, will introduce variability (persistent storage, network, etc.) but using the larger nodes will definitely reduce the inconsistencies – which is, frankly, more consistent with traditional HPC clusters. When you are running on a collection of nodes within a cluster, you have full access to the allocated nodes.

    The core issue in this area is to determine what the most applicable or appropriate hardware platform is for a given type of problem. If you're doing a data-parallel app, in which you're more concerned about calendar time or development time than you are about your execution time, a cloud will fit the problem well in many cases. If you're concerned about latency and you have very specific execution-time constraints, the cloud (in its current incarnation, at least) is probably not the right fit.

    Robert: Back in August of last year, you also posted about containerized computing. What interest do you see in this trend, and what scenarios are right for it?

    Rob: That topic aligns very nicely with the one we touched on earlier, about data privacy in the federal space. A lot of federal organizations are building massive data centers. One key need for the sake of efficiency is to get any organization, government or otherwise, to stop doing undifferentiated heavy lifting.

    Every organization should focus on where it adds value and, as much as possible, it should allow other people to fill in the holes, whether through subcontracting, outsourcing, or other means. I expect to see more cases down the road where data privacy regulations require operators not only to ensure the placement of data geographically within, say, a particular country’s boundary, but specifically within an area such as my premises, my corporate environment, or a particular government agency.

    You can imagine a model wherein a cloud vendor actually drops containerized chunks of the data center inside your fence, so you have physical control over that device, even though it may be managed by the cloud vendor. Therefore, a government agency would not have to develop its own APIs or mechanisms for provisioning or maintenance of the data center – the vendor could provide that. The customer could still benefit from the intrinsic advantages of the cloud, while maintaining physical control over the disks, the locality, and so on.

    Another key aspect of containerized approaches to computing is energy efficiency. We’re seeing vendors begin to look at the container as the field-replaceable unit, which allows them to introduce some rather innovative designs within the container. When you no longer expect to swap out individual servers, you can eliminate traditional server chassis (which, beyond making the server “pretty,” simply block airflow and reduce efficiency), consolidate power supplies, and experiment with air cooling, swamp cooling, higher ambient temperatures, and more. We are seeing some very impressive PUE numbers from various vendors, and we are working to encourage these developments.

    There are also some interesting models for being able to bundle very specialized resources and deploy them in non-traditional locations. You can package up a generator, a communications unit, specialized compute resources, and analysis workstations, all in a 40 foot box, and ship it to a remote research location, for example.

    Robert: The National Institute of Standards and Technology (NIST) just released a report on cloud computing, where they say, and I quote, "Without proper governance, the organizational computing infrastructure could be transformed into a sprawling, unmanageable mix of insecure services." What are your thoughts on that?

    Rob: My first thought is that they're right.

    [laughter]

    They're actually making a very similar argument to one that’s often made about SharePoint environments. Any SharePoint consultant will tell you that one of the biggest problems they have, which is really both a weakness and a strength of the platform, is that it's so easy to get that first install set up. In a large corporation, you often hear someone say, “We've got all of these rogue SharePoint installs running across our environment, and they're difficult to manage and control from an IT perspective. We don't have the governance to make sure that they're backed up and all that sort of thing.”

    And while I can certainly sympathize with that situation, the flip side is that those rogue installs are solving business problems, and they probably exist because of some sort of impediment to actually getting work done, whether it was policy-based or organizationally based. Most of those organizations just set it up themselves because it was simpler than going through the official procedures.

    A similar situation is apt to occur with cloud computing. A lot of people won’t even consider going through months of procurement and validations for policy and security, when they can just go to Amazon and get what they need in 10 minutes with a credit card. IT organizations need to recognize that a certain balance needs to be worked out around that relationship.

    I think as we move forward over time, we will work toward an environment where someone can provision an on-premises platform with the same ease that they can go to Amazon, Microsoft, or whoever today for cloud resources. That model will also provide a simple means to address the appropriate security considerations for their particular implementation.

    There's tension there, which I think has value, between IT people who want more control and end users who want more flexibility. Finding that right balance is going to be vital for any organization to use cloud successfully.

    Robert: How do you see IT creating governance around how an organization uses cloud without sacrificing the agility that the cloud provides?

    Rob: Some cloud computing vendors have technologies that allow customers to virtually extend their physical premises into the cloud. If you combine that sort of technology with getting organizational IT to repackage or re-brand the provisioning mechanisms provided by their chosen cloud computing provider, I think you can end up with a very interesting solution.

    For example, I could imagine an internal website managed by my IT organization where I could see a catalog of available computing assets, provide our internal charge code, and have that platform provisioned and made available to me with the same ease that I could with an external provider today. In fact, that scenario could actually make the process easier for me than going outside, since I wouldn’t have to use a credit card and potentially a reimbursement mechanism. In this model, the IT organization essentially “white labels” the external vendor’s platform, and layers in the organizational policies and procedures while still benefiting from the massive scale of the public cloud.
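    That white-label model can be sketched very simply: the internal catalog enforces organizational policy (here, just a charge-code check) before delegating to the external vendor.  All of the names in this Python sketch are hypothetical, and the vendor call is stubbed out:

```python
class InternalCatalog:
    """IT 'white-labels' a public cloud: policy checks wrap the vendor's provisioning."""

    def __init__(self, offerings: dict, approved_charge_codes: list):
        self.offerings = offerings                  # catalog entry name -> resource spec
        self.approved = set(approved_charge_codes)  # internal billing codes IT will accept

    def provision(self, offering: str, charge_code: str) -> dict:
        # Organizational policy layer: reject requests without a valid charge code
        if charge_code not in self.approved:
            raise PermissionError(f"unknown charge code: {charge_code}")
        spec = self.offerings[offering]
        # In a real deployment, this is where IT would call the external cloud
        # vendor's API on the user's behalf, so no credit card is ever needed.
        return {"offering": offering, "spec": spec, "billed_to": charge_code}

catalog = InternalCatalog(
    offerings={"web-server": {"cpus": 2, "memory_gb": 4}},
    approved_charge_codes=["PROJ-42"],
)
vm = catalog.provision("web-server", "PROJ-42")
```

The user gets public-cloud ease of provisioning, while the organization keeps its billing and policy controls in the loop, which is the balance described above.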

    Robert: What do you think makes architecting for the cloud different than architecting for on-premises or hosted solutions?

    Rob: The answer to that question depends on the domain in which you're working. Many of my cloud computing colleagues work in a general corporate environment, with customers or businesses whose work targets the sweet spot of the cloud, such as apps that need massive horizontal scaling. In those environments, it's relatively straightforward to talk about architecting for the cloud versus not architecting for the cloud, because the lines are fairly clear and solid patterns are emerging.

    On the other hand, a lot of the folks I'm working with may have code and libraries that have existed for a decade, if not longer. We still have people who are actively writing in Fortran 77 who would argue that it's the best tool for the job they're trying to accomplish. And while most people who are talking about cloud would laugh at that statement, it's that type of scenario that makes this domain unique.

    Most of the researchers we're working with don't think about architecting for the cloud or not, so much as they think in terms of architecting to solve their particular problem. That's where it comes to folks like me and others in our group to help build tools that allow the domain scientist to leverage the power of the cloud without having to necessarily think about or architect for it.

    I've been talking to a lot of folks recently about the cloud and where it sits in the science space. I've worked in the hosted providers’ space for over a decade now, and I’ve been heavily involved in doing massive scaling of hosted services such as hosted email (which are now being called “cloud-based services”) for many, many years. There are some very interesting aspects of that from a business perspective, but I don't think that hosted email really captures the essence of cloud computing.

    On the next level, you can look at massive pools of available storage or massive pools of available virtual machines and build interesting platforms. This seems to be where many folks are focusing their cloud efforts right now, and while it adds significant value, there’s still more to be gleaned from the cloud.

    What gets me excited about architecting for the cloud is that rather than having to build algorithms to fit into a fixed environment, I can build an algorithm that will adjust the environment based on the dynamics of the problem it's solving. That is an interesting shift and a very different way of solving a problem. I can build an algorithm or a solution to a scientific problem that knows what it needs computationally, and as those needs change, it can make a call out to get another couple of nodes, more storage, more RAM, and so on. It’s a game-changer.
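    A toy version of such an environment-adjusting algorithm might look like the following Python sketch, where the solver computes how many nodes it wants from the size of its remaining backlog.  The node request is just a variable here; in a real system it would be a call to a (hypothetical) cloud provisioning API:

```python
def elastic_solve(work_items: list, node_capacity: int = 10) -> int:
    """Toy elastic loop: the algorithm requests nodes as its workload grows or shrinks.

    Returns the number of nodes in use when the work completes (the peak, here).
    """
    nodes = 1
    processed = 0
    while processed < len(work_items):
        backlog = len(work_items) - processed
        # Ask for exactly as many nodes as the remaining backlog requires
        needed = max(1, -(-backlog // node_capacity))  # ceiling division
        if needed != nodes:
            nodes = needed  # in practice: call out to the cloud API to scale up or down
        processed += nodes * node_capacity  # each round, every node handles its share
    return nodes

# 95 items with 10-item nodes: the run scales up to 10 nodes to finish the backlog
peak_nodes = elastic_solve(list(range(95)))
```

The inversion is the interesting part: instead of shaping the algorithm to a fixed machine, the algorithm shapes the machine (node count, and by extension storage or RAM) to the problem.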

    Robert: What advice do you have for organizations looking at porting existing apps to the cloud?

    Rob: First, they should know that it's not as hard as it sounds. Second, they should take it in incremental steps. There are a number of scenarios and tutorials out there that walk you through different models. Probably the best approach is to take a mature app and consider how to move it to the cloud with the least amount of change. Once they have successfully deployed it to the cloud (more or less unmodified), they can consider what additional changes they can make to the application to better leverage the cloud platform.

    A lot of organizations make the mistake of assuming that they need to re-architect applications to move them to the cloud. That can lead them to re-architect some of the key apps they depend on for their business from the ground up. In my mind, a number of controlled incremental steps are better than a few large ones.

    Robert: That seems like a good place to wrap up. Thanks for taking the time to talk today.

    Rob: My pleasure.



  • Fujitsu Launches Global Cloud Platform Service Powered By Windows Azure

    Today, Fujitsu and Microsoft announced the upcoming August 2011 availability of the Fujitsu Global Cloud Platform service, which marks the first official production release of a Windows Azure platform appliance delivered by Fujitsu.  The new service has already been running in Fujitsu’s datacenter and has been available to companies on a trial basis since April 21, 2011.

    Some of the benefits highlighted in the announcement include enabling customers to quickly build elastically scalable applications using familiar Windows Azure platform technologies, and the ability to store their business data domestically in Japan if they prefer, which can help address compliance requirements and minimize data-access times caused by network latency.  Fujitsu datacenters utilize some of the most advanced technologies in the industry to offer cutting-edge disaster readiness, security and environmental friendliness. 

    The service offering will be available globally, and will be geared towards a wide range of customers, from leading enterprises to value-oriented small and medium-sized companies. Building on a core platform of Windows Azure compute and storage, SQL Azure, and Windows Azure AppFabric technologies, the new service will also help customers with the development of new applications and the migration of existing applications. 

    Full details of the announcement are available in the official press release and on the Windows Azure blog.  If you’re looking for more information on Microsoft’s cloud computing offerings for businesses, check out the Cloud Power site. 

    Thanks for your time and if you have any questions, please post them in the comments - Larry

  • Thought Leaders in the Cloud: Talking with Olivier Mangelschots, Managing Director at Orbit One Internet Solutions

    Olivier Mangelschots is Managing Director at Orbit One Internet Solutions, a systems integrator based in Belgium that is deeply involved in Microsoft technology.

    In this interview we cover:

    • Identity management in hybrid environments
    • The role of partners in providing customized cloud solutions
    • SLAs and cloud outages
    • Migrating to the cloud vs. building for the cloud
    • How cloud services work better together

    Robert Duffner: Could you take a moment to introduce yourself and Orbit One?

    Olivier Mangelschots: I'm Managing Director of Orbit One Internet Solutions. We have been in business since '95 here in a city called Gent, Belgium. Today, we have 18 people, and we mainly focus on developing web portals. We use technology such as SharePoint, Microsoft CRM, and Umbraco, which is an open source CMS based on ASP.NET.

    We also try to help our customers realize the new world of work, making use of technology such as Microsoft Lync to be able to work from anywhere while staying in contact with their teams. We're really interested in the cloud and looking forward to this change.

    Robert: You've been involved in building customer solutions since well before cloud computing. How have you seen the cloud impact the solution that you're providing to your customers?

    Olivier: We've always tried to make solutions in such a way that the impact on the internal IT structure for the customer is as low as possible. Even as far back as 2000, the solutions that we've developed have mostly been hosted by us.

    We try to minimize the need for customers to implement local servers, so they can focus on making the best use of the solutions instead of the technical infrastructure behind them.

    Robert: Jericho Forum president Paul Simmonds says that new rules are needed for identity in the cloud and that passwords are broken. Can you talk about the challenges and solutions for identity management in the cloud? How is it different from traditional hosting?

    Olivier: Identity is one of the key elements to make the cloud successful, and I think we've come a long way. Today, most cloud solutions are starting to incorporate identity management the way it should be done, using federated identity and single sign-on. In the past, an organization had to choose between doing everything on-premises or moving everything to the cloud.

    It was difficult to have part in the cloud and part on-premises, because you had to manage users and synchronization separately. It was quite a pain. But now, large and small companies can move to the cloud and have centralized user management, so they are able to handle user services in a very transparent way.

    It shouldn't matter for the users whether an application is hosted on-premises or hosted in a cloud at Microsoft or hosted at a partner, so long as everything is nicely integrated. Of course, the first thing the user notices is the fact that he has to enter a username and password, so that should be very transparent.
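    The federated identity model Olivier describes can be sketched simply: one identity provider signs a token at sign-on, and every application – on-premises or in the cloud – trusts that token instead of keeping its own password database. The signing scheme and names below are simplified assumptions for illustration, not a real federation protocol such as SAML or WS-Federation:

    ```python
    # Illustrative sketch of federated single sign-on: the identity
    # provider (IdP) signs a username once; any federated app verifies
    # the signature with a shared trust key, with no local passwords.

    import hashlib
    import hmac

    SHARED_KEY = b"idp-signing-key"  # trust established between IdP and apps

    def idp_issue_token(username):
        """Identity provider: sign the username once, at sign-on."""
        sig = hmac.new(SHARED_KEY, username.encode(), hashlib.sha256).hexdigest()
        return f"{username}:{sig}"

    def app_accept_token(token):
        """Any federated app: verify the signature; no local password needed."""
        username, sig = token.rsplit(":", 1)
        expected = hmac.new(SHARED_KEY, username.encode(), hashlib.sha256).hexdigest()
        return username if hmac.compare_digest(sig, expected) else None

    token = idp_issue_token("olivier")
    print(app_accept_token(token))             # accepted by every app
    print(app_accept_token("olivier:forged"))  # tampered token rejected: None
    ```

    Real deployments add expiry, audience restrictions, and asymmetric signatures, but the single-sign-on principle – one authentication, many trusting applications – is the same.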

    Robert: Customers can choose between cloud, on-premises, and partner hosting. How do you explain the differences between these options to the customers you work with?

    Olivier: Cost is obviously one of the factors to take into consideration. Most customers are coming from an on-premises history, and by moving to cloud technologies such as Windows Azure, Office 365, and CRM Online they can save a lot on costs. Of course, one has to look at the complete picture: not only licensing, but also factors such as human resources, hardware, and electricity.

    In addition to saving on costs, they can make things happen more quickly. If they want to deploy something new, they can do so in a matter of hours in the cloud, where they would need days, weeks, or sometimes months for an on-premises deployment.

    Partner hosting is still very important, mainly because not everything is possible in the public cloud. There are certain limitations with Azure and Office 365, for example. The price is very affordable, but you get what's in the box, and partners can offer customization.

    In addition to offering more personalized solutions with regard to technical features, partners can also provide customization in terms of service-level agreements, security considerations, encryption, and those sorts of things, which are very important for some organizations.

    Robert: At EMC's recent conference, CEO Joe Tucci said that hybrid clouds are becoming the de facto standard. Can you talk a little bit about hybrid solutions that may use a mix of options?

    Olivier: As an example, one of the things that is very easy to migrate to a public cloud is an organization’s set of Exchange mailboxes with contacts, calendars, and so on. The level of customization that users need is quite small, and most people are happy with the product as it comes out of the box.

    If you move the mailboxes to the cloud, users typically don’t even notice. They just keep using Outlook and Outlook Web Access, synchronizing their phones as they need to. Still, it saves a lot of costs, as well as allowing many companies to have much larger mailboxes than they would otherwise be able to.

    This is one of the mixed situations we see, where companies are moving part of their services to the cloud, such as Exchange mailboxes, while keeping, for example, SharePoint sites internal because they need some custom modules in there that are not available in the cloud.

    Mixing and matching in that way can be a smart approach, because it allows companies to save costs while also being more productive and agile.

    Robert: Following the recent Amazon outage where full service wasn't restored for about four days, are you seeing customers question the reliability of the cloud? What do you think is the lesson learned from that?

    Olivier: Almost all companies are a bit scared of moving their data away to some unknown location, because they have less control over those systems. The event at Amazon was, of course, very unfortunate. The cloud on a massive scale is still very new, and certain technologies should really be considered to be in a beta phase.

    I think we have to be realistic about the fact that in an on-premises situation, uptime is not guaranteed at all. Many organizations have far more than four days of outages a year because of human error.

    Many companies are not ready today to move certain critical applications to the cloud. I believe that, as the cloud grows bigger and more mature, service-level agreements will be available from cloud systems that are far more demanding than those that are possible from on-premises situations.

    This is also where partner hosting can come into play. You can combine certain things in the public cloud for very affordable mass-usage scenarios while putting specific, mission-critical solutions at a partner that will do a custom replicated solution.

    In the long term, I believe that the public cloud will come in several flavors, including an inexpensive mass market flavor and a more enterprise-focused flavor with high levels of redundancy and availability, which will cost more.

    Robert: Lew Moorman, the chief strategy officer at Rackspace, likened the Amazon interruption to the computing equivalent of an airplane crash. It's a major episode with widespread damage, but airline travel is still safer than traveling in a car. He was using this as an analogy: cloud computing is safer than individual companies running their own data centers. Do you think that analogy holds up?

    Olivier: I think it does in certain scenarios, although not all. But I think you're absolutely right that when an airplane crash occurs, it garners a lot of attention, even though statistically, it is far safer than driving a car.

    If a big cloud goes down, that’s a major news story, and everybody's talking about it. But actually, this almost never happens, and a very large scale public cloud can be much safer than environments run by individual companies.

    At the same time, there is always a balance between how much you pay and what you get for it. I don't think it's possible to get the service with the maximum possible guarantees for a very low fee. If you're willing to pay more, you will get more possibilities.

    Azure is a nice example, because you can choose what geographical area your data and services will be running in. And you're completely free, as a developer or as an architect, to create systems that are redundant over several parts of the Azure cloud, which allows you to go further than what's in the box.

    Robert: Customers aren't always starting from scratch, and sometimes they have something existing that they want to move to the cloud. Can you talk a little bit about migration to the cloud and things that customers might need to be aware of?

    Olivier: This is a major issue today. For certain services, migration to the cloud is more difficult than it should be. The issue is going to be addressed step by step. First, of course, you need to have the cloud. Then you can start building migration tools. When I look, for example, at Microsoft Exchange, it's very easy and there are lots of good tools to move from an on-premises or a partner-hosted solution to the cloud.

    SharePoint, for example, or Dynamics CRM, is much harder to migrate. You need third-party tools, although Microsoft is working on creating its own tools. There is still work to do there.

    Azure, I think, is a completely different beast, and you can’t just take an application and put it on Azure. To make it really take advantage of the Azure opportunities and added value, you need to redesign the application and make it Azure-aware. That can take quite some time to do, and it's a long-term investment for product developers.

    Robert: As more people move to the cloud, there's the chance to integrate one cloud resource with another. I know you've been thinking about the combination of Office 365 and Azure. Can you tell us your thoughts on that?

    Olivier: The combination of Office 365/Dynamics CRM Online with Windows Azure is a very interesting thing. For example, we have customers using CRM Online, which is largely out of the box: you get what's in there. We combine it for them with custom Azure solutions to do things that aren't built into CRM.

    To give you an example, there is a company called ClickDimensions that has an email marketing plug-in for Microsoft CRM. You can send out mass e-mails to people from CRM, and there is tracking functionality about who opens the e-mail and who clicks on your website. You have a whole history about your prospects and your leads.

    Actually, all this is running in the Azure cloud. It's all custom-developed, and it's always up, piping this information through to your CRM system. This is a nice combination of using out-of-the-box standards, shared hosting products such as Office 365, and CRM Online, combined with custom-developed solutions running in Azure. You get the best of both worlds.

    Robert: At Microsoft, we see cloud as a critical back-end for mobile applications. You probably saw the recent announcement around our Toolkits for Devices that includes support for iOS, Android, and Windows Phone 7. Do you have any thoughts around the combination of cloud and mobile?

    Olivier: I don't really have special thoughts, although cloud and mobile, of course, work very well together. On the other hand, I think that any application is nice to have in the cloud, and the nice thing about the combination of cloud and mobile is making sure it's available from anywhere, since mobile users can be coming from anywhere in the world.

    It's very difficult to know, when you roll out a mobile application, how much people are going to use it, and hosting these kinds of things in the cloud makes a lot of sense, because you can cope with the peaks, you can cope with identity issues, and you have a nice platform to start with.

    Robert: Was there anything else that you wanted to talk about or any other subject you want to discuss?

    Olivier: Today, I see Azure as a tool kit, or a large system to build new applications and solutions, so the group using it is mostly developers and other technical people. It would be nice to see a layer between Azure and other scenarios, where Azure is the engine and Microsoft or other partners create front ends for it.

    To give you an example, if I want to host simple websites running a CMS solution, I can choose any of a number of partners that have management modules that allow me to easily configure the website, hit start, and it's running. It would be great to see an integration between, for example, Microsoft WebMatrix and Azure, allowing less technical people to get their website running in Azure in a few clicks.

    These extra layers on top of Azure are a big thing for partner opportunities, but I think that Microsoft should also participate to speed things up. I see Azure as the first big infrastructure step; we are just at the beginning!

    One thing that developers might be afraid of is that if today you build an application specifically for Azure, you're going to use the Azure tables, the Azure way of doing message queuing, and so on, making it very hard to move away from Azure.

    Of course, today, Azure is only available through Microsoft, but I think it makes sense in the future to have the Azure platform also available in custom flavors through service providers that are competing with one another on innovation and pricing.

    Of course, Microsoft probably doesn't want to give everything away, but there are a lot of partner models. It will be interesting to see how this will evolve in the future.

    Robert: Very good. Thanks, Olivier.

    Olivier: Thank you.



  • WIRED event: Innovation and Opportunity in the Cloud

    I wanted to share another post from my colleague, Mark Miller, this time summarizing a recent WIRED cloud computing event in New York City. The full text of Mark’s recap is below, and it also includes a couple of links to videos from the event.  Let me know if you have any comments or questions. Thanks – Larry

    A Recent Smart Salon presented by WIRED: Innovation and Opportunity in the Cloud

    The cloud – for some it’s an evolution, for some it is a revolution and for others that perspective remains to be seen. For most, cloud is a critical part of how IT will shift from being a technology provider to becoming an innovation engine that delivers business value.  That was what I heard in early May when we invited more than 100 senior technology and business leaders to join us at a Microsoft sponsored Smart Salon event, presented by WIRED, to talk about the ways businesses are dealing with the disruption of cloud today and into the future.

    Facilitated by WIRED Contributing Editor Spencer Reiss, the event featured two broad-ranging panel discussions, as well as a no-holds-barred Q&A session with Microsoft Corporate Vice President Bob Kelly.  It was a great opportunity to hear senior leaders talk about their cloud reality in candid, real-world terms, and so I thought it might be interesting to highlight a few of my takeaways from the day.

    The first was how divergent, yet aligned, people’s points of view are about the business considerations and opportunities of cloud computing.  For some, the cloud is driving a revolution in how they approach computing – from the technology they use to how they run their business.  For others, it is a technology evolution – helping them do things better than they have before.  To nobody’s surprise though – whether evolution or revolution – the cloud’s ability to dramatically speed up time to market (or time to innovation) was clear to everyone – as one CIO said, “in the cloud model, it’s days, not six months to a year.”

    My second takeaway was around how cloud is really making people think about the future of IT – specifically how it will look in our somewhat cloudy future.  While no one had a perfectly clear vision of that future, most of the attendees – especially our CIO panelists – were bullish on the potential of the cloud to help them deliver more business value.  Their paths to delivering more business value varied: for some, it was about improving overall business processes, and for others it was about becoming a critical source of innovation for their organization. But the need to deliver business value was clear.

    My third takeaway was that, to no one’s surprise, throughout the day it was clear that there were a LOT of different definitions of cloud computing. That was one of the first things Bob Kelly addressed in his Q&A – during which he outlined Microsoft’s perspective on the core attributes of cloud computing and how the breadth of Microsoft’s cloud offerings gives businesses the option to transition to the cloud on their terms.

    Whatever cloud computing is for you - an evolution, a revolution or an uncertainty – Microsoft is here for you.  We understand the journey to cloud may be disruptive and we know the cloud is not one-size-fits-all. That is why we are committed to helping you do cloud on your terms. To learn more about this approach to cloud computing, check out the Cloud Power site.  

    Thanks for your time – Mark Miller, Director, Server and Tools, Microsoft