This blog post lists terms frequently referenced in the Windows Azure Platform. They are presented in a hierarchical order based on the context shown in the following schematic. Each term is described concisely with its key concept and pertinent information. The content is intended for IT pros and non-programmers.
A collective name for Microsoft’s Platform as a Service (PaaS) offering, which provides a programming platform, a deployment vehicle, and a runtime environment for cloud computing hosted in Microsoft datacenters
Essentially Microsoft’s cloud OS, which provides abstractions and shields the complexities of implementing and managing collections of hardware, software, and instances
A Windows Azure service for executing application code based on a specified role including web role, worker role, and VM role
A service definition to deploy a VM with IIS 7 for hosting a web application
A service definition to deploy a VM without IIS for running application code in the background similar to Windows processes, batch jobs, or scheduled tasks
A service definition to upload a VM to the cloud (i.e. Windows Azure Platform) for deploying an application with a custom or predictable runtime environment, provided as a last resort for addressing issues including:
A Windows Azure service for allocating persistent and durable storage accessible with HTTP/HTTPS (REST) and .NET
Binary Large Object for storing large data items like text and binary data
Structured storage in the form of tables which store data as collections of entities for maintaining service state
A page BLOB formatted as a single-volume NTFS virtual hard drive to be mounted within a Windows Azure role instance and accessed like a local drive
Non-persistent storage local to a role instance
Owner of the datacenter, including hardware, software, and instances, and ultimately the brain of the cloud OS
A self-initialized application deployed with the root partition of a Windows Azure Compute node to form the fabric
A self-initialized application deployed with the base image of a Guest OS to form the fabric
A user interface to configure IPsec protected connections between computers or virtual machines (VMs) in an organization’s network, and roles running in Windows Azure
An add-on feature to a Windows Azure subscription to cache Windows Azure BLOBs and the static content output of Compute instances at Microsoft’s caching servers near where the content is most frequently accessed
A cloud-based relational database service with SQL Azure Reporting, a report generating service
To provide secure messaging and connectivity capabilities through firewalls, NAT gateways, and other problematic network boundaries, enabling the building of distributed and disconnected applications in the cloud, as well as hybrid applications spanning both on-premises and cloud environments
A hosted service providing federated authentication and rules-driven, claims-based authorization for REST web services, integrating with Windows Identity Foundation (WIF) and identity providers like Active Directory Federation Services (ADFS) v2
A subset of the on-premises distributed caching solution, Windows Server AppFabric Caching, for provisioning a cache in the cloud to be used with ASP.NET or client applications for caching requirements
Capabilities similar to those of BizTalk to integrate Windows Azure Platform applications with existing line-of-business (LOB) applications and databases, and third-party Software as a Service (SaaS) applications
For building composite applications from services in the cloud and on premises, components, web services, workflows, and existing applications
Microsoft Enterprise Desktop Virtualization, or MED-V, is a desktop virtualization solution providing a self-contained computing environment including the OS, intended applications, and customized settings, if any. Desktop virtualization allows an application to run in a specific OS environment different from the OS running on the hosting computer. MED-V uses Virtual PC 2007 to provide a virtualized and customizable computing environment required by an intended application, yet incompatible or conflicting with that of the hosting computer. In other words, MED-V allows computing environments which are incompatible, conflicting, or with different requirements to run concurrently on the same physical device. For instance, running a legacy or line-of-business application requiring Windows XP SP2 on a Vista SP1 desktop, or deploying a managed computing environment (like a corporate-managed desktop) to a non-managed (like a personal or home) desktop, are some of the business challenges MED-V addresses.
MDOP now includes the 6 tools and solutions listed below and is available to Software Assurance customers.
Customers interested in MDOP should review the FAQ and contact their software vendor or Microsoft for additional information. For a comprehensive guide on Microsoft Virtualization from the data center to the desktop, download it here. I have produced the 4-part Mad About MED-V screencast series to offer a quick review of MED-V solutions, including the following. I will update each link once the associated screencast is published.
The following is the part 1 screencast with a focus on MED-V fundamentals to establish a baseline for subsequent discussions in the series. The remainder of this post highlights the key concepts, architecture, and pertinent information of a MED-V solution.
MED-V is perhaps the least understood piece of Microsoft Virtualization Solutions. MED-V is a desktop virtualization solution, as opposed to App-V, which is an application virtualization solution. This distinction is an important one since they solve two different areas of business problems. Desktop virtualization addresses the incompatibility between a target application and the host operating system by virtualizing an entire desktop, i.e. a self-contained runtime environment including the operating system and the application, such that a target application requiring, for instance, Windows XP SP2 and incompatible with Windows Vista can still be deployed to a Vista desktop by running the application in a hidden Virtual PC running Windows XP SP2, while MED-V seamlessly makes the application accessible from the Start-All Programs menu on the host computer. App-V, on the other hand, solves the incompatibility between two applications by offering a virtualized application runtime environment, the so-called bubble, while allowing these applications to run on the same operating system instance. The following illustrates the concept.
Conceptually, desktop virtualization using Virtual PC is easy to understand. Nevertheless, deploying desktop virtualization to the enterprise makes system administration and scalability rather challenging. In essence, a Virtual PC lifecycle management solution is the key to making enterprise desktop virtualization a reality, and this is where MED-V comes in. MED-V makes Virtual PC deployable and scalable with a centralized lifecycle management solution including image creation, delivery, monitoring, and maintenance.
To run a MED-V application, the associated workspace must first be started. If a user tries to start a MED-V application while the workspace is not in place, the workspace will start on demand, and once the workspace is loaded, the application will start. A workspace is a Virtual PC image with a usage policy defined by a MED-V administrator. An administrator uses the MED-V management console to configure a usage policy, which is a set of settings defining how MED-V applications will behave for target Active Directory users or groups. Notice that the Virtual PC is where a MED-V application is configured, and the Virtual PC is also running in the background. A MED-V workspace policy allows a MED-V application to seamlessly integrate into the All Programs menu on the host computer and run transparently alongside the locally installed applications. A conceptual model of the integration is shown below.
The high-level MED-V architecture, as shown below, starts with: (1) and (2) an administrator creates, tests, and uploads Virtual PC images encapsulating a target computing environment of an OS, applications, and optional management and security tools to the image repository; (3) MED-V Management Server, the brain of the whole system, enables an administrator to control the image repository, which is an IIS virtual directory, and (4) provision images for targeted Active Directory users and groups along with usage policies; and finally (5) the images and usage policies are delivered to clients. When a client starts a MED-V application, the client will authenticate against the management server, retrieve the workspace policy, and acquire the workspace image.
Notice a MED-V Management Server also aggregates clients' events and stores them in an external database (MS SQL) for monitoring and reporting purposes. Also, a MED-V client has two functional components: the first connects to the server and retrieves the usage policy and an associated image from the repository, while the second offers the end-user experience and manages the Virtual PC from the user experience and troubleshooting aspects.
The information provided here is as of March 2009.
In order to prevent antivirus activity from affecting the performance of the virtual desktop, it is recommended where possible to exclude the following Virtual Machine file types from any antivirus or backup processing running on the host:
*.VHD *.VUD *.VSV *.CKM *.VMC *.INDEX
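As an illustrative sketch, these exclusions could be scripted on hosts running Windows Defender (other antivirus or backup products expose their own exclusion settings; the cmdlets below assume the Defender PowerShell module):

```powershell
# Illustrative sketch: exclude the Virtual Machine file types above from
# Windows Defender scanning on the host. Assumes the Defender PowerShell
# module; other antivirus or backup products have their own mechanisms.
$vmExtensions = 'VHD', 'VUD', 'VSV', 'CKM', 'VMC', 'INDEX'

foreach ($ext in $vmExtensions) {
    Add-MpPreference -ExclusionExtension $ext
}

# Review the configured exclusions
(Get-MpPreference).ExclusionExtension
```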
One very interesting piece of the MED-V solution is the Trim Transfer technology, as illustrated below. Trim Transfer accelerates the download of initial and updated Virtual Machine images over the LAN or WAN, thereby reducing the network bandwidth needed to transport a Workspace Virtual Machine to multiple end-users. It uses existing local data to build the Virtual Machine image, leveraging the fact that in many cases, much of the Virtual Machine (e.g., system and application files) already exists on the end-user's disk. For example, if a Virtual Machine containing Microsoft Windows XP is delivered to a client running a local copy of Windows XP, MED-V will automatically remove the redundant Windows XP elements from the transfer. To ensure a valid and functional Workspace, the MED-V Client cryptographically verifies the integrity of local data before it is utilized, guaranteeing that the local blocks of data are absolutely bit-by-bit identical to those in the desired Virtual Machine image. Blocks that do not match are not used.
The process is bandwidth efficient and transparent, and transfers run in the background, utilizing unused network and CPU resources. When updating to a new image version (e.g., when administrators want to distribute a new application or patch), only the elements that have changed ("deltas") are downloaded, and not the entire Virtual Machine, significantly reducing the required network bandwidth and delivery time.
You can configure which folders are indexed on the host as part of the Trim Transfer protocol, according to the host OS. These settings are configured in the ClientSettings.xml file, which can be found in the Servers\Configuration Server\ folder.
The Office 2007 Deployment project template is available for download from here, or:
1. Download, install, and bring up Microsoft Deployment Toolkit.
2. Go to Documentation and click the Office Deployment icon.
3. Click Office Project Plan.mpp.
One of the five essential attributes of cloud computing (ref. The 5-3-2 Principle of Cloud Computing) is resource pooling which is an important differentiator separating the thought process of traditional IT from that of a service-based, cloud computing approach.
Resource pooling in the context of cloud computing and from a service provider’s viewpoint denotes a set of strategies for categorizing and managing resources. For a user, resource pooling institutes an abstraction for presenting and consuming resources in a consistent and transparent fashion.
This article presents key concepts derived from resource pooling, as follows:
Ultimately, data center resources can be logically placed into three categories: compute, networks, and storage. For many, this grouping may appear trivial. It is, however, a foundation upon which some cloud computing methodologies are developed, products are designed, and solutions are formulated.
This is a collection of all CPU capabilities. Essentially all data center servers, whether supporting or actually running a workload, are part of this compute group. The compute pool represents the total capacity for executing code and running instances. The process of constructing a compute pool is to first inventory all servers and identify virtualization candidates, followed by implementing server virtualization. And it is never too early to introduce a system management solution to facilitate these processes, which in my view is a strategic investment and a critical component for all cloud initiatives.
The physical and logical artifacts put in place to connect resources, and to segment and isolate resources from layer 3 and below, etc., are gathered in the network pool. Networking enables resources to become visible and hence possibly manageable. In the age of instant gratification, networks and mobility are redefining the security and system administration boundaries, and play a direct and impactful role in user productivity and customer satisfaction. Networking in cloud computing is more than just remote access; it is empowerment for a user to self-serve and consume resources anytime, anywhere, with any device. BYOD and the consumerization of IT are various expressions of these concepts.
This has long been a very specialized and sometimes mysterious part of IT. An enterprise storage solution is frequently characterized as a high-cost item with a significant financial and contractual commitment, specialized hardware, proprietary APIs and software, a dependency on direct vendor support, etc. In cloud computing, storage has become even more noticeable since the ability to grow and shrink based on demand, i.e. elasticity, demands an enterprise-level, massive, reliable, and resilient storage solution at a global scale. While enterprise IT is consolidating resources and transforming existing establishments into a cloud computing environment, how to leverage existing storage devices from various vendors and integrate them with next-generation storage solutions is among the high priorities for modernizing a data center.
In the last decade, virtualization has proved its value and accelerated the realization of cloud computing. Then, virtualization was mainly server virtualization, which in an over-simplified statement means hosting multiple server instances on the same hardware while each instance runs transparently and in isolation, as if each consumes the entire hardware and is the only instance running. Much of the customer expectations, business needs, and methodologies have since evolved. Now, we should validate virtualization in the context of cloud computing to fully address the innovations rapidly changing how IT conducts business and delivers services. As discussed below, in the context of cloud computing, consumable resources are delivered in some virtualized form. Various virtualization layers collectively construct and form the so-called fabric.
The concept of server virtualization remains running multiple server instances on the same hardware while each instance runs transparently and in isolation, as if each instance is the only instance running and consuming the entire server hardware.
In addition to virtualizing and consolidating servers, server virtualization also signifies the practice of standardizing server deployment by switching away from physical boxes to VMs. Server virtualization is for packaging, delivering, and consuming a compute pool.
There are a few important considerations when virtualizing servers. IT needs the ability to identify and manage bare metal such that the entire resource life-cycle management from commissioning to decommissioning can be standardized and automated. To fundamentally reduce the support and training cost while increasing productivity, a consistent platform with tools applicable across physical, virtual, on-premises, and off-premises deployments is essential. The last thing IT wants is one set of tools for physical resources and another for those virtualized; one set of tools for on-premises deployment and another for those deployed to a service provider; one set of tools for development and another for deploying applications. The requirement is one methodology for all, one skill set for all, and one set of tools for all. This advantage is obvious when developing applications and deploying Windows Server 2012 R2 on premises or off premises to Windows Azure. The Active Directory security model can work across sites, System Center can manage resources deployed off premises to Windows Azure, and Visual Studio can publish applications across platforms. Windows infrastructure architecture, security, and deployment models are all directly applicable.
A similar idea to server virtualization applies here. Network virtualization is the ability to run multiple networks on the same network device while each network runs transparently and in isolation, as if each network is the only network running and consuming the entire network hardware.
Conceptually, since each network instance runs in isolation, one tenant’s 192.168.x network is not aware of another tenant’s identical 192.168.x network running on the same network device. Network virtualization provides the translation between physical network characteristics and the representation of a resource identity in a virtualized network. Consequently, above the network virtualization layer, various tenants, while running in isolation, can have identical network configurations.
A great example of network virtualization is Windows Azure virtual networking. At any given time, there can be multiple Windows Azure subscribers all allocating the same 192.168.x address space with an identical subnet scheme, e.g. 192.168.0.0/16, for deploying VMs. The VMs belonging to one subscriber will however not be aware of or visible to those deployed by others, even though the network configuration, IP scheme, and IP address assignments may all be identical. Network virtualization in Windows Azure isolates one subscriber from the others such that each subscriber operates as if the subscription is the only one employing the 192.168.x address space.
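To make the isolation concrete, here is a minimal conceptual sketch (all addresses are hypothetical) of the kind of mapping a network virtualization layer maintains between tenant-visible customer addresses (CA) and provider addresses (PA) on the physical network:

```powershell
# Conceptual sketch only; all addresses are hypothetical. A network
# virtualization layer maps each tenant's customer address (CA) to a
# provider address (PA) on the physical network, so two tenants can
# reuse 192.168.1.5 without conflict.
$lookup = @{
    'TenantA:192.168.1.5' = '10.0.0.21'
    'TenantB:192.168.1.5' = '10.0.0.34'
}

# Traffic for Tenant A's 192.168.1.5 is carried to PA 10.0.0.21
$lookup['TenantA:192.168.1.5']
```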
I believe this is where the next wave of drastic IT cost reduction after server virtualization happens. Historically, storage has been a high-cost item in the IT budget in each and every aspect, including hardware, software, staffing, maintenance, SLA, etc. Since the introduction of Windows Server 2012, there is a clear direction where storage virtualization is built into the OS and becoming a commodity. New capabilities like Storage Pools, Hyper-V over SMB, Scale-Out File Server, etc. are now part of the Windows Server OS, making storage virtualization part of server administration routines and easily manageable with tools and utilities like PowerShell familiar to many IT professionals.
The concept of storage virtualization remains consistent with the idea of logically separating a computing object from its hardware, in this case the storage capacity. Storage virtualization is the ability to integrate multiple and heterogeneous storage devices, aggregate their storage capacities, and present/manage them as one logical storage device with a continuous storage space. JBOD is one technology for realizing this concept.
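As a minimal sketch of the concept using the Windows Server 2012 storage cmdlets (the pool and disk names are hypothetical examples), a JBOD set of poolable disks can be aggregated and presented as one logical disk:

```powershell
# Minimal sketch, assuming Windows Server 2012 or later with poolable
# JBOD disks attached; friendly names are hypothetical examples.
$disks = Get-PhysicalDisk -CanPool $true

New-StoragePool -FriendlyName 'DemoPool' `
    -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName `
    -PhysicalDisks $disks

# Carve one resilient logical disk out of the aggregated capacity
New-VirtualDisk -StoragePoolFriendlyName 'DemoPool' `
    -FriendlyName 'DemoDisk' -ResiliencySettingName Mirror -UseMaximumSize
```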
Each of the three resource pools has an abstraction to logically present itself with characteristics and work patterns. A compute pool is a collection of physical (virtualization and infrastructure) hosts and VMs. A virtualization host hosts VMs which run workloads deployed by service owners and consumed by authorized users. A network pool encompasses network resources including physical devices, logical switches, address spaces, and site configurations. Network virtualization, as enabled/defined in configurations, can identify and translate a logical/virtual IP address into a physical one, such that tenants on the same network hardware can implement identical network schemes without a concern. A storage pool is based on storage virtualization, which is a concept of presenting an aggregated storage capacity as one continuous storage space as if provided from one logical storage device.
In other words, the three resource pools are wrapped with server virtualization, network virtualization, and storage virtualization, respectively. Each virtualization presents a set of methodologies on which work patterns are derived and common practices are developed. These virtualization layers provide opportunities to standardize, automate, and optimize deployments and considerably facilitate the adoption of cloud computing.
Virtualizing resources decouples the dependency between instances and the underlying hardware. This offers an opportunity to simplify and standardize the logical representation of a resource. For instance, a VM is defined and deployed with a VM template which provides a level of consistency with a standardized configuration.
Once a VM’s characteristics are identified and standardized, we can generate an instance by providing only instance-based information or information that depends on run time, such as the VM machine name, which must be validated at run time to prevent duplicate names. This requirement of providing only minimal information at deployment can significantly simplify and streamline operations for automation. And with automation, resources can then be deployed, instantiated, relocated, taken offline, brought back online, or removed rapidly and automatically based on set criteria. Standardization and automation are essential mechanisms so that workloads can scale on demand, i.e. become elastic.
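As a minimal sketch of standardization plus automation (assuming the Windows Server 2012 Hyper-V PowerShell module; all names and paths are hypothetical), a new instance needs only its name at deployment time:

```powershell
# Sketch, assuming the Hyper-V PowerShell module on Windows Server 2012+.
# The 'template' is a standardized VHDX; only the instance name varies.
# All names and paths below are hypothetical examples.
$vmName      = 'WEB01'
$templateVhd = 'D:\Templates\StdWebServer.vhdx'
$instanceVhd = "D:\VMs\$vmName.vhdx"

Copy-Item -Path $templateVhd -Destination $instanceVhd
New-VM -Name $vmName -MemoryStartupBytes 2GB `
    -VHDPath $instanceVhd -SwitchName 'LabSwitch'
Start-VM -Name $vmName
```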
Standardization provides a set of common criteria. Automation executes operations based on set criteria with volumes, consistency, and expediency. With standardization and automation, instances can be instantiated with consistency, efficiency, and predictability. In other words, resources can be operated in bulk with consistency and predictability. The next logical step is then to optimize the usage based on SLA.
The progression presented is what resource pooling and virtualization can provide and facilitate. These methodologies are now built into products and solutions. Windows Server 2012 R2 and System Center 2012 and later integrate server virtualization, network virtualization, and storage virtualization into one consistent solution platform with standardization, automation, and optimization for building and managing clouds.
This is a significant abstraction in cloud computing. Fabric implies accessibility and discoverability, and denotes the ability to discover, identify, and manage a resource. Conceptually, fabric is an umbrella term encompassing all the underlying infrastructure supporting a cloud computing environment. At the same time, a fabric controller represents the system management solution which manages, i.e. owns, the fabric.
In cloud architecture, fabric consists of the three resource pools: compute, networks, and storage. Compute provides the computing capabilities, executes code, and runs instances. Networks glue the resources together based on requirements. And storage is where VMs, configurations, data, and resources are kept. Fabric shields the physical complexities of the three resource pools presented with server virtualization, network virtualization, and storage virtualization. All operations are eventually directed by the fabric controller of a data center. Above the fabric, there are logical views of consumable resources including VMs, virtual networks, and logical storage drives. By deploying VMs, configuring virtual networks, or acquiring storage, a user consumes resources. Under the fabric, there are virtualization and infrastructure hosts, Active Directory, DNS, clusters, load balancers, address pools, network sites, library shares, storage arrays, topology, racks, cables, etc., all under the fabric controller’s command to collectively present and support the fabric.
For a service provider, building a cloud computing environment is essentially to establish a fabric controller and construct fabric: namely, institute a comprehensive management solution, build the three resource pools, and integrate server virtualization, network virtualization, and storage virtualization to form the fabric. From a user’s point of view, how and where a resource is physically provided is not a concern, but the accessibility, readiness, scalability, and fulfillment of SLA are.
This is a well-defined term and we should not be confused about it (ref. NIST SP 800-145 and the 5-3-2 Principle of Cloud Computing). We need to be very clear on: what a cloud must exhibit (the five essential attributes), how to consume it (with SaaS, PaaS, or IaaS), and the model a service is deployed in (like private cloud, public cloud, and hybrid cloud). Cloud is a concept, a state, a set of capabilities such that a business can be delivered as a service, i.e. available on demand.
The architecture of a cloud computing environment is presented with three resource pools: compute, networks, and storage. Each is an abstraction provided by a virtualization layer. Server virtualization presents a compute pool with VMs, which supplies the computing power, i.e. CPUs, to execute code and run instances. Network virtualization offers a network pool and is the mechanism allowing multiple tenants with identical network configurations on the same virtualization hosts, while connecting, segmenting, and isolating network traffic with virtual NICs, logical switches, address spaces, network sites, IP pools, etc. Storage virtualization provides a logical storage device whose capacity appears continuous, aggregated from a pool of storage devices behind the scenes. The three resource pools together constitute the fabric (of a cloud) while the three virtualization layers collectively form the abstraction, such that although the underlying physical infrastructure may be intricate, the user experience above the fabric remains logical and consistent. Deploying a VM, configuring a virtual network, or acquiring storage is transparent with virtualization regardless of where the VM actually resides, how the virtual network is physically wired, or which devices in the aggregate provide the requested storage.
Cloud is a very consumer-focused approach. It is more about a customer’s ability and control, based on SLA, in getting resources when needed and at scale, and, equally important, releasing resources when no longer required. It is not about products and technologies. It is about servicing, consuming, and strengthening the bottom line.
This is it! We had waited and waited, and it's finally here. Windows 7 is now generally available. With Windows 7, there's never been a better time to be a PC. For all you IT professionals out there, let me highlight the 3 key deliveries and innovations introduced in Windows 7 and make pertinent information readily available for you here.
Making people productive anywhere
Making people productive is not that hard. In your office, plugged into the company’s network with a laptop loaded with apps, you can be productive. Making people productive “anywhere,” on the other hand, is a very challenging effort for IT, facing the massive number of mobile devices and the increasingly complex network computing environment today. The growing mobile workforce and number of branch offices are at the same time demanding that corporate resources be seamlessly available regardless of the required infrastructure and organizational boundaries. Two Windows 7 solutions to facilitate remote access are BranchCache and DirectAccess.
Managing risks through enhanced security and control
Security needs little justification in today’s network computing environment. It is critical, imperative, and all too often costly. From Windows Vista, to Windows Vista SP1, to Windows 7, BitLocker has been expanded from a single drive, to multiple drives, and now to portable media. Windows 7 offers security enhancements enabling a user to secure data from unauthorized access very easily with BitLocker To Go, for example. In Windows 7 Explorer, highlight a portable drive and right-click to turn on BitLocker To Go. It is that readily available, easy to do, and readable with Windows XP. There is really no reason not to do it, since there is so little to do, yet so much control and such strong protection of data. With memory sticks now at 32 GB and beyond, BitLocker To Go is one very cost-effective way to protect data from unauthorized access. For a large company, BitLocker technology with group policies offers a software-based enterprise solution for hard disk encryption. You don’t need to look elsewhere and end up with a second-best solution. It is in Windows Vista and it is much enhanced in Windows 7.
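For administrators who prefer the command line, the same can be scripted with the built-in manage-bde utility (the drive letter below is a hypothetical example):

```powershell
# Sketch: turn on BitLocker To Go for a removable drive from an
# elevated prompt. E: is a hypothetical drive letter; -pw adds a
# password protector and prompts for the password.
manage-bde -on E: -pw

# Check encryption progress and status
manage-bde -status E:
```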
In an enterprise environment, software restriction is one of the most difficult enforcements. Not only does it need a mature infrastructure to provide software inventories, metering, and ongoing monitoring, but the required skill sets to develop, test, and manage those software restriction policies are hard to find, take years to develop, and come with very high costs. Windows 7 and Windows Server 2008 R2 together present AppLocker as a vehicle with which a system administrator can provision a policy to deny/allow execution, installation, or usage of a target application based on the application's digital signature, by deriving a publisher rule defined and enforced with a Group Policy Object, without programming. A complex requirement, for instance allowing task workers to access Office 2007 and later, but not PowerPoint when accessed by contractors, can be done with AppLocker in a few mouse clicks without any scripting.
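For the scripting-inclined, AppLocker also ships with PowerShell cmdlets; here is a hedged sketch that derives a publisher rule from a signed executable (the application path is a hypothetical example):

```powershell
# Sketch, assuming Windows 7 / Server 2008 R2 with the AppLocker module.
# The executable path is a hypothetical example of a signed application.
Import-Module AppLocker

Get-AppLockerFileInformation -Path 'C:\Program Files\Contoso\App.exe' |
    New-AppLockerPolicy -RuleType Publisher -User Everyone -Xml |
    Out-File .\PublisherPolicy.xml

# Apply locally for testing; in production, import the XML into a GPO
Set-AppLockerPolicy -XmlPolicy .\PublisherPolicy.xml
```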
Reducing cost by streamlining PC management
Many thought that without a direct migration path, i.e. an in-place upgrade, from Windows XP to Windows 7, the deployment of Windows 7 must be a tedious and tricky process. In fact, Windows 7 offers a number of vehicles making the migration an intuitive and straightforward process. For consumers and small businesses, Easy Transfer makes migrating from Windows XP to Windows 7 absolutely “easy” and, in my view, actually fun. Scanstate and Loadstate, two key utilities in USMT (User State Migration Tool), make a migration process very logical and easy to understand. Hard-link migration leaves and remaps data in place and significantly reduces the time needed to place a large amount of user data in a typical PC refresh scenario.
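A hedged sketch of a hard-link migration with the USMT 4.0 utilities (the store path is a hypothetical example):

```powershell
# Sketch of a USMT 4.0 hard-link migration in a PC-refresh scenario.
# C:\MigStore is a hypothetical store path; run from the USMT folder.

# On the old Windows XP installation: gather user state into a
# hard-link store (data stays in place; only links are recorded).
.\scanstate.exe C:\MigStore /hardlink /nocompress /i:MigDocs.xml /i:MigApp.xml

# After installing Windows 7 on the same disk: remap the user state.
.\loadstate.exe C:\MigStore /hardlink /nocompress /i:MigDocs.xml /i:MigApp.xml
```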
In the past two years, with Microsoft’s introduction of virtualization strategies and solutions, there are many options for resolving compatibility issues at an application or OS level while reducing TCO and increasing flexibility in deploying and managing IT resources in the long run. Specific to Windows XP compatibility issues, Windows 7 Professional and above offer Windows XP Mode (via a free download) with a local virtualized Windows XP SP3 machine. So those applications developed specifically for Windows XP can now essentially run in a Windows 7 environment, with a few steps to set up a virtualized Windows XP SP3 run-time environment to host those Windows XP-specific applications. Further, an application running in Windows XP Mode can be seamlessly integrated into the Start/All Programs menu of a host Windows 7 machine. Notice Windows XP Mode alone is designed for a relatively small deployment since there is basically no built-in system management function. For a large-scale deployment, MED-V, or Microsoft Enterprise Desktop Virtualization, one of the six offerings that come with MDOP (Microsoft Desktop Optimization Pack, available through the Software Assurance program), is the solution to manage local desktop virtualization with the ability to provision a MED-V workspace policy to deploy XP Mode with standardized settings and a consistent user experience, etc. While MED-V 1.0 SP1 is to be available in the first quarter of 2010 with host support for Windows 7, notice that both MED-V 1.0 and MED-V 1.0 SP1 leverage Microsoft Virtual PC 2007, which does not require hardware-assisted virtualization.
To setup your Windows Server 2012 lab for the "Early Experts" Challenge and/or IT Camp, you'll need a PC that meets the following requirements:
NOTE: If your PC does not meet the above requirements, do not continue with this process. Instead, you may prefer to build a Windows Server 2012 box in the cloud by leveraging Windows Azure Virtual Machines. Building your lab in the cloud will allow you to complete most hands-on activities in this study group, but will not permit you to perform hands-on activities related to Windows Server 2012 Hyper-V.
DISCLAIMER: This process installs Windows Server 2012 in a dual-boot scenario using the Boot-to-VHD feature in Windows Vista, Windows 7, and Windows 8. While this process is not intended to disrupt your existing OS installation, these steps are for use at your own risk. No support or warranties are implied or provided.
This article is for readers who already have some experience with Microsoft Office Groove 2007 (Groove 2007) to better understand the usage, business values, and limitations related to the new feature, SharePoint Workspace, in SPW 2010. Those who are not familiar with how Groove 2007 works should first reference resources listed under Groove Workspace in Part 2 of this article.
As part of Microsoft Office 2010 Professional Plus, SPW 2010 brings much-needed SharePoint capabilities into the desktop. A key feature in SPW 2010 is the ability to synchronize SharePoint libraries and lists. Taking SharePoint content offline and synchronizing the content automatically and as needed is probably one of the most requested features in Office since the introduction of SharePoint Files Tool in Groove 2007. The SharePoint Files Tool in Groove 2007 can synchronize data with and only with a SharePoint document library. With SPW 2010, a content owner can create a so-called SharePoint Workspace and maintain a local copy of SharePoint libraries and lists and synchronize them with the corresponding items in an associated SharePoint site. SPW 2010 is the rich client for SharePoint 2010. And the relationship between a SharePoint Workspace and SharePoint is similar to how Outlook relates to Exchange.
There are other important changes introduced in SPW 2010. The Ribbon, shown below as the UI, provides a user experience that is consistent across all solutions in the Office family. InfoPath 2010 is now the form designer for all forms in SPW 2010. Both Discussion and List tools in SPW 2010 are based on InfoPath. In the Documents tool, users now can drag and drop items like they do in Windows Explorer. For quick and frequent access, a user can drag a SharePoint Workspace to the desktop. To simplify the log-in process without compromising security, SPW 2010 now provides an SSO experience, employing Windows credentials to authenticate a user.
SPW 2010 can be considered as the new capabilities (including the Ribbon and SharePoint Workspace) plus most Groove 2007 features together, essentially a two-in-one package. SPW 2010, in my opinion, signifies a major, strategic investment from Microsoft in data synchronization with SharePoint. For those who live to Groove and Groove to live, yes, most Groove functions and features are still available within SPW 2010 and life is good. Above all, SPW 2010 is to effectively address the business needs for accessing libraries and lists of a SharePoint site offline with a rich desktop client, while maintaining high mobility for collaborating in a dynamic, ad hoc fashion with team members both within and outside of an organization.
Notice that there are products and features which are NO LONGER AVAILABLE in SPW 2010, including:
One interesting fact in SPW 2010 is that workspace members can only be promoted. This rule applies to any workspace member who is uninvited from a workspace and then re-invited to the workspace. For example, a participant who is uninvited from a workspace can be re-invited to the same workspace only as a participant or manager. (Continued in Part 2)
The first order of business in understanding cloud computing is to know what the term, service, means, since it has been used extensively to explain cloud technologies. Service in the context of cloud computing means “capacity on demand” or simply “on demand.” Notice that on-demand here also implies real-time response and ultimately anytime, anywhere, any-device accessibility. The idea is straightforward. Basically, as a service bell is rung, the requested resources are magically made available. So, IT as a Service means IT on demand. And now it should be apparent what news as a service, catering as a service, or simply my business as a service means. And we can clearly explain the three cloud computing delivery methods. SaaS means software on demand; simply, an application can be readily available for an (authorized) user. PaaS offers a programming environment (or platform) enabling the development and delivery of SaaS. And IaaS empowers a user with the ability to provision infrastructure, i.e. deploy servers with virtual machines, on demand. Further, at an implementation level with current technologies, cloud computing also dictates that virtualization (namely an abstraction of the underlying complexities of topology, networking, monitoring, management, etc. from provided services) is put in place, such that a user can consume or acquire SaaS, PaaS, and IaaS without the need to own and deploy the required hardware, reconfigure the cabling, and so on.
The term, cloud, regardless of public, private, or anything in between, means the 5-3-2 principle of cloud computing (see above) is applicable. The 5 characteristics listed in the 5-3-2 principle are the criteria to differentiate a cloud from a non-cloud application, and also concisely outline the benefits of cloud computing. This recognition is the essence of cloud computing. And in my view, much of the confusion in cloud computing discussions has been due to a lack of understanding of the 5-3-2 principle. For instance, many are confused about and mistakenly consider remote access or anything via the Internet to be cloud computing. This assumption is incorrect and inconclusive. My rule of thumb is that applications exhibiting the 5 characteristics are cloud applications and those that don’t are not. Above all, the 5-3-2 principle, or more specifically the NIST Definition of Cloud Computing, scopes the subject domain of cloud computing with current technologies and presents a definition that is structured, disciplined, and clear.
So public cloud is a cloud and the 5-3-2 principle applies. The term, public, in the context of cloud computing, refers to the Internet, general availability, and subscription where applicable. Windows Live and Hotmail, for example, are Microsoft SaaS offerings in the public cloud for consumers, while Office 365, Microsoft Online Services, and Microsoft Dynamics CRM Online are for businesses. They all are cloud applications because:
The 5 characteristics exhibited in the above-mentioned cloud applications are vivid and without ambiguity.
At the same time, private cloud is also a cloud and dedicated, hence private, to an organization. As explained in Highly Virtualized Computing vs. Private Cloud, ubiquitous access and a pay-as-you-go model may not be essential in private cloud. Still, the applicability of the 5-3-2 principle to all cloud applications, including private cloud, should be very clear here. So, for example, the reasoning to answer the following question is actually straightforward.
First, with the 5-3-2 principle, we can easily determine if an application is a cloud application. Then the strategy is to discover which of the 5 characteristics are missing and how relevant they are to the business requirements. For instance:
And it is certainly up to an organization to decide how critical the 5 characteristics are and whether all or selected ones are applicable to a targeted delivery. The lesson here is not necessarily an academic debate on whether a particular feature like self-service should be a requirement of private cloud. The crucial element is to have a predictable way (namely the 5 characteristics) to identify what is relevant to business requirements.
One interesting observation of cloud computing is that many seem to have some understanding, yet few have a complete picture, since this is a subject touching very much every aspect of IT. Many can highlight some points of cloud computing, yet few have a structured and disciplined approach to explaining it, since cloud computing is a very complex proposal on both the business and technical sides. I believe a productive way to discuss cloud computing is to focus on the fundamentals, and have a clear understanding of what cloud is about and why, before framing it with a particular business or implementation. Employ the 5-3-2 principle to organize the message and describe cloud computing in your own words. You will find that once you have grasped the concept, you can navigate through a cloud computing conversation with clarity, substance, and productivity.
A Self-Service Portal is basically a Web site installed on a web server with ASP.NET, IIS 6 Metabase Compatibility, and IIS 6 WMI Compatibility server role services. By accessing the Self-Service Portal, authorized users can create and operate their own virtual machines (VMs) as permitted by each user's User Roles, while the created VMs are placed in a Library Server managed by System Center Virtual Machine Manager, or SCVMM. A User Role here is essentially a policy with membership, authorized hardware and software profiles, an allowed scope of operations, and assigned templates applicable for creating and managing VMs using the Self-Service Portal. In a Self-Service Portal session, an authorized user sees only those virtual machines that the user owns or is authorized to operate upon. And as a VM is created or deleted by a user, the user's quota points are subtracted or regained by the amount of quota points that the VM is assigned in the employed template. Once a user has fewer quota points than what is needed for creating a new VM, the user has reached the maximum number of VMs the applicable User Role allows the user to create.
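The quota mechanics amount to simple bookkeeping; here is a minimal sketch (the point values are hypothetical) of how points are checked and adjusted:

```powershell
# Minimal sketch of the Self-Service Portal quota bookkeeping; the
# point values below are hypothetical examples.
$quotaRemaining = 20   # points granted by the user's User Role
$templateCost   = 5    # quota points assigned to the chosen VM template

if ($quotaRemaining -ge $templateCost) {
    $quotaRemaining -= $templateCost    # points subtracted at VM creation
    "VM created; $quotaRemaining quota points remaining."
} else {
    "Quota reached; this User Role cannot create another VM."
}

$quotaRemaining += $templateCost        # points regained when the VM is deleted
```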
The system requirements of components for constructing a Self-Service Portal include
To prototype a Self-Service Portal using a laptop, here are the steps:
The following screencasts present the user experience and walk through the operations carried out from steps 5 to 11:
In today’s episode, Yung Chou shows us how we can create a virtual machine using Windows Azure. Sign up for your free 90-day trial, if you have not already. In this how-to video, Yung creates a Windows Server 2012 virtual machine within a matter of minutes, showing us what options are available as well as how you can manage and remote into it.
This series focusing on cloud essentials for IT professionals includes:
Compared with on-premises computing, deployment to the cloud is much easier. A few clicking operations will do. And I believe this is probably why many have jumped to the conclusion that we, IT professionals, are going to lose our jobs in the cloud era. There is some, not much but some in my view, truth to it. However, not so fast, I say. Bringing cloud into the picture does dramatically reduce cost and complexity in some areas, yet at the same time cloud also introduces many new nuances which demand that IT disciplines develop criteria with business insights. No, I do not think IT professionals are going to lose jobs; rather, we will affect the bottom line more directly than ever, since with cloud computing everything is attached to a dollar sign and every IT operation has a direct cost implication. In this article, I discuss some routines and additional considerations for deployment to the cloud. Notice that throughout this article, in the context of cloud computing, I use the terms application and service interchangeably.
For IT professionals, deploying a Windows Azure service to the cloud starts when development hands over the gold bits and configuration file, as depicted in the schematic below.
In Visual Studio with the Windows Azure SDK, a developer can create a cloud project, build a solution, and publish the solution with an option to generate a package which zips the code, accompanied by a configuration file defining the name and number of instances of each Compute role. This package and the configuration file are what get uploaded into the Windows Azure platform, either through the UI of the Windows Azure Developer Portal or programmatically with the Windows Azure API. For those not familiar with the steps and operations to develop and upload a cloud application into the Windows Azure platform, I highly recommend you invest the time to walk through the labs, which are well documented and readily available.
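For those who prefer scripting over the portal UI, a hedged sketch with the Windows Azure PowerShell cmdlets (the service name and file paths are hypothetical examples):

```powershell
# Sketch, assuming the Windows Azure PowerShell module and an existing
# hosted service; the service name and paths are hypothetical examples.
New-AzureDeployment -ServiceName 'yc-demo' -Slot Staging `
    -Package '.\CloudService.cspkg' `
    -Configuration '.\ServiceConfiguration.cscfg'
```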
Application Lifecycle Management
In a typical enterprise deployment process, developers code and test applications locally or in an isolated environment and go through the build process to promote the code to an integrated/QA test environment. And upon finishing tests and QA, the code is promoted to production. For cloud deployment, the idea is much the same, other than at some point the code will be placed in the cloud and tests will be conducted in the cloud before going live on the Internet. Below is a sample application lifecycle where both the production and the main test environments are in the cloud.
Traditionally when developing and deploying applications on premises, one challenge (and it can be a big one) is to try to keep the test environment mimicking production as closely as possible to validate the developed code, data, operations, and procedures. Sometimes, this can be a major undertaking due to ownership, corporate politics, financial constraints, technical limitations, discrepancies between the test environment and production, etc. And the stories of applications that behave as expected in the test environment, yet generate numerous security and storage violations, throw fatal errors, and crash hard once in production, are heard many times. This is different when testing Windows Azure services. It turns out the user experience of promoting code from staging to production in the cloud can be pleasant and something to look forward to.
When first promoting code into the Windows Azure platform from a local development/test environment, the application is placed in the so-called staging phase, and when ready, an administrator can then promote the application into production. An interesting fact is that the staging and the production environments in the Windows Azure platform are identical. There is no difference; the Windows Azure platform is the Windows Azure platform. What differentiates a staging environment from a production environment is the URL. The former is provided with a unique alphanumeric string as part of the non-published staging URL like http://c2ek9aa346384629a3401e8119de3500.cloudapp.net/ while the latter is an administrator-specified, user-friendly URL like http://yc.cloudapp.net. So to promote from a staging environment to production in the Windows Azure platform, simply swap the virtual IP by swapping the staging URL with the production one. This is called VIP swap, and two mouse clicks are what it takes to deploy a Windows Azure service from staging to production. See the screen capture below. And minutes later, once the Fabric Controller syncs all the URL references of this application, it is in production.
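The same VIP swap can also be scripted; a hedged sketch with the Windows Azure PowerShell cmdlets (the service name is a hypothetical example):

```powershell
# Sketch: swap the staging and production deployments of a hosted
# service (assumes the Windows Azure PowerShell module; the service
# name is a hypothetical example).
Move-AzureDeployment -ServiceName 'yc-demo'
```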
VIP swap is handy for the initial deployment and for subsequently updating a service with a new package. When making changes to a service, the service will be placed in the staging environment and a VIP swap will promote the code into production. This feature however is not applicable to all changes of a service definition. In such scenarios, redeployment of the service package will become necessary.
With the Windows Azure platform in the cloud, there are new opportunities to extend as well as integrate on-premises computing with cloud services. An application architecture can accept HTTP/HTTPS requests either with a Web role or with a front-end that is on premises. And where to process requests and store the data can be in the cloud, on premises, or a combination of the two, as shown below.
So whether the service starts in the cloud or on premises, an application architect now has many options to architecturally improve the agility of an application. An HTTP request can start with on-premises computing while integrating with a Worker role in the cloud for capacity on demand. At the same time, a cloud service can as well employ a middle tier and back-end that are deployed on premises for security and control.
Standardizing Core Components
Either on premises or in the cloud, core components including application infrastructure, development tools, a common management platform, identity mechanisms, and virtualization technology should be standardized sooner rather than later. With a common set of technologies, see below, both IT professionals and developers can apply their skills and experience to either computing platform. This will in the long run produce higher-quality services with lower costs. This is crucial to making the transformation into cloud a convergent process.
By default, diagnostic data in Windows Azure is held in a memory buffer and is not persistent. To access log data in Windows Azure, the log data first needs to be moved to persistent storage. One can do this manually by using the Windows Azure Developer Portal, or add code in the application to dump the log data to storage at scheduled intervals. Next, one needs a way to view the log data in Windows Azure storage, using tools like Azure Storage Explorer.
Traditionally when running applications on premises, diagnostic data is relatively easy to get since it is stored in local storage and can be accessed and moved easily. Logs and events generated by applications are subscribed to, acquired, and stored as long as needed. When moving an application to the cloud, access to diagnostic data is no longer the status quo. Because persisting diagnostic data to Windows Azure storage costs money, IT will need to plan how long to keep the diagnostic data in Windows Azure and how to download it for offline analysis.
The cost of processing diagnostic data is one item in the overall cost model for moving an application to public cloud. An application cost analysis can start with the big buckets including network bandwidth, storage, transactions, CPU, etc., and eventually list out all the cost items, each with the organization responsible for those costs. Compare the ROI with that of an on-premises deployment to justify whether moving to the cloud makes sense. Once agreements are made, each cost item/bucket should have a designated accounting code for tracking the cost. IT professionals will have much to do with the cost analysis since the operational costs of running applications in the cloud are compared with both the capital expenses and operational costs of an on-premises deployment. For example, routine on-premises operations like backup and restore, monitoring, reporting, and troubleshooting are to be revised and integrated with cloud storage management, which has both operational and cost implications.
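A back-of-envelope sketch of such a cost model (all rates and volumes below are hypothetical placeholders, not actual Windows Azure pricing):

```powershell
# Back-of-envelope cost model sketch; every rate and volume here is a
# hypothetical placeholder, not actual Windows Azure pricing.
$computeHours = 2 * 24 * 30          # two instances, one month
$computeRate  = 0.12                 # $ per instance-hour (hypothetical)

$storageGB    = 50                   # app + diagnostic data retained
$storageRate  = 0.15                 # $ per GB-month (hypothetical)

$egressGB     = 100                  # data downloaded for offline analysis
$egressRate   = 0.12                 # $ per GB (hypothetical)

$monthly = ($computeHours * $computeRate) +
           ($storageGB * $storageRate) +
           ($egressGB * $egressRate)

'Estimated monthly cost: ${0:N2}' -f $monthly
```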
The takeaway is “Don’t go to bed without developing a cost model.”
Some Thoughts on Security
Cloud is not secure? I happen to believe that in most scenarios, cloud is actually very secure, and more secure than running a datacenter on premises. Cloud security is a topic in itself and certainly far beyond the scope of the discussions here. Nevertheless, you will be surprised how fast a cloud security question can be raised and answered, many times without actually answering it. In my experience, many answers to cloud security questions seem to surface themselves once the context of a security question is correctly set. Keep the concept of separation of responsibilities vivid whenever you contemplate cloud security. It will bring much clarity. For SaaS, there is very little a subscriber can do, and security at the network, system, and application layers is largely pre-determined. PaaS like the Windows Azure platform, on the other hand, presents many opportunities to place security measures in multiple layers with a defense-in-depth strategy, as shown below.
Recently some Microsoft datacenters hosting services in the cloud for federal agencies have achieved FISMA certification and received ATO (Authorization to Operate). This is very much proof that cloud operations can meet and exceed rigorous standards. The story of cloud computing will only get better from here.
Cloud is not secure? Think again.
[To Part 1, 2, 3, 4, 5, 6]
I want to call out to and invite IT professionals interested in achieving Microsoft certifications to join, participate in, and contribute to the Windows Server Early Experts Challenge. This program is for learning about the latest version of Windows Server with excelling in the related Microsoft certification exams in mind.
The Challenge involves a series of Knowledge Quests - starting with the Apprentice Quest below - and each Quest ends with a special completion certificate for you to promote your new knowledge! To make it easy to participate, each Quest is developed in a modular format that you can complete based on your own schedule and availability.
The first five Knowledge Quests are Apprentice, Installer, Explorer, Networker and Virtualizer. These Knowledge Quests target the objectives in Exam 70-410: Installing and Configuring Windows Server 2012.
Let me acknowledge that the contents presented in the Early Experts Challenge series are based on Keith Mayer’s work. His enthusiasm, efforts, and impact on helping IT pro communities adopt Windows Server 2012 have been inspirational, effective, and significant.
This program leverages the Microsoft Virtual Academy (MVA) for some of our free online study resources. You will need to first register for an MVA account using your Microsoft Account (a.k.a. Windows Live ID) via the link below …
In this first knowledge quest, you will learn and explore the key new technical capabilities of Windows Server 2012 across the product pillars of virtualization, management, networking and storage, etc. to properly position them for relevant usage scenarios.
The seven modules in this course, through video and whitepaper, provide details of the new capabilities, features, and solutions built into the product. With so many new features to cover, this course is designed to be the introduction to Windows Server 2012. After completing this course, you will be ready to dive deeper into Windows Server 2012 through additional Microsoft Virtual Academy (MVA) courses dedicated to each topic introduced in this “Technical Overview.”
Alternate option: You can also attend a free Windows Server 2012 First Look Clinic at a Microsoft Learning partner near you if you'd prefer an in-person training experience.
With so much to learn in Windows Server 2012, building your own lab environment is the best way to REALLY learn new technology! You can download the Windows Server 2012 installation bits and start the process! We'll be using these installation bits in the coming weeks in the additional Knowledge Quests of the "Early Experts" Challenge. Be sure to download the bits in "VHD" format (not "ISO" format) as we'll be using the VHD bits to build your study lab and in future Knowledge Quests for hands-on activities.
Follow this step-by-step guide to build your own study lab as a dual-boot environment on your existing desktop or laptop PC. We'll leverage this study lab environment in future Knowledge Quests for hands-on activities. Hands-on experience with Windows Server 2012 will help you greatly in mastering the knowledge and skills needed to successfully pass the certification exams.
Participate in our Online Study Group Community on LinkedIn to post questions you may have, share your insights and collaborate with other members as we all prepare for certification! Each of us has unique insight and by participating in this community, we'll be able to expand our technical knowledge beyond our own experiences.
Now that you've completed this Knowledge Quest, be sure to share your success with your social network using one of the buttons below for Twitter, LinkedIn or Facebook. By sharing your success, you'll also help to encourage others to join our study group and increase the number of IT Pros working together to help grow our collective technical knowledge and share even more community insight that benefits us all!
Have you completed Steps 1 through 5? If so, follow these steps to validate your lab completion and claim your "Early Experts - Apprentice" certificate:
Once you've submitted your certificate request, feel free to keep going with the next Knowledge Quest below!
After you've completed the "Early Experts" Apprentice Quest, keep going with the next Knowledge Quest to continue your preparation for the MCSA on Windows Server 2012 Exams:
One key focus of an App-V solution is the ability to run multiple versions of application software within the same OS instance without concern about conflicts among those versions. To quickly prove the concept, I prototyped a solution with two virtual machines based on Hyper-V. Here are the configurations:
Notice the above configurations are simply what I used for rapid prototyping to demonstrate the capabilities. They are not recommendations, nor best practices.
On the DC, I installed App-V 4.5 Management Server and imported all of the previously sequenced applications. (See Figure 1.) A security group for each sequenced application was created in Active Directory Users and Computers as well. (See Figure 2.) When testing, I would add a test account into a target security group, for instance appvOffice97, followed by logging in to the client machine to verify the connectivity and application streaming. The process is not complicated at all. However, it is very easy to make operational mistakes, and practice very much makes perfect here.
Figure 1. App-V Management Server Console with Sequenced Applications Already Imported
Figure 2. Security Groups for Accessing Sequenced Applications
On the domain-joined Vista SP1 desktop, I logged in as a local admin to install the App-V 4.5 client and verify the connectivity. App-V 4.5 by default uses port 322 to stream, and there were times I used telnet to make sure the port was open. Make sure to set up Windows Firewall accordingly. When connectivity had been verified, I then switched users and logged in using a test account. By default, App-V refreshes the application list at user login time. This can be customized on the server under Provider Policies of the App-V Management Server console. Once logged in, all authorized App-V applications are listed in the client console. (See Figure 3.)
Figure 3. Sample List of Applications Offered to an Authorized User by the App-V Client
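Back to the connectivity check mentioned above: if the telnet client is not installed, a few lines of PowerShell with the .NET TcpClient class make an equally quick port probe. This is a generic check, not an App-V tool; the server name below is a hypothetical placeholder for your management server:

    # Probe the App-V streaming port (adjust server name and port for your lab)
    $server = "appvserver.contoso.com"   # hypothetical server name
    $port   = 322
    try {
        $client = New-Object System.Net.Sockets.TcpClient
        $client.Connect($server, $port)
        Write-Host "Port $port on $server is open."
        $client.Close()
    } catch {
        Write-Host "Cannot reach port $port on $server."
    }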
How to sequence an application and import it into App-V Management Server is beyond the scope of this posting and will be demonstrated in upcoming screencasts. Here, Figure 4 and Figure 5 show the user experience when multiple versions of the Office suite are deployed to the desktop using App-V. Some may prefer to place the icons on the desktop or in folders with specific headings, etc. These settings are customizable in the OSD file of a sequenced application.
Figure 4. Multiple Versions of Office Suite Deployed by App-V 4.5 to Vista Desktop
Figure 5. Running Access 97 and Access 2000 Deployed by App-V 4.5
To accelerate the learning of private cloud, a direct and effective way is to walk through the process of deploying one. And that is what this blog post and screencast series will deliver by detailing the essential operations and steps to deploy and manage a service in a private cloud with SCVMM 2012, including:
The process I am focusing on in this series starts from the signoff of a to-be-deployed application, here StockTrader. In this series, I, as a private cloud administrator, will walk through the process to deploy StockTrader as a service to a target private cloud. How the application was developed, configured, and packaged is not the subject here; how it is to be deployed as a service to a target private cloud is. Deploying and managing an application as a service is an important concept and a key delivery of VMM 2012. The following further explains.
Notice that in VMM 2012 a service specifically means a set of VMs which collectively deliver a business function. At the operational level, this set of VMs can be configured, deployed, and managed as a whole, i.e. as one entity. This is achieved in VMM 2012 by employing a service template. By predefining the application architecture with the content, configurations, deployment operations, and procedures of an intended application in a VMM 2012 service template, we can essentially deploy an application architecture together with a running instance of an intended application, i.e. deploy an application as a service. And by managing the instance of a service template, we are managing all associated resources of a running instance of an intended application, which may encompass multiple VM instances in multiple tiers.
This is an end-to-end sample application based on Windows Communication Foundation and ASP.NET. StockTrader is designed as a high-performance application that can seamlessly scale out across multiple servers with load balancing and failover at the service-request level. In addition, the application can be deployed to Windows Azure Platform, a private cloud, or a hybrid environment with secure communication between Windows Azure instances and on-premises services. It illustrates many of the .NET enterprise development technologies for building highly scalable, rich, cloud-connected applications.
The StockTrader application package I downloaded from http://connect.microsoft.com (find more details at the end of this blog post) includes pre-baked sysprepped VHD images, application code, scripts, App-V packages, and a service template which defines the multi-tier application architecture, the operations and procedures, the dependencies and intelligence, etc. with VM templates. We will use the provided service template to deploy StockTrader as a service to a target private cloud.
From a consumer’s point of view, regardless of where and how StockTrader is deployed, it is a web application. The cloud connotation is relevant mainly to a service provider, signifying the ability to deploy, exhibit, and manage an application per the 5-3-2 principle of cloud computing or NIST SP 800-145.
The test lab is a simple environment including a Windows domain with a VMM 2012 server and a Hyper-V host as members. This lab is the starting point of a private cloud environment. It is a test lab, not an ideal nor a realistic representation of all the components and functions needed to deliver a comprehensive private cloud solution. A comprehensive private cloud solution, including configuration management, a deployment vehicle, process automation, service/help desk, virtual machine manager, self-service portal, etc., is what System Center 2012 delivers. For those who would like to build a test lab similar to mine, here is the hardware and software information:
You will need 64-bit hardware to build a Windows domain with a domain controller, a SCVMM 2012 server, and a Hyper-V host to get started with a simple yet realistic enough test lab. The Hyper-V host needs direct access to the hardware, since it runs as the root/parent partition hosting virtual machines. A great poster to help you better understand Hyper-V is available at http://aka.ms/free. The other two, the domain controller and the SCVMM 2012 box, can be physical or virtual machines. As needed, other System Center 2012 family members can later be added into the environment to form a comprehensive private cloud solution. Having a SCVMM 2012 server and a Hyper-V host in a Windows domain is the beginning and the essential setup to start building a private cloud solution.
I set up the environment on my laptop, where the booted Windows Server 2008 R2 SP1 instance, i.e. the root partition, is a Hyper-V host and a member of the contoso.corp domain, while the domain controller and the SCVMM 2012 server are both virtual machines, each running as a guest OS. The following is the hardware information.
As far as the hardware is concerned, RAM is a significant resource in virtualization and where I would spend my money.
[To Part 2, 3, 4, 5]
There are two important concepts in VMM 2012 SP1 for understanding Microsoft private cloud solutions: “Fabric” and “Service Template.”
In Microsoft private cloud solutions, VMM is the management solution for virtualized resources. In the context of cloud computing, virtualization now encompasses three disciplines. In addition to server virtualization, which many IT professionals are familiar with, network virtualization and storage virtualization are included as the three resource pools which together form the so-called fabric. VMM 2012 and later has this fabric abstraction architected in. Designed with constructing and managing fabric in mind, VMM has become a key enabler in implementing a private cloud solution.
In cloud computing, fabric is an abstraction signifying the ability to discover, identify, and manage a resource. There are three resource pools (Compute, Network, and Storage) integrated with one another to collectively form the fabric. Namely, a resource added into one of the three resource pools will by default become part of the fabric and automagically a managed object. Here the Compute pool represents all resources relevant to computing power, CPU cycles, and the execution of code. The Network pool is how resources are glued together or isolated. And the Storage pool is where digital assets are stored.
In VMM 2012 SP1, the admin console (as shown on the left) substantiates the concept of fabric and the three resource pools with visual presentations. Clicking the Fabric workspace will display the three resource pools in the navigation pane as Servers (i.e. Compute), Networking, and Storage. Each pool includes groups of components and configurations to support designated functions. One major part of building a private cloud is to establish the three resource pools by adding and configuring server, network, and storage virtualization solutions and components into an associated resource pool, as sketched below.
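To make the idea concrete, here is a minimal sketch of adding a Hyper-V host into the Servers (Compute) pool with the VMM 2012 PowerShell module; the host, host group, and Run As account names are placeholders from a hypothetical lab:

    # A minimal sketch: bring a Hyper-V host into the fabric's Compute pool
    # (names are hypothetical; verify parameters against your VMM build)
    $runAs     = Get-SCRunAsAccount -Name "Domain Admin"
    $hostGroup = Get-SCVMHostGroup -Name "All Hosts"
    Add-SCVMHost "hv01.contoso.corp" -VMHostGroup $hostGroup -Credential $runAs

Once added, the host is part of the fabric and shows up as a managed object under Servers.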
Above the fabric are resources available for consumption, while under the fabric are the three resource pools managed by VMM, offering computing power, networking capabilities, and storage space to fulfill requests with elasticity. Fabric offers simplicity and shields a user from the complexities under the hood.
Conceptually, the term service in the context of cloud computing means capacity on demand. Hence IaaS, or Infrastructure as a Service, means infrastructure available on demand. For IT professionals, infrastructure means servers, and in cloud computing servers are deployed as VMs since all consumable resources in cloud computing are virtualized. Therefore, IaaS becomes the ability to deploy VMs on demand.
PaaS is Platform as a Service. An application platform means a target runtime environment for an application, so PaaS is a runtime environment available on demand. A runtime environment includes the DLLs, APIs, registry, services, etc. which are configured after a server OS is put in place, and putting that server OS in place is what IaaS delivers. This suggests that PaaS has a dependency on IaaS.
SaaS is Software as a Service. Software is essentially an application, so SaaS means an application available on demand. Since an application runs in a target runtime environment, if the target runtime environment is available on demand, the application can consequently become available on demand. For instance, a .NET application runs in a .NET Framework environment, which is what Windows Azure PaaS offers. Since the .NET Framework runtime environment is available on demand, a .NET application deployed to Windows Azure can then become available on demand, i.e. delivered as SaaS. In other words, SaaS relies on PaaS for a target runtime environment.
This relationship between IaaS, PaaS, and SaaS presents a logical approach for transforming enterprise IT into a cloud computing setting. That is, start with IaaS, transition to PaaS, and ultimately deliver SaaS. This concept is realized in a VMM Service Template deployment.
In VMM, a service has an operational definition: a set of VMs deployed and managed as one entity which collectively delivers a LOB application. This definition is significant.
A VMM service template is a deployment blueprint capable of encapsulating everything needed for deploying an application including application architecture, contents, requirements, configurations, processes, tasks, and operations. With a service template, IT can now deploy, configure, and substantiate an instance of a target application with consistency and predictability. The introduction of a service template makes deployment as a service a reality.
For example, the following is a service template with a web front-end, a mid-tier for operations, another mid-tier as a business service layer, and a SQL back-end. For each machine tier, a VM template is put in place with a hardware profile, OS profile, application profile, and database profile as applicable. Associated with the four VM templates there are two web application packages, a Server App-V package for order processing, a Server App-V package for business services, and five database deployment packages, respectively.
That these four VM templates collectively deliver a LOB application suggests this set of VMs represents the application architecture. Deploying this web application as a service in VMM denotes that the application architecture can be managed as a single entity. The ability to deploy an application architecture, i.e. a set of VMs collectively delivering an application, is a realization of IaaS. Namely, VMM can provision the infrastructure (i.e. deploy the set of VMs) of an application on demand.
Since the entire application architecture can be put in place as one entity, the processes to configure a target runtime environment for the application can be automated and carried out upon completion of deploying the architecture. For instance, once the set of VMs forming the multi-tier architecture of the web application is in place, the process can subsequently install the web server role, .NET Framework, Server App-V, and SQL Server on selected VMs and validate interdependencies such as protocols, APIs, ports, rules, etc., if any, among these VMs. The outcome is a set of VMs configured to provide a target runtime environment. Since the application architecture is deployed on demand, and the runtime environment is automatically configured once the architecture is deployed, the application runtime environment (i.e. platform) is available on demand. This is essentially transitioning from IaaS to PaaS.
As the target runtime environment is configured, the process can then kick off the application installation procedures. This is when the front-end IIS server, the mid-tier operations and business service servers, and the back-end SQL server are installed with web application packages, Server App-V packages, and database packages respectively. Application parameters, customizations, and interdependencies among servers at the application layer are set and validated at this time. Once the target application is successfully installed and started, an instance of the application is substantiated. And because the runtime environment (or platform) is available on demand, the application running in that environment can now be installed and become available on demand, which is SaaS.
A service template can in essence encapsulate everything needed to successfully deploy a target application. The process starts with IaaS to deploy the application architecture, followed by transitioning to PaaS when configuring the runtime environment, and then installing the target application and presenting it as SaaS.
Once a service template is validated against the fabric, i.e. all resources referenced in the template are correct and available in the fabric, an application can be deployed by substantiating (i.e. deploying) an instance of the service template. Since the deployment of VMs, the runtime environment configurations, and the application installations and customizations can all be automated, the process of deploying a service template can be simplified to a few mouse clicks, as sketched below. And a successful deployment of a service template results in an instance of the target application.
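For a sense of what those mouse clicks map to under the hood, the following is a minimal PowerShell sketch, assuming the VMM 2012 module and a validated template; the template, cloud, and service names are placeholders:

    # A minimal sketch, assuming a validated service template in the library
    $template = Get-SCServiceTemplate -Name "StockTrader"
    $cloud    = Get-SCCloud -Name "Contoso Private Cloud"   # hypothetical cloud name
    # Bind a service configuration to the target private cloud
    $config = New-SCServiceConfiguration -ServiceTemplate $template -Name "StockTrader01" -Cloud $cloud
    # Let VMM compute resource placement, then deploy the service instance
    Update-SCServiceConfiguration -ServiceConfiguration $config
    New-SCService -ServiceConfiguration $config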
There are many details embedded in a service template to keep each service template deployment isolated from the others and a unique application instance to the fabric. Nevertheless, the employment of a service template provides consistency of application design and configuration. It is similar to using the same layout to build houses: while all the houses share the same layout, each house is still individually identifiable and unique.
A fundamental approach in cloud computing is to develop process patterns for consistency, repeatability, predictability, and simplicity. Fabric and Service Template in VMM 2012 SP1 are two vivid examples of hiding away and replacing complexities with patterns and blueprints. Both suggest some form of logical grouping and standardization. Once standardized, automation can follow to increase efficiency and reduce TCO. And for what is automated, optimization can then maximize ROI. Ultimately VMM 2012 SP1 is about building a private cloud that delivers quicker, better, and more, all with less.
To deploy an application as a service to a private cloud in VMM 2012, a service template is the key. In this second article of the 5-part blog post series shown below, let’s walk through the process of making a service template ready for use.
For those who would like to build a test lab, download Windows Server 2008 R2 SP1 and the System Center products including VMM 2012. There are also free eBooks and posters illustrating many important concepts of virtualization.
This is a main delivery of VMM 2012, and a noticeable differentiator from VMM 2008 R2 is that VMM 2012 is designed with the service concept and a private cloud in mind. A service in VMM 2012 is a set of VMs collectively delivering a business function, and they are configured, deployed, operated, and managed as a whole. A service template is the vehicle to realize the service concept.
Physically an XML file, a service template encapsulates “everything” needed to do a push-button deployment of an application architecture with a running instance of a target application. Just imagine: all the knowledge and tasks involved in an application deployment other than hardware allocation, from application architecture to configurations, operations, and procedures, are all orchestrated and encapsulated in this XML file. Here the hardware allocation is managed by VMM 2012 with the private cloud fabric and is transparent to an application. Specifically, by “everything” of an application deployment I mean:
Importing into Private Cloud Fabric
To deploy an application as a service into a target private cloud, first make all resources relevant to the deployment visible in the private cloud fabric. This can be easily done by simply xcopying the StockTrader package, as shown on the right, into a library share of a VMM 2012 server already configured as part of the private cloud fabric (a sketch follows). (The information to download StockTrader is detailed at the end of Part 1.) Then, in the admin console of VMM 2012, import the service template as shown above.
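The copy itself is nothing more than a file copy to the library share’s UNC path. A one-line sketch with example paths, using the default MSSCVMMLibrary share name:

    # Copy the StockTrader package into a VMM library share (paths are examples)
    Copy-Item -Path "C:\Downloads\StockTrader" -Destination "\\vmm01.contoso.corp\MSSCVMMLibrary\StockTrader" -Recurse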
By default, VMM 2012 refreshes a library share every 60 minutes, as shown below. This refresh interval should be set depending on how often changes are introduced, as well as the network topology and bandwidth.
As needed, an administrator can simply right-click and manually refresh a library share in the VMM 2012 admin console, as shown on the left, to index a newly added resource and make it available upon refresh. Once the application package appears in the library share, we can import the StockTrader service template. As VMM 2012 reads in the content of a service template for the first time, all resources referenced by the service template are validated against the private cloud fabric settings. For instance, when developing and testing an application in a development environment, the employed credentials and network naming are often different from those in production. Individual settings must be validated against the corresponding ones in a target environment. Once validated, the application and associated resources become ready for employment in the private cloud fabric.
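Incidentally, that manual refresh can also be scripted; a minimal sketch, assuming the VMM 2012 PowerShell module is loaded:

    # Re-index all library shares so newly copied resources become visible
    Get-SCLibraryShare | Read-SCLibraryShare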
Recall that fabric is an important abstraction in cloud computing and signifies the ability to discover, identify, and manage computing resources. The presumption is that if a resource is added into one of the three resource pools in the private cloud fabric, it can be discovered, identified, and managed by VMM 2012. The importing process is in essence to examine a service template and flag settings for corrective actions, as applicable, such that all resources referenced by the service template are validated via the associated library servers where the resources reside.
The following illustrates the process of importing the StockTrader service template. If you want to import sensitive settings such as passwords, product keys, and application and global settings that are marked as secure, select the Import sensitive template settings check box. If you do not want to import sensitive data, you can update the references later in the import process.
When VMM 2012 examines a service template, references not properly resolved are listed with yellow warning triangles. In such a case, edit and validate an entry by clicking the pencil icon. An entry showing a red cross is actually an indicator that the referenced resource has been validated, as shown below.
Like many Microsoft products, behind the scenes VMM is implemented with PowerShell, and a set of scripts associated with a series of operations and specified settings can be easily generated for later batch processing and automation. The following shows the View Scripts button available for generating a PowerShell script during a service template import.
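As an illustration of what such a generated script may look like, the following sketch imports a service template package from PowerShell. The path is an example, and the cmdlet parameters are my assumptions to verify against the script emitted by the View Scripts button:

    # A sketch of importing a service template package (verify parameter
    # names against the script generated by the View Scripts button)
    Import-SCTemplate -TemplatePackagePath "\\vmm01.contoso.corp\MSSCVMMLibrary\StockTrader\StockTrader.xml" -SettingsIncludePrivate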
Upon a successful import, the service template is listed as a resource available for deployment. Check the properties, as shown below, to reveal important information including the service settings and dependencies defined in the service template.
StockTrader is a 4-tier application, as illustrated in the service template properties below. The VHDs, Server App-V packages, customization scripts, etc. to be installed are all listed under an associated VM template. When instantiating a VM instance, these dependencies take effect and ensure all requirements are orchestrated and met along the deployment process.
At this time, the StockTrader service template is successfully imported and ready for use. Next is to examine the application architecture defined and configured in the service template. Life is good so far.
[To Part 1, 3, 4, 5]
One way to describe cloud computing is based on the service delivery models. There are three, namely Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS), and depending on the model, a subscriber and a service provider hold various roles and responsibilities in completing a service delivery. Details of SaaS, PaaS, and IaaS are readily available and not repeated here. Instead, a schematic is shown below highlighting the functional components exposed in the three service delivery models in cloud computing compared with those managed in an on-premises deployment.
Essentially, cloud computing presents a separation of a subscriber’s roles and responsibilities from those of a service provider. By subscribing to a particular service delivery model, a subscriber implicitly agrees to relinquish a certain level of access to and control over resources. In SaaS, the entire delivery is provided by a service provider through the cloud. The benefit to a subscriber is that there is ultimately no maintenance needed other than the credentials to access the application, i.e. the software. At the same time, SaaS also means there is little control a subscriber has over how the computing environment is configured and administered outside of a subscribed application. This is the user experience of, for example, some email offerings or weather reports on the Internet.
In PaaS, the offering is basically the middleware where the APIs are exposed, the service logic is derived, the data is manipulated, and the transactions are formed. It is where most of the magic happens. A subscriber in this model can develop and deploy applications with much control over the applied intellectual property.
Of the three models, IaaS provides the most manageability to a subscriber. From the OS and runtime environment to data and applications, all are managed and configurable. This model presents opportunities for customizing operating procedures with the ability to provision, on demand, IT infrastructure delivered by virtual machines in the cloud.
An important takeaway is that we must recognize and keep in mind the limitations of each service delivery model when assessing cloud computing. When a particular function or capability like security, traceability, or accountability is needed yet not provided with a subscribed service, a subscriber needs to negotiate with the service provider and put specifics in a service level agreement. A lack of understanding of this separation of responsibilities, in my view, frequently results in false expectations of what cloud computing can or cannot deliver.
With the introduction of Windows Azure Connect, many options become available for an on-premises application to integrate with or migrate to the cloud at an infrastructure level. The integration and migration opportunities become apparent by examining how applications are architected for on-premises and cloud deployments. These concepts are profoundly important for IT pros to clearly identify, define, and apply while expanding their roles and responsibilities into those of a cloud or service architect. In Part 2, let’s first review computing models before making cloud computing a much more exciting technical expedition with Windows Azure Connect.
Then: Traditional 3-Tier Application Architecture
Based on a client-server model, the traditional n-tier application architecture carries out a business process in a distributed fashion. For instance, a typical 3-tier web application as shown below includes:
When deployed on premises, IT has physical access to the entire infrastructure and is responsible for all aspects of the lifecycle including configuration, deployment, security, management, and disposition of resources. This had been the deployment model upon which theories, methodologies, and practices were developed and many IT shops operated. IT controls all resources and at the same time is responsible for the end-to-end, distributed runtime environment of an application. Frequently, to manage an expected high volume of incoming requests, load balancers, which are expensive to acquire and expensive to maintain, are put in place at the front-end of an application. To improve data integrity, clusters, which are also expensive to acquire and, yes, expensive to maintain, are configured at the back-end. Not only do load balancers and clusters increase the complexities and pose technical challenges with skillsets hard to acquire, but both fundamentally increase the capital expenses and the operational costs throughout the lifecycle of a solution, and ultimately the TCO.
Now: State-of-the-Art Windows Azure Computing Model
Windows Azure Platform is Microsoft’s Platform as a Service, i.e. PaaS, solution. PaaS here means that an application developed with Windows Azure Platform (which is hosted in Microsoft datacenters around the world) is by default delivered as Software as a Service, or SaaS. From a quick review of the 6-part Cloud Computing for IT Pros series, one will notice that I have already explained the computing concept of Windows Azure (essentially Microsoft's cloud OS) in Computing Model and Fabric Controller. In the Windows Azure computing model, a Web Role receives and processes incoming HTTP/HTTPS requests from a configured public endpoint, i.e. a web front-end with an internet-facing URL specified while publishing an application to Windows Azure. A Web Role instance is deployed to a (Windows Server 2008 R2) virtual machine with IIS, and the Web Role instances of an application are automatically load-balanced by Windows Azure. A Worker Role, on the other hand, is like a Windows service or batch job which starts by itself, and is the equivalent of the middle tier where business logic and back-end connectivity sit in a traditional 3-tier design. A Worker Role instance is deployed to a virtual machine without IIS in place. The following schematic illustrates the conceptual model.
VM Role is a definition allowing a virtual machine (i.e. a VHD file) to be uploaded and run with the Windows Azure Compute service. There are some interesting points about VM Role. In principle, based on the separation of responsibilities, in PaaS only the Data and Application layers are managed by consumers/subscribers, while the Runtime layer and below are controlled by a service provider, which in the case of Windows Azure Platform is Microsoft. Nevertheless, VM Role in fact makes not only the Data and Application layers, but also the Runtime, Middleware, and OS layers, all accessible in a virtual machine controlled by a subscriber of Windows Azure Platform, which is, by the way, a PaaS and not an IaaS offering. This is because VM Role is designed to address specific issues, and above all IT pros need to recognize that it is intended as a last resort. Information on why and how to employ VM Role is readily available elsewhere and not repeated here.
So, with Windows Azure Platform, the 3-tier design is in fact very much applicable. The Windows Azure design pattern employs a Web Role as a front-end to process incoming requests as quickly as possible, and a Worker Role as a middle tier to do most of the heavy lifting, namely executing business logic against application data. The communication between a Web Role and a Worker Role is through Windows Azure Queues, as detailed elsewhere.
With Visual Studio and the Windows Azure SDK, the process of developing a Windows Azure application closely mirrors that of an on-premises application. And the steps to publish a Visual Studio cloud project are amazingly simple: upload two files to the Windows Azure Platform Management Portal. The two files are generated when publishing an intended cloud project in Visual Studio. They are a zipped package of application code and a configuration file, with cspkg and cscfg file extensions respectively. The publishing process can be further hardened with certificates for higher security.
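For those who prefer scripting over the portal, the later Windows Azure PowerShell module can push the same two files. A minimal sketch, assuming the module is installed and a subscription is configured; the service name, label, and paths are examples:

    # Deploy the package and configuration produced by Visual Studio
    # (service name, label, and paths are examples)
    New-AzureDeployment -ServiceName "myhostedservice" -Slot "Production" `
        -Package "C:\publish\MyApp.cspkg" -Configuration "C:\publish\ServiceConfiguration.cscfg" `
        -Label "v1.0"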
Compared with on-premises computing, there are noticeable constraints when deploying an application to the cloud, including:
These constraints are related to enabling system management of resource pooling and elasticity, which are among the essential characteristics of cloud computing.
Two important features, high availability and fault tolerance, are automatically provided by Windows Azure, which can significantly reduce the TCO of an application deployed to the cloud compared with that of an on-premises deployment. Details of how Windows Azure achieves automatic high availability and fault tolerance are not included here; a discussion of this topic is already scheduled for an upcoming blog post. Stay tuned.
An Emerging Application Architecture
With Windows Azure Connect, integrating and extending a 3-tier on-premises deployment to the cloud is now relatively easy. As part of the Microsoft PaaS offering, Windows Azure Connect automatically configures IPsec connectivity to securely connect Windows Azure role instances with on-premises resources, as indicated by the dotted lines in the following schematic. Notice that the role instances and on-premises computers to be connected are first grouped, all members in a group are exposed as a whole, and the connectivity is established at the group level. With IPsec in place, a Windows Azure role instance can join and be part of an Active Directory domain in a private network. Namely, server and domain isolation with Windows Authentication and group policies can now be applied to cloud computing resources without significant changes to the underlying application architecture. In other words, the Windows security model and system management in a managed environment can now seamlessly include cloud resources, which essentially makes many IT practices and solutions directly applicable to the cloud with minimal changes.
With the introduction of cloud computing, an emerging application architecture is a hybrid model with a combination of components deployed to the cloud and on premises. With Windows Azure Connect, cloud computing can simply be part of, and does not necessarily encompass, an entire application architecture. This allows IT to take advantage of what Windows Azure Platform offers, like automatic load balancing and high availability, by migrating selected resources to the cloud, as indicated with the dotted lines in the above schematic, while managing all resources of an application with a consistent security model and domain policies. Whether the front-end of an application is in the cloud or on premises, the middle tier and the back-end can be a combination of cloud and on-premises resources.
Start Now and Be What's Next
With Windows Azure Connect, both cloud and on-premises resources are within reach of each other. For IT pros, this reveals a strategic and urgent need to convert existing on-premises computing into a cloud-ready and cloud-friendly environment. This means, if not already done, starting to build hardware and software inventories, automating and optimizing existing procedures and operations, standardizing the authentication provider, implementing PKI, providing federated identity, etc. The technologies are all here already and solutions are readily available. For those feeling Windows Azure Platform is foreign and remote, I highly recommend familiarizing yourselves with Windows Azure before everybody else does. Use the promotion code DPEA01 to get a free Azure Pass without credit card information. Take the first step of upgrading your skills with cloud computing and welcome the exciting opportunities presented to you.
Having an option to get the best of both cloud computing and on-premises deployment and not forced to choose one or the other is a great feeling. It’s like… dancing down the street with a cloud at your feet. And I say that’s amore.
<Back to Part 1: Concept>
Keith Mayer and I had a chance to work on this project with our very best online content producer, Chris Caldwell. And what a productive experience and fun time we had. Keith’s extensive knowledge of many aspects of constructing a private cloud brought so much substance into every episode we recorded for this series. And I had the opportunity to offer my views on the many subjects discussed in the series.
Across these thirteen episodes, one will find that each is an independent learning module with specific objectives, steps, and operations to carry out tasks, while the entire series collectively presents the methodology, processes, and milestones to conceptualize, design, and implement a private cloud.
Your call to action is to download Windows Server 2012 (http://aka.ms/8) and System Center 2012 SP1 (http://aka.ms/2012), review this TechNet radio series, and follow through the entire methodology (http://aka.ms/privatecloud) to learn and build your private cloud.
This month, we Microsoft Platform Technology Evangelists are writing a series of articles that steps through building your very own private cloud by leveraging Windows Server 2012, Windows Azure Infrastructure as a Service (IaaS), and System Center 2012 Service Pack 1. Week by week, we’ll be walking through the steps to envision, plan, and implement your very own private cloud to take your existing data center to the next level.
The significance of managing a hybrid cloud stems from the potential complexities and complications of operating on resources deployed across various facilities, among corporate datacenters and those of third-party cloud service providers. Multiple management tools, inconsistent UIs, heterogeneous operating platforms, non- or poorly-integrated development tools, etc. can and will noticeably increase overhead and reduce productivity.
The entire blog post series is at http://aka.ms/PrivateCloud with the latest updates.
Continuing our Windows Azure how-to series, Yung Chou shows us how easy it is to capture a virtual machine as an image in Windows Azure and then use it as a template to deploy additional VMs. Yung also walks through the process of attaching a data disk as local storage for keeping user and application data. Sign up for the Windows Azure 90-Day Trial, tune in, and follow through the process to realize the power of Windows Azure and cloud computing.
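For reference, the operations in the episode map roughly to the following Windows Azure PowerShell sketch. The service, VM, image, and disk values are examples, the VM must be sysprepped and stopped before capture, and parameter names have varied across module versions, so verify with Get-Help:

    # Capture a stopped, sysprepped VM as a reusable image (names are examples)
    Save-AzureVMImage -ServiceName "mycloudsvc" -Name "vm01" -NewImageName "myBaseImage"

    # Attach an empty 20 GB data disk to another VM for user and application data
    Get-AzureVM -ServiceName "mycloudsvc" -Name "vm02" |
        Add-AzureDataDisk -CreateNew -DiskSizeInGB 20 -DiskLabel "data" -LUN 0 |
        Update-AzureVM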
The Windows® 7 and Windows Server® 2008 R2 operating systems introduce DirectAccess, a new solution that provides users with the same experience working remotely as they would have when working in the office. With DirectAccess, remote users can access corporate file shares, Web sites, and applications without connecting to a virtual private network (VPN). Further, DirectAccess separates intranet traffic from Internet traffic, as shown on the right, reducing unnecessary traffic on the corporate network.
DirectAccess requirements include:
Here’s how DirectAccess works:
Notice that the DirectAccess connection process happens automatically once a DirectAccess client boots up, without requiring a user to log on.
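On a Windows 7 DirectAccess client, a few built-in commands help confirm that this automatic connection actually came up; these standard checks can be run from an elevated prompt:

    # Show the Name Resolution Policy Table pushed down by group policy
    netsh namespace show policy
    # Check the IP-HTTPS interface used when other IPv6 transitions fail
    netsh interface httpstunnel show interfaces
    # Check Teredo state, another IPv6 transition technology DirectAccess uses
    netsh interface teredo show state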
This is the fifth article of a series reviewing the following five BI vehicles in SharePoint 2010:
PerformancePoint was previously a separate product. Now included in SharePoint 2010, PerformancePoint becomes a set of services configured as a service application, and surfaces itself in a web part page with Key Performance Indicators (KPIs), Scorecards, Analytic Charts and Grids, Reports, Filters, Dashboards, etc. Each of these components interacts with a server component handling data connectivity and security. This integration with SharePoint 2010 brings opportunities to better analyze data at various levels, while the SharePoint security and repository framework provides consistency, scalability, collaboration, backup and recovery, and disaster recovery capabilities. One very interesting analytics tool in PerformancePoint is the Decomposition Tree, which enables a user to navigate through massive amounts of data in a visual and intuitive way to decompose, surface, and rank data based on selected criteria. The user experience is shown below.
PerformancePoint is installed by default in SharePoint 2010. It can be easily configured as a service application in Central Administration and deployed in a SharePoint farm as shown below. Overall, this integration makes Business Intelligence much more approachable in terms of system integration and administration. The PerformancePoint planning and administration resources, the developer and IT pro centers, and the MSDN blog are good places to find more information.
(A cross-posting from Microsoft SharePoint Experts Blog)
Some IT decision makers may wonder: I have already virtualized my datacenter and am running a highly virtualized IT environment, so do I still need a private cloud? If so, why?
The answer is a definitive YES, and the reason is straightforward. The plain truth is that virtualization is not a private cloud, and a private cloud goes far beyond virtualization. (Ref 1, Ref 2)
Technically, virtualization is signified by the concept of “isolation,” by which a running instance is isolated in a target runtime environment with the notion that the instance consumes the entire runtime environment, despite the fact that multiple instances may be running at the same time in the same hosting environment. A well understood example is server virtualization, where multiple server instances run on the same hardware while each instance runs as if it possesses the entire runtime environment provided by the host machine.
A private cloud, on the other hand, is a cloud which abides by the 5-3-2 Principle or NIST SP 800-145, the de facto definition of cloud computing. In other words, a private cloud as illustrated above must exhibit attributes of cloud computing like elasticity, resource pooling, and a self-service model, and be delivered in a particular fashion. Virtualization nonetheless does not hold, for instance, any of these three attributes as a technical requirement. Virtualization is about isolating and virtualizing resources, while how a virtualized resource is allocated, delivered, or presented is not particularly specified. Cloud computing, or a private cloud, is envisioned much differently: the accessibility, readiness, and elasticity of all consumable resources in cloud computing are conceptually defined and technically required for delivery as “services.”
The service concept is a centerpiece of cloud computing. A cloud resource is to be consumed as a service. This is why the terms IaaS, PaaS, SaaS, ITaaS, and XaaS (everything and anything as a service) are frequently heard in a cloud discussion. A service is what must be presented to and experienced by a cloud user. So, what is a service?
A service can be presented and implemented in various ways, like forming a web service with a block of code, for example. However, in the context of cloud computing, a service can be precisely captured by three words: capacity on demand. Capacity here is associated with an examined object such as CPU, network connections, or storage. On-demand denotes anytime readiness with any-network and any-device accessibility. It is a state that previously took years and years of IT discipline and best practices to possibly achieve with a traditional infrastructure-focused approach, while cloud computing makes “service” a basic delivery model and demands that all consumable resources, including infrastructure, platform, and software, be presented as services. Consequently, replacing the term service with “capacity on demand,” or simply “on demand,” brings clarity and gives substance to any discussion of cloud computing.
Hence IaaS, infrastructure as a service, is infrastructure on demand. Namely, one can provision infrastructure, i.e. deploy virtual machines (since all consumable resources in cloud computing are virtualized), based on needs. PaaS means platform as a service, or a runtime environment available on demand. Notice that a target runtime environment is for running intended applications. Since the runtime is available on demand, an application deployed to the runtime will then become available on demand, which is SaaS, or software available on demand, i.e. as a service.
Logically, building a private cloud is the post-virtualization step to continue transforming IT into the next generation of computing with cloud-based deliveries. The following schematic depicts Microsoft’s vision of transforming a datacenter from infrastructure-based deployments to a service-centric cloud delivery model.
Once resources have been virtualized with Hyper-V, System Center 2012 SP1 builds and transforms existing establishments into an on-premises private cloud environment based on IaaS. Windows Azure then provides a computing platform with both IaaS and PaaS solutions for extending an on-premises private cloud beyond corporate boundaries into a global setting with resources deployed off premises. This hybrid deployment scenario is emerging as the next-generation IT computing model in which IT’s ultimate missions to deliver and support business functions will be carried out and maintained as services.
So what is cloud exactly?
Cloud, as I define it here, is a concept, a state, a set of capabilities such that a targeted business capacity is available on demand. And on-demand denotes a self-service model with anytime readiness and any-network, any-device accessibility. Cloud is certainly not a particular implementation, since the same state can be achieved in various implementations as technologies continue to advance and methodologies evolve.
Comparing apples to apples, there is little reason for a business not to prefer cloud computing over traditional IT. Why would one not want the ability to adjust business capacity based on needs? Therefore, to cloud or not to cloud is not the question. Nor is security the issue. In most cases, cloud is likely to be more secure when managed by a team of cloud security professionals in a service provider’s datacenter, as opposed to being implemented by IT generalists wearing multiple hats while running an IT shop. Cloud is about how critical the on-demand capability is to a business, and for certain verticals the question is more about regulatory compliance. Above all, it is about a business owner’s understanding of, and comfort level with, cloud.
IT nevertheless does not wait, nor can it simply maintain the status quo. Why private cloud? The pressure to produce more with less, and the need to be instantaneously ready to respond to a market opportunity, is not just a pursuit of excellence, but a matter of survival in today's economic climate with ever increasing user expectations. One will find that a private cloud is a vehicle to facilitate and transform IT with increased productivity and reduced TCO over time, as discussed in the Building a Private Cloud blog post series. IT needs a private cloud to shorten go-to-market, to encourage consumption, to accelerate product adoption, and to change the dynamics by offering better, quicker, and more with less. That is the reality of IT. That is why.
The content of this post was based on Windows Server 2008 R2. However, the concepts remain applicable and the implementations are much the same as those in Windows Server 2012.
The ability to deliver a desktop with full fidelity over a network, while deploying applications on demand and with hardware independence, is an IT reality with Windows 7, Windows Server 2008 R2, and Application Virtualization (App-V), which is part of the Microsoft Desktop Optimization Pack (MDOP). This screencast highlights how these three amazing technologies work as a solution platform by demonstrating key user scenarios. Notice that to implement the VDI solution in a Windows Server 2003 functional level domain, one must extend the AD schema to the Windows Server 2008 level.
For more information, I have also published a number of blog posts and screencasts on Microsoft virtualization solutions including: