One key focus of an App-V solution is the ability to run multiple versions of application software within the same OS instance without concern for conflicts among those versions. To quickly prove the concept, I prototyped a solution with two Hyper-V-based virtual machines. Here are the configurations:
Notice the above configurations are simply what I used for rapid prototyping to demonstrate the capabilities; they are neither recommendations nor best practices.
On the DC, I installed App-V 4.5 Management Server and imported the applications that had already been sequenced. (See Figure 1.) Security groups for each sequenced application were also created in Active Directory Users and Computers. (See Figure 2.) When testing, I would add a test account to a target security group, for instance appvOffice97, and then log in to the client machine to verify connectivity and application streaming. The process is not complicated at all; however, it is very easy to make operational mistakes, and practice very much does make perfect here.
Figure 1. App-V Management Server Console with Sequenced Applications Already Imported
Figure 2. Security Groups for Accessing Sequenced Applications
On the domain-joined Vista SP1 desktop, I logged in as a local admin to install the App-V 4.5 client and verify the connectivity. App-V 4.5 by default uses port 322 to stream, and there were times I used telnet to make sure the port was open; be sure to set up Windows Firewall accordingly. (A scripted version of this port check is sketched after Figure 3.) Once connectivity had been verified, I switched user and logged in with a test account. By default, App-V refreshes at user login time; this can be customized on the server under Provider Policies in the App-V Management Server console. Once logged in, all authorized App-V applications are listed in the client console. (See Figure 3.)
Figure 3. Sample List of Applications to Authorized User offered by App-V Client
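For those who prefer scripting the check, below is a minimal sketch in PowerShell; the server name is hypothetical, and the port number follows the configuration described above.

# Hypothetical App-V Management Server name; adjust the port to your configuration.
$server = "appv-mgmt01"
$port   = 322

try {
    # Attempt a TCP connection, the same test telnet performs interactively.
    $client = New-Object System.Net.Sockets.TcpClient($server, $port)
    Write-Host "Port $port on $server is open."
    $client.Close()
}
catch {
    Write-Host "Cannot reach ${server}:${port}. Check Windows Firewall and the server."
}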
How to sequence an application and import it into App-V Management Server is beyond the scope of this posting and will be demonstrated in upcoming screencasts. Here, Figures 4 and 5 show the user experience when multiple versions of the Office suite are deployed to the desktop using App-V. Some may prefer to place the icons on the desktop or in folders with specific headings, etc.; these settings are customizable in the .osd file of a sequenced application.
Figure 4. Multiple Versions of Office Suite Deployed by App-V 4.5 to Vista Desktop
Figure 5. Running Access 97 and Access 2000 Deployed by App-V 4.5
A Self-Service Portal is basically a Web site installed on a web server with the ASP.NET, IIS 6 Metabase Compatibility, and IIS 6 WMI Compatibility role services. Through the Self-Service Portal, authorized users can create and operate their own virtual machines (VMs) as permitted by each user's User Roles, while the created VMs are placed in a Library Server managed by System Center Virtual Machine Manager, or SCVMM. A User Role here is essentially a policy with membership, authorized hardware and software profiles, an allowed scope of operations, and assigned templates applicable to creating and managing VMs through the Self-Service Portal. In a Self-Service Portal session, an authorized user sees only those virtual machines that the user owns or is authorized to operate on. As a VM is created or deleted by a user, the user's quota points are reduced or regained by the number of quota points assigned to the VM in the employed template. Once a user has fewer quota points than a new VM requires, the user has reached the maximum number of VMs that the applicable User Role allows.
The system requirements of components for constructing a Self-Service Portal include
To prototype a Self-Service Portal using a laptop, here are the steps:
The following screencasts present the user experience and walk through the operations carried out from steps 5 to 11:
This series focusing on cloud essentials for IT professionals includes:
Compared with on-premises computing, deployment to the cloud is much easier; a few clicks will do. I believe this is probably why many have jumped to the conclusion that we, IT professionals, are going to lose our jobs in the cloud era. There is some truth to it, though not much in my view. Not so fast, I say. Bringing cloud into the picture does dramatically reduce cost and complexity in some areas, yet at the same time the cloud introduces many new nuances which demand that IT disciplines develop criteria with business insights. So will IT professionals lose their jobs? No, I do not think so. Instead, they will affect the bottom line more directly than ever, since with cloud computing everything is attached to a dollar sign and every IT operation has a direct cost implication. In this article, I discuss some routines and additional considerations for deployment to the cloud. Notice that throughout this article, in the context of cloud computing, I use the terms application and service interchangeably.
For IT professionals, deploying a Windows Azure service to the cloud starts when development hands over the gold bits and configuration file, as depicted in the schematic below.
In Visual Studio with the Windows Azure SDK, a developer can create a cloud project, build a solution, and publish it with an option to generate a package which zips the code, accompanied by a configuration file defining the name and number of instances of each Compute Role. This package and the configuration file are what get uploaded to the Windows Azure platform, either through the UI of the Windows Azure Developer Portal or programmatically with the Windows Azure API; a scripted upload is sketched below. For those not familiar with the steps and operations to develop and upload a cloud application to the Windows Azure platform, I highly recommend investing the time to walk through the labs, which are well documented and readily available.
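As an illustration of the programmatic route, here is a minimal sketch assuming the Windows Azure PowerShell cmdlets and an already-imported subscription; the subscription, service, and file names are hypothetical.

# Sketch: upload the package (.cspkg) and configuration (.cscfg) to the staging slot.
Select-AzureSubscription -SubscriptionName "MySubscription"

New-AzureDeployment -ServiceName "yc" `
    -Slot "Staging" `
    -Package ".\CloudService.cspkg" `
    -Configuration ".\ServiceConfiguration.cscfg" `
    -Label "Build 1.0.0"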
Application Lifecycle Management
In a typical enterprise deployment process, developers code and test applications locally or in an isolated environment and go through the build process to promote the code to an integrated/QA test environment. Upon finishing tests and QA, the code is promoted to production. For cloud deployment, the idea is much the same, other than that at some point the code will be placed in the cloud and tests will be conducted in the cloud before the application goes live on the Internet. Below is a sample application lifecycle where both the production and the main test environments are in the cloud.
Traditionally, when developing and deploying applications on premises, one challenge (and it can be a big one) is to keep the test environment mimicking production as closely as possible to validate the developed code, data, operations, and procedures. Sometimes this can be a major undertaking due to ownership, corporate politics, financial constraints, technical limitations, discrepancies between the test environment and production, etc. And stories of applications that behave as expected in the test environment, yet generate numerous security and storage violations, throw fatal errors, and crash hard once in production are often heard. This is different when testing Windows Azure services. It turns out the user experience of promoting code from staging to production in the cloud can be pleasant and something to look forward to.
When code is first promoted into the Windows Azure platform from a local development/test environment, the application is placed in the so-called staging phase, and when ready, an administrator can then promote the application into production. An interesting fact is that the staging and production environments in the Windows Azure platform are identical; there is no difference, and the Windows Azure platform is the Windows Azure platform. What differentiates a staging environment from a production environment is the URL. The former is assigned a unique alphanumeric string as part of a non-published staging URL like http://c2ek9aa346384629a3401e8119de3500.cloudapp.net/, while the latter is an administrator-specified, user-friendly URL like http://yc.cloudapp.net. So to promote from staging to production in the Windows Azure platform, simply swap the virtual IPs by swapping the staging URL with the production one. This is called a VIP-Swap, and two mouse clicks are all it takes to deploy a Windows Azure service from staging to production. See the screen capture below. Minutes later, once Fabric Controller syncs all the URL references of the application, in production it is.
VIP-Swap is handy for the initial deployment and for subsequently updating a service with a new package. When changes are made to a service, the updated service is placed in the staging environment and a VIP-Swap promotes the code into production, as sketched below. This feature, however, is not applicable to all changes of a service definition; in such scenarios, redeploying the service package becomes necessary.
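For the scripted equivalent, a sketch assuming the Windows Azure PowerShell cmdlets (the service name is hypothetical):

# Verify what is about to go live, then swap the staging and production VIPs.
Get-AzureDeployment -ServiceName "yc" -Slot "Staging"
Move-AzureDeployment -ServiceName "yc"

Move-AzureDeployment performs the same VIP-Swap as the two mouse clicks in the portal, which makes it a natural step in a scripted release pipeline.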
With the Windows Azure platform in the cloud, there are new opportunities to extend as well as integrate on-premises computing with cloud services. An application architecture can accept HTTP/HTTPS requests either with a Web Role or with a front-end that is on premises. And where requests are processed and data is stored can be in the cloud, on premises, or a combination of the two, as shown below.
So whether the service starts in the cloud or on premises, an application architect now has many options to architecturally improve the agility of an application. An HTTP request can start with on-premises computing while integrating with a Worker Role in the cloud for capacity on demand. At the same time, a cloud service can also employ a middle tier and a back-end that are deployed on premises for security and control.
Standardizing Core Components
Whether on premises or in the cloud, core components, including application infrastructure, development tools, a common management platform, an identity mechanism, and virtualization technology, should be standardized sooner rather than later. With a common set of technologies (see below), both IT professionals and developers can apply their skills and experience to either computing platform. This will in the long run produce higher-quality services at lower costs, and it is crucial to making the transformation to the cloud a convergent process.
By default, diagnostic data in Windows Azure is held in a memory buffer and is not persisted. To access log data in Windows Azure, the log data first needs to be moved to persistent storage. One can do this manually using the Windows Azure Developer Portal, or add code in the application to dump the log data to storage at scheduled intervals. Next, one needs a way to view the log data in Windows Azure storage, using tools like Azure Storage Explorer.
Traditionally, when running applications on premises, diagnostic data is relatively easy to get since it is stored in local storage and can be accessed and moved easily. Logs and events generated by applications are subscribed to, acquired, and stored as long as needed. When moving an application to the cloud, access to diagnostic data is no longer the status quo. Because persisting diagnostic data to Windows Azure storage costs money, IT will need to plan how long to keep the diagnostic data in Windows Azure and how to download it for offline analysis; one way to script the download is sketched below.
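As a sketch of the offline-analysis side, the following assumes the Azure PowerShell storage cmdlets; the storage account name is hypothetical, and wad-iis-logfiles is the conventional container for transferred IIS logs.

# Pull persisted IIS logs out of Windows Azure storage for offline analysis.
$ctx = New-AzureStorageContext -StorageAccountName "mydiagstore" `
                               -StorageAccountKey "<storage-account-key>"

Get-AzureStorageBlob -Container "wad-iis-logfiles" -Context $ctx |
    ForEach-Object {
        Get-AzureStorageBlobContent -Container "wad-iis-logfiles" `
            -Blob $_.Name -Destination "C:\DiagLogs" -Context $ctx
    }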
The cost of processing diagnostic data is one item in the overall cost model for moving an application to public cloud. An application cost analysis can start with the big buckets: network bandwidth, storage, transactions, CPU, etc. Eventually, list out all the cost items, each with the organization responsible for that cost. Compare the ROI with that of an on-premises deployment to justify whether moving to the cloud makes sense; a first cut can be as simple as the sketch below. Once agreements are made, each cost item or bucket should have a designated accounting code for tracking the cost. IT professionals will have much to do with the cost analysis, since the operational costs of running applications in the cloud are compared with both the capital expenses and the operational costs of an on-premises deployment. For example, routine on-premises operations like backup and restore, monitoring, reporting, and troubleshooting must be revised and integrated with cloud storage management, which has both operational and cost implications.
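For illustration only, a first-cut model might look like the following; every rate in it is assumed for the sake of the example, not actual Windows Azure pricing.

# Illustrative cost model with made-up rates; substitute your provider's rate card.
$computeHours     = 2 * 24 * 31               # two instances, full month
$costCompute      = $computeHours * 0.12      # $/hour (assumed)
$costStorage      = 50 * 0.15                 # 50 GB at $/GB-month (assumed)
$costBandwidth    = 100 * 0.10                # 100 GB egress at $/GB (assumed)
$costTransactions = (5e6 / 10000) * 0.01      # 5M transactions (assumed rate)

$total = $costCompute + $costStorage + $costBandwidth + $costTransactions
"Estimated monthly cost: {0:C2}" -f $total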
The takeaway is “Don’t go to bed without developing a cost model.”
Some Thoughts on Security
Cloud is not secure? I happen to believe that in most scenarios the cloud is actually very secure, and more secure than running a datacenter on premises. Cloud security is a topic in itself and certainly far beyond the scope of the discussion here. Nevertheless, you will be surprised how fast a cloud security question can be raised and answered, many times without actually answering it. In my experience, many answers to cloud security questions surface on their own once the context of the question is correctly set. Keep the concept of separation of responsibilities vivid whenever you contemplate cloud security; it will bring much clarity. For SaaS, there is very little a subscriber can do, and security at the network, system, and application layers is largely predetermined. PaaS like the Windows Azure platform, on the other hand, presents many opportunities to place security measures in multiple layers with a defense-in-depth strategy, as shown below.
Recently, some Microsoft datacenters hosting services in the cloud for federal agencies achieved FISMA certification and received an ATO (Authorization to Operate). This is very much proof that cloud operations can meet and exceed rigorous standards. The story of cloud computing will only get better from here.
Cloud is not secure? Think again.
[To Part 1, 2, 3, 4, 5, 6]
To accelerate learning about private cloud, a direct and effective way is to walk through the process of deploying one. And that is what this blog post and screencast series delivers, detailing the essential operations and steps to deploy and manage a service in a private cloud with SCVMM 2012, including:
The process I am focusing on in this series starts from the signoff of a to-be-deployed application, here StockTrader. In this series, I, as a Private Cloud Administrator, will walk through the process to “deploy” StockTrader as a service to a target private cloud. How the application was developed, configured, and packaged is not the subject here; how it is to be deployed as a service to a target private cloud is. Deploying and managing an application as a service is an important concept and a key delivery of VMM 2012, as the following further explains.
Notice that in VMM 2012, a service specifically means a set of VMs which collectively deliver a business function. At the operational level, this set of VMs can be configured, deployed, and managed as a whole, i.e. as one entity. This is achieved in VMM 2012 by employing a service template. By predefining the application architecture with the content, configurations, deployment operations, and procedures of an intended application in a VMM 2012 service template, we can now essentially deploy an application architecture with a running instance of the intended application, i.e. deploy an application as a service. And by managing the instance of a service template, we are managing all associated resources of a running instance of the intended application, which may encompass multiple VM instances in multiple tiers.
This is an end-to-end sample application based on Windows Communication Foundation and ASP.NET. StockTrader is designed as a high-performance application that can seamlessly scale out across multiple servers with load balancing and failover at the service-request level. In addition, the application can be deployed to the Windows Azure platform, a private cloud, or a hybrid environment with secure communication between Windows Azure instances and on-premises services. It illustrates many of the .NET enterprise development technologies for building highly scalable, rich "cloud-connected" applications.
The StockTrader application package I downloaded from http://connect.microsoft.com (find more details at the end of this blog post) includes pre-baked sysprepped VHD images, application code, scripts, App-V packages, and a service template which defines the multi-tier application architecture, the operations and procedures, the dependencies and intelligence, etc. with VM templates. We will use the provided service template to deploy StockTrader as a service to a target private cloud.
From a consumer’s point of view, regardless of where and how StockTrader is deployed, it is a web application. The cloud connotation is relevant mainly to a service provider, to signify the ability to deploy, exhibit, and manage an application per the 5-3-2 principle of cloud computing, or NIST SP 800-145.
The test lab is a simple environment: a Windows domain with a VMM 2012 server and a Hyper-V host as members. This lab is the starting point of a private cloud environment. It is a test lab, not an ideal nor a realistic representation of all the components and functions needed to deliver a comprehensive private cloud solution. A comprehensive private cloud solution, including configuration management, a deployment vehicle, process automation, service/help desk, virtual machine manager, self-service portal, etc., is what System Center 2012 delivers. For those who would like to build a test lab similar to mine, here is the hardware and software information:
You will need 64-bit hardware to build a Windows domain with a domain controller, a SCVMM 2012 server, and a Hyper-V host to get started with a simple yet realistic enough test lab. The Hyper-V host needs direct access to the hardware, since it must be a root/parent partition to run virtual machines. A great poster to help you better understand Hyper-V is available at http://aka.ms/free. The other two, the domain controller and the SCVMM 2012 box, can be physical or virtual machines. As needed, other System Center 2012 family members can later be added into the environment to form a comprehensive private cloud solution. Having a SCVMM 2012 server and a Hyper-V host in a Windows domain is the beginning of, and the essentials for, building a private cloud solution.
I set up the environment on my laptop, where the booted Windows Server 2008 R2 SP1 instance, i.e. the root partition, is a Hyper-V host and a member of the contoso.corp domain, while the domain controller and the SCVMM 2012 server are both virtual machines, each running as a guest OS. The following is the hardware information.
As far as the hardware is concerned, RAM is a significant resource in virtualization, and it is where I will spend my money.
[To Part 2, 3, 4, 5]
Continuing our Windows Azure how-to series, Yung Chou shows us how easy it is to capture a virtual machine as an image in Windows Azure and then use it as a template to deploy additional VMs. Yung also walks through the process of attaching a data disk as local storage for keeping user and application data. Sign up for a Windows Azure 90-day trial, tune in, and follow the process to realize the power of Windows Azure and cloud computing.
Websites & Blogs:
Follow @technetradio Become a Fan @ facebook.com/MicrosoftTechNetRadio Subscribe to our podcast via iTunes, Zune, Stitcher, or RSS
The Windows® 7 and Windows Server® 2008 R2 operating systems introduce DirectAccess, a new solution that provides remote users with the same experience they would have when working in the office. With DirectAccess, remote users can access corporate file shares, Web sites, and applications without connecting to a virtual private network (VPN). Further, DirectAccess separates intranet traffic from Internet traffic, as shown on the right, and reduces unnecessary traffic on the corporate network.
DirectAccess requirements include:
Here’s how DirectAccess works:
Notice that the DirectAccess connection process happens automatically once a DirectAccess client boots up, without requiring a user to log on.
The first order of business in understanding cloud computing is to know what the term service means, since it has been used extensively to explain cloud technologies. Service in the context of cloud computing means “capacity on demand” or simply “on demand.” Notice that on-demand here also implies real-time response and, ultimately, anytime, anywhere, any-device accessibility. The idea is straightforward: as a service bell is rung, the requested resources are magically made available. So IT as a Service means IT on demand. And now it should be apparent what news as a service, catering as a service, or simply my business as a service means. We can then clearly explain the three cloud computing delivery methods. SaaS means software on demand: an application is readily available to an (authorized) user. PaaS offers a programming environment (or platform) enabling the development and delivery of SaaS. And IaaS empowers a user with the ability to provision infrastructure, i.e. deploy servers with virtual machines, on demand. Further, at an implementation level with current technologies, cloud computing also dictates that virtualization (namely, an abstraction of the underlying complexities of topology, networking, monitoring, management, etc. away from the provided services) be put in place, such that a user can consume or acquire SaaS, PaaS, and IaaS without the need to own and deploy the required hardware, reconfigure the cabling, and so on.
The term cloud, regardless of public, private, or anything in between, means the 5-3-2 principle of cloud computing (see above) is applicable. The 5 characteristics listed in the 5-3-2 principle are the criteria for differentiating a cloud application from a non-cloud one, and they also concisely outline the benefits of cloud computing. This recognition is the essence of cloud computing. And in my view, much of the confusion in cloud computing discussions has been due to a lack of understanding of the 5-3-2 principle. For instance, many are confused and mistakenly consider remote access, or anything via the Internet, to be cloud computing. This assumption is incorrect and inconclusive. My rule of thumb is that applications exhibiting the 5 characteristics are cloud applications and those that don’t are not. Above all, the 5-3-2 principle, or more specifically the NIST Definition of Cloud Computing, scopes the subject domain of cloud computing with current technologies and presents a definition that is structured, disciplined, and clear.
So a public cloud is a cloud, and the 5-3-2 principle applies. The term public, in the context of cloud computing, refers to the Internet, general availability, and subscription where applicable. Windows Live and Hotmail, for example, are Microsoft SaaS offerings in the public cloud for consumers, while Office 365, Microsoft Online Services, and Microsoft Dynamics CRM Online are for businesses. They are all cloud applications because:
The 5 characteristics exhibited in the above-mentioned cloud applications are vivid and unambiguous.
At the same time, a private cloud is also a cloud, dedicated, hence private, to an organization. As explained in Highly Virtualized Computing vs. Private Cloud, ubiquitous access and a pay-as-you-go model may not be essential in a private cloud. Still, the applicability of the 5-3-2 principle to all cloud applications, including private cloud, should be very clear here. So, for example, the reasoning to answer the following question is actually straightforward.
First, with the 5-3-2 principle, we can easily determine whether an application is a cloud application. Then the strategy is to discover which of the 5 characteristics are missing and how relevant they are to the business requirements. For instance:
And it is certainly up to an organization to decide how critical the 5 characteristics are and whether all or only selected ones apply to a targeted delivery. The lesson here is not necessarily an academic debate about whether a particular feature like self-service should be a requirement of private cloud. The crucial element is to have a predictable way (namely the 5 characteristics) to identify what is relevant to business requirements.
One interesting observation about cloud computing is that many seem to have some understanding, yet few have a complete picture, since this subject touches just about every aspect of IT. Many can highlight some points of cloud computing, yet few have a structured and disciplined approach to explaining it, since cloud computing is a very complex proposition on both the business and technical sides. I believe a productive way to discuss cloud computing is to focus on the fundamentals, and have a clear understanding of what cloud is about and why, before framing it with a particular business or implementation. Employ the 5-3-2 principle to organize the message and describe cloud computing in your own words. You will find that once you grasp the concept, you can navigate a cloud computing conversation with clarity, substance, and productivity.
This is it! We had waited and waited, and it's finally here. Windows 7 is now generally available. With Windows 7, there's never been a better time to be a PC. For all you IT Professionals out there, let me highlight the 3 key deliveries:
and innovations introduced in Windows 7 and make pertinent information readily available for you here.
Making people productive anywhere
Making people productive is not that hard: in your office, plugged into the company’s network with a laptop loaded with apps, you can be productive. Making people productive “anywhere,” on the other hand, is a very challenging effort for IT, given the massive number of mobile devices and the increasingly complex network computing environment today. The growing mobile workforce and branch offices are at the same time demanding that corporate resources be seamlessly available regardless of the required infrastructure and organizational boundaries. Two Windows 7 solutions that facilitate remote access are BranchCache and DirectAccess.
Managing risks through enhanced security and control
Security needs little justification in today’s network computing environment; it is critical, imperative, and all too often costly. From Windows Vista and Windows Vista SP1 to Windows 7, BitLocker has expanded from a single drive to multiple drives and now to portable media. Windows 7 offers security enhancements enabling a user to secure data from unauthorized access very easily, with BitLocker To Go, for example. In Windows 7 Explorer, highlight a portable drive and right-click to turn on BitLocker To Go. It is that readily available, that easy to do, and the protected drive remains readable on Windows XP. There is really no reason not to do it: so little effort, yet so much control and such strong protection for the data; a scripted equivalent is sketched below. As memory sticks now reach 32 GB and beyond, BitLocker To Go is one very cost-effective way to protect data from unauthorized access. For a large company, BitLocker technology with group policies offers a software-based enterprise solution for hard disk encryption. You don’t need to shop around for a solution and end up with a second-best one; it is in Windows Vista and much enhanced in Windows 7.
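For the scripted equivalent mentioned above, a minimal sketch using manage-bde from an elevated PowerShell prompt (the drive letter is hypothetical):

manage-bde -on E: -password                       # prompts for an unlock password
manage-bde -protectors -add E: -recoverypassword  # optionally add a numeric recovery key
manage-bde -status E:                             # confirm encryption progress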
In an enterprise environment, software restriction is one of the most difficult things to enforce. Not only does it need a mature infrastructure to provide software inventories, metering, and ongoing monitoring, but the skill sets required to develop, test, and manage software restriction policies are hard to find, take years to develop, and come at a very high cost. Windows 7 and Windows Server 2008 R2 together present AppLocker, a vehicle with which a system administrator can provision a policy to deny or allow execution, installation, or usage of a target application based on the application's digital signature, by deriving a publisher rule defined and enforced with a Group Policy Object, without programming. A complex requirement, for instance allowing task workers to access Office 2007 and later but denying contractors access to PowerPoint, can be met with AppLocker in a few mouse clicks without any scripting; for those who do want to script it, a sketch follows.
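Below is a sketch using the AppLocker cmdlets shipped with Windows 7 and Windows Server 2008 R2. The path, group, and LDAP path are hypothetical; the generated rule is an allow rule by default, and scoping it as a deny rule for the contractors group is then a small edit in the GPO editor.

Import-Module AppLocker

# Derive a publisher rule from PowerPoint's digital signature and write it to a GPO.
Get-AppLockerFileInformation -Path "C:\Program Files\Microsoft Office\Office12\POWERPNT.EXE" |
    New-AppLockerPolicy -RuleType Publisher -User "CONTOSO\Contractors" -Optimize |
    Set-AppLockerPolicy -Ldap "LDAP://DC01.contoso.com/CN={GPO-GUID},CN=Policies,CN=System,DC=contoso,DC=com"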
Reducing cost by streamlining PC management
Many thought that without a direct migration path, i.e. an in-place upgrade, from Windows XP to Windows 7, deploying Windows 7 must be a tedious and tricky process. In fact, Windows 7 offers a number of vehicles that make the migration intuitive and straightforward. For consumers and small businesses, Easy Transfer makes migrating from Windows XP to Windows 7 absolutely “easy” and, in my view, actually fun. Scanstate and Loadstate, the two key utilities in USMT (User State Migration Tool), make a migration process very logical and easy to understand. Hard-link migration leaves data in place and remaps it, significantly reducing the time needed to move the large amount of user data in a typical PC refresh scenario; see the sketch below.
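A sketch of the hard-link flow with USMT 4.0; the store path is hypothetical, and the XML files are the standard USMT samples.

# On the old OS: gather user state into a hard-link store; no data is copied.
scanstate C:\MigStore /o /c /hardlink /nocompress /i:MigApp.xml /i:MigDocs.xml

# ...install Windows 7 on the same volume, then restore from the same store.
loadstate C:\MigStore /lac /hardlink /nocompress /i:MigApp.xml /i:MigDocs.xml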
In the past two years, with Microsoft’s introduction of virtualization strategies and solutions, there are many options for resolving compatibility issues at the application or OS level while reducing TCO and increasing flexibility in deploying and managing IT resources in the long run. Specific to Windows XP compatibility issues, Windows 7 Professional and above offer Windows XP Mode (via a free download), a local virtualized Windows XP SP3 machine. Applications developed specifically for Windows XP can now essentially run in a Windows 7 environment, with a few steps to set up a virtualized Windows XP SP3 run-time environment to host them. Further, an application running in Windows XP Mode can be seamlessly integrated into the Start/All Programs menu of the host Windows 7 machine. Notice that Windows XP Mode alone is designed for relatively small deployments, since it has essentially no built-in system management function. For a large-scale deployment, MED-V, or Microsoft Enterprise Desktop Virtualization, one of the six offerings that come with MDOP (the Microsoft Desktop Optimization Pack, available through the Software Assurance program), is the solution for managing local desktop virtualization, with the ability to provision a MED-V workspace policy to deploy XP Mode with standardized settings, a consistent user experience, etc. While MED-V 1.0 SP1, to be available in the first quarter of 2010, adds host support for Windows 7, notice that both MED-V 1.0 and MED-V 1.0 SP1 leverage Microsoft Virtual PC 2007, which does not require hardware-assisted virtualization.
With the introduction of Windows Azure Connect, many options are available for an on-premises application to integrate with or migrate to the cloud at an infrastructure level. The integration and migration opportunities become apparent by examining how applications are architected for on-premises and cloud deployments. These concepts are profoundly important for IT pros to clearly identify, define, and apply while expanding their role and responsibilities into those of a cloud or service architect. In Part 2, let’s first review computing models before making cloud computing a much more exciting technical expedition with Windows Azure Connect.
Then Traditional 3-Tier Application Architecture
Based on a client-server model, the traditional n-tier application architecture carries out a business process in a distributed fashion. For instance, a typical 3-tier web application as shown below includes:
When deployed on premises, IT has physical access to the entire infrastructure and is responsible for all aspects of the lifecycle, including configuration, deployment, security, management, and disposition of resources. This has been the deployment model upon which theories, methodologies, and practices were developed and many IT shops operated. IT controls all resources and at the same time is responsible for the end-to-end, distributed runtime environment of an application. Frequently, to manage an expected high volume of incoming requests, load balancers, which are expensive to acquire and expensive to maintain, are placed in front of an application. To improve data integrity, clusters, which are expensive to acquire and, yes, expensive to maintain, are configured at the back-end. Not only do load balancers and clusters increase complexity and pose technical challenges with skill sets that are hard to acquire, but both fundamentally increase the capital expenses and operational costs throughout the lifecycle of a solution, and ultimately the TCO.
Now State-of-the-Art Windows Azure Computing Model
The Windows Azure platform is Microsoft’s Platform as a Service, i.e. PaaS, solution. And PaaS here means that an application developed with the Windows Azure platform (which is hosted in Microsoft datacenters around the world) is by default delivered as Software as a Service, or SaaS. From a quick review of the 6-part Cloud Computing for IT Pros series, one will notice that I have already explained the computing concept of Windows Azure (essentially Microsoft's cloud OS) in Computing Model and Fabric Controller. In the Windows Azure computing model, a Web Role receives and processes incoming HTTP/HTTPS requests from a configured public endpoint, i.e. a web front-end with an Internet-facing URL specified while publishing an application to Windows Azure. A Web Role instance is deployed to a (Windows Server 2008 R2) virtual machine with IIS, and the Web Role instances of an application are automatically load-balanced by Windows Azure. A Worker Role, on the other hand, is like a Windows service or batch job which starts by itself; it is the equivalent of the middle tier where business logic and back-end connectivity sit in a traditional 3-tier design. A Worker Role instance is deployed to a virtual machine without IIS in place. The following schematic illustrates the conceptual model.
VM Role is a definition allowing a virtual machine (i.e. a VHD file) to be uploaded and run with the Windows Azure Compute service. There are some interesting points about VM Role. Based on the separation of responsibilities, in PaaS only the Data and Application layers are supposed to be managed by consumers/subscribers, while the Runtime layer and below are controlled by the service provider, which in the case of the Windows Azure platform is Microsoft. Nevertheless, VM Role in fact makes not only the Data and Application layers, but also the Runtime, Middleware, and OS layers, all accessible in a virtual machine controlled by a subscriber of the Windows Azure platform, which is, by the way, a PaaS and not an IaaS offering. This is because VM Role is designed to address specific issues, and above all IT pros need to recognize that it is intended as a last resort. Information on why and how to employ VM Role is readily available elsewhere and not repeated here.
So, with the Windows Azure platform, the 3-tier design is in fact very much applicable. The Windows Azure design pattern employs a Web Role as a front-end to process incoming requests as quickly as possible, and a Worker Role as a middle tier to do most of the heavy lifting, namely executing business logic against application data. The communication between Web Role and Worker Role is via Windows Azure Queue, and is detailed elsewhere.
With Visual Studio and the Windows Azure SDK, the process of developing a Windows Azure application is highly similar to that of an on-premises application. And the steps to publish a Visual Studio cloud project come down to simply uploading two files to the Windows Azure Platform Management Portal. The two files are generated when publishing an intended cloud project in Visual Studio: a zipped package of application code and a configuration file, with .cspkg and .cscfg file extensions, respectively. The publishing process can be further hardened with certificates for higher security.
Compared with on-premises computing, there are noticeable constraints when deploying an application to the cloud, including:
These constraints are related to enabling system management of resource pooling and elasticity, which are among the essential characteristics of cloud computing.
Two important features, high availability and fault tolerance, are automatically provided by Windows Azure, which can significantly reduce the TCO of an application deployed to the cloud compared with that of an on-premises deployment. Details of how Windows Azure achieves automatic high availability and fault tolerance are not included here; a discussion of this topic is already scheduled for an upcoming blog post. Stay tuned.
An Emerging Application Architecture
With Windows Azure Connect, integrating and extending a 3-tier on-premises deployment to the cloud is now relatively easy. As part of Microsoft's PaaS offering, Windows Azure Connect automatically configures IPsec connectivity to securely connect Windows Azure role instances with on-premises resources, as indicated by the dotted lines in the following schematic. Notice that the role instances and on-premises computers to be connected are first grouped; all members of a group are exposed as a whole, and the connectivity is established at the group level. With IPsec in place, a Windows Azure role instance can join and be part of an Active Directory domain in a private network. Namely, server and domain isolation with Windows Authentication and group policies can now be applied to cloud computing resources without significant changes to the underlying application architecture. In other words, the Windows security model and system management in a managed environment can now seamlessly include cloud resources, which essentially makes many IT practices and solutions directly applicable to the cloud with minimal changes.
With the introduction of cloud computing, an emerging application architecture is a hybrid model with a combination of components deployed to the cloud and on premises. With Windows Azure Connect, cloud computing can simply be part of, and does not necessarily encompass, an entire application architecture. This allows IT to take advantage of what the Windows Azure platform offers, like automatic load balancing and high availability, by migrating selected resources to the cloud, as indicated by the dotted lines in the above schematic, while managing all resources of an application with a consistent security model and domain policies. Whether the front-end of an application is in the cloud or on premises, the middle tier and the back-end can be a combination of cloud and on-premises resources.
Start Now and Be What’s Next
With Windows Azure Connect, cloud and on-premises resources are both within reach of each other. For IT pros, this reveals a strategic and urgent need to convert existing on-premises computing into a cloud-ready and cloud-friendly environment. This means, if not already done, starting to build hardware and software inventories, automating and optimizing existing procedures and operations, standardizing the authentication provider, implementing PKI, providing federated identity, etc. The technologies are all here already, and solutions are readily available. For those who feel the Windows Azure platform is foreign and remote, I highly recommend familiarizing yourselves with Windows Azure before everybody else does. Use the promotion code DPEA01 to get a free Azure Pass without credit card information. Take the first step in upgrading your skills with cloud computing and welcome the exciting opportunities presented to you.
Having an option to get the best of both cloud computing and on-premises deployment and not forced to choose one or the other is a great feeling. It’s like… dancing down the street with a cloud at your feet. And I say that’s amore.
<Back to Part 1: Concept>
To deploy an application as a service to a private cloud in VMM 2012, a service template is the key. In this second article of the 5-part blog post series shown below, let’s walk through the process of making a service template ready for use.
For those who would like to build a test lab, download Windows Server 2008 R2 SP1 and System Center products including VMM 2012. There are also free eBooks and posters illustrating many important concepts of virtualization.
This is a main delivery of VMM 2012, and a noticeable differentiator from VMM 2008 R2 is that VMM 2012 is designed with the service concept and a private cloud in mind. A service in VMM 2012 is a set of VMs collectively delivering a business function; they are configured, deployed, operated, and managed as a whole. And a service template is the vehicle that realizes the service concept.
Physically an XML file, a service template encapsulates “everything” needed for a push-button deployment of an application architecture with a running instance of a target application. Just imagine: all the knowledge and tasks involved in an application deployment, other than hardware allocation, from application architecture to configurations, operations, and procedures, are orchestrated and encapsulated in this XML file. Here, hardware allocation is managed by VMM 2012 with the private cloud fabric and is transparent to an application. And by “everything” of an application deployment, I mean specifically:
Importing into Private Cloud Fabric
To deploy an application as a service into a target private cloud, first make all resources relevant to the deployment visible in the private cloud fabric. This can be done easily by first simply xcopying the StockTrader package, as shown on the right, into a library share of a VMM 2012 server already configured as part of the private cloud fabric. (The information for downloading StockTrader is detailed at the end of Part 1.) Then, in the VMM 2012 admin console, import the service template as shown above.
By default, VMM 2012 refreshes a library share every 60 minutes, as shown below. This refresh interval should be set depending on how often changes are introduced, as well as the network topology and bandwidth.
As needed, an administrator can simply right-click and manually refresh a library share in the VMM 2012 admin console, as shown on the left, to index a newly added resource and make it available upon refresh; the cmdlet equivalents are sketched below. Once the application package appears in the library share, we can import the StockTrader service template. As VMM 2012 reads in the content of a service template for the first time, all resources referenced by the service template are validated against the private cloud fabric settings. For instance, when developing and testing an application in a development environment, the employed credentials and network naming are often different from those in production; individual settings must be validated against the corresponding ones in a target environment. Once validated, the application and associated resources become ready for use in the private cloud fabric.
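A short sketch of those cmdlet equivalents; the share filter and server name are hypothetical, and the refresher parameter is an assumption based on the VMM cmdlet help.

# Re-index the library share on demand instead of waiting for the refresher.
Get-SCLibraryShare | Where-Object { $_.Path -like "*StockTrader*" } | Read-SCLibraryShare

# Adjust the periodic library refresh (in hours) to match how often content changes.
Set-SCVMMServer -VMMServer "vmm01" -LibraryRefresherFrequency 1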
Recall that fabric is an important abstraction in cloud computing and signifies the ability to discover, identify, and manage computing resources. The presumption is that if a resource is added into one of the three resource pools in the private cloud fabric, it can be discovered, identified, and managed by VMM 2012. The importing process, in essence, examines a service template and flags settings for corrective actions, as applicable, such that all resources referenced by the service template are validated via the associated library servers where the resources reside.
The following illustrates the process of importing the StockTrader service template. If you want to import sensitive settings such as passwords, product keys, and application and global settings that are marked as secure, select the Import sensitive template settings check box. If you do not want to import sensitive data, you can update the references later in the import process.
When VMM 2012 examines a service template, references that are not properly resolved are listed with yellow warning triangles. In such a case, edit and validate an entry by clicking the pencil icon. Each entry with a red cross is actually an indicator that the referenced resource is validated, as shown below.
Like many Microsoft products, behind the scenes VMM is implemented with PowerShell, and a set of scripts associated with a series of operations and specified settings can easily be generated for later batch processing and automation. The following shows the View Scripts button, available for generating a PowerShell script during a service template import process; a sketch of what such scripting looks like follows.
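To give a feel for the generated-script style, here is a minimal sketch with the VMM 2012 cmdlets; the template name comes from this lab, while the cloud and service names are hypothetical.

# Deploy a service from an imported service template to a target private cloud.
$template = Get-SCServiceTemplate -Name "StockTrader"
$config   = New-SCServiceConfiguration -ServiceTemplate $template `
                -Name "StockTrader" -Cloud (Get-SCCloud -Name "ContosoCloud")
New-SCService -ServiceConfiguration $config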
Upon a successful import, the service template is listed as a resource available for deployment. Check the properties, as shown below, to reveal important information including service settings and dependencies defined in the service template.
StockTrader is a 4-tier application, as illustrated in the service template properties below. The VHDs, Server App-V packages, customization scripts, etc. to be installed are all listed under an associated VM template. When a VM instance is instantiated, these dependencies take effect and ensure all requirements are orchestrated and met along the deployment process.
At this point, the StockTrader service template is successfully imported and ready for use. Next is to examine the application architecture defined and configured in the service template. Life is good so far.
[To Part 1, 3, 4, 5]
This is the fifth article of a series to review the following five BI vehicles in SharePoint 2010
was a separate product. Now included in SharePoint 2010, PerformancePoint becomes a set of services configured as a service application, and surfaces itself in a web part page with Key Performance Indicators (KPIs), Scorecards, Analytic Charts and Grids, Reports, Filters, Dashboards, etc. Each of these components interacts with a server component handling data connectivity and security. This integration with SharePoint 2010 brings opportunities to better analyze data at various levels, while the SharePoint security and repository framework provides consistency, scalability, collaboration, backup and recovery, and disaster recovery capabilities. One very interesting analytics tool in PerformancePoint is the Decomposition Tree, which enables a user to navigate through a mass amount of data in a visual and intuitive way to decompose, surface, and rank data based on selected criteria. The user experience is shown below.
PerformancePoint is installed by default in SharePoint 2010. It can be easily configured as a service application in Central Administration and deployed in a SharePoint farm as shown below. Overall, this integration makes Business Intelligence much more approachable in terms of system integration and administration. The PerformancePoint planning and administration resources, the developer and IT pro centers, and the MSDN blog are good places to find out more.
(A cross-posting from Microsoft SharePoint Experts Blog)
I want to call out and invite IT professionals interested in achieving Microsoft certifications to join, participate, and contribute to the Windows Server Early Experts Challenge. This program is for learning about the latest version of Windows Server with excelling in the related Microsoft certification exams in mind.
The Challenge involves a series of Knowledge Quests - starting with the Apprentice Quest below - and each Quest ends with a special completion certificate for you to promote your new knowledge! To make it easy to participate, each Quest is developed in a modular format that you can complete based on your own schedule and availability.
The first five Knowledge Quests are Apprentice, Installer, Explorer, Networker and Virtualizer. These Knowledge Quests target the objectives in Exam 70-410: Installing and Configuring Windows Server 2012.
Let me acknowledge that the content presented in the Early Experts Challenge series is based on Keith Mayer’s work. His enthusiasm, efforts, and impact in helping IT pro communities adopt Windows Server 2012 have been inspirational, effective, and significant.
This program leverages the Microsoft Virtual Academy (MVA) for some of our free online study resources. You will need to first register for an MVA account using your Microsoft Account (a.k.a. Windows Live ID) via the link below …
In this first knowledge quest, you will learn and explore the key new technical capabilities of Windows Server 2012 across the product pillars of virtualization, management, networking and storage, etc. to properly position them for relevant usage scenarios.
The seven modules in this course, through video and whitepaper, provide details of the new capabilities, features, and solutions built into the product. With so many new features to cover, this course is designed to be the introduction to Windows Server 2012. After completing this course, you will be ready to dive deeper into Windows Server 2012 through additional Microsoft Virtual Academy (MVA) courses dedicated to each topic introduced in this “Technical Overview.”
Alternate option: You can also attend a free Windows Server 2012 First Look Clinic at a Microsoft Learning partner near you if you'd prefer an in-person training experience.
With so much to learn in Windows Server 2012, building your own lab environment is the best way to REALLY learn new technology! You can download the Windows Server 2012 installation bits and start the process! We'll be using these installation bits in the coming weeks in the additional Knowledge Quests of the "Early Experts" Challenge. Be sure to download the bits in "VHD" format (not "ISO" format) as we'll be using the VHD bits to build your study lab and in future Knowledge Quests for hands-on activities.
Follow this step-by-step guide to build your own study lab as a dual-boot environment on your existing desktop or laptop PC. We'll leverage this study lab environment in future Knowledge Quests for hands-on activities. Hands-on experience with Windows Server 2012 will help you greatly in mastering the knowledge and skills needed to successfully pass the certification exams.
Participate in our Online Study Group Community on LinkedIn to post questions you may have, share your insights and collaborate with other members as we all prepare for certification! Each of us has unique insight and by participating in this community, we'll be able to expand our technical knowledge beyond our own experiences.
Now that you've completed this Knowledge Quest, be sure to share your success with your social network using one of the buttons below for Twitter, LinkedIn or Facebook. By sharing your success, you'll also help to encourage others to join our study group and increase the number of IT Pros working together to help grow our collective technical knowledge and share even more community insight that benefits us all!
Have you completed Steps 1 through 5? If so, follow these steps to validate your lab completion and claim your "Early Experts - Apprentice" certificate:
Once you've submitted your certificate request, feel free to keep going with the next Knowledge Quest below!
After you've completed the "Early Experts" Apprentice Quest, keep going with the next Knowledge Quest to continue your preparation for the MCSA on Windows Server 2012 Exams:
To setup your Windows Server 2012 lab for the "Early Experts" Challenge and/or IT Camp, you'll need a PC that meets the following requirements:
NOTE: If your PC does not meet the above requirements, do not continue with this process. Instead, you may prefer to build a Windows Server 2012 box in the cloud by leveraging Windows Azure Virtual Machines. Building your lab in the cloud will allow you to complete most hands-on activities in this study group, but will not permit you to perform hands-on activities related to Windows Server 2012 Hyper-V.
DISCLAIMER: This process installs Windows Server 2012 in a dual-boot scenario using the Boot-to-VHD feature in Windows Vista, Windows 7, and Windows 8; the key steps are sketched below. While this process is not intended to disrupt your existing OS installation, these steps are for use at your own risk. No support or warranties are implied or provided.
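The Boot-to-VHD steps look roughly like the sketch below, run from an elevated prompt; the VHD path is hypothetical, and the GUID placeholder is whatever the bcdedit /copy call returns.

# Create a second boot entry, then point it at the downloaded VHD.
bcdedit /copy '{current}' /d "Windows Server 2012"
bcdedit /set '{returned-guid}' device vhd=[C:]\VHDs\WS2012.vhd
bcdedit /set '{returned-guid}' osdevice vhd=[C:]\VHDs\WS2012.vhd
bcdedit /set '{returned-guid}' detecthal on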
One way to describe cloud computing is based on the service delivery models. There are three, namely Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS), and Infrastructure-as-a-Service (IaaS), and depending on the model, a subscriber and a service provider hold various roles and responsibilities in completing a service delivery. Details of SaaS, PaaS, and IaaS are readily available and not repeated here. Instead, the schematic below highlights the functional components exposed in the three service delivery models of cloud computing, compared with those managed in an on-premises deployment.
Essentially, cloud computing presents a separation of a subscriber’s roles and responsibilities from those of a service provider. And by subscribing to a particular service delivery model, a subscriber implicitly agrees to relinquish a certain level of access to and control over resources. In SaaS, the entire delivery is provided by a service provider through the cloud. The benefit to a subscriber is that there is ultimately no maintenance needed, other than the credentials to access the application, i.e. the software. At the same time, SaaS also means there is little control a subscriber has over how the computing environment is configured and administered outside of the subscribed application. This is the user experience of, for example, some email offerings or weather reports on the Internet.
In PaaS, the offering is basically the middleware where the APIs are exposed, the service logic is derived, the data is manipulated, and the transactions are formed. It is where most of the magic happens. A subscriber in this model can develop and deploy applications with much control over the applied intellectual property.
Of the three models, IaaS provides the most manageability to a subscriber. From the OS and runtime environment to data and applications, all are managed and configurable. This model presents opportunities for customizing operating procedures, with the ability to provision IT infrastructure on demand, delivered by virtual machines in the cloud.
An important take-away is that we must recognize and keep in mind the limitations of each service delivery model when assessing cloud computing. When a particular function or capability like security, traceability, or accountability is needed yet not provided with a subscribed service, the subscriber needs to negotiate with the service provider and put specifics in a service level agreement. A lack of understanding of the separation of responsibilities, in my view, frequently results in false expectations of what cloud computing can or cannot deliver.
As IT architectures, methodologies, solutions, and cloud computing are rapidly converging, system management plays an increasingly critical role and has become a focal point of any cloud initiative. A system management solution now must identify and manage not only physical and virtualized resources, but also those deployed as services to private cloud, public cloud, and hybrid deployment scenarios. An integrated operating environment with secure access, a self-servicing mechanism, and a consistent user experience is essential for efficiency in daily IT routines.
App Controller is a component and part of the self-service portal solution in System Center 2012 SP1. By connecting to System Center Virtual Machine Manager (SCVMM) servers, Windows Azure subscriptions, and 3rd-party host services, App Controller offers a vehicle that enables an authorized user to administer resources deployed to private cloud, public cloud, and those in between without the need to understand the underlined fabric and physical complexities. It is a single pane of glass to manage multiple clouds and deployments in a modern datacenter where a private cloud may securely extend it boundary into Windows Azure, or a trusted hosting environment. The user experience and operations are consistent with those in Windows desktop and Internet Explorer. The following is a snapshot showing App Controller securely connected to both on-premise SCVMM-based private cloud and cloud services deployed to Windows Azure.
A key delivery of App Controller is the ability to delegate authority by allowing a user to connect to multiple resources based on user’s authorities, while hiding the underlying technical complexities.
An user can then manage those authorized resources by logging in App Controller and authorized by an associated user role, i.e. profile. In App Controller, a user neither sees, nor needs to know the existence of cloud fabric, i.e. under the hood how infrastructure, storage virtualization, network virtualization, and various servers and server virtualization hosts are placed, configured, and glued together.
When first logging into App Controller, a user needs to connect with authorized datacenter resources including SCVMM servers, Windows Azure Subscriptions, and 3rd party host services.
The user experience of App Controller is much the same as that of operating a Windows desktop. Connecting App Controller with a service provider follows the provider's instructions; the process, however, is very similar to that of connecting with a Windows Azure subscription.
Connecting App Controller with Windows Azure, on the other hand, requires a certificate and the Windows Azure subscription ID. Although this routine may initially appear complex, it is actually quite simple and logical.
Establishing a secure channel for connecting App Controller with a Windows Azure subscription requires a private/public key pair. App Controller employs the private key by installing the Personal Information Exchange (.PFX) file of a chosen digital certificate, while the paired public key, exported in the binary (.CER) format of the same certificate, is uploaded to the intended Windows Azure subscription. The following walks through the process.
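As an aside, the two file formats involved can be illustrated with a short script. The following is a minimal sketch using the third-party Python cryptography package; the file names, subject name, and password are hypothetical, and in practice the certificate would typically be exported from the local computer certificate store (as described next) rather than generated ad hoc.

```python
# A minimal sketch, assuming the third-party "cryptography" package:
# generate a self-signed certificate, then write the .PFX (private key,
# for App Controller) and .CER (public key, for the Azure subscription).
# File names, subject, and password are hypothetical.
import datetime

from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.hazmat.primitives.serialization import pkcs12
from cryptography.x509.oid import NameOID

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
subject = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"AppControllerAzure")])
now = datetime.datetime.utcnow()
cert = (
    x509.CertificateBuilder()
    .subject_name(subject)
    .issuer_name(subject)                     # self-signed: issuer == subject
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=365))
    .sign(key, hashes.SHA256())
)

# .CER: DER-encoded public certificate, uploaded to the Windows Azure subscription
with open("appcontroller.cer", "wb") as f:
    f.write(cert.public_bytes(serialization.Encoding.DER))

# .PFX: PKCS#12 bundle carrying the private key, installed where App Controller runs
pfx = pkcs12.serialize_key_and_certificates(
    name=b"appcontroller", key=key, cert=cert, cas=None,
    encryption_algorithm=serialization.BestAvailableEncryption(b"ChangeMe!"),
)
with open("appcontroller.pfx", "wb") as f:
    f.write(pfx)
```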
For those who are familiar with PKI, use Microsoft Management Console (MMC) to directly export a digital certificate in the PFX and CER formats from the local computer certificate store. Those relatively new to certificate management should first look at which certificates IIS is employing, to better understand which certificate to use.
Since App Controller is installed with IIS, acquiring a certificate is quite simple. When App Controller is installed with IIS, a self-signed certificate is put in place for accessing the App Controller web UI over SSL.
The certificate store of an OS instance can be accessed with MMC.
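For a scriptable view of the same store the MMC snap-in shows, here is a minimal, Windows-only sketch; it uses Python's standard-library ssl.enum_certificates and assumes the third-party cryptography package for decoding the subjects.

```python
# List certificates in the system "MY" (Personal) store, the same store
# the MMC Certificates snap-in displays. Windows-only; decoding assumes
# the third-party "cryptography" package.
import ssl

from cryptography import x509

for cert_bytes, encoding, _trust in ssl.enum_certificates("MY"):
    if encoding != "x509_asn":        # skip PKCS#7 blobs
        continue
    cert = x509.load_der_x509_certificate(cert_bytes)
    print(cert.subject.rfc4514_string(), "| expires:", cert.not_valid_after)
```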
The two export processes, for example, produced the two certificates for connecting App Controller with Windows Azure, as shown below.
Once connected to on-premises and off-premises datacenter resources, App Controller is a secure vehicle that enables a user to manage authorized resources in a self-service manner. It is not just that the technologies are fascinating; it is about shortening time to market, so resources can be allocated and deployed based on a user's needs. This is a key step in realizing IT as a Service.
In today’s episode, Yung Chou shows us how we can create a virtual machine using Windows Azure. Sign up for your free 90-day trial, if you have not already. In this how-to video, Yung creates a Windows Server 2012 virtual machine in a matter of minutes, showing what options are available as well as how you can manage and remote into it.
This is a nice compilation of pertinent information on deploying Windows 7. For those who are focusing on Windows 7 deployment in an enterprise environment, the following are, in my view, essential reading as well.
Learn about the new features of Windows Server 2008 R2 in the areas of virtualization, management, the Web application platform, scalability and reliability, and interoperability with Windows 7. Download Introducing Windows Server 2008 R2, written by industry experts Charlie Russel and Craig Zacker along with the Windows Server team at Microsoft.
I introduced this e-book a while ago; it is a great resource for gaining technical depth on Microsoft virtualization solutions. Also included here are some of my blog posts which you may find worth reviewing. Registration is required to download this book.
For those who would like to try and get familiar with Windows 7 and Windows Server 2008 R2, follow the links below to download, install, and test them out. Also included is the download information for Forefront and System Center, which are essential for securing and managing an enterprise infrastructure.
Reference: Microsoft virtualization cost-saving whitepaper, ROI tool, and training
The referenced white paper presents case studies of Microsoft customers including:
and examines how virtualization technology simplifies their IT infrastructure, streamlines IT processes, and ultimately reduces the total cost of ownership. Also included is information based on Microsoft's own experience, as below:
In my view, general strategies to relatively quickly reduce IT infrastructure and support costs with virtualization solutions are, in no particular order, to:
To transform existing IT into a hybrid environment mixing physical and virtualized computing resources, server virtualization (i.e. server consolidation) is often where it starts. Running multiple instances on a single physical machine is not a new concept, and many of us have already experimented with host virtualization solutions like Virtual PC and Virtual Server.
To realize how your organization can benefit from Microsoft virtualization solutions,
Essentially, first identify your best candidates for server consolidation with the free, downloadable Microsoft Assessment and Planning (MAP) Solution Accelerator. With its agent-less inventory, performance data gathering, and auto-generated proposal and report generation capabilities, MAP lets you conduct network-wide readiness assessments so you can quickly and efficiently determine the right servers to target for Hyper-V. After determining how many servers to consolidate, you can use the free Microsoft HyperGreen Tool to figure out how much energy you’ll save and the environmental impact of those savings. Simply plug in the number of servers you are going to consolidate, and HyperGreen generates a report detailing your reductions in kilowatts, money, and CO2 emissions. Then use the Microsoft Integrated Virtualization ROI Tool to estimate your return on investment in Microsoft virtualization solutions, including server, desktop, and management. As our customers have shown, the results can be transformational.
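The arithmetic behind such an estimate is straightforward. The following is a back-of-the-envelope sketch in Python; every constant is an assumed placeholder for illustration, not a figure from MAP, HyperGreen, or the ROI tool.

```python
# Back-of-the-envelope consolidation savings estimate. All constants are
# assumed placeholders, not the actual figures any of the tools use.
SERVERS_RETIRED = 40        # physical servers removed by consolidation
WATTS_PER_SERVER = 400      # assumed average draw, incl. cooling overhead
USD_PER_KWH = 0.10          # assumed electricity rate
KG_CO2_PER_KWH = 0.5        # assumed grid emission factor

HOURS_PER_YEAR = 24 * 365
kwh_saved = SERVERS_RETIRED * WATTS_PER_SERVER * HOURS_PER_YEAR / 1000.0

print(f"Energy saved: {kwh_saved:,.0f} kWh/year")
print(f"Cost saved:   ${kwh_saved * USD_PER_KWH:,.0f}/year")
print(f"CO2 avoided:  {kwh_saved * KG_CO2_PER_KWH / 1000:,.1f} metric tons/year")
```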
This month, we Microsoft Platform Technology Evangelists are writing a series of articles that steps through building your very own private cloud by leveraging Windows Server 2012, Windows Azure Infrastructure as a Service (IaaS), and System Center 2012 Service Pack 1. Week by week, we’ll walk through the steps to envision, plan, and implement your very own private cloud to take your existing datacenter to the next level.
The significance of managing a hybrid cloud lies in the potential complexities and complications of operating on resources deployed across various facilities, among corporate datacenters and those of third-party cloud service providers. Multiple management tools, inconsistent UIs, heterogeneous operating platforms, and non- or poorly-integrated development tools can and will noticeably increase overhead and reduce productivity.
The entire blog post series is at http://aka.ms/PrivateCloud with the latest updates.
Windows Azure is, in my view, as critical to Microsoft private cloud solutions as Active Directory is to a Windows infrastructure. In a Windows domain, Active Directory holds the single version of truth and is the ultimate authority on all defined resources. Similarly, when it comes to Microsoft cloud computing, there is no question that Windows Azure is the de facto platform, an extension of Active Directory into the cloud. As enterprise IT transitions from on-premises deployment to an emerging hybrid cloud architecture, IT professionals face unprecedented challenges in moving from managing servers deployed on premises to managing services delivered with a hybrid cloud, and at the same time extraordinary opportunities to upgrade and expand their skill profiles, become leaders in cloud initiatives, and contribute to IT communities.
For IT professionals, a productive and direct way to learn and master Microsoft cloud computing solutions is to walk through and gain hands-on experience with the features available in Windows Azure. The 90-day free trial and many readily available resources give IT professionals no-cost access to experience and experiment with deploying cloud resources such as VMs, web sites, media and mobile services, and virtual networks. There are now many options for IT professionals to better deliver services. The following highlights the features available in Windows Azure and their significance to IT professionals.
This post provides a quick reference to the installation flow of Microsoft System Center Application Virtualization (App-V) 4.5 Management Server. The steps to configure the server, import applications, and validate the settings are not included in this post; they are to be discussed in a screencast currently in development and soon to be published on this blog. The presented screen flow was captured during an installation of an App-V Management Server on Windows Server 2008 Enterprise with the machine name App-V and a local instance of Microsoft SQL Server 2005 SP2, on a virtual machine based on Hyper-V.
For those who have previously worked on a SoftGrid 4.x infrastructure, the installation of App-V Management Server appears very familiar and uneventful. The RTSPS port shown in screen 10 is set to 322 by default. If you are putting a brand-new virtualization infrastructure in place, do take time to review the 4.5 documentation and plan it out, particularly the content location where the App-V packages are placed, as shown in screen 13. Once the packages are in place and working, it is error-prone and tedious to validate the content location in all packages should the content location later change.
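Because that validation is tedious by hand, a short script can audit every package in one pass. The following is a minimal sketch assuming the App-V 4.x OSD schema, where the package source is recorded in the HREF attribute of the CODEBASE element; the content share path and expected prefix are hypothetical and should be adjusted for your environment.

```python
# Audit the CODEBASE HREF recorded in every .osd file under a content
# share. Share path and expected prefix are hypothetical placeholders.
import glob
import os
import xml.etree.ElementTree as ET

CONTENT_DIR = r"\\App-V\content"
EXPECTED_PREFIX = "RTSPS://%SFT_SOFTGRIDSERVER%:322/"

for osd_path in glob.glob(os.path.join(CONTENT_DIR, "**", "*.osd"), recursive=True):
    # In the App-V 4.x OSD schema, CODEBASE sits under SOFTPKG/IMPLEMENTATION
    codebase = ET.parse(osd_path).getroot().find(".//CODEBASE")
    href = codebase.get("HREF", "") if codebase is not None else "(missing)"
    status = "ok   " if href.upper().startswith(EXPECTED_PREFIX.upper()) else "CHECK"
    print(status, os.path.basename(osd_path), href)
```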
To get the latest information of Microsoft Application Virtualization, reference the following:
The Virtually Speaking about Cloud Computing series by Yung Chou includes:
About This Video
In today’s episode, Yung Chou outlines steps that you can take to become a private cloud expert in your organization. Using Microsoft’s System Center 2012, Yung describes how you can deliver IT as a service by creating private clouds, as well as how to manage your IT environment in a more agile way.
Video: WMV | MP4 | WMV (ZIP) | PSP Audio: WMA | MP3
Are you ready for Private Cloud? Take a free 10-minute assessment
If you're interested in learning more about the products or solutions discussed in this episode, click on any of the links below for free, in-depth information:
This is part 2 of a 4-part Mad About MED-V series. This screencast presents the user experience of running MED-V applications by going through the essential user operations of a MED-V client.
The Mad About MED-V screencast series includes:
and each link will be updated once the associated screencast is published. The remainder of this posting highlights some of the content presented in Part 2.
As discussed in Part 1 of this series, a MED-V workspace policy optionally allows a MED-V application to be integrated into the All Programs menu of the host computer, as shown below, despite the fact that the MED-V application is configured and running in a Virtual PC behind the scenes.
To run a MED-V application, the workspace must first be started. A MED-V client can be loaded at Windows startup if specified in the MED-V Client Settings, in which case a workspace can also be set to start automatically. This ensures the workspace is always in place should a user need to run a MED-V application once the computer has started. If the workspace has not been initialized, it starts on demand and brings up the application upon completing the workspace initialization. Once a workspace is started, additional options like locking, restarting, and stopping the workspace become available when right-clicking the MED-V client icon in the system tray. A user also has access at this time to utilities like the File Transfer tool, as shown below. The File Transfer tool enables a user to transfer files between the host computer and the MED-V application running in the Virtual PC in the background.
In a MED-V workspace policy, a MED-V administrator can optionally configure a color border to surround a running MED-V application, as shown above. This setting can easily be changed or disabled within the workspace policy by a MED-V administrator.
A MED-V workspace policy can be configured to automatically redirect a request for a target website from the host computer to the browser in the Virtual PC. This ensures that every request to a target URL hosting a web application incompatible with the browser installed on the host computer is redirected to a compatible browser running in the Virtual PC behind the scenes. The following screen capture shows a request redirected from the host computer, which runs IE7, to the IE6 browser (with a red border) running in the hidden Virtual PC.
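Conceptually, the redirection is a prefix match of each requested URL against the list configured in the workspace policy. The following sketch illustrates that behavior only; it is not the MED-V policy format, and the URLs are hypothetical.

```python
# Conceptual illustration only (not the MED-V policy format): how a
# configured redirection list decides which requests open in the
# workspace's browser. URLs are hypothetical.
REDIRECTED_PREFIXES = [
    "http://legacyapp.contoso.com",      # e.g. an IE6-only line-of-business app
]

def opens_in_workspace(url: str) -> bool:
    """True when the request should be redirected to the Virtual PC's browser."""
    return any(url.lower().startswith(p.lower()) for p in REDIRECTED_PREFIXES)

print(opens_in_workspace("http://legacyapp.contoso.com/reports"))  # True
print(opens_in_workspace("http://www.contoso.com"))                # False
```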
In today’s episode, Sr. IT Pro Evangelist, Yung Chou kicks off his new “Virtually Speaking” series as he sets a baseline understanding of what cloud computing is at its core, how it has evolved over several decades and what its true value is to IT Professionals, consumers and the technology industry as a whole.
Click here to check out TechNet Virtual Lab: System Center Virtual Machine Manager 2012 - Building a Service Template
The content of this post was based on Windows Server 2008 R2. However, the concepts remain applicable and the implementations are much the same as those in Windows Server 2012.
The ability to deliver a desktop with full fidelity over a network, while deploying applications on demand and with hardware independence, is an IT reality with Windows 7, Windows Server 2008 R2, and Application Virtualization (App-V), which is part of the Microsoft Desktop Optimization Pack (MDOP). This screencast highlights how these three amazing technologies work as a solution platform by demonstrating key user scenarios. Notice that to implement the VDI solution in a domain at the Windows 2003 functional level, one must extend the AD schema to the Windows Server 2008 level.
For more information, I have also published a number of blog posts and screencasts on Microsoft virtualization solutions including: