A follow-up to this posting with a screencast is available.
Event subscription has been one of the server features most requested by sysadmins. Combined with task scheduling, it is a cost-effective and customizable way to get a consolidated view of monitored activities and events on target servers, and to issue timely alerts. In Windows Server 2008, subscribing to and forwarding events, with triggers to send out alerts, can be done very easily as follows:
1. Create a subscription from Event Viewer.
2. Configure the subscription based on your requirements. The configuration settings shown are for demonstration and not necessarily recommended.
Make sure to click User and Password and provide the user credentials.
3. Once configured, the subscription is listed as ready. Right-click it to start the subscription.
4. Now the subscribed events will be listed under the Forwarded Events log. Notice a subscribed event may take some time to show up in the log after it has occurred on a targeted server. If user credentials are provided and the option to minimize latency is selected, the forwarding should happen within a minute or so.
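For reference, here is a minimal sketch of the one-time plumbing behind such a collector-initiated subscription; the group-membership step and the account name are assumptions for a typical setup, not from the walkthrough above.

    rem On each source (target) server: enable WinRM for remote collection
    winrm quickconfig

    rem On the collecting server: configure the Windows Event Collector service
    wecutil qc

    rem On each source server: let the collector read events remotely
    rem (contoso\collector$ is a hypothetical collector machine account)
    net localgroup "Event Log Readers" contoso\collector$ /add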
To schedule a task for sending out alerts upon the arrival of a subscribed event,
1. Start Task Scheduler from the Administrative Tools.
2. Configure the task based on your requirements. The configuration settings shown are for demonstration and not necessarily recommended.
3. Once configured, the task is listed as ready. Right-click to start running the task.
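The same event-triggered task can also be created from the command line; the following is a sketch, in which the task name, event ID, and alert script are hypothetical.

    schtasks /Create /TN "ForwardedEventAlert" /RU SYSTEM ^
        /SC ONEVENT /EC ForwardedEvents ^
        /MO "*[System[(EventID=1000)]]" ^
        /TR "powershell.exe -File C:\scripts\Send-Alert.ps1"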
The series focusing on cloud essentials for IT professionals includes:
In Part 1, I talked about what “service” means in the context of cloud computing. Cloud is all about delivering services, i.e. making resources available on demand based on needs, paid by use, and with the characteristics of ubiquitous network access, resource pooling, etc. Still, we need to clearly define what cloud is. Without a common definition for a subject as broad as cloud computing, it is hard to navigate through the overwhelming business and technical complexities. So here’s the six-million-dollar question.
What Is Cloud
It is important to understand that there are service delivery models and deployment models, and both are needed to fully describe what cloud is. There are 3 ways to deliver services via cloud.
Software as a Service, or SaaS, is a model where an application is available on demand. It is the most common form of cloud computing delivered today. Microsoft Office 365, including Exchange Online, SharePoint Online, Lync Online, and the latest version of the Microsoft Office Professional Plus suite, is a SaaS offering to businesses.
Platform as a Service, or PaaS, is a platform available on demand for development, testing, deployment, and ongoing maintenance of applications without the cost of buying the underlying infrastructure and software environments. Windows Azure Platform is a cloud-computing platform on which Microsoft’s internal IT (MSIT) organization quickly built and deployed the Social eXperience Platform (SXP) to enable social media capabilities across Microsoft.com, as documented.
Infrastructure as a Service, or IaaS, is the third delivery model, where fundamental computing resources such as servers, storage, and networking are provisioned on demand, and a subscriber deploys and runs its own operating systems and applications on them.
On deployment, there are two base models. Public cloud is cloud computing made available through the Internet to the general public or targeted users, and is owned by an organization offering cloud services. Examples are Microsoft Windows Live as a free public cloud offering for consumers, and Microsoft Online and Office 365 for businesses. Private cloud, on the other hand, is cloud available solely for an organization, regardless of whether the cloud capabilities are managed by the organization or a third party and exist on premise or off premise. Based on the two models, some derive additional models like hybrid cloud, community cloud, etc. to highlight the implementation or intended audiences. For private cloud, two service delivery models, PaaS and IaaS, are applicable, since in a private setting one cannot deliver SaaS without having PaaS in place. Notice Hyper-V Cloud is a solution for building private cloud: a set of initiatives, guidelines, and offerings to help enterprises deliver IaaS in a managed environment. Also, the above-mentioned delivery models are significant since, once a model is selected to fulfill business objectives, responsibilities are implicitly agreed upon and accepted by the party hosting the cloud facility and the party subscribing to the services.
Separation of Responsibilities
An important attribute of cloud computing is the separation of a subscriber’s responsibilities from those of a service provider. And by subscribing to a particular service delivery model, a subscriber in essence agrees to relinquish a certain level of access to and control over resources managed by the service provider. As I have discussed in Cloud Computing Primer for IT Pros, we must recognize and stay mindful of the limitations of each service delivery model when assessing cloud. When a particular function or capability like security, traceability, or accountability is needed yet not provided with an intended delivery model, a subscriber needs to either negotiate with the service provider and put specifics in a service level agreement, or employ a different delivery model such that a desired function becomes available. A lack of understanding of the separation of responsibilities, in my view, frequently results in false expectations of what cloud computing can or cannot deliver.
[To Part 1, 2, 3, 4, 5, 6]
As stated in the Microsoft Windows Server product roadmap, a server release update is expected two years after a major release. Windows Server 2008 was released in 2008, so the next server release update should arrive by 2010 as Windows Server 2008 R2 (or Release 2), and a reviewers guide is available. In the Microsoft product release cycle, an update release integrates the previous major release with the latest service pack, selected feature packs, and new functionality. And because an update release is based on the previous major release, customers can incorporate it into their environment without any additional testing beyond what would be required for a typical service pack. Any additional functionality provided by an update is optional and thus does not affect application compatibility or require customers to recertify or retest applications.
In Windows Server 2008 R2, Terminal Services is renamed to Remote Desktop Services (RDS). RDS introduces the new Remote Desktop Connection Broker – an expansion of the Session Broker in Windows Server 2008 – which provides the administrator with a unified experience for setting up user access to both Virtual Desktop Infrastructure (VDI) and traditional session-based remote desktops. Together with Hyper-V and System Center Virtual Machine Manager, the Remote Desktop Connection Broker enables a VDI solution. The Remote Desktop Connection Broker complements the shared RDS infrastructure components in Windows Server 2008, such as Remote Desktop Web Access and Remote Desktop Gateway. Windows Server 2008 R2 also introduces a series of platform enhancements for remote desktop users – such as support for multiple physical monitors, redirection of multimedia and 3D content, including Vista Aero, and enhanced, bi-directional audio support. To follow the development of RDS, this Team Blog is a good place to start.
This renaming is not just about getting a new name for Terminal Services, a technology we have been using for a long time. This is more about fundamentally validating, aligning, and integrating Terminal Services with emerging paradigms like virtualization infrastructure, as shown below.
We know it is critical to have a management solution in place while introducing and transforming existing IT infrastructure into a heterogeneous environment in which physical and virtualized computing resources, including data, storage, applications, servers, desktops, networks, and peripherals, are managed seamlessly and transparently. Terminal Services is presentation virtualization, and we should and need to manage it just like other virtualization solutions.
PKI is heavily employed in cloud computing for encrypting data and securing transactions. As Windows Server 2012 R2 is developed as a building block for cloud solutions, there is an increasing demand for IT professionals to acquire proficiency in implementing PKI with Windows Server 2012 R2. This two-part blog post series is to help those who, like me, perhaps do not work on Active Directory Certificate Services (AD CS) every day, yet every so often do need to implement a simple PKI for assessing or piloting solutions, better understand and become familiar with the process.
I believe the most effective way to learn AD CS is to walk through the process, build a test lab, practice and learn from mistakes. You can download Windows Server 2012 R2 VMs from http://aka.ms/R2 and build a simple AD environment with Hyper-V like the following to test out these steps.
The following six steps form the core process of implementing PKI. The common practice is to first build a root CA on a standalone server, followed by configuring a subordinate CA on a member server for issuing certificates, while securing the root CA by taking it offline and bringing it back online only when issuing a subordinate CA certificate. Notice Part 1 includes the first four steps, while the rest are in Part 2. All descriptions and screen captures are based on Windows Server 2012 R2.
This is the first AD CS role to be installed in an enterprise PKI. It is a trust anchor and establishes the root of a trust hierarchy. To secure the root CA, a common practice is to keep it offline to minimize the exposure, and to bring it online only when issuing a subordinate CA certificate. The process is to simply add and configure the AD CS role as a Certificate Authority (CA) on a non-domain-joined server.
Once the role is installed, configure AD CS as a standalone root CA. Here, as shown below, I set the key length to 4096 and name the CA ycCorpRootCA.
Subsequently, I also set the validity period to 2 years instead of the default 5 years.
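For those who prefer scripting, the same root CA setup can be done with the AD CS deployment cmdlets. The following is a minimal sketch, assuming the settings above (ycCorpRootCA, a 4096-bit key, a 2-year validity period); the hash algorithm choice is mine and not from the walkthrough.

    # Add the AD CS role, then configure this server as a standalone root CA
    Install-WindowsFeature ADCS-Cert-Authority -IncludeManagementTools
    Install-AdcsCertificationAuthority -CAType StandaloneRootCA `
        -CACommonName 'ycCorpRootCA' `
        -CryptoProviderName 'RSA#Microsoft Software Key Storage Provider' `
        -KeyLength 4096 -HashAlgorithmName SHA256 `
        -ValidityPeriod Years -ValidityPeriodUnits 2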
Once the CA is created, configure the CA properties with the information of the subordinate CA. Click the Extensions tab and add a CDP pointing to the subordinate CA, which will be the one actually distributing certificates. The following figure shows a target CDP with optional settings configured as the following:
Also add a new location for AIA with an optional setting, as shown below.
Now publish the revocation list.
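From an elevated PowerShell prompt, the restart-and-publish step can be sketched as follows; restarting the CA service makes the new CDP/AIA settings take effect before the CRL is published.

    # Restart the CA service so the extension changes take effect
    Restart-Service certsvc
    # Publish a new certificate revocation list
    certutil -crl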
In the MMC Certificates snap-in, export the .cer file, i.e. without the private key, and also copy the content of c:\windows\system32\certsrv\certenroll to an intended location, here \\rootCA\_pocket$, for later access from a subordinate CA, as depicted below.
At this time, the root CA of yc.corp is in place.
Here, I am integrating a subordinate CA into AD to publish its certificate to all domain clients. The process is to first add the AD CS role with the CA and Web Enrollment services on a target member server, subCA, as shown, followed by configuring the Setup Type as Enterprise CA and the CA Type as Subordinate CA, as shown below.
And the following shows that it is named ycCorpSUBCA and a request file is saved locally.
On the subordinate CA, install the root CA certificate in the local machine’s Trusted Root CA certificate store.
Also create c:\inetpub\wwwroot\certdata and copy the crl and crt files from the root CA server into it, as the following shows.
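A minimal sketch of these two steps in PowerShell, assuming the root CA files were staged at \\rootCA\_pocket$ as described earlier:

    # Trust the root CA on the subordinate CA server
    certutil -addstore -f Root '\\rootCA\_pocket$\ycCorpRootCA.cer'
    # Stage the root CA's CRL and certificate files for HTTP access
    New-Item -Path 'C:\inetpub\wwwroot\certdata' -ItemType Directory -Force
    Copy-Item '\\rootCA\_pocket$\*.crl', '\\rootCA\_pocket$\*.crt' 'C:\inetpub\wwwroot\certdata'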
Now copy the subordinate CA’s request file from the subordinate CA to the root CA’s shared folder to acquire a certificate in p7b format with the complete certificate chain for the subordinate CA. On the root CA server, open the CA console to submit the request file, subca.yc.corp-SUBCA-CA.req, as shown below. Once submitted, the request will be placed in the Pending Requests folder. Once issued, the certificate will then be listed in the Issued Certificates folder.
Once the p7b certificate is exported, open it and examine all certificates for establishing the trust as shown below.
Finally, on the subordinate CA server, where the CA service is stopped at this time, use the CA console to install the p7b certificate, followed by starting the CA service.
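Equivalently, this can be done from the command line on the subordinate CA; the .p7b file name below is assumed to follow the request file name used above.

    # Install the issued subordinate CA certificate, then start the CA service
    certutil -installcert subca.yc.corp-SUBCA-CA.p7b
    Start-Service certsvc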
The following shows that once the CA service has successfully started, the icon now shows a green check mark.
At this time, distribute the root CA certificate to the domain by importing it into Trusted Root Certification Authorities under Public Key Policies of an intended domain-level GPO; the subordinate CA is then in place. The following shows, for example, importing the root CA certificate, ycCorpRootCA.cer, into a GPO linked at the domain level.
With the CA infrastructure ready, next deploy certificate templates. On the subordinate CA server, open the CA console, then right-click the Certificate Templates folder and select Manage.
Now from the Certificate Templates Console, duplicate a target template and modify the definitions. The following are sample template definitions for smart card logon and web server.
As the following shows, the Certificate Templates Console now includes the two newly defined templates.
The next step is to publish the two templates for issuing certificates. This is done from the Certificate Templates folder of the CA console. The steps are depicted as the following.
At this time, the newly created certificate templates are published. Test the new web server template by requesting a domain certificate from the IIS console as shown below.
And the following shows a web server certificate was subsequently issued accordingly. This certificate can then be bound to a port to establish SSL.
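As a sketch of the binding step in PowerShell, assuming the issued certificate’s subject carries the requesting server’s FQDN; the site name and port are the IIS defaults, and the subject match below is an assumption.

    Import-Module WebAdministration
    # Locate the newly issued web server certificate (subject match assumed)
    $cert = Get-ChildItem Cert:\LocalMachine\My |
        Where-Object { $_.Subject -like '*subca.yc.corp*' }
    # Create an https binding and associate the certificate with port 443
    New-WebBinding -Name 'Default Web Site' -Protocol https -Port 443
    $cert | New-Item IIS:\SslBindings\0.0.0.0!443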
This step is to auto-enroll users such that much of the legitimate maintenance, like enrolling, renewing, and updating certificates as applicable, can be carried out automatically. These user settings are in Public Key Policies of a GPO. The following are the steps carried out on the DC, dc.yc.corp, to enable auto-enrollment and the certificate enrollment policy of an AD CS client.
On the subordinate CA, run gpupdate /force to refresh Group Policy. Then use mmc to check the user certificate store; a user certificate issued by the subordinate CA via auto-enrollment should be in place as shown below.
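A quick PowerShell sketch of the same verification:

    # Refresh Group Policy, then list user certificates; an auto-enrolled
    # certificate issued by ycCorpSUBCA should appear shortly afterwards
    gpupdate /force
    Get-ChildItem Cert:\CurrentUser\My | Format-List Subject, Issuer, NotAfter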
If certificate auto-enrollment does not work as expected, use the Certificate Templates Console to check the security settings in the properties of the certificate template in question.
One essential characteristic of cloud computing is a self-service mechanism, which both NIST SP 800-145 and Chou’s 5-3-2 Principle have discussed well. The self-servicing capability is essential since not only does it fundamentally reduce support cost, but making it easy for a user to consume provided services continually promotes the usage and ultimately accelerates the ROI. In System Center 2012 SP1, App Controller is the self-service vehicle for managing a hybrid cloud based on SCVMM, Windows Azure, and 3rd-party hosting services.
This article assumes a reader is familiar with System Center 2012 SP1, and particularly System Center Virtual Machine Manager (SCVMM) and App Controller. Those who are new to System Center 2012 SP1 should first download and install at least SCVMM 2012 SP1 and App Controller 2012 SP1 from http://aka.ms/2012 to better follow the presented content.
The concept of a role-based security model in SCVMM is to package the security settings and policies of who can do what, and how much, on an object into a single concept, the so-called user role. The idea of a user role is to define a job function that a user performs, as opposed to simply offering a logical group of selected user accounts.
To delegate authority, a user role is set with tasks, scope, and quotas based on a target business role and assigned responsibilities. The members of a user role are then given the authority to carry out specific tasks on authorized objects to perform a defined business function. For instance, a first-tier help desk support person may perform a few specific diagnostic operations on a VM or service, but not debug, store, or redeploy it, while a datacenter administrator, as an escalation path for the first-tier help desk, can do all of these. In this case, help desk support and escalation engineer are to be defined as two user roles for delegating authority.
Operationally, creating a user role is to configure a profile which includes membership, scope, resources, credentials, etc. A user role defines who can do what and how much on an authorized resource. In essence, a defined user role is a policy imposed on those who are assigned this role, i.e. those having a membership of this role.
To set up a user role in SCVMM, use the admin console and go to the Settings workspace, followed by clicking Create User Role on the ribbon as shown below. There are four user role profiles available in SCVMM 2012 SP1. Each profile includes membership, scope, accessible networks and resources, allowed operations, etc.
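As a hedged sketch with the VMM cmdlets, this is roughly what creating a user role based on the Self-Service User profile and adding a member looks like; the role name and AD group below are hypothetical.

    Import-Module virtualmachinemanager
    # Create a user role based on the Self-Service User profile
    $role = New-SCUserRole -Name 'TierOneHelpDesk' -UserRoleProfile SelfServiceUser
    # Add an AD group (hypothetical) as a member of the role
    $role | Set-SCUserRole -AddMember 'CONTOSO\HelpDeskTier1'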
The self-service model of SCVMM employs App Controller and the SCVMM admin console as the self-service vehicles, enabling an authorized user to self-manage resource consumption based on SLA, with minimal IT involvement in the lifecycle of a deployed resource and without the need to expose the underlying fabric, which is a key abstraction in cloud computing.
A difference between using App Controller and SCVMM is that the former never reveals the underlying fabric, while the latter will, according to the user role of an authenticated user.
In System Center 2012 SP1, there are a number of new operations available for App Controller as documented in http://technet.microsoft.com/en-us/library/jj605414.aspx. These operations as listed below facilitate the migration and deployment of resources among SCVMM-based private clouds, Windows Azure, and 3rd party hosting services.
Cloud is here to stay and hybrid is the way to go. Be ready. Learn, master, and take advantage of it. Make profits. Grow a career. Eat well and sleep well while welcoming XaaS, Everything as a Service, which we will have a lot to talk about soon.
Caleb’s understanding of virtualization and Hyper-V was a surprise to me, since I did not sit down and give him a technical discussion or training of any kind in Microsoft virtualization, and yet he seemed quite comfortable with creating and operating a virtual machine (VM) in Hyper-V. He probably learned how to install Windows Server 2012 by being around my home office after he was out of school on those afternoons a few weeks ago, since while building VM images and developing demos I would go through installing servers and verifying settings back and forth many times.
What got me excited is not necessarily that he’s able to follow the wizard and click through the settings. What has impressed me is his confidence in describing the process, his comfort level in navigating through the UI, and his ability to visualize some of the essential concepts of virtualization such as adding the Hyper-V role, composing a VM, setting Dynamic Memory, creating a virtual switch, etc. And of course, most importantly, he snacked along the way and never left any cookies. :)
Microsoft US Platform Technology Evangelists have published a series of articles that step through building your very own private cloud by leveraging Windows Server 2012, Windows Azure Infrastructure as a Service (IaaS), and System Center 2012 Service Pack 1. This series presents the methodology of building a private cloud from overview, fabric concepts and construction, deployment, and management, to extending it to Windows Azure and beyond.
And ultimately, extend existing on-premise establishments with off-premise deployments and take the best of both models with a hybrid scenario. The following is a list of the entire series of deliveries. I recommend bookmarking this post (http://aka.ms/privatecloud) and the TechNet Radio series by Keith Mayer and Yung Chou (http://aka.ms/bpc) to walk through the entire process of building a private cloud.
My personal thanks to Keith Mayer (http://keithmayer.com) for his many contributions and leading efforts in organizing this series.
The process of building a private cloud needs to start with the fundamentals, both conceptual and technical. This blog post series intends to address both, going through a process of forming the concepts, securing the fundamentals, examining what is under the hood, and visualizing it all with the abstraction. At the same time, the TechNet private cloud solution portal at http://technet.microsoft.com/en-us/cloud/private-cloud offers IT professionals opportunities to further advance their learning in the architecture, planning, and implementation of cloud and datacenter solutions with Microsoft technologies.
Below is the weekly breakdown of each topic published in this series to facilitate your learning.
Microsoft private cloud solutions are built on the basic building block, Windows Server 2012, and the comprehensive management solution, System Center 2012 SP1. Both present key enabling technologies for building and extending a private cloud. IT professionals must recognize the urgency of mastering these two major technical components to maintain relevancy in Microsoft infrastructure going forward.
Get prepared to follow along by first acquiring the essential components and building blocks for constructing and extending a private cloud!
Prepare for the MCSE: Private Cloud certification exams with these popular FREE exam study guides:
As of August 2011, US NIST has published a draft document (SP 800-145) which defines cloud computing and outlines four deployment models: private, community, public, and hybrid clouds. At the same time, Chou has proposed a leaner version with private and public as the only two deployment models in his 5-3-2 Principle of Cloud Computing. The concept is illustrated below.
Regardless of how it is viewed, cloud computing characterizes IT’s capabilities with which a set of authorized resources can be abstracted, managed, delivered, and consumed as a service, i.e. with capacity on demand, without concern for the underlying infrastructure. Amid a rapid transformation from legacy infrastructure to a cloud computing environment, many IT professionals still struggle to better understand what cloud is and how to approach it. IT decision makers need to be crisp on what private cloud and public cloud are before developing a roadmap for transitioning into cloud computing.
Private Cloud and Public Cloud
Private cloud is a “cloud” which is dedicated, hence private. As defined in NIST SP 800-145, private cloud has its infrastructure operated solely for an organization, while the infrastructure may be managed by the organization or a third party and may exist on premise or off premise. By and large, private cloud is a pressing and important topic, since a natural progression in datacenter evolution for the post-virtualization-era enterprise IT is to convert/transform existing establishments, i.e. what has already been deployed, into a cloud-ready and cloud-enabled environment, as shown below.
NIST SP 800-145 points out that public cloud is a cloud infrastructure available to consumers/subscribers and owned by an organization selling cloud services to the public or targeted audiences. Free public cloud services like Hotmail, Windows Live, SkyDrive, etc. and subscription-based offerings like Office 365, Microsoft Online Services, and Windows Azure Platform are available on the Internet. And many simply refer to the Internet as the public cloud. This is, however, not entirely correct, since the Internet generally denotes connectivity and not necessarily a service with the 5 essential characteristics of cloud computing. In other words, just because something is accessible 24x7 through the Internet does not make it a cloud application. In such a case, cloud computing is nothing more than remote access.
Not Hybrid Cloud, But Hybrid Deployment
According to NIST SP 800-145, hybrid cloud is an infrastructure composed of two or more clouds. Here these two or more clouds are apparently related, or have some integrated or common components, to complete a service or form a collection of services presented to users as a whole. This definition is however vague, and the term, hybrid cloud, is extraneous and adds too little value. A hybrid cloud of a corporation including two private clouds, from HR and IT respectively, both based on corporate AD for authentication, is in essence a private cloud of the corporation, since the cloud as a whole is operated solely for the corporation. If a hybrid cloud consists of two private clouds from different companies based on established trusts, this hybrid cloud will still be presented as a private cloud from either company due to the corporate boundaries. In other words, a hybrid cloud of multiple private clouds is in essence one logical private cloud. Similarly, a hybrid cloud of multiple public clouds is in essence a logical public cloud. Further, a hybrid cloud of a public cloud and a private cloud is either a public cloud when accessed from the public cloud side or a private cloud from the private cloud side. It is either “private” or “public.” Adding “hybrid” only confuses people more.
Nevertheless, there are cases in which a cloud and its resources are with various deployment models. I call these hybrid deployment scenarios including:
I have previously briefly talked about some hybrid deployment scenarios. In upcoming blogs, I will walk through the architectural components and further discuss these scenarios.
I have a few interesting observations when classifying cloud computing. First, the current implementation of cloud computing relies on virtualization, and a service is relevant only to those VM instances, i.e. the virtual infrastructure, where the service is running. Notice that the classification of private cloud or public cloud is not based on where a service runs or who owns the employed hardware. Instead, the classification is based on whom, i.e. the users, a cloud is operated/deployed for. In other words, deploying a cloud on a company’s hardware does not automatically make it a private cloud of the company’s. Similarly, a cloud hosted on hardware owned by a 3rd party does not make it a public cloud by default either.
Next, at various levels of private cloud, IT is a service provider and a consumer at the same time. In an enterprise setting, a business unit IT may be a consumer of a private cloud provided by corporate IT, while also a service provider to users served by the business unit. For example, the IT of an application development department consumes/subscribes to a private cloud of IaaS from corporate IT based on a consumption-based charge-back model at a departmental level. This IT of an application development department can then act as a service provider to offer VMs, dynamically deployed with lifecycle management, to authorized developers within the department. Therefore, when examining private cloud, we should first identify roles, followed by setting proper context based on the separation of responsibilities, to clearly understand the objectives and scope of a solution.
Finally, community cloud as defined in NIST SP 800-145 is really just a private cloud of a community, since the cloud infrastructure is still operated for an organization which now consists of a community. This classification in my view appears academic and extraneous.
<Back to Part 1>
Recognizing “workspace” is a key concept for a user to become productive with SPW 2010, I want to focus on the three types of workspaces available in SPW 2010. They are:
Regarding software requirements, a SharePoint Workspace in SPW 2010 can synchronize only with a site running on Microsoft SharePoint Server 2010, SharePoint Foundation 2010, or SharePoint Online, while a SharePoint Files Tool in Groove 2007 can synchronize with a SharePoint document library running on Microsoft Office SharePoint Server 2007, Windows SharePoint Services, and later.
SharePoint Workspace in SPW 2010 is a new construct allowing a user who is also a SharePoint content owner to acquire a “local and personal” copy of selected libraries and lists of a SharePoint site. The user can work on the content locally and SPW 2010 will synchronize the changes automatically and on demand with those libraries and lists in the SharePoint site.
When there is connectivity, changes made to the local copy of libraries and lists are automatically synchronized with the corresponding items in an associated SharePoint site. SPW 2010 treats all local changes as high priority and initiates an immediate synchronization with SharePoint. When there is no connectivity, changes made in SharePoint workspaces are stored locally. The changes made offline are synchronized automatically the next time the user connects to the server.
The synchronization between a SharePoint Workspace and the associated libraries and lists of a SharePoint site is bi-directional. Consequently, SPW 2010 introduces changes made in a SharePoint Workspace to SharePoint; SPW 2010 also brings changes made directly in SharePoint by other authorized users into the SharePoint Workspace. The bi-directional synchronization is implied whenever data synchronization happens between a SharePoint Workspace and the associated libraries and lists of a SharePoint site. This two-way synchronization between a SharePoint Workspace and SharePoint is the vehicle to extend SharePoint content creation, and some content management, from SharePoint to the desktop.
SPW 2010 is a response to the business needs of taking the content of a SharePoint site offline due to the increasing mobility in the work environment. Ultimately, a SharePoint Workspace is a “personal” copy of libraries and lists of a SharePoint site that a content owner chooses to take offline. The term, personal, here indicates a noticeable departure of work pattern in SPW 2010 from that in Groove 2007. The following explains.
The SharePoint Files Tool in Groove 2007 is a “tool” in a workspace and not a workspace by itself. A SharePoint Files Tool synchronizes with a target SharePoint document library. And the members of a Groove 2007 workspace where a SharePoint Files Tool is added can by default access the content of this tool, i.e. a local copy of an intended SharePoint document library, unless the permissions of the tool are altered within the workspace. On the other hand, a SharePoint Workspace in SPW 2010 is not a tool in a workspace, but a workspace by itself, and has one and only one member: the user who creates the SharePoint Workspace. A user shares the changes made in a SharePoint Workspace with other authorized SharePoint users through content synchronization with the corresponding items in a related SharePoint site.
In other words, a SharePoint Workspace is intended for the content owner to have anytime access and can (check out as needed and) work on the content without the need to maintain connectivity with SharePoint. A SharePoint Workspace is nevertheless NOT intended for sharing content; the sharing should still go through synchronization with SharePoint, i.e. via SharePoint infrastructure and security model. While in Groove 2007, it is a different concept: the workspace construct and its tools including SharePoint Files Tool are solely for sharing with workspace members. There are also other implications, like data encryption, that SPW 2010 users and those who are used to Groove should be aware of. The following is a table depicting the encryption in SPW 2010 as published in SPW team blog.
Another important distinction of SPW 2010 from Groove 2007 is that a SharePoint Workspace in one computer DOES NOT synchronize across multiple computers where the same SPW 2010 account is restored. A user will need to create a SharePoint Workspace on each computer, although the user’s SPW account is restored in each computer and the SharePoint Workspace in each computer synchronizes with the same libraries and lists of a SharePoint site. While in Groove 2007, a workspace is automatically synchronized to all computers in which the same user account is restored.
One obvious reason to create a SharePoint Workspace is to have offline access to SharePoint content. Additionally, many may prefer working in a SharePoint Workspace, instead of accessing and administering SharePoint content via a browser, because the tools in a SharePoint Workspace provide quick and easy navigation among libraries and lists, as compared with working directly on SharePoint sites using a web browser. For example, changing the folder structure in a SharePoint Workspace is simple and very similar to the operations in Windows Explorer, while the same changes made directly in a SharePoint site using a browser interface require some operational knowledge of SharePoint administration. Also, one can switch among lists and libraries in a SharePoint Workspace with a mouse click, which is essentially instantaneous, whereas the same context switching using a browser may result in reloading web pages, which is relatively slow and tedious. For a system administrator managing libraries and lists in multiple SharePoint sites, one can create local copies of those libraries and lists with corresponding SharePoint Workspaces, and organize them in the Launchbar as shown (followed by right-clicking, or simply dragging an intended SharePoint Workspace to the desktop, to create shortcuts) for quick access and easy navigation. And as changes are made, synchronize the content with SharePoint. This also gives a consistent user experience in managing SharePoint site content, regardless of whether a user is online or offline.
In simple terms, a SharePoint Workspace gives a content owner and only this content owner access to a local copy of SharePoint libraries and lists at any time, whether there is connectivity with the associated SharePoint site or not. The simplicity and familiarity of performing many standard tasks, like folder arrangements, adding new items to lists and libraries, etc. also allow a user to focus more on the quality, and less on the specific operational requirements of managing and producing SharePoint contents.
Creating SharePoint Workspace
There are two ways. Directly from SharePoint Site Actions, a user can click Sync to SharePoint Workspace, as shown below, to create a local copy of the site content for synchronization. Or a user can create a SharePoint Workspace from the Launchbar, and in the process the user must specify the web address of, and be authenticated by, an intended SharePoint site.
Here it shows the content in a SharePoint Workspace can optionally be checked out to avoid editing conflicts with other people who have access to the same content on the SharePoint site.
Unsupported Content Types
SPW 2010 does not support all SharePoint sites, and not all content types of SharePoint lists and libraries, as shown below, are supported in SPW 2010 either. Calendar, survey, and wiki are, for example, unsupported types. A SharePoint site with a content type not supported by SPW 2010 will not have the option to “Sync to SharePoint Workspace” in SharePoint Site Actions.
Deleting SharePoint Workspace
This operation removes the local copy of SharePoint content; the deletion has no effect on, and does not delete, the corresponding content stored on a SharePoint site. After deleting a SharePoint Workspace, one can create a new SharePoint Workspace referencing the same SharePoint content. This is sometimes a quick fix for a SharePoint Workspace in an unknown state.
Coauthoring SharePoint Content
Office 2010 introduces “coauthoring,” a long-awaited collaboration feature. Although coauthoring is and should be a topic by itself, a brief discussion here highlights some exciting scenarios using SPW 2010, as described below:
So the settings are: SharePoint 2010, SPW 2010, and Word 2010; and the document is stored in SharePoint. All authors use a SharePoint Workspace to acquire a local copy of the document. All authors can make changes to the document regardless if there is connectivity between SPW 2010 and SharePoint 2010. All authors synchronize the changes made locally via the SharePoint Workspace.
Here, a SharePoint Workspace is the synchronization vehicle, the platform for co-authoring SharePoint document without the concern of network connectivity. The operational model is to have multiple clients synchronize with a centralized copy in SharePoint and not a direct peer-to-peer synchronization.
This coauthoring scenario gets even more exciting when the OS platform is Windows 7 and the machine is configured as a DirectAccess client. DirectAccess allows a client to connect to a private network securely without VPN. Basically, whenever there is Internet connectivity, a user can connect to the corporate domain network. And with Internet access, the coauthoring with synchronization can then happen anytime, anywhere, and on any network with a DirectAccess client.
SPW 2010 has a security option to scan all incoming and outgoing files to protect against viruses. This virus scanning feature is supported if you are running Norton AntiVirus Personal Edition 2002 or higher. However, the virus scanning feature is not supported if you are running Norton AV Corporate Edition or Sophos Anti-Virus.
This is the original workspace type in Groove 2007, before the product name changed to SPW 2010. When creating a new Groove workspace in SPW 2010, a user can choose between 2010 (the default) and 2007 versions. Each workspace version has a different set of productivity tools like Documents, Discussion, and Calendar. A member of a 2010 workspace must be running SPW 2010. All members of a 2007 workspace must be running Groove 2007 or later.
With Groove workspaces, one can collaborate beyond organization boundaries with external partners and offsite team members. Groove workspaces in SPW 2010 continue to leverage the peer-to-peer features as those functioning in Groove 2007. Those having used Groove 2007 before can expect much similar, if not identical, Groove functionality in SPW 2010.
Within a Groove workspace, the content is by default synchronized automatically to all workspace members. When a member is online, all inbound and outbound messages (i.e. application and user data) are immediately received and sent, respectively. When a member is offline, all inbound messages are queued in the Groove Server Relay designated for the user and all outbound messages are stored locally. A discussion of Groove infrastructure and deployment models is available elsewhere and far beyond the scope of this article.
In a workspace created in Groove 2007, the SharePoint Files Tool, which can synchronize with (and only with) a target SharePoint document library, is available. However, in a Groove workspace of the 2010 version created in SPW 2010, there is no such tool.
The above shows tools added by default to a 2010 version of Groove workspace include Documents, Discussions, and Calendar. There is no SharePoint Files Tool in the workspace tool set.
The above shows tools added by default to a 2007 version of Groove workspace are Files and Discussion. The SharePoint Files Tool is included in the workspace tool set.
A frequently asked question about a Groove workspace is the size limitation. One can check the workspace properties to find out the current workspace size. For optimal performance, limit the size of a Groove workspace to 2 GB or less. In fact, SPW 2010 by design cannot send/replicate a Groove workspace exceeding 2 GB to new invitees.
The automatic content synchronization of a Groove workspace among members and the user routines in SPW 2010 are very much the same as those in Groove 2007. For peer-to-peer collaboration using Groove, a Groove infrastructure based on Groove PKI needs to be in place. For those who are not familiar with how Groove 2007 works and would like to know more, the following information may be helpful.
There are three ways to deliver a workspace invitation: using instant messaging within SPW 2010, via Outlook, or as a file. One operational detail a user should be aware of: when inviting others with a workspace invitation file, the workspace can be sent, i.e. replicated, to an invitee only from the SPW 2010 device on which the invitation file was created. Needless to say, the workspace will not be sent to invitees unless that SPW 2010 device is online.
As an alternative to a Groove workspace, one can create a Shared Folder, which is visible to the Windows file system across all computers on which the same user account is restored. Because the content is exposed to the local Windows file system, a Shared Folder is searchable. Previously in Groove 2007, Shared Folder was not supported on a 64-bit OS; it is now in SPW 2010.
If cloud computing is not confusing enough, there is also this so-called private cloud. And what is private cloud? I am hoping at this time you have reviewed my Cloud Computing for IT Pros series and have a clear understanding of what a service is and what cloud computing is. These are key concepts. And equally important, you know the 5-3-2 Principle of Cloud Computing and why one application is a cloud application while others may not be. Generally speaking, there are 5 essential characteristics, 3 delivery methods, and 2 deployment models (or 4 if following the NIST definition) in cloud computing. It does not matter whether it is public cloud or private cloud: if it is classified as cloud computing, it should at least exhibit the 5-3-2 principle as the core set of attributes. With that in mind, what then is private cloud?
Private cloud? Well, it is a cloud, so the 5 essential characteristics of cloud computing apply. The term, private, here means dedicated, and a private cloud is a cloud dedicated to an organization. The classification is based on the intended users and not the ownership of the infrastructure. Namely, that an organization has a dedicated cloud does not necessarily mean the organization must own the infrastructure on which the dedicated cloud is running. An obvious example is a private cloud running on an infrastructure owned and managed by a 3rd-party hosting company. So a subscribing company may possibly own the data, software, configurations, and instances, but not the physical boxes and underlying infrastructure. To find out more about running private cloud in this fashion, a list of private cloud hosting companies is readily available.
Perhaps a more commonly assumed definition of private cloud is an on-premises deployment of cloud computing. In other words, all including the servers, cabling, software, running instances, etc. are owned and managed by an organization behind its enterprise firewall, as shown above. Many enterprises assume this definition of private cloud due to an existing deployment of on-premises IT resources. While transitioning into private cloud, it is a logical step to build one by employing already deployed hardware and software.
Ultimately, cloud computing is to better deliver applications. The goal of constructing a private cloud can be acquiring IaaS, PaaS, or SaaS. Based on the objectives, an organization, for example, may simply seek the ability to efficiently deploy/manage servers to provide maximal flexibility for deploying and testing applications, and in this case IaaS is what and all the organization needs. While the servers are deployed via IaaS, applications running within these servers do not have to be cloud applications. The applications can very well be traditional (i.e. non-cloud-computing) ones. The point is that to pursue a private cloud, it is not necessary to acquire all three (IaaS, PaaS, and SaaS) delivery methods. Nevertheless, for enterprises it is only logical to start with IaaS to fundamentally and strategically convert existing IT establishments into a cloud-ready environment. For pursuing a private cloud, IT should have IaaS in place first, which will fundamentally provide the mechanisms for resource pooling, scalability, and elasticity.
Microsoft’s private cloud solution is called Hyper-V Cloud, which is a set of guidelines, as shown here on the right, and offerings on building private cloud with IaaS using readily available technologies, i.e. Windows Server 2008 R2 and System Center Virtual Machine Manager. Hyper-V Cloud is exciting since not only does it increase the ROI on existing deployments, it also strategically places a foundation to integrate the Windows Azure platform offered in public cloud. Ultimately, enterprises will be able to manage physical, virtualized, and cloud (private and public) resources with a single pane of glass provided by System Center.
Above all, it does not matter if the delivery method is IaaS, PaaS, or SaaS. As far as a user is concerned, whatever your service/application is, it is always SaaS, even if your application is not cloud-based. The application is what this is all about. So when it comes to implementing private cloud, which will eventually change how your IT delivers services, it is an expensive proposition on both cost and customer satisfaction. Be clear on short-term checkpoints and long-term business goals. Scope down, but be very strategic in the overall implementation.
This is the sixth and last article of a series to review the following five BI vehicles in SharePoint 2010:
Business reports back in mainframe and early PC days used to be tedious to generate, hard to read, and painful to share. The administration and skills needed to organize, develop, and distribute data and reports were not trivial. I can still remember my consultant days working on JCL and COBOL to customize business reports in various mainframe shops. Today, with some key integrations and tools, it is much easier to generate reports using web services and a report generator.
In SharePoint 2010, a report server can be configured as part of a SharePoint deployment. The integration is provided through SQL Server and the Reporting Services Add-in for SharePoint Products. This integration provides benefits in storage, security, and document access. Once configured, opening a report in SharePoint will, behind the scenes, establish a session with the associated Report Server, which retrieves and processes the data, followed by displaying the results in the Report Viewer Web Part in SharePoint. Essentially, the reporting services can now be consumed directly from SharePoint document libraries with SharePoint content management and security models. The following depicts the architecture and the steps to enable this integration:
In addition, SQL Reporting Services is also integrated with Report Builder 3.0, which is a feature-rich report authoring tool for end users. Sparklines, data bars, maps, and indicators are some of the new features that enhance data visualization of KPIs in a report. For those who would like to learn more, there is much information readily available for mastering Report Builder 3.0.
(A cross-posting from Microsoft SharePoint Experts Blog)
This 3-part article details the 12 routines that I consider a Windows Server 2008 user ought to know first to accelerate the learning and adoption of Windows Server 2012 without the need for a touch device. For those IT professionals who are working towards becoming private cloud experts, it is imperative to master Windows Server 2012, an essential component in establishing a private cloud. And the earlier they master the Windows Server 2012 platform, the sooner they will become leaders in the IT transformation into private cloud computing. There is everything to gain by starting to learn Windows Server 2012 now as opposed to later.
The content of this series is based on Windows Server 2012 Beta as of May 2012. It is intended for those who are familiar with the administration of Windows Server 2008 (or later) to become comfortable and productive with Windows Server 2012 within an hour using conventional input devices like a keyboard and a mouse, while a touch device may not be immediately available. The following 12 routines are to facilitate the learning. They are certainly not complete, nor the only ways to operate Windows Server 2012.
I organize the content into 3 parts. Part 1 covers the first two routines, mainly on usability. The next eight, in Part 2, are essential user operations, and Part 3 (although not about operating a non-touch device) highlights two important facts: the wireless support, and an error message that a user is likely to experience when trying to initially connect to Windows Server 2008 R2 SP1 or other earlier versions of Windows servers. The three parts together should provide information sufficient for an experienced Windows server user to get productive quickly on this exciting new version of Windows Server.
In Windows Server 2012, there are the Start screen as shown above, the traditional desktop and apps, and Metro style apps. The Start screen is now the default landing screen upon logging in and the hub of all installed applications. The traditional desktop itself is now a Metro app, Desktop. And the user experience of the Desktop in Windows Server 2012 is very similar to the desktop experience in Windows Server 2008. Many UI features available in the desktops of Windows Server 2008 and Windows 7 are very much applicable. For example, shaking to minimize windows, snapping to resize or compare contents in two windows, minimizing all open windows by clicking at the lower right corner of the desktop, etc. all work the same in Windows Server 2012. Nonetheless, since Windows Server 2012 is for both touch devices and those based on traditional keyboard-mouse input, there are new features and operations, from a user experience point of view, to accommodate input from a touch device and from a keyboard and mouse within one OS instance.
1. Where Have All the Apps Gone
The first order of business in learning a new system is to find out where and what all the apps are, and to make frequently used apps easy to access for routine operations. From the get-go on the Start screen, after logging on as local administrator, place the cursor on the background color (i.e. not on a tile) and right-click. The All Apps button will appear at the lower left part of the screen, as shown here:
Clicking All Apps brings up the Apps screen, revealing all the apps currently installed in the system. On the Apps screen, right-click an app to mark it and pin/unpin the app to the Start screen or taskbar, and right-click again to unmark it, as needed. The following screen capture shows Resource Monitor currently pinned to the Start screen and not to the taskbar. A user will notice that the Apps screen lists frequently used admin tools including Control Panel, Services, Event Viewer, PowerShell, Windows Explorer, etc., which can be pinned to the Start screen and taskbar for direct access, as preferred. To get back to the Start screen at this time, mouse-click at the lower left corner (LL) or simply press the Windows logo key.
2. The Four Corners
These are what I call the four "magic corners" on a Windows Server 2012 screen, i.e. LL, UL, LR, and UR, indicating the lower left, upper left, lower right, and upper right corners, respectively, as shown on the Start screen earlier. Place the cursor and click at each of the magic corners to toggle screens, list inactive Metro style apps, access settings of the current screen, etc. These corners are for performing some essential user operations in Windows Server 2012 with a keyboard and a mouse as input devices. Apparently, mouse-clicks at LL or UL perform something similar to swipes across the left edge of a touch device screen, while LR and UR are for swipe actions across the right edge, for instance.
Either LL or the Windows logo key toggles between the Start screen and the last accessed app. This provides direct and immediate access to the Start screen, which is the logical hub of all the apps installed in the current OS instance. When confused, just return to the Start screen and go from there.
Moving the cursor to UL on a screen will give a thumbnail view of the last Metro style app accessed. And moving the cursor from UL down along the left edge of the screen will reveal all Metro style apps currently inactive. The following screen capture illustrates the steps to bring up an inactive Metro app from the Start screen.
Moving the cursor to either LR or UR will bring up the so-called Charms, shown transparent against the background. Moving the cursor up or down along the edge at this time will highlight the Charms with a black bar and also show the current day, date, time, network, and power status. Notice Charms provides access to the Settings options of both the current screen and the Start screen as well. Below are two screen captures of the Charms, one transparent against the background, the other with a black bar and the current time, day, date, etc. displayed.
Moving the cursor to the top edge changes the cursor from an arrow to a little hand, except when on the Start screen. At this time, dragging the screen down and then to the left or right edge of the screen will, when applicable, snap the app accordingly. These operations make sense if one imagines doing this by touching the top edge of the screen, dragging and swiping a current app to the left or the right edge of a touch device screen, and snapping the app in place.
The following image shows the desktop snapped to the left with a thumbnail view of each app open on the desktop at the time, while I was actively working on PC Settings. Dragging the boundary of the two apps to expand the area of the desktop will snap PC Settings to the right, while the two apps both remain on the screen at the same time.
There are various ways to drag an app and snap it to the left or the right, or drag it to the bottom to close it. Together with the Windows logo key, one will be able to navigate among Metro apps and the Start/Apps/Settings screens quite easily.
Knowing how to operate the four magic corners with keyboard-mouse inputs is essential for navigating among apps. The next is to know how to carry out routine user operations which Part 2 will cover. [To Part 2, 3]
[This is a cross-posting from http://aka.ms/yc.]
With Windows Azure PowerShell Cmdlets, the process to customize and automate an infrastructure deployment to Windows Azure Infrastructure Services (IaaS) has become a relatively simple and manageable task. This article examines a sample script which automatically deploys seven specified VMs into three target subnets based on a network configuration file, with two availability sets and a load-balancer. In several test runs I have done, the entire deployment took a little more than half an hour.
The PowerShell script developed here assumes that the Windows Azure PowerShell Cmdlets environment has been configured with an intended Windows Azure subscription, and that a network configuration file is at a specified location.
This file can be created from scratch or exported from the Windows Azure NETWORK workspace by first creating a virtual network. Notice that in the netcfg file an affinity group and a DNS server are specified.
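For reference, here is a minimal sketch of round-tripping the configuration with the classic cmdlets; the file path is arbitrary.

    # Export the current virtual network configuration for editing
    Get-AzureVNetConfig -ExportToFile 'C:\scripts\foonet.netcfg'
    # Apply an edited network configuration file to the subscription
    Set-AzureVNetConfig -ConfigurationPath 'C:\scripts\foonet.netcfg'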
The four functions represent the key steps in performing a deployment including building a virtual network, deploying VMs, and configuring availability sets and load-balancers, as applicable.
Within the script, session variables and constants are assigned. The network information needs to be identical to what is defined in the netcfg file, since it is not automatically populated from an external source at this time.
To deploy multiple VMs programmatically, there is much repeated information, and some parameters are defined with default values from session variables or the initialization section. When calling a routine, a default value can be overridden by an assigned/passed value.
Other than for the first VM, deploying additional VMs into an existing service does not need the virtual network name specified. I however employ the same routine, with VNetName specified, for deploying both the first and subsequent VMs into a service, which produces a warning for each VM after the first one deployed into a service, as shown in the user experience later.
Simply pass an array into the routines, and an availability set or a load-balancer will automatically be built. These two functions streamline the processes.
Rather than creating a cloud service while deploying a VM, I decided to create individual cloud services first. This makes troubleshooting easier since it is very clear when services are created.
With a target service in place, I simply loop through and deploy VMs based on the names specified in an array. So deploying or placing VMs into an availability set or a load-balancer becomes simply a matter of adding machine names into a target array.
Here, three frontend servers are load-balanced and configured into an availability set.
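For illustration, here is a minimal sketch of the classic cmdlet calls that the script's functions wrap for one such frontend VM; the image name, credentials, instance size, and subnet name are placeholders, while feAvSet, feEndpoint, and foonet are from the deployment above.

    New-AzureVMConfig -Name 'fe1' -InstanceSize Small -ImageName $imageName |
        Add-AzureProvisioningConfig -Windows -AdminUsername $admin -Password $password |
        Set-AzureSubnet -SubnetNames 'feSubnet' |
        Set-AzureAvailabilitySet -AvailabilitySetName 'feAvSet' |
        Add-AzureEndpoint -Name 'web' -Protocol tcp -LocalPort 80 -PublicPort 80 `
            -LBSetName 'feEndpoint' -DefaultProbe |
        New-AzureVM -ServiceName $serviceName -VNetName 'foonet'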
I kicked off the script at 3:03 PM and it finished at 3:36 PM with 7 VMs deployed into target subnets, including a SharePoint server and a SQL Server. The four warnings were due to deploying additional VMs (specifically dc2, sp2013, fe2, and fe3) into their target services while, in the function DeployVMsToAService, I have VNetName specified.
Seven VMs were deployed with specified names.
dc1 and dc2 were placed in the availability set, dcAvSet.
fe1, fe2, and fe3 were placed in the availability set, feAvSet.
The three were load-balanced at feEndpoint as specified in the script.
All VMs were deployed into target subnets as shown in the virtual network, foonet.
The script also produces a log file capturing the states while completing tasks. As a VM is deployed, the associated RDP file is also downloaded into a specified working directory as shown below.
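For reference, the logging and RDP-file download can be done along these lines; this is a minimal sketch with placeholder names, not the original script.

# Capture states to a log file while the script completes tasks
Start-Transcript -Path (Join-Path $workingDir "deployment.log")

# ... deployment work ...

# Download the RDP file of a deployed VM into the working directory
Get-AzureRemoteDesktopFile -ServiceName "fooFE" -Name "fe1" -LocalPath (Join-Path $workingDir "fe1.rdp")

Stop-Transcript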
To make the code production ready, more error handling needs to be put in place. The separation of data and program logic is critical here. PowerShell is a very fluid programming tool; if there is a way to run a statement, PowerShell will try to do just that. Knowing how data are referenced and passed at runtime therefore becomes critical, and separating data from program logic will noticeably facilitate the testing and troubleshooting process. The more experience you have scripting and running it with various data, the more you will discover how PowerShell actually behaves, which is not necessarily always the same as other scripting languages.
A key missing part of the script is the ability to automatically populate the network configuration from or to an external source. Still, the script provides a predictable, consistent, and efficient way to deploy infrastructure with Windows Azure Infrastructure Services. What must come next, once the VMs are in place, is to customize the runtime environment for an intended application, followed by installing the application. IaaS puts VMs in place as the foundation for PaaS, which provides the runtime for a target application. It is always about the application; never should we forget that.
The main delivery of App-V 4.6 is 64-bit supportability. The rest of the product features and functions are much the same, if not identical, as those of App-V 4.5 SP1. The following shows App-V 4.6 Windows Desktop Client and App-V 4.6 Client for Remote Desktop Services (or Terminal Services) installed in a 64-bit operating system.
Notice this 21-minute screencast is not a tutorial of App-V 4.6. Viewers are expected to be already experienced with App-V and familiar with the App-V infrastructure. The presented App-V user experience is based on a server-based deployment scenario with a full App-V infrastructure, with packages streamed in RTSPS over port 322. Using RTSPS provides high security since the communication between App-V Servers and Clients is signed and encrypted. The following table depicts the methods for deploying virtual application packages to terminal servers and Windows desktops. In the screencast, I employed an App-V Management Server with a local SQL Server 2008. The demo environment consists of virtual machines running within my laptop, which runs Windows Server 2008 R2 with the Hyper-V role added.
Source: Application Virtualization 4.5 for Terminal Services
The configuration of the demo environment is highlighted in the topology diagram shown below. Here contoso.corp is an Active Directory domain with an App-V infrastructure of the following components.
To minimize the number of virtual machines needed, I installed App-V Management Server, App-V Admin Console, and SQL Server 2008 on the domain controller, dc.contoso.corp. The App-V 4.6 Terminal Services (App-V/TS) Client was installed on the Remote Desktop Session Host (RDSH), app.contoso.corp, and the App-V 4.6 Windows Desktop Client was installed on a managed Windows 7 desktop, w7ent.contoso.corp. The domain, contoso.corp, was configured with DirectAccess, with w7ent as a DirectAccess client.
In the demos, I first talked about how the demo environment is configured. With the App-V default application, which is the test application installed with App-V Management Console, I added domain admins as the authorized users to verify the readiness and correctness of the App-V infrastructure. Later I used a test account, alice, to test the streamed App-V applications. Notice the demo environment was constructed mainly to present the user experience of App-V 4.6 with minimal complexity. No attempt was made to optimize the performance, server placement, or user profile management.
Additional resources on App-V:
For those who would like to try and get familiar with Windows 7 and Windows Server 2008 R2, follow these links to download, install, and test them out. Here I also include the download information for Forefront and System Center, which are essential for securing and managing an enterprise infrastructure.
This is the fourth article of a series to review the following five BI vehicles in SharePoint 2010:
A picture is worth a thousand words. This cannot be more applicable to what Visio Services can deliver. A feature of SharePoint 2010, Visio Services enables data-bound Visio drawings to be viewed in a web browser. This feature is for sharing Visio drawings and letting authorized users view Visio diagrams in a SharePoint library without having Visio or the Visio Viewer installed on their local computers. Visio Services can also refresh data and recalculate the visuals of a data-connected Visio drawing hosted on a SharePoint 2010 site, so a user will always see the latest information in a visual form. For instance, a complex manufacturing supply chain can be presented with clarity, simplicity, and up-to-date status with Visio Services as shown below. A Visio Services overview is a good starting point to better understand this feature. And the installation and administration of Visio Services are very easy to follow.
Visio Services can display Visio drawings using a Web Part without a locally installed Microsoft Visio 2010 on the client computer. However, Visio Services is not for creating or editing Visio diagrams. To create, edit, and publish diagrams to Visio Services, an author must have a locally installed Microsoft Visio Professional 2010 or Microsoft Visio Premium 2010.
Available only with SharePoint Server 2010 Enterprise Client Access License (ECAL), Visio Services must be deployed, provisioned, and enabled before first use. In addition, one must have Microsoft Visio Professional 2010 or Microsoft Visio Premium 2010 in order to save diagrams to SharePoint as Web drawings.
To view a Visio drawing based on a SharePoint list or an Excel workbook connected through Excel Services, a user must be authenticated and authorized by the SharePoint 2010 farm hosting the content. Three authentication methods are supported:
While developing an enterprise service architecture, planning for services that access external data sources is something not to overlook. For a service application such as one of the following that uses a delegated Windows identity to access an external source, the external data source must reside within the same domain as the SharePoint 2010 farm where the service application is located, or the service application must be configured to use the Secure Store Service.
Namely, delegation of a Windows identity, the Windows domain, and the Secure Store Service are a few things to keep in mind if a service application is to access a data store beyond the SharePoint farm where the service application is running. In other words, do the right thing and plan your Visio Services deployment.
Over the years, VMware has contributed much to server virtualization and made an impact on the IT industry. A great competitor, VMware has in my view made Microsoft a stronger and better IT solution provider. Both have been trying hard to help enterprise IT deliver much with less. And the great news is that competition and open dialogue tremendously benefit our customers and the IT industry in general. The two companies, in my view, have however fundamentally different perspectives in addressing cloud computing challenges. Let me be clear: this blog post is not about feature parity. It presents my personal view on important considerations for assessing a cloud computing solution platform and is intended to help IT technical leadership and C-level decision makers look into the fundamental principles which will ultimately have a profound and long-term impact on the bottom line of adopting cloud computing. The presented criteria apply to Microsoft as much as to any other solution provider in consideration.
In cloud computing, resources are presented for consumption via abstraction, without the need to reveal the underlying physical complexities. In the current state of cloud computing, one approach is to deliver consumable resources via a form of virtualization. In this way, a server is in fact a virtual machine (VM), an IP address space can in reality be logically defined through a virtual network layer, and a disk drive appearing with a continuous storage space is as a matter of fact an aggregate of the storage provided by a bunch of disks, or JBOD. All cloud computing artifacts eventually consist of resources categorized into three pools, namely compute, networking, and storage. The three resource pools are logically simple to understand. Compute is the ability to execute code and run instances. Networking is how instances and resources are glued together or isolated. And storage is where the resources and instances are stored. These three resource pools, via server virtualization, network virtualization, and storage virtualization, collectively form an abstraction, the so-called fabric, as detailed in “Resource Pooling, Virtualization, Fabric, and Cloud.” Fabric signifies the ability to discover and manage datacenter resources. Sometimes we refer to the owner of fabric, which is essentially a datacenter management solution, as the fabric controller, which manages or essentially owns all the datacenter resources, physical or virtual.
Cloud computing is about providing and consuming resources on demand. It is about enabling consumption via the management of resources, which happen to be virtualized in this case. In cloud computing, we must go beyond virtualization and envision fabric as the architectural layer of abstraction. Fabric management needs to be architected as a whole into a solution, such that the translation between fabric management and virtualization operations in each resource pool can be standardized, automated, and optimized.
So, at an architectural level, look into a holistic approach to fabric management, i.e. a comprehensive view of how the three resource pools are integrated and complement one another. Let me recognize here that virtualization is very important, while fabric is critical.
When it comes to deploying VMs, a VM template is a tool many IT pros are familiar with and have been relying on. This is a well-developed topic, and both VMware vCenter and Microsoft Virtual Machine Manager provide various ways to construct VM templates. In cloud computing, deployment is nevertheless not about individual VMs. Modern computing models employ distributed computing in multiple application tiers, while each tier may have multiple VM instances taking incoming requests or processing data. A typical example is a three-tier web application with a frontend, mid-tier, and backend; this application is meaningful only when all three tiers are considered and operated as a whole, and not individually. The essence is that a cloud deployment is more about an application architecture, i.e. a set of VMs delivering a target application as one entity, than about individual VMs.
So, from a deployment point of view, demand the details on architecting, defining, and deploying an application architecture, and not simply on VM templates. The anchor has to be set at an application architecture, so considerations on deploying the application are consistent with how an application instance is consumed and managed.
A similar concept to VM template vs. application architecture is servers vs. services. Here a server is the server OS instance running in a deployed VM. “Service” is a term at an operations level for a set of servers (which forms an application architecture) identified and managed as one entity while collectively delivering a target application. In the context of cloud computing, a service carries a set of attributes, five to be specific, as defined in NIST SP 800-145 and summarized in the 5-3-2 Principle of Cloud Computing. Deploying a server (or a VM) and deploying a service denote very different capabilities. Deploying ten VMs is a process of placing ten individual servers, and it suggests little about the scope, relationship, scalability, and management of the ten servers. At the same time, deploying ten instances of a service denotes that there is one (set of) service definition with ten instantiations. How many total VMs are in the ten instances is not as significant as the fact that the ten service instances deliver a level of scalability, consistency, and predictability, since all are from the same service definition. Further, since all instances are based on the same service definition, there is an opportunity to implement and roll out changes of the definition to running instances in a systematic and controlled fashion, and hence to minimize downtime via “service” management. An upgrade domain, for instance, is a logical construct for implementing a rolling upgrade across multiple instances of a service to ensure minimal, if any, downtime. As the IT industry adopts cloud computing and the number of VMs and application instances continues to rapidly increase, as evidenced by market research, a solution platform setting the focal point on service architecture and management, and not just VMs, is essential. A service is also how cloud computing is delivered and consumed. That IaaS, PaaS, and SaaS are all announced with the term, service, is a clear indication of how significant a role a service plays. It is “the” way in cloud computing to deliver resources for consumption. If it is not delivered as a service, it is not cloud.
So, from an application point of view, what a customer cares about is a service, and what a service provider should pay attention to is what is running in a server, not the server itself. The ability to drill down and gain business and technical insights from not just the servers, but the applications running in the servers, is what matters. For instance, for a database application, what is critical to know and respond to is the health of the databases and not just the state of the server which hosts the database application. A cloud solution platform needs to be about workload configuration and service management, and not about server deployment.
So investigating how a service is defined, constructed, deployed, and managed in a proposed solution, including how fault domains and upgrade domains are applied to a deployed service, how availability and SLA are defined relative to a service, etc., will reveal the strategy a solution platform is based on.
In this post-virtualization era, enterprise IT accelerates cloud computing adoption by shortening the transition from a private cloud into a hybrid deployment scenario, or simply a hybrid cloud. Here a hybrid cloud is a private cloud with a cross-premises deployment. For example, an on-premises private cloud with some off-premises resources is a form of hybrid cloud. A hybrid cloud offers an opportunity to keep sensitive information on premises while taking advantage of the flexibility and readiness that a 3rd-party public cloud can provide to host non-sensitive data. The idea of a hybrid cloud surfaces an immediate challenge: how to enable a user to self-serve resources in a cross-premises deployment. Self-servicing is an essential characteristic of cloud computing and plays a crucial role in fundamentally minimizing training and support costs while continually promoting resource consumption. A consistent user experience with on-premises and off-premises deployments, SSO maturity and federated identity, an easily implemented delegation model, and inter-op capabilities with 3rd-party vendors are imperative.
Demand a management platform with the ability to manage not just physical and virtualized resources, but also those deployed to a private cloud, a public cloud, or a hybrid cloud. This is critical to any cloud deployment.
Choosing a cloud computing platform is a tremendous task and a critical path for enterprise IT to transition into the next generation of computing. IT leadership and decision makers should exercise forethought and institute a comprehensive management solution when the first opportunity arises, to facilitate fabric construction with cloud computing methodology. Keep staying focused on constructing, deploying, and managing:
For enterprise IT, the determining factor of a successful transformation is the ability to continue managing not only what has been established, but what is emerging; not only the physical and virtualized, but those deployed to private, public, and hybrid clouds; not only one solution platform, but vSphere, Hyper-V, Citrix, and beyond.
This series focusing on cloud essentials for IT professionals includes:
One very important concept in cloud computing is the notion of fabric, which represents an abstraction layer connecting resources to be dynamically allocated on demand. In Windows Azure, this concept is implemented as the Fabric Controller (FC), which knows the what, where, when, why, and how of the resources in the cloud. As far as a cloud application is concerned, FC is the cloud OS. We use Fabric Controller to shield us from the need to know all the complexities in inventorying, storing, connecting, deploying, configuring, initializing, running, monitoring, scaling, terminating, and releasing resources in the cloud. So how does FC do it?
A key technology that makes cloud computing a reality is virtualization. An apparent production example is that Windows Azure abstracts hardware through virtualization and creates a virtual machine (VM) for each Role instance. Here, the VMs and the underlying hypervisor together offer multiple layers of isolation, and virtualizing a computing resource further allows it to be moved to any number of physical hosts in a data center. The following schematic illustrates the implementation of the Windows Azure computing model discussed in Part 3 of the series. Each instance of either a Web Role or a Worker Role runs in an individual VM. And depending on the configuration of an application, there can be multiple instances of a given Role.
A virtual machine is physically a virtual hard disk (VHD) file, which has a number of advantages. For instance, not only is it easier to manage files compared with working with physical partitions, disks, and machines, but a VHD file can be maintained while offline, i.e. without the need to boot up the OS image installed in the VHD file. Virtual Machine Servicing Tool (VMST) is one such tool freely available from Microsoft. There have been many active discussions on server virtualization, desktop virtualization, Application Virtualization (App-V), and Virtual Desktop Infrastructure (VDI). And many IT organizations have already started consolidating servers and have introduced various forms of virtualization into their existing computing environments, as reported in many case studies.
Make no mistake nevertheless. Virtualization is not a destination, but a stepping stone for enterprise IT to transform from what was a hardware-dependent and infrastructure-focused deployment vehicle into a user-centric and cloud-friendly environment, now and going forward. Although virtualization is frequently motivated by cost savings, I believe the long-term and strategic business benefits result from deployment flexibility. Facing the many challenges and unknowns already in place and ahead brought by the Internet, IT needs to make sure new investments are strategic, and at the same time transform existing establishments into something flexible and agile. IT needs the ability to manage computing resources, both physical and virtualized, transparently and on a common management platform, while securely deploying applications to authorized users anytime, anywhere, and on any device. Fundamentally, virtualization provides abstractions for manageability and isolation for security to dynamically scale and secure instances of workloads. For enterprise IT, virtualization is imperative, a critical step towards building a cloud-friendly and cloud-ready environment. The takeaway is that virtualization should be on every enterprise IT roadmap, if it is not already. And a common management platform with the ability to manage physical and virtualized resources transparently is essential and should be put in place as soon as possible.
The concept of fabric in Microsoft’s production implementation exhibits itself in the so-called Fabric Controller, or FC, which is an internal subsystem of Windows Azure. FC, also a distribution point in the cloud, inventories and stores images in a repository, and:
For FC to control a deployed instance inside of a VM and carry out all the above tasks, there are Agents in place. The following schematic depicts the architecture.
When FC is building a node in the data center, a Fabric Agent (FA) is included and automatically initialized in the root partition. FA exposes an API letting an instance interact with FC and is then used to manage the Guest Agent (GA) running in a guest VM, i.e. a child partition. The manageability is logically established with the ability for FC to monitor, interact with, trust, and instruct FA, which then manages GAs accordingly. Behind the scenes, FC also makes itself highly available by replicating itself across groups of machines. In short, FC is the kernel of the cloud OS and manages both servers and services in the data center.
This is another term getting overused and confusing. I am as guilty as anyone of using the term frequently without putting it in context. Not all AppFabrics are exactly the same after all. There are, as shown below, Windows Server AppFabric and Windows Azure AppFabric. The former is available as extensions to the Application Server role of Windows Server, while the latter provides cloud-based services to connect users and applications across the Internet. Both are part of Microsoft’s application infrastructure (or middleware) technologies.
Relevant to Windows Azure, many seem to assume Windows Azure AppFabric and FC are the same, similar, or related. This is incorrect; they are not. Windows Azure AppFabric is a cloud middleware offering a common infrastructure to name, discover, expose, secure, and orchestrate web services on the Windows Azure platform. Windows Azure AppFabric includes a number of services:
The Service Bus service can traverse firewalls and NAT devices, without forfeiting the security afforded by these devices, to relay messages from clients through Windows Azure to software running on-premises. The Access Control service offers a claims-based mechanism, federated with Active Directory Federation Services (AD FS) 2.0, accessible by Windows Azure, other cloud, and on-premises applications. For those who would like to know more technical details and develop cloud applications based on Windows Azure AppFabric, there are good references including an overview whitepaper and An Introduction to Windows Azure AppFabric for Developers.
In an oversimplified description, FC is the kernel of Windows Azure (a cloud OS) and manages the hardware and services in a data center, while Windows Azure AppFabric is a cloud middleware for developing applications. For IT pros, there is a must-read overview article of Windows Azure available elsewhere. And a nicely packaged content set, the Windows Azure Platform Training Kit, is also a great way to learn more about the technology.
In Part 2, I basically said cloud is to provide “Business as a Service,” i.e. making a targeted business available on demand. In digital commerce, much of a business is enabled by IT. Therefore, cloud is in essence to deliver “IT as a Service,” or IT available on demand, i.e. anytime, anywhere, on any device. This is what we want IT to become via cloud. Realize that “on-demand” in the context of cloud computing also implies a set of attributes as described in Part 1, including ubiquitous network access, resource pooling, pay per use, and so on.
Nonetheless, IT is not about implementing technologies, which are a means and not the end. All the infrastructure, servers, desktops, SaaS/PaaS/IaaS, public cloud, private cloud, etc. is about one thing and one thing only: to provide authorized users “applications” with which transactions are made and businesses are carried out. Either in the cloud or on-premises, it is about applications. So, how is a cloud application different from a traditional one? And if so, in what ways, as far as IT pros are concerned?
Traditional Computing Model
A typical 3-tier application includes a front-end, a middle tier, and a back-end. For a web application, the front-end is a web site which presents the application. The middle tier holds the business logic while connecting to a back-end where the data are stored. And along the data path, load balancers (LB) are put in place to optimize performance, and clusters are constructed for high availability. This analytical model is well understood and modeled. The 3-tier architecture represents a mainstream design pattern for applications developed prior to the emerging cloud era. The concept is illustrated below, and some may find similarities to the ideas applicable to architecting a cloud application.
Cloud Computing Model
Microsoft Windows Azure abstracts hardware through virtualization and provides on-demand, cloud-based computing, where the cloud is a set of interconnected computing resources located in one or more data centers. Generally speaking, like a 3-tier design, there are 3 key architectural components of a cloud application based on Windows Azure: Compute, Storage, and Fabric Controller, as shown below. In this model, Compute is the ability to execute code, i.e. run applications. Storage is where the data reside. In Windows Azure, Compute and Storage are defined with Roles and offered as system services. A Role has configuration files to specify how a component may run in the execution environment. Fabric Controller is a subsystem which monitors and makes decisions on what, when, and where to run and optimize a cloud application. I will talk more about Fabric Controller in Part 4 of this series; meanwhile, let’s examine the Compute and Storage components here.
Specifically, in the Compute service there are Web Role, Worker Role, and VM Role. A Web Role, implemented with IIS running in a virtual machine, accepts HTTP and HTTPS requests from public endpoints. And in Windows Azure, all public endpoints are automatically load balanced. A Worker Role, on the other hand, does not employ IIS; it is an executable for computation and data management and functions like a background job to accept requests and perform tasks. For example, a Worker Role can be used to install a user-specified web server or host a database as needed.
Roles communicate by passing messages through queues or sockets. The number of instances of an employed Role is determined by an application's configuration, and each Role instance is assigned by Windows Azure to a unique Windows Server virtual machine instance. An application of the Windows Azure computing model to a real-life shopping list application is shown below. The actual development process and considerations are certainly much more involved, as discussed elsewhere.
On the other hand, VM Role is a virtual machine. A developer can employ VM Role (namely, upload an OS image in a VHD) to run Windows services, schedule tasks, and customize the runtime environment of a Windows Azure application. This VHD is created using an on-premises Windows Server machine, then uploaded to Windows Azure. Once it is stored in the cloud, the VHD can be loaded on demand into a VM Role and executed. Customers can and need to configure and maintain the OS in the VM Role. The following outlines the methodology.
Do keep in mind that VM Role is however stateless. Specifically, VM Role is designed to facilitate deploying a Windows Azure application which may require a long, fragile, or non-scriptable (i.e. cannot-be-automated) installation. This role is especially suited for migrating existing on-premises applications to run as hosted services in Windows Azure. There are an overview and step-by-step instructions readily available detailing how to successfully deploy the Windows Azure VM Role.
The other component in a cloud application is Windows Azure Storage services with five types of storage including:
And within a Compute node, there are two types:
There are tools to facilitate managing Storage instances. A graphical UI like Azure Storage Explorer can make managing and viewing stored data a productive experience. Notice the above-mentioned storage types are however not relational databases, which many applications are nowadays built upon. SQL Azure, part of the Windows Azure platform, is SQL in the cloud. And for DBAs, whether Microsoft SQL Server on the ground or SQL Azure in the cloud, you manage it very much the same way.
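As an illustration with the Windows Azure PowerShell Cmdlets mentioned earlier in this series (the storage account name is a placeholder), listing storage accounts and retrieving their access keys is straightforward:

# List storage accounts in the current subscription
Get-AzureStorageAccount | Select-Object StorageAccountName, Label, Location

# Retrieve the primary and secondary access keys for one account
Get-AzureStorageKey -StorageAccountName "foostorage"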
An example of using Windows Azure storage is presented in the following schematic. This is a hosted digital asset management web application. It uses a Worker Role as the background processor to generate and place images into a store implemented with Windows Azure BLOB services, from which the Web Role front-end later retrieves them.
In summary, many of our architectural concepts for a traditional on-premises 3-tier application are applicable to designing cloud applications using Windows Azure’s computing model. Namely, employ a Web Role as the front-end to accept HTTP/HTTPS requests, and a Worker Role to perform specific tasks like traditional ASP.NET services. There are various types of storage Windows Azure provides. There is also SQL Azure, Microsoft SQL Server in the cloud, making it convenient to migrate existing data or integrate on-premises databases with those in the cloud.
A key feature delivered by VMM 2012 is the ability to deploy an application based on a service template which enables a push-button deployment of a target application infrastructure. VMM 2012 signifies a direct focus, embedded in product design, on addressing the entire picture of a delivered business function, rather than presenting fragmented views from individual VMs. VMM 2012 makes a major step forward and declares the quintessential arrival of IT as a Service by providing out-of-box private cloud product readiness for enterprise IT.
In this fourth article of the 5-part series on VMM 2012,
I further explain the significance of employing a service template.
This is in my view the pinnacle of VMM 2012 deliveries. The idea is apparent: to deliver business functions with timeliness and cost-effectiveness by standardizing and streamlining the application deployment process. Here I focus on the design and architectural concepts of a service template to help a reader better understand how VMM 2012 accelerates the process of building a private cloud with consistency, repeatability, and predictability. The steps and operations to deploy and administer a private cloud with a service template will be covered in upcoming screencasts as supplements to this blog post series.
The term, service, in VMM 2012 means a set of VMs to be configured, deployed, and managed as one entity. And a service template defines the contents, operations, dependencies, and intelligence needed to do a push-button deployment of an application architecture with a target application configured and running according to specifications. This enables a service owner to manage not only individual VMs, but the business function in its entirety delivered as a (VMM 2012) service. Here, for instance, a service template developed for StockTrader is imported and displayed in the Service Template Designer of VMM 2012, revealing the application architecture as shown below.
Application Deployment as Service via IaaS
Since VMM 2008, Microsoft has offered private cloud deployment with IaaS. Namely, a self-service user can be authorized with the ability to provision infrastructure, i.e. deploy VMs to an authorized environment, on demand. While VMs can be deployed on demand, what is running within those VMs, when, and how is however not a concern of VMM 2008.
VMM 2012 on the other hand is designed with service deployment and private cloud readiness in mind. In addition to deploying VMs, VMM 2012 can now deploy services. As mentioned earlier, a service in VMM 2012 is an application delivered by a set of VMs which are configured, deployed, and maintained as one entity. More specifically, VMM 2012 can deploy on demand not only VMs (i.e. IaaS), but VMs collectively configured as an instance of a defined application architecture for hosting a target application by employing a service template. As VMs are deployed, an instance of a defined application architecture is automatically built, and a target application hosted in the architecture becomes functional and available. VMM 2012 therefore converts an application deployment into a service via IaaS.
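In VMM 2012 PowerShell terms, the deployment flow roughly follows this sketch. The template, release, cloud, and service names are placeholders, and a real deployment would refine the configuration further before creating the service; this is an illustration of the pattern, not a production script.

# Pick the authorized service template and the target private cloud
$template = Get-SCServiceTemplate -Name "StockTrader" | Where-Object { $_.Release -eq "2011.11" }
$cloud    = Get-SCCloud -Name "fooCloud"

# Stage a service configuration (the Configure Deployment step), compute placement, then deploy
$config = New-SCServiceConfiguration -ServiceTemplate $template -Name "StockTrader Instance" -Cloud $cloud
Update-SCServiceConfiguration -ServiceConfiguration $config | Out-Null
$service = New-SCService -ServiceConfiguration $config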
The Rise of Service Architect
Importantly, a service template capturing all the relevancies of an application deployment is an integral part of application development and production operations. A seasoned team member (whom I call a Service Architect) with a solid understanding of application development and specifications, private cloud fabric construction, and production IT operations is an ideal candidate for authoring service templates.
Context and Operation Models
In a private cloud setting, an enterprise cloud admin constructs fabric, validates service templates, and acts as a service provider. Service owners are those self-service users authorized to deploy services to intended private clouds using the VMM 2012 admin console, and they act as consumers. Therefore, while enterprise IT constructs fabric and validates service templates, a service owner deploys services based on authorized service templates to authorized private clouds on demand. Notice a self-service user can access authorized templates, private clouds, and instances of VMs and services. A self-service user nevertheless does not see the private cloud fabric in the VMM 2012 admin console or App Controller.
Setting the context at an application level, a service owner deploys a service based on an authorized service template to an authorized private cloud on demand, and here the service owner acts as a service provider. At the same time, an authorized end user can access the application’s URL and acts as a consumer. In this model, an end user does not know, and has no need to know, how the application is deployed. As far as a user is concerned, the experience of accessing a private cloud is similar to accessing a web application.
Standardization, Consistency, Repeatability, and Predictability
What is specified in a service template includes static definitions, pre-defined criteria of the what, how, and when, and inter-dependency and event-driven information to automate the deployment process of an application. To be able to deploy an application multiple times with the same service template in the same environment, there is also instance information, like machine names, which is generated, validated, and locked down by VMM 2012 right before deployment when clicking Configure Deployment from the Service Template Designer. The separation of instance information from the static variables and event-driven operations among the VMs of an application included in a service template offers an opportunity to standardize a deployment process with consistent configurations, repeatable operations, and predictable outcomes.
A service template is in essence a cookie cutter which can reproduce content according to predefined specifications, in this case the shape of a cookie. A service based on a VMM 2012 service template can be deployed multiple times on the same fabric, i.e. the same infrastructure, by validating the instance information in each deployment. This is similar to using the same cookie cutter with various cookie doughs. The instances are different; the specifications are nonetheless identical.
Deployment with a service template can greatly simplify an upgrade scenario for an already deployed application. First, the production application infrastructure of StockTrader can be realistically and relatively easily mimicked in a test environment by configuring and deploying the same service template to a private cloud for development, such as an isolated logical network of a 192.168.x.x subnet defined in the Network pool of the private cloud fabric in VMM 2012. A new release, 2011.11.24 for example, of the application based on a service template (Release 2011.11) can then be developed and tested in this development environment.
Once the development process is concluded and the service template of Release 2011.11.24 is ready to be deployed, a cloud administrator can then import the service template and associated resources, as applicable, into the private cloud fabric, followed by validating the resource mapping so all references in Release 2011.11.24 are pointing to those in production. Upgrading an application from Release 2011.11 to Release 2011.11.24 at this point is simply a matter of applying the production instance to the service template of Release 2011.11.24. It is quite straightforward from the VMM 2012 admin console by right-clicking the instance to be upgraded and setting a target template as shown below.
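In PowerShell terms, staging such an upgrade might look like the following sketch; the names are placeholders, and applying the staged template then proceeds through the wizard-driven process described next.

# Locate the running instance and the new release of the service template
$service     = Get-SCService -Name "StockTrader Instance"
$newTemplate = Get-SCServiceTemplate -Name "StockTrader" | Where-Object { $_.Release -eq "2011.11.24" }

# Stage Release 2011.11.24 as the pending service template for the running instance
Set-SCService -Service $service -ServiceTemplate $newTemplate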
This process is wizard-driven. Depending on how an application’s upgrade domain is architected, the current application state, and the nature of the changes, an application outage may or may not be necessary. The following highlights the process of replacing a service template from Release 2011.11 to Release 2011.11.24 on an instance of the StockTrader service.
There are different ways that a new service template can be applied to a running instance. For an authorized self-service user, the above process can also be easily carried out with App Controller, which I will detail in Part 5 of this blog post series.
In VMM 2012, deleting a running service will stop and erase all the associated VM instances. Nevertheless, the resources referenced in the service template are still in place. To delete a service template, all configured deployments and deployed instances must be deleted first.
As private clouds are built and services are deployed, releases of services can be documented by archiving individual service templates with associated resources. Notice this is not about backing up the instances and data associated with an application, but rather, as preferred, about keeping records of all resources, configurations, operations, and intelligence needed to successfully deploy the application.
With the maturity of virtualization and the introduction of cloud computing, IT is changing at an increasing speed and the industry is transforming as we speak. VMM 2012 essentially substantiates the arrival of IT as a Service in enterprise IT. While the challenges are overwhelming, the opportunities are at the same time exciting and extraordinary. IT professionals should not and must not hesitate anymore, but get started on private cloud, and get started now. Be crystal clear on what cloud is and why virtualization is far from cloud. Do master Hyper-V and learn VMM 2012. And join the conversation and take a leading role in building private cloud. With the strength of a new day dawning and the beautiful sun, there is no doubt in my mind that a clear cloudy day is in sight for all of us.
[To Part 1, 2, 3, 4, 5]
This is part 2, a continuation of the 3-part series on getting productive with Windows Server 2012, should you not have a touch device readily available. The user operations walked through here are fundamental, and for most Windows users this post will be an easy read. However, if you have not already, you should read part 1 first.
Windows Server 2012 is designed with private cloud and System Center 2012 in mind. As the IT industry transitions into cloud computing and embraces the consumerization of IT, these two are, for me as an IT professional, my career priorities for the foreseeable future.
3. Settings of Current Screen
On any screen, use the key combination Windows Logo key + I to bring up the Settings options for the current screen. Or a user can move the cursor to UR/LR, i.e. the upper-right or lower-right corners of the computer screen, to bring up the Charms (as shown in part 1), in which the Settings option is available as well.
On the right is a sample Settings view of the Start screen. Notice the lower portion highlights the characteristic settings of the PC, i.e. the current OS instance. Click “More PC settings” at the bottom to bring up the PC Settings screen to personalize the Lock and Start screens, add user account pictures, etc.
The upper area displays the settings of the current page. Notice that by default, “Show administration tools” is configured as “Yes” in the Settings of the Start screen as shown on the right. This enables administration tools to appear on the Start and Apps screens. When this is set to “No”, administration tools will not appear in Start, Apps, or search results.
4. Search from the Start Screen
Typing something on the Start screen, regardless of where the cursor is, will instantaneously invoke the search function, use what has just been typed to form a pattern, and list out those applications, if any, that match the pattern. Searching from the Start screen is similar to the desktop search of a Windows Server 2008 desktop. The following two screen captures show that after typing “fir” on the Start screen, the Search identified one application, “Windows Firewall,” and five Settings-related entries that matched the pattern.
For accessing a known application like the Run dialog or command prompt, one quick way is to directly type “run” or “cmd” on the Start screen, followed by hitting the Enter key.
5. Windows Explorer
Within Windows Explorer, a user can right-click a folder from the navigation tree to pin the folder to the Start screen as illustrated below.
And right-clicking an application, here myApp.exe, with the Shift key pressed at the same time will provide options to run it as administrator or as a different user from the one currently logged on, in addition to pinning the app to the Start screen or the taskbar.
An interesting observation is that in Windows Server 2012 a user apparently pins objects directly to either the Start screen or the taskbar, and not to the desktop. That means we may start to see many clean and roomy Windows Server 2012 desktops now. And a user may become more selective about what to pin and where. As an option, a user can still create a shortcut and place it on the desktop.
6. Windows Store Apps
In Windows Server 2012 Beta, there are only a handful of Windows Store apps included with a default installation. There are however many more Windows Store apps included in a default install of Windows 8. A Windows Store app, when open and not in use, is sent to the background, becomes inactive, and frees its resources. Notice that, similar to a cell phone, tablet PC, or other mobile computing device, it is not necessary to close a Windows Store app when not in use. And there are routines to operate on Metro style apps.
Placing the cursor at the UL corner will show the thumbnails of those Windows Store apps currently inactive, and right-clicking from the UL corner will display the option to close or snap a Windows Store app, when applicable, as shown on the right. Also, moving the cursor to the top edge of the screen so the cursor turns into a hand, followed by dragging the app to the bottom edge, will also close the Windows Store app. This, as it appears, is similar to swiping across the bottom edge of the screen on a touch device to close an app.
7. Server Manager
This is the logical hub for configuring and administering both local and remote Windows servers. By default, Server Manager starts automatically at logon. This setting is in Manage/Server Manager Properties on the upper-right menu bar, as the following screen capture shows. Notice under Tools is where administration tools are listed, including Event Viewer, Task Scheduler, Windows PowerShell ISE, etc.
The menu bar displays a red flag, when applicable, indicating some process/task failure and a need for the operator’s attention. The welcome screen also highlights 3 orange tiles with Quick Start, What’s New, and Learn More information. Thou shalt not miss them. To hide these tiles, the setting is in View.
8. If You Need to Run, Don’t Walk
The beloved Run dialog is still there. On the Start screen, simply typing “run” will bring up the Search dialog and list the Run application. Or use the key combination Windows Logo key + R to bring up the Run dialog, as needed. And as expected, in the Run dialog or Windows Explorer, typing CMD will faithfully bring up a long-missed command prompt.
9. Run As J. Smith
From Windows Explorer, right-clicking an intended executable with the Shift key pressed allows the program to run as administrator or as a different user from the one currently logged in, as shown earlier under Windows Explorer. To run as an administrator from the Start screen, right-click an intended app to get the option, as applicable. Here, shown on the right, PowerShell ISE is set to run as administrator from the Start screen.
10. Desktop Experience Feature
The assumption is that there is seldom a need to personalize the desktop background of a server. Hence, a default Windows Server installation does not automatically add the Desktop Experience feature. And different from Windows Server 2008, this setting is, as illustrated below, now moved and available under User Interface and Infrastructure. As always, adding this feature, followed by enabling the Themes service, will enable the personalization feature for changing the background of a desktop session.
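For those who prefer PowerShell over Server Manager, the following sketch should accomplish the same; note that adding the feature requires a restart.

# Add the Desktop Experience feature (listed under User Interface and Infrastructure)
Install-WindowsFeature Desktop-Experience -Restart

# After the restart, enable and start the Themes service for personalization
Set-Service -Name Themes -StartupType Automatic
Start-Service -Name Themes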
At this point, a Windows Server user with the information in the first 2 parts of this blog post series should be able to get productive quickly with Windows Server 2012. In part 3, there are two important facts I want to bring to your attention.
[To Part 1, 3]
In January, our team had a fun project to tell 31 stories and present 31 opportunities for IT professionals to get started on Windows Server 2012 and Windows Azure, something we all feel very passionate about. Cloud computing is an exciting movement and offers so much room to grow as an individual, as an organization, and as a business.
Find out who your area Evangelist is, stay in touch with the team, and move forward with the communities. Together, let’s welcome the challenges, embrace the changes, get started, learn it, master it, and take advantage of it. Now here are your 31 opportunities:
Personally, I see Windows Azure Connect as a killer app to facilitate the adoption of cloud computing. For all IT pros, this is where we take off and reach out to the sky while dancing down the street with a cloud at our feet. And that’s amore.
What It Is
Simply put, Windows Azure Connect offers IPSec connectivity between Windows Azure role instances in the public cloud and computers and virtual machines deployed in a private network, as shown below.
Why It Matters
The IPSec connectivity provided by Windows Azure Connect enables enterprise IT to relatively easily establish trust between on-premises resources and Windows Azure role instances. A Windows Azure role instance can now join and be part of an Active Directory domain. In other words, a domain-joined role instance will then be part of a defense-in-depth strategy, included in a domain isolation, and subject to the same name resolution, authentication scheme, and domain policies as other domain members, as depicted in the following schematic.
In Windows Azure Platform AppFabric (AppFabric), there is also the so-called Service Bus, offering connectivity options for Windows Communication Foundation (WCF) and other service endpoints. Both Windows Azure Connect and AppFabric are very exciting features and different approaches for distributed applications to establish connectivity with intended resources. In a simplistic view, Windows Azure Connect is set at a box level and is more relevant to sys-admin operations and settings, while Service Bus is a programmatic approach in a Windows Azure application with more control over what and how to connect.
A Clear Cloudy Day Ahead
Ultimately, Windows Azure Connect offers a cloud application a secure integration point with private network resources. At the same time, for on-premises computing, Windows Azure Connect extends resources securely to the public cloud. Both introduce many opportunities and interesting scenarios in a hybrid model where cloud computing and on-premises deployment together form an enterprise IT. The infrastructure significance and operational complexities at various levels in a hybrid model enabled by Windows Azure Connect bring excitement and many challenges to IT pros. What a great development in cloud computing. And I realize there’s indeed a place in the world for a gambler, where skies are blue and dreams do come true.
<Next to Part 2: Application Integration/Migration Model>
A private cloud delivers business functions as services, while virtualization virtualizes computing resources supporting the private cloud. They are two different concepts, address different issues, and operate at different levels in enterprise IT. A private cloud goes far beyond virtualization and virtualization is not a private cloud. To conclude this two-part series as listed below, here are the specifics regarding a private cloud vs. virtualization.
An essential part of a private cloud is virtualization, which offers opportunities for reducing infrastructure costs, increasing operational efficiency, and improving deployment flexibility. A server instance deployed with a VM offers many advantages over one deployed on a physical machine. And VMs facilitate the implementation of resource pooling, rapid elasticity, and hence a private cloud. Notice that the benefits of virtualization, including lower costs, higher efficiency, flexible deployment, etc., do not however translate directly into "capacity on demand," which is much more and is what a private cloud delivers.
In addition to missing the five essential characteristics as criteria, virtualization is not required to be delivered with SaaS, PaaS, or IaaS. Namely, virtualization is not necessarily presented as a "service," while a private cloud always is. What virtualization offers is to virtualize resources without specifying how the virtualized resources are made available to users. In other words, virtualization introduces a mechanism to facilitate implementations of some, but not all, of the private cloud requirements. To equate virtualization to a private cloud is to mistakenly present part of a solution as the solution itself.
The term, cloud, denotes the ability to exhibit the five essential characteristics with one of the three possible delivery methods of a service, as stated in the 5-3-2 Principle of Cloud Computing and NIST SP 800-145. When one claims to have a “cloud” solution, we can easily verify it with a few simple questions: Can a user self-serve? How accessible is it? Is it delivered with (or at least with a flavor of) SaaS, PaaS, or IaaS? Is it elastic? Does it have some analytics component?
From a private cloud’s viewpoint, two of the five essential characteristics of cloud computing become optional: ubiquitous access and a consumption-based chargeback model. This is absolutely not to suggest the two are not applicable. They very much are for any solution qualified with the term, cloud, including a private cloud. There are however legitimate reasons to consider the two differently regarding a private cloud.
Specific to a private cloud, there are however different priorities among the five essential characteristics, and the two above become optional in a private cloud setting. Either on premises or hosted by a 3rd party, a private cloud is expected to exhibit the self-service, resource pooling, and elasticity of cloud computing. In addition, even if chargeback is not technically or politically realistic in an enterprise setting, analytics must be designed in to provide insight into resource consumption and usage patterns. This is because self-servicing and elasticity can make a workload very dynamic, and statistical data based on analytics are imperative for planning capacity based on realistic and meaningful data. These concepts are illustrated below:
Ubiquitous access in cloud computing implies anytime, anywhere, any-device accessibility to a service. When considering a private cloud, there are scenarios in which general accessibility may not be the intent. The required data confidentiality, integrity, and availability of a private cloud may prevent a service owner from offering general access. Instead, business requirements may demand, in addition to user credentials, further restricted access based on a combination of isolation at various layers including device type, IP address range, port designation, domain membership, constrained delegation, protocol transition, etc. Namely, accessibility of a private cloud should be based on corporate information security policies and is not necessarily an architecturally defined requirement. The concept of information security can be best summarized with the so-called C-I-A triad as shown above.
While transitioning into cloud computing, a realistic approach for enterprise IT is to build a private cloud by first transforming the existing infrastructure components and applications into a target cloud environment. With infrastructure components and an application architecture already put in place, a chargeback model may not always be technically feasible or administratively necessary. Whether to charge back, show back, or sponsor an application without a consumption-based cost model is up to an organization’s priorities and is therefore not necessarily an architectural requirement of a private cloud.
Nonetheless, a chargeback mechanism signifies not only the ability to recover costs, but the critical need to design analytics into a service. By offering self-service and elasticity in a private cloud, resource utilization can become very dynamic and unpredictable. The ability to monitor, capture, and process utilization data for capacity planning, in support of the anytime readiness of a service, has become imperative.
There is no question that virtualization is an enabling technology in transforming enterprise IT into a cloud environment. The reality is that virtualization is one component of a private cloud solution; it is nonetheless unequivocally not a private cloud itself. What a decision maker must recognize is that, for building a private cloud, virtualization, resource pooling, elasticity, and a self-service model are to be architected as a whole with consistency, compatibility, and integration, such that the architecture can fundamentally realize the 5-3-2 Principle of Cloud Computing with a predictable and maximal ROI in the long run. The discovery, deployment, configuration, and management of a target resource from bare metal to runtime are all to be based on a common management platform and implemented with a comprehensive systems management solution. Further, a consistent user experience in managing physical, virtualized, as well as cloud resources is critical to warrant a continual and increasing cost reduction in on-going technical support and training.
Above all, a private cloud solution offers a technical architecture to strategically advance go-to-market, which has become increasingly critical to the survival of a business facing unpredictable workloads, supported by the proliferation of mobile devices and triggered by instant data storms in a highly connected business computing environment. IT professionals must not confuse virtualization with a private cloud. The former is technology-centric and an important piece of the private cloud puzzle for virtualizing resources, while the latter focuses on servicing customers with on-demand accessibility and always-on readiness of a target application. The 5-3-2 Principle of Cloud Computing and this two-part series reveal what a private cloud is about. And that is to strategically build a go-to-market vehicle, such that enterprise IT can fulfill business needs and exceed user expectations. With a private cloud, IT can leverage business opportunities generated by market dynamics and offer a user experience with anytime, anywhere, on-any-device productivity. The ability to acquire IT capabilities on demand with a private cloud is in essence a realization of "IT as a Service."
[This article is a cross-posting from http://aka.ms/yc.]
Power of Choices
For the enterprise, cloud computing presents tremendous opportunities to re-architect IT for the future. From infrastructure to business model and management, as depicted in the following, IT now has options to redistribute, reorganize, and reset priorities in ways that were perhaps not feasible in a traditional, on-premises-only computing model. IT can shift costs from capital expense to operational expense, let a 3rd party run those functions that are not core business, employ cloud to easily scale up or out, and so on. The benefits of having a dedicated cloud are apparent and attractive, and that is the so-called private cloud: a dedicated cloud which can run on premises or be hosted off premises by a 3rd party.
Essence of Cloud
How and where to deploy a service do require strategic planning and solid fundamentals in application architecture, such that the benefits of cloud computing can be realized in a timely and predictable fashion. There are specific capabilities that cloud computing is expected to provide. Essentially, regardless of how cloud computing is deployed or delivered, applications need to engineer in the qualities and exhibit the five characteristics of cloud computing, namely:
1. On-demand self-service
2. Ubiquitous network access
3. Resource pooling
4. Rapid elasticity
5. Measured service (i.e. pay per use)
Otherwise, one can call it whatever one wants; it is, however, not the cloud computing that the IT industry is talking about.
Approaching Private Cloud
Depending on the priorities of a business, the above five characteristics may not all be relevant for building a private cloud. For instance, perhaps ubiquitous network access is not required, since in a private cloud setting IT may not want resources to be that accessible for security reasons. Corporate may not critically need a pay-per-use model, due to the complexity and feasibility of implementing chargeback. To approach private cloud, there needs to be an overall vision defining the goals for the next three to five years. Private cloud is expensive, and mechanisms to ensure predictable results must be put in place. Enterprise IT needs to enforce hardware/software standards in the datacenter and automate/optimize operations and procedures when possible. Application architects need to develop cloud application design guidelines to ensure application manageability and readiness for running in a mixed (i.e. cloud and on-premises) environment. On top of applications, a common platform to manage physical, virtualized, and cloud resources transparently and in a consistent fashion is strategic and imperative for transitioning into cloud computing. Without a unified way to manage not only the virtual machines but also the workloads within them, and without a single pane of glass to manage applications in the private cloud as well as the public cloud, the cloud transformation will introduce many manageability issues into IT operations, and the process is likely to become divergent, incur runaway costs, and eventually become unmanageable.
For transforming into cloud computing, there are well-developed strategies. The above shows a logical progression from an on-premises establishment towards off-premises cloud computing. Along the way, virtualization is taking place and will continue its momentum (Gartner Symposium/ITxpo 2010). This is because, with current technology, virtualization is essential for cloud computing to become a reality. Virtualization also implies that a management solution needs to be in place. For a cloud to scale rapidly with pooled resources and location transparency, virtual machines, along with the ability to manage the physical hardware, the virtual machines, and the workloads running within them, are essential. The management solution needs to know which virtual machines need attention, how to operate them, where to place them, and what to do with them. Further, the ability to monitor and manage the applications/services running within a workload is just as critical, since it is the applications/services that we care about, not the virtual machines themselves. Once virtualization is introduced and matures, enterprise IT can then migrate into an on-premises private cloud before moving the datacenter into a public cloud, if that is the goal. Depending on business needs, I imagine many IT shops will settle somewhere among virtualized infrastructure, private cloud, and public cloud. I believe to cloud or not to cloud, that is not the question. The question is really how much and how far.
Specific to private cloud, the above specifies the key milestones, essential capabilities, and recommendations. Each is crucial for contributing to a predictable ROI and cost-effectiveness in the long run. Enterprise IT must rethink how to conduct business and develop a vision, with a roadmap, to move from structured responses to on-demand deliveries. Private cloud is the very opportunity to do so.
Do-It-Yourself Private Cloud
The private cloud solutions from Microsoft are collectively called Hyper-V Cloud, which is a set of guidelines and offerings and not a packaged product. As of February 2011, the Hyper-V Cloud offerings are mainly for building a private cloud of IaaS. Fast Track is delivered by partners that have worked with Microsoft to combine hardware and software offerings, based on a reference architecture, for building private clouds. There is also a Service Provider Program to identify a qualified service provider to host a dedicated private cloud for you. Above all, there is the Hyper-V Deployment Guide for building your own private cloud. Yes, you can do it yourself. In fact, regardless of which option you plan to use to acquire your private cloud, I highly recommend going through the Deployment Guide and building a private cloud yourself in a lab environment, to get a much better understanding of how to architect, implement, and manage a cloud computing solution.
Getting Ready for the Changes
For IT pros, cloud is a leap from administering server boxes to managing services. The abstraction layer introduced by virtualization makes the physical hardware a lesser concern in most cloud computing scenarios. This is hard, and a culture shock, for many IT pros. With the continual advancement of virtualization technologies, this changing perspective of IT infrastructure is inevitable. One obvious example is that Microsoft System Center Virtual Machine Manager (VMM) Self-Service Portal (SSP) 2.0 introduces a service-centric view of implementing a private cloud of IaaS. SSP 2.0 uses (Solution) Infrastructure, Services, and Roles as the building blocks, as shown below. A user of a private cloud of IaaS then works with a service delivery model organized as a hierarchy, where an Infrastructure consists of Services, each Service includes Roles, and virtual machines are deployed based on the defined Roles.
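To make the hierarchy concrete, here is a minimal sketch, not the SSP 2.0 API, of how an Infrastructure, its Services, and their Roles relate; the class names and fields are illustrative assumptions.

```python
# A minimal sketch of the Infrastructure -> Services -> Roles hierarchy.
# These classes are illustrative assumptions, not SSP 2.0 types.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Role:
    name: str
    vm_template: str      # the VMM template the role's VMs deploy from
    vm_count: int = 0     # virtual machines deployed for this role

@dataclass
class Service:
    name: str
    roles: List[Role] = field(default_factory=list)

@dataclass
class Infrastructure:
    name: str
    services: List[Service] = field(default_factory=list)

    def total_vms(self) -> int:
        # Total virtual machines deployed across all services and roles.
        return sum(r.vm_count for s in self.services for r in s.roles)

# Example: one infrastructure, one service, two roles.
infra = Infrastructure("SampleInfra", [
    Service("SampleService", [Role("Web", "WebTemplate", 2),
                              Role("DB", "DbTemplate", 1)])])
print(infra.total_vms())  # 3
```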
Integrated with VMM, SSP 2.0 is a free offering from Microsoft that includes a set of web portals, a data store, a lightweight provisioning engine, and documentation and guidance. Within SSP 2.0, a datacenter administrator first defines the resource pools of network, computing (RAM), and storage (disk space) resources, along with the cost model for reserving and allocating these resources. There are also predefined templates for deploying virtual machines, imported from VMM into SSP 2.0 as datacenter resources. An authorized business unit administrator then registers the business unit, followed by making a request to create a Solution Infrastructure with the included Services and Roles. Once a request is approved by the datacenter administrator, an authorized user can deploy virtual machines on demand within the approved computing and storage quotas. The cost of reserving and deploying an Infrastructure is calculated according to a chargeback model.
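As a sketch of what such a cost model boils down to, the following computes a monthly charge from reserved resources; the rates are assumed values, whereas in SSP 2.0 the actual rate card is configured by the datacenter administrator.

```python
# A minimal chargeback sketch under assumed rates, not SSP 2.0's cost model.
RATE_PER_GB_RAM = 10.0      # assumed monthly rate per GB of reserved RAM
RATE_PER_GB_DISK = 0.50     # assumed monthly rate per GB of reserved storage

def monthly_charge(reserved_ram_gb: float, reserved_disk_gb: float) -> float:
    """Compute the monthly chargeback for reserved compute and storage."""
    return (reserved_ram_gb * RATE_PER_GB_RAM
            + reserved_disk_gb * RATE_PER_GB_DISK)

# Example: a business unit reserving 64 GB of RAM and 2 TB of storage.
print(monthly_charge(64, 2048))   # 64*10 + 2048*0.5 = 1664.0
```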
The following is a sample Solution Infrastructure for a Staffing solution. The Hiring service is for posting job openings, accepting resumes, and running through the interview process. Once a candidate is hired as an employee, HR creates a record with the Employee Information service and establishes the employment history and confidential records, while the employee can use the same service to maintain personal data like home address, phone numbers, etc. The significance of doing this in a private cloud is that once the cloud is defined, it is centrally managed with self-service, on-demand, workflow, and chargeback capabilities. The system is monitored and managed by VMM. On a regular basis, IT can now generate usage reports to conclude the amounts to charge back to business units based on their usage.
Provisioning and deploying a so-called Infrastructure here is quite different from the traditional way of deploying servers in an on-premises computing environment. Because the deployment is carried out with virtual machines, the computing and storage requirements can be, and are, provisioned on demand by changing the specifications in a change request. Once the request is approved by the datacenter administrator via workflow, SSP 2.0 allows an authorized user to allocate resources within the permitted quota. An authorized user can create and deploy virtual machines on demand, as shown in the following. The construction of an “infrastructure” is now much more focused on designing and deploying the “service” with the requested capacity, and not so much on the physical hardware and topology involved. This allows IT to focus more on enabling business and less on constantly running cables and setting up servers. This is Infrastructure as a Service with private cloud in action.
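Conceptually, the quota gate behind such an on-demand deployment is simple; the following minimal sketch, with assumed quota figures rather than any SSP 2.0 logic, shows the check.

```python
# A minimal sketch of quota enforcement for on-demand deployments.
# Quota numbers are assumed for illustration.
QUOTA_RAM_GB = 64
QUOTA_DISK_GB = 2048

def can_deploy(used_ram: int, used_disk: int,
               req_ram: int, req_disk: int) -> bool:
    """Allow a deployment only if it stays within the approved quota."""
    return (used_ram + req_ram <= QUOTA_RAM_GB
            and used_disk + req_disk <= QUOTA_DISK_GB)

print(can_deploy(48, 1500, 8, 200))   # True: within quota
print(can_deploy(60, 1500, 8, 200))   # False: RAM quota exceeded
```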
Personally, what gets me most excited about cloud is that everything I have discussed is within reach today. Whether to consume, build, or be a cloud, there are so many opportunities to improve IT service delivery, offer a better experience to users, and grow professionally at the same time. Changes are happening and coming on strong. This time, however, I see them as exciting and for the better. Start now: accept the changes, master the changes, and win with the changes.
So let it be known. I am an IT pro and private cloud was my idea.
As IT architectures, methodologies, solutions, and cloud computing are rapidly converging, system management plays an increasingly critical role and has become a focal point of any cloud initiative. A system management solution now must identify and manage not only physical and virtualized resources, but also those deployed as services to private clouds, public clouds, and hybrid deployment scenarios. An integrated operating environment with secure access, a self-service mechanism, and a consistent user experience is essential for being efficient in daily IT routines.
App Controller is a component of System Center 2012 SP1 and part of its self-service portal solution. By connecting to System Center Virtual Machine Manager (SCVMM) servers, Windows Azure subscriptions, and 3rd-party hosting services, App Controller offers a vehicle that enables an authorized user to administer resources deployed to a private cloud, a public cloud, and those in between, without the need to understand the underlying fabric and physical complexities. It is a single pane of glass for managing multiple clouds and deployments in a modern datacenter, where a private cloud may securely extend its boundary into Windows Azure or a trusted hosting environment. The user experience and operations are consistent with those of the Windows desktop and Internet Explorer. The following is a snapshot showing App Controller securely connected to both an on-premises SCVMM-based private cloud and cloud services deployed to Windows Azure.
A key delivery of App Controller is the ability to delegate authority by allowing a user to connect to multiple resources based on the user’s authorizations, while hiding the underlying technical complexities.
A user can then manage those authorized resources by logging into App Controller, where the user is authorized by an associated user role, i.e. a profile. In App Controller, a user neither sees nor needs to know of the existence of the cloud fabric, i.e. how, under the hood, the infrastructure, storage virtualization, network virtualization, and the various servers and server virtualization hosts are placed, configured, and glued together.
When first logging into App Controller, a user needs to connect it with authorized datacenter resources, including SCVMM servers, Windows Azure subscriptions, and 3rd-party hosting services.
The user experience of App Controller is much the same as that of operating a Windows desktop. Connecting App Controller with a service provider follows the provider’s instructions; however, the process will be very similar to that of connecting with a Windows Azure subscription.
Connecting App Controller with Windows Azure, on the other hand, requires certificates and the Windows Azure subscription ID. Although this routine may initially appear complex, it is actually quite simple and logical.
Establishing a secure channel between App Controller and a Windows Azure subscription requires a private/public key pair. App Controller holds the private key by importing the Personal Information Exchange (PFX) file of a chosen digital certificate, while the paired public key, exported in the binary (.CER) format of the same certificate, is uploaded to the intended Windows Azure subscription account. The following walks through the process.
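As a complement to the UI steps that follow, the two files can also be produced programmatically. Here is a minimal sketch assuming the Python cryptography package, with illustrative names and passwords; in practice the certificate is typically exported from the machine’s certificate store with MMC, as described next.

```python
# A minimal sketch of producing the two files the connection needs:
# a .pfx (private key, kept by App Controller) and a .cer (public
# certificate, uploaded to Windows Azure). All values are illustrative.
import datetime

from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.hazmat.primitives.serialization import pkcs12
from cryptography.x509.oid import NameOID

# Generate a key pair and a self-signed certificate.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME,
                                     "appcontroller.contoso.com")])
now = datetime.datetime.utcnow()
cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)                      # self-signed: issuer == subject
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=365))
    .sign(key, hashes.SHA256())
)

# .pfx: certificate plus private key, protected with a password.
with open("appcontroller.pfx", "wb") as f:
    f.write(pkcs12.serialize_key_and_certificates(
        b"appcontroller", key, cert, None,
        serialization.BestAvailableEncryption(b"P@ssw0rd!")))

# .cer: the public certificate alone, in binary (DER) format.
with open("appcontroller.cer", "wb") as f:
    f.write(cert.public_bytes(serialization.Encoding.DER))
```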
For those familiar with PKI, use the Microsoft Management Console, or MMC, to directly export a digital certificate in the PFX and CER formats from the local computer certificate store. Those relatively new to certificate management should first take a look at which certificates IIS is employing, to better understand which certificate to use.
Since App Controller is installed with IIS, acquiring a certificate is quite simple. When App Controller is installed with IIS, a self-signed certificate is put in place for accessing the App Controller web UI over SSL.
The certificate store of an OS instance can be accessed with MMC.
The two export processes, for example, produce the two certificates for connecting App Controller with Windows Azure, as shown in the following.
Once connected to on-premises and off-premises datacenter resources, App Controller is a secure vehicle enabling a user to manage authorized resources in a self-service manner. It is not just that the technologies are fascinating; it is about shortening the go-to-market, so that resources can be allocated and deployed based on a user’s needs. This is a key step in realizing IT as a Service.