There are two important concepts in VMM 2012 SP1 for understanding Microsoft private cloud solutions: “Fabric” and “Service Template.”
In Microsoft private cloud solutions, VMM is the management solution for virtualized resources. In the context of cloud computing, virtualization now encompasses three disciplines: in addition to server virtualization, which many IT professionals are familiar with, network virtualization and storage virtualization join it as the three resource pools that together form the so-called fabric. VMM 2012 and later have this fabric abstraction architected in. Designed with constructing and managing fabric in mind, VMM has become a key enabler in implementing a private cloud solution.
In cloud computing, fabric is an abstraction signifying the ability to discover, identify, and manage a resource. Three resource pools, Compute, Network, and Storage, integrate with one another to collectively form the fabric. Namely, a resource added into one of the three resource pools by default becomes part of the fabric and automatically a managed object. Here the Compute pool represents all resources relevant to computing power, CPU cycles, and the execution of code. The Network pool is how resources are glued together or isolated. And the Storage pool is where digital assets are stored.
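The fabric idea, three pools whose members automatically become managed objects, can be sketched in a few lines of Python. This is a conceptual illustration only; the class, method, and resource names are made up for this sketch and are not VMM APIs.

```python
# Conceptual sketch of the fabric abstraction: three resource pools whose
# members automatically become managed objects of the fabric.
# All names here are illustrative, not VMM APIs.

class Fabric:
    def __init__(self):
        self.pools = {"compute": set(), "network": set(), "storage": set()}

    def add_resource(self, pool, resource):
        # Adding a resource to any pool makes it part of the fabric
        # and, by default, a managed object.
        self.pools[pool].add(resource)

    def managed_objects(self):
        # The fabric is the union of the three resource pools.
        return set().union(*self.pools.values())

fabric = Fabric()
fabric.add_resource("compute", "hyperv-host-01")
fabric.add_resource("network", "logical-net-lab")
fabric.add_resource("storage", "smb-share-01")
print(fabric.managed_objects())  # all three resources are now managed
```

The point of the sketch is only that membership in any pool is what makes a resource manageable; VMM's actual object model is far richer.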
In VMM 2012 SP1, the admin console substantiates the concept of fabric and the three resource pools with visual presentations. Clicking the Fabric workspace displays the three resource pools on the navigation pane as Servers (as Compute), Networking, and Storage. Each pool includes groups of components and configurations to support designated functions. One major part of building a private cloud is to establish the three resource pools by adding and configuring server, network, and storage virtualization solutions and components into the associated resource pool.
Above the fabric are resources available for consumption, while under the fabric are the three resource pools managed by VMM, offering computing power, networking capabilities, and storage space to fulfill requests with elasticity. Fabric offers simplicity and shields a user from those complexities under the hood.
Conceptually, the term service, in the context of cloud computing, means capacity on demand. Hence IaaS, or Infrastructure as a Service, means infrastructure available on demand. For IT professionals, infrastructure means servers, and in cloud computing servers are deployed as VMs since all consumable resources in cloud computing are virtualized. Therefore, IaaS becomes the ability to deploy VMs on demand.
PaaS is Platform as a Service. An application platform means a target runtime environment for an application, so PaaS is a runtime environment available on demand. A runtime environment includes DLLs, APIs, registry settings, services, etc., which are configured after a server OS is put in place, and putting the server OS in place is what IaaS delivers. This suggests that PaaS has a dependency on IaaS.
SaaS is Software as a Service. Software is essentially an application, so SaaS means an application available on demand. An application runs in a target runtime environment; if that runtime environment is available on demand, the application can consequently become available on demand. For instance, a .NET application runs in a .NET Framework environment, which is what Windows Azure PaaS offers. Since the .NET Framework runtime environment is available on demand, a .NET application deployed to Windows Azure can then become available on demand, i.e. delivered as SaaS. In other words, SaaS relies on the PaaS of a target runtime environment.
This relationship between IaaS, PaaS, and SaaS presents a logical approach for transforming enterprise IT into a cloud computing setting. That is to start with IaaS, transition to PaaS, and ultimately deliver SaaS. This concept is realized in a VMM Service Template deployment.
In VMM, a service has an operational definition: a set of VMs deployed and managed as one entity that collectively delivers a LOB application. This definition is significant.
A VMM service template is a deployment blueprint capable of encapsulating everything needed for deploying an application including application architecture, contents, requirements, configurations, processes, tasks, and operations. With a service template, IT can now deploy, configure, and substantiate an instance of a target application with consistency and predictability. The introduction of a service template makes deployment as a service a reality.
For example, consider a service template with a web frontend, a mid-tier for operations, another mid-tier as a business service layer, and a SQL backend. For each machine tier, a VM template is put in place with a hardware profile, OS profile, application profile, and database profile as applicable. Associated with the four VM templates are two web application packages, a Server App-V package for order processing, a Server App-V package for business services, and five database deployment packages, respectively.
That these four VM templates collectively deliver a LOB application suggests this set of VMs represents the application architecture. Deploying this web application as a service in VMM means the application architecture can be managed as a single entity. The ability to deploy an application architecture, i.e. a set of VMs that collectively deliver an application, is a realization of IaaS. Namely, VMM can provision the infrastructure of an application (i.e. deploy its set of VMs) on demand.
Since the entire application architecture can be put in place as one entity, the processes to configure a target runtime environment for the application can be automated and carried out upon completion of deploying the architecture. For instance, once the set of VMs forming the multi-tier architecture of the web application is in place, the process can subsequently install the web server role, .NET Framework, Server App-V, and SQL Server on selected VMs, and validate interdependencies such as protocols, APIs, ports, and rules, if any, among these VMs. The outcome is a set of VMs configured to provide a target runtime environment. Since the application architecture is deployed on demand, the runtime environment can be automatically configured once the architecture is deployed; hence the application runtime environment (i.e. the platform) is available on demand. This is essentially the transition from IaaS to PaaS.
Once the target runtime is configured, the process can then kick off the application installation procedures. This is when the frontend IIS server, the mid-tier operations and business service servers, and the backend SQL server are installed with the web application packages, Server App-V packages, and database packages, respectively. Application parameters, customizations, and interdependencies among servers at the application layer are set and validated at this time. Once the target application is successfully installed and started, an instance of the application is substantiated. And because the runtime environment (or platform) is available on demand, the application running in it can now be installed and become available on demand, which is SaaS.
A service template can in essence encapsulate everything needed to successfully deploy a target application. The process starts with IaaS to deploy the application architecture, transitions to PaaS when configuring the runtime environment, and then installs the target application and presents it with SaaS.
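The three phases can be sketched as a simple staged pipeline. This is a conceptual illustration only; the tier, role, and package names are taken from the example above, and none of this is actual VMM code.

```python
# Illustrative sketch (not VMM code) of the three phases a service template
# deployment walks through: deploy VMs (IaaS), configure the runtime (PaaS),
# then install the application packages (SaaS).

def deploy_service_template(template):
    log = []
    # Phase 1 - IaaS: provision the set of VMs forming the application architecture.
    for tier in template["tiers"]:
        log.append(f"deploy VM for tier: {tier}")
    # Phase 2 - PaaS: configure the target runtime environment on the deployed VMs.
    for role in template["runtime_roles"]:
        log.append(f"configure runtime: {role}")
    # Phase 3 - SaaS: install application packages and validate dependencies.
    for pkg in template["app_packages"]:
        log.append(f"install package: {pkg}")
    return log

template = {
    "tiers": ["web frontend", "operations mid-tier",
              "business services mid-tier", "SQL backend"],
    "runtime_roles": ["IIS", ".NET Framework", "Server App-V", "SQL Server"],
    "app_packages": ["web app package", "order-processing App-V package",
                     "database packages"],
}
for step in deploy_service_template(template):
    print(step)
```

The ordering is the point: the runtime configuration depends on the VMs being in place, and the package installation depends on the runtime, mirroring the IaaS-to-PaaS-to-SaaS dependency chain.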
Once a service template is validated against the fabric, i.e. all resources referenced in the template are correct and available in the fabric, an application can be deployed by substantiating (i.e. deploying) an instance of the service template. Since the deployment of VMs, the runtime environment configurations, and the application installations and customizations can all be automated, deploying a service template can be simplified to a few mouse clicks. A successful deployment of a service template results in an instance of the target application.
There are many details embedded in a service template to keep each deployment isolated from the others and make it a unique application instance to the fabric. Nevertheless, the employment of a service template provides consistency of application design and configuration. It is similar to building houses from the same floor plan: while all the houses share the same layout, each house remains individually identifiable and unique.
A fundamental approach in cloud computing is to develop process patterns for consistency, repeatability, predictability, and simplicity. Fabric and the service template in VMM 2012 SP1 are two vivid examples of hiding complexities behind patterns and blueprints. Both suggest some form of logical grouping and standardization. Once standardized, automation can follow to increase efficiency and reduce TCO, and once automated, optimization can maximize ROI. Ultimately, VMM 2012 SP1 is about building a private cloud that delivers quicker, better, and more, all with less.
In Part 4 of their Building a Private Cloud with System Center 2012 SP1 series, Keith Mayer and Yung Chou show us how to configure the storage fabric in Virtual Machine Manager. Tune in as they walk us through the process of adding different types of storage into our fabric, such as file server storage (SMB 3.0) and block storage (iSCSI, Fibre Channel, or SAS).
Websites & Blogs:
The entire series is also available via http://aka.ms/bpc.
Build your very own Virtual Network in the Cloud for FREE with the Windows Azure cloud platform, and Enter for a chance to win one of the following fantastic prizes:
You could win a ticket to Microsoft TechEd 2013, a Microsoft Surface Pro or Certification Exam Voucher!
In addition to a chance to win one of the prizes above, EVERY ENTRANT will receive up to 750 compute hours and up to 35GB of cloud storage to use as you’d like each month for 90 days as part of the Windows Azure free trial program.
You can enter the Microsoft TechEd “Cloud Challenge” Sweepstakes by completing all of the THREE EASY TASKS below to activate a Windows Azure FREE 90-Day Trial Account (no subscription obligation or fees required) and build your Virtual Network in the Cloud. Be sure to complete the last task to submit your proof-of-completion for entry into this sweepstakes.
Activate a FREE Windows Azure 90-Day Trial Account to receive up to 750 compute hours and up to 35GB of cloud storage to use as you’d like each month for 90 days. After the free 90-day period ends, there is absolutely no obligation to convert to a paid subscription.
DO IT: Activate a FREE Windows Azure 90-Day Trial. NOTE: When activating your FREE Trial for Windows Azure, you will be prompted for credit card information. This information is used only to validate your identity, and your credit card will not be charged unless you explicitly convert your FREE Trial account to a paid subscription at a later point in time.
Virtual Networks on the Windows Azure Cloud Platform allow you to define a predictable set of virtualized IP subnets upon which you can place one or more Virtual Machines running Windows Server 2012, Windows Server 2008 R2 and Linux. You can even securely connect a Windows Azure Virtual Network to your on-premise environment via a Site-to-Site IPsec VPN tunnel to leverage Windows Azure as a remote datacenter for disaster recovery, online backup, pilots, migrating applications … and MORE!
Complete the steps in this task to sign-in to the Windows Azure Management portal and quickly provision a new Virtual Network in the cloud.
Congratulations! You now have a new virtual network that you can use to connect multiple virtual machines together on the Windows Azure Cloud Platform. Learn more with our FREE Online Training.
Complete the steps in this task to submit your proof-of-completion entry into the Microsoft TechEd “Cloud Challenge” Sweepstakes for a chance to win one of the exciting prizes listed above.
Upon submitting your entry, you will receive a confirmation email within 24 hours.
Now that you’ve built your Windows Azure Virtual Network in the cloud, start leveraging it with these additional free learning resources.
NO PURCHASE NECESSARY. Open only to legal residents of the 50 U.S. states or D.C., 18+. Sweepstakes ends April 30, 2013. For Official Rules, see http://aka.ms/CloudChallenge201304Rules.
Some IT decision makers may wonder, I have already virtualized my datacenter and am running a highly virtualized IT environment, do I still need a private cloud? If so, why?
The answer is a definitive YES, and the reason is straightforward. The plain truth is that virtualization is not a private cloud, and a private cloud goes far beyond virtualization. (Ref 1, Ref 2)
Technically, virtualization is signified by the concept of “isolation,” by which a running instance is isolated in a target runtime environment with the notion that the instance consumes the entire runtime environment, despite the fact that multiple instances may be running at the same time in the same hosting environment. A well-understood example is server virtualization, where multiple server instances run on the same hardware while each instance runs as if it possesses the entire runtime environment provided by the host machine.
A private cloud, on the other hand, is a cloud that abides by the 5-3-2 Principle or NIST SP 800-145, the de facto definition of cloud computing. In other words, a private cloud must exhibit the attributes of cloud computing, such as elasticity, resource pooling, and a self-service model, and be delivered in a particular fashion. Virtualization, nonetheless, does not hold any of these attributes as a technical requirement. Virtualization is about isolating and virtualizing resources, while how a virtualized resource is allocated, delivered, or presented is not particularly specified. Cloud computing, or a private cloud, is visualized much differently: the accessibility, readiness, and elasticity of all consumable resources are conceptually defined and technically required for delivery as “services.”
The service concept is a centerpiece of cloud computing. A cloud resource is to be consumed as a service. This is why the terms IaaS, PaaS, SaaS, ITaaS, and XaaS (everything and anything as a service) are frequently heard in a cloud discussion. A service is what must be presented to and experienced by a cloud user. So, what is a service?
A service can be presented and implemented in various ways, such as forming a web service with a block of code. However, in the context of cloud computing, a service can be precisely captured by three words: capacity on demand. Capacity here is associated with an examined object such as CPU, network connections, or storage. On-demand denotes anytime readiness with any-network and any-device accessibility. It is a state that previously took years of IT disciplines and best practices to possibly achieve with a traditional infrastructure-focused approach, while cloud computing makes “service” a basic delivery model and demands that all consumable resources, including infrastructure, platform, and software, be presented as services. Consequently, replacing the term service with “capacity on demand,” or simply “on demand,” brings clarity and gives substance to any discussion of cloud computing.
Hence, IaaS, infrastructure as a service, is infrastructure on demand. Namely, one can provision infrastructure, i.e. deploy virtual machines (since all consumable resources in cloud computing are virtualized), based on need. PaaS means platform as a service, or a runtime environment available on demand. Notice that a target runtime environment is for running intended applications. Since the runtime is available on demand, an application deployed to the runtime will then become available on demand, which is SaaS, or software available on demand, i.e. as a service.
Logically, building a private cloud is the post-virtualization step to continue transforming IT into the next generation of computing with cloud-based deliveries. The following schematic depicts Microsoft’s vision of transforming a datacenter from infrastructure-based deployments to a service-centric cloud delivery model.
Once resources have been virtualized with Hyper-V, System Center 2012 SP1 builds and transforms existing establishments into an on-premise private cloud environment based on IaaS. Windows Azure then provides a computing platform with both IaaS and PaaS solutions for extending an on-premise private cloud beyond corporate boundaries and into a global setting with resources deployed off premise. This hybrid deployment scenario is emerging as the next-generation IT computing model, where IT’s ultimate mission to deliver and support business functions will be carried out and maintained as services.
So what is cloud exactly?
Cloud, as I define it here, is a concept, a state, a set of capabilities such that a targeted business capacity is available on demand. And on-demand denotes a self-servicing model with anytime readiness and any-network, any-device accessibility. Cloud is certainly not a particular implementation, since the same state can be achieved in various implementations as technologies continue to advance and methodologies evolve.
Comparing apples to apples, there is little reason for a business not to prefer cloud computing over traditional IT. Why would one not want the ability to adjust business capacity based on need? Therefore, to cloud or not to cloud is not the question. Nor is security the issue. In most cases, the cloud is likely to be more secure when managed by a team of cloud security professionals in a service provider’s datacenter, as opposed to being implemented by IT generalists wearing multiple hats while running an IT shop. Cloud is about how critical the on-demand capability is to a business, and for certain verticals the question is more about regulatory compliance. And above all, it is about a business owner’s understanding of, and comfort level with, the cloud.
IT nevertheless does not wait, nor can it simply maintain the status quo. Why private cloud? The pressure to produce more with less, and the need to be instantaneously ready to respond to a market opportunity, is not just a pursuit of excellence but a matter of survival in today's economic climate of ever-increasing user expectations. One will find that a private cloud is a vehicle to facilitate and transform IT with increased productivity and reduced TCO over time, as discussed in the Building a Private Cloud blog post series. IT needs a private cloud to shorten go-to-market, to encourage consumption, to accelerate product adoption, and to change the dynamics by offering better, quicker, and more with less. That is the reality of IT. That is why.
Back for Part 3 of the Building a Private Cloud with System Center 2012 SP1 series, Keith Mayer and I continued the conversation and discussed how to deploy and upgrade to System Center 2012 SP1 Virtual Machine Manager, which is essential for building a private cloud.
In Part 2 of their Building a Private Cloud with System Center 2012 SP1 series, Keith Mayer and Yung Chou lay out the foundation of your private cloud by expanding on the features of Windows Server 2012. Tune in as they discuss scale-up and scale-out scenarios and how to configure NIC teaming and failover clustering.
Caleb’s understanding of virtualization and Hyper-V was a surprise to me, since I had not sat down to give him a technical discussion or training of any kind in Microsoft virtualization, and yet he seemed quite comfortable creating and operating a virtual machine (VM) in Hyper-V. He probably learned how to install Windows Server 2012 by being around my home office on those afternoons after school a few weeks ago, since while building VM images and developing demos I would go through installing servers and verifying settings back and forth many times.
What got me excited is not necessarily that he was able to follow the wizard and click through the settings. What impressed me was his confidence in describing the process, his comfort level in navigating the UI, and his ability to visualize some of the essential concepts of virtualization, such as adding the Hyper-V role, composing a VM, setting Dynamic Memory, and creating a virtual switch. And of course, most importantly, he snacked along the way and never left any cookies. :)
Keith Mayer and Yung Chou kick off their multi-part series on how to build a Private Cloud using System Center 2012 SP1. Tune in for part 1 as they discuss the difference between a Private Cloud and a Highly Virtualized Environment, how System Center 2012 components map to Private Cloud requirements as well as how the SP1 release allows for even greater improvements to managing your datacenters and applications.
In February, our team of 11 Microsoft US Platform Technology Evangelists presented 20 opportunities for IT professionals to better understand the migration and deployment of Windows Server 2012 and Windows Azure VMs. Cloud is here to stay. With so many options and new scenarios, IT is being challenged with doing not just more, but everything, with less. Planning, executing, and harvesting along the way will be my strategy.
One essential characteristic of cloud computing is a self-service mechanism, which both NIST SP 800-145 and Chou’s 5-3-2 Principle discuss well. The self-servicing capability is essential since not only does it fundamentally reduce support cost, but making it easy for a user to consume provided services also continually promotes usage and ultimately accelerates the ROI. In System Center 2012 SP1, App Controller is the self-service vehicle for managing a hybrid cloud based on SCVMM, Windows Azure, and 3rd-party hosting services.
This article assumes a reader is familiar with System Center 2012 SP1, particularly System Center Virtual Machine Manager (SCVMM) and App Controller. Those who are new to System Center 2012 SP1 should first download and install at least SCVMM 2012 SP1 and App Controller 2012 SP1 from http://aka.ms/2012 to better follow the presented content.
The concept of a role-based security model in SCVMM is to package the security settings and policies governing who can do what, and how much, on an object into a single construct, the so-called user role. The idea of a user role is to define a job function which a user performs, as opposed to simply offering a logical group of selected user accounts.
To delegate authority, a user role is set with tasks, scope, and quotas based on a target business role and assigned responsibilities. The members of a user role are then granted the authority to carry out specific tasks on authorized objects to perform a defined business function. For instance, first-tier help desk support may perform a few specific diagnostic operations on a VM or service, but not debug, store, or redeploy it, while a datacenter administrator, as an escalation path for the first-tier help desk, can do all of these. In this case, help desk support and an escalation engineer are defined as two user roles for delegating authority.
Operationally, creating a user role means configuring a profile which includes membership, scope, resources, credentials, etc. A user role defines who can do what, and how much, on an authorized resource. In essence, a defined user role is a policy imposed on those who are assigned the role, i.e. those with membership in the role.
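The idea of a user role as a single policy packaging who (members), what (actions), where (scope), and how much (quota) can be sketched in Python. The names and quota semantics below are illustrative only, not SCVMM's actual object model.

```python
# Illustrative sketch of a user role as a single policy object.
# Class and attribute names are hypothetical, not SCVMM APIs.

class UserRole:
    def __init__(self, members, actions, scope, vm_quota):
        self.members = members    # who may act
        self.actions = actions    # what they may do
        self.scope = scope        # which clouds they may act on
        self.vm_quota = vm_quota  # how much they may consume

    def authorize(self, user, action, cloud, current_vms=0):
        # A request is authorized only when all four dimensions check out.
        return (user in self.members
                and action in self.actions
                and cloud in self.scope
                and current_vms < self.vm_quota)

helpdesk = UserRole(members={"alice"}, actions={"view", "restart"},
                    scope={"DevCloud"}, vm_quota=5)
print(helpdesk.authorize("alice", "restart", "DevCloud"))   # True
print(helpdesk.authorize("alice", "redeploy", "DevCloud"))  # False: not an allowed action
```

This mirrors the help desk example above: the role permits a narrow set of diagnostic operations, while anything outside its action list, scope, or quota is denied.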
To set up a user role in SCVMM, use the admin console, go to the Settings workspace, and click Create User Role on the ribbon. There are four user role profiles available in SCVMM 2012 SP1. Each profile includes membership, scope, accessible networks and resources, allowed operations, etc.
The self-service model of SCVMM employs App Controller and the SCVMM admin console as self-service vehicles, enabling an authorized user to self-manage resource consumption based on an SLA, with minimal IT involvement in the lifecycle of a deployed resource and without the need to expose the underlying fabric, which is a key abstraction in cloud computing.
A difference between App Controller and the SCVMM admin console is that the former never reveals the underlying fabric, while the latter will, according to the user role of the authenticated user.
In System Center 2012 SP1, there are a number of new operations available in App Controller, as documented at http://technet.microsoft.com/en-us/library/jj605414.aspx. These operations, as listed below, facilitate the migration and deployment of resources among SCVMM-based private clouds, Windows Azure, and 3rd-party hosting services.
Cloud is here to stay and hybrid is the way to go. Be ready. Learn, master, and take advantage of it. Make profits. Grow a career. Eat well and sleep well while welcoming XaaS, Everything as a Service, which we will have a lot to talk about soon.
As IT architectures, methodologies, solutions, and cloud computing are rapidly converging, system management plays an increasingly critical role and has become a focal point of any cloud initiative. A system management solution now must identify and manage not only physical and virtualized resources, but those deployed as services to private cloud, public cloud, and in hybrid deployment scenarios. An integrated operating environment with secure access, self-servicing mechanism, and a consistent user experience is essential to be efficient in daily IT routines.
App Controller is a component and part of the self-service portal solution in System Center 2012 SP1. By connecting to System Center Virtual Machine Manager (SCVMM) servers, Windows Azure subscriptions, and 3rd-party hosting services, App Controller offers a vehicle that enables an authorized user to administer resources deployed to a private cloud, a public cloud, and those in between, without the need to understand the underlying fabric and physical complexities. It is a single pane of glass for managing multiple clouds and deployments in a modern datacenter, where a private cloud may securely extend its boundary into Windows Azure or a trusted hosting environment. The user experience and operations are consistent with those in the Windows desktop and Internet Explorer. The following is a snapshot showing App Controller securely connected to both an on-premise SCVMM-based private cloud and cloud services deployed to Windows Azure.
A key delivery of App Controller is the ability to delegate authority by allowing a user to connect to multiple resources based on the user’s authorizations, while hiding the underlying technical complexities.
A user can then manage authorized resources by logging into App Controller, with authorization granted by the associated user role, i.e. profile. In App Controller, a user neither sees nor needs to know of the existence of the cloud fabric, i.e. how, under the hood, the infrastructure, storage virtualization, network virtualization, and various servers and server virtualization hosts are placed, configured, and glued together.
When first logging into App Controller, a user needs to connect to authorized datacenter resources, including SCVMM servers, Windows Azure subscriptions, and 3rd-party hosting services.
The user experience of App Controller is much the same as that of operating a Windows desktop. Connecting App Controller with a service provider is done per the provider’s instructions; the process, however, will be very similar to that of connecting with a Windows Azure subscription.
Connecting App Controller with Windows Azure, on the other hand, requires certificates and the Windows Azure subscription ID. Although this routine may initially appear complex, it is actually quite simple and logical.
Establishing a secure channel between App Controller and a Windows Azure subscription requires a private/public key pair. App Controller employs the private key by installing the Personal Information Exchange (PFX) format of a chosen digital certificate, while the paired public key, in the binary (.CER) format of the same certificate, is uploaded to the intended Windows Azure subscription account. The following walks through the process.
For those who are familiar with PKI, use the Microsoft Management Console (MMC) to directly export a digital certificate in PFX and CER formats from the local computer certificate store. Those relatively new to certificate management should first take a look at which certificates IIS is employing to better understand which certificate to use.
Since App Controller is installed with IIS, acquiring a certificate is quite simple. When App Controller is installed with IIS, a self-signed certificate is put in place for accessing the App Controller web UI with SSL.
The certificate store of an OS instance can be accessed with MMC.
The two export processes, for example, produced two certificates for connecting App Controller with Windows Azure, as follows.
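One way to sanity-check that the exported .CER matches what Windows Azure shows is the certificate thumbprint, which is the SHA-1 hash of the certificate's DER-encoded bytes; both MMC and the Azure portal display this value. A minimal Python sketch (the sample bytes below are placeholders, not a real certificate):

```python
# Hedged sketch: App Controller holds the private key (PFX) while Windows
# Azure holds the matching public certificate (.CER). Azure identifies an
# uploaded management certificate by the SHA-1 thumbprint of its DER bytes.
import hashlib

def thumbprint(der_bytes: bytes) -> str:
    # The thumbprint is the SHA-1 hash of the certificate's DER encoding,
    # rendered as uppercase hex, the value shown in MMC and the Azure portal.
    return hashlib.sha1(der_bytes).hexdigest().upper()

# Illustrative only: in practice, read the bytes of the exported .cer file,
# e.g. der = open("appcontroller.cer", "rb").read()
fake_der = b"\x30\x82placeholder-not-a-real-certificate"
print(thumbprint(fake_der))
```

If the thumbprint computed from the local .CER matches the one listed under the subscription's management certificates, the pair App Controller will use is the one Azure trusts.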
Once connected to on-premise and off-premise datacenter resources, App Controller is a secure vehicle enabling a user to manage authorized resources in a self-servicing manner. It is not just that the technologies are fascinating. It is about shortening go-to-market, so resources can be allocated and deployed based on a user’s needs. This is a key step in realizing IT as a Service.
This lab demonstrates the ability to easily deploy and manage a VM in Windows Azure. Here, the VM happens to run SQL Server 2012, which makes the lab more interesting by walking through the process of configuring and remotely maintaining a SQL Server 2012 instance running in a Windows Azure VM. This is, however, not intended to be a SQL lab, and SQL Server experience is helpful but not required for completing the following tasks:
Placing a SQL database in the cloud and maintaining it remotely is a straightforward concept. Similar to connecting to an on-premise SQL database, a database client configures a connection string and connects to a target database, which in this case is hosted by a SQL Server 2012 instance running in a Windows Azure VM. Regardless of where a SQL instance runs, much of the sysadmin routine is the same: configuring firewall rules, setting authentication methods, creating SQL users, etc. The following depicts the conceptual model.
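The conceptual model can be illustrated with the shape of the connection string itself: connecting to SQL Server in a Windows Azure VM looks just like an on-premise connection, except the server name points at the VM's cloud service DNS name and public endpoint. A hedged Python sketch follows; the host name, port, database, and credentials are hypothetical examples, and a real client would hand this string to a driver such as pyodbc.

```python
# Illustrative sketch: building a connection string for SQL Server, whether
# the instance is on-premise or in a Windows Azure VM. All values below are
# hypothetical examples, not real endpoints or credentials.

def sql_connection_string(server, port, database, user, password):
    # The only cloud-specific part is the server address: a cloud service
    # DNS name plus the public endpoint mapped to SQL Server's port.
    return (f"Server=tcp:{server},{port};Database={database};"
            f"User ID={user};Password={password};Encrypt=yes;")

conn = sql_connection_string("mysqlvm.cloudapp.net", 1433,
                             "LabDB", "sqladmin", "P@ssw0rd")
print(conn)
```

Everything else in the routine (firewall rules, authentication mode, SQL users) is the same administration work described above, just performed against the remote instance.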
A step-by-step, screen-by-screen lab guide detailing the process to deploy, configure, and test database connectivity is available for download.
Here I am making this lab guide available as a PDF download. This is a lab that I believe will help many of us better understand cloud computing and Windows Azure. Whether you are a system admin or a DBA, going through this lab will connect many dots for you. If nothing else, use this lab as self-study material for Windows Server 2012 and SQL Server 2012 and update your skill set.
At the same time, I also want to ask everyone to help share this resource broadly across the IT community, so other fellow IT pros can benefit from it as well. Click the button to post a short tweet about this document, and you'll automatically receive a direct link to download the lab guide immediately afterwards. I hope you find the document helpful. If you prefer not to share it with a tweet, email me from this post and I will understand and direct you to the download.
To do this lab, you will need a Windows Azure subscription for deploying VMs. If you do not already have one, this is a good opportunity to start learning Windows Azure. You can sign up for the Windows Azure 90-day free trial at http://aka.ms/90 to do the lab. A screencast supplementing the lab guide is available at http://aka.ms/AzureVMSQL.
This particular blog post presents the routines for conducting an RDS Quick Start session-based deployment, which also serves as an accelerated learning roadmap for RDS in Windows Server 2012. These routines build the essential skills and set the foundation for later carrying out a Microsoft Virtual Desktop Infrastructure (VDI) deployment. Those who would like to get familiar with RDS should first review the article, RDS Architecture Explained.
RDS is the delivery vehicle for Microsoft RemoteApp programs and VDI. In enterprise IT strategies, RDS plays an important role in adopting consumerization-of-IT and BYOD (Bring Your Own Device) initiatives by minimizing application and desktop device requirements down to almost just an HTTP session for anytime, anywhere, any-network access.
In the Windows Server 2008 releases, setting up RDS could be a daunting task. There were many moving parts with various configurations, policies, certificates, etc. to integrate. This is no longer the case. In Windows Server 2012, the RDS deployment and maintenance processes have been dramatically simplified and automated, with a smooth and rich user experience as presented later in this article.
Above all, RDS realizes the flexible-desktop concept and the so-called modern work style, in which authorized LOB applications, with location and device transparency, follow a user and not the other way around. RDS is becoming an essential part of enterprise infrastructure for enabling application deployment as a service.
The complexities of what happens under the hood in RDS can easily overwhelm even an experienced Windows administrator. Windows Server 2012 introduces the so-called Quick Start deployment, and as the name suggests, it minimizes the infrastructure requirements and makes a deployment a very quick and straightforward process.
Quick Start is an option in an RDS deployment during the process of adding roles and features with Windows Server 2012 Server Manager. It dramatically simplifies the deployment process and shortens go-to-market while still providing the ability to add additional RDS servers as needed. The abstraction formed by RDWA, RDCB, and RDSH offers such elegance that Quick Start integrates the three roles and deploys them all to one server in a rather uneventful process.
For prototyping a centralized remote access environment, demonstrating and testing a VDI solution, or simply building a study lab for self-training, Quick Start is a fast track for getting RDS up and running in a matter of minutes.
At this time, RDS session-based deployment is in place with three sample RemoteApp programs published. Let’s examine the user experience of accessing RDS RemoteApp programs.
Once RDS RemoteApp programs are published, a user can simply browse to https://the-RDWA-Server-URL/RDWeb. Once authenticated, the user is presented with the authorized RemoteApp programs.
In January, our team had a fun project telling 31 stories, presenting 31 opportunities for IT professionals to get started on Windows Server 2012 and Windows Azure, something we all feel very passionate about. Cloud computing is an exciting movement and offers so much room to grow as an individual, as an organization, and as a business.
Find out who your area Evangelist is, stay in touch with the team, and move forward with the communities. Together, let's welcome the challenges, embrace the changes, get started, learn it, master it, and take advantage of it. Now here are your 31 opportunities:
Windows Azure is, in my view, as critical to Microsoft private cloud solutions as Active Directory is to a Windows infrastructure. In a Windows domain, Active Directory holds the single version of truth and is the ultimate authority over all defined resources. Similarly, when it comes to Microsoft cloud computing, there is no question that Windows Azure is the de facto platform, an extension of Active Directory into the cloud. As enterprise IT transitions from on-premises deployments to an emerging hybrid cloud architecture, IT professionals face unprecedented challenges in moving from managing servers deployed on premises to managing services delivered with hybrid cloud, and at the same time extraordinary opportunities to upgrade and expand their skill profiles, becoming leaders in cloud initiatives and contributors in IT communities.
For IT professionals, a productive and direct way to learn and master Microsoft cloud computing solutions is to walk through and gain hands-on experience with the features available in Windows Azure. The 90-day free trial and many readily available resources let IT professionals, at no cost, access, experience, and experiment with deploying cloud resources: VMs, web sites, media and mobile services, virtual networks, and more. There are now many options for IT professionals to better deliver services. The following highlights the available features in Windows Azure and their significance to IT professionals.
A notable capability now available in System Center 2012 SP1 is the ability to COPY a stored VM from on-premises private cloud fabric to Windows Azure. The COPY process is initiated from App Controller with an established connection to an intended Windows Azure subscription. A prerequisite of copying a VM is that the VM must be in a "stored" state. Storing a VM and later deploying the stored VM may appear conceptually plain; they are actually quite interesting operations in implementation. Under the hood these processes make several transitions, while on the surface, with App Controller, the user experience is amazingly streamlined and simple. The logical model of the associated operations is actually a great tool for better understanding how the private cloud fabric works. The following schematic depicts the conceptual model of copying a VM from on-premises private cloud fabric to Windows Azure.
From a user's point of view, the process to COPY a VM to Windows Azure requires first storing the VM. Once stored, a VM becomes a library object, specifically an object in Cloud Libraries of the Library workspace in the VMM admin console, as shown below. To store a VM, in either App Controller or the VMM admin console, simply right-click a target VM and select the option to store it. At this point, the process actually moves/exports the VM from the default VM path (configured in Placement of the associated host properties) to the "Stored VM path" defined in the associated cloud properties. Both paths are set with the VMM admin console as illustrated in the following.
Once a VM is stored, as shown below, its status is set to "Stored." Notice that the operations of storing a VM are very much like those of exporting one. The process captures the state of the VM, packaged with its content and configuration.
At this point, an authorized user can initiate a COPY process in App Controller to bring the stored VM to Windows Azure. A stored VM can also be redeployed back to the state, location, and point in time at which the "Store" operation was last performed. [Continued in upcoming posts]
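As a mental model only, the store-then-copy flow described above can be sketched as a tiny state machine in Python; the class, method names, and messages are this sketch's own inventions, not App Controller or VMM APIs.

```python
# Illustrative state model of the store-then-copy flow; state names mirror
# what the VMM console shows ("Stored"), everything else is hypothetical.
class StoredVmFlow:
    def __init__(self, name):
        self.name = name
        self.state = "Deployed"      # running from the host's default VM path

    def store(self):
        # Export-like move to the cloud's "Stored VM path"; the VM becomes
        # a Cloud Libraries object in the Library workspace.
        if self.state != "Deployed":
            raise RuntimeError("only a deployed VM can be stored")
        self.state = "Stored"

    def copy_to_azure(self):
        # App Controller's COPY; the "Stored" state is the prerequisite.
        if self.state != "Stored":
            raise RuntimeError("COPY requires a stored VM")
        return f"{self.name} copied to Windows Azure"

vm = StoredVmFlow("web01")
vm.store()
print(vm.copy_to_azure())
```

The guard on `copy_to_azure` captures the one rule the post emphasizes: the "stored" state is the prerequisite for the COPY operation.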
If you cannot make one of these events, you may be able to find a similar event at a New Horizons learning center here.
Attendees are encouraged to participate in the Early Experts Challenge program and set up a test lab to facilitate the learning. To participate in the afternoon hands-on lab session, you will need to bring your own computer (laptop preferred) with the following minimum configuration:
For more information or to register, visit www.technetevents.com or call 1-877-MSEVENT.
VHD is a file format employed in Microsoft virtualization solutions. Essentially, a VHD operates and behaves much like a physical hard disk, while in fact being a file. There is much information already available regarding VHD, and those not familiar with the format should first review the Virtual Hard Disk Getting Started Guide. There are various ways to create and manage a VHD. For those who are deployment-focused or prefer operating from a command prompt, DiskPart is available. With a GUI, Hyper-V Manager and Disk Management also offer VHD operations. In this post, the focus is on VHD operations with Hyper-V Manager, and there are really just three routines: creating, editing, and inspecting a VHD. One can start these routines from the Action dropdown menu or the Actions pane of Hyper-V Manager once a Hyper-V host is highlighted. To create, edit, or inspect a VHD, simply click the corresponding option as shown above.
The following individual routines present the user experience after a user starts a particular routine by clicking the option indicated by the top-level heading. Also notice that the term VHD, depending on context, stands for either a virtual hard disk itself or the format of a virtual hard disk.
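As background on what makes a file a VHD at all, here is a minimal Python sketch that reads the disk type from the 512-byte footer every VHD carries. The cookie value and field offsets follow the published VHD format specification; the toy footer built at the end exists only to exercise the parser and is not a complete, valid VHD.

```python
import struct

# Disk Type values defined in the VHD format specification.
DISK_TYPES = {2: "Fixed", 3: "Dynamic", 4: "Differencing"}

def vhd_disk_type(footer: bytes) -> str:
    """Read the disk type from a 512-byte VHD footer."""
    if footer[0:8] != b"conectix":        # every VHD footer opens with this cookie
        raise ValueError("not a VHD footer")
    # The big-endian Disk Type field sits at byte offset 60 of the footer.
    (dtype,) = struct.unpack_from(">I", footer, 60)
    return DISK_TYPES.get(dtype, "Unknown")

# Build a toy footer purely to exercise the parser; a real footer also
# carries timestamps, disk geometry, sizes, a checksum, and more.
footer = bytearray(512)
footer[0:8] = b"conectix"
struct.pack_into(">I", footer, 60, 2)     # 2 = Fixed
print(vhd_disk_type(bytes(footer)))
```

This is exactly the kind of housekeeping information the Inspect Disk routine in Hyper-V Manager surfaces for you, without any byte-level work.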
This type allocates storage at VHD creation time. The size of a Fixed Size (or Fixed) VHD, as the name indicates, stays the same throughout the life of the disk. Since all storage is allocated at creation time, a Fixed VHD offers predictable, best-case performance on operations relevant to storage allocation and is recommended for production use.
In the process, Windows Server 2012 defaults the format of a new blank VHD to VHDX and the size to 127 GB. Here, the routine shown resets the size and creates a 5 GB VHD on the local hard disk. The 5 GB size is chosen due to limited disk space on the associated hard disk. To create a VHD for installing an OS, for example, the size should be large enough to accommodate the OS, patches, applications, temp storage, page files, buffer space, etc.
This type of VHD is first created with just housekeeping (header/footer) information, i.e. the name, location, maximum size, etc. of the disk. As data is written into a Dynamic VHD, the total size of the VHD grows accordingly. Here is a routine to create a 5 GB Dynamic VHD.
So a Dynamic VHD is rather small when first created, and its size grows as data is written to the disk. At any given time, a Dynamic VHD has a size equal to the actual data written to it plus the housekeeping information. Notice that upon deleting data from a Dynamic VHD, the space occupied by the deleted data is not reclaimed until an Edit Disk/Compact operation is performed on it.
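As an analogy only (not an actual VHD operation), a sparse file shows the same declare-big, allocate-lazily behavior a Dynamic VHD exhibits; this Python sketch illustrates how a maximum size can be advertised while almost nothing is actually written.

```python
import os
import tempfile

# A sparse-style file declares its full size up front but holds almost no
# data -- by analogy, how a Dynamic VHD advertises a 5 GB maximum while
# occupying only the space of the data written plus housekeeping info.
path = os.path.join(tempfile.mkdtemp(), "dynamic-analogy.bin")
with open(path, "wb") as f:
    f.seek(5 * 1024 * 1024 - 1)   # declare a 5 MB "maximum size"
    f.write(b"\0")                # write a single byte of actual data

print(os.path.getsize(path))       # reports the full 5 MB apparent size
```

The gap between the declared size and the data actually written is also why deleted data leaves a hole: reclaiming it takes an explicit compaction pass, just as the Edit Disk/Compact operation does for a Dynamic VHD.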
A Dynamic VHD is recommended for development and testing, since it presents a relatively small footprint to manage. A server intended to run applications that are not disk-intensive is also a possible candidate for a Dynamic VHD. Still, when it comes to performance, a Fixed VHD outperforms a comparable Dynamic VHD in most scenarios by roughly 10% to 15%, and for 4K writes a Fixed VHD performs significantly better, as documented in Hyper-V and VHD Performance - Dynamic vs. Fixed.
For backward compatibility, here is a routine to edit a disk and change its format from VHDX to VHD. Since this operation creates a new disk with a copy of the source content, there is an opportunity to specify both the format and the type of the new disk. Here, in addition to the format, the type is changed from Fixed to Dynamic. In other words, the operations to convert a VHD in effect copy the source disk to a newly created disk with a specified format and a selected type.
Converting the format does not apply to a Differencing VHD, since both the format and the type are dependencies between a child disk and its parent and must not be changed for the parent-child link to work, even though the Convert option is available for a Differencing VHD.
To increase the size of a Dynamic VHD, edit and expand the disk. The process is fairly straightforward.
To permanently introduce changes captured in a child disk, edit the child disk and select the option to merge it into the parent disk. On the left, the process shows that the changes can be merged directly into the parent disk itself or into a newly created Dynamic or Fixed disk. This routine would likely follow, for example, a successful test/validation of a target patch or a new device driver against a child disk whose parent is an existing deployment image.
Windows Azure is a cloud OS. It is an infrastructure with computing, networking, and storage capacities; a global service publishing and distribution vehicle; and a security and system management framework capable of bridging and extending on-premises resources with those deployed in the cloud. With IaaS combined with the many features Windows Azure offers, the opportunities for enterprise IT as well as small and medium businesses are real and exciting: employing the cloud as a delivery platform for LOB services, including media and phone apps. Windows Azure combined with Windows Server 2012 and System Center 2012 SP1 provides many options for IT to transition and transform existing establishments into a cloud-friendly, cloud-ready, and cloud-enabled environment. Deploying resources, migrating workloads, and expanding Active Directory to the cloud have never been easier, with so much predictability and quick ROI, and without compromising quality and security. For developers, applications deployed to the Windows Azure PaaS environment are by default delivered as SaaS globally. Windows Azure is a cloud OS; it changes how IT does business and opens many new possibilities to shorten go-to-market. The following schematic depicts Windows Azure features, highlighting technical capabilities, target scenarios, and business objectives.
WEB SITES is for rapidly deploying highly scalable web sites on Windows Azure. It allows using languages and open source applications of a site administrator's choice and deploying content with FTP, Git, and TFS. Integrations with Windows Azure services include SQL Database, Caching, Content Delivery Network (CDN), and Storage. This is an optimal solution for a web presence that starts small and scales as traffic grows, with scalability, high availability, and built-in monitoring of performance and usage data. It is also a perfect turnkey for running ephemeral, i.e. short-lived and transitory, sites for contests, promotions, campaigns, prototypes, proofs of concept, and so on.
VIRTUAL MACHINES is Windows Azure's IaaS solution. This much-needed and long-awaited capability enables enterprise IT to provision infrastructure and deploy VMs on demand. An administrator can now easily deploy and configure Windows Server and Linux VMs in minutes in the cloud, migrate workloads without having to change existing code or modify network configuration, and securely connect those VMs to on-premises corporate networks.
MOBILE SERVICES offers a secure, turnkey backend-as-a-service solution readily available for mobile applications. It accelerates mobile application development dramatically by incorporating structured storage, user authentication, and push notifications. The ROI of this offering for mobile application development and deployment is almost immediate.
MEDIA SERVICES has everything for delivering content to a variety of devices, from Xbox, Windows Phone, and Windows 8 to Mac OS, iOS, and Android, while ingesting, encoding, converting, and protecting content with both on-demand and live streaming capabilities. As media increasingly becomes part of a delivery in both business and social settings, Windows Azure Media Services arrives with tremendous business opportunities and growth potential.
CLOUD SERVICES, a PaaS offering, provides an on-demand runtime environment. Published APIs enable developers to build or extend enterprise applications onto Windows Azure with high availability and elastic scale. This is a PaaS environment for deploying applications delivered as SaaS solutions to customers anywhere around the world.
BIG DATA is becoming a pressing issue and ongoing challenge for enterprise IT as data continues to explode. We are now confronted with ever-increasing and unplanned bursts of data, growing by orders of magnitude on a daily basis. IT needs to process more data today than yesterday, last week, or last month, introduced by proliferating mobile devices and increasingly dynamic traffic triggered by social networks. The new normal of enterprise IT is to have not only the capacity to store and process, but the ability to analyze, derive information, and deliver business value from a massive sample space whose data points keep increasing. Facing this reality, Windows Azure features a 100% Apache Hadoop-compatible, enterprise-ready HDInsight service and supports a variety of structured and unstructured data storage options, along with tools to help analyze and extract BI from data of any size. Enterprise IT may not overcome the challenges of big data overnight; the arrival of Windows Azure nonetheless offers a strategic platform for moving forward with a convergent solution.
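For readers new to the Hadoop model HDInsight is compatible with, here is the canonical word-count example sketched in plain Python, purely to illustrate the map/reduce pattern; no HDInsight or Hadoop APIs are used, and the input lines are made up.

```python
from collections import Counter

# Toy word count in the map/reduce style: map emits (key, 1) pairs,
# reduce sums the counts per key. HDInsight runs the same pattern at
# Hadoop scale across a cluster; this sketch runs it in-process.
def map_phase(lines):
    # map: emit a (word, 1) pair for every word in every input line
    return ((word.lower(), 1) for line in lines for word in line.split())

def reduce_phase(pairs):
    # reduce: sum the counts emitted for each distinct word
    counts = Counter()
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

lines = ["Big data big bursts", "data points keep increasing"]
result = reduce_phase(map_phase(lines))
print(result["big"], result["data"])   # 2 2
```

The value of a platform like HDInsight is that the same two-phase logic, expressed as Hadoop jobs, distributes transparently over data far too large for any single machine.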
I want to call out and invite IT professionals interested in achieving Microsoft certifications to join, participate in, and contribute to the Windows Server Early Experts Challenge. This program is for learning about the latest version of Windows Server with excelling in the related Microsoft certification exams in mind.
The Challenge involves a series of Knowledge Quests - starting with the Apprentice Quest below - and each Quest ends with a special completion certificate for you to promote your new knowledge! To make it easy to participate, each Quest is developed in a modular format that you can complete based on your own schedule and availability.
The first five Knowledge Quests are Apprentice, Installer, Explorer, Networker and Virtualizer. These Knowledge Quests target the objectives in Exam 70-410: Installing and Configuring Windows Server 2012.
Let me acknowledge that the contents presented in the Early Experts Challenge series are based on Keith Mayer's work. His enthusiasm, efforts, and impact in helping IT pro communities adopt Windows Server 2012 have been inspirational, effective, and significant.
This program leverages the Microsoft Virtual Academy (MVA) for some of our free online study resources. You will need to first register for an MVA account using your Microsoft Account (a.k.a. Windows Live ID) via the link below …
In this first knowledge quest, you will learn and explore the key new technical capabilities of Windows Server 2012 across the product pillars of virtualization, management, networking and storage, etc. to properly position them for relevant usage scenarios.
The seven modules in this course, through video and whitepaper, provide details of the new capabilities, features, and solutions built into the product. With so many new features to cover, this course is designed to be the introduction to Windows Server 2012. After completing this course, you will be ready to dive deeper into Windows Server 2012 through additional Microsoft Virtual Academy (MVA) courses dedicated to each topic introduced in this “Technical Overview.”
Alternate option: You can also attend a free Windows Server 2012 First Look Clinic at a Microsoft Learning partner near you if you'd prefer an in-person training experience.
With so much to learn in Windows Server 2012, building your own lab environment is the best way to REALLY learn new technology! You can download the Windows Server 2012 installation bits and start the process! We'll be using these installation bits in the coming weeks in the additional Knowledge Quests of the "Early Experts" Challenge. Be sure to download the bits in "VHD" format (not "ISO" format) as we'll be using the VHD bits to build your study lab and in future Knowledge Quests for hands-on activities.
Follow this step-by-step guide to build your own study lab as a dual-boot environment on your existing desktop or laptop PC. We'll leverage this study lab environment in future Knowledge Quests for hands-on activities. Hands-on experience with Windows Server 2012 will help you greatly in mastering the knowledge and skills needed to successfully pass the certification exams.
Participate in our Online Study Group Community on LinkedIn to post questions you may have, share your insights and collaborate with other members as we all prepare for certification! Each of us has unique insight and by participating in this community, we'll be able to expand our technical knowledge beyond our own experiences.
Now that you've completed this Knowledge Quest, be sure to share your success with your social network using one of the buttons below for Twitter, LinkedIn or Facebook. By sharing your success, you'll also help to encourage others to join our study group and increase the number of IT Pros working together to help grow our collective technical knowledge and share even more community insight that benefits us all!
Have you completed Steps 1 through 5? If so, follow these steps to validate your lab completion and claim your "Early Experts - Apprentice" certificate:
Once you've submitted your certificate request, feel free to keep going with the next Knowledge Quest below!
After you've completed the "Early Experts" Apprentice Quest, keep going with the next Knowledge Quest to continue your preparation for the MCSA on Windows Server 2012 Exams:
In today’s episode Yung Chou shows us how to use System Center 2012 App Controller to easily configure, deploy and manage virtual machines and services across private and public clouds. In part one of this series he demos for us how to connect App Controller to Windows Azure.
After watching this video, follow these next steps:
Step #1 – Start Your Free 90 Day Trial of Windows Azure and deploy VMs in the cloud
Step #2 – Download and install Windows Server 2012 and System Center 2012
Step #3 – Learn, build, and experiment with IaaS
Although the published Windows Azure Security Guidance appears to be focused on PaaS, the concepts are nevertheless directly applicable to Windows Azure Virtual Machines, as I have highlighted on the following diagrams, originally from the Guidance.
And when discussing cloud security, ask these questions first:
And as needed, reference the following diagrams to get specifics.
Sign up for your Windows Azure 90-day free trial, deploy a Windows Server 2012 and SQL Server 2012 VM in Windows Azure, and test out IaaS solutions. There are also free resources available at http://aka.ms/free.
Follow @technetradio Become a Fan @ facebook.com/MicrosoftTechNetRadio Subscribe to our podcast via iTunes, Zune, Stitcher, or RSS
In today’s episode Yung Chou shows us how to deploy and configure a SQL Server Windows Azure Virtual Machine. Tune in as he creates a new Windows Azure Virtual Machine running SQL Server, then shows you how to access and configure it, as well as how to test its connectivity using Microsoft WebMatrix. Whether for testing SQL connectivity, web site development, or Windows Azure service deployment, WebMatrix is easy to use and freely available.
In today’s Windows Azure Virtual Machine how-to, Yung Chou shows us how to customize our virtual machine through load balancing as well as how to make it highly available. Tune in as Yung walks us through configuration and set-up.
Step #1 – Start Your Free 90 Day Trial of Windows Azure
Step #2 – Download Windows Server 2012
Step #3 – Begin building your own Virtual Machines in Windows Azure!