If cloud computing is not confusing enough, there is also this so-called private cloud. And what is private cloud? I hope by now you have reviewed my Cloud Computing for IT Pros series and have a clear understanding of what a service is and what cloud computing is. These are key concepts. Equally important, you know the 5-3-2 Principle of Cloud Computing and why one application is a cloud application while another may not be. Generally speaking, there are 5 essential characteristics, 3 delivery methods, and 2 deployment models (or 4 if following the NIST definition) in cloud computing. It does not matter whether it is a public cloud or a private cloud: if it is classified as cloud computing, it should at least exhibit the 5-3-2 principle as the core set of attributes. With that in mind, what is private cloud?
Private cloud? Well, it is a cloud, so the 5 essential characteristics of cloud computing apply. The term, private, here means dedicated, and a private cloud is a cloud dedicated to an organization. The classification is based on the intended users and not the ownership of the infrastructure. Namely, the fact that an organization has a dedicated cloud does not necessarily mean the organization must own the infrastructure on which that dedicated cloud is running. An obvious example is a private cloud running on an infrastructure owned and managed by a 3rd-party hosting company. So a subscribing company may possibly own the data, software, configurations, and instances, but not the physical boxes and underlying infrastructure. To find out more about running a private cloud in this fashion, a list of private cloud hosting companies is readily available.
Perhaps a more commonly assumed definition of private cloud is an on-premises deployment of cloud computing. In other words, everything, including the servers, cabling, software, running instances, etc., is owned and managed by an organization behind its enterprise firewall, as shown above. Many enterprises assume this definition of private cloud due to an existing deployment of on-premises IT resources. While transitioning into private cloud, it is a logical step to build one by employing already-deployed hardware and software.
Ultimately, cloud computing is to better deliver applications. The goal of constructing a private cloud can be acquiring IaaS, PaaS, or SaaS. Based on the objectives, an organization may, for example, simply seek the ability to efficiently deploy and manage servers to provide maximal flexibility for developing and testing applications, and in this case IaaS is all the organization needs. While the servers are deployed via IaaS, applications running within these servers do not have to be cloud applications. The applications can very well be traditional (i.e. non-cloud-computing) ones. The point is that to pursue a private cloud, it is not necessary to acquire all three (IaaS, PaaS, and SaaS) delivery methods. Nevertheless, for an enterprise it is only logical to start with IaaS to fundamentally and strategically convert existing IT establishments into a cloud-ready environment. For pursuing a private cloud, IT should have IaaS in place first, which will fundamentally provide the mechanism for resource pooling, scalability, and elasticity.
Microsoft's private cloud solution is called Hyper-V Cloud, which is a set of guidelines, as shown here on the right, and offerings for building a private cloud with IaaS using readily available technologies, i.e. Windows Server 2008 R2 and System Center Virtual Machine Manager. Hyper-V Cloud is exciting since not only does it increase the ROI on existing deployments, it also strategically lays a foundation for integrating with the Windows Azure platform offered in the public cloud. Ultimately, enterprises will be able to manage physical, virtualized, and cloud (private and public) resources with a single pane of glass provided by System Center.
Above all, it does not matter if the delivery method is IaaS, PaaS, or SaaS. As far as a user is concerned, whatever your service/application is, it is always SaaS, even if your application is not cloud-based. The application is what this is all about. So when it comes to implementing a private cloud, which will eventually change how your IT delivers services, it is a high-stakes proposition in terms of both cost and customer satisfaction. Be clear on short-term checkpoints and long-term business goals. Scope down, but be very strategic in the overall implementation.
Virtualization vs. private cloud has confused many IT pros. Are they the same? Or different? In what way and how? We have already virtualized most of our computing resources, so is a private cloud still relevant to us? These are questions I have frequently been asked. Before getting to the answers, in this first article of the two-part series listed below I want to set a baseline.
Lately, many IT shops have introduced virtualization into their existing computing environments. Consolidating servers, mimicking production environments, virtualizing test networks, securing resources with honey pots, adding disaster recovery options, etc. are just a few applications of employing virtualization. Some also run highly virtualized IT with automation provided by system management solutions. I imagine many IT pros recognize the benefits of virtualization, including better utilization of servers, associated savings from reducing the physical footprint, etc. Now that we are moving into a cloud era, the question becomes “Is virtualization the same as a private cloud?” or “We are already running highly virtualized computing today, do we still need a private cloud?“ The answers to these questions should always start with “What business problems are you trying to address?” Then assess if a private cloud solution can fundamentally solve the problem, or perhaps virtualization is sufficient. This of course assumes there is a clear understanding of what virtualization is and what a private cloud is. The point is that virtualization and cloud computing are not the same. They address IT challenges in different dimensions and operate in different scopes with different levels of impact on a business.
To make a long story short, virtualization in the context of IT is to “isolate” computing resources such that an object (i.e. an application, a task, a component) in a layer above can possibly be operated without a concern for changes made in the layers below. A lengthy discussion of virtualization is beyond the scope of this article. Nonetheless, let me point out that the terms “virtualization” and “isolation” are chosen for specific reasons, since there are technical discrepancies between “virtualization” and “emulation”, and between “isolation” and “redirection.” Virtualization isolates computing resources, hence it offers an opportunity to relocate and consolidate isolated resources for better utilization and higher efficiency. Virtualization is rooted in infrastructure management, operations, and deployment flexibility. It's about consolidating servers, moving workloads, streaming desktops, and so on, which without virtualization would not be technically feasible or would simply be cost-prohibitive.
Cloud computing, on the other hand, is a state, a concept, a set of capabilities. There are statements made on what to expect in general from cloud computing. A definition of cloud computing published in NIST SP 800-145 outlines the essential characteristics, the delivery methods, and the deployment models required to be cloud-qualified. Chou further simplifies it and offers a plain and simple way to describe cloud computing with the 5-3-2 Principle, as illustrated below.
Realizing the fundamental differences between virtualization and private cloud is therefore rather straightforward. In essence, virtualization is not based on the 5-3-2 Principle, whereas cloud computing is. For instance, a self-service model is not an essential component of virtualization, while it is essential in cloud computing. One can certainly argue that some virtualization solutions may include a self-service component. The point is that self-service is neither a necessary nor a sufficient condition for virtualization. In cloud computing, by contrast, self-service is a crucial concept for delivering anytime availability to users, which is what a service is all about. Furthermore, self-service is an effective mechanism to reduce training and support at all levels, and a crucial vehicle to accelerate the ROI of a cloud computing solution and make it sustainable in the long run.
So what, specifically, distinguishes a highly virtualized computing environment from a private cloud?
For discussing cloud computing, I recommend employing the following theories as a baseline.
Theory 1: You cannot productively discuss cloud computing without first clearly defining what it is.
The fact is that cloud computing is confusing since everyone seems to have a different definition of it. Notice the issue is not a lack of definitions, nor the need for an agreed-upon definition. The issue is not having a well-thought-out definition to operate upon. And without a good definition, a conversation about cloud computing all too often becomes non-productive, since cloud computing touches infrastructure, architecture, development, deployment, operations, automation, optimization, manageability, cost, and every aspect of IT. And as explained below, it is indeed a generational shift of our computing platform from desktop to cloud. Without a good baseline of cloud computing, a conversation on the subject results in nothing more than an academic exercise, in my experience.
Theory 2: The 5-3-2 principle defines the essence and scopes the subject domain of cloud computing.
Employ the 5-3-2 principle as a message framework to facilitate the discussions and improve the awareness of cloud computing. The message of cloud computing itself is however up to individuals to articulate. Staying with this framework will keep a cloud conversation aligned with the business values which IT is expected to and should deliver in a cloud solution.
Theory 3: The 5-3-2 principle of cloud computing describes the 5 essential characteristics, 3 delivery methods, and 2 deployment models of cloud computing.
The 5 characteristics of cloud computing, shown below, are the expected attributes for an application to be classified as a cloud application. These are the differentiators. Questions like “I am running X, do I still need cloud?” can be clearly answered by determining if these characteristics are expected for X.
The 3 delivery methods of cloud computing, as shown below, are the frequently heard Software as a Service, Platform as a Service, and Infrastructure as a Service, namely SaaS, PaaS, and IaaS respectively. Here, the key is to first understand “what is a service.” All 3 delivery methods are presented as services in the context of cloud computing. Without a clear understanding of what a service is, there is a danger of not grasping the fundamentals and misunderstanding all the rest.
The 2 deployment models of cloud computing are public cloud and private cloud. A public cloud is intended for public consumption, and a private cloud is a cloud (and notice a cloud should exhibit the 5 characteristics) whose infrastructure is dedicated to an organization. A private cloud, although frequently assumed to sit inside a private data center as depicted below, can be on premises or hosted off premises by a 3rd party. A hybrid deployment is an extended concept of a private cloud with resources deployed both on premises and off premises.
The 5-3-2 principle is a simple, structured, and disciplined way of conversing about cloud computing. The 5 characteristics, 3 delivery methods, and 2 deployment models together explain the key aspects of cloud computing. A cloud discussion is to validate the business needs for the 5 characteristics, the feasibility of delivering an intended service with SaaS, PaaS, or IaaS, and whether public cloud or private cloud is the preferred deployment model. Under the framework provided by the 5-3-2 principle, there is now a structured way to navigate through the maze of cloud computing and a direction toward an ultimate cloud solution. Cloud computing becomes clear and easy to understand with the 5-3-2 principle.
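To make the framework concrete, here is a minimal sketch that models the 5-3-2 principle as a simple checklist. The names and structure below are illustrative only; they do not map to any product or API.

```python
# A minimal, illustrative model of the 5-3-2 principle.
# The names are descriptive only; they do not map to any product or API.

FIVE_CHARACTERISTICS = {
    "on_demand_self_service",
    "ubiquitous_network_access",
    "resource_pooling",
    "rapid_elasticity",
    "measured_service",       # i.e. pay per use
}

THREE_DELIVERY_METHODS = {"SaaS", "PaaS", "IaaS"}
TWO_DEPLOYMENT_MODELS = {"public", "private"}


def is_cloud_application(exhibited, delivery, deployment):
    """Return True only if an offering exhibits all 5 characteristics and
    uses a recognized delivery method and deployment model."""
    return (
        FIVE_CHARACTERISTICS <= set(exhibited)
        and delivery in THREE_DELIVERY_METHODS
        and deployment in TWO_DEPLOYMENT_MODELS
    )


# Example: a virtualized server farm without self-service or metering
# does not qualify, no matter how well it is automated.
print(is_cloud_application(
    {"resource_pooling", "rapid_elasticity"}, "IaaS", "private"))  # False
```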
This series focusing on cloud essentials for IT professionals includes:
Power of Choices
For an enterprise, cloud computing presents tremendous opportunities to re-architect IT for the future. From infrastructure to business model and management, as depicted in the following, IT now has options to redistribute, reorganize, and reset priorities in ways that were perhaps not feasible in a traditional, on-premises-only computing model. IT can shift the cost from capital expense to operational expense, let a 3rd party run those functions that are not core business, employ the cloud to easily scale up or out, etc. And the benefits of having a dedicated cloud are apparent and attractive. That is the so-called private cloud, a dedicated cloud which can run on premises or be hosted off premises by a 3rd party.
Essence of Cloud
How and where to deploy a service do require strategic planning and solid fundamentals in application architecture, such that the benefits of cloud computing can be realized in a timely and predictable fashion. There are specific capabilities that cloud computing is expected to provide. Essentially, regardless of how cloud computing is deployed or delivered, applications need to be engineered with the quality to exhibit the five characteristics of cloud computing, namely: on-demand self-service, ubiquitous network access, resource pooling, rapid elasticity, and measured service (pay per use).
Else, one can call it whatever one wants. It is however not the cloud computing that the IT industry is talking about.
Approaching Private Cloud
Depending on the priorities of a business, the above 5 characteristics may not all be relevant for building a private cloud. For instance, perhaps ubiquitous network access is not required, since in a private cloud setting IT may not want resources to be that accessible for security reasons. Corporate may not critically need a pay-per-use model due to the complexity and feasibility of implementing chargeback. To approach private cloud, there needs to be an overall vision to define the goals for the next three to five years. Private cloud is expensive, and mechanisms to ensure predictable results must be put in place. Enterprise IT needs to enforce hardware/software standards in the datacenter and automate/optimize operations and procedures when possible. Application architects need to develop cloud application design guidelines to ensure application manageability and readiness for running in a mixed (i.e. cloud and on-premises) environment. On top of applications, a common platform to manage physical, virtualized, and cloud resources transparently and in a consistent fashion is strategic and imperative for transitioning into cloud computing. Without a unified way to manage not only the virtual machines but also the workloads, and without a single pane of glass to manage the applications in the private cloud as well as the public cloud, the cloud transformation will introduce many manageability issues into IT operations, and the process is likely to become divergent, with runaway costs, and eventually unmanageable.
For transforming into cloud computing, there are established strategies. The above shows a logical progression from an on-premises establishment toward off-premises cloud computing. Along the way, virtualization is taking place and will continue to gain momentum (Gartner Symposium ITxpo 2010). This is because, with current technology, virtualization is essential for cloud computing to become a reality. Virtualization also implies that a management solution needs to be in place. For a cloud to scale rapidly with pooled resources and location transparency, virtual machines and the ability to manage the physical hardware, the virtual machines, and the workloads running within them are essential. The management solution will need to know which virtual machines need attention, how to operate them, where to place them, and what to do with them. Further, the ability to monitor and manage the applications/services running within a workload is just as critical, since it is the applications/services that we care about and not the virtual machines themselves. Once virtualization is introduced and matures, enterprise IT can then migrate into an on-premises private cloud before moving toward running the datacenter in a public cloud, if that is the goal. Depending on the business needs, I imagine many IT shops will settle somewhere among virtualized, private cloud, and public cloud. I believe to cloud or not to cloud, that is not the question. The question is really how much and how far.
Specific to private cloud, the above specifies the key milestones, essential capabilities, and recommendations. Each is crucial for contributing to a predictable ROI with cost-effectiveness in the long run. Enterprise IT must rethink how to conduct business, and develop a vision with a roadmap to move from structured responses to on-demand deliveries. And private cloud is the very opportunity.
Do-It-Yourself Private Cloud
The private cloud solutions from Microsoft are collectively called Hyper-V Cloud, which is a set of guidelines and offerings, and not a packaged product. As of February 2011, the Hyper-V Cloud offerings are mainly for building a private cloud of IaaS. Fast Track is delivered by partners who have worked with Microsoft to combine hardware and software offerings based on a reference architecture for building private clouds. There is also a Service Provider Program to identify a qualified service provider to host a dedicated private cloud for you. Above all, there is the Hyper-V Deployment Guide for you to build your own private cloud. Yes, you can build a private cloud yourself. In fact, regardless of which option you plan to use to acquire your private cloud, I highly recommend going through the Deployment Guide and building a private cloud yourself in a lab environment to get a much better understanding of how to architect, implement, and manage a cloud computing solution.
Getting Ready for the Changes
For IT pros, cloud is a leap from administering server boxes to managing services. The abstraction layer introduced by virtualization makes the physical hardware a lesser concern in most cloud computing scenarios. This is hard, and a culture shock for most IT pros. With the continual advancement of virtualization technologies, the changing perspective of IT infrastructure is inevitable. One obvious example is that Microsoft System Center Virtual Machine Manager (VMM) Self-Service Portal (SSP) 2.0 introduces a service-centric view of implementing a private cloud of IaaS. SSP 2.0 uses (Solution) Infrastructures, Services, and Roles as the building blocks, as shown below. A user of a private cloud of IaaS then works with a service delivery model organized in a hierarchy where an Infrastructure consists of Services, each Service includes Roles, and virtual machines are deployed based on the defined Roles.
Integrated with VMM, SSP 2.0 is a free offering from Microsoft that includes a set of web portals, a data store, a lightweight provisioning engine, and documentation and guidance. Within SSP 2.0, a datacenter administrator will first define the resource pools of network, computing (RAM), and storage (disk space) resources, and the cost model for reserving and allocating these resources. There are also predefined templates for deploying virtual machines to be imported from VMM into SSP 2.0 as the datacenter’s resources. An authorized business unit administrator will then register the business unit and make a request to create a Solution Infrastructure and its included Services and Roles. Once a request is approved by the datacenter admin, an authorized user can deploy virtual machines on demand based on the approved computing and storage quotas. The cost of reserving and deploying an Infrastructure is calculated according to a chargeback model.
The following is a sample solution infrastructure for a Staffing solution. The hiring service posts job openings, accepts resumes, and runs through the interview process. Once a candidate is hired as an employee, HR will create a record in the Employee Information service and establish the employment history and confidential records, while the employee can use the same service to maintain personal data like home address, phone numbers, etc. The significance of doing this in a private cloud is that once the cloud is defined, it is centrally managed with self-service, on-demand, workflow, and chargeback capabilities. The system is monitored and managed by VMM. On a regular basis, IT can now generate usage reports to determine the amounts to charge back to business units based on their usage.
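To make the chargeback idea concrete, here is a minimal sketch of how a monthly charge for a business unit’s infrastructure might be computed from reserved resources. The rates, resource names, and figures are placeholder assumptions for illustration, not SSP 2.0’s actual cost model.

```python
# Illustrative chargeback calculation for a business unit's infrastructure.
# Rates and resource names are assumptions for the example only.

RATES = {                      # cost per unit per month
    "vm_instance": 50.00,      # per deployed virtual machine
    "ram_gb": 2.00,            # per reserved GB of memory
    "storage_gb": 0.50,        # per reserved GB of disk
}

def monthly_charge(usage):
    """usage: dict mapping a resource name to the reserved quantity."""
    return sum(RATES[resource] * quantity for resource, quantity in usage.items())

# Staffing solution: 6 VMs across the hiring and employee information services.
staffing_usage = {"vm_instance": 6, "ram_gb": 24, "storage_gb": 500}
print(f"Staffing BU chargeback: ${monthly_charge(staffing_usage):,.2f}")
```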
The provisioning and deployment of a so-called Infrastructure here are quite different from the traditional way of deploying servers in an on-premises computing environment. Because the deployment is carried out with virtual machines, the computing and storage requirements can be, and are, provisioned on demand by changing the specifications of a change request. Once approved by the datacenter admin via workflow, SSP 2.0 will then allow an authorized user to allocate resources within the permitted quota. An authorized user can create/deploy virtual machines on demand, as shown in the following. The construction of an “infrastructure” is now much more focused on designing and deploying the “service” with the requested capacity, and not so much on the involved physical hardware and topology. This allows IT to focus more on enabling the business and not on constantly running cables and setting up servers. This is Infrastructure as a Service with a private cloud in action.
Personally, what gets me most excited about cloud is that all I have discussed is within reach today. Whether to consume, build, or be a cloud, there are so many opportunities to improve IT service delivery, offer a better experience to users, and grow professionally at the same time. Changes are happening and coming on strong. This time, however, I see they are exciting and for the better. Start immediately. Start now to accept the changes, master the changes, and win with the changes.
So let it be known. I am an IT pro and private cloud was my idea.
[To Part 1, 2, 3, 4, 5, 6]
Compared with on-premises computing, deployment to the cloud is much easier. A few clicks will do. And I believe this is probably why many have jumped to the conclusion that we, IT professionals, are going to lose our jobs in the cloud era. There is some truth to it, not much but some in my view. However, not so fast, I say. Bringing cloud into the picture does dramatically reduce cost and complexity in some areas, yet at the same time cloud also introduces many new nuances which demand that IT develop disciplined criteria with business insights. No, I do not think IT professionals are going to lose their jobs; rather, they will affect the bottom line more directly than ever, since with cloud computing everything comes with a dollar sign attached and every IT operation has a direct cost implication. In this article, I discuss some routines and additional considerations for deployment to the cloud. Notice that throughout this article, in the context of cloud computing, I use the terms application and service interchangeably.
For IT professionals, deploying a Windows Azure service to the cloud starts when development hands over the gold bits and the configuration file, as depicted in the schematic below.
In Visual Studio with the Windows Azure SDK, a developer can create a cloud project, build a solution, and publish the solution with an option to generate a package which zips the code, accompanied by a configuration file defining the name and the number of instances of each Compute Role. This package and the configuration file are what get uploaded into the Windows Azure platform, either through the UI of the Windows Azure Developer Portal or programmatically with the Windows Azure API. For those not familiar with the steps and operations to develop and upload a cloud application into the Windows Azure platform, I highly recommend you invest the time to walk through the labs, which are well documented and readily available.
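As a small illustration of how the configuration file drives scale, the sketch below edits the instance count of a role in a service configuration (.cscfg) file before the package is uploaded. The file name and role name are assumptions for the example; in practice this is typically done in Visual Studio or the Developer Portal.

```python
# Illustrative sketch: bump the instance count of a role in a Windows Azure
# service configuration (.cscfg) before uploading the package.
# File and role names here are assumptions for the example.
import xml.etree.ElementTree as ET

def set_instance_count(cscfg_path, role_name, count):
    tree = ET.parse(cscfg_path)
    for role in tree.getroot():
        # Match by local name so the sketch does not depend on the exact
        # XML namespace of the configuration schema.
        if role.tag.endswith("Role") and role.get("name") == role_name:
            for child in role:
                if child.tag.endswith("Instances"):
                    child.set("count", str(count))
    tree.write(cscfg_path)

set_instance_count("ServiceConfiguration.cscfg", "WebRole1", 4)
```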
Application Lifecycle Management
In a typical enterprise deployment process, developers code and test applications locally or in an isolated environment and go through the build process to promote the code to an integrated/QA test environment. And upon finishing tests and QA, the code is promoted to production. For cloud deployment, the idea is much the same, other than that at some point the code will be placed in the cloud and tests will be conducted through the cloud before going live on the Internet. Below is a sample application lifecycle where both the production and the main test environments are in the cloud.
Traditionally, when developing and deploying applications on premises, one challenge (and it can be a big one) is to try to keep the test environment mimicking production as closely as possible to validate the developed code, data, operations, and procedures. Sometimes this can be a major undertaking due to ownership, corporate politics, financial constraints, technical limitations, discrepancies between the test environment and production, etc. And stories of applications that behave as expected in the test environment, yet generate numerous security and storage violations, throw fatal errors, and crash hard once in production, are heard many times. This is different when testing Windows Azure services. It turns out the user experience of promoting code from staging to production in the cloud can be pleasant and something to look forward to.
When code is first promoted into the Windows Azure platform from a local development/test environment, the application is placed in the so-called staging phase, and when ready, an administrator can then promote the application into production. An interesting fact is that the staging and the production environments in the Windows Azure platform are identical. There is no difference; the Windows Azure platform is the Windows Azure platform. What differentiates a staging environment from a production environment is the URL. The former is provided with a unique alphanumeric string as part of the non-published staging URL, like http://c2ek9aa346384629a3401e8119de3500.cloudapp.net/, while the latter is an administrator-specified, user-friendly URL, like http://yc.cloudapp.net. So to promote from a staging environment to production in the Windows Azure platform, simply swap the virtual IP by swapping the staging URL with the production one. This is called a VIP-Swap, and two mouse-clicks are all it takes to deploy a Windows Azure service from staging to production. See the screen capture below. And minutes later, once the Fabric Controller syncs all the URL references of this application, it is in production.
A VIP-Swap is handy for the initial deployment and for subsequently updating a service with a new package. When making changes to a service, the new package is first placed in the staging environment, and a VIP-Swap then promotes the code into production. This feature is however not applicable to all changes of a service definition. In such scenarios, redeploying the service package becomes necessary.
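Conceptually, a VIP-Swap simply exchanges which deployment the production and staging addresses point to; the deployments themselves do not move. The sketch below models that idea with a plain dictionary. It is not the Windows Azure API, only an illustration of why the operation is fast and low-risk.

```python
# Conceptual model of a VIP-Swap: the two deployments stay where they are;
# only the mapping of public endpoints to deployments is exchanged.
endpoints = {
    "http://yc.cloudapp.net": "deployment-A (current production)",
    "http://c2ek9aa346384629a3401e8119de3500.cloudapp.net": "deployment-B (staged update)",
}

def vip_swap(mapping, production_url, staging_url):
    mapping[production_url], mapping[staging_url] = (
        mapping[staging_url], mapping[production_url])

vip_swap(endpoints,
         "http://yc.cloudapp.net",
         "http://c2ek9aa346384629a3401e8119de3500.cloudapp.net")
print(endpoints["http://yc.cloudapp.net"])   # deployment-B is now production
```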
With the Windows Azure platform in the cloud, there are new opportunities to extend as well as integrate on-premises computing with cloud services. An application architecture can accept HTTP/HTTPS requests either with a Web Role or with a front-end that is on premises. And where to process the requests and store the data can be in the cloud, on premises, or a combination of the two, as shown below.
So whether the service starts in the cloud or on premises, an application architect now has many options to architecturally improve the agility of an application. An HTTP request can start with on-premises computing while integrating with a Worker Role in the cloud for capacity on demand. At the same time, a cloud service can just as well employ a middle-tier and a back-end that are deployed on premises for security and control.
Standardizing Core Components
Whether on premises or in the cloud, core components including application infrastructure, development tools, a common management platform, the identity mechanism, and virtualization technology should be standardized sooner rather than later. With a common set of technologies, see below, both IT professionals and developers can apply their skills and experiences to either computing platform. This will in the long run produce higher quality services at lower costs. This is crucial to make the transformation into cloud a convergent process.
By default, diagnostic data in Windows Azure is held in a memory buffer and is not persistent. To access log data in Windows Azure, the log data first needs to be moved to persistent storage. One can do this manually by using the Windows Azure Developer Portal, or add code in the application to dump the log data to storage at scheduled intervals. Next, one needs a way to view the log data in Windows Azure storage, using tools like Azure Storage Explorer.
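The sketch below illustrates the scheduled-transfer idea: a background loop periodically flushes the in-memory buffer to durable storage so the logs survive an instance recycle. It is a conceptual sketch with stand-in names, not the actual Windows Azure Diagnostics API.

```python
# Conceptual sketch of scheduled diagnostic transfers: an in-memory buffer
# is flushed to persistent (paid) storage at a fixed interval so logs
# survive instance recycles. Not the actual Windows Azure Diagnostics API.
import time

log_buffer = []                      # stands in for the in-memory buffer

def persist(entries):
    # Stand-in for a write to Windows Azure table/blob storage.
    with open("wad-log-archive.txt", "a") as archive:
        archive.writelines(line + "\n" for line in entries)

def scheduled_transfer(interval_seconds=60, iterations=3):
    for _ in range(iterations):
        time.sleep(interval_seconds)
        if log_buffer:
            persist(log_buffer)      # move buffered entries to durable storage
            log_buffer.clear()       # buffered data is not kept after transfer

log_buffer.append("WARN  worker role retrying storage call")
scheduled_transfer(interval_seconds=1, iterations=1)
```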
Traditionally, when running applications on premises, diagnostic data is relatively easy to get since it is stored in local storage and can be accessed and moved easily. Logs and events generated by applications are subscribed to, acquired, and stored as long as needed. When moving an application to the cloud, access to diagnostic data is no longer business as usual. Because persisting diagnostic data to Windows Azure storage costs money, IT will need to plan how long to keep the diagnostic data in Windows Azure and how to download it for offline analysis.
The cost of processing diagnostic data is one item in the overall cost model for moving an application to a public cloud. An application cost analysis can start with the big buckets, including network bandwidth, storage, transactions, CPU, etc., and eventually list out all the cost items, each with the organization responsible for that cost. Compare the ROI with that of an on-premises deployment to justify whether moving to the cloud makes sense. Once agreements are made, each cost item/bucket should have a designated accounting code for tracking the cost. IT professionals will have much to do with the cost analysis, since the operational costs of running applications in the cloud are compared with both the capital expense and the operational costs of an on-premises deployment. For example, routine on-premises operations like backup and restore, monitoring, reporting, and troubleshooting are to be revised and integrated with cloud storage management, which has both operational and cost implications.
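Here is a minimal sketch of such a cost model built from the big buckets named above. Every rate and figure is a made-up placeholder; the point is only the structure of listing buckets, owners, and accounting codes, and comparing the total with an on-premises baseline.

```python
# Illustrative cloud cost model: big buckets, owning organizations,
# and accounting codes. All rates and figures are placeholders.
cost_model = [
    # bucket,                monthly estimate, responsible org, accounting code
    ("compute hours",          1200.00, "Engineering", "CC-1001"),
    ("storage (GB-month)",      300.00, "Engineering", "CC-1001"),
    ("storage transactions",    150.00, "Engineering", "CC-1001"),
    ("network bandwidth",       450.00, "Operations",  "CC-2002"),
    ("diagnostics retention",    75.00, "Operations",  "CC-2002"),
]

monthly_cloud_cost = sum(amount for _, amount, _, _ in cost_model)
on_premises_monthly = 2600.00   # placeholder: amortized capex + opex baseline

print(f"Cloud:       ${monthly_cloud_cost:,.2f}/month")
print(f"On-premises: ${on_premises_monthly:,.2f}/month")
print("Cloud favorable" if monthly_cloud_cost < on_premises_monthly
      else "Revisit the architecture or stay on premises")
```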
The takeaway is “Don’t go to bed without developing a cost model.”
Some Thoughts on Security
Cloud is not secure? I happen to believe that in most scenarios, cloud is actually very secure, and more secure than running a datacenter on premises. Cloud security is a topic in itself and certainly far beyond the scope of the discussion here. Nevertheless, you will be surprised how fast a cloud security question can be raised and answered, many times without actually answering it. In my experience, many answers to cloud security questions seem to surface themselves once the context of the question is correctly set. Keep the concept of separation of responsibilities vivid whenever you contemplate cloud security. It will bring so much clarity. For SaaS, there is very little a subscriber can do, and security at the network, system, and application layers is largely predetermined. A PaaS like the Windows Azure platform, on the other hand, presents many opportunities to place security measures in multiple layers with a defense-in-depth strategy, as shown below.
Recently, some Microsoft datacenters hosting services in the cloud for federal agencies achieved FISMA certification and received an ATO (Authorization to Operate). This is very much proof that cloud operations can meet and exceed rigorous standards. The story of cloud computing will only get better from here.
Cloud is not secure? Think again.
One very important concept in cloud computing is the notion of fabric, which represents an abstraction layer connecting resources to be dynamically allocated on demand. In Windows Azure, this concept is implemented as the Fabric Controller (FC), which knows the what, where, when, why, and how of the resources in the cloud. As far as a cloud application is concerned, FC is the cloud OS. The Fabric Controller shields us from the need to know all the complexities in inventorying, storing, connecting, deploying, configuring, initializing, running, monitoring, scaling, terminating, and releasing resources in the cloud. So how does FC do it?
A key technology that makes cloud computing a reality is virtualization. An apparent, in-production example is that Windows Azure abstracts hardware through virtualization and creates a virtual machine (VM) for each Role instance. Here, the VMs and the underlying hypervisor together offer multiple layers of isolation, and virtualizing a computing resource further allows it to be moved to any number of physical hosts in a data center. The following schematic illustrates the implementation of the Windows Azure computing model discussed in Part 3 of the series. Each instance of either a Web Role or a Worker Role runs in an individual VM. And depending on the configuration of an application, there can be multiple instances of a given Role.
A virtual machine is physically a virtual hard disk (VHD) file, which has a number of advantages. For instance, not only is it easier to manage files than to work with physical partitions, disks, and machines, but a VHD file can also be maintained while offline, i.e. without the need to boot up the OS image installed in it. The Virtual Machine Servicing Tool (VMST) is one such tool, freely available from Microsoft. There have been many active discussions on server virtualization, desktop virtualization, Application Virtualization (App-V), and Virtual Desktop Infrastructure (VDI). And many IT organizations have already started consolidating servers and introducing various forms of virtualization into their existing computing environments, as reported in many case studies.
Make no mistake, nevertheless. Virtualization is not a destination, but a stepping stone for enterprise IT to transform from what was a hardware-dependent and infrastructure-focused deployment vehicle into, now and going forward, a user-centric and cloud-friendly environment. Although virtualization is frequently motivated by cost savings, I believe the long-term and strategic business benefits result from deployment flexibility. Facing the many challenges and unknowns already in place and ahead, brought by the Internet, IT needs to make sure new investments are strategic, and at the same time transform existing establishments into something flexible and agile. IT needs the ability to manage computing resources, both physical and virtualized, transparently and on a common management platform, while securely delivering applications to authorized users anytime, anywhere, and on any device. Fundamentally, virtualization provides abstraction for manageability and isolation for security, to dynamically scale and secure instances of workloads. For enterprise IT, virtualization is imperative, a critical step toward building a cloud-friendly and cloud-ready environment. The takeaway is that virtualization should be on every enterprise IT roadmap, if it is not already. And a common management platform with the ability to manage physical and virtualized resources transparently is essential and should be put in place as soon as possible.
The concept of fabric in Microsoft’s in-production implementation exhibits itself in the so-called Fabric Controller, or FC, which is an internal subsystem of Windows Azure. FC, also a distribution point in the cloud, inventories and stores images in a repository, and:
For FC to control a deployed instance inside of a VM and carry out all the above tasks, there are Agents in place. The following schematic depicts the architecture.
When FC builds a node in the data center, a Fabric Agent (FA) is included and automatically initialized in the root partition. FA exposes an API letting an instance interact with FC and is then used to manage the Guest Agent (GA) running in a guest VM, i.e. a child partition. Manageability is logically established with the ability for FC to monitor, interact with, trust, and instruct FA, which then manages the GAs accordingly. Behind the scenes, FC also makes itself highly available by replicating itself across groups of machines. In short, FC is the kernel of the cloud OS and manages both the servers and the services in the data center.
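To keep the moving parts straight, the sketch below models the hierarchy just described as plain data: FC talks to the Fabric Agent in a node’s root partition, and that agent manages the Guest Agents in the node’s child partitions. The structure is only an illustration of the relationships, not Windows Azure internals.

```python
# Illustrative model of the management hierarchy described above:
# Fabric Controller -> Fabric Agent (root partition) -> Guest Agents (child VMs).
node = {
    "fabric_agent": {                 # runs in the root partition, talks to FC
        "guest_agents": [             # one per guest VM (child partition)
            {"vm": "WebRole_IN_0",    "healthy": True},
            {"vm": "WorkerRole_IN_0", "healthy": False},
        ]
    }
}

def unhealthy_instances(node):
    """What FC ultimately wants from the FA: which guests need attention."""
    return [ga["vm"] for ga in node["fabric_agent"]["guest_agents"]
            if not ga["healthy"]]

print(unhealthy_instances(node))      # ['WorkerRole_IN_0']
```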
This is another term getting overused and confusing. I am as guilty as anyone of using the term frequently without putting it in context. Not all AppFabrics are exactly the same, after all. There are, as shown below, Windows Server AppFabric and Windows Azure AppFabric. The former is available as extensions to the Application Server role of Windows Server, while the latter provides cloud-based services to connect users and applications across the Internet. Both are part of Microsoft’s application infrastructure (or middleware) technologies.
Relevant to Windows Azure, many seem to assume Windows Azure AppFabric and FC are the same, similar, or related. This is incorrect; they are not. Windows Azure AppFabric is a cloud middleware offering a common infrastructure to name, discover, expose, secure, and orchestrate web services on the Windows Azure platform. Windows Azure AppFabric includes a number of services:
The Service Bus service can traverse firewalls and NAT devices, without forfeiting the security afforded by these devices, to relay messages from clients through Windows Azure to software running on premises. The Access Control service offers a claims-based mechanism, federated with Active Directory Federation Services (AD FS) 2.0, accessible by Windows Azure, other cloud, and on-premises applications. For those who would like to know more technical details and develop cloud applications based on Windows Azure AppFabric, there are good references including an overview whitepaper and An Introduction to Windows Azure AppFabric for Developers.
In an oversimplified description, FC is the kernel of Windows Azure (a cloud OS) and manages the hardware and services in a data center, while Windows Azure AppFabric is cloud middleware for developing applications. For IT pros, a must-read overview article of Windows Azure is available elsewhere. And a nicely packaged set of content, the Windows Azure Platform Training Kit, is also a great way to learn more about the technology.
The series focusing on cloud essentials for IT professionals includes:
In Part 2, I basically said cloud is to provide “Business as a Service,” i.e. making a targeted business available on demand. In digital commerce, much of a business is enabled by IT. Therefore, cloud in essence delivers “IT as a Service,” or IT available on demand, i.e. anytime, anywhere, on any device. This is what we want IT to become via cloud. Realize that “on-demand” in the context of cloud computing also implies a set of attributes as described in Part 1, including ubiquitous network access, resource pooling, pay per use, and so on.
Nonetheless, IT is not about implementing technologies, which are a means and not the end. All the infrastructure, servers, desktops, SaaS/PaaS/IaaS, public cloud, private cloud, etc. is about one thing and one thing only. That is to provide authorized users “applications” with which transactions are made and business is carried out. Whether in the cloud or on premises, it is about applications. So, how is a cloud application different from a traditional one? And if so, in what way, as far as IT pros are concerned?
Traditional Computing Model
A typical 3-tier application includes a front-end, a middle-tier, and a back-end. For a web application, the front-end is a web site which presents the application. The middle-tier holds the business logic while connecting to a back-end where the data is stored. And along the data path, load balancers (LB) are put in place to optimize performance, and clusters are constructed for high availability. This model is well understood. And the 3-tier architecture represents a mainstream design pattern for applications developed prior to the emerging cloud era. The concept is illustrated below, and some may find similarities to the ideas applicable to architecting a cloud application.
Cloud Computing Model
Microsoft Windows Azure abstracts hardware through virtualization and provides on-demand, cloud-based computing, where the cloud is a set of interconnected computing resources located in one or more data centers. Generally speaking, like a 3-tier design, there are 3 key architectural components of a cloud application based on Windows Azure: Compute, Storage, and the Fabric Controller, as shown below. In this model, Compute is the ability to execute code, i.e. run applications. Storage is where the data resides. In Windows Azure, Compute is defined with Roles, and both Compute and Storage are offered as system services. A Role has configuration files to specify how a component may run in the execution environment. The Fabric Controller, meanwhile, is a subsystem which monitors and makes decisions on what, when, and where to run and optimize a cloud application. I will talk more about the Fabric Controller in Part 4 of this series; meanwhile, let’s examine the Compute and Storage components here.
Specifically, in the Compute service, there are Web Role, Worker Role, and VM Role. A Web Role, implemented with IIS running in a virtual machine, is to accept HTTP and HTTPS requests from public endpoints. And in Windows Azure, all public endpoints are automatically load balanced. A Worker Role, on the other hand, does not employ IIS; it is an executable for computation and data management, and functions like a background job to accept requests and perform tasks. For example, a Worker Role can be used to install a user-specified web server or host a database as needed.
Roles communicate by passing messages through queues or sockets. The number of instances of an employed Role is determined by an application's configuration, and each Role instance is assigned by Windows Azure to a unique Windows Server virtual machine instance. An application of the Windows Azure computing model to a real-life shopping list application is shown below. The actual development process and considerations certainly involve much more, as discussed elsewhere.
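The sketch below illustrates the message-passing pattern between a front-end role and a background role using a plain in-process queue as a stand-in for Windows Azure queue storage; it shows the division of labor, not the actual storage API.

```python
# Conceptual sketch of the Web Role / Worker Role pattern: the front end
# accepts a request and drops a message on a queue; a background worker
# polls the queue and does the processing. A plain in-process queue stands
# in for Windows Azure queue storage.
from queue import Queue, Empty

work_queue = Queue()

def web_role_handle_request(item_name):
    """Front end: accept the HTTP request, enqueue the work, return quickly."""
    work_queue.put({"task": "add-to-shopping-list", "item": item_name})
    return "202 Accepted"

def worker_role_poll_once():
    """Background worker: pull one message and process it, if any."""
    try:
        message = work_queue.get_nowait()
    except Empty:
        return None
    return f"processed {message['task']} for {message['item']}"

print(web_role_handle_request("milk"))
print(worker_role_poll_once())
```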
On the other hand, a VM Role is a virtual machine. A developer can employ a VM Role (namely, upload an OS image as a VHD) to run Windows services, schedule tasks, and customize the runtime environment of a Windows Azure application. This VHD is created using an on-premises Windows Server machine, then uploaded to Windows Azure. Once it’s stored in the cloud, the VHD can be loaded on demand into a VM Role and executed. Customers can, and need to, configure and maintain the OS in the VM Role. The following outlines the methodology.
Do keep in mind, however, that a VM Role is stateless. Specifically, the VM Role is designed to facilitate deploying a Windows Azure application which may require a long, fragile, or non-scriptable (i.e. cannot-be-automated) installation. This role is especially suited for migrating existing on-premises applications to run as hosted services in Windows Azure. There are an overview and step-by-step instructions readily available detailing how to successfully deploy a Windows Azure VM Role.
The other component in a cloud application is Windows Azure Storage services, with five types of storage as shown below.
And within a Compute node, there are two types, also shown below.
There are tools to facilitate managing Storage instances. A graphical UI like Azure Storage Explorer can make managing and viewing stored data a productive experience. Notice that the above-mentioned storage types are however not relational databases, upon which many applications are nowadays built. SQL Azure, part of the Windows Azure platform, is SQL in the cloud. And for DBAs, whether it is Microsoft SQL Server on the ground or SQL Azure in the cloud, you manage them very much the same way.
An example of using Windows Azure storage is presented in the following schematic. This is a hosted digital asset management web application. It uses a Worker Role as the background processor to generate and place images into a store implemented with Windows Azure BLOB services, from which the Web Role, as the front-end, later retrieves them.
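The sketch below captures that division of labor with an in-memory stand-in for BLOB storage: the background processor writes generated images under container/blob names, and the front end later reads them back by the same names. It is illustrative only, not the Windows Azure storage API.

```python
# Conceptual sketch of the digital asset flow above: a background processor
# writes generated images into BLOB-style storage (addressed by container
# and blob name), and the web front end later retrieves them by the same
# address. A dict stands in for Windows Azure BLOB storage.
blob_store = {}   # {(container, blob_name): bytes}

def worker_generate_thumbnail(asset_id, image_bytes):
    """Worker Role: produce the derived image and place it in BLOB storage."""
    blob_store[("thumbnails", f"{asset_id}.png")] = image_bytes

def web_fetch_thumbnail(asset_id):
    """Web Role: serve the image straight from BLOB storage."""
    return blob_store.get(("thumbnails", f"{asset_id}.png"))

worker_generate_thumbnail("asset-42", b"\x89PNG...")
print(web_fetch_thumbnail("asset-42") is not None)   # True
```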
In summary, many of our architectural concepts from a traditional on-premises 3-tier application are applicable to designing cloud applications using Windows Azure’s computing model. Namely, employ a Web Role as the front-end to accept HTTP/HTTPS requests, and a Worker Role to perform specific tasks like traditional ASP.NET services. There are various types of storage Windows Azure provides. There is also SQL Azure, Microsoft SQL Server in the cloud, making it convenient to migrate existing data or integrate on-premises databases with those in the cloud.
In Part 1, I talked about what “service” in the context of cloud computing means. Cloud is all about delivering services, i.e. making resources available on demand based on needs, paid by use, and with the characteristics of ubiquitous network access, resource pooling, etc. Still, we need to clearly define what cloud is. Without a common definition for a subject as broad as cloud computing, it is hard to navigate through the overwhelming business and technical complexities. So here’s the six-million-dollar question.
What Is Cloud
It is important to understand that there are service delivery models and deployment models, and both are needed to fully describe what cloud is. There are 3 ways to deliver services via cloud.
Software as a Service, or SaaS, is a model where an application is available on demand. It is the most common form of cloud computing delivered today. Microsoft Office 365, including Exchange Online, SharePoint Online, Lync Online, and the latest version of the Microsoft Office Professional Plus suite, is a SaaS offering to businesses.
Platform as a Service, or PaaS, is a platform available on demand for development, testing, deployment, and ongoing maintenance of applications without the cost of buying the underlying infrastructure and software environments. The Windows Azure Platform is a cloud computing platform on which Microsoft’s internal IT (MSIT) organization quickly built and deployed the Social eXperience Platform (SXP) to enable social media capabilities across Microsoft.com, as documented.

Infrastructure as a Service, or IaaS, is computing infrastructure, i.e. servers, storage, and networking delivered as virtual machines and related resources, available on demand so that a subscriber can provision and manage the OS, runtime, data, and applications without owning the physical hardware.
On deployment, there are two base models. Public cloud is cloud computing made available through the Internet to the general public or targeted users, and is owned by an organization offering cloud services. Examples are Microsoft Windows Live as a free public cloud offering for consumers, and Microsoft Online Services’ Office 365 for businesses. Private cloud, on the other hand, is a cloud available solely for an organization, regardless of whether the cloud capabilities are managed by the organization or a third party, and whether it exists on premises or off premises. Based on the two models, some derive additional models like hybrid cloud, community cloud, etc. to highlight the implementation or intended audiences. For private cloud, two service delivery models, PaaS and IaaS, are applicable, since in a private setting one cannot deliver SaaS without having PaaS in place. Notice that Hyper-V Cloud is a solution for building a private cloud; it is a set of initiatives, guidelines, and offerings to help enterprises deliver IaaS in a managed environment. Also, the above-mentioned delivery models are significant since, once a model is selected to fulfill business objectives, responsibilities are implicitly agreed upon and accepted by the party hosting the cloud facility and the party subscribing to the services.
Separation of Responsibilities
An important attribute of cloud computing is the separation of a subscriber’s responsibilities from those of a service provider. And by subscribing to a particular service delivery model, a subscriber in essence agrees to relinquish a certain level of access to and control over resources managed by the service provider. As I have discussed in Cloud Computing Primer for IT Pros, we must recognize and stay mindful of the limitations of each service delivery model when assessing cloud. When a particular function or capability like security, traceability, or accountability is needed yet not provided by an intended delivery model, a subscriber needs to either negotiate with the service provider and put specifics in a service level agreement, or employ a different delivery model such that the desired function becomes available. A lack of understanding of the separation of responsibilities, in my view, frequently results in false expectations of what cloud computing can or cannot deliver.
Cloud computing, or simply cloud, is changing how IT delivers services and how a user can access computing resources at work, from home, and on the go. Cloud enables IT to respond to business opportunities with on-demand deliveries that are cost-effective and agile in the long run. Much of what is happening in enterprise IT now is a journey to transform the existing IT establishment into a cloud-friendly, cloud-ready, cloud-enabled environment. To start off, there are key concepts we, as IT pros, must grasp to fully appreciate the transformation that is going on now and going forward.
What Is Service
In the context of IT, “service” is a term frequently used to describe a form of delivery or availability. In a Windows machine, for example, core services to authenticate users and process commands automatically start and run behind the scenes to provide essential functions for running a desktop session. In the context of cloud computing, I simply explain a service as something delivered “on demand.” Namely, a computing resource delivered as a “service” is available on demand to an authorized user. Specifically in cloud computing, “on demand” also carries additional connotations.
On-demand in the context of cloud computing suggests that how a resource is made available is transparent and not a concern of a subscriber. It implies computing capacities can be adjusted dynamically according to demand. In other words, a subscriber can increase the capacities as needed and decrease them when no longer required. On-demand also means there is a business model in place to support “pay as you go” and “pay according to how much you have consumed.” In a production environment, there may be administrative as well as operational constraints on how much and how fast a subscriber can change the resource allocations. This can and should be negotiated and stated in a service level agreement between a subscriber and a service provider. Conceptually, a service delivered through the cloud is a set of computing resources that is available, scalable, and consumable based upon demand. On-demand essentially conveys the characteristics of cloud.
Characteristics of Cloud Computing
Cloud, similar to many IT terms like database, networking, security, collaboration, portal, and workspace, is something that too often means different things to different people. Accessing your company’s application via the Internet, is that cloud computing? Employing a VPN to authenticate into your private network, is that private cloud? Is remote access considered some form of cloud computing? These questions may seem trivial, yet they are fundamental to preclude ambiguity, uncertainty, and uneasiness when we are facing changes and transitioning from infrastructure-focused deployments to service-centric, i.e. cloud, deliveries. For technical professionals, cloud may mean utility computing, high-speed grids, virtualization, automatic configuration and deployment, on-demand and remote processing, and combinations of them. For non-technical users, cloud is simply the Internet, a cable from a service provider, or just something out there networked with my computer. Whether public, private, or in between, the conventional wisdom, as published in The NIST Definition of Cloud Computing, assumes noticeable characteristics regarding how computing resources are made available in the cloud, including on-demand self-service, broad network access, resource pooling, rapid elasticity, and measured service.
And realize that, based upon the delivery model, these characteristics translate into different user experiences. For instance, on-demand self-service may imply the ability to acquire an account and create a user profile in SaaS, code and publish an application in PaaS, or configure and deploy a virtual machine in IaaS. This may not be as apparent without a clear understanding of how services are deployed and delivered in the cloud.
YES, the wait is over. SBS 2011 (which is based on Windows Server 2008 R2 technologies) has been released to manufacturing today. And around mid-January, trial will be available for download at the SBS web site. Here’s some additional information:
For all the IT pros out there, this is a great reference to keep. I have downloaded it, installed it on all my machines, and made it readily available for me either in the office or on the road.
Although mainly developer-focused, this nicely packaged content explains Microsoft’s Platform as a Service (PaaS) solution well with labs, samples, presentations, videos, and demos. And there are core scenarios in developing and deploying cloud applications that IT pros should be familiar with in order to successfully assess the pros and cons of running IT on premises and in the cloud. Make no mistake about it. Cloud is here. And in my view, understanding Windows Azure and its services is not just about learning a different technology. It is about staying in the game and taking advantage of the opportunity, or becoming obsolete sooner than expected and worrying about losing your job. The more you ramp up your skill set with cloud, the clearer and bluer the sky you will get. That is what has been happening to me.
On December 2, 2010, Microsoft announced that its cloud infrastructure (data centers) had received a Federal Information Security Management Act of 2002 (FISMA) Authorization to Operate (ATO). This ATO was issued to Microsoft’s Global Foundation Services organization, which provides a trustworthy foundation for the company's cloud services, including Exchange Online and SharePoint Online, which are currently in the FISMA certification and accreditation process. This ATO represents the government’s reliance on Microsoft’s security processes in compliance with FISMA requirements.
One way to describe cloud computing is based on the service delivery models. There are three, namely Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS), and Infrastructure-as-a-Service (IaaS), and depending on the model, a subscriber and a service provider hold different roles and responsibilities in completing a service delivery. Details of SaaS, PaaS, and IaaS are readily available and are not repeated here. Instead, a schematic is shown below highlighting the various functional components exposed in the three service delivery models of cloud computing, compared with those managed in an on-premises deployment.
Essentially, cloud computing presents a separation of a subscriber’s roles and responsibilities from those of a service provider. And by subscribing to a particular service delivery model, a subscriber implicitly agrees to relinquish a certain level of access to and control over resources. In SaaS, the entire delivery is provided by a service provider through the cloud. The benefit to a subscriber is that there is ultimately no maintenance needed, other than the credentials to access the application, i.e. the software. At the same time, SaaS also means there is little control a subscriber has over how the computing environment is configured and administered outside of the subscribed application. This is the user experience of, for example, an email offering or weather reports on the Internet.
In PaaS, the offering is basically the middleware, where the APIs are exposed, the service logic is derived, the data is manipulated, and the transactions are formed. It is where most of the magic happens. A subscriber in this model can develop and deploy applications with much control over the applied intellectual property.
Out of the three models, IaaS provides the most manageability to a subscriber. From the OS and runtime environment to the data and applications, all are managed and configurable. This model presents opportunities for customizing operating procedures, with the ability to provision on demand IT infrastructure delivered by virtual machines in the cloud.
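A compact way to summarize the separation of responsibilities across the three delivery models is a simple mapping of each layer to who manages it. The layer names in the sketch below are generic assumptions for illustration.

```python
# Who manages which layer under each delivery model (illustrative layer names).
# "provider" = managed by the service provider, "subscriber" = managed by you.
responsibilities = {
    #  layer:            (SaaS,        PaaS,         IaaS)
    "application":        ("provider", "subscriber", "subscriber"),
    "data":               ("provider", "subscriber", "subscriber"),
    "runtime/middleware": ("provider", "provider",   "subscriber"),
    "operating system":   ("provider", "provider",   "subscriber"),
    "virtualization":     ("provider", "provider",   "provider"),
    "hardware/network":   ("provider", "provider",   "provider"),
}

def subscriber_managed(model):
    index = {"SaaS": 0, "PaaS": 1, "IaaS": 2}[model]
    return [layer for layer, owners in responsibilities.items()
            if owners[index] == "subscriber"]

print(subscriber_managed("IaaS"))   # most manageability of the three models
```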
An important takeaway is that we must recognize and stay mindful of the limitations of each service delivery model when assessing cloud computing. When a particular function or capability like security, traceability, or accountability is needed, yet not provided with a subscribed service, a subscriber needs to negotiate with the service provider and put specifics in a service level agreement. A lack of understanding of the separation of responsibilities, in my view, frequently results in false expectations of what cloud computing can or cannot deliver.
This is the sixth and last article of a series to review the following five BI vehicles in SharePoint 2010:
Business reports back in mainframe and early PC days used to be tedious to generate, difficult to read, and painful to share. The administration and skills needed to organize, develop, and distribute data and reports were not trivial. I can still remember my consultant days working on JCLs and COBOL to customize business reports in various mainframe shops. Today, with some key integrations and tools, it is much easier to generate reports using web services and a report generator.
In SharePoint 2010, a report server can be configured as part of a SharePoint deployment. The integration is provided through SQL Server and the Reporting Services Add-in for SharePoint Products, and it brings benefits in storage, security, and document access. Once configured, opening a report in SharePoint will, behind the scenes, establish a session with the associated report server, which retrieves and processes the data and then displays the results in the Report Viewer Web Part in SharePoint. Essentially, Reporting Services can now be consumed directly from SharePoint document libraries under the SharePoint content management and security models. The following depicts the architecture and the steps to enable this integration:
In addition, SQL Server Reporting Services is also integrated with Report Builder 3.0, a feature-rich report authoring tool for end users. Sparklines, data bars, maps, and indicators are some of the new features that enhance data visualization of KPIs in a report. For those who would like to learn more, there is plenty of information readily available for mastering Report Builder 3.0.
(A cross-posting from Microsoft SharePoint Experts Blog)
This is the fifth article of a series to review the following five BI vehicles in SharePoint 2010
Formerly a separate product, PerformancePoint is now included in SharePoint 2010 as a set of services configured as a service application, and it surfaces in a web part page with Key Performance Indicators (KPIs), Scorecards, Analytic Charts and Grids, Reports, Filters, Dashboards, and more. Each of these components interacts with a server component handling data connectivity and security. This integration with SharePoint 2010 brings opportunities to better analyze data at various levels, while the SharePoint security and repository framework provides consistency, scalability, collaboration, backup and recovery, and disaster recovery capabilities. One very interesting analytics tool in PerformancePoint is the Decomposition Tree, which enables a user to navigate through massive amounts of data in a visual and intuitive way to decompose, surface, and rank data based on selected criteria. The user experience is shown below.
PerformancePoint is installed by default in SharePoint 2010. It can be easily configured as a service application in Central Administration and deployed in a SharePoint farm as shown below. Overall, this integration makes Business Intelligence much more approachable in terms of system integration and administration. The PerformancePoint planning and administration documentation, the developer and IT pro centers, and the MSDN blog are good resources for more information.
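For those who prefer scripting the setup over clicking through Central Administration, the following is a minimal sketch using the SharePoint 2010 Management Shell. The service application, proxy, pool, account, and names below are placeholders, so adjust them for your farm and verify the cmdlets are available in your installation.

# Provision PerformancePoint Services as a service application (all names are placeholders).
# CONTOSO\spservice must already be registered as a managed account in the farm.
$pool = New-SPServiceApplicationPool -Name "PerformancePointAppPool" -Account "CONTOSO\spservice"
$app  = New-SPPerformancePointServiceApplication -Name "PerformancePoint Service Application" -ApplicationPool $pool
New-SPPerformancePointServiceApplicationProxy -Name "PerformancePoint Service Application Proxy" -ServiceApplication $app -Default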
This is the fourth article of a series to review the following five BI vehicles in SharePoint 2010:
A picture is worth a thousand words, and this cannot be more applicable than to what Visio Services can deliver. A feature of SharePoint 2010, Visio Services enables data-bound Visio drawings to be viewed in a web browser. It is for sharing Visio drawings and letting authorized users view Visio diagrams in a SharePoint library without having Visio or the Visio Viewer installed on their local computers. Visio Services can also refresh data and recalculate the visuals of a data-connected Visio drawing hosted on a SharePoint 2010 site, so a user will always see the latest information in a visual form. For instance, a complex manufacturing supply chain can be presented with clarity, simplicity, and up-to-date status with Visio Services, as shown below. A Visio Services overview is a good starting point to better understand this feature, and the installation and administration of Visio Services are very easy to follow.
Visio Services can display Visio drawings using a Web Part without Microsoft Visio 2010 installed locally on the client computer. However, Visio Services is not for creating or editing Visio diagrams. To create, edit, and publish diagrams to Visio Services, an author must have Microsoft Visio Professional 2010 or Microsoft Visio Premium 2010 installed locally.
Available only with the SharePoint Server 2010 Enterprise Client Access License (ECAL), Visio Services must be deployed, provisioned, and enabled before first use. In addition, one must have Microsoft Visio Professional 2010 or Microsoft Visio Premium 2010 in order to save diagrams to SharePoint as web drawings.
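As a deployment convenience, the provisioning can also be scripted. Here is a minimal sketch from the SharePoint 2010 Management Shell; the application, proxy, pool, and account names are placeholders, so substitute your own and confirm the cmdlets exist in your farm.

# Provision the Visio Graphics Service as a service application (all names are placeholders).
# CONTOSO\spservice must already be a registered managed account.
$pool = New-SPServiceApplicationPool -Name "VisioGraphicsAppPool" -Account "CONTOSO\spservice"
$app  = New-SPVisioServiceApplication -Name "Visio Graphics Service Application" -ApplicationPool $pool
New-SPVisioServiceApplicationProxy -Name "Visio Graphics Service Application Proxy" -ServiceApplication $app.Name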
To view a Visio drawing that is based on a SharePoint list or on an Excel workbook connected to Excel Services, a user must be authenticated and authorized by the SharePoint 2010 farm hosting the content. Three authentication methods are supported:
While developing an enterprise service architecture, planning for services that access external data sources is something not to overlook. For a service application such as one of the following that uses a delegated Windows identity to access an external source, the external data source must either reside within the same domain as the SharePoint 2010 farm where the service application is located, or the service application must be configured to use the Secure Store Service.
Namely, delegation of a Windows identity, the Windows domain, and the Secure Store Service are a few things to keep in mind if a service application is to access a data store beyond the SharePoint farm where it is running. In other words, do the right thing and plan your Visio Services deployment.
This is the third article of a series to review the following five BI vehicles in SharePoint 2010:
First introduced in Microsoft Office SharePoint Server 2007, Excel Services provides server-side calculation and browser-based rendering of Excel workbooks. On the right is the architectural concept of Excel Services. At the core is Excel Calculation Services (ECS), the calculation engine. Excel Web Access (EWA) is a web part that displays and interacts with a workbook. Access to methods and objects is through APIs provided by Excel Web Services (EWS) hosted in SharePoint.
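For administrators who want to stand this up from the command line rather than Central Administration, below is a minimal sketch using the SharePoint 2010 Management Shell. The pool, account, application name, and site URL are placeholders, and the trusted file location is only an example of where published workbooks might live; check the cmdlets and parameters against your farm before using them.

# Provision Excel Services and trust a document library for published workbooks (names and URLs are placeholders).
$pool  = New-SPServiceApplicationPool -Name "ExcelServicesAppPool" -Account "CONTOSO\spservice"
$excel = New-SPExcelServiceApplication -Name "Excel Services Application" -ApplicationPool $pool
New-SPExcelFileLocation -ExcelServiceApplication $excel -Address "http://intranet/sites/bi/reports" -LocationType SharePoint -IncludeChildren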
Excel Services allows a user to publish a workbook or selected spreadsheet cells as a web page. Because the content is published without exposing the underlying business logic, intellectual property is protected and applied in a standardized, consistent fashion. The motivation is to publish "one version of the truth," such that users always view a consistent set of values when published as read-only, and results derived from business logic that is consistently defined. In a large organization, consistency and synchronicity are key productivity enablers, which is what many SharePoint features are about. Both Excel 2010 and Excel 2007 can publish an Excel workbook to a SharePoint site.
By naming selected cells in an Excel workbook, an author can indirectly let a user change the cell values and apply them as parameters of an analytical model. For example, as shown on the left, a user provides the interest rate, loan period, and loan value in the Parameters Task Pane to calculate a monthly mortgage payment. When any of the three parameters changes, the derived monthly mortgage payment changes accordingly. Since the business logic, i.e. the formulas, embedded in these cells is not exposed, Excel Services can display the results while keeping the business logic implemented in a consistent and protected way. In this example, the mortgage calculation happens to be a well-known formula and the protection may appear trivial. However, in a production application this may be a work order estimate or a marketing program discount rate calculator. In such cases, Excel Services not only protects the underlying business logic, perhaps based on proprietary knowledge, but also ensures the logic is applied in a consistent and predictable fashion.
In other words, in addition to publishing one version of the truth as read-only data such as KPIs, charts, and tables, Excel Services can also allow a user to enter values as parameters to a protected analytical model and carry out what-if analysis.
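To make the mortgage example concrete, here is the standard fixed-rate amortization formula that the protected cells would implement, sketched in PowerShell purely for illustration; the principal, rate, and term values are placeholders.

# Monthly payment = P * r / (1 - (1 + r)^-n), where r is the monthly rate and n the number of payments.
$principal  = 250000    # loan value (placeholder)
$annualRate = 0.05      # yearly interest rate (placeholder)
$years      = 30        # loan period (placeholder)
$r = $annualRate / 12
$n = $years * 12
$payment = $principal * $r / (1 - [math]::Pow(1 + $r, -$n))
"Monthly payment: {0:N2}" -f $payment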
This is the second article of a series to review the following five BI vehicles in SharePoint 2010:
Excel 2010 and PowerPivot introduce a fascinating integration. PowerPivot, a data analysis tool and free add-in to Excel 2010, gives users the power to create compelling BI solutions right from an individual's desktop. With this add-in to Excel, a user can transform mass quantities of data, with significant speed, into meaningful information to facilitate decision making. Excel with PowerPivot is, in essence, a user-driven, self-service solution model with minimal infrastructure dependency, making it a great candidate for anyone who needs a quick, user-driven BI solution.
PowerPivot natively supports various data stores, SQL Server Analysis Services, and data feeds, as shown above. Once in place, PowerPivot has an in-memory engine capable of processing millions of rows of data with impressive performance. At a minimum, 1 GB of RAM is expected to run PowerPivot; the actual RAM needed will obviously depend on the amount of data and the business logic that PowerPivot carries. The computing power is delivered directly within Excel with a consistent user experience. At an operational level, an Excel user will be able to employ PowerPivot very easily with just routine Excel end-user operations, i.e. mouse clicks, cut-and-paste, etc. It is a very cost-effective way to analyze large amounts of data for achieving business insight and shortening decision cycles. There is also an integration of PowerPivot with SharePoint 2010 (see below) that scales this self-service BI model to an enterprise level.
PowerPivot for SharePoint adds services and infrastructure for loading and unloading PowerPivot data. After creating a PowerPivot workbook, one can save or publish it to a SharePoint server or farm that has PowerPivot for SharePoint and Excel Services installed, which adds collaboration and document management support for the published PowerPivot workbook. In SharePoint, the PowerPivot service processes the data while Excel Services renders it in a browser session. The SharePoint integration enables users to share data models and analyses, and by configuring refresh cycles, the data can stay current automatically. Further, a published workbook may become the basis for Reporting Services reports created by other authorized SharePoint users, be repurposed in other PowerPivot workbooks, or be linked to from different sites, possibly in different farms. There are many interesting business scenarios and possibilities with PowerPivot. To learn more, http://powerpivot.com/ is a great resource.
Is There A Need for Processing Millions of Rows of Data
A key delivery of PowerPivot is the ability to process millions of rows of data. The amazing capacity of sorting more than 100 million rows on a desktop, delivered by Excel 2010 and PowerPivot, is shown below.
In some of my TechNet events, a few IT professionals nevertheless told me they would never need to process millions of rows of data in Excel. I was not surprised by this response and, in a way, it was very true... until the introduction of PowerPivot. In my view, the capacity and performance limitations that existed in hardware and software made it impractical to process extremely massive amounts of data in a desktop environment. That was not because there had been no need for processing very large amounts of data. Companies would need to spend a lot of time and money, contracting it out or having a developer team develop, produce, and maintain reports for making business decisions. I can vividly remember, in the mid-90s while working as a consultant, many of my engagements were to fix business logic and improve the performance of COBOL reports based on large amounts of data. When it comes to statistical models, demographics, trend analysis, optimization, information portals, etc., there is never too much data, and the demand has always been there. The difference is that reports used to take a team of specialists and operators hours to implement and much CPU time to generate; now they are available in seconds at the fingertips of an information worker running a Windows 7 desktop and Excel 2010 with PowerPivot.
And just as many of us once argued that 1024x768 resolution would be more than enough for word processing and email, today few work with a low-resolution screen anymore. Not too long ago, I thought Instant Messaging was counterproductive, while today it is a necessity for me to be productive and take care of business every day. So, does everyone need the ability to process millions of rows of data? Maybe not, not yet. I do, however, believe that as PowerPivot becomes a standard add-in for Excel 2010, businesses will soon expect the ability and performance to process extremely large amounts of data from multiple data stores to be readily available on a PC desktop. The question is not, and will not be, whether data can be analyzed, but what to analyze and how good the analytical model is. Above all, PowerPivot is for:
The empowerment through a self-service model to derive business intelligence right on the desktop, and the immense capacity offered by PowerPivot as a free add-in, together make this solution a must-have for conducting data analysis.
BI is a concept encompassing many areas of IT and, like many other IT terms, it means different things to different people. One simple definition of BI is "using analytic and visualization tools to better understand and interpret data." Recently there have been active discussions on BI as a priority on CIOs' lists. Interestingly, the more we talk about BI, the bigger its scope seems to become. Indeed, PowerPivot, Excel Services, PerformancePoint, and the rest of these tools and features can sound confusing and overwhelming. To better understand BI, I find a great review discussing how SharePoint 2010 brings BI to the next level, and a nicely done poster, Getting started with business intelligence in SharePoint Server 2010, both very interesting and informative.
Notice there are three areas of BI, i.e. at the individual, community, and organizational levels. SharePoint 2010 addresses these areas as a whole with various vehicles including Excel and the PowerPivot add-in, Excel Services, PerformancePoint Services, Visio Services, and Reporting Services with Report Builder, as depicted below. It is important to keep in mind the context of the BI scenario being assessed, so that the best vehicle, namely the right tools and the best-fit features, becomes evident.
(Source: Getting started with business intelligence in SharePoint Server 2010)
This is an overview of a series of articles to review the following five BI vehicles in SharePoint 2010:
I also highly recommend reviewing a great series of Office and SharePoint relevant content published by Dan Stolts, one of my fellow evangelists based in the Boston area.
I am very excited to announce our upcoming TechNet events for the remainder of 2010. Based on the feedback from those who attended our events in the past few quarters, we have made some changes to our deliveries. We will have two tracks delivered simultaneously on the US east coast. The ten Windows 7 Deployment events are delivered by John Baker and Blain Barton, while the eight SharePoint 2010 events are delivered by Yung Chou and Bob Hunt.
These are so-called Firestarter events, with the first session starting at 9 AM and the fourth and last finishing by 5 PM. Throughout the day we will have sessions relevant to a specific technical focus. We are hoping that with this format more technical depth can be delivered within a short period of time. The following is the schedule. The registration links have the location information and abstracts. Most events are to be delivered in Microsoft offices and the seats are very limited, so register early. You have hereby been advised. :)
MVPs, user group leaders, and IT influencers relevant to each track, please let us know you are coming. We would like to know about the activities in your areas and how we may better assist you in growing the communities.
See you all at the event near your city.
The following table highlights the minimum requirements of Windows SharePoint Services (WSS) 3.0 and SharePoint Foundation (SPF) 2010 for preliminary upgrade planning for SPF2010. Since SPF2010 is the underlying infrastructure for SharePoint Server (SP) 2010, the information presented here is applicable to upgrade planning for SP2010 as well.
Notice that the following scenarios are not supported in upgrading to SPF2010:
Upgrade from a farm running WSS 3.0 earlier than SP2
Direct upgrade from a farm running WSS 2.0/SharePoint Server (SPS) 2003. In this case, one must go from WSS 2.0/SPS2003 to WSS 3.0/MOSS2007 before going to SPF2010/SP2010.
For production deployment, please do reference the links in Official Requirements of the table to get the latest information.
SharePoint Foundation 2010
Processor: 64-bit, four cores
Memory: 4 GB for developer or evaluation use; 8 GB for single server and multiple server farm installation for production use. For large deployments, see the "Estimate memory requirements" section in Storage and SQL Server capacity planning and configuration (SharePoint Server 2010).
Hard disk: 80 GB for system drive. For production use, you need additional free disk space for day-to-day operations; maintain twice as much free space as you have RAM for production environments. For more information, see Capacity management and sizing for SharePoint Server 2010.
Operating system: 64-bit edition of Windows Server 2008 Standard with SP2
Database: 64-bit SQL 2005 with Service Pack 3 (SP3) and Cumulative update package 3 for SQL Server 2005 Service Pack 3; or 64-bit SQL 2008 with Service Pack 1 (SP1) and Cumulative update package 2 for SQL Server 2008 Service Pack 1
Official Requirements: http://technet.microsoft.com/en-us/library/cc288751(office.14).aspx, including software prerequisites
Trial downloads: WS2008 R2 Trial, SQL2008 R2 Trial, SPF2010, SP2010 Trial

Windows SharePoint Services 3.0
Hard disk: 3 GB for installation
Operating system: Windows Server 2003 (Standard, Enterprise, Datacenter, and Web editions); Windows Server 2008 (as of WSS 3.0 SP1)
Database: SQL 2000 SP4; SQL 2005 SP2; SQL 2008 (as of WSS 3.0 SP1)
(A cross-posting from SharePoint Experts Blog)
SharePoint Foundation (SPF) 2010 is a free download from Microsoft. It is a low-cost, entry-level, web-based collaboration solution for small organizations or departments, the underlying infrastructure for SharePoint Server 2010, and the successor to Microsoft Windows SharePoint Services (WSS) 3.0. Frequently SPF is also used for a pilot or proof of concept before the enterprise roll-out of a SharePoint solution.
Almost everything an IT pro needs to know about SPF, including requirements, what’s new, getting started, planning, deployment, and operations, is in a technical library on the web and in Help (.chm) format for downloading, as shown below. Similar technical libraries for SharePoint Server 2010 on the web and in Help format are also available. These technical libraries are must-have references and my recommended bedtime reading for IT pros serious about SharePoint.
Notice that with the downloaded Help format version, if the text in the Help file does not appear as expected and "Navigation canceled," "Action canceled," or "The page cannot be displayed" is shown instead, the file is most likely blocked by Windows; right-click the downloaded .chm file, select Properties, and click Unblock to unlock it.
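On systems with Windows PowerShell 3.0 or later, the same unblock can also be scripted; the file name below is a placeholder for the downloaded technical library.

# Remove the "downloaded from the Internet" block so the .chm content renders (file name is a placeholder).
Unblock-File -Path .\SharePointFoundation2010TechLib.chm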
Office Web Apps are online companions to Word, Excel, PowerPoint, and OneNote, giving you the freedom to work on Microsoft Office documents with browsers. This screencast gives a high-level overview of the requirements to deploy Office Web Apps in a SharePoint environment, what SkyDrive is, how you can experience Office Web Apps today, and more. Additional information is also available from the post, Office Web Apps with SharePoint 2010 or SkyDrive Explained.
What Are Office Web Apps
The concept of Office Web Apps is essentially your Microsoft Office in the cloud. Enterprise customers can deploy Office Web Apps in a private cloud, while for Windows Live users Microsoft makes Office Web Apps available free on the Internet. The following is a screen capture of editing a presentation with the PowerPoint Web App. A quick review is also available in the Office Web Apps Overview.
Office Web Apps are online companions to Word, Excel, PowerPoint, and OneNote, giving you the freedom to work on Microsoft Office documents with browsers including Internet Explorer 7 or later for Windows, Safari 4 or later for Mac, and Firefox 3.5 or later for Windows, Mac, or Linux. Office Web Apps are entirely web-based, and there is no additional software to download or install. Office documents can be created and stored in a server supporting Office Web Apps right from the browser session, without the need for a locally installed Microsoft Office client.
Using Office Web Apps, a user will be able to view Office documents seamlessly in the browser with great fidelity, create new Office documents, and do basic editing using the Ribbon. There are, however, some differences between the features of Office Web Apps and the Office 2010 programs. When making changes that require functions beyond what is available in an Office Web App, or simply as preferred, one can easily open and edit the document in Office installed locally on the computer, and later save it back to the server. The ability to open Office documents directly from Office Web Apps into the desktop application is available on computers running a supported browser and Microsoft Office 2003 or a later version of Office (for Windows PCs). This functionality will also be available on computers running a supported browser along with the forthcoming Office for Mac 2011.
What Is SkyDrive
SkyDrive is a free, password-protected online storage service from Microsoft, available with a Windows Live ID. With a Windows Live ID, a user can store up to 25 gigabytes (GB) of files as of July 2010, and the upload operation accepts a file up to 50 megabytes (MB) in size. A user can arrange files in folders and subfolders, keep private files in a personal folder, and place those to be public in a shared folder. To share a folder or individual file, a user can set permissions accordingly and then invite others by email. Shown below is one way to create Office documents in SkyDrive.
Although SkyDrive provides a location for storing files online, it is nevertheless not an FTP site, nor does it function with an FTP client. Further, Microsoft may limit the number of files that each user can upload to SkyDrive each month. Individuals seeking support on SkyDrive can participate in the conversations and look for answers in the SkyDrive Forum.
Office Web Apps, SharePoint, and SkyDrive
For enterprise customers with on-premises SharePoint installations, Office Web Apps require SharePoint Foundation 2010, which is free from Microsoft; Office Web Apps themselves, however, do require volume licensing. Office Web Apps can deliver Word, Excel, and PowerPoint files on many devices. Supported mobile viewers for Office Web Apps on SharePoint include Internet Explorer on Windows Mobile 5/6/6.1/6.5; Safari 4 on iPhone 3G and 3GS; BlackBerry 4.x and later; Nokia S60; NetFront 3.4, 3.5, and later; Opera Mobile 8.65 and later; and Openwave 6.2, 7.0, and later. To roll out the services in an enterprise environment, TechNet has documented the specifics, including planning and deploying Office Web Apps.
For consumers, Office Web Apps are part of the Windows Live offerings. A user with a Windows Live ID can use Office Web Apps to create and upload Office documents, which are stored in SkyDrive. Supported mobile viewers for Office Web Apps on Windows Live include Safari 4 on iPhone 3G and 3GS, and Internet Explorer 7 on the upcoming Windows Phone 7. Viewing Excel files via a mobile browser is currently only available with Office Web Apps on SharePoint 2010.
Start Using Office Web Apps with SkyDrive Today
A supported browser and a Windows Live ID are all you need to create, view, edit, and share your Office documents in the cloud. Your teammates can now work with you on projects regardless of whether they have a locally installed copy of Microsoft Office.
<Next: Office Web Apps Overview>
Like the Windows Server 2008 R2 component poster, the Hyper-V poster is a great visual reference for better understanding the key features and components of Hyper-V, and Microsoft virtualization solutions in general, including:
I use it a lot myself and highly recommend it.