Anyone who’s deployed Windows Server 2008 (pre-R2) with the Server Core deployment option will know it requires a slight behavioural change from what most Windows admins will be used to. For those of you who need a refresher, the Server Core interface, in all its glory, looked like Exhibit A, below:
OK, the eagle-eyed among you will know that isn’t a screengrab from a Server Core box, as I’ve got Aero Glass present – it’s actually from my Windows 7 box – but the point is, the interface is the same: all you get is the command line for manipulation and local management. With a reduced overhead, less patching, a reduced attack surface, and the ability to trim the install size still further using DISM, there are some key benefits to using Core. For some, however, those benefits are outweighed by the additional ‘overhead’ of managing the thing – locally, at least.
Step up Core Configurator. Developed by Andrew Auret and Tony Ison from Microsoft UK, this little tool, which can be run from a USB stick, gave the usual GUI admins much more of a GUI to work with, simplifying and speeding up tasks such as installing roles and features, managing Windows Updates and licensing, as well as turning on things like MPIO and setting iSCSI options. For those of you who haven’t seen it, it looked like this:
As you can see, even in V1, it was a much simpler and more streamlined interface than just using CMD!
Fast forward to now, and V2 is upon us, with significant GUI and usability improvements (click to enlarge):
As you can see, it’s pretty rich, and you can run it directly from a USB stick, so it’s always handy to have with you! What else can it do?
You can download it here. I know I will be!
More and more information is creeping out of Microsoft regarding Service Manager. Part of the System Center family, and due for release in 2010, Service Manager is an integrated platform for automating and adapting your organisation’s IT service management best practices, such as those found in the Microsoft Operations Framework (MOF) and the Information Technology Infrastructure Library (ITIL). It provides built-in processes for incident and problem resolution, change control, and asset lifecycle management. Through its configuration management database (CMDB) and process integration, Service Manager automatically connects knowledge and information from System Center Operations Manager, System Center Configuration Manager and Active Directory. Service Manager delivers integration, efficiency, and business alignment of datacenter IT services by:
Optimizing processes and ensuring their use through templates that effectively guide IT analysts through best practices for change and incident management.
Reducing resolution times by cutting across organizational silos, ensuring that the right information from incident, problem, change, or asset records is accessible through a single pane.
Extending the value of the Microsoft platform through automated generation of incidents from alerts and the coordination of activities among System Center products.
Enabling informed and cost-effective decision making through its data warehouse, which integrates knowledge from disparate IT management systems, delivering out-of-the-box reporting and flexible data analysis through SQL Server Reporting Services.
Enough about the background, what about this Academy Live Session?
MGT54PAL: Understanding Service Manager and Positioning it in the ECAL Suite
Presented by Clare Henry and Kelly Wagman
Thursday, 7th January 2010 - 16:00-17:00 GMT - Click here to calculate your local time
Customers are looking for solutions to address their automation requirements for compliance management or to implement popular service management frameworks like ITIL, MOF, or CoBiT. With the introduction of System Center Service Manager in FY10, Microsoft can help customers extend the value of ECAL and SMSE to meet these needs. Learn how Service Manager integrates Microsoft management products to improve end-user support through the self-help portal, improve reliability in the datacenter through automated incident and compliance management, and how Service Manager is a platform for delivering business-centric solutions like Asset management.
Some of you may be thinking, what’s the eCAL suite? Well, from here, the Microsoft Enterprise Client Access License (CAL) Suite brings together 11 of the latest Microsoft products to provide your people with the newest innovations in compliance, real-time collaboration, security, communication, desktop management, and more. The Microsoft Enterprise CAL Suite provides an outstanding opportunity for customers to use their existing investments in the Microsoft core platform.
Furthermore, the Microsoft Enterprise CAL Suite provides significant cost savings compared to purchasing individual CALs. Customers considering investments in just two of the products included in the suite will find that the economic value of the Microsoft Enterprise CAL Suite becomes compelling.
What technologies are within the eCAL suite?
So, add to that list, System Center Service Manager, and you’ve got an even more compelling package, all rolled up under the single license.
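To make that economic argument concrete, here’s a tiny sketch of the break-even logic. To be clear, the product names and per-user prices below are entirely made up, purely for illustration – check your own licensing agreement for real figures:

```python
# Illustrative only: hypothetical per-user prices, NOT real Microsoft list prices.
individual_cals = {
    "Exchange Enterprise CAL": 35.0,
    "SharePoint Enterprise CAL": 30.0,
    "System Center Client Management Suite": 40.0,
}

ecal_suite_price = 85.0  # hypothetical bundled price per user


def cheaper_option(selected, suite_price):
    """Compare buying the selected CALs individually against the suite price."""
    individual_total = sum(individual_cals[name] for name in selected)
    return "suite" if suite_price < individual_total else "individual"


# With all three hypothetical products, the bundle wins under these numbers.
print(cheaper_option(list(individual_cals), ecal_suite_price))
```

The point the suite pricing relies on: once the sum of the individual CALs you’d buy anyway exceeds the bundle price, the suite becomes the obvious choice – and you get the rest of the products thrown in.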
If you’re interested in finding out more, and you’d like to attend the webcast, you can head on over and register here.
Hot on the heels of a recent post around running Dynamics on Hyper-V, Liam from Technology Management wanted to make me aware of another case study, again focused on Dynamics NAV 2009, with security systems manufacturer Fortress Interlocks. In this case, the solution saved the company £70k, while giving them the capability to double the productivity of their employees. At the time, this was the UK’s first deployment of NAV 2009 on WS2008 Hyper-V (pre-R2).
In early 2008, Golding and his IT team started working with Microsoft Gold Certified Partner Technology Management to help choose and deploy a solution that could meet the company’s needs. After evaluating all the leading products on the market, they chose Microsoft Dynamics NAV 2009 business management software. Golding says: “Microsoft Dynamics NAV 2009 satisfied all our requirements, and, as it’s produced by Microsoft, it had long-term commercial viability and support. Also, it operates on the Windows user interface, which is intuitive and familiar, reducing the amount of training employees need.”
Once the decision to deploy Microsoft Dynamics NAV 2009 was taken, Fortress Interlocks and Technology Management wanted to make sure the system had high availability for its users. Virtualisation technology was regarded as the best way to achieve this because a number of virtual servers can be running to support the primary server if that fails. However, this option seemed too expensive.
Golding says: “We initially considered VMware technology, but it was too expensive for our organisation. We then became aware of Windows Server 2008 Enterprise and Hyper-V. We researched it further and found it could offer us a similar service but at a much reduced price.”
Despite it being the first ever U.K. implementation of Hyper-V for Microsoft Dynamics NAV 2009, Technology Management built the environment in less than 10 days, with the project going live on time and within budget. By connecting new Citrix servers running on Windows Server 2008, the company’s overseas offices can also access the business management solution, giving Golding complete visibility of all his departments.
Read the whole story here.
Going forward, my role is becoming more and more focused on the management of virtualisation, and just today I’ve seen a couple of items in the blogosphere, including this one, highlighting management as the big battleground in 2010 – and to be honest, I agree. That’s not to say the virtualisation layer isn’t important – far from it. VMware, Microsoft and Citrix, to name but three, will still stress the importance of a stable, performant and robust platform to run your workloads on. But once virtualised, how can you ensure you’re getting the best out of the workloads you’ve invested in? How can you ensure that Exchange, right through to Outlook, is performing effectively and is healthy? The same could be asked of SharePoint, SQL, Active Directory, Terminal Services and more, not to mention the countless non-Microsoft platforms and technologies that something like System Center Operations Manager can effectively monitor and report upon.
There’s a key point right at the end of that last sentence: ‘effectively monitor and report upon’. Monitoring is one thing, and can provide massive benefits around proactivity (I may have made that word up…), alerting, and automated recovery. However, being able to prove systems are functioning efficiently and effectively is just as valuable to the business, especially where SLAs are concerned.
This brings me nicely on to the Service Level Dashboard 2.0. SLD 2.0 is a free download that bolts onto, and extends, the existing SLD in SCOM 2007 R2, providing a much richer experience when it comes to visualising the status of your infrastructure, and subsequently reporting on it. If you’re wondering what I mean by ‘much richer experience’, take a look at this:
Anyone who’s used SLD 1.0 will know this is a pretty big improvement, providing a much clearer, simple-to-understand interface that enables users to quickly identify healthy and unhealthy areas of their infrastructure. The dashboard automatically displays application or system availability and performance in near-real time. Using the dashboard, customers can easily keep track of availability and performance trends, and head off problems before they occur. They can also create role-specific dashboards to support different departments, such as HR, Finance, or Operations, so it really can be tailored to the needs of the organisation.
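Conceptually, the core of what a dashboard like this tracks boils down to measuring availability over a tracking window and comparing it against the defined goal. Here’s a minimal sketch of that calculation – the tracking window, downtime and goal figures are hypothetical, purely for illustration:

```python
from datetime import timedelta

# Hypothetical monitoring data: total tracking window and recorded downtime.
tracking_window = timedelta(days=30)
downtime = timedelta(hours=3, minutes=30)

# Hypothetical Service Level Goal: "99.5% availability over the window".
availability_goal = 99.5

# Measured availability as a percentage of the tracking window.
measured = 100.0 * (1 - downtime / tracking_window)
state = "healthy" if measured >= availability_goal else "breached"

print(f"{measured:.2f}% availability -> {state}")
```

Three and a half hours of downtime in a month still leaves you just the right side of a 99.5% target – which is exactly the kind of marginal call you want a dashboard surfacing, rather than working out by hand.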
I’m sure you’re wondering how all the ‘bits’ fit together, but fear not, I have a diagram to explain!
You need WSS at the back end, so those of you who were worrying you’d need full-blown Microsoft Office SharePoint Server can relax! The IT chap defines the Service Level Goals, specifying targets around things like availability and performance. The dashboard is configured based on these tracking definitions, and Windows SharePoint Services displays the dashboard, building it from the System Center Operations Manager Data Warehouse. OK OK, that’s a 100,000ft view, and it’s more complex than that, so if you’re looking for more information, here it is:
Hat-tip to Matt Hester, an IT Pro Evangelist, who’s recorded these SLD Videos. If you’re interested, the download of SLD 2.0 is here.
Sticking with the ‘plethora’ theme of the last post, I was having a chat over email a few weeks back with Paul from Xtravirt. If you’ve not heard of these guys, they really know their stuff, and I blogged about them ages ago, when they released a whitepaper on how to run VMware ESX/ESXi on top of VMware Workstation – a great testing solution!
That post was just over a year ago, but fast forward to now, and the Xtravirt site is becoming one of the premier virtualisation sites, spanning both server and desktop virtualisation technologies, with a fantastic mix of blog resources, whitepapers, and downloads. Pay particular attention to the downloads section as, if you’re a VMware house, there are a number of tools you may find useful.
I’d also encourage you to take a look at these whitepapers:
I’ve been playing with vWorkspace 7.0 over the last few days, and I’d say it’s definitely one to watch in 2010…
If you’re interested, head on over to the site, or subscribe through the usual RSS or Twitter channels…
A couple of weeks back, I started a series (currently still one part long, apologies!) on Site Recovery for Hyper-V, and in that post I discussed Essentials for Hyper-V 5.5, from Citrix. Without trying to reinvent the wheel from that post, Essentials for Hyper-V is a comprehensive set of advanced virtualisation management capabilities, now offering customers affordable business continuity solutions. Citrix Essentials for Hyper-V 5.5 includes Citrix StorageLink Site Recovery, a powerful set of tools that make end-to-end business continuity accessible, affordable, and easy to deploy.
If you want the full lowdown on the suite, check out my previous post, however if all you’re interested in is the release date, then this post is for you.
It’s available now!
You can also grab a Citrix Essentials for Hyper-V Platinum Edition 30-day Evaluation which you may find useful in testing, to see if it meets your needs.
Finally, there is a joint webcast, Citrix Essentials and StorageLink Site Recovery: Disaster Recovery for your Hyper-V Environment, with details below:
During this webcast Citrix and Microsoft introduce a new solution that makes cost-effective disaster recovery attainable for any organization. Citrix Essentials for Microsoft Hyper-V adds new disaster recovery capabilities to Hyper-V with the recent release of StorageLink Site Recovery technology. The joint solution combines the benefits of consolidating servers using Microsoft Hyper-V with advanced clustering and disaster recovery automation to give organizations an affordable way to implement, manage and automate disaster recovery services.
Product experts will demonstrate how StorageLink Site Recovery brings simple disaster recovery controls to Hyper-V while tapping powerful array-based remote replication services to protect your Hyper-V workloads from site to site. This session will also explore how you can use StorageLink Site Recovery together with Microsoft Clustering and Citrix Essentials workflow orchestration tools to build an automated DR solution for your Hyper-V environment.
Discussion topics include:
Get ready for some stats…
Did you know that on July 13th 2010, Windows 2000 Server will be end-of-life? No?
Did you also know that 17% of Windows Server units deployed worldwide are running some form of Windows 2000? No?
Well, with end of life approaching, there has never been a better time to understand the benefits of moving to the latest release of Windows Server, namely 2008 R2. If you’re interested in understanding more about the story, and what conversation starters, tools and methodologies are available to you, as a Partner, to smoothly migrate from 2000 to 2008 R2, this could be the webcast for you.
WNS170PAL: Helping your Customers Migrate from Windows 2000 to Windows Server 2008 R2
Presented by Justin Graham
Friday, 15th January 2010 - 17:00-18:00 GMT - Click here to calculate your local time
If you’re interested, all it will cost you is your time, and you can register here.
Before anyone says ‘this has been out for ages!’ – yes, I know, but there’s a new version out! I just haven’t had a chance to blog about it yet! For those of you who haven’t heard about the Offline Virtual Machine Servicing Tool, this is your lucky day!
Firstly, what is it?
Well, the name is a bit of a giveaway, unfortunately. I say unfortunately as it’s probably one of the most boring names I’ve seen for a product, but at least you won’t download it wondering what it is! The tool, as the name suggests, allows you to keep virtual machines that may be offline for a while up to date with patches and the like. Imagine the scenario: virtual machines may be left offline (stored in a non-operating state) for extended periods of time, which conserves resources when the server capacities of the virtual machines are not needed, or frees up physical computing resources for other purposes.
However, offline machines do not automatically receive operating system, antivirus, or application updates that would keep them compliant with current IT policy. An out-of-date virtual machine may pose a risk to the IT environment. If deployed and started, the out-of-date virtual machine might be vulnerable to attack or could be capable of attacking other network resources.
Therefore, IT groups must take measures to ensure that offline virtual machines remain up-to-date and compliant. At present, these measures involve temporarily bringing the virtual machine online, applying the necessary updates, and then storing it again.
In the future, image updating solutions may be able to update virtual machines while they remain offline. Until such solutions become available, the Offline Virtual Machine Servicing Tool, a Solution Accelerator from Microsoft, provides a way to automate the process of updating virtual machines.
How does it work?
Firstly, however, what do you need to make it work? Well, System Center Virtual Machine Manager for starters – the tool supports versions as far back as SCVMM 2007, but works fine and dandy with the most recent release, SCVMM 2008 R2. You also need some mechanism for actually patching/updating the virtual machines. This is handled by one of the following:
The ‘flow’ is as follows:
OVMST uses “servicing jobs” to manage the update operations based on lists of existing virtual machines stored in VMM. Using Windows Workflow Foundation technology, a servicing job runs snippets of Windows PowerShell scripts (against SCVMM, which is built on PowerShell) to work with virtual machines. For each virtual machine, the servicing job:
A couple of key concepts to mention there. The VM that is offline will reside in the Library, which is effectively the storage side of SCVMM. From there, it’s deployed onto a host. This host, ideally, will be a dedicated maintenance host, and may be disconnected (not completely!) from your ‘production area’. The reason for this? Well, think about it – if you start up an out-of-date VM that could be vulnerable, where would you want to start it up? An isolated, controlled environment, or alongside your main environment? Remember, this ‘maintenance host’ could be a Hyper-V Server, which is a very cost-effective way to provide isolation.
One limitation worth calling out, and it’s a big one – one of the objects you typically store in the Library is a Template, and the tool can’t service those. A Template is different from an offline virtual machine: an offline virtual machine is something that has been started, configured, and placed in the library, ready to be started again at any time, whereas a Template is a pre-built VM with a sysprepped OS. The way the OVMST works is that it starts the VM, patches it, then shuts it down. This can’t happen with a sysprepped OS – it would start up, but then you’d have to go through the whole out-of-box experience to ‘personalise’ the OS with time zone, computer name and so on, before WSUS/SCCM could deploy patches into it. This, I believe, will be solved in the future when injecting patches into offline VHDs becomes a reality.
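The servicing cycle described above – deploy from the Library to a maintenance host, start, patch, shut down, and return to the Library – could be sketched like this. To be clear, the class and attribute names here are invented purely to illustrate the flow; this is not the tool’s actual PowerShell or API:

```python
# Conceptual sketch of an OVMST-style servicing job. All names are
# hypothetical; the real tool drives SCVMM via PowerShell snippets.
class VirtualMachine:
    def __init__(self, name):
        self.name = name
        self.location = "library"  # offline VMs live in the SCVMM Library
        self.running = False
        self.patched = False


def servicing_job(vm, maintenance_host):
    """Deploy, start, patch, shut down, and return one VM to the Library."""
    vm.location = maintenance_host  # deploy from the Library to an isolated host
    vm.running = True               # start the VM
    vm.patched = True               # let WSUS/SCCM apply the outstanding updates
    vm.running = False              # shut the VM down once compliant
    vm.location = "library"         # store it back in the Library
    return vm


vm = servicing_job(VirtualMachine("offline-vm-01"), "maintenance-host-01")
print(vm.location, vm.patched)
```

The key design point the sketch captures is that the VM is only ever running while it sits on the (isolated) maintenance host; it goes back into the Library patched and powered off, ready for its next deployment.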
I really like the OVMST – apart from the name! It’s a free tool that can help you solve a key business problem. A lot of organisations I speak to typically keep their VMs running at all times, but if there are a few servers that only get launched once a month or quarter, and that need to be kept up to date in line with policy, then this is a great tool to automate that process for you. And the best bit? It integrates with commonly used tools like WSUS and SCCM. I guess it’s a shame it doesn’t integrate with third-party patch management technologies, but maybe that will make it into a future release.
Who knows – maybe I’ll record a video on its use sometime in the near future!
You can get all the info, and download the OVMST here.
If there was ever a site, apart from this one (;-)), that provided a pretty comprehensive set of resources around Hyper-V, this TechNet Resource would be it. From planning to installation, pre-deployment to deployment, and management through to benchmarks, this site has it all. I can’t see any links to my videos, but I didn’t say it was a perfect site, now did I? :-)
Anyway, short and sweet – just wanted to make sure you had the link, should you be looking for Hyper-V info.
Again, it’s here: http://technet.microsoft.com/en-us/dd565807.aspx
For those of you not familiar with the IPD Guides, they aim to provide a couple of key benefits:
The IPD Guides are growing fast, and as always, you can find all the IPD Guides here.
With the release of these updated guides, the Infrastructure Planning and Design (IPD) series of guides further assists organisations in selecting the right virtualisation technologies for their business needs.
To select an appropriate virtualisation technology, organisations can look to the updated IPD Guide for Selecting the Right Virtualisation Technology. This guide walks the reader through the technology selection process for each workload - and is now updated to include coverage of Windows Server 2008 R2 Remote Desktop Services and Virtual Desktop Infrastructure (VDI).
If the IPD Guide for Selecting the Right Virtualisation Technology points the organisation to Remote Desktop Services as a best fit for their business needs, the guide then directs the user to the updated IPD Guide for Windows Server 2008 R2 Remote Desktop Services, which then outlines key infrastructure planning and design guidance for a successful implementation of Remote Desktop Services. The IPD Guide for Windows Server 2008 R2 Remote Desktop Services leads the reader through the nine-step process of designing components, layout, and connectivity in a logical, sequential order. Identification of the RD Session Host farms is presented in a simple, easy-to-follow process, helping the reader to design and plan centralised virtual datacenters.
Used together, these updated guides provide comprehensive planning and design guidance for implementing a Remote Desktop Services infrastructure. The IPD Guide for Selecting the Right Virtualisation Technology also teams with other virtualisation guides in the IPD series to provide end-to-end planning and design guidance for a variety of virtualisation technologies.
For users of Windows Server 2008 R2, the Remote Desktop Services guide is a complete replacement for the Terminal Services guide. The Remote Desktop Services guide reflects the new capabilities introduced with Windows Server 2008 R2 as well as the rebranding of Terminal Services. The Infrastructure Planning and Design Guide for Windows Server 2008 Terminal Services remains available on the IPD Web site.