Just had to share this with you:
Details are here.
I've got some time on my hands (I'm at home not feeling too well) and decided that I'd spend a bit of time setting up the Windows Recovery Environment (WinRE) on my laptop. I've been meaning to do this for ages, as I want to enable BitLocker (and be able to recover my data if my laptop ever decides not to work). It's worth pointing out that, because of my role, I don't run the Microsoft corporate image of Windows Vista Enterprise (I run Ultimate).
WinRE is just WinPE (Windows Pre-Installation Environment) with a bunch of recovery tools. This is what you'll end up with:
Very 'handy' - just a bit 'fiddly' to get installed (especially as an afterthought).
There's loads of great info available (just use your favourite search engine), but even with all of it I still needed to do a bit more digging.
You can create a bootable CD/DVD with WinRE on it, but ideally you have it available as a boot option. You actually end up with your PC booting into it automatically if Windows won't load for any reason.
If you're building your PC from scratch, you need to create a separate partition on your drive to hold WinRE. The best way to do this is to follow these instructions.
If you've already installed Vista, and you're running Ultimate, you can use the Windows BitLocker Drive Preparation Tool - it comes as part of the 'BitLocker and EFS enhancements' optional extra on Windows Update. Failing that, it's WinPE and DiskPart.exe or a third-party tool.
Either way, you now have a 1.5GB boot/system partition at the beginning of your disk, holding the boot block and enough information for Windows to boot from the second partition.
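If you go down the WinPE and DiskPart.exe route, the partitioning step looks roughly like the sketch below. Treat it as a minimal sketch only - the disk number, the ~1.5GB size, the labels and the drive letters are my assumptions for illustration (X: being the WinPE RAM drive), and it wipes the disk, so check everything against your own machine first:

    rem Run at the WinPE command prompt. This WIPES disk 0, creates a ~1.5GB
    rem WinRE partition at the front of the disk and uses the rest for Windows.
    (
      echo select disk 0
      echo clean
      echo create partition primary size=1500
      echo active
      echo format fs=ntfs label=WinRE quick
      echo assign letter=S
      echo create partition primary
      echo format fs=ntfs label=Windows quick
      echo assign letter=C
      echo exit
    ) > X:\winre-part.txt

    diskpart /s X:\winre-part.txt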
Now onto WinRE itself. To do this properly, you'll need the Windows Automated Installation Kit (well you only need imagex.exe and SetAutoFailover.cmd really) - you can get WAIK here.
Then you just follow the instructions here to create WinRE and here to enable it as an option at boot time.
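For reference, here's roughly what the 'put WinRE onto that partition' step boils down to once you've built the WinRE image per those instructions. Again, a sketch only - the paths, drive letter and image index are my assumptions, and the exact switches SetAutoFailover.cmd expects are documented in the script itself:

    rem Apply the WinRE image to the recovery partition (assumed here to be E:,
    rem with the image at C:\winre\winre.wim and the image index being 1):
    imagex /apply C:\winre\winre.wim 1 E:\

    rem Then run SetAutoFailover.cmd (from the WAIK) to register that partition
    rem as the automatic failover boot option - check the script's header for
    rem the parameters it needs, as they describe where WinRE lives.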
One last 'gotcha' - when you boot into WinRE, it gets you to log on as a local user from your Windows installation (preferably an admin). My laptop is a member of the Microsoft domain and doesn't have any local user accounts (well, it does have Guest and Administrator, but they're both disabled). I simply enabled the local Administrator account (after giving it a password that meets the Microsoft password policy - the Group Policies on our network force strong passwords).
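If you need to do the same, it's a one-minute job from an elevated command prompt (assuming the built-in account is still called Administrator):

    rem Give the built-in Administrator a strong password (you'll be prompted),
    rem then enable the account so WinRE has something local to log on with.
    net user Administrator *
    net user Administrator /active:yes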
Hopefully I'll never need to boot into WinRE, so I've probably wasted the last hour or two - but I 'might'...
Here’s a PUBLIC video of Windows Server “Longhorn” and two of its cool new features: Server Core and Windows Server virtualisation.
In this demo you will see:
- Windows Server Virtualisation running on a Server Core installation managed remotely from another Windows Server Longhorn box
- 64-bit and 32-bit virtual machines running concurrently
- SUSE Linux 10 running in a virtual machine
- An 8-core virtual machine
- System Center Virtual Machine Manager
  - Interface and Operations
- System Center Operations Manager
  - Monitoring the VMs on the Server Core box
  - Firing off a PowerShell script to hot-add another NIC to a SQL VHD Image
Click HERE: http://soapbox.msn.com/video.aspx?vid=5119240c-6579-4827-8338-7f5539930402
I thought I'd start this third and final part with 'What is Microsoft's strategy around virtualisation?' Our strategy has been developing in this area and I think we've now got a pretty clear position:
“A complete set of virtualisation products, from the data center to the desktop. All assets (both virtual and physical) managed from a single platform”.
We (Microsoft) believe that we have a complete solution when it comes to virtualisation; we have made investments in the technology, in licensing, in interoperability and in support. Let me cover the last three first and then spend more time on the technology.
Licensing used to be a can of worms, then we made it easier, then we introduced virtualisation! Every version of Windows that you run needs to be licensed, whether that's running on a physical server or a virtual one. We have made a few changes to make this a bit easier, they are:
- You only need licenses for the instances you actually run - virtual machine images that are stored but not running don't need their own license.
- Windows Server 2003 R2 Enterprise Edition includes the right to run up to four virtual instances on the licensed server at no extra cost.
- Windows Server 2003 R2 Datacenter Edition includes the right to run an unlimited number of virtual instances on the licensed server.
Just a small note here (for completeness and honesty) – technically you can run another virtualisation solution on a server and avail of the last two licensing deals (you just need to have assigned the license to that server).
In the context of virtualisation, interoperability has been well documented recently - the Novell agreement, the XenSource agreement, moving the virtual hard disk (VHD) specification under the Open Specification Promise, and certainly running and supporting Linux guest virtual machines on Virtual Server. But these are just the higher-profile examples; we've also done subtler things to enable and support interoperability. For example, the APIs for Virtual Server 2005 have been available publicly on MSDN since day one, and much of the preliminary detail on the Windows Server virtualisation (part of Longhorn) APIs was shared at WinHEC last year, in the documentation given to each attendee. And like other Windows APIs, we plan to publish these publicly at beta.
Thirdly, support. I believe Microsoft has the best support organisation out there - mind you, you get what you pay for. All of our 'self help' content is on the web, we offer free support for security and Internet Explorer related issues, and you can give us your credit card details and we'll help you with anything (and if it turns out to be our fault, we won't charge you). Give us a bit more money and we'll give you 24x7 support and get the guy who wrote the piece of code causing the error out of his bed to fix your issue. Give us a bit more and you can have engineers on site, technical account managers representing you inside Microsoft, training, design reviews, you name it - you do get what you pay for. In the context of virtualisation, support simply means that you have one place to 'point your finger at' (one arse to kick) - if it's a Microsoft product, we'll fix it (we own the entire stack, so we can troubleshoot the entire stack). It's actually a bit better than that - if it's one of the supported, non-Windows operating systems and it's running in Virtual Server, we'll even fix that.
Now onto the technology – apologies if this comes across as a big list (but we do have a lot to offer).
Virtual Server has to be first, as it's the most cost-effective server virtualisation technology available (it's free, as is Virtual PC). Virtual Server increases hardware utilisation and enables organisations to rapidly configure and deploy new servers: it virtualises the hardware of a complete physical computer in software and lets you install and run multiple virtual machines. Virtual Server has only been out since 2005, so we've been playing 'catch up' to a degree (it's been around for a lot longer though - I used it during the Windows Server 2003 launch event to run all my demos).

We are currently at Virtual Server 2005 R2 SP1 (well, nearly - SP1 is in its final beta) and it can do pretty much anything you'd ever need a virtualisation platform to do (including physical-to-virtual migrations). The R2 release introduced 64-bit support for the host machine (which, simply put, means more memory, which equates to more guest machines and more performance). We also released support for running Virtual Server on a cluster, which offers high availability for any virtual machines you run by letting them fail over to another physical host in the event of planned or unplanned downtime. SP1 adds Volume Shadow Copy support (snapshot backups), support for hardware-assisted virtualisation (Intel VT and AMD-V), Vista as a guest, the ability to mount a VHD file (for offline edits), bigger VHDs, and a few other features which escape me. You can manage Virtual Server entirely by scripting against its COM API, or you can use the web-based administration console it ships with. A quick note here - you do not need IIS installed to run Virtual Server (you only need it on the machine that will run the admin console).
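As an aside, the 'mount a VHD for offline edits' bit is done with the VHDMount command-line tool that SP1 installs. A rough sketch is below - the install path and switches are from memory, so run the tool with no arguments first to confirm the syntax on your build, and the VHD path is obviously just an example:

    cd /d "%ProgramFiles%\Microsoft Virtual Server\Vhdmount"

    rem Plug in and mount the VHD - it appears as a new drive letter:
    vhdmount /m "D:\VMs\TestServer.vhd"

    rem ...make your offline edits, then unplug it and commit the changes:
    vhdmount /u /c "D:\VMs\TestServer.vhd"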
Management is next; we want to make Windows the most manageable platform out there, by enabling customers to manage both physical and virtual environments using the same tools, knowledge and skills. No other virtualisation platform provider is delivering this. Today, customers can use Microsoft Operations Manager (MOM) and the management pack for Virtual Server. It knows which workloads are running on physical servers and which are virtual, and it treats them differently. MOM has the best-practice advice and guidance built in, and treats a host server as a single device (even though it could be running multiple virtual machines).
Next up is Virtual PC 2007. Virtual PC lets you create separate virtual machines on your Windows desktop, each of which virtualises the hardware of a complete physical computer. You can run multiple operating systems at once on a single physical computer and switch between them as easily as switching applications. Virtual PC is the ideal solution for tech support, legacy application support, training, or just for consolidating physical computers.
Microsoft SoftGrid Application Virtualisation is next on my list; it's the only virtualisation solution on the market that delivers applications that are never installed, yet securely follow users anywhere, on demand. SoftGrid turns applications into network services that no longer need to be installed, which lets you keep your PCs in a known, locked-down state whilst still letting your users get on with their jobs. Application virtualisation is a finer-grained solution than virtual machines, and it's really more suited to enterprise desktop environments.
I guess the next thing to mention is Terminal Services. It's an integrated part of every Windows Server version and it lets you remote the video, keyboard and mouse over the network to your desktop. I know Terminal Services isn't really what you'd call a virtualisation technology - but it is (your desktop hasn't really got eight processors, has it?).
There's even virtualisation in Windows Vista - if your application tries to write to the Windows or Program Files folders, or to machine-wide parts of the registry, Vista won't let it. Instead of just failing, Vista virtualises the file system and the registry and your application just works (it actually writes into your user profile, out of harm's way, keeping Vista stable and uncompromised).
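If you want to see where those 'virtualised' writes actually end up, the two commands below show the redirected locations (the file side goes to a per-user VirtualStore folder, the registry side to a VirtualStore key in HKCU) - the application name is just a made-up example:

    rem A legacy app without admin rights that writes to, say,
    rem C:\Program Files\SomeLegacyApp\settings.ini actually ends up writing here:
    dir "%LOCALAPPDATA%\VirtualStore" /s

    rem Registry writes aimed at HKLM\Software get redirected here:
    reg query "HKCU\Software\Classes\VirtualStore\MACHINE\SOFTWARE" /s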
We're extending the virtual infrastructure management capabilities with System Center Virtual Machine Manager soon, which will allow you to increase physical server utilisation, centralise management of your virtual machine infrastructure and quickly provision new virtual machines. It's fully integrated with the System Center product family, so you can leverage your existing skill sets. Even though SCVMM is currently in beta, I'd recommend you take a look at it - especially its physical-to-virtual (P2V) feature and its self-service portal.
Longhorn is due out later this year and will include Windows Server Virtualisation, which will introduce things like live migration, support for up to eight virtual processors (32- or 64-bit), and hot-add of resources such as disk, networking, memory and CPU (you'll need an operating system that knows what to do if it suddenly sees more memory or CPUs - Longhorn?). Windows Server Virtualisation is a hypervisor-based virtualisation solution that runs at ring -1 (below the kernel) and is very thin.
So, by the end of this year, we’ll be able to do this:
I need a new server to run my line-of-business application. I go to the self-service portal on my intranet and request a server. Within minutes I am running a virtual server (with potentially up to eight 64-bit processors and 32GB of memory). System Center is looking after my server and maintaining the service level I need: it's being kept up to date, it's being backed up and it's being monitored. If System Center deems that the best fix for a problem is to add more virtual hardware, it will. My virtual server is running on a physical server somewhere - I don't need to know where; System Center made that decision based on its knowledge of my datacenter and of my workload - and if the physical host needs to be shut down or it has an error, my virtual machine will continue running on another box. I'll receive regular reports in my inbox detailing my server's health and performance (I'll probably get a bill too, based on how much resource I used).
This might sound a bit far-fetched - but it's real, and all I'm waiting on is a couple of products to come out of beta. If I can't wait, I can probably do 80 per cent of it now anyway - certainly all I'd ever want (or need) to do with virtualisation just now.
I hope you enjoyed reading this series of articles as much as I did writing them.
Everything we implement has to have a great Return on Investment (ROI) and the Total Cost of Ownership (TCO) must always be low. We all want to increase the availability of our systems, and we all want to 'do more with less'. Here's a good one: We all want to 'enable agility' (whatever that means).
I'm not taking a deliberate swipe at marketing departments (I've probably said most of those myself at some time or other); it's just that, apparently, we all spend 70 per cent of our time and money 'fighting fires' (keeping our IT systems up and running) and only 30 per cent adding value to the business (implementing new systems and solving new challenges). Most people I say that to tell me that I don't know the half of it; the mix is closer to 90:10.
So, is virtualisation the 'Holy Grail'? Is it going to solve all of our issues and turn us into super heroes?
Let's look at some of the challenges you face and see if virtualisation can help.
I can't remember where I learnt this, but apparently you can spend as much money keeping a server cool as you do keeping it switched on. This means that the fewer physical servers you have, the smaller your electricity bill will be. It also means that with fewer servers, the space you need to house them all can be smaller. Can virtualisation help here? You bet it can. As part of your server consolidation strategy, virtualisation will let you run fewer physical servers, which will address your power and space issues as well as let you run your servers at a much higher utilisation.
How long does it take to provision a new server (including the time taken to get financial approval, the lead time for delivery and the time to build and implement)? How long do you think it would take if the physical server infrastructure was already in place and all you were provisioning was a virtual server (that is already created as a template and is already up to date)? You've got it - the difference is minutes compared to weeks or months.
How much do you spend on providing high availability? How much do you spend on back-ups? If you virtualise your operating systems and applications, you can back them up as single files, and replicate and move them to other available servers and desktops.
Most organisations spend a lot of time testing for application incompatibilities before implementing any infrastructure change (new operating system, service pack or patch). If you used desktop virtualisation (Virtual PC or similar) or application virtualisation, this lengthy testing would disappear. I feel a need to explain myself here. If you have an application that works fine on, say, Windows XP and you know that it currently fails on Windows Vista, then an option would be to run a virtual machine (running XP) within your Windows Vista host. This gives you all of the benefits of the new operating system (better security, easier management, etc) plus your application 'just works' (because nothing has changed as far as it's concerned - it's still running on XP). Your other option would be to use application virtualisation. Here your application runs in its own little sandbox, and if it ran fine when that sandbox was on XP, it will run fine when it's running on Windows Vista (the sandbox never changed).
Session virtualisation can also help with this scenario (and to be honest, this is one of the few scenarios for which I still see a need for terminal services). If you have an application that has very specific requirements (i.e. it won't behave on a new operating system or service pack, or it needs a lot of testing before it can be put into production), then run it on a terminal server and remote the keyboard, video and mouse to the users over the network. You can actually mix application and session virtualisation together and come up with a very neat solution - run your applications in a sandbox on the terminal servers!
All the above makes it sound like virtualisation can solve a lot of issues, but on its own it can introduce almost as many. At the end of part one of this series, I left you with this comment: Every machine you run, either virtually or physically, needs to be managed. Let's imagine that I virtualise everything and enable self-service provisioning of new servers. How long do you think it would take to have a hundred servers? A thousand servers? Tens of thousands of servers? Who is going to back them up? Who is going to keep them up to date? Who is going to monitor them? Who is going to keep them in compliance with corporate policies? Who is going to manage them? Who is going to pay for them all? Without decent management products, you're just making your infrastructure even more complicated.
Ideally, you use management tools that can differentiate between physical assets and virtual ones. If I need to reboot a server after applying an update, I want to know that it is actually a host server for a number of virtual ones - I don't want to 'accidentally' reboot a dozen mission-critical servers that have a 24x7 service level agreement. I feel another need for an explanation coming on (or at least a solution to the stated problem). How would you reboot a server that was the host for a dozen servers that can't be taken down? You would have them running on top of a cluster. The management tools would have the knowledge of what to do. The running, mission-critical servers would be failed over to another host (to maintain the SLA) before the server in question was rebooted (they could be moved back afterwards if that was required).
It would be great if the management tools had the knowledge within them to know what to do next, to do the right thing. Imagine a world where you are installing an application and are asked what SLA you require (99.999 per cent uptime and less than a second response time). The system would know what was required, in terms of architecture, to provide such an SLA (geographically-clustered, mirrored databases and multiple, load-balanced servers at every tier) and would implement and monitor that configuration. Over time, if the service level was going to be missed, the system would automatically implement the best practice resolution (put more memory, processors or I/O into a virtual machine; introduce another server into the presentation tier). This might sound a bit 'far-fetched', but it's not that far off.
So, it depends. Virtualisation is definitely here to stay. With hardware advances moving as quickly as they are, you'd be hard pressed to maximise the utilisation of a modern server just by running a single workload. With great management solutions and a 'holistic' view of the entire platform, virtualisation may well turn us all into super heroes!
This day fortnight, I will cover Microsoft's offerings in the virtualisation space. I'll explain both the technologies we have now and what's coming (our complete solution). I guess I'll also have to touch on cost and licensing.
Oh, and another last point (to get you thinking): Who would you go to for support with a SuSE Linux Enterprise Server 9 running within Microsoft's Virtual Server?