January, 2014

  • Lab Ops part 17 - Getting Started with Virtual Machine Manager

    In my last post I talked a lot about how System Center works as a whole; now I want to look at one part of it, Virtual Machine Manager (VMM), as VMs in a modern data centre are at the core of the services we provide to our users. In the good old days we managed real servers that had real disks connected to real networks, and of course those still exist, but consumers of data centre resources, whether that's other parts of the IT department or business units, will only see virtual machines with virtual disks connected to virtual networks.  So administration in VMM is all about translating all the real stuff (or fabric as it's referred to) into virtual resources, and before we do anything in VMM we need to configure that fabric: specifically our physical hosts, networking and storage.

    There are good TechNet labs and MVA courses to learn this stuff, but I think it's still worth doing some of it on your own server, so you can come back to it again whenever you want, especially if you are serious about getting certified.  So what I am going to do in the next few posts is explain how to use the minimum of kit to try some of this at home.  I generally use laptops, which are sort of portable, typically with 500GB+ of SSD and at least 16GB of RAM; half of those resources should be enough.

    I am going to assume you have got an initial setup in place:

    • Copy the above VHDX to the root of a suitable disk on your host (say E:\) and mount it (so it now shows up as a drive, for example X:)
    • from an elevated prompt type BCDBoot X:\Windows
    • Type BCDEdit /set "{default}" hypervisorlaunchtype auto
    • Type BCDEdit /set "{default}" description "Hyper-V Rocks!"
    • reboot the host and select the top boot option, which should say Hyper-V Rocks!
    • from PowerShell type Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart
    • the machine should restart and you are good to go.
    • VMM running already, either by downloading a complete VMM evaluation VHD or by installing from the VMM 2012 R2 ISO yourself.
    • A domain controller as per my earlier post on this, with the host and the VMM VM belonging to this domain.
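
    The boot-from-VHD steps above can be rolled into one script. This is a minimal sketch assuming the VHDX sits at E:\WS2012R2.vhdx and mounts as X: (both paths are assumptions; adjust for your kit), run from an elevated PowerShell prompt:

    ```powershell
    # Mount the VHDX so it shows up as a drive (X: in this example)
    Mount-DiskImage -ImagePath "E:\WS2012R2.vhdx"

    # Add a boot entry for the mounted VHD and make sure the hypervisor starts
    bcdboot X:\Windows
    bcdedit /set "{default}" hypervisorlaunchtype auto
    bcdedit /set "{default}" description "Hyper-V Rocks!"

    # After rebooting into the new entry, enable the Hyper-V role
    Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart
    ```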

    The first thing you want to do in VMM is to configure Run As Accounts.  One of the reasons my PowerShell scripts in this series are not production ready is that they have my domain admin password littered all over them, which is not good.  VMM allows us to create accounts used to authenticate against anything we might need to, which could be a domain account, a local account on a VM (be that Windows or Linux), or access to a switch or a SAN.  So let's start by adding in domain admin. We can do this from Settings | Security | Run As Accounts, or with the VMM PowerShell cmdlets, and all I have to do is open PowerShell from inside VMM or use PowerShell ISE; either way there's no more of that mucking about importing modules.

    #New Run As Account in VMM
    $credential = Get-Credential
    # The -JobGroup GUID groups this action into one entry in VMM's job history;
    # VMM generates a GUID for you when it scripts the action from the console
    $runAsAccount = New-SCRunAsAccount -Credential $credential -Name "Domain Admin" -Description "" -JobGroup "cb839483-39eb-45e0-9bc9-7f482488b2d1"

    Note this will pop up the credential screen for me to complete (contoso\administrator is what I put in). The -JobGroup at the end puts this activity into a group that ends up in the VMM job history, so even if we use PowerShell in VMM our work is recorded, which is a good thing.  We can get at that job history with Get-SCJob | Out-GridView

    We'll probably want to do the same for a local account, so just Administrator and a password, so that any VMs we create will have the same local admin account. Again this will pop up the credential screen for me to complete.
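
    A minimal sketch of the same thing for that local account (the account name and description here are my own):

    ```powershell
    # Create a Run As Account for the local Administrator on our VMs;
    # the credential prompt pops up just as it did for the domain account
    $credential = Get-Credential   # enter .\Administrator and a password
    New-SCRunAsAccount -Credential $credential -Name "Local Admin" -Description "Local admin on all new VMs"
    ```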

    Now we can consume that domain login account to add and manage our host.  In VMM we would do this from Fabric | Servers | All Hosts, and before we add in a server we can create host groups to manage lots of hosts (I have one already called Contoso). To add a host, right click on All Hosts or a host group and select Add Hyper-V Hosts and Clusters; the equivalent PowerShell is:

    <# Note: the raw PowerShell from VMM includes a load of IDs, but this will work as long as we don't use duplicate names in accounts etc. #>

    $runAsAccount = Get-SCRunAsAccount -Name "Domain Admin"

    $hostGroup = Get-SCVMHostGroup -Name "Contoso"
    Add-SCVMHost -ComputerName "clockwork.contoso.com" -RunAsynchronously -VMHostGroup $hostGroup -Credential $runAsAccount

    In the background the VMM agent has been installed on our host and it has been associated with this instance of VMM.  You should now see the host in VMM, along with all the VMs that are on it, so not bad for a line of PowerShell! We can also see the properties of our host by right clicking on it; of special interest are the virtual switches on the host, and if you have used my script without modifying it you'll see a switch called RDS-Switch.  We can also see the local storage attached to our host here.
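
    If you'd rather poke at the host from PowerShell than the properties dialog, something like this works (the host name is from my lab):

    ```powershell
    # Pull the host object back out of VMM
    $vmHost = Get-SCVMHost -ComputerName "clockwork.contoso.com"

    # List the virtual switches VMM discovered on it (RDS-Switch in my case)
    Get-SCVirtualNetwork -VMHost $vmHost | Select-Object Name, VMHost

    # ...and the VMs it found already running there
    Get-SCVirtualMachine -VMHost $vmHost | Select-Object Name, Status
    ```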

    So now we have a basic VMM environment we can play with: a host, a DC and VMM itself. So what do we need to do next?  If this was vCenter we would probably want to set up our virtual switches and port groups, so let's look at the slightly scary but powerful world of virtual networks next.

  • New Enhancements to Windows Intune!

    ‘It is critical that organizations think holistically about their approach to consumerization and BYOD. The new world of work is more than ever about people – enabling people to get access to their applications and their data on the devices they choose.  This increases satisfaction and productivity, yet IT still has a responsibility to ensure this is done in a way that protects corporate resources and maintains compliance.’

    There are new capabilities coming to Windows Intune! These enhancements will provide organizations increased flexibility to enable users to choose the devices which best suit their needs or preferences, while also helping to protect corporate data.  These updates will roll out to our service subscribers next week.

    Click here to find out more.

  • The Right Theory for IT.

    Back in the sixties Douglas McGregor at MIT came up with Theory X and Theory Y. This is so old it might in fact be a new thing for you.  If it is new to you, the theories play out like this…

    Theory X                        Theory Y

    - We're all lazy                - We work hard
    - We hate work                  - We enjoy work
    - We're under rigorous control  - We have sensible guidelines
    - We have no autonomy           - We're self-motivated
    - There's no trust              - We're trusted

    If we apply this to IT then:

    • Theory X will result in a complete lockdown of the desktop and of access to the internet, because it's assumed our users will misuse the IT systems at every opportunity and will also try to make off with our company data, if not the device itself!
    • Theory Y will mean we can install apps on our machine, the internet will be available to us and we can use our device for personal stuff as well as work, because we can be trusted to do the right thing.

    I've only had limited exposure to Theory X, some 10 years ago, but my wife's employer adopts that theory, as do those of other family members, so I've just about enough information to apply those six principles that Chris from AvePoint covers in his article earlier in the month.

    In a Theory X company, only a select few employees who need to work remotely will be allowed to. Where it is allowed, then (in my experience) several bad things start to happen.  The first is that there is typically a collection of poorly integrated tools providing the VPN, encryption and remote desktops.  This happens because IT is seen as an overhead in these organisations; the lack of trust also manifests itself in the mistaken belief that by using different providers a system can be made more secure.  Typically the infrastructure is bolted onto what's there, and a lack of investment means that not everything works remotely, even if it's been tested!

    Obviously integration and usability suffer, but if remote desktops are used, the desktop you get at work is the same at home (however good or bad that is). The only problem then is the extra pain of logging into it at home (setting up the VPN etc.), especially if there's an aggressive timeout policy which closes the session rather than leaving it open but dropping the connection.

    Productivity for my wife actually goes up at home, as she is in hot-desk hell at the office and the office network seems slower to connect to her line-of-business systems than when she is at home. Oh, and the coffee is better and cheaper at home!

    Scalability is a huge problem here.  The limiting factor is not the technology; I think it's the helpdesk.  If a fragmented solution is implemented, it's going to be more prone to failure, and so this will limit its use, and the helpdesk may well not have had the training needed to support the user base.

    Security & Privacy. The Theory X approach can often create security problems as a direct result of trying to be too secure.  If I look at security on my wife's machine, there's no single sign-on; indeed some of the passwords are system-generated alphanumerics like a GUID.  She can't hope to remember these, so she writes them down, albeit in an obfuscated fashion. If the mobile users are dedicated and motivated then they will try to circumvent the security lockdown in some way. USB drives, e-mail and cloud storage are the usual methods, so that corporate data can be accessed on users' personal devices, as those are familiar and often faster than the devices Theory X grudgingly hands out to employees.

    How does all of this contrast with the world of Theory Y? I am now in Theory Y heaven at Microsoft, so if I consider myself a mobile worker, what do those principles look like now?

    Integration.  This just got installed on my Surface Pro 2 by our IT department, MSIT:

    image 

    It gives me a rich dashboard to get everything done that I need to; it's little things like this that make life easy. Other good examples: when I book leave it shows up in my calendar and my manager's.  When I am in a meeting in Lync my status gets set to busy; when I am presenting I can drop my usual background, and PowerPoint tells Lync I am unavailable so only my manager can interrupt my presentation with an instant message.  SkyDrive (mine and Pro) just syncs everything, and I have Work Folders for syncing more sensitive stuff across my devices.  BitLocker and DirectAccess are part of my desktop, and my smartcard authentication is part of my login process to access resources via ADFS, like Yammer, wherever I am.

    Usability for me starts with having a choice of which devices I use.  I do have a Surface Pro 2, but I didn't have to have that; I have a Nokia 925 just now and actually I kind of miss the older 920, but you get the idea.  I can elect which devices to use, mine or Microsoft's. I get e-mail on up to a limit of 10 devices; if I go over that I choose which to disable, and I can report the phone as missing, all from the same web portal.

    Productivity. This is mostly down to DirectAccess and Lync. The only issues I have are in participating in the odd badly prepared meeting, or with someone who isn't Lync-aware at the other end of the call.

    Scalability. I think this is well covered at Microsoft. We have a very small helpdesk, and you might think we are all techies, but actually in my department of 80-odd people we have about 20 technical chaps and most of those are developers. Although I am an MCSE I am not allowed near production infrastructure, so I do need to make a call to these guys if I've any issues.

    Security. I have BitLocker to protect my data. If I choose to disable it then I must get manager approval.  In Windows 8.1 I can store a virtual smartcard on my device, so I don’t need a reader to use my real smartcard. The password for this, plus my domain account and my registered devices are the only three things I need to be secure, although my smartcard does actually have to let me into our offices too! Also, my VPN won’t work unless my machine is current with patches and malware updates.

    Privacy.  You might think that this fun-loving culture is not as secure, say, as a government department.  Actually I think our culture is more secure: we have an up-to-date infrastructure design, backed by experts in threat management, but most importantly we have a culture of security.  This means we believe that we should protect high business impact data, like personal information, very carefully.  Of course it's you that keeps us honest here, as any breaches would be widely reported.  In my case we simply can't do some of the cross-matching of personal data that would be useful to better understand you or our audience, because of these policies. This might be seen as inefficient, but the cost to our reputation in this area would be far higher than any short-term gain.

    So my assertion is that not only is Theory Y a better place to work, it's also more secure than Theory X.  The statistics bear me out: the fun-loving high-tech companies rarely lose confidential data, whereas this continues to be an issue for the more traditional companies and institutions who try to rely on technology to maintain security, when it is really all about the user and the organisational culture.

     

    What theory does your company adopt? Would you agree with Mr Fryer that Theory Y’s the way forward? Let us know in the comments below or via @TechNetUK.

  • Virtual Machine Manager 2012R2 Templates cubed

    Up until now in my Lab Ops series I have been using bits of PowerShell to create VMs in a known state; however this requires a certain amount of editing, and I would have to write a lot more code to automate it and properly log what is going on. Plus I would probably want a database to store variables in and track those logs, and some sort of reporting to see what I have done.  That's pretty much how Virtual Machine Manager (VMM) works anyway, so rather than recreate it in PowerShell I'll just use the tool instead.  VMM not only manages VMs, it also manages the network and storage used by those VMs. However, before we get to that we need to create some VMs to play with, and before we can do that we need to understand how templates are used.  It's actually similar to what I have been doing already – using a sysprepped copy of the OS, configuring that for a particular function (file server, web server etc.) and building a VM around it.  It's possible just to use the Windows Server 2012 R2 evaluation ISO and get straight on and build a template, and from there a VM.

    However, VMM also has the concept of profiles, which are sort of templates used to build the templates themselves. There are profiles for the Application, the Capability (which hypervisor to use), the Guest OS and the Hardware. Only the hardware profile will look familiar if you have been using Hyper-V Manager, as this has all the VM settings in it.  The idea behind profiles is that when you create a VM template you can simply select a profile rather than filling in all the settings on a given VM template tab, and in so doing you set up an inheritance to that profile. However, the settings in Application and Guest OS profiles are only relevant when creating a Service Template. So what are those, and why all this complexity when I can create a VM in a few minutes with a bit of PowerShell?
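
    As a sketch of how profiles feed templates, here's a hardware profile with a VM template hung off it; all the names and sizes are made up, and the sysprepped VHDX is assumed to be in the VMM library already:

    ```powershell
    # A reusable hardware profile - the VM settings you'd recognise from Hyper-V Manager
    $hwProfile = New-SCHardwareProfile -Name "Small-2vCPU-2GB" -CPUCount 2 -MemoryMB 2048

    # A VM template built from a sysprepped VHDX plus that profile,
    # so every template using the profile inherits its settings
    $vhd = Get-SCVirtualHardDisk -Name "WS2012R2-sysprepped.vhdx"
    New-SCVMTemplate -Name "WS2012R2-Small" -HardwareProfile $hwProfile -VirtualHardDisk $vhd
    ```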

    For me, Service Templates are the key to what VMM is all about. If you are a VMware expert, they are a more sophisticated version of Resource Groups – and before you comment below, please hear me out.  A service template completely describes a service and each of its tiers as one entity.

    image

    a sample two-tier service

    If I take something like SharePoint, there are the Web Front Ends (WFE), the service itself (the middle tier) and back-end databases, which should be on some sort of SQL Server cluster.  The Service Template allows us to define maximum and minimum limits for each tier in this service, and to declare upgrade domains which enable you to update the service while it is running by only taking parts of it offline at a time.  The upper and lower VM limits on each tier enable you to scale the service up and down based on demand or a service request, by using other parts of System Center.  There might well be a way to do this sort of thing with Resource Groups and PowerCLI in vCenter, but then there are those application and hardware profiles I mentioned earlier.  They mean that I can actually deploy a fully working SharePoint environment from a template, including a working SQL Server guest cluster where the shared storage for that cluster is on a shared VHDX file.

    Services created by these templates can then be assigned to clouds, which are nothing more than logical groupings of compute, networking and storage splashed across a given set of hosts, switches and storage providers and assigned to a given group of users who have delegated control of that resource within set limits. 
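
    Creating a cloud over a host group is itself a short bit of PowerShell; a sketch assuming the Contoso host group from my lab:

    ```powershell
    # A cloud is just a logical slice of fabric scoped to a host group
    $hostGroup = Get-SCVMHostGroup -Name "Contoso"
    New-SCCloud -Name "Contoso IT Cloud" -VMHostGroup $hostGroup
    ```

    Quotas and delegated user roles are then layered on top of the cloud from the VMM console.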

    So templates might seem to be piled on top of one another here, but you don't have to use all of this capability if you don't want to. However, if you do have a datacentre (the internal Microsoft definition of a datacentre is more than 60,000 physical servers) then this power is there when you need it.

    If you haven't got a spare server and a VMM lab set up, you can just jump into the relevant TechNet Lab and see how this works.

  • Careers Advice for the IT Professional

    Jedi Fryer

    I have to admit that I am not at all interested in football, but despite all the bad press some players get, there are many who understand how privileged they are and that they are essentially there to entertain their many loyal fans.  A good example is something that Craig Bellamy said when asked about the best piece of advice he had been given..

    “Play every game as if someone in the crowd is watching you for the last time and that performance was their lasting memory of you”

    How is that relevant to us in the IT industry?

    I think we are all ambassadors for our team, our business and, to a certain extent, even our industry.  Most of us get pretty well paid for what we do, and most of us get to work in warm, comfortable offices with little risk of injury, so I would argue that working in IT is a privilege too, though perhaps not in the same league as the Premiership.  However we all have bad days, and there are terribly managed projects and organisations out there that we have to work with, and it can be hard to be the true professional all of the time. This is especially true if we don't believe in what we are doing and we aren't happy in our current roles.

    However, changing roles just because the wheel has come off where you are now is not really the answer; you need to focus on what you want to do, as I discussed in a post earlier in the year about How to find your 'Happy Place', which I hope got you thinking. It got us on the TechNet team thinking as well, and we wondered how we could help to get you into a role that you would really enjoy.  The outcome is that we have three pilot careers evenings planned at our London offices. Each of these is themed around a particular group of technologies:

    • 6th March – SQL & SharePoint
    • 3rd April – Infrastructure, so Windows and System Center
    • 1st May – Cloud: Azure and Office 365

    We have got some top experts to help advise you at each of these.  We'll have our own HR expert Emma Broadway to provide an employer's perspective, so that you can properly market yourself and prepare for the role you are looking for. One of our learning partners will be along to ensure you have the skills and certification you need.  Finally, the technology expertise will come from two sources: our MVPs, who have had really interesting careers and have really good insights into what is happening in the industry and where the technology is headed; and apprentices who are just starting their careers – because actually we are all apprentices whenever we change technologies, or when there is a revolution like the cloud that changes the way we do things, and their insights will give us all a refreshing perspective on our world.

    These evenings are open to all, and whatever your IT career aspirations are, we think you’ll pick up some good tips and advice whether or not you are in the transfer market.

  • Server Market in Decline - 6 potential reasons why.

    I saw an article a few weeks ago in Information Week discussing the slowdown in server purchasing.  It went on to decry the lack of innovation, but actually this trend could be down to any of these factors:

    Consolidation is up – the number of VMs running on a given server is higher than it's ever been, typically over 10 VMs per host. That is down to two things: the servers can handle the workload because they have more memory, cores and connectivity than they did before, and the IT industry has embraced virtualization wholesale.  Both of those represent innovation, be that higher-spec CPUs with things like NUMA and advanced virtualization support, or hypervisors that support that technology and pass that power through to VMs with little overhead. This all means those higher-spec servers could well replace two or three older units, even if the older servers were virtualized already.  So hopefully those horror stories about floor loading being exceeded by the sheer weight of servers, or having to wait for more power from the local utility companies, are less common, though we could still all use some more connectivity!

    There's no Mid-Life Crisis.  What will often stop additional workloads, say an eleventh or twelfth VM, running on a server is not the CPU or the hypervisor; it will be memory and connectivity. Given that most servers are not purchased fully loaded with network cards (NICs) or memory, but do have the latest CPUs, it makes perfect sense to upgrade these servers to get a better balance of resources: an initial purchase would rightly target the CPU as something that is harder to swap out, and spend money on peripherals like memory and NICs later as their prices drop. One thing I like about NIC teaming in Windows Server here is that disparate NICs can be teamed even if they are from different vendors (though it's not a good idea to team across NICs of different speeds!)
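
    That NIC teaming is a one-liner in Windows Server 2012/R2; the adapter names here are assumptions for your own kit:

    ```powershell
    # Team two NICs - they can be from different vendors, but match their speeds
    New-NetLbfoTeam -Name "VMTeam" -TeamMembers "NIC1","NIC2" `
        -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic
    ```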

    Storage?  Many critical workloads rely on shared storage like SANs, so as storage needs expand, more storage will be put into these, and there will also be investments in better connectivity from the storage to the servers – but again, no new servers.  Deduplication, whether built into the OS as in Windows Server 2012/R2 or part of the storage solution, has helped to slow demand for storage, but data volumes are only growing, so it's good to know that for most storage solutions it's possible to swap out drives for larger ones without throwing away the whole solution.  Innovation in storage is also evident in the numerous SSD and hybrid solutions available, and that technology is being embraced in all sorts of imaginative ways.
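
    Enabling the built-in deduplication on a data volume is similarly short (the drive letter is an assumption):

    ```powershell
    # Turn dedup on for a data volume, then check the savings later
    Enable-DedupVolume -Volume "E:" -UsageType Default
    Get-DedupStatus -Volume "E:" | Select-Object SavedSpace, OptimizedFilesCount
    ```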

    Cloud.  At least one workload, e-mail, has been making a steadily increasing move to the cloud, if the take-off of Office 365 is anything to go by.  This has resulted either in servers and storage not being acquired for an increase in e-mail capacity, or in existing servers being repurposed for other uses, or a combination of the two.  There are other workloads moving off to the cloud too, like remote desktops and internet-facing solutions, even if an organisation doesn't move everything out of its own datacentres.

    Windows Server OS. Back when I was young, every new server OS or application had higher and higher base specifications to run it.  However, the memory and CPU requirements for Windows haven't really changed for a decade, and in some cases have even decreased. I am thinking here about the wider use of Server Core as an installation option for Windows Server.

    I can't use the new technology.  If you have applications or hardware that can't use the new technology in a new server, that could also block new server purchases.  Not every application is multi-threaded or runs on x64, and even if it does, it might not be any faster, which is why you might want to upgrade in the first place.  Virtualisation should help here, but not every environment can use it; for example, there are often questions about licensing and support in a virtual world.

    However, server innovation still continues, and so when a new server is needed it will have the latest hardware to support an even better virtualisation story, such as network cards that support virtualization (SR-IOV) and storage communications (SMB Direct, using RDMA on the latest NICs).  New server designs also allow for new ways of doing things; for example, in a small business you might have had this…

    image

    where you could do the same thing with this, the Dell VRTX tower..

     

    image

    I say tower because on first inspection it looks a bit like my gaming rig, but as you can see this monster has got up to four separate servers in there, plus shared storage with RAID controllers, multiple NICs and redundant power supplies.  This means we can deliver a departmental cluster-in-a-box which can sit under a desk in a school, store or any branch office, and that didn't really exist before.

    So yes, a server slowdown is happening for a variety of reasons, but actually the workloads and useful data volumes in data centres are still on the increase.  This means we as IT professionals still have more and more stuff to manage; it's just that there might be a few fewer boxes with flashing lights in darkened rooms as part of the mix.

  • Free Microsoft System Center eBooks

    Introducing Microsoft System Center 2012 R2:

    MSC - Intro

    Introduction:

    ‘Microsoft System Center is one of the three pillars of Microsoft's Cloud OS vision that will transform the traditional datacenter environment, help businesses unlock insights in data stored anywhere, enable the development of a wide range of modern business applications, and empower IT to support users who work anywhere while being able to manage any device in a secure and consistent way. The other two pillars of the Cloud OS are, of course, Windows Server 2012 R2 and Windows Azure, and Microsoft Press has recently released free Introducing books on these platforms as well.

    Whether you are new to System Center or are already using it in your business, this book has something that should interest you. The capabilities of each component of System Center 2012 R2 are first described and then demonstrated chapter by chapter. Real-world and under-the-hood insights are also provided by insiders at Microsoft who live and breathe System Center, and those of you who are experienced with the platform will benefit from the wisdom and experience of these experts. We also included a list of additional resources at the end of each chapter where you can learn more about each System Center component.’

    Download for free here.

     

    Microsoft System Center: Optimizing Service Manager:

     MSC - Opt

    Introduction:

    ‘Welcome to Microsoft System Center: Optimizing Service Manager. We (the authors) all work with systems management at Microsoft and believe that the Microsoft System Center suite is one of the most integrated suites on the market for this purpose.  Microsoft System Center 2012 Service Manager is the only product that can integrate across most of the System Center suite and Active Directory. Service Manager is a fast and reliable product that can create and maintain a dynamic service management database to enable interaction across the organization, both inside and outside the IT department, making it a very compelling product to many organizations.

    Over the last several years, more and more customers have implemented Service Manager, either independently or via Microsoft or a partner. Sometimes the project and product implementation are not as successful as they should be. Our objectives with this book are to provide you with a framework for planning and delivering a successful Service Manager project and to share some of our experiences and best practices when it comes to optimizing and maintaining your Service Manager environment. 

    This book is written with three different roles in mind: business and technical decision makers; IT architects; and Service Manager administrators.’

    Download for free here.

  • System Center: Use the right tool for the right job

    This post isn’t really about Lab Ops as it’s more theory than Ops, but before I dive in to the world of what you can do with System Center I wanted to stress one important concept:

    Use the right tool for the right job. 

    That old saying, that when you have a hammer everything looks like a nail, can harm your perception of what SC is all about.  Perhaps the biggest issue here is simply having a good handle on the suite, which is not easy, as traditionally many of us will have been an expert in just one or two components (or whole products, as they once were). So here's how I think about this…

    Virtual Machine Manager controls the fabric of the modern virtualized data centre and allows us to provision services on top of that

    App Controller is there to allow us to control services in Azure as well as what we have in our datacentre

    Configuration Manager allows us to manage our users, the devices they use and the applications they have. It can also manage our servers but actually in the world of the cloud this is better done in VMM

    Then it’s important to understand what’s going on with our services and that’s Operations Manager. 

    Rather than sit there and watch Operations Manager all day, we need to have an automated response when certain things happen and that’s what Orchestrator is for.

    However, in an ITIL service delivery world we want change to happen in a controlled and audited manner, whether that's change needed to fix things or change because somebody has asked for something.  That's what Service Manager is for, and so if something is picked up by Operations Manager that we need to respond to, this would be raised as an incident in Service Manager, which in turn could automatically remediate the problem by calling a process in Orchestrator, which might do something in Virtual Machine Manager, for example.

    By the way, the reason that SC is not fully configured and integrated out of the box is simply down to history and honesty. Historically SC was a bunch of different products, which are becoming more and more integrated.  Honesty comes from the realisation that in the real world many organisations have made significant investments in infrastructure and its management which are not Microsoft based.  For example, if your helpdesk isn't based on Service Manager then the other parts of SC can still, to a large extent, integrate with what you do have, and if you aren't using Windows for virtualization or your guest OS then SC can still do a good job of managing VMs, and letting you know whether the services on those servers are OK or not, as the case may be.

    Another important principle in SC is not to go behind its back and use tools like Server Manager and raw PowerShell to change your infrastructure (or fabric as it's referred to in SC).  This is important for two reasons: you are wasting your investment in SC, and you have lost a key aspect of its capabilities, such as its audit function.  Notice I used the term “raw PowerShell”; what I mean here is that SC itself has a lot of PowerShell cmdlets of its own, but these make calls to SC itself, and so if I create a new VM with a Virtual Machine Manager (VMM) PowerShell cmdlet then the event will be logged.
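
    To make the “raw PowerShell” distinction concrete, here are the two ways to create a VM; only the second goes through the VMM server and so lands in the job history. The names are illustrative, and the exact parameters for your own template can be cribbed from the console's View Script button:

    ```powershell
    # Raw Hyper-V PowerShell - works, but VMM never hears about it
    New-VM -Name "TestVM" -MemoryStartupBytes 1GB -SwitchName "RDS-Switch"

    # The VMM equivalent is audited as a job on the management server
    $template = Get-SCVMTemplate -Name "WS2012R2-Small"
    $vmHost   = Get-SCVMHost -ComputerName "clockwork.contoso.com"
    New-SCVirtualMachine -Name "TestVM" -VMTemplate $template -VMHost $vmHost
    ```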

    There’s another key concept in SC and that is “run as” accounts: whether I am delegating control to a user by giving them limited access to an SC console, or I am using SC’s PowerShell cmdlets, I can reference a run as account to manage or change something without exposing the actual credentials needed to do that to the user or in my script.
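    As a sketch of how that looks in VMM's cmdlets (the account and host names here are invented placeholders), the script references the run as account by its friendly name and never touches the underlying password:

    ```powershell
    # Look up an existing run as account by its friendly name
    $runAs = Get-SCRunAsAccount -Name "Fabric Admins"   # placeholder name

    # Use it to bring a new host under VMM management - the real credentials
    # stay inside SC and are never exposed to this script or its user
    Add-SCVMHost "Orange-HV2" -Credential $runAs
    ```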

    Frankly, my PowerShell is not production ready; some of that is deliberate, in that I don’t clutter my code with too much error trapping, and some of it is that I am just not much of an expert in things like remote sessions and logging.  The point is that if you are using SC for any serious automation you should use Orchestrator, for all sorts of reasons:

    • Orchestrator is easy – I haven’t posted, and won’t post, an update on getting started with Orchestrator because it hasn’t really changed since I last wrote about it
    • It’s very lightweight – it doesn’t need a lot of compute resources to run
    • You can configure it for HA so that your jobs will run when they are supposed to, which is hard with raw PowerShell
    • You can include PowerShell scripts in the processes (run books) that you design, for things that Orchestrator can’t do
    • There are loads of integration packs to connect to other resources, and these are set up with configurations which hold the credentials for those other services, so they won’t be visible in the run book itself
    • You have already bought it when you bought SC!
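    On the PowerShell-in-a-run-book point above, the script you drop into Orchestrator's Run .Net Script activity is just ordinary PowerShell. Here's a minimal sketch (the service name is a placeholder; in practice it would be fed in from the run book) of the kind of remediation step a run book might call after Operations Manager raises an alert:

    ```powershell
    # Sketch of a remediation script for a Run .Net Script activity:
    # restart a stopped service and record the outcome
    $serviceName = "Spooler"   # placeholder - normally supplied by the run book
    $service = Get-Service -Name $serviceName

    if ($service.Status -ne "Running") {
        Start-Service -Name $serviceName
        $result = "Restarted $serviceName"
    } else {
        $result = "$serviceName was already running"
    }
    # Variables like $result can be published to later activities in the run book
    ```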

    Another thing about SC generally is that there is some overlap; I discussed this a bit in my last post with respect to reporting, and it crops up in other areas too.  In VMM I can configure a bare metal deployment of a new physical host to run my VMs on, but I can also do server provisioning in Configuration Manager, so which should I use? That depends on the culture of your IT department and whether you have both in production.  On the one hand a data centre admin should be able to provision new fabric as the demand for virtualization grows; on the other hand all servers, be they physical or virtual, should be in some sort of desired state, and Configuration Manager does a great job of that.  It all comes back to responsibility and control: if you are in control you are responsible, so you need to have the right tools for your role.

    So use the right tool for the right job; after all this theory, we’ll look at what the job actually is by using SC in future posts.

  • What I wish I’d known in 2013, is how much improved Hyper-V would be!

    The winning story from our coveted Geekmas day one competition: David has shed some light on what he wished he’d known leading into this year, and as a result has bagged himself loads of TechNet goodies including an MSDN subscription. His story focuses on the age-old server virtualisation battle; see who came out on top below.

     


      By David Mullenger, Systems Co-Ordinator at Suzuki.

      ‘Ohhh how I wish I had known how much improved Hyper-V would be.'

     

    'Casting my mind back several years, virtualisation was discussed with many a peers.

    The lack of challengers for VMWare, led to a data centre needing more and more care.

    But then from the ashes a giant awoke, Oh look at Hyper-V, Microsoft had spoke.

    So now we face an industry choice, stay with VM or listen to the voice.'

    Like many IT professionals over the last year, I believe the above is very relevant in today’s changing landscape of server virtualisation. For many years I have worked with VMware, learning support skills on a platform that I hadn’t previously worked with. On-the-job training and a ‘search engine fix’ scenario had us more reactive than proactive. Now, however, Microsoft has given us something to think about, and a return to a trusty host operating system is well on the cards. The ability to change between a trusty GUI and a lightweight Server Core installation gives me hope that even junior support staff will be able to help troubleshoot problems within the virtual data centre without the need for VMware knowledge. The ability to use virtual disks and storage tiers, shared virtual hard disks and migration tools – all great selling points, and all on a platform we have grown to know and love.

    My first look at Hyper-V in Server 2012 was in the classroom, and I have to say I left that training course refreshed (which may be hard to believe after 5 days of training), my mind thinking up ways to get a technology refresh project off the ground. The days of struggling through pages and pages of documents just to work out how to configure SNMP on a VMware host could soon be over; I was rejoicing as if Christmas had come in February. Next stop for me was to spread the message: Hyper-V is THE alternative…. but as with all IT projects, you can’t take one man’s word for it. With this in mind, a project team was set up to investigate for our environment, and lo and behold Hyper-V was put forward as the best-suited hypervisor.

    Although I regret not having made this change in 2013, I’m already looking forward to 2014 and embracing a hypervisor on a platform I know very well. My battles with ESX could be a thing of the past, although with my experience I am sure it will put up a fight all the way through any migration, not lying down until the power cords are pulled from the hosts and the VMDK files wave goodbye as they finally grow up to be VHDXs.

    If I’d known it was such a hit before the beginning of 2013 – I think Hyper-V would already be our server virtualisation platform. Onwards and Upwards!

     

    A final thank you to all those who took part in the ‘12 Days of Geekmas’ in 2013 and a huge congratulations once again to David – we hope the prizes come in handy! We’ll be back with more competitions throughout the year, so stay tuned to the blog and @TechNetUK for all the latest and greatest tech information!

  • Six Essentials of Mobilizing Your Workforce


       By Christopher Musico, Vice President, Global Communications, AvePoint

     Let’s be honest with ourselves: When it comes to workers using technology that wasn’t issued by the company on their first day on the job, it’s no longer a question of if that’s going to happen in your enterprise but a matter of when – if it hasn’t started already.

    There’s plenty of evidence that points to this business reality – especially when it comes to IT workers. They’re already inclined to want to use the latest and greatest technologies, and have the skill set to learn and adopt them quickly.

    This shift is taking place, and you have the choice to either embrace it or continue to bury your head in the sand, hoping it’ll go away. The problem with hoping it goes away is that it usually doesn’t, and it leads to other unforeseen repercussions. According to a recent Forrester Research study on data privacy and security, nearly half of all data breaches in 2013 were accidental in nature. So employees, just trying to do their jobs, mistakenly released data that they shouldn’t have – which can cost millions of dollars and permanently damage a company’s reputation.

    Bearing this in mind, information must be ready to go and available at will for modern workers. As availability becomes the new standard for information in our personal lives, we now expect the same experience at work.

    Is this easy? No, but here are the top six essentials you must address in order to be flexible, agile, and a business enabler rather than seen as a roadblock to productivity in the enterprise:

    1. Integration: The solutions you choose should work seamlessly with your existing content management and other IT systems.

    2. Usability: The tools you provide should feel like an extension of technology your employees already know how to use, and should be designed to be fully accessible.

    3. Productivity: Allowing work outside the office shouldn’t mean losing insight into performance.

    4. Scalability: More users and more content shouldn’t mean a greater maintenance burden.

    5. Security: Remote access shouldn’t mean letting go of control over your data.

    6. Privacy: Sensitive data should still abide by boundaries dictated by corporate and industry policies.

    By accounting for these six essentials, you’ll encourage an open yet secure culture. Employees will feel free to use the devices they need to do their jobs most effectively, while companies can rest assured that their most important information is secure. Organizations should trust their employees to do the right thing, but it’s equally important to have systems in place to verify that they are in fact doing it. Trusting but verifying can help your business go a long way in balancing employee morale with company security.

    By taking these essentials into account, you can regain control of your digital assets while still allowing employees to use the devices with which they feel comfortable. By empowering your employees with remote access to the content they need, they’re less likely to look for various and sundry means of accomplishing their tasks, which often lead to IT governance and compliance issues.

    There are solutions available from vendors to tackle this issue, as this is a growing trend – particularly as mobile technology has truly become an integral part of many of our lives, whether at home or at work. Look for integrated solutions that can help you enforce security measures and corporate policies, track progress and usage, and maintain data sovereignty.

    This way, your employees have the ability to collaborate from anywhere on any device, and you can have the confidence of knowing this collaboration is safe and secure: the best of both worlds.