Microsoft recently introduced System Center 2012, a tightly integrated management solution built from the ground up for automated private cloud application and infrastructure management. IDC interviewed a range of System Center 2012 early-adopter customers about their private cloud strategies and the role that System Center 2012 is playing in support of those programs. This white paper discusses IDC's industry-wide views on private cloud management trends and priorities, describes how System Center 2012 is addressing these needs, and highlights System Center 2012 customer experiences and lessons learned. The goal of this paper is to equip IT decision makers with a context for designing their own private cloud management evaluations and pilot projects.
Download the white paper
Paul Gregory is one of QA’s principal technologists, specialising in delivering training around Microsoft Server operating systems, virtualisation and systems management. During a 29-year career in IT, Paul has helped many international organisations develop infrastructure solutions based on Microsoft technologies, and has supplied training services for the last 14 years. Paul has helped QA deliver numerous Microsoft partner skilling programmes, particularly around Microsoft Server operating systems, virtualisation and System Center. He was also heavily involved in the recent Microsoft Windows 8 / Server 2012 TAP programme, where he played a key role in testing core Windows Server 2012 technologies and feeding that information back to product specialists in Redmond. With the advent of the Microsoft Private Cloud solutions based on System Center 2010 & 2012, Paul has been responsible for helping Microsoft prepare the partner channel in both the US and Europe for these technologies.
Customers I come across often install SCOM and then panic. The main reasons for this are:
1) Trying to do too much too soon
2) Not fully understanding their environment
3) Not understanding that SCOM tries to predict issues
There are a few other reasons, but we do not need to worry about them here. This brings me to where I want to be: the noise. Starting with item (3), it is important to understand that SCOM tries to predict events, so there is always a balance between being noisy and missing events that need to be reported in order to predict a future issue. Where that line needs to be drawn will vary from one organisation to another.
One area where I see people struggle with this is managing basic hardware capacity issues, for example monitoring free disk space. The main problem is that most systems today have fairly small OS drives and much larger data volumes, so different thresholds need to be set. However, the default rules for managing free disk space apply to all drives in a computer. To manage this correctly, a number of things need to be put in place as best practice:
1) Standardize Server Builds – I often hear that server builds are a bit random; it is never too late to standardize the build.
2) Create SCOM Groups for each drive (steps below)
3) Set Overrides for each drive group and for each OS type.
This model will then allow different disk space thresholds to be set for each group of hard drives.
1) From within the SCOM administration console select the Authoring panel
2) Select Groups and choose Create Group on the right
3) Give the group a name and description, and create a new management pack to store Windows Server hardware monitoring overrides in if one does not already exist
4) Press Next until on the Dynamic Members page
5) Press the Create/Edit button
6) In the drop-down box choose either “Windows Logical Hardware Component” or “Logical Drive (Server)”; these allow you to select drives based on name or other properties. Press Add
7) In the table, change the first drop-down box to “Display Name”, enter C: in the third box, then press OK
8) Complete the wizard
9) Repeat to create any other groups for other Drive letters you wish to set separate rules for.
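The grouping-and-override model the steps above build can be sketched in a few lines. This is an illustrative Python sketch only, and the drive letters and percentage thresholds are assumptions for the example, not SCOM defaults:

```python
# Illustrative sketch of per-drive-group thresholds (values are assumptions,
# not SCOM defaults): small OS drives and large data volumes need different
# free-space rules, which is what the drive-letter groups enable.

# Hypothetical override table: one threshold per drive group.
THRESHOLDS_PCT = {
    "C:": 10,   # OS drive: alert when under 10% free
    "D:": 5,    # data volume: 5% of a large volume is still plenty
}
DEFAULT_PCT = 10  # fallback for drives not in any group

def should_alert(drive, size_gb, free_gb):
    """Return True if free space is below the drive group's threshold."""
    threshold = THRESHOLDS_PCT.get(drive, DEFAULT_PCT)
    return (free_gb / size_gb) * 100 < threshold

# A 2 TB data volume with 150 GB free is fine under the 5% D: override,
# but would have alerted under a one-size-fits-all 10% rule.
print(should_alert("D:", 2000, 150))  # False
print(should_alert("C:", 60, 4))      # True: OS drive under 10% free
```

The point of the per-group overrides is exactly this lookup: the same monitor evaluates different thresholds depending on which drive group the disk instance belongs to.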
By Paul Gregory
System Center 2012 Configuration Manager introduced many new features. One of the features built around the new user-centric element of the product is the Application Catalogue, which allows users to select software they would like to install and, if required, have it approved by an administrator.
One question I get asked a lot is whether this functionality can be supported in untrusted forests, and it can. To enable this support, a few things need to be considered:
· The Application Catalogue server has to be able to authenticate the users that connect to it
· Configuration Manager needs to know about the users that will request applications
To enable this cross-forest support, the following steps need to be performed:
1) Install the Application Catalogue Web Service in the same forest as the SCCM database
2) Install the Application Catalogue website in the untrusted forest, giving SCCM credentials to deploy the role to a member server in the remote forest
3) The Application Catalogue web service and website communicate using self-signed certificates; these can be replaced with certificates from a PKI infrastructure if needed
4) Enable User Discovery or User Group Discovery for the remote forest in SCCM. This is needed because the applications displayed in the catalogue are based on collection targeting, so the applications will need to be targeted within SCCM to the users in the remote forest.
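The discovery requirement in step 4 exists because the catalogue is driven entirely by collection membership. A minimal Python sketch of that relationship (the user names, collections and applications are hypothetical, not real SCCM objects):

```python
# Hypothetical data illustrating why User Discovery is required: the
# catalogue only shows applications deployed to collections the user
# belongs to, so an undiscovered remote-forest user sees nothing.

user_collections = {
    "remoteforest\\alice": {"All Sales Users"},
    # "remoteforest\\bob" was never discovered, so he is in no collections.
}

app_deployments = {
    "Visio 2010": {"All Sales Users"},
    "AutoCAD": {"Engineering Users"},
}

def catalogue_for(user):
    """Applications visible in the Application Catalogue for a user."""
    memberships = user_collections.get(user, set())
    return sorted(app for app, targets in app_deployments.items()
                  if targets & memberships)

print(catalogue_for("remoteforest\\alice"))  # ['Visio 2010']
print(catalogue_for("remoteforest\\bob"))    # [] - undiscovered user
```

In other words, installing the website role in the remote forest is not enough: until the remote users are discovered and targeted, their catalogue is empty.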
There is a Windows Azure newsletter for Germany. If you would like to learn more and stay continuously up to date on new features and background information, sign up here. It covers interesting topics across Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS) in Windows Azure.
Several key goals of the Windows Server 2012 (WS2012) rewrite were to accommodate the requirements of the cloud business. Datacenters which provide cloud capability are generally massive and have unique requirements, normally a superset of those found in smaller environments. These requirements include uptime, resiliency, flexibility, cost savings, manageability, speed and security.
In this article, I would like to explain some of the new features that provide dramatic improvements in the speed of the WS2012 servers sold by our OEMs, in environments that depend on throughput, such as the storage, financial, industrial control, gaming, printing, communications and broadcast verticals.
In order to expand performance on a server, many things must be in place:
· Multiple processors, and the more the better.
· More memory with dynamic allocation.
· Higher bandwidth through the networks.
· An unbounded connection to disk farms.
· Self-healing capability.
Microsoft, in order to increase the performance of datacenters servicing the cloud, implemented support for up to 640 logical processors (not just cores).
An expansion of memory was needed to service all these processors and the resulting virtualized instances running on them. Ergo, not only were the upper limits for memory increased (to 4 TB), but so was the way in which memory is allocated and used. Support for SSDs is also incorporated.
Storage Spaces allows the combination of multiple storage technology types into a single storage pool. And this is no ordinary storage pool. WS2012 can run ReFS (Resilient File System), which performs file checks on the fly (imagine doing the equivalent of a CHKDSK of 300 million files in seconds, a direct result of the files being checked in advance), in essence providing self-healing data that does not have to go through cleansing and correction at the storage level.
Also implemented was support for off-loading storage transfers from the server to storage arrays from vendors like EMC, Hitachi and Network Appliance. (This was not only for SAN environments; support for NFS was also added.) This optimizes usage of the pipe, and expands our ability to work in heterogeneous environments and service existing disk farms.
In order to maximize the speed of the disks, the use of 4K blocks has become the standard. This is what disk drives want to see.
SMB 3.0 adds multichannel support, and a refactoring of SMB 2.0 allowed network transfers to reach 97% of DAS (direct-attached storage) speed. Additionally, files are always opened in “write through” mode.
(As an aside, de-duplication at the file level was incorporated. This not only saves space but also helps reduce time spent searching for files, and support for 256 iSCSI targets was added.)
And finally, NIC teaming was incorporated, with the capability of creating a failover team of 16 channels or aggregating 32 ports (with load balancing in both scenarios) to maximize throughput. NIC products from multiple vendors can even be blended together. With this kind of throughput, and support for networked storage, you can see how an IP SAN can be built with throughput of up to 320Gb/sec (using 10Gb/E network cards). What used to be hard is now easy.
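The 320Gb/sec figure follows directly from the aggregation limit quoted above. A worked version of the arithmetic (the byte conversion ignores protocol overhead, which is an assumption for illustration):

```python
# Worked version of the throughput arithmetic quoted above:
# 32 aggregated ports of 10 gigabit Ethernet.

PORTS = 32          # NIC teaming aggregation limit cited above
GBIT_PER_PORT = 10  # 10Gb/E network cards

total_gbit = PORTS * GBIT_PER_PORT
print(total_gbit)   # 320 (Gb/sec of raw aggregate bandwidth)

# Rough conversion to bytes, ignoring protocol overhead (an assumption):
print(total_gbit / 8)  # 40.0 (GB/sec)
```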
We measured 1 million IOPS in some of our tests, and we did not even reach the upper bound of WS2012. (I spelled out “million” because I was certain some would think that number was a typographical error.) This is due to the previously mentioned features, as well as SMB Multichannel with SMB 3.0. It means that very powerful storage can be built for a tenth of the price companies pay today to EMC/NetApp.
If these performance enhancements are of interest to you, go to http://www.microsoft.com/windowsembedded/en-us/evaluate/windows-embedded-server.aspx to download and begin testing a demo copy of WS2012.
If you would like a detailed version of these tips, please mail: firstname.lastname@example.org. For further information on 1E’s integration capabilities with System Center 2012 Configuration Manager, please visit: http://www.1e.com/it-efficiency/solutions/system-management-services/
IPD Guide for System Center 2012 - Operations Manager now available!
The Infrastructure Planning and Design (IPD) Guide for System Center 2012 - Operations Manager outlines the infrastructure design elements that are crucial to a successful implementation of Operations Manager. It guides you through the process of designing components, layout, and connectivity in a logical, sequential order. You’ll find easy-to-follow steps on identification and design of the required management groups, helping you to optimize the management infrastructure.
· Download the IPD Guide for System Center 2012 - Operations Manager.
· Learn more about the IPD Guide Series.
Determine Windows Server 2012 Readiness with MAP 8.0 Beta
Accelerate your Windows Server 2012 migration with Microsoft Assessment and Planning (MAP) Toolkit 8.0 Beta. This latest version of MAP adds new scenarios to help plan your environment with agility and focus while lowering the cost of delivering IT. Included in MAP 8.0 Beta are hardware and infrastructure readiness assessments to assist you in planning the deployment of Windows 8 and Windows Server 2012, preparing your migration to Windows Azure Virtual Machines, readying your environment for Office 2013 and Office 365, and tracking your usage of Lync.
· Download the MAP Toolkit.
· Learn more
· Join the beta.
Secure your environment with new SCM 3.0 Beta!
Secure your environment with new product baselines for Windows Server 2012, Windows 8, and Internet Explorer 10. The latest version of SCM offers all the same great features as before, plus an enhanced setting library for Windows 7 SP1 and Windows Server 2008 R2, and bug fixes. The updated setting library gives you the ability to further customize baselines, and also improves GPO Import feature affinity. SCM 3.0 provides a single location for creating, managing, analyzing, and customizing baselines to secure your environment more quickly and efficiently.
· Download SCM.
· Learn more about Security Compliance Manager.
· Join the SCM 3.0 Beta.
On the Windows Azure product group blog, Steven Martin (General Manager, Windows Azure Business Planning & Operations) has announced a new Azure support offering.
There are new Azure support levels, and the Standard level is available free of charge until 31 December 2012.
1. Additional information on Premier Support, including how to purchase, can be found here.
2. 15-minute response time is only available with the purchase of Microsoft Rapid Response and Premier Support for Windows Azure.
3. Business hours for local languages and 24x7 for English.
4. Professional Direct is only available in the US, UK, and Canada.
After 31 December 2012, Azure support will be offered at the following prices:
Anyone wanting to try out Azure support now can do so free of charge until 31 December 2012. Simply open the Windows Azure Management Portal at windowsazure.com and select “Contact Microsoft Support” from the user menu.
Then fill out the wizard and describe the technical problem:
Six Steps to Windows Azure launched last week, with over 160 attendees across the two days at our London kick-off events, which were run in partnership with the UK Windows Azure User Group. Our first event, Azure in the Real World, showcased some fantastic real-life solutions. Our second day focused on Advanced Topics in Windows Azure, which included Windows Azure Media Services and Web Services. Overall the feedback has been fantastic (#sixstepsazure). The audience was a real mix, from those who have just started with Windows Azure to those considering it in the coming year.
Here is the content that was delivered by a great line up of speakers on the 8th and 9th November.
Hitting the Limits of Azure Storage presented by Richard Wadsworth
Drillboard: A sports app case study presented by IQ Cloud
Sheffield University: Environmental Projects on Azure presented by Shaping Cloud
Bootstrapping a start-up with Windows Azure presented by Labtrac Solutions
How SaaS Changes an ISV’s Business Model presented by David Chappell
Hitting the Limits of Azure Storage. Richard Wadsworth, Amido
Drillboard: A Sports App Case Study. Matt Quinn, IQCloud
Sheffield University: Environmental Projects on Azure. Carlos Oliveira, Shaping Cloud
Bootstrapping a Start-Up with Windows Azure. Damian Otway, Labtrac Solutions
Exploring the Micro: .NET MicroFramework and Windows Azure. Andy Cross, Elastacloud.
How the Cloud Changes Financial Services. George Kaye, Derivitec and Andy Cross, Elastacloud.
David Gristwood talks to David Chappell.
Windows Azure Media Services presented by Nuno Filipe Godinho
Windows Azure and Active Directory presented by Steve Plank, Microsoft.
What’s next in Six Steps to Azure?
Step 2: Architecture and Design for Windows Azure - Join us online on 26th November:
· Windows Azure Architecture for Developers - 10:00am
· Windows Azure Architecture for IT Professionals – 12:00pm
Step 3: Integration with Mobile and the New World of Apps – Join us online on 4th December
· Integration with Mobile and the New World of Apps Part 1 – 10:00am
· Integration with Mobile and the New World of Apps Part 2 – 12:15pm
We also have a number of Windows Azure Developer camps, which will take attendees from knowing nothing about the cloud to actually having deployed a simple application, and made it available on the public internet.
Other steps are to follow, but you can find out about the entire programme here.