It's all about Microsoft Infrastructure...

Here you can find information about Virtualization, System Center, Unified Messaging, Directory Services, Deployment, MS Certification, and much more...

July, 2010

  • Windows Server 2008 R2 beta 2 baselines... introducing Setting Packs!

    The beta 2 version of the Windows® Server 2008 R2 Security Baseline is now available for you to download... and it now includes a setting pack!

    What is a setting pack?
    Since the release of the Security Compliance Manager (SCM) tool, one of the most frequent requests has been to add all of the available Group Policy settings to the Microsoft security baselines so that you can access them in the SCM tool. While our baselines include hundreds of settings, there are hundreds of additional settings available in Group Policy. In response to this request, the team created setting packs. The setting packs include the basic information required by the SCM tool to define custom baselines that you can use to create GPO backups, DCM configuration packs, and SCAP content. You can learn more about setting packs on the program description page. Use the links provided in this message to join the program or go directly to the program description page.
    Meet your business-critical needs and elevate the security of Windows Server 2008 R2 with this updated beta 2 security baseline and the new setting pack. It combines best-practice guidance with the Security Compliance Manager (SCM) tool to help you plan, deploy, and monitor the security of your Windows Server 2008 R2 servers. Preview this new security baseline, and get the knowledge to effectively deploy and monitor your security baseline for Windows Server 2008 R2 faster and easier.
    This beta 2 security baseline for Windows Server 2008 R2 is formatted for easy import using SCM. You must first join the program (https://connect.microsoft.com/InvitationUse.aspx?ProgramID=5758&InvitationID=SOLC-2WBV-24MY&SiteID=715) and then use the Download link found in the upper-left corner of the Connect page. You will find detailed instructions about how to import the download file into SCM on the program description page (https://connect.microsoft.com/content/content.aspx?ContentID=17624&SiteID=715).


    Join us for one of our upcoming Live Meeting sessions
    Join us and learn more about SCM and the new setting packs. There will be two Live Meeting sessions on Thursday, August 19, 2010: one at 8:00 AM Pacific Time and another at 3:00 PM Pacific Time. Details about the meetings and how to participate are posted on the program description page. Choose the session time that fits your schedule and come join us.


    What other baselines are we working on?
    The following are new baselines and setting packs we will ship soon after the Windows Server 2008 R2 baseline:
    - Exchange 2007
    - Office 2010 with a setting pack
    - SQL Server 2008 and 2008 R2
    - Setting packs for Windows 7 and Internet Explorer 8


    Specific beta and final release dates for each of these baselines are available on the program description page. Links are provided below for you to either join the beta program or go directly to the program description page if you joined previously. We will send out announcements when these baselines are released for beta review and when the final releases are available for you to download from within the SCM tool.
    If you are not already a member of the Security Baselines Beta Review Program, click here to join.

    For members, click here to go to the program description page.
    Learn more about the Security Compliance Manager.
    Download the Security Compliance Manager.

  • System Center Operations Manager 2007 R2 Connectors Cumulative Update 2

      This update addresses some critical gaps with the Connectors.  The following is new in this update:

    · Support for HP Operations Manager for UNIX v9

    This update enables support for HP Operations Manager for UNIX v9 with the Operations Manager 2007 R2 Connector for HP Operations Manager.  The Connector supports HP Operations Manager for UNIX v9 on the following platforms:

      • HP-UX 11i v3 (Itanium)
      • Solaris 10 SPARC
      • Red Hat Enterprise Linux 5.2 x64

    · Support for BMC Remedy AR System 7.5

    This update enables support for BMC Remedy ARS 7.5 with the Operations Manager 2007 R2 Connector for Remedy.  This includes Remedy ARS 7.5 installed on Red Hat and SLES platforms. 

    · Addressed incorrect dates passed to the remote system by the connector

    The Connector performed an incorrect date conversion and forwarded the incorrect date to the remote system.  

    · Addressed Product Knowledge not forwarded when the locale is Canadian English

    If the locale on the Operations Manager server is set to Canadian English, Product Knowledge was not forwarded from Operations Manager to the remote system.

    This update is available from the Download Center: 

    http://www.microsoft.com/downloads/details.aspx?FamilyID=87c27d91-4549-4169-a87a-ca88e4136e4f&displaylang=en

  • Now Available for Download: Windows User State Virtualization

    The Infrastructure Planning and Design (IPD) team is working on a new guide and would like your feedback. The Infrastructure Planning and Design guide for Windows User State Virtualization (USV) helps IT get started planning a Windows USV solution.

    Windows user state virtualization helps IT find the right balance between centralized management of business-critical data and a rich user desktop experience. Follow the stepwise approach in this IPD to gather relevant user and IT requirements. Then compare and contrast the Windows USV technologies (Folder Redirection, Offline Files, and Roaming User Profiles) in light of scenarios that are relevant to your business. Also, leverage the real-world guidance based on subjective analysis of Windows USV deployments in mid to large organizations, and interviews with subject matter experts.

    Reduce time and planning costs by following the processes in this IPD guide to design a successful Windows USV strategy.

    Tell us what you think! Download the beta guide, and send us your honest and constructive feedback to IPDfdbk@microsoft.com by August 14th. We appreciate your input and will work to make each guide as helpful and useful as possible.

    Infrastructure Planning and Design streamlines the planning process by:

    · Defining the technical decision flow through the planning process.

    · Listing the decisions to be made and the commonly available options and considerations.

    · Relating the decisions and options to the business in terms of cost, complexity, and other characteristics.

    · Framing decisions in terms of additional questions to the business to ensure a comprehensive alignment with the appropriate business landscape.

    Tell your peers about IPD guides! Please forward this mail to anyone who wants to learn more about Infrastructure Planning and Design guides.

    Join the Beta Program

    Subscribe to the IPD beta program and we will notify you when new beta guides become available for your review and feedback. These are open beta downloads. If you are not already a member of the IPD Beta Program and would like to join, follow these steps:

    1. Go here to join the IPD beta program:

    https://connect.microsoft.com/InvitationUse.aspx?ProgramID=1587&InvitationID=IPDM-QX6H-7TTV&SiteID=14

    If the link does not work for you, copy and paste it into the Web browser address bar.

    2. Sign in using a valid Windows Live ID.

    3. Enter your registration information.

    4. Continue to the IPD program beta page, scroll down to Infrastructure Planning and Design, and click the link to join the IPD beta program.

    Related Resources

    Check out all the Infrastructure Planning and Design team has to offer! Visit the IPD page on TechNet, http://www.microsoft.com/ipd, for additional information, including our most recent guides.

  • Virtualization: Linux Integration Services version 2.1 goes RTM

    For those who may not have seen this on the Microsoft Virtualization blog, the Linux Integration Services for Hyper-V version 2.1 just went RTM! This is great news for customers using virtualization in a heterogeneous data center environment. Hyper-V has supported Linux as a guest OS for a long time, and this new release really enhances that support at a granular level. You can look for the following new features in the 2.1 release:

    • Driver support for synthetic devices: Linux Integration Services supports the synthetic network controller and the synthetic storage controller that were developed specifically for Hyper-V.
    • Fastpath Boot Support for Hyper-V: Boot devices take advantage of the block Virtualization Service Client (VSC) to provide enhanced performance.
    • Timesync: The clock inside the virtual machine will remain synchronized with the clock on the host.
    • Integrated Shutdown: Virtual machines running Linux can be gracefully shut down from either Hyper-V Manager or System Center Virtual Machine Manager.
    • Symmetric Multi-Processing (SMP) Support: Supported Linux distributions can use up to 4 virtual processors (VP) per virtual machine.
    • Heartbeat: Allows the host to detect whether the guest is running and responsive.
    • Pluggable Time Source: A pluggable clock source module is included to provide a more accurate time source to the guest.

    Also, please note that this version supports Novell SUSE Linux Enterprise Server 10 SP3, SUSE Linux Enterprise Server 11, and Red Hat Enterprise Linux 5.2 / 5.3 / 5.4 / 5.5. Head on over to the download page to give it a try.

  • Download new Solution Accelerators

    Deploy Windows 7 and Office 2010 quickly and reliably with MDT 2010 Update 1. Microsoft Deployment Toolkit 2010 Update 1 includes new features such as Office 2010 support, user-driven installation, and key enhancements in driver support for quick and reliable Windows 7 and Office 2010 deployment.

    Accelerate your IT planning with the new MAP Toolkit 5.0. Download the Microsoft Assessment and Planning (MAP) Toolkit 5.0  for robust new features to help you save valuable IT infrastructure planning time and expense. New features include Office 2010 readiness assessment, Windows 2000 Server migration assessment, and inventory of Linux-powered LAMP stacks.

    Build a foundation for a private cloud with Microsoft System Center Virtual Machine Manager Self-Service Portal (SSP) 2.0 Release Candidate. Looking for a solution that helps you reduce IT costs while increasing your organization's agility? System Center Virtual Machine Manager Self-Service Portal (SSP) 2.0 lets you build the foundation for an on-premises cloud infrastructure, enabling you to deliver IT as a service for your organization. Using SSP, you can respond more effectively, and at a lower cost, to the rapidly changing needs of your organization. Download the release candidate.

    Apply Microsoft Operations Framework (MOF) guidance to Microsoft products and technologies using new MOF resources. These materials provide the resources to apply MOF guidance to Microsoft products and technologies. This collection of workbooks, diagrams, and companion guides will help you offer services that run smoothly and deliver expected value. Download the new MOF resources.

  • Exchange 2010 SP1: What’s new with the Exchange Best Practices Analyzer?


    Wondering what’s new in the Exchange Best Practices Analyzer for Exchange 2010 Service Pack 1 (ExBPA E14SP1)? Curious about how updates to the tool are being handled in Exchange 2010? Here are answers to some of your questions:

    How do I get ExBPA E14SP1?

    Since Exchange Server 2007, the Best Practices Analyzer (along with other useful Exchange troubleshooting tools) has been part of the product and installed during Exchange setup. You can find ExBPA and the tools in the Tools node of the EMC. The previous version of ExBPA (v2.8) will not download updates for Exchange 2007 or Exchange 2010; instead, you must run the version of the tool in the EMC.

    Does ExBPA E14SP1 support Exchange 2007?

    The Exchange 2010 RTM version of ExBPA does not support scanning Exchange 2007 servers. The product team heard your requests for Exchange 2007 support and has responded: to support coexistence (and for ease of use), ExBPA E14SP1 will now scan older Exchange versions. Be aware, though, that error and warning rules for Exchange Server 2003 are in extended support and will not be updated unless the change meets the requirements for extended support. You can find more about the extended support phase in the Microsoft Support Lifecycle.

    What’s new in ExBPA E14SP1?

    In this latest release, the BPA team, Customer Support Services and others worked together to identify and create new health checks. Changes include additional health checks for database availability groups, poison mailboxes and mixed environment support. Some other changes include:

    • Extended coverage in the “Permissions Check” scan: Permissions inheritance checks have been extended and moved. They are now part of the Permissions Check scan rather than the Health Check scan. Tests now also include validating Role Based Access Control (RBAC) permissions: ensuring that all users are able to access the Exchange Control Panel (ECP), that all out-of-the-box RBAC roles and role groups are properly configured, and that there is at least one administrative account present within the Exchange organization.
    • Readiness checks have moved: Readiness checks have been removed from ExBPA E14SP1 and incorporated into the new Exchange Pre-Deployment Analyzer (ExPDA). You can use ExPDA to perform an overall topology readiness scan of your environment. To start planning your upgrade, we recommend you begin with the Exchange Deployment Assistant.

    What’s new in the release process?

    With Exchange 2007 and 2010, ExBPA has moved to a release process that is in sync with the product release cycle. Updates to ExBPA are now part of Exchange product update rollups and service packs. The easiest way to get the updates is to install the update rollup on the workstation where you are running ExBPA (assuming you are at that service pack level). The ability to update only the configuration XML files during startup of the tool will still be offered, but if an update to the XML file requires an update to the binaries for proper operation, the tool will direct the user to apply the corresponding update rollup which includes both the XML and the binaries. You can expect ExBPA E14SP1 updates with Exchange 2010 SP1, as well as subsequent Service Pack and Update Rollup releases.
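    The update-gating rule just described (XML-only definition updates apply at tool startup, while updates whose definitions need new binaries send the user to the matching rollup) can be sketched roughly as follows. The function name and version tuples are illustrative assumptions, not the actual ExBPA implementation:

```python
# Illustrative sketch of the update-gating rule described above; the
# version tuples and function name are hypothetical, not ExBPA's API.

def plan_update(installed_engine, xml_min_engine):
    """Decide how a definitions (XML) update should be applied.

    installed_engine / xml_min_engine are (major, minor) version tuples.
    """
    if xml_min_engine <= installed_engine:
        # XML works with the installed binaries: refresh it at startup.
        return "apply-xml-only"
    # XML needs newer binaries: direct the user to the update rollup,
    # which ships both the XML and the binaries.
    return "install-update-rollup"

print(plan_update((14, 1), (14, 0)))  # apply-xml-only
print(plan_update((14, 0), (14, 1)))  # install-update-rollup
```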

  • Architecture: What does e-mail cost?

    External Source: http://blogs.gartner.com/bill-pray/2010/07/15/really-what-does-e-mail-cost/

    When determining e-mail costs, it is important to look beyond licensing costs and to use, at the least, a three-year average to get the best grasp of what e-mail really costs.

    Costs should include not only the e-mail licensing, but add-ons, mobile, perimeter services, archiving, and migration costs if you are considering a new e-mail solution.

    So, here are two examples of large enterprises and their costs – remember your mileage may vary:

    A large financial institution with 35,000-plus mailboxes on 60-plus servers (and significant regulatory and compliance requirements):

    • Mailbox costs = $116 per user per year
    • Archiving costs = $46 per user per year
    • Perimeter services costs = $31 per user per year
    • Total costs = $192 per user per year

    A large energy enterprise with 36,000-plus mailboxes, using an older e-mail solution and infrastructure (under review for replacement), with some of the servers hosted by third parties:

    • Mailbox costs = €73.50 per user per year
    • Perimeter services costs = €3.50 per user per year
    • Total costs = €77 per user per year
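    As a quick arithmetic sketch, the per-user totals above are simply the sum of the component costs. The helper below (its name and dictionary layout are just illustration) reproduces the energy-enterprise figures:

```python
# Sum per-user annual e-mail cost components, as in the examples above.
# Function name and data layout are illustrative.

def annual_cost_per_user(components):
    """Total annual cost per user from a dict of component costs."""
    return round(sum(components.values()), 2)

energy_enterprise = {
    "mailbox": 73.50,            # EUR per user per year
    "perimeter_services": 3.50,  # EUR per user per year
}
print(annual_cost_per_user(energy_enterprise))  # 77.0
```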
  • Virtualization: Free Gartner Webinar: From Virtual Machines to Clouds

    Server Virtualization: From Virtual Machines to Clouds

    http://my.gartner.com/portal/server.pt?open=512&objID=202&mode=2&PageID=5553&resId=1382132&prm=twweb072310

    Tom Bittman is Gartner’s lead virtualization analyst so this may be worth tuning in to.

  • WIN7: Windows 7 Service Pack Coming in 2011

    external source: http://itmanagement.earthweb.com/entdev/article.php/3894116/Microsoft-Windows-7-Service-Pack-Coming-in-2011.htm

    A week after Microsoft began officially beta testing the first service pack for Windows 7, it has revealed that it's planning to ship the final version sometime during the first half of next year.

    Microsoft (NASDAQ: MSFT) had not said when to expect Windows 7 Service Pack 1 (SP1) to be formally released, but given that the SP1 beta was released broadly last week at the company's Worldwide Partner Conference (WPC) in Washington, many observers had expected it to debut as early as the end of 2010.

    "It [SP1] will be released sometime in the first half of calendar year 2011, meaning sometime after January 1, 2011," a Microsoft spokesperson confirmed in an e-mail to InternetNews.com.

    Historically, release of the first Service Pack for a new version of Windows, with its myriad fixes and improvements, has been the metaphorical starting gun for many corporate IT staffs to begin testing and deployment in earnest. However, early response to Windows 7 has been so positive -- Microsoft has already sold more than 150 million licenses -- that indications from the market point to many organizations choosing not to wait until SP1 arrives.

    In fact, Microsoft credited strong sales of Windows 7 -- both at retail and to enterprise customers -- as driving the company's financial performance in its third fiscal quarter, which ended March 31, to new records. A repeat of that performance is expected Thursday, when Microsoft reports its fourth fiscal quarter and year-end financial results.

    Service Packs typically include only bug and security fixes, and are expected to be more solid and reliable than the original release. That's true of Windows 7 SP1, but its sibling, Windows Server 2008 R2 SP1, adds a pair of new features designed to work better in a cloud computing environment.

    One feature, called RemoteFX, aims to provide 3D graphics for remote users, while the other, named Dynamic Memory, enables systems administrators to throttle memory use without causing performance problems.

    Company executives have gone out of their way at every turn to suggest that the original release of Windows 7 -- the "release to manufacturing," or RTM, edition -- is solid and reliable enough that the company advocates that large customers not wait for SP1. For instance, in a Q&A posted online, the company declared that Windows 7 is a "high-quality release."

    "SP1 will include all updates previously available to Windows 7 users through Windows Update, so there is no reason to wait or delay their use of Windows 7," the Q&A continued.

  • Exchange: Combining Web Farm publishing with Software or Hardware Based Load Balanced CAS arrays

    External Source: http://msexchangeteam.com/archive/2010/07/20/455575.aspx

    The introduction of the Client Access Server (CAS) role as the MAPI end point Outlook uses to connect to a mailbox has prompted many organizations to consider load balancing internal clients for the first time. Introducing a load balancer to provide fault tolerance and load sharing for Client Access, combined with a product such as Forefront TMG or UAG to publish Exchange (products that can themselves provide load balancing), can be a source of confusion.

    The most common question is whether Forefront TMG (Forefront TMG will be referred to throughout this section but the same is true of Forefront UAG in these scenarios) should be used to publish the Virtual IP address (VIP) created on the load balancer, as shown in the diagram below, or whether a farm of CAS should be configured on Forefront TMG, and that used as the destination for the publishing rule.

    Figure 1 - All Connections through the Load Balancer

    This approach of publishing the load balancer itself has both advantages and some disadvantages.

    An obvious advantage is that a simple, common path now exists for both internal and external client connections, both via the load balancer. The disadvantage is that a single point of failure now exists for all client connections, though that will always be the case when concentrating connections to any form of hardware device and is usually mitigated by using redundancy in the configuration.

    Another advantage is that a hardware load balancer usually has many more affinity methods available to it, and so that extra capability can be leveraged when balancing the load across the CAS.

    One of the more subtle disadvantages only becomes clear when you consider how Forefront TMG views the health of the end point it is publishing. If the end point is a single load balancer and there is an issue connecting to that load balancer, the entire target is marked as down; whereas if Forefront TMG treats the health of each member CAS individually, any one member being down does not impact the entire service. As in the previous case, redundancy in the load balancer can help mitigate this risk.

    A further issue that can cause problems in this scenario, though it is relatively easy to work around if the network configuration allows it, is that Forefront TMG typically uses its own IP address as the source IP in the TCP packets that reach the load balancer. It effectively appears to the load balancer as a single IP address, or client, which impacts the load balancer's ability to distribute load based on source IP address. There are three mitigations to this problem:

    • Configure Forefront TMG not to replace the client's IP address with its own. This requires Forefront TMG to be set as the default gateway (or used as the ultimate exit route from the network) on the load balancing hardware if it is decrypting SSL, or on the CAS if SSL is decrypted there, to ensure the packets route back through Forefront TMG.
    • Configure the load balancer to use a form of affinity other than source IP. This can be a problem for clients such as Outlook Anywhere, where one client can create multiple SSL sessions; sessions from the same client may then be split across multiple CAS.
    • Configure Forefront TMG to use Bi-Directional Affinity (available only in the Enterprise version of Forefront TMG), which allows Forefront TMG to manage this complex networking scenario. There are, however, some caveats to this approach, which are discussed in this blog post: http://blogs.technet.com/b/isablog/archive/2008/03/12/bi-directional-affinity-in-isa-server.aspx.
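    To see why a collapsed source-IP pool defeats source-IP affinity, consider a minimal hash-based affinity sketch (illustrative only, not any vendor's algorithm): when Forefront TMG substitutes its own IP for every client, every session hashes to the same CAS.

```python
import hashlib

# Minimal source-IP affinity sketch (illustrative, not a vendor algorithm):
# each source IP is hashed to pick a CAS, so sessions from one IP always
# land on the same server.

CAS_FARM = ["cas01", "cas02", "cas03"]

def pick_cas(source_ip):
    digest = hashlib.md5(source_ip.encode()).hexdigest()
    return CAS_FARM[int(digest, 16) % len(CAS_FARM)]

# Twenty distinct client IPs spread across the farm...
clients = ["10.0.0.%d" % i for i in range(1, 21)]
print(sorted({pick_cas(ip) for ip in clients}))

# ...but if TMG replaces every client IP with its own, all sessions
# collapse onto a single CAS.
tmg_ip = "192.168.1.10"
print(sorted({pick_cas(tmg_ip) for _ in clients}))  # one server only
```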

    One last disadvantage of publishing the load balancer itself rather than each individual server is that certain scenarios, any that involve Kerberos Constrained Delegation (KCD) for example (certificate-based authentication and NTLM Outlook Anywhere are two Exchange scenarios), cannot be configured. KCD requires that Forefront TMG use the Service Principal Name (SPN) of the delegated service, and since SPNs cannot be configured on more than one machine in a domain, there is currently no way to configure KCD from Forefront TMG to CAS in this scenario. Publishing a single virtual IP address, that of the load balancer, would prevent KCD from working altogether.

    Another potential solution is to not use the hardware load balancer and simply point all client traffic at Forefront TMG and allow it to load balance all the connections. This is shown in the diagram below, and shows all internal and external client requests being made via Forefront TMG.

    Figure 2 - Use Forefront TMG as the Sole Load Balancer

    The problem with this suggestion is that Forefront TMG is unable to use a farm for any protocol other than HTTP. Accessing a mailbox from an Outlook client connected to the same network is done using RPC, POP3, or IMAP4, and neither Forefront TMG nor UAG can load balance these protocols across a farm of servers. Therefore you should not use a name that ultimately resolves to Forefront TMG or UAG as the MAPI end point for your Outlook clients. Whilst it is technically possible to configure Forefront TMG to make the appropriate ports available, they can only be used to publish a single IP address. This single IP could be a single server or a load balanced IP address, but if you have load balancing available yet choose to concentrate all your connections on Forefront TMG, you are negating all the benefit of having the load balancer in the environment.

    Another alternative would be to force all your internal users into Outlook Anywhere mode, so all traffic is HTTPS and can therefore utilize the Forefront TMG/UAG web farm. Some customers without hardware load balancers have done this to solve this problem, and whilst it is certainly possible, it is not necessary if you do happen to have a hardware load balancer, as we will discuss.

    Knowing that Forefront TMG cannot effectively load balance RPC requests but can load balance HTTP-based traffic, you may be tempted to force all your internal Outlook clients to connect using Outlook Anywhere, over HTTP, and then allow Forefront TMG to load balance this traffic to the CAS in the published web farm. Whilst this would work in most cases, uneven load balancing is often seen because the number of source IP addresses visible to Forefront TMG is low, particularly if NAT is used in any part of the network, so the connections from Forefront TMG to CAS tend to be uneven. For this reason a dedicated software or hardware load balancer is the recommended approach for internal Outlook to CAS connections.

    The opposite approach is to not use Forefront TMG at all, and instead only use the load balancer at the network edge (assuming the device is designed for and supported in this scenario).

    Figure 3 - Use Only a Hardware Load Balancer

    In this scenario you benefit from being able to use a multitude of affinity options provided by your load balancing device, and can use the same device for internal and external load balancing if the network supports it, but you do lose the ability to pre-authenticate traffic at the perimeter of the network, and scenarios involving KCD will require that CAS be responsible for terminating the SSL stream from the client.

    A better solution is to use a web farm for all clients accessing via Forefront TMG and to point all internal clients at the hardware load balancer. The diagram below outlines this design.

    Figure 4 - Use Forefront TMG to Publish Each CAS and Point Internal Client at the Hardware Load Balancer

    In this configuration, a web farm of CAS is created in Forefront TMG, containing the individual CAS servers, and used as the target for all publishing rules. A virtual array is also configured on the hardware load balancer containing those same CAS servers. The internalURL and DNS settings used by clients connecting from inside the network point to the load balancing device, and the external settings resolve to the external interface of Forefront TMG.

    The advantage of this approach is that internal clients can use the hardware load balancer for all protocols, including RPC, while Forefront TMG provides load balancing for clients accessing from the Internet and fully supports scenarios such as certificate-based authentication by being able to delegate to specific CAS within the farm.

    If you require POP3 and/or IMAP4 access from the Internet, this would be the only scenario where using Forefront TMG to publish the internal Virtual IP of the load balancer would be recommended, as Forefront TMG is unable to publish those protocols to a web farm, and using the VIP as a target gives additional availability to the solution.

    The final presented solution is to simply place the load balancer at the network edge (assuming the device is designed for and supported in this scenario), and use it to publish any Exchange resources that you do not wish to pre-authenticate or for which you require KCD.

    Figure 5 - Split the Edge Connections between Devices as Needed

    This solution allows Forefront TMG to provide pre-authentication to Outlook Anywhere users and perform KCD back to CAS (and could easily allow certificate-based authentication with KCD for ActiveSync users), and enables the load balancer itself to be used for OWA and EAS access. There is no perimeter pre-authentication for these clients, which is a trade-off, but this allows the full range of load balancer affinity types to be used for these clients and avoids the routing complexities previously discussed. It is an unusual configuration, requiring pre-authentication for Outlook Anywhere but not for OWA; some customers may choose this route because they are using custom security software on their CAS to provide strong authentication, and that software cannot be installed on Forefront TMG.

    The choices available to you are summarized below.

    Figure 1
    Network edge: Forefront TMG publishes the hardware load balancer VIP
    Internal clients: Hardware load balancer VIP
    Advantages:
    • Simple configuration
    • Ability to leverage multiple affinity types
    Disadvantages:
    • HW load balancer requires redundancy to avoid being a single point of failure and marked as down by Forefront TMG
    • Network routing can be a problem
    • Cannot be used if Certificate Based Authentication or NTLM Outlook Anywhere with pre-authentication is required

    Figure 2
    Network edge: Forefront TMG balances load over each and every CAS
    Internal clients: Route all traffic to TMG
    Advantages:
    • Removes the need for the additional cost of the load balancer
    Disadvantages:
    • Cannot provide resilient and load balanced RPC Client Access to internal Outlook clients
    • Likely poor load balancing for internal clients due to small source IP pool
    • Network configuration may make this difficult to implement

    Figure 3
    Network edge: Load balancer balances load over each and every CAS
    Internal clients: Hardware load balancer VIP
    Advantages:
    • Removes the need for the additional cost of Forefront TMG
    • Allows most affinity methods to be used
    Disadvantages:
    • No ability to pre-auth traffic entering via the load balancer
    • Network configuration may make this difficult to implement
    • Scenarios involving KCD require SSL termination and certificate validation to be done on CAS

    Figure 4
    Network edge: Forefront TMG balances load over each and every CAS
    Internal clients: Hardware load balancer VIP
    Advantages:
    • Certificate Based Authentication or NTLM Outlook Anywhere with pre-authentication is possible
    • Hardware load balancer can balance Outlook RPC traffic effectively
    • Ability to leverage additional affinity types
    Disadvantages:
    • Two load balancing pools to manage

    Figure 5
    Network edge: Forefront TMG balances load over each and every CAS, and the load balancer balances load over each and every CAS
    Internal clients: Hardware load balancer VIP
    Advantages:
    • Certificate Based Authentication or NTLM Outlook Anywhere with pre-authentication is possible
    • Ability to use multiple affinity types
    Disadvantages:
    • Multiple external namespaces required
    • No ability to pre-auth traffic entering via the load balancer

    Conclusion

    The decision as to which of these solutions you should deploy will come as a result of understanding the scenarios you wish to support, and considering the network implications that can impact routing and load balancer effectiveness. It is important to understand that if you require pre-authentication of traffic in the perimeter network, then you need to deploy Forefront TMG, but if you don’t, you could simply use the load balancer to do load balancing for internal and external users. If you realize that you need to load balance RPC Client Access traffic, you need a hardware or software load balancer, as you cannot do that with Forefront TMG. If you ultimately want the best of both worlds, you may decide to deploy both, and use them for different purposes. As long as you carefully plan your requirements, you should be able to make the decision based on your needs, but always remember to keep one eye on the future. Things can change!
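    The decision flow in this conclusion can be condensed into a rule-of-thumb helper. The mapping to figure numbers follows the summary table above; it is a simplification for illustration, not official guidance, and the function name is made up:

```python
# Rule-of-thumb topology picker condensing the conclusion above.
# The figure numbers follow the summary table; this is a simplification,
# not official guidance.

def pick_topology(need_preauth, need_rpc_lb, have_load_balancer):
    if need_preauth and have_load_balancer:
        if need_rpc_lb:
            # Figure 4: TMG publishes the CAS farm, internal clients
            # use the hardware load balancer for RPC.
            return "Figure 4"
        # Figure 1 or 5: publish the LB VIP, or split edge connections.
        return "Figure 1 or 5"
    if need_preauth:
        # Figure 2: TMG alone; remember it can only farm HTTP traffic.
        return "Figure 2"
    if have_load_balancer:
        # Figure 3: load balancer only, no perimeter pre-authentication.
        return "Figure 3"
    return "Figure 2"

print(pick_topology(need_preauth=True, need_rpc_lb=True,
                    have_load_balancer=True))  # Figure 4
```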

  • Cloud: Microsoft shares its future BPOS plans

    External Source: http://www.zdnet.com/blog/microsoft/microsoft-shares-officially-its-future-bpos-plans/6857

     

    At this week’s Worldwide Partner Conference, Microsoft officials shared with attendees their “official” roadmap for updating the company’s hosted Business Productivity Online (BPOS) suite. Company officials shared which of the features and capabilities already rolled out in the on-premises server complements of the BPOS products will be added to the Microsoft-hosted versions of those offerings.

    I’ve heard that customers of the Dedicated (i.e., non-shared/non-multitenant) versions of Microsoft’s BPOS and its point-product parts — Exchange Online, SharePoint Online, Communications Online and Live Meeting — already have some of the 2010 feature updates. But those using the “Standard” (multitenant) versions do not.

    Microsoft didn’t provide specific dates as to when they’d deliver the updates to each of its managed services, but did say the updates would happen in fiscal 2011 (which runs from July 1, 2010, to June 30, 2011). Earlier this year, the Softies said to watch for a “preview” of these BPOS updates before the end of this calendar year, and advised companies to prepare their infrastructure now for these BPOS futures.

    The roadmap slides the Softies showed at the partner conference this week look just about identical to the ones I ran earlier. In November 2009, Microsoft privately shared information about the coming 2010 features for Exchange Online, SharePoint Online and Communications Online. I ran some of this information, shared with me by sources, in various recent blog posts.

    According to Microsoft officials this week, here’s what’s coming on the Exchange Online front (from the WPC 2010 slide deck):

    clip_image001

    Here’s what’s happening with SharePoint Online:

    clip_image002


    The on-premises Office Communications Server 14 is likely to ship at the very end of this year. Communications Online users won’t be getting these new OCS “14” features for a number of months into 2011, I’d bet.

    Here’s the new slide Microsoft showed this week at WPC ‘10 on what’s coming with the next-gen Communications Online features:

    clip_image003

    What else is on tap for BPOS users? As Microsoft said last fall (in those slides I ran), the BPOS team will be phasing in the following over the coming months:

    • Single Sign On with identity federation
    • Redesigned User Interface (for the console)
    • More administration and access control
    • New markets and languages
    • Enhanced Syndication partner interface (“Syndication” is Microsoft’s program allowing mostly telco companies, but also some other partners, to private-label its BPOS services.)

    We still have no exact dates (beyond “second half of 2011”) for these new features.

    Earlier this week, Microsoft rolled out the July refresh for BPOS, which added a hosted Blackberry Administration Center and Live Meeting updates to the current BPOS offering. Coming in the near term (again, no specific dates) are Office 2010 support and enhanced PowerShell scripting, the Softies reiterated at the Worldwide Partner Conference this week.

    Microsoft showcased at the show this week a number of its partners who’ve already jumped on the BPOS bandwagon. To encourage others to start selling the suite, Microsoft announced that it will offer partners 250 BPOS seats for their own use.

    Microsoft officials have said their goal is to continue to narrow the delta between the availability of on-premises server features and the Microsoft-hosted versions of these services, going forward. Partners and customers are both clamoring for this.

  • DPM: Protocols and Ports Used by DPM

     

    • DCOM: 135/TCP, plus dynamically assigned ports. The DPM control protocol uses DCOM. DPM issues commands to the file agent by invoking DCOM calls on the agent, and the file agent responds by invoking DCOM calls on the DPM server. TCP port 135 is the DCE endpoint resolution point used by DCOM. By default, DCOM assigns ports dynamically from the TCP port range of 1024 through 65535; you can, however, configure this range by using Component Services. For more information, see Using Distributed COM with Firewalls (http://go.microsoft.com/fwlink/?LinkId=46088).

    • TCP: 3148/TCP and 3149/TCP. The DPM data channel is based on TCP. Both DPM and the file server initiate connections to enable DPM operations such as synchronization and recovery. DPM communicates with the DPM Agent Coordinator on port 3148 and with the file agent on port 3149.

    • DNS: 53/UDP. Used between DPM and the domain controller, and between the file server and the domain controller, for host name resolution.

    • Kerberos: 88/UDP and 88/TCP. Used between DPM and the domain controller, and between the file server and the domain controller, for authentication of the connection endpoint.

    • LDAP: 389/TCP and 389/UDP. Used between DPM and the domain controller for Active Directory queries.

    • NetBIOS: 137/UDP, 138/UDP, and 139/TCP. Used between DPM and the file server, between DPM and the domain controller, and between the file server and the domain controller, for miscellaneous operations.
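    As a quick sanity check, the fixed DPM TCP ports can be probed from a protected computer with a small PowerShell sketch (the server name DPM01 is a placeholder, not from the article; dynamically assigned DCOM ports cannot be checked this way since they are allocated at run time):

```powershell
# Check the fixed DPM TCP ports from a protected computer.
# "DPM01" is a placeholder; substitute your DPM server name.
$dpmServer = "DPM01"
$ports = 135, 3148, 3149   # DCOM endpoint mapper, Agent Coordinator, file agent

foreach ($port in $ports) {
    $client = New-Object System.Net.Sockets.TcpClient
    try {
        $client.Connect($dpmServer, $port)
        Write-Host "Port ${port}: reachable"
    } catch {
        Write-Host "Port ${port}: blocked or closed"
    } finally {
        $client.Close()
    }
}
```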

  • SCOM: How to create a report for all closed alerts

     

    External sources with good samples:

    http://blogs.technet.com/b/kevinholman/archive/2008/07/21/auditing-on-alerts-from-the-data-warehouse.aspx

    http://www.systemcentercentral.com/PackCatalog/PackCatalogDetails/tabid/145/IndexId/69990/Default.aspx

    http://blogs.technet.com/b/jimmyharper/archive/2010/04/23/sample-queries-and-reports-from-my-mms-session.aspx

  • SCOM: How to monitor new line entries in a log or text file using OpsMgr 2007

    This was originally posted on the SCCM and OpsMgr Arabic blog.  If you ever have the need to monitor a text or log file for new entries then this should do the trick.

     

    You may wish to monitor a log/text file and have an alert generated for any new entry, no matter what it is. Usually we want an alert to be generated when a specific word or expression is logged, but in this post I will shed light on monitoring a log file and generating an alert when any new entry is written to the log/text file.

    • Open the OpsMgr Console and go to Authoring —> Management Pack Objects —> Rules
    • Click the “Scope“ button in the toolbar to narrow down the selection.
    • I assume the file is located on a Windows computer, so we will search for “Windows Computer”
    • Select Windows Computer and then click OK

    clip_image001

    • Right-click Rules and select “Create a new rule”
    • Expand Alert Generating Rules —> Event Based —> Generic Text Log (Alert)

    clip_image001[5]

    • In the above window, click New to create a new management pack to save this new rule in. In my case I have created a management pack called “TestRuleMP”
    • In the next screen, give a meaningful name to this rule.
    • The Rule Target should be Windows Computer
    • Make sure to uncheck the “Rule is enabled” option before you proceed

    clip_image001[7]

    • In the next screen, provide the pattern of the file. If the file name is fixed and does not change every time the file is created, you may give the exact name of the log, such as LogName.txt. If the log file name changes every time it is created (LogFileName01, LogFileName02, etc.), you may put the log file name as LogFileName*.txt. Then click Next.

    clip_image001[11]

    • Now it is time to set the event expression that generates the alert.
    • Click Insert to add a new line.
    • In the parameter name, write: Params/Param[1]
    • For the operator, select “Match wildcard”.
    • In the value, put “?” (without quotes)

    clip_image001[13]

    • Proceed to configure the alert as follows:

    A new entry was detected in c:\log\bader.log

    Logfile Directory : $Data/EventData/DataItem/LogFileDirectory$
    Logfile name: $Data/EventData/DataItem/LogFileName$
    String:  $Data/EventData/DataItem/Params/Param[1]$

    clip_image001[15]

    • Once you are done with editing the alert, click create.
    • We have not enabled the rule yet, so we need to create an override that enables it only for the specific computer on which the log file is located

    clip_image002

    • To reproduce the alert, I opened the log file, typed a new line into it, and saved the changes. See the screenshot below

    clip_image001[17]

    • Now the alert is generated

    clip_image001[19]

    You can notice that the alert description includes the new entry which was logged in the log file.
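    If you prefer to trigger the rule from a script rather than editing the file by hand, appending a line to the monitored log is enough (the path follows the example above):

```powershell
# Append a test line to the monitored log; the rule should raise an alert for it.
Add-Content -Path "C:\log\bader.log" -Value "test entry $(Get-Date)"
```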

  • SCOM: How to troubleshoot gray agent states in System Center Operations Manager 2007

    Just an FYI that we just published a new Knowledge Base article that describes how to troubleshoot issues where an agent, management server or gateway in System Center Operations Manager 2007 or System Center Essentials 2007 and 2010 is in a gray state.  Or grey state, depending on your location and preferred spelling.

    The article is way too long to post here but below are some of the scenarios it covers:

    Scenario 1:
    There are only a few agents impacted and they report to different management servers. Agents stay in this state all the time. Clearing the agent cache helps resolve the problem temporarily; however, the problem comes back after a few days.

    Scenario 2:
    There are only a few agents impacted and they report to different management servers. Agents stay in this state all the time. Clearing the agent cache doesn’t help.

    Scenario 3:
    All the agents reporting to one particular management server/gateway are grayed out.

    Scenario 4:
    All the agents reporting to one particular management server flip-flop from gray to healthy and healthy to gray state intermittently.

    Scenario 5:
    All the agents reporting in the environment keep flip flopping from gray to healthy and healthy to gray state intermittently.

    and much, much more.  If you ever find yourself having to troubleshoot OpsMgr 2007 or SCE then this article is a must read. 

    Check out the following new KB for all the details:  KB2288515 - Troubleshooting gray agent states in System Center Operations Manager 2007

  • Exchange: Moving mailboxes Exchange 2007 vs. Exchange 2010

     

    External Source: http://msexchangeteam.com/archive/2010/07/19/455550.aspx

    Introduction

    Exchange Server 2007 used the Move-Mailbox cmdlet to move mailboxes between mailbox stores. Move-Mailbox makes an RPC connection to both the source and target mailbox databases and then starts the move process. Exchange Server 2010 uses the Mailbox Replication Service (MRS), a service that runs on all Exchange 2010 Client Access Servers, for mailbox move operations. MRS handles all mailbox moves, including offline and online moves.

    Exchange 2007 vs. 2010

    This is a high-level comparison of Exchange Server 2007 vs. Exchange Server 2010. There is a radical change in mailbox moves from the Exchange 2007 to the Exchange 2010 environment. In Exchange 2010, we implemented online mailbox moves.

    When you move a mailbox in previous versions of Exchange Server, the source mailbox gets locked and then the content is copied to the new mailbox on the target mailbox database. After the content is copied, the new mailbox is unlocked and the old one is deleted. This results in downtime for the user for the duration of the mailbox move operation. As long as you had smaller mailboxes, the downtime is not a big deal since this happens fairly quickly. With larger mailboxes (check out the Large Mailbox Vision whitepaper as well as Astrid's blog on the Top 10 Exchange Storage Myths), the downtime can be unacceptable.

    In Exchange 2010, we implemented online mailbox moves and we also implemented changes to the Store in Exchange 2007 SP2 so that when you move and upgrade from Exchange 2007 to 2010 you will benefit from online mailbox moves.

    The following terms are used in Exchange Server 2010 move operation:

    • Online mailbox move - Move mailbox operation wherein users are able to access their mailbox almost for the entire time of the operation except for the last part. Exchange Server 2010 uses online mailbox move.
    • Offline mailbox move is a move operation wherein users cannot access their mailboxes during the move.
    • Local Move is a move operation wherein the source and target mailboxes exist in the same forest and organization.
    • Remote Move is a move operation wherein the source and target are in different forests and organizations.
    • Push is a move operation where the target is either an Exchange Server 2007 or Exchange Server 2003 server and the source is an Exchange 2010 server in a local move. A push can also occur between Exchange 2010 servers where the source and the target are in different forests.
    • Pull is a move operation where the target is an Exchange 2010 server.

    Which Exchange versions support what?

    The following table outlines the type of mailbox moves supported by different Exchange Server versions.

    Mailbox Moves and Personal Archives

    Exchange Server 2010 introduced Personal Archives. An archive mailbox appears as an additional mailbox in Outlook 2010 or OWA. In Exchange 2010 RTM, an archive mailbox is moved along with the mailbox if one exists. Since archive mailboxes exist only in Exchange Server 2010, mailbox moves to legacy Exchange servers will fail if the mailbox being moved has an archive mailbox.

    Exchange 2010 cmdlets for move mailbox

    Mailbox moves can be performed using the EMC or the Shell (EMS). You can use the following cmdlets from the Shell to manage mailbox move requests. It's worth noting that suspending and resuming a move request can only be done through the Shell. Just like any other operation in Exchange 2010, you need certain permissions to run the commands. The following table shows the RBAC permissions required for each cmdlet.

    Note: RBAC permission can be granted to administrators either by assignment of a management role or membership in a built-in role group.
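    As a rough sketch of the Shell workflow described above (the mailbox identity and database name are examples, not from the original article):

```powershell
# Create an online move request to another database in the same organization.
New-MoveRequest -Identity "john.doe@contoso.com" -TargetDatabase "DB02"

# Check the progress of all move requests.
Get-MoveRequest | Get-MoveRequestStatistics

# Suspending and resuming can only be done through the Shell.
Suspend-MoveRequest -Identity "john.doe@contoso.com"
Resume-MoveRequest -Identity "john.doe@contoso.com"
```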

    The Microsoft Exchange Mailbox Replication Service

    The Microsoft Exchange Mailbox Replication Service (MRS) is a Windows service and depends only on the Microsoft Exchange Active Directory Topology service and the Net.TCP Port Sharing service. There is a prerequisite check in Exchange Server 2010 setup that verifies the Net.TCP Port Sharing service is set to Automatic; if it's not, setup fails on CAS servers. MRS is built on the Windows Communication Foundation (WCF), part of the .NET Framework 3.0 stack. MRS uses a configuration file, MSExchangeMailboxReplication.exe.config, on every Client Access Server for its configuration information. By default it's located in the C:\Program Files\Microsoft Exchange Server\V14\Bin folder. The process associated with the service is MSExchangeMailboxReplication.exe.

    How the Mailbox Replication Service works

    When a move mailbox request is issued, the command creates a message in the system attendant mailbox of the target database. From there, MRS picks up the request and makes a MAPI.Net connection to both the source and target databases. After a successful MAPI connection is made, MRS creates a mailbox in the target database and starts incremental synchronization of data. When it reaches the point where it is about to complete the move, it locks the mailbox, updates Active Directory attributes, unlocks the mailbox, and deletes the source mailbox.

    Here is the flow of the move operation in detail for online move.

    Local Online Mailbox Move:

    1. Administrator creates the move request using the New-MoveRequest command.

    2. The New-MoveRequest command makes the following checks for the mailbox being moved:

    o Gets the target and source mailbox server versions

    o Checks the database versions to verify they are supported Exchange versions

    o Determines whether it is a push or pull operation based on the Exchange version information

    o Checks for an archive mailbox; if one is found, adds it to the move request. Also checks to make sure you are not moving a mailbox with an archive to a legacy Exchange system

    o Checks the rule limit for legacy Exchange servers

    o Checks mailbox quotas

    3. The New-MoveRequest command creates a request message in the target database's System Mailbox as a special message.

    4. The following attributes are added to a user account for the mailbox in Active Directory. These attributes are used to store information about moving the mailbox and some are updated throughout the move.

    o msExchMailboxMoveBatchName

    o msExchMailboxMoveFlags

    o msExchMailboxMoveRemoteHostName

    o msExchMailboxMoveSourceMDBLink

    o msExchMailboxMoveStatus

    o msExchMailboxMoveTargetMDBLink

    These attributes will not be removed after the move request is completed unless a Remove-MoveRequest is run. If these attributes are not removed then another New-MoveRequest cannot be issued for the same mailbox.
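    To clear those attributes once a move has finished, the completed request has to be removed explicitly; a common pattern (a sketch, not taken from the article) is:

```powershell
# Remove all completed move requests so the mailboxes can be moved again later.
Get-MoveRequest -MoveStatus Completed | Remove-MoveRequest -Confirm:$false
```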

    5. The New-MoveRequest command then "tickles" an MRS. A tickle is an operation where the command contacts an MRS directly to alert it to a new move request that is ready for pick up and processing. Which MRS is contacted is chosen at random from the CAS servers in the same AD site as the mailbox server where the target mailbox database is located.

    6. Mailbox Replication Service scans the mailbox databases for new interesting events. When it discovers the new interesting event, it then logs into the System Mailbox and gets information from the Move request messages.

    7. The MRS instance that owns the move of the mailbox then updates the message in the System Mailbox on the Mailbox server.

    8. The Mailbox Replication Service will then update the msExchMailboxMoveStatus attribute on the mailbox object in Active Directory.

    9. It will then log into the source and target mailboxes using MAPI.Net and start the synchronization of the user data. This type of synchronization is also referred to as a heavy pipe operation.

    10. Once the initial synchronization is complete, almost all of the mailbox data will be synchronized to the target mailbox. The Mailbox Replication Service will then lock the mailbox.

    11. The Mailbox Replication Service will then complete the synchronization of the data, including any new or changed items. This last pass is typically not a full synchronization; instead, it moves only changed and new items.

    12. The Mailbox Replication Service will then update the following attributes in Active Directory on the mailbox account to point to the new mailbox.

    o HomeMDB

    o HomeMTA

    o HomeServer

    o MSExchangeVersion (Set the appropriate Exchange Version)

    o Proxy Address (Typically changed in Cross forest moves)

    13. The move history is then written to the user's mailbox.

    In the following screenshot, we can see the location of the MailboxMoveHistory using MFCMAPI.

    Contents of the MailboxMoveHistory folder:

    14. The Mailbox Replication Service does not remove a move-mailbox request message from the System Mailbox. The message is removed when the move request is removed by the Remove-MoveRequest cmdlet.

    15. The Mailbox Replication Service then removes the source mailbox from the source database. It then changes the move status on msExchMailboxMoveStatus and msExchMailboxMoveFlags attributes to indicate that the move completed on the mailbox in Active Directory.

    16. Once the status has been changed to "Completed" the mailbox can be accessed again.

    An offline move works similarly to the steps listed above, with the exception that it locks the mailbox so no one can access it during the move.

    Mailbox Replication Service Queue

    Each MRS keeps track of all move requests in its Active Directory site. It does this by scanning all System Mailboxes in the site. As mentioned earlier, each move request command creates a message in the System Mailbox of the target database. These messages are saved in the following folders:

    • MailboxReplicationService Move Jobs
    • MailboxReplicationService Move Reports
    • MailboxReplicationService SyncStates

    The messages contain queue information about the move request. This queue information can be accessed using the Get-MoveRequestStatistics command with the MoveRequestQueue parameter.
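    For example, the queue for a particular database can be inspected like this (the database name is a placeholder):

```powershell
# Show queued move requests stored in the System Mailbox of database "DB02".
Get-MoveRequestStatistics -MoveRequestQueue "DB02"
```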

    Below is a screenshot of a System Mailbox opened using MFCMAPI that has move request messages. Shown are the 3 different folders that the MRS uses to store information about move requests.

    System Mailbox in MFCMAPI

    The MailboxReplicationService Move Jobs and MailboxReplicationService Move Reports folders contain information about the move request stored as messages within the folders. Each message in the folder represents a single mailbox move represented by the msExchMailboxGUID attribute of the mailbox enabled account. The below screenshot shows the contents of the MailboxReplicationService Move Reports folder.

    Mailbox moves and database failures

    By default, the Mailbox Replication Service waits 30 seconds before attempting to reconnect to a database if it encounters transient problems during a move operation. It will try to reconnect every 30 seconds until a successful connection, up to 60 retries. If it cannot connect after 60 retries, it puts the move request into a failed state. The default retry interval and maximum number of retries can be changed by editing the MaxRetries and RetryDelay values in the MSExchangeMailboxReplication.exe.config file.

    Mailbox moves and High Availability

    Move mailbox operations that involve databases in a DAG environment are different than move mailbox operations on standalone databases.

    The active database, the passive database, and log shipping are factors that affect mailbox moves in a DAG environment. MRS checks with the Active Manager component of the Exchange Replication service before, during, and right before completing a move request to see if the active copy is up, if log shipping is not lagging behind, and if the passive copies are keeping up. The action taken depends on a property of the database called DataMoveReplicationConstraint. If this value is not set, the move operation assumes the SecondCopy option if the database has a copy; that is, the move operation does not take log shipping and the passive copies into consideration. If this value is set, the action depends on what the actual value is.

    Possible values for the DataMoveReplicationConstraint property are:

    • None - The move operation treats the move just as it treats move mailbox operation on a standalone database. This is the default if the database is not replicated.
    • SecondCopy - If the database is replicated then at least one passive mailbox database copy must have the changes synchronized. This is the default value.
    • SecondDatacenter - If the database is replicated to two AD sites then at least one passive mailbox database copy in another AD site must have the changes replicated.
    • AllDatacenters - If the database is replicated to multiple AD sites then at least one passive mailbox database copy in each AD site must have the changes replicated.
    • AllCopies - If the database is replicated then all passive mailbox database copies must have the changes replicated.

    The DataMoveReplicationConstraint property can be set by running Set-MailboxDatabase with the DataMoveReplicationConstraint parameter.
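    For example, to require at least one in-sync passive copy before a move completes (the database name is a placeholder):

```powershell
# Require one up-to-date passive copy before a move against this database completes.
Set-MailboxDatabase "DB02" -DataMoveReplicationConstraint SecondCopy
```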

    Mailbox Replication Service and High Availability Configuration

    In addition to the DataMoveReplicationConstraint property of a database, the following two settings in the MSExchangeMailboxReplication.exe.config file also control the behavior of mailbox moves that involve a DAG.

    • DataGuaranteeCheckPeriod - Controls how often the MRS checks with the active manager. The default value is 5 minutes with a minimum value of 30 seconds and maximum of 2 hours.
    • EnableDataGuaranteeCheck - When enabled the MRS checks with the active manager on the status of the mailbox databases. The default value is True.

    Note: In order for the Mailbox Replication Service to check with Active Manager whether the mailbox database is healthy and not behind in processing log files, both EnableDataGuaranteeCheck in the MSExchangeMailboxReplication.exe.config file and DataMoveReplicationConstraint on the mailbox database need to be enabled. If they are, the MRS will throttle back the data transfer when Active Manager reports that database replication is not in a healthy state.

    Database Availability Group Failover

    During a move operation, if the active database becomes unavailable, MRS contacts the Active Manager to see which copy will take over. MRS then logs on to the mailbox on the new database and continues the move from where it left off. This is provided that the DataMoveReplicationConstraint property is set to something other than None and the database was not down for longer than 30 minutes. (Or there is another copy satisfying the constraint: if the database has, say, 3 copies, it's entirely possible that MRS will just continue working after a failover even if the original server is down.) If DataMoveReplicationConstraint is set to None, MRS will try to connect to the same database every 30 seconds for the next 30 minutes; the 30 minutes comes from the maximum of 60 retries every 30 seconds. Of course, this value can be changed in the MSExchangeMailboxReplication.exe.config file.

    Cross Forest Moves

    Exchange 2010 has the ability to move mailboxes between Active Directory forests. The MRS is responsible for moving mailboxes to an Exchange 2010 Mailbox server. When it comes to moving mailboxes from one Exchange forest to an Exchange 2010 forest, there are two move types:

    • Remote - An Exchange 2010 Client Access (CAS) server is present in the source forest
    • Remote Legacy - There is no Exchange 2010 CAS server in the source forest

    When there is no Exchange 2010 CAS server in an Exchange forest that is the source of a mailbox move, the MRS in the Exchange 2010 target forest is designed to process the move request in a manner similar to previous versions of Exchange. In this case the MRS communicates directly with the Active Directory (AD) Directory Service in the source forest, as well as the mailbox server where the source mailbox is located.

    When the source forest is also an Exchange 2010 forest, the MRS is designed to move mailboxes between the forests using a new feature that simplifies and improves the process.

    In previous versions of Exchange, in order to move mailboxes between different Active Directory forests, the administrator would have to allow direct MAPI access to servers, configure trusts, and give other administrators full access to each other's Exchange organization. This way of moving mailboxes was not going to be effective for moving mailboxes to Exchange Online and between forests in Exchange 2010.

    To overcome these issues, Exchange 2010 introduces the Mailbox Replication Proxy Service (MRSProxy). The Mailbox Replication Proxy service works in conjunction with the MRS to facilitate the required communication between the source and target servers in each Exchange 2010 forest. Each CAS server that has an instance of the MRS also has an instance of the MRSProxy service as part of the implementation. Essentially, the Mailbox Replication Proxy Service is a web service for the Mailbox Replication Service; it is part of the Exchange Web Services (EWS). The Mailbox Replication Proxy service proxies MAPI, ExRPCAdmin, and LDAP requests between local and remote forests when moving mailboxes. These are HTTP requests that the Mailbox Replication Proxy Service passes on to the Mailbox Replication Service. The Mailbox Replication Service then communicates with the mailbox servers and sends the data back through the Mailbox Replication Proxy Service, which in turn communicates back to the Mailbox Replication Proxy server that initiated the request.
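    A remote (cross-forest) pull move initiated from the target Exchange 2010 forest looks roughly like this; the host name, identity, database, and domain are placeholders, not from the article:

```powershell
# Credentials of an administrator in the source forest.
$cred = Get-Credential

# Pull the mailbox through the source forest's Exchange 2010 CAS (MRSProxy endpoint).
New-MoveRequest -Identity "john.doe" -Remote -RemoteHostName "cas.source.contoso.com" `
    -RemoteCredential $cred -TargetDatabase "DB02" `
    -TargetDeliveryDomain "source.contoso.com"
```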

  • SCOM: Samples of Monitoring Hardware with SCOM

     

    You can find a few short movies from Dell on YouTube:

    http://www.youtube.com/watch?v=4KbxXmywHh0,

    http://www.youtube.com/watch?v=s4I14WELX98

     

    for EqualLogic from Dell,

    http://www.youtube.com/watch?v=53QePP5d9rw&feature=PlayList&p=BCBB0864D315C13F&playnext_from=PL&playnext=1&index=10

     

    and PRO together with SCVMM. More info on the Dell website:

    http://www.dell.com/content/topics/global.aspx/sitelets/solutions/management/microsoft_sms?c=us&cs=555&l=en&s=biz.

    You can find information about HP Insight Control here:

    http://h18013.www1.hp.com/products/servers/management/integration-msc.html. There are short videos, but without a real product demo, if I remember correctly.

     

    And at the end, a movie that shows how our solutions help with real-life problems :)

     http://www.youtube.com/watch?v=U0_WsgR6hPw&feature=related.

  • Architecture: Where do you draw the lines between business and IT ownership of data and information?

    External Source: http://blogs.forrester.com/boris_evelson/10-07-16-where_do_you_draw_lines_between_business_and_it_ownership_data_and_information

    There are many questions on this subject and it often turns into almost a religious debate.
    Let's throw some structure into it. Here's a decision-to-raw-data stack.

    1. Decisions
    2. Strategy
    3. Policies
    4. Objectives (e.g. clear understanding of what is driving revenue performance)
    5. Goals (e.g. achieve x% income growth)
    6. Calculated metrics (any combination, variation of the standard metrics or KPIs)
    7. KPIs (e.g. profitability, liquidity, shareholder value)
    8. KPMs (e.g. enterprise value, trailing/forward price/earnings)
    9. Metrics (e.g. fee income growth %, non fee income growth %)
    10. Dimensions (e.g. customers, customer segments, products, time, region)
    11. Pre-calculated attributes (standard, cross enterprise metrics, KPIs and KPMs)
    12. Pre-built aggregates (used to speed up reports and queries)
    13. Analytical data (DW, DM)
    14. Operational data (ERP, CRM, financials, HR)

     

    Obviously, it's never a clear-cut, binary decision, but in my humble opinion:

    1-6 should emphasize business ownership

    10-14 should emphasize IT ownership

    7-9 is where it gets murky, and ownership depends on whether the metric/KPI/KPM is:

        1. standard and fixed,
        2. fluid and changes frequently, or
        3. different by product, line of business, or region
  • Virtualization: Upgrading Hyper-V to R2 SP1 Beta

     

    There are three steps and you should perform them in this order:

    1. Ensure Virtual Machines are ready for the update
    2. Update the Host
    3. Update the Guest Integration Services

    Step 1: Ensure Virtual Machines are ready for the update. Completely shut down (not save state) your virtual machine and merge snapshots. Saved States are not compatible between different Hyper-V versions.

    Step 2: Update the host. Install Service Pack 1 on the parent partition. Restart when prompted.

    Step 3: Update the Guest Integration Services

    1. VMs running Windows 7 and Windows Server 2008 R2: Update the guest operating system with SP1. Updating the guest OS with Service Pack 1 will, in turn, upgrade the Integration Services.

    2. VMs not running Windows 7 SP1/Server 2008 R2 SP1 (e.g. Windows XP/Vista/7 RTM/2000/2003/2008/2008 R2 RTM): For these operating systems you will need to upgrade their Integration Services manually. To do so:

    a. Start the virtual machine, connect to it, and open the Action menu.

    b. Select the bottom menu option, Insert Integration Services Setup Disk. You will then be prompted to run the Integration Services Setup; do so, and restart when prompted.
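    The pre-upgrade check in Step 1 boils down to a simple rule: a VM is safe to carry across the host upgrade only when it is fully shut down, with no saved state and no snapshots left to merge. A throwaway sketch of that rule (the state names and helper function are hypothetical, not a Hyper-V API):

```python
def vm_ready_for_upgrade(state, has_saved_state, snapshot_count):
    """Return True only if the VM can safely survive the host upgrade.

    state: the VM's power state, e.g. "off", "running", "saved".
    has_saved_state: True if a saved-state file still exists.
    snapshot_count: number of snapshots still to be merged.
    """
    # Saved states are not compatible across Hyper-V versions, so anything
    # other than a clean shutdown with merged snapshots blocks the upgrade.
    return state == "off" and not has_saved_state and snapshot_count == 0

print(vm_ready_for_upgrade("off", False, 0))    # clean shutdown: safe
print(vm_ready_for_upgrade("saved", True, 0))   # saved state: shut down first
print(vm_ready_for_upgrade("off", False, 2))    # pending snapshots: merge first
```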


  • DPM: Upgrading to DPM 2010 from DPM 2010 Evaluation

     

    Upgrading to DPM 2010 from DPM 2010 Eval When Using a Local Instance of SQL Server

    Upgrading to DPM 2010 from DPM 2010 Eval When Using a Remote Instance of SQL Server

  • Deployment: User State Management Resources

     

  • SCOM: Cumulative Update 1 for Microsoft System Center Service Manager 2010 is available for download

    Cumulative Update 1 for Microsoft System Center Service Manager 2010 is available for download at the Microsoft Download website. This cumulative update is a rollup of fixes for System Center Service Manager 2010 and is also a prerequisite for Service Manager Authoring Tool 1.0 Release Candidate. Additionally, it contains a fix for a problem in which out-of-box workflows are run multiple times.

    This cumulative update applies to the following Service Manager components:

    • Service Manager Management Server (SM Server)
    • Data Warehouse Management Server (DW Server)
    • Self-Service Portal
    • Service Manager Console

    You can apply this cumulative update to these components. However, if the Self-Service Portal or the Service Manager Console is running on a system that has an English language locale, you do not have to apply this cumulative update on that system. The fixes for the Self-Service Portal and for the Service Manager Console are only for systems that have non-English language locales.
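    The applicability rule above reduces to a small decision: the update always applies to the SM and DW servers, while for the Self-Service Portal and the console it only matters on non-English locales. A minimal illustration (the component names are shorthand for this sketch, not product identifiers):

```python
# Components Cumulative Update 1 always applies to
ALWAYS_UPDATE = {"sm_server", "dw_server"}
# Components whose fixes only matter on non-English locales
LOCALE_DEPENDENT = {"self_service_portal", "console"}

def needs_cu1(component, locale):
    """Decide whether Cumulative Update 1 must be applied to a component."""
    if component in ALWAYS_UPDATE:
        return True
    if component in LOCALE_DEPENDENT:
        # English-locale systems do not need the portal/console fixes
        return not locale.lower().startswith("en")
    return False

print(needs_cu1("sm_server", "en-US"))            # always required
print(needs_cu1("console", "en-US"))              # optional on English locales
print(needs_cu1("self_service_portal", "de-DE"))  # required on non-English
```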

    For all the details and a download link see the following Knowledge Base article:

    KB983572 - Description of Cumulative Update 1 for System Center Service Manager 2010

    Download - http://www.microsoft.com/downloads/details.aspx?FamilyID=1f864b54-fd95-4bb7-98c5-e66183875714&displaylang=en

  • Windows: RemoteFX and Dynamic Memory in R2 Sp1 (Beta) Demo

     

    John Savill has put together a great 20-minute demo showing how (and why) to use Hyper-V Dynamic Memory and RemoteFX (both available now in Windows Server 2008 R2 SP1 Beta). Click the image to play the video.

    [video thumbnail: Dynamic Memory and RemoteFX demo]

    At around the 15-minute mark, check out Disney World in 3D over RFX:

    [screenshot: Disney World in 3D over RemoteFX]

    And about 2 minutes later, check out the Halo 2 over RFX action.

    [screenshot: Halo 2 over RemoteFX]

  • Windows: Beta testing Microsoft RemoteFX in Service Pack 1

    http://blogs.msdn.com/b/rds/archive/2010/07/13/beta-testing-microsoft-remotefx-in-service-pack-1.aspx

     

    All the guides on the post:

    The following list includes all the new documents available with the SP1 Beta:

    • Step-by-step guides:
    • Overview guides:

     

    http://blogs.msdn.com/b/rds/archive/2010/07/08/more-partner-momentum-around-microsoft-remotefx-in-windows-server-2008-r2-sp1-beta.aspx

    AMD

    http://blogs.amd.com/work/2010/07/12/microsoft_remotefx/

    NVidia

    http://blogs.nvidia.com/ntersect/2010/07/nvidia-and-microsoft-enhancing-the-virtual-desktop-user-experience-with-microsoft-remotefx.html

  • WIN 7: What is new in Service Pack 1 for Windows 7

     

    Service Pack 1 for Windows 7 represents Microsoft’s continuing commitment to quality. While many of the updates contained in SP1 are available as individual downloads, the integration of these updates in SP1 enhances the ease of deployment for IT administrators. These enhancements are typically made available in the form of regular updates delivered via Windows Update and, in some cases, the Microsoft Download Center. All updates are then rolled-up, along with additional enhancements, into a single package called a Service Pack.



    Various individual files and components have been updated. Also, the language-neutral design of Windows necessitates that the service pack be able to update any possible combination of the basic languages supported by Windows 7 with a single installer, so language files for the 36 basic languages will be included in the standalone installer at the final release. The beta will only be available in 5 languages. Here are the notable changes in Service Pack 1 for Windows 7.

    • Additional support for communication with third-party federation services - Additional support has been added to allow Windows 7 clients to effectively communicate with third-party identity federation services (those supporting the WS-Federation passive profile protocol). This change enhances platform interoperability, and improves the ability to communicate identity and authentication information between organizations.
    • Improved HDMI audio device performance - A small percentage of users have reported issues in which the connection between computers running Windows 7 and HDMI audio devices can be lost after system reboots. Updates have been incorporated into SP1 to ensure that connections between Windows 7 computers and HDMI audio devices are consistently maintained.
    • Corrected behavior when printing mixed-orientation XPS documents - Prior to the release of SP1, some customers have reported difficulty when printing mixed-orientation XPS documents (documents containing pages in both portrait and landscape orientation) using the XPS Viewer, resulting in all pages being printed entirely in either portrait or landscape mode. This issue has been addressed in SP1, allowing users to correctly print mixed-orientation documents using the XPS Viewer.
    • Change to behavior of “Restore previous folders at logon” functionality - SP1 changes the behavior of the “Restore previous folders at logon” function available in the Folder Options Explorer dialog. Prior to SP1, previous folders would be restored in a cascaded position based on the location of the most recently active folder. That behavior changes in SP1 so that all folders are restored to their previous positions.
    • Enhanced support for additional identities in RRAS and IPsec - Support for additional identification types has been added to the Identification field in the IKEv2 authentication protocol. This allows for a variety of additional forms of identification (such as E-mail ID or Certificate Subject) to be used when performing authentication using the IKEv2 protocol.
    • Support for Advanced Vector Extensions (AVX) - As usage models change and demand ever more computing power, processor instruction set architectures evolve to meet those demands. Advanced Vector Extensions (AVX) is a 256-bit instruction set extension for processors, designed to improve performance for floating-point-intensive applications. Support for AVX is part of SP1, allowing applications to fully utilize the new instruction set and register extensions.