Partner Technical Services Blog

A worldwide group of consultants who focus on helping Microsoft Partners succeed throughout the business cycle.

  • Configuring SharePoint 2013 Forms-Based Authentication with SQLMemberShipProvider

    Post courtesy Partner Solution Consultant Priyo Lahiri


    With SharePoint 2013, a lot of partners and customers are opening up their on-premises deployments to their vendors and customers. While the configuration is very similar to SharePoint 2010, things get a little tricky when you perform a real-world deployment spanned across multiple servers. This post is an end-to-end walkthrough of setting up Forms-Based Authentication with SQLMemberShipProvider in a 3-tier SharePoint 2013 deployment.


    It would be a whole lot easier if I had a single-server environment with the same account running everything, and that account were also a Domain Admin. However, I chose a different approach, since this is most likely how your real-world deployment will be set up, and the steps are a little different when your farm is spanned across 3 servers. Here is my environment:

    WFE01 – Web server running the Microsoft SharePoint Foundation Web Application service. I am connecting to the SQL instance using an alias, which is a very smart move: if you have ever had to move your SharePoint databases across SQL Servers or decommission an aging SQL Server, you know that having a SQL alias will save you from a lot of nightmares. If you are looking for a step-by-step, see the "How to create SQL Alias for SharePoint" section below.

    APP01 – Central Admin Server. Note: this is NOT running Microsoft SharePoint Foundation Web Application and is configured to be a “True” application server. This also means that the Web Application that we create will not reside on this server.

    SQL01 – SQL Server running SQL Server 2012 with SP1

    SharePoint Server 2013 RTM and Windows Server 2012 RTM are used for this setup.

    Tools to use

    While the steps documented below can be done without these tools, they do make your life a whole lot easier.

    1. FBA Configuration Manager for SharePoint 2013 – Author and credit go to Steve Peschka. The download comes with a ReadMe file; please read it, since you need to register the WSP that comes with it.

    2. SharePoint 2013 FBA Pack – Author and credit go to Chris Coulson. Here is the documentation that will tell you how to install/activate/work with it. Not only will this make user management a breeze, it has some very useful features like password reset and self-service account management.

    NOTE: I have only tested the user management portion of the FBA Pack and didn’t have time to play with the rest of the features.

    How it’s done

    Step 1 – Create the Web Application

    In this step we will be creating the web application with Windows Authentication (Claims) and Forms Based Authentication (FBA) on the same zone. In SharePoint 2013, you can have multiple authentication providers without extending the web application. Having said that, at times you might have to extend the web application depending on your scenario. More on that in a different post, where I will show you how to use LDAPMemberShipProvider to talk to your AD.

    From Central Administration, we will create a Web Application and enable both Windows Auth and FBA. Note the names I am using: ASP.NET Membership Provider Name = SQL_Membership and ASP.NET Role Manager Name = SQL_Role. You can call them whatever you want; just ensure you use the same names everywhere.


    We will create a new App Pool and use the Web App Pool account. Make a note of this account, since you will need to give it permissions in the ASPNET database in a later step.


    Create the Web App and then the Site Collection; it doesn’t matter what template you choose. Once the Site Collection is created, visiting it will take you to the default sign-in page, where you will be asked to choose an Authentication Provider to sign in with. If you want your external users to only have the option of FBA, set this default zone to Windows Auth, extend the web application, and put FBA on the extended web app. Obviously, the URLs will then be different.

    Your sign-in page should look like this (make sure your DNS record (CNAME) points to WFE01):


    Do you want to see a custom sign in page with your company brand on it? Well, let’s defer that to a different post.

    Step 2 – Verify Tools

    Now that the web app is created, we will make sure the FBA Pack and FBA Configuration Manager are deployed as they should be. Go to Central Administration >> System Settings >> Manage Farm Solutions. Make sure fbaConfigFeature.wsp is globally deployed and visigo.sharepoint.formsbasedauthentication.wsp is deployed to your web application (see screenshot below). If visigo.sharepoint.formsbasedauthentication.wsp is not deployed, click on the WSP and deploy it to your web application.


    Login to the site collection created in the above step and activate the following feature:

    Site Settings >> Site Collection Administration >> Site Collection Features >> Form based Authentication Management


    Once the feature is activated, it should add the following to your Site Settings under Users and Permissions:


    Step 3 – Creating the SQL Database for User Management

    The first step is to create the SQL database that will hold the extranet users:

    • Browse to C:\Windows\Microsoft.NET\Framework64\v4.0.30319
    • Run aspnet_regsql.exe
    • Click Next
    • Choose Configure SQL Server for Application Services >> Click Next
    • Enter your SQL Server name, choose Windows Authentication, and type in a database name


    • Click Next twice to provision the database
    • Now we need to add the application pool account that runs the web application and give it the required permissions. In this case, the application pool account is waterfall\spweb. Perform the following steps:
      • Open up SQL Management Studio, Expand the database we created and expand Security
      • Right click Users and add a new User
      • User Type = Windows User
      • User name = choose <yourAppPoolAccountName>
      • Login name = browse and choose the login name (should be same as the app pool name above)


      • Click Owned Schemas and choose the following:
        • aspnet_Membership_FullAccess
        • aspnet_Personalization_FullAccess
        • aspnet_Profile_FullAccess
        • aspnet_Roles_FullAccess
        • aspnet_WebEvent_FullAccess


    Step 4 – Editing the web.config files

    We need to edit the following web.config files:

    • Web Application Web.config – WFE server
    • STS Application web.config – WFE server and Application Server
    • Central Admin web.config – CA Server
    • If you have more WFEs and App Servers, you need to edit them as well. A lot of people put these in their machine.config file instead, so that the settings are inherited by the web.config files. I am not too keen on editing the machine.config file.

    Let’s log in to our WFE server and fire up FBAConfigMgr.exe. While you can get the code you need from here and edit web.config yourself, if you just let the tool run its course, it will create a timer job and do the task for you. In FBAConfigMgr, type in your application URL and from the sample configuration choose the following:

    • People Picker Wildcard
    • Connection String
    • Membership Provider
    • Role Provider

    Here is what the screen looks like when default values are chosen:


    We will modify the default values to reflect the following (highlighted items need modification per your environment):

    • Web Application URL -
    • People Picker Wildcard - <add key="SQL_Membership" value="%" />
    • Connection String -
      <add name="fbaSQL" connectionString="server=SQL01;database=Extranet_User_DB;Trusted_Connection=true" />
    • Membership Provider -
      <add connectionStringName="fbaSQL" applicationName="/"
      name="SQL_Membership" type="System.Web.Security.SqlMembershipProvider, System.Web,
      Version=2.0.3600.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" />
    • Role Provider -
      <add connectionStringName="fbaSQL" applicationName="/"
      name="SQL_Role" type="System.Web.Security.SqlRoleProvider, System.Web,
      Version=2.0.3600.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"/>
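    Before applying the configuration, it can be worth sanity-checking that each provider entry you edited is well-formed XML and carries the names you chose earlier. A minimal sketch (the entry below uses the names from this walkthrough, SQL_Membership and fbaSQL; substitute your own values):

```python
import xml.etree.ElementTree as ET

# One provider entry as it should appear in web.config. Malformed XML
# (a missing quote, a stray angle bracket) raises ParseError immediately.
entry = (
    '<add name="SQL_Membership" connectionStringName="fbaSQL" applicationName="/" '
    'type="System.Web.Security.SqlMembershipProvider, System.Web, '
    'Version=2.0.3600.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" />'
)

elem = ET.fromstring(entry)

# The name attribute must match the ASP.NET Membership Provider Name you
# entered when creating the web application, everywhere it appears.
assert elem.get("name") == "SQL_Membership"
assert elem.get("connectionStringName") == "fbaSQL"
print("provider entry is well-formed")
```

    A single typo in any of these files breaks the whole web application, so a cheap check like this before saving can save a lot of troubleshooting.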

    The screen should now look like this:


    It’s time to hit Apply Config. This will create a timer job to update your web.config files. Though it creates a backup, you should be proactive and take your own backup of your web application web.config and STS web.config files. See the "How to backup web.config file" and "Where is the STS web.config file?" sections below.

    Once you click Apply Config, the tool will tell you when it’s done. It might take a few minutes before you see any changes, so wait for it (you should see a new backup of your web.config file with a timestamp and _FBAConfigMgr at the end of the file name). To verify that the job is done, open the web.config for your web application and search for <membership. You should see the following:

    <<Web Application web.config file>>


    The connectionStrings section gets added to the end of the file, right above </configuration>:


    <<STS web.config file>>

    Open up the STS Web.Config and you should see the following:


    The connectionStrings section gets added to the end of the file as well, just like in the web.config of the web application.

    <<Central Administration web.config file on App Server>>

    If you go back to the application server and open up the web.config file for the Central Admin site, you will see that no changes were made there, so we will make the change manually. Create a backup of the file, then open it and find <machineKey. It should look like this:


    We will add the following (copied from the web.config file of the web application, or the code from FBAConfigMgr):

    1. Search for <machineKey and paste the following under <roleManager><providers>:
    <add connectionStringName="fbaSQL" applicationName="/" name="SQL_Role" type="System.Web.Security.SqlRoleProvider, System.Web,&#xD;&#xA; Version=2.0.3600.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" />

    2. Under <membership><providers> paste the following
    <add connectionStringName="fbaSQL" applicationName="/" name="SQL_Membership" type="System.Web.Security.SqlMembershipProvider, System.Web,&#xD;&#xA; Version=2.0.3600.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" />
    The screen should now look like this:

    3. Scroll to the end of the document and paste the following right before </configuration>

    <add name="fbaSQL" connectionString="server=SQL01;database=Extranet_User_DB;Trusted_Connection=true" />



    <<STS web.config file on App Server>>

    Just like the Central Admin web.config, make the same changes in this web.config as well. Just make sure you are pasting the information from RoleManager Providers and Membership Providers in the right place. Here is what the code looks like (you can use the code below and make changes to the highlighted areas to suit your environment):




    <add connectionStringName="fbaSQL" applicationName="/" name="SQL_Membership" type="System.Web.Security.SqlMembershipProvider, System.Web,&#xD;&#xA; Version=2.0.3600.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" />





    <add connectionStringName="fbaSQL" applicationName="/" name="SQL_Role" type="System.Web.Security.SqlRoleProvider, System.Web,&#xD;&#xA; Version=2.0.3600.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" />





    <add name="fbaSQL" connectionString="server=SQL01;database=Extranet_User_DB;Trusted_Connection=true" />


    Here is a screenshot


    Step 5 - Use FBA Pack to add and manage users

    Our configuration is done. We will now go to our site collection and use the FBA Pack to add and manage users and roles.

    Go to Site Settings and click FBA User Management >> click New User, create a dummy user, and add it to the Contributors group.


    Step 6 – Verify Forms user

    Now open IE in InPrivate mode, visit your site collection, and this time choose Forms Authentication and enter the account information you just created to log in. You’re done!


    Click on the user name and then My Settings; you will see the account information coming from the SQL membership provider.


    If you go to a document library and try to add the user there, you will see it resolves from your SQL database.



    How to create SQL Alias for SharePoint

    Follow the steps below to create a SQL Alias on all your SharePoint Servers:

    TechNet Reference:

    1. Perform this on the Application Server that is hosting Central Administration

    a. Stop all SharePoint Services

    b. Open cliconfg.exe from C:\Windows\System32\cliconfg.exe (this is the 64-bit version of the tool)

    c. Enable TCP/IP under general tab

    d. Click on Alias Tab

    e. Type Current SQL Server Name in the Alias Name field

    f. Type the current SQL Server name in the Server field (see screenshot below; in this case the SQL alias and the SQL Server name are the same)

    g. Validate SQL Alias

    i. Create a new text file on SharePoint Server and name it “TestDBConnection.udl”

    ii. Double click to open the file and enter your SQL Server Alias name

    iii. Use Windows Integrated Security

    iv. You should be able to see all your SharePoint databases when you click on “Select the database on the Server”

    h. Start all services for SharePoint Server / Reboot SharePoint Server

    i. Perform the steps above on all other SharePoint servers

    How to backup web.config file

    To back up web.config file, perform the following:

    · From IIS Manager (start >> Run > inetmgr)

    · Right click on the web site and click Explore

    · Copy the web.config file somewhere else, or into the same location with a different name
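    The same backup can be scripted. A minimal sketch that copies web.config next to itself with a timestamp in the new name (the demo runs against a throwaway file; point it at your real web.config instead):

```python
import shutil
import tempfile
import time
from pathlib import Path

def backup_web_config(config_path):
    """Copy web.config next to itself with a timestamped name."""
    src = Path(config_path)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    dest = src.with_name(f"{src.stem}-backup-{stamp}{src.suffix}")
    shutil.copy2(src, dest)  # copy2 preserves the file's timestamps
    return dest

# Demo against a throwaway file in a temp directory:
folder = Path(tempfile.mkdtemp())
cfg = folder / "web.config"
cfg.write_text("<configuration />")
backup = backup_web_config(cfg)
print(backup.name)
```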


    Where is the STS web.config file?

    · On your WFE open up IIS Manager and expand SharePoint Web Services

    · Right click on SecurityTokenServiceApplication and click Explore


  • Embedding a PowerPoint Deck on SharePoint 2010

    (Post dedicated to Nuri, Operations Manager for our delivery team in EMEA, and courtesy Sean Earp)

    With the addition of PowerPoint Web App to SharePoint 2010, you can now view and edit PowerPoint presentations directly from within your browser.  This technology has also been made available on consumer services.


    In the past, it has been difficult to embed a PowerPoint document within a webpage, requiring workarounds such as saving the presentation as pictures, PDFs, or MHT documents.  If you have a public presentation, it is now extremely easy to embed a PowerPoint deck on any web page by following the steps in the aptly named how to embed a PowerPoint presentation on a web page post.

    Unfortunately, these steps do not work if your installation of PowerPoint Web App is local.  The Share –> Embed option is simply not present on SharePoint 2010.


    So what to do if you want to embed an internal, private, or confidential PowerPoint presentation on an internal SharePoint page?  Fortunately, it is possible to embed a presentation on a webpage without posting the presentation on a broadly available public site.

    Step 1: Ensure that Office Web Apps have been installed and configured on SharePoint 2010.  Those steps are out of scope for this article, but the official documentation should be all you need:  Deploy Office Web Apps (Installed on SharePoint 2010 Products)

    Step 2: Upload the PowerPoint to a document library


    Step 3: Click on the PowerPoint Deck to open it in PowerPoint Web App.  It will have a URL that looks like:



    Don’t worry about writing down the URL; unfortunately, you can’t paste it into a Page Viewer web part without getting an error message.  So… a little magic is needed to get the URL we need to embed our PowerPoint deck on our SharePoint page.

    Step 4: Open the Developer Tools in Internet Explorer (F12), and search for iframe.


    Step 5: Copy the first result into your text editor of choice.  The magic URL you need is the one within the src attribute.


    Step 6: Delete everything except the part inside the quotes.  Before PowerPointFrame.aspx, add the relative URL to your site collection’s _layouts directory, and copy the whole URL to your clipboard.
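    Steps 5 and 6 amount to pulling the src attribute out of the copied iframe and prefixing your _layouts path. A sketch (the iframe HTML and the _layouts path below are illustrative stand-ins, not the exact markup PowerPoint Web App emits):

```python
import re

# Stand-in for the first iframe result copied from the F12 developer tools:
iframe_html = (
    '<iframe src="PowerPointFrame.aspx?PowerPointView=ReadingView'
    '&id=/Shared%20Documents/deck.pptx"></iframe>'
)

# Keep only the part inside the src quotes:
src = re.search(r'src="([^"]+)"', iframe_html).group(1)

# Prefix the relative URL of your site collection's _layouts directory
# (example path -- substitute your own site collection):
embed_url = "/sites/demo/_layouts/" + src
print(embed_url)
```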


    Step 7: Go to the SharePoint page you want to embed the PowerPoint into.  Add a Page Viewer Web Part to the page and open the tool pane for the web part.


    Step 8: In the Page Viewer tool pane, paste in the URL, and optionally enter a height/width and chrome state for the PowerPoint deck.


    Step 9: Hit “OK” and be awed at how awesome it looks to have a fully functional PowerPoint deck embedded on your page.  You can view the deck full screen by clicking “Start Slide Show”; you can change slides, view notes, click links, or click the “popout” button to have the deck open in its own window.


    Super-secret-squirrel trick: If you want the deck to default to a slide other than the cover slide, click through to the slide you want, and then click the popout button in the top right of the PowerPoint Web App.  The deck will open to that slide in its own window. 

    Use the same Developer Tools trick from Step 4, but this time search for &SlideId.  You will see the URL has two added parameters: a slide ID and popout=1 (the URL will end with something like &SlideId=590&popout=1).  You can guess what popout=1 does, and the SlideId is some sort of internal reference to the slide (I have no idea how it is generated, but it doesn’t matter; my web app-fu will work just the same). Just copy the &SlideId=somenumber and paste it onto the end of your URL in the Page Viewer web part, and now your web page will display the PowerPoint deck starting on whatever slide you specified!
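    The append step can be expressed as a one-liner; a sketch, using the SlideId value 590 from the example above and a placeholder embed URL:

```python
def start_at_slide(embed_url, slide_id):
    """Append the SlideId parameter captured from the popout URL."""
    sep = "&" if "?" in embed_url else "?"
    return f"{embed_url}{sep}SlideId={slide_id}"

url = "/sites/demo/_layouts/PowerPointFrame.aspx?PowerPointView=ReadingView"
print(start_at_slide(url, 590))
```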

    Additional Resources

    Office Web Apps technical library

  • SharePoint and Exchange Calendar together

    (post courtesy Anand Nigam)

    One of the cool things in SharePoint 2010 is the ability to show your Exchange calendar on a SharePoint site, side by side with a SharePoint calendar. This is called a Calendar Overlay.

    This post will walk through how to configure this.

    Step 1 (prerequisite)

    1. I have a SharePoint Site http://fabrikam which looks like this


    2. I also have a calendar, “MySharePointCalender”, with a few calendar events entered.


    3. I have my Exchange Calendar in Outlook, with a few meeting/events there as well.


    4. What we want is to see events from my Exchange calendar show up on the SharePoint calendar.

    Step 2 (The actual process)

    1. Open the SharePoint calendar  --> Calendar Tools –> Calendar Overlay –>New Calendar,



    Fill in the following:

    • Name: Give a name to this calendar
    • Type: Select Exchange
    • Outlook Web Access URL: the OWA url of your organization.
    • Exchange Web Service URL: which can be determined as follows:

    If your OWA URL is, then the Exchange web Service URL would be

    (in other words, from the OWA URL , remove the trailing “owa” and add “ews/exchange.asmx”)
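    The derivation above is mechanical, so it can be sketched in a few lines (the hostname below is a hypothetical example):

```python
def ews_from_owa(owa_url):
    """Derive the Exchange Web Service URL from the OWA URL: drop the
    trailing "owa" segment and append "ews/exchange.asmx"."""
    base = owa_url.rstrip("/")
    if base.lower().endswith("/owa"):
        base = base[:-len("/owa")]
    return base + "/ews/exchange.asmx"

print(ews_from_owa("https://mail.fabrikam.com/owa"))
```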


    Step 3 (The awaiting Error and the fix)

    If you have not previously configured SharePoint to trust your Exchange server, you will receive the following error message:

    Could not establish trust relationship for the SSL/TLS secure channel with authority ‘dc’. (GUID)


    Here is the fix

    1. Get the CA Root Certificate for your domain

    (Just a note: there are many ways to get the certificate; I’m taking the one that is least prone to error.)

    a. Go to the Server where you have the Certificate Authority installed. Open IIS and select the Server Certificates component.


    Double click on Server Certificates

    Locate the root certificate of the CA in the list; here is the one that I have.


    (To double-check that this is the root certificate, open the certificate and look at the certification path. It should have just one entry (the root), which is the name of the Certification Authority in your domain.) Below is the image of my root certificate.


    b. Now that we have located the certificate, open it, go to the Details tab, and click Copy to File.


    And now we have the Certificate exported to a file


    Copy this certificate to the SharePoint Server and follow the steps below:

    a. Open Central administration > Security> Manage Trust


    b. Click New, provide a name (I used RootCA), navigate to the RootCA.cer file you exported in the previous step, and click OK.


    Now refresh the same calendar and confirm that you can see the Exchange calendar events for the logged-in user.


    Step 4 (Enhance the default behavior)

    Although we can now see the Exchange calendar, we can only see the free/busy status, and not the actual details of the events. It would be good if we could have the details displayed here too. To display details:

    1. Open Outlook> File > Options>


    2. Go to the Calendar Section > click Free/Busy Options


    3. Select any one of the options; I have selected Full details. Click Apply and OK and exit Outlook.  Now refresh the SharePoint calendar and see the difference.



    Additional reading:

    Note: The calendar overlay is per user, meaning it will only show calendar items for the currently logged-in user.

  • Migrating File Shares to SharePoint Online

    (Post courtesy Partner Solution Consultant Andre Kieft)

    It has been a while since I created a blog post, but recently I received a lot of questions and requests for advice on how to migrate file shares to SharePoint and use SkyDrive Pro (SDP). So I figured I would create a blog post with the things you need to consider as a Small and Medium Business (SMB) partner when you are planning to migrate file share content into SharePoint and want to use SDP for synchronizing the SharePoint content offline.

    Note that these steps are valid for both SharePoint 2013 on-premises (on-prem) and SharePoint Online (SPO).

    Step 1 – Analyze your File Shares

    As a first step, try to understand the data that resides on the file shares. Ask yourself the following questions:

    • What is the total size of the file share data that the customer wants to migrate?
    • How many files are there in total?
    • What are the largest file sizes?
    • How deep are the folder structures nested?
    • Is there any content that is not being used anymore?
    • What file types are there?

    Let me try to explain why you should ask yourself these questions.

    Total Size

    If the total size of the file shares is more than the storage capacity that you have in SharePoint, you need to buy additional storage (SPO) or increase your disk capacity (on-prem). To determine how much storage you will have in SPO, please check the total available tenant storage in the tables in this article. Another issue that may arise is that you reach the capacity limit per site collection. For SPO that is 100 gigabytes; for on-prem the recommended size per site collection is around 200 gigabytes. This would automatically mean that the content database is around 200 gigabytes, which is the recommended maximum size. Though you can stretch this number up on-prem, it is not recommended.

    So, what should I do when my customer has more than 100 Gigabyte?

    • Try to divide the file share content over multiple site collections when it concerns content which needs to be shared with others.
    • If certain content is just for personal use, try to migrate that specific content into the personal site of the user.
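    The arithmetic behind that advice is simple. A back-of-the-envelope sketch using the per-site-collection figures quoted above (100 GB for SPO, ~200 GB recommended on-prem):

```python
import math

def site_collections_needed(total_gb, per_site_gb):
    """Minimum number of site collections needed for total_gb of content."""
    return math.ceil(total_gb / per_site_gb)

# Example: 350 GB of file share content to migrate.
print(site_collections_needed(350, 100))  # SPO, 100 GB per site collection
print(site_collections_needed(350, 200))  # on-prem, ~200 GB recommended
```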

    How Many Files

    The total number of files on the file shares is important, as there are some limits in both SharePoint and SDP that can result in an unusable state of the library or list within SharePoint; you also might end up with missing files when using the SDP client.

    First, in SPO we have a fixed limit of 5000 items per view, folder, or query. The reasoning behind this 5000 limit boils all the way down to how SQL works under the hood; if you would like to know more about it, please read this article. In on-prem there is a way to raise this limit, but it is not something we recommend, as performance can significantly decrease when you increase it.

    Second, for SDP there is also a limit of 5000 items for synchronizing team sites and 20000 for synchronizing personal sites. This means that if you have a document library that contains more than 5000 items, the rest of the items will not be synchronized locally.

    There is also a limit of 5 million items within a document library, but I guess that most customers in SMB won’t reach that limit very easily.

    So, what should I do if my data that I want to migrate to a document library contains more than 5000 items in one folder?

    • Try to divide that number of items over multiple subfolders, or create additional views that will limit the number of documents displayed.
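    Planning that split is just a partitioning exercise. A sketch that shows how a flat folder divides into subfolders that each stay under the 5000-item limit discussed above (the file names are a stand-in for a real directory listing):

```python
LIMIT = 5000

def plan_subfolders(names, limit=LIMIT):
    """Partition item names into chunks of at most `limit` items each."""
    return [names[i:i + limit] for i in range(0, len(names), limit)]

files = [f"doc{i:05}.docx" for i in range(12000)]  # pretend directory listing
chunks = plan_subfolders(files)
print(len(chunks), [len(c) for c in chunks])
```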

    But wait! If I already have 5000 items in one folder, doesn’t that mean that the rest of the documents won’t get synchronized when I use SDP?

    Yes, that is correct. So if you would like to use SDP to synchronize documents offline, make sure that the total number of documents per library in a team site does not exceed 5000.

    So, how do I fix that?

    • Look at the folder structure of the file share content and see if you can divide that data across multiple sites and/or libraries. If there is a Marketing folder, for example, it might make more sense to migrate that data into a separate site anyway, as this department probably wants to store additional information besides just documents (e.g. a calendar, general info about the marketing team, a site mailbox, etc.). An additional benefit of spreading the data over multiple sites/libraries is that it gives SDP users more granularity over what data they take offline using SDP. If you migrated everything into one big document library (not recommended), all users would need to synchronize everything, which can have a severe impact on your network bandwidth.

    Largest File Sizes

    Another limit that exists in both SPO and on-prem is the maximum file size. For both, the maximum size per file is 2 gigabytes. In on-prem the default is 250 MB, but it can be increased to a maximum of 2 gigabytes.

    So, what if I have files that exceed this size?

    • Well, it won’t fit in SharePoint, so you can’t migrate these files. See what type of files they are and determine what they are used for in the organization. Examples could be software distribution images, large media files, training courses, or other materials. If these are still being used and not highly confidential, it is not a bad thing to keep them on alternative storage like a SAN, NAS, or DVDs. If it concerns data that just needs to be kept for legal reasons and doesn’t need to be retrieved instantly, you might put it on DVD or an external hard drive and store it in a safe, for example.

    Folder Structures

    Another important aspect to look at on your file shares is the depth of nested folders and the length of file names. The recommended total length of a URL in SharePoint is around 260 characters. You would think that 260 characters is pretty lengthy, but remember that URLs in SharePoint often have encoding applied to them, which takes up additional space. E.g. a space is one character, but URL-encoded it becomes %20, which takes up three characters. The problem is that you can run into issues when the URL becomes too large. More details about the exact limits can be found here, but as a best practice try to keep the URL length of a document under 260 characters.
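    To see the effect, you can measure the *encoded* length of a document path rather than the raw length; a sketch, using a made-up path in the spirit of the examples in this post:

```python
from urllib.parse import quote

def encoded_length(url):
    """Length of the URL after percent-encoding (spaces become %20)."""
    return len(quote(url, safe="/:"))

path = "/sites/HR/Shared Documents/proposal modified by Andre v.1.2.docx"
print(len(path), encoded_length(path))  # the encoded form is longer
```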

    So, what if I have files that will have more than 260 characters in total URL length?

    • Make sure you keep your site URLs short (the site title name can be long though). E.g. don’t call the URL Human Resources, but call it HR. If you land on the site, you would still see the full name Human Resources as Site Title and URL are separate things in SharePoint.
    • Shorten the document name (e.g. strip off “…v.1.2” or “…modified by Andre”), as SharePoint has versioning built in. More information about versioning can be found here.

    Idle Content

    Migrating file shares into SharePoint is often also a good opportunity to clean up some of the information that the organization has been collecting over the years. If you find there is a lot of content that has not been accessed for a couple of years, what would be the point of migrating that data to SharePoint?

    So, what should I do when I come across such content?

    • Discuss this with the customer and determine if it is really necessary to keep this data.
    • If the data cannot be purged, you might consider storing it on a DVD or external hard drive and keep it in a safe.
    • If the content has multiple versions, such as proposal 1.0.docx, proposal 1.1.docx, proposal final.docx, and proposal modified by Andre.docx, you might consider just moving the latest version instead of migrating them all. This manual process might be time consuming, but it can save you lots of storage space in SharePoint. Versioning is also built into the SharePoint system, which is optimized to store multiple versions of the same document; for example, SharePoint only stores the delta of the next version, saving storage space that way. Note that this functionality is only available in SharePoint on-prem.

    Types of Files

    Determine what kind of files the customer has. Are they mainly Office documents? If so, then SharePoint is the best place to store such content. However, if you come across developer code, for example, it is not a good idea to move that into SharePoint. There are also file extensions that are not allowed in SPO and/or on-prem. A complete list of blocked file types for both SPO and on-prem can be found here.

    So, what if I come across such file extensions?

    • Well, you can’t move them into SharePoint, so you should ask yourself: do I still need these files? And if so, is there an alternative storage facility, such as a NAS, that I can store these files on? If it concerns developer code, you might want to store that code on a Team Foundation Server instead.
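    Spotting those extensions ahead of time is easy to script. A sketch using a small illustrative subset of extensions; for a real scan, use the full blocked-file-type list from the article referenced above:

```python
# Illustrative subset only -- NOT the complete blocked list.
SAMPLE_BLOCKED = {".exe", ".dll", ".bat", ".cmd"}

def blocked(file_names, blocked_exts=SAMPLE_BLOCKED):
    """Return the file names whose extension is in the blocked set."""
    return [f for f in file_names
            if "." in f and "." + f.rsplit(".", 1)[1].lower() in blocked_exts]

print(blocked(["setup.exe", "proposal.docx", "helper.DLL", "readme"]))
```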

    Tools for analyzing and fixing file share data

    In order to determine if you have large files or exceed the 5000 limit for example, you need to have some kind of tooling. There are a couple of approaches here.

    • First off, there is a PowerShell script that has been pimped up by a German colleague, Hans Brender, which checks for blocked file types, bad characters in files and folders, and the maximum URL length. The script will even fix invalid characters and file extensions for you. It is a great script, but it requires you to have some knowledge of PowerShell. Another alternative I was pointed to is a tool called SharePrep, which does a scan for URL length and invalid characters.
    • Second, there are other 3rd-party tools that can scan your file share content, such as TreeSize. However, such tools do not necessarily check for the SharePoint limitations discussed in the earlier paragraphs, but at least they will give you a lot more insight into the size of the file share content.
    • Finally, there are actual 3rd-party migration tools that will move the file share content into SharePoint and check for invalid characters, extensions, and URL length upfront. We will dig into these tools in Step 2 – Migrating your data.
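    If you just need a quick first pass before reaching for those tools, a few lines of Python can gather the raw numbers the Step 1 questions ask about. A sketch; the thresholds are the limits discussed in this post, and the demo runs against a tiny throwaway tree (point `scan_share` at a real share instead):

```python
import os
import tempfile

def scan_share(root, big_file_bytes=2 * 1024**3, folder_limit=5000):
    """Walk a file share and collect totals, oversized files, crowded
    folders, and the deepest nesting level."""
    stats = {"files": 0, "total_bytes": 0, "too_big": [],
             "crowded_folders": [], "max_depth": 0}
    root = os.path.abspath(root)
    for dirpath, dirnames, filenames in os.walk(root):
        depth = dirpath[len(root):].count(os.sep)
        stats["max_depth"] = max(stats["max_depth"], depth)
        if len(filenames) + len(dirnames) > folder_limit:
            stats["crowded_folders"].append(dirpath)
        for name in filenames:
            size = os.path.getsize(os.path.join(dirpath, name))
            stats["files"] += 1
            stats["total_bytes"] += size
            if size > big_file_bytes:
                stats["too_big"].append(os.path.join(dirpath, name))
    return stats

# Demo on a tiny throwaway tree:
base = tempfile.mkdtemp()
os.makedirs(os.path.join(base, "HR", "archive"))
with open(os.path.join(base, "HR", "plan.docx"), "w") as f:
    f.write("draft")
print(scan_share(base))
```

    This only gathers numbers; it does not check invalid characters or blocked extensions, which is where the scripts and tools above earn their keep.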

    Step 2 – Migrating your data

    So, now that we have analyzed our file share content, it is time to move it into SharePoint. There are a couple of approaches here.

    Open with Explorer

    If you are in a document library, you can open the library in Windows Explorer and simply copy and paste files from the file share into SharePoint.


    But there are some drawbacks to this scenario. First of all, I’ve seen lots of issues when trying to open a library in Windows Explorer. Secondly, the technology used to copy the data into SharePoint is not very reliable, so keep that in mind when copying larger chunks of data. There is also drag & drop, but it is limited to files (no folders) and handles a maximum of 100 files per drag; if you have 1,000 files, you need to drag them in 10 chunks. More information can be found in this article. Finally, the Open with Explorer method does no upfront checking for invalid characters, blocked file types, or URL length.

    Pros: Free, easy to use, works fine for smaller amounts of data

    Cons: Not always reliable, no metadata preservation, no upfront detection of things like invalid characters, file type restrictions, path lengths, etc.

    SkyDrive Pro

    You could also use SDP to upload the data into a library. This is fine as long as you don’t sync more than 5000 items per library. Remember, though, that SDP is a sync tool, not a migration tool, so it is not optimized for copying large chunks of data into SharePoint. Checks for character and file type restrictions, path length, and so on are on the SDP team’s list to address, but they are not there yet.

    The main drawback of using either the Open in Explorer option or SDP is that these tools don’t preserve the metadata of the files and folders on the file shares. By this I mean that fields like modified date and owner are not migrated into SharePoint: the owner becomes the user who copies the data, and the modified date becomes the timestamp of when the copy operation was executed. So if this metadata on the file shares is important, don’t use either of the methods mentioned above; use one of the third party tools below.
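    If you do proceed with one of the free methods, it is worth snapshotting the original modified dates first so you can at least verify (or later re-stamp) them. A minimal sketch, assuming Python is available on the machine that can reach the share:

```python
import csv
import os
import time

def snapshot_metadata(root, out_csv):
    """Record each file's last-modified timestamp before migration, so the
    original dates survive even if the copy tool overwrites them."""
    with open(out_csv, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["path", "modified_utc"])
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                full = os.path.join(dirpath, name)
                mtime = os.path.getmtime(full)
                writer.writerow([full, time.strftime(
                    "%Y-%m-%dT%H:%M:%SZ", time.gmtime(mtime))])
```

    Owner information would need a platform-specific lookup on top of this, but even a modified-date snapshot makes it easy to spot-check the migrated library afterwards.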

    Pros: Free, easy to use, works fine for smaller amounts of data (max 5000 per team site library or 20000 per personal site)

    Cons: No metadata preservation, no upfront detection of things like invalid characters, file type restrictions, path lengths, etc.

    3rd party tools

    Here are some of the 3rd party tools that will provide additional detection, fixing and migration capabilities that we mentioned earlier:

    (Thx to Raoul for pointing me to additional tools)

    The list above is in random order; some tools focus on SMB, while others focus more on the enterprise segment. We can’t express a preference for one tool over another, but most of them offer a free trial version, so you can try them out yourself.


    So, when should I use what approach?

    Here is a short summary of capabilities:

                                     | Open in Explorer | SkyDrive Pro                        | 3rd party
      Amount of data                 | Relatively small | No more than 5000 items per library | Larger data sets
      Invalid character detection    | No               | No                                  | Mostly yes1
      URL length detection           | No               | No                                  | Mostly yes1
      Metadata preservation          | No               | No                                  | Mostly yes1
      Blocked file types detection   | No               | No                                  | Mostly yes1

    1This depends on the capabilities of the 3rd party tool.


    SDP gives me issues when synchronizing data
    Please check that you have the latest version of SDP installed. There have been stability issues in earlier builds of the tool, but most of those issues should be fixed by now. To check whether you are running the latest version, open Word -> File -> Account and click Update Options -> View Updates. If your current version number is lower than the latest available one, click the Disable Updates button (click Yes if prompted), then click Enable updates (click Yes if prompted). This forces a download of the latest version of Office, and thus the latest version of the SDP tool.


    If you are running the stand-alone version of SDP, make sure you have downloaded the latest version from here.

    Why is the upload taking so long?
    This really depends on a lot of things. It can depend on:

    • The method or tool that is used to upload the data
    • The available bandwidth for uploading the data. Tips:
      • Check your upload speed at and do a test for your nearest Office 365 data center. This will give you an indication of the maximum upload speed.
      • Often companies have less upload bandwidth available than people at home. If you have the chance, uploading from a home location might be faster.
      • Schedule the upload for times when more bandwidth is available (usually at night).
      • Test your upload speed upfront by uploading, say, 1% of the data. Multiply the time taken by 100 and you have a rough estimate of the total upload time.
    • The computers used for uploading the data. A slow laptop can become a bottleneck while uploading the data.

    If you feel that there are things missing here, please let me know and I’ll try to add them to this blog post.

  • SharePoint 2010–Returning Document ID in Search Results

    (Post courtesy Sean Earp, with research and XSLT authoring by Alaa Mostafa)

    One of my favorite features of SharePoint 2010 is the Document ID.

    As discussed in the MSDN article Developing with Document Management Features in SharePoint Server 2010 (ECM):

    A document ID is a unique identifier for a document or document set and a static URL that opens the document or document set associated with the document ID, regardless of the location of the document. Document IDs provide:

    • A way to reference items such as documents and document sets in SharePoint Server 2010 that is less fragile than using URLs. URLs break if the location of the item changes. In place of the URL, the document ID feature creates a static URL for each content item with a document ID assigned to it.

    • More flexible support for moving documents or document sets at different points in the document life cycle. For example, if you create a document on a MySite or Workspace page and then publish it on a team site, the document ID persists and travels with the document, circumventing the broken URL problem.

    • A document ID generator that assigns a unique document ID to items. You can customize the format of the IDs that the service generates. By using the document management API, you can write and use custom document ID providers.


    When browsing a document library with this feature enabled, you can display the Document ID column, and you will be able to see the Document ID for a given document.  Easy enough, and useful if you need to reference this Document ID in another system.

    This works great when you can browse a document library, perhaps using the new metadata navigation and filtering capabilities of SharePoint 2010, but if your document library holds thousands and thousands of documents, users may resort to search to find the document they are looking for.  Unfortunately, SharePoint search does not display the Document ID in the search results by default.


    Fortunately, SharePoint indexes Document IDs as a managed property by default, which means that with a little magic, we can add the Document ID into the search results.

    In a nutshell, SharePoint retrieves the search results as XML, and uses XSLT to transform the XML into the pretty results you see on the search results page.  Same basic concept as HTML (which has your content) and CSS (which styles that content).  We just need to tell SharePoint to return the managed property with our Document ID, and then update the XSLT to display that managed property in the search results. 

    It is not as hard as it sounds.
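    To make the XML-plus-transform idea concrete, here is a toy Python stand-in for the pipeline: the element names and sample results are invented for illustration (the real SharePoint result schema is much richer), and SharePoint of course does this with XSLT rather than Python.

```python
import xml.etree.ElementTree as ET

# Invented sample of "search results as XML" for illustration only.
results_xml = """
<All_Results>
  <Result><title>Budget 2011</title><spdocid>DOC-1-27</spdocid></Result>
  <Result><title>Org Chart</title><spdocid>DOC-1-31</spdocid></Result>
</All_Results>
"""

def render(xml_text):
    """Transform result XML into display strings, mimicking what the
    XSLT does: show the Document ID in brackets after each title."""
    rows = []
    for result in ET.fromstring(xml_text):
        title = result.findtext("title")
        docid = result.findtext("spdocid")
        rows.append(f"{title} [ {docid} ]")
    return rows
```

    The two SharePoint-side steps in the walkthrough map directly onto this sketch: adding the fetched property puts `spdocid` into the XML, and editing the XSL changes how it is rendered.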

    Assumptions: I assume you have enabled the Document ID feature on the site collection, all documents have been assigned Document IDs, and a full crawl has been done of the site.  I also assume you are a site collection administrator with full permissions to the site collection.

    From your Search Results page in the site collection (wherever you have it), click Page –> Edit (or Site Actions –> Edit Page).  You will see a ton of zones and web parts (such as the Refinement Panel, Search Statistics, Search Box, etc.).  You can customize the heck out of the search results page and move things all over the place.


    For now, however, we are just going to modify the Search Core Results web part that contains…er… the core search results.  How intuitive!

    Edit the Search Core Results web part, and expand the section that says “Display Properties”.  Uncheck the box that says “Use Location Visualization”.  I have no idea why this option is named as it is… this is really the option that lets you edit the fetched properties and XSL.


    As a quick aside… although you can edit the fetched properties and XSL directly from the web page properties, the experience is horrible.  I strongly recommend using an XML editor like Visual Studio or Notepad++.

    In the Fetched Properties section you will see a number of columns that look like the following.  These are the managed properties returned by SharePoint Search:

    <Column Name="PictureHeight"/>
    <Column Name="PictureWidth"/>

    Somewhere before the closing </Columns> tag, add:

    <Column Name="spdocid"/>

    (Note: if you are using SharePoint search instead of FAST search, replace all instances of “spdocid” with “DocId”)

    This will cause SharePoint to return the Document ID in the search results XML.  Now let’s modify the XSL so that we display the ID in the search results.  Click “XSL Editor…” and copy the XSL into your XML editor of choice (or, if you like pain, just edit the 938-line XSL sheet in a browser that does no validation or color coding.  Your choice.)

    At the top of the XSL is a list of parameter names.  Add the following parameter (order does not matter):

    <xsl:param name="spdocid" />


    Next, search for “DisplayAuthors”.  After the DisplayAuthors call template, we are going to add a new call template called “DisplayID” to… well, display the ID. The template is wrapped in a conditional to ensure that if there is no Document ID, it does not attempt to display a null value.

    Add the following lines:

                  <xsl:if test="string-length($hasViewInBrowser) &gt; 0">
                          <xsl:call-template name="DisplayID">
                            <xsl:with-param name="spdocid" select="spdocid" />
                            <xsl:with-param name="browserlink" select="serverredirectedurl" />
                            <xsl:with-param name="currentId" select="$currentId" />
                          </xsl:call-template>
                  </xsl:if>


    Search for “DisplayString”, and we will add the template itself, which displays the ID (along with a URL that links back to the document); we’ll put brackets around the Document ID so it stands out visually.  Add the following:

      <xsl:template name="DisplayID">
        <xsl:param name="spdocid" />
        <xsl:param name="currentId" />
        <xsl:param name="browserlink" />
        <xsl:if test="string-length($spdocid) &gt; 0">
          <xsl:text xml:space="default"> [ </xsl:text>
          <a href="{concat($browserlink, $ViewInBrowserReturnUrl)}" id="{concat($currentId,'_VBlink')}">
            <xsl:value-of select="$spdocid" />
          </a>
          <xsl:text xml:space="default"> ] </xsl:text>
        </xsl:if>
      </xsl:template>

    We’re almost done!  Select all your XSL, copy it, and paste it back into your SharePoint window, hit Save –> Okay –> Check In –> Publish

    Voila!  The Document ID now shows up in the search results with a clickable link back to the source document.


    Random troubleshooting tip:  If you get the message “Property doesn't exist or is used in a manner inconsistent with schema settings”, this typically means one of two things:

    1. You created a custom managed property and have not yet run a full crawl so that this property does not exist in the index (this property is mapped out of the box, so it does not apply here)
    2. You are using the wrong managed property.  FAST search uses “spdocid” while SharePoint search uses “DocId”



    Attachments: I have attached a copy of the XSL I used for the above post to save you time copying and pasting into the right sections.  It works for me with SharePoint search, but use on a test server first and at your own risk.

  • Migrate from Gmail to Office 365 in 7 steps

    If you are doing a large migration from Gmail to Office 365, you will generally want to use a 3rd party tool that automates the process.  However, if you are migrating a small customer with a few mailboxes, it is quick, easy, and free to do so manually.

    Here is how:

    What do you need to know before you begin?

    This guide covers migrating from Gmail to Office 365 and will take about an hour to complete.

    For more information on deploying Office 365, see the first article in the series at Office 365 Midsize Business Quick Deployment Guide and also watch the YouTube video at Office 365 Midsize Business Quick Deployment Guide video.

    Before you begin the Gmail to Office 365 migration, you need to know or have at hand a few key pieces of information:

    1. Your Google Apps and Office 365 administrator accounts and passwords.
    2. The URLs to access the Google admin console, the Office 365 admin center, and the Exchange admin center. If you don't have them, don’t worry—they are covered later in this document.
    3. The user names and passwords of the Gmail mailboxes you want to migrate.
    4. How to create MX records at your Internet service provider.


    If you’re using Office 365 Midsize Business with the Microsoft Open License or the Open Value program, go to the get started with Office 365 page and create an Office 365 account first. After you’ve created the account, return to this document and begin Step 1: Sign in to the Gmail Admin console and Office 365 admin center.

    What Gmail information is migrated?

    1. Email is migrated, and this is covered in Step 5: Migrate a Gmail mailbox.

    2. Gmail contacts are migrated and imported by using a CSV file. This topic is covered in Step 6: Migrate Gmail contacts.

    3. Gmail calendar items are imported by exporting Google Calendar to an iCal file. This is covered in Step 7: Migrate Gmail calendar.

    Okay, let’s get started.

    Step 1: Sign in to the Gmail Admin console and Office 365 admin center


    Sign in to the Google Admin console

    1. By using your Google Apps administrative credentials, sign in to the Google Admin console.

    2. After you’re signed in, choose Users and verify the list of users you want to migrate to Office 365.


    Sign in to the Office 365 admin center or the Exchange admin center
    1. By using your Office 365 administrative credentials, sign in to the Office 365 admin center.

    2. After you’re signed in, you will be directed to the Office 365 admin center page.

    3. To go to the Exchange admin center, click the drop-down arrow next to the Admin name in the ribbon bar.


    4. From the list, select Exchange.

    5. Select Office 365 to return to the Office 365 admin center page.

    Step 2: Create Office 365 mailboxes for Gmail users you want to migrate

    One of the most important tasks in preparing to migrate Gmail to Office 365 is first creating an Office 365 mailbox for each Gmail mailbox you want to migrate. Fortunately, creating an Office 365 mailbox is easy. You simply create a new user account and assign the Exchange Online Plan license to the user. Refer to your list of Gmail mailboxes you want to migrate, and complete the following steps to create corresponding Office 365 mailboxes.

    To create an Office 365 mailbox for each user you want to migrate from Gmail

    1. From the Office 365 admin center, click users and groups > active users.

    2. Click the plus icon (+) to add a new user account. You can also create multiple user accounts at the same time by clicking the Bulk add icon, as shown in the following figure.


    3. Click Assign role > Set user location, and then click Next.
    4. On the Assign licenses page, ensure that Exchange Online Plan 1 or Exchange Online Plan 2 is selected. This helps ensure that the user account being created will have access to email.


    5. On the Send results in email page, type an email address where you will receive the temporary password for the user.

      The newly created user name and password appear on the Results page and are also sent to the administrator via email.

    6. Lastly, send the email message with the user name and temporary password information to each user.

    Step 3: Create a Gmail migration file

    The migration file, a comma-separated values (CSV) file, contains the list of Gmail accounts that will be migrated to Office 365. Each row of the file contains the email address of an Office 365 mailbox and the corresponding user name and password of the Gmail account that will be migrated.

    The CSV file can easily be created by using Microsoft Excel.


    Create the Gmail migration file

    1. On your local computer, open Excel 2013 or Excel 2010.

    2. Using the preceding figure as a template, create the migration file.

    3. Column A lists the Office 365 mailbox.

    4. Column B lists the Gmail user name.

    5. Column C lists the password for the Gmail user in Column B.

    6. Save the file as a CSV file type, and then close the program.
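    If you prefer to generate the CSV programmatically rather than in Excel, a short script works just as well. The header names below follow the IMAP migration CSV format (EmailAddress, UserName, Password), and the mailbox rows are hypothetical placeholders; verify the headers against the current documentation before running a real batch.

```python
import csv

# Hypothetical example rows: Office 365 address, Gmail user name, Gmail password.
mailboxes = [
    ("terry@contoso.com", "terry.adams@gmail.com", "Password1"),
    ("ann@contoso.com",   "ann.beebe@gmail.com",   "Password2"),
]

def write_migration_file(rows, path):
    """Write the Gmail migration file in the expected CSV layout."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["EmailAddress", "UserName", "Password"])
        writer.writerows(rows)
```

    Because the file contains plain-text passwords, store it securely and delete it once the migration batch has completed.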

    Step 4: Verify that Office 365 can communicate with Gmail

    As part of the migration process, Office 365 must verify that it can communicate with Gmail. It’s very important to successfully connect to the Gmail server before continuing. If you do experience any problems performing this step, see Troubleshooting the Gmail connection to resolve the issue.

    Test the connection to the Gmail server

    1. Go to the Exchange admin center.

    2. Select migration > More > migration endpoints.

    3. Choose + and then select IMAP.

    4. Set IMAP server to, and leave the remaining settings as they are.

    5. Choose Next.


    6. When you reach the new migration endpoint page, this verifies that Office 365 can connect to the Gmail server.


    7. Enter a name for the connection and choose new to create the migration endpoint. The preceding figure uses Gmail-migration as the name of the migration endpoint.

    8. The migration endpoints page appears and displays the endpoint you just created.


    Step 5: Migrate a Gmail mailbox

    When you migrate your Gmail mailbox to Office 365, only the items in your inbox or other mail folders are migrated. The steps for migrating your contacts and calendar items are covered in later steps.

    Migrate messages from Gmail to Office 365

    1. Go to the Exchange admin center.

    2. Navigate to Recipients > Migration.

    3. Click the plus icon (+), and choose Migrate to Exchange Online.

    4. Choose IMAP migration.

    5. Choose Browse, and specify the file created in Step 3: Create a Gmail migration file.

    6. On the Start the batch page, select Automatically start the batch. The status field will initially be set to Created, as shown below.

    7. The status will change to Syncing and then to Synced after the Gmail messages have been synchronized with Office 365.

    Step 6: Migrate Gmail contacts

    You migrate your contacts from Gmail to Office 365 by first exporting the list of contacts to a comma-separated values (CSV) file and then importing that file into Office 365.

    Export Gmail contacts to a CSV file

    1. Using your Google Apps administrative credentials, sign in to the Google admin console.

    2. Choose Contacts > More > Export.

    3. Choose All contacts > Outlook CSV format > Export.

    4. Select a location to save your file.


    When you export Gmail contacts to a CSV file, you must choose the Outlook CSV format to successfully import the Gmail contacts into Office 365.

    Import Gmail contacts into Office 365

    1. Using your Office 365 administrative credentials, sign in to the Office 365 admin center.

    2. Choose People > Settings > Import contacts.

    3. Select the Gmail contacts CSV file you exported in the previous procedure, and choose Next.

    4. After the Gmail contacts have been successfully imported into Office 365, choose Finish.

    Step 7: Migrate Gmail calendar

    You migrate calendar items from Gmail to Office 365 by using a two-step process. First, you export the Gmail calendar items as an iCal file. Once the iCal file is saved, you use Microsoft Outlook to import the calendar items into the Outlook Calendar. You cannot import the iCal file directly into Outlook Web Access.


    There are third-party tools available that simplify the task of moving Gmail calendar items and contacts to Office 365 and Microsoft Outlook. An Internet search for “Gmail to Office 365 migration tools” lists some of these tools.

    Export your Gmail calendar to an iCal file

    1. Using your Google Apps administrative credentials, sign in to

    2. Choose Calendar > My calendars > Settings > Export calendars.

    3. Select a location to save your file. Gmail saves the iCal file as a compressed file. Be sure to decompress the file before proceeding to the next step.

    Import your Gmail calendar into Microsoft Outlook
    1. Set up Microsoft Outlook to access Office 365. For guidance, see Set up email in Outlook 2010 or Outlook 2013.

    2. Choose Import > Import an iCalendar (.ics) or vCalendar file (.vcs) > Next.

    3. Select the iCalendar file you saved in the previous step.

    4. Choose Outlook’s calendar > Finish. You should now see the Gmail calendar items within the Outlook calendar.

    Verify Gmail migration completed successfully

    Now that you have migrated Gmail messages, contacts, and calendar items to Office 365, you can use Outlook Web App, which comes with Office 365, to verify that Gmail migrated successfully.

    Verify Gmail migrated successfully using Outlook Web App

    1. Open the email message sent by the Office 365 administrator that includes your temporary password.

    2. Go to the sign-in page

    3. Sign in with the user name and temporary password.

    4. Update your password, and set your time zone.


      It’s very important that you select the correct time zone to ensure your calendar and email settings are correct.

    5. When Outlook Web App opens, send an email message to the Office 365 administrator to verify that you can send email.

    6. Choose the Outlook icon, and verify that the Gmail messages have been migrated.

    7. Choose the People icon, and verify that the Gmail contacts have been migrated.

    8. Choose the Calendar icon, and verify that the Gmail calendar items have been migrated.


      You cannot import Gmail calendar items directly into Outlook Web App. However, you can view the items using Outlook Web App after they have been imported by Microsoft Outlook.

    Next steps after migrating Gmail to Office 365

    Well, you’ve reached the end of migrating Gmail to Office 365. At this stage, email flows to both the Gmail and Office 365 mailboxes. Many administrators choose to keep both mailboxes running in parallel for a period of time, and there’s nothing wrong with that approach. The limitation is that email is synchronized from Gmail to Office 365 only once every 24 hours. To remove this limitation and route Gmail messages directly to Office 365, follow the procedure below.

    Route all future Gmail messages to Office 365

    1. Sign in to your DNS hosting provider’s website.

    2. Select your domain.

    3. Find the page where you can edit DNS records for your domain.

    4. Open a new browser window, and sign in to the Office 365 website using your Office 365 administrative credentials.

    5. Choose domains > your company domain > View DNS Settings > View DNS records.

    6. In the Exchange Online section, in the MX row, copy the Priority, Host Name, and Points to Address.


    7. Return to your DNS hosting provider’s website, and use this information to create a new MX record.

    8. Set the MX record to the highest priority available (the lowest number, typically 0), and save the record.
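    For reference, a finished MX record in a standard DNS zone file has this shape. The domain and target host below are hypothetical placeholders; use the Priority, Host Name, and Points to Address values copied from your own Office 365 DNS settings page.

```
; Hypothetical example -- substitute the values shown in your
; Office 365 DNS settings.
contoso.com.    3600    IN    MX    0    contoso-com.mail.protection.outlook.com.
```

    Your DNS hosting provider’s interface may split these fields into separate boxes (priority, host, and target) rather than accepting a raw zone-file line.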

    For detailed instructions for creating MX records to point to Office 365, see the article Create DNS records for Office 365 when you manage your DNS records.

    For information about creating an MX record, see Find your domain registrar or DNS hosting provider.


    Typically, it takes about 15 minutes for DNS changes to take effect. However, it can take up to 72 hours for a changed record to propagate throughout the DNS system.

    See the following list of resources to further your exploration of Office 365:

    • Join the Office 365 Yammer group to discuss the latest news about Office 365. Sign up on the Office 365 Yammer page to get started.
    • The Office 365 community site posts the latest developments and information related to Office 365. It includes a discussion area where site members can post questions and answers.

    Troubleshooting the Gmail connection

    The information in this article covers troubleshooting Step 4: Verify that Office 365 can communicate with Gmail. If you successfully created a connection to Gmail from Office 365, you can skip this topic. However, if you were not successful connecting to Gmail from Office 365, perform the following steps.

    Test the connection to the Gmail server

    1. Open Windows PowerShell as an administrator on your computer.

    2. From the Windows PowerShell command window, run Get-ExecutionPolicy.

      The Get-ExecutionPolicy cmdlet tells you which of the four execution policies (policies that determine which Windows PowerShell scripts, if any, will run on your computer) is set. In the next step, we’ll change this setting to remotesigned.

    3. From the Windows PowerShell command window, run Set-ExecutionPolicy remotesigned.

    4. Next, run the following command:

      $session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri "" -Credential $cred -Authentication Basic -AllowRedirection
    5. When prompted to enter your Windows PowerShell credentials, enter your Office 365 administrator credentials.

    6. Next, run Import-PSSession $session.

      This command provides access so you can test the connection between Gmail and Office 365.

    7. To see a list of Office 365 mailboxes configured on Office 365, run Get-Mailbox. This is just a quick test to verify that we are communicating with Office 365.

    8. Finally, to test the connection between Gmail and Office 365, run the following command:

      Test-MigrationServerAvailability -IMAP -RemoteServer -Port 993 -Security SSL

      You should see Success appear in the Result row. If you see any errors, verify you have entered the command correctly.


    9. Now that you’ve verified that Office 365 can connect to Gmail, it’s important to disconnect from Office 365. To do that, from the Windows PowerShell command window, run Exit.

    10. Troubleshooting is now complete. Return to Step 4: Verify that Office 365 can communicate with Gmail.

  • Performing an Active Directory Health Check Before Upgrading

    (Post courtesy Bonoshri Sarkar)

    Hi everyone, this is Bonoshri Sarkar. I have worked at Microsoft as a Partner Technical Consultant specializing in Directory Services for the past two years, providing end-to-end consulting to enable partners to design, position, sell, and deploy Microsoft platforms for their customers. In my earlier role, I worked for more than 4 years on the Microsoft Support team, focusing on Microsoft Directory Services.

    Since I have a great affinity for Directory Services, I thought it would be a great idea to pen down my thoughts and experience on ensuring a smooth Active Directory Upgrade.

    For any kind of upgrade, migration, or transition to go smoothly, and to end up with a healthy environment afterwards, you need to spend a fair amount of time planning and making sure that the source environment is in a healthy state. There are two driving factors for any upgrade or transition: the need to use the new features that the new version of the product has to offer, and the desire to ease the complexities and issues of the current environment. However, most IT pros do not take adequate steps to check the health of their existing Active Directory environment. In this post, I would like to address some of the key steps that an AD administrator must perform prior to an upgrade or transition.

    In my experience assisting customers and partners with different transitions, most issues pertain to the source domain or the source domain controllers, so I will discuss a few important things that should be considered mandatory before any kind of upgrade, migration, or transition.

    Performing an Active Directory Health Check

    The health check should be done in 2 phases.

    1. Planning Phase

    2. Deploy Phase (just before implementing the upgrade, transition or migration)

    In the first phase we should identify which services and roles are running on the machine we are planning to upgrade, and rule out anything we do not want to move to the new box.

    With an emphasis on diagnosing AD issues, we can use dcdiag to ensure a healthier Active Directory. We have been using dcdiag for many years, typically looking for failure messages in the output, but apart from the failure messages we should also consider issues such as those highlighted in yellow below:




    Notice that the first part of the dcdiag output says “failed test replication”, implying that there are issues with Active Directory replication on this Domain Controller.

    The second message tells us that there are issues with the NETLOGON and SYSVOL shares, which are the default logon shares. The two errors can be interdependent or can have completely different causes.

    In this scenario we need to fix AD replication first, or dig in further to find what is causing these errors. You can use a few more commands to check AD replication, such as repadmin /syncall /eAP. In a large enterprise, you can also use Replmon (Windows Server 2003).

    The third message tells us that the important services are running. We need to be sure that the above services are started to ensure a smooth transition.

    If the dcdiag results don’t give enough detail, check the event viewer; if you still do not see anything, restart the FRS service and then check the event viewer for Event ID 13516.
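    On a DC with many tests, it can help to pull just the failures out of the dcdiag transcript instead of eyeballing it. A minimal sketch, assuming you have saved the dcdiag output to a text file; the sample output below is invented for illustration.

```python
def failed_tests(dcdiag_output):
    """Return the names of tests that dcdiag reported as failed.

    dcdiag prints one 'passed test X' / 'failed test X' line per test."""
    failures = []
    for line in dcdiag_output.splitlines():
        line = line.strip()
        if "failed test" in line:
            failures.append(line.split("failed test", 1)[1].strip())
    return failures

# Invented sample transcript for illustration:
sample = """\
......................... DC01 passed test Connectivity
......................... DC01 failed test Replications
......................... DC01 passed test Services
......................... DC01 failed test NetLogons
"""
```

    Feeding a saved transcript through `failed_tests` gives you a short punch list (here, Replications and NetLogons) to investigate before the upgrade.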


    Apart from dcdiag you can also use Netdiag to check the network status and get detailed information.

    In addition to this, make sure the NIC drivers are updated on the old server.

    Instead of disabling the hardware or software firewall between the old and new servers, make the appropriate exceptions and port configurations to ensure proper communication between the directory servers (see Active Directory and Active Directory Domain Services Port Requirements).

    Any third-party or legacy applications should be tested in a lab environment to make sure that they are compatible with the new version of the server OS and Active Directory.


    We also have different versions of the Exchange BPA (Best Practice Analyzer) tool, depending on the version of Exchange, to check Exchange integrity and Exchange-specific permissions (select the permission check to gather that information).

    Last but not least, read the migration or transition documents to make sure the server meets all the minimum requirements.

    Once we are sure that the servers are in a healthy state, do not forget to take a full backup and a system state backup using a supported backup system, as documented in the TechNet article below.

    All these stitches in time will definitely save you nine hours’ worth of troubleshooting. It’s up to you: would you rather troubleshoot, or enjoy your fries with a Coke?

    Additional References

  • Network Monitoring with System Center Operations Manager 2012

    (Post courtesy Nikunj Kansara)

    This post describes the network monitoring capabilities of the System Center Operations Manager 2012 Beta.

    In my opinion, network monitoring is the most exciting feature of the upcoming Operations Manager 2012 release. This article gives an overview of network monitoring: how to discover network devices, how to configure network monitoring rules and object discoveries, and a sneak peek at the reports generated by network management and the network dashboard.

    I have split the post into four topics:

    A. How to discover the network devices:

    Discovery is the process of identifying network devices to be monitored.

    Operations Manager 2012 can monitor devices that use SNMP v1, v2c, and v3.

    The benefit of configuring network monitoring is that if a critical server appears to be down, we will see an alert that the switch/router port connected to that server is down. We can also see the network topology diagram, called the Network Vicinity view.

    Operations Manager 2012 provides the following monitoring for discovered network devices:

    • We can view connection health between network devices, and between a server and a network device
    • We can view VLAN health, based on the health state of the switches in the VLAN
    • We can view HSRP group health, based on the health state of individual HSRP end points
    • We can view port/interface monitoring, such as up/down state and inbound/outbound traffic volume
    • We can view port/interface utilization and packets dropped or broadcast
    • We can view processor utilization for some certified devices
    • We can view memory utilization for some certified devices

    Network device discovery is performed by discovery rules that you create.

    Below are steps for creating the discovery rule:

    1. Open the Operations Console

    2. Go to the Administration workspace, right-click Administration, and then click Discovery

    Figure 1

    3. The What would you like to manage? page shown in Figure 1 will open; select the Network Devices option and click Next.

    4. The General page in Figure 2 appears; provide the name of the discovery rule, select the management server from the drop-down, and then click Next.


    • We can create one discovery rule per management server or gateway server.
    • If we are creating a second discovery rule, we will only see the management servers that don’t have a discovery rule associated with them.
    • Also, we might want to plan ahead and strategically place the management servers or gateway servers so they can access the network devices that we would like to discover.

    Figure 2

    5. On the Discovery Method page in Figure 3, select the method to discover the network device. In this example we select Explicit discovery, then click Next.


    • Differences between explicit discovery and recursive discovery:
      • Explicit discovery – An explicit discovery rule will try to discover the devices that you explicitly specify in the wizard by IP address or FQDN. It will only monitor those devices that it can successfully access. The rule will try to access the device by using ICMP, SNMP, or both, depending on the configuration of the rule.
      • Recursive discovery – A recursive discovery rule will attempt to discover the devices that you explicitly specify in the wizard by IP address, as well as other network devices that are connected to the specified SNMP v1 or v2 device and that the specified device knows about through its Address Resolution Protocol (ARP) table, its IP address table, or the topology Management Information Base (MIB).
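    As a toy illustration of the difference, recursive discovery behaves like a breadth-first walk over each device's neighbor table. The Python sketch below uses made-up addresses and adjacency data, not a real SNMP query:

    ```python
    from collections import deque

    # Hypothetical ARP/topology data: each device -> neighbors it knows about.
    arp_tables = {
        "10.0.0.1": ["10.0.0.2", "10.0.0.3"],
        "10.0.0.2": ["10.0.0.1", "10.0.0.4"],
        "10.0.0.3": [],
        "10.0.0.4": [],
    }

    def recursive_discovery(seed: str) -> set:
        """Breadth-first walk over neighbor tables, as recursive discovery does."""
        found, queue = {seed}, deque([seed])
        while queue:
            device = queue.popleft()
            for neighbor in arp_tables.get(device, []):
                if neighbor not in found:
                    found.add(neighbor)
                    queue.append(neighbor)
        return found

    print(sorted(recursive_discovery("10.0.0.1")))
    ```

    With explicit discovery, only the seed address itself would be probed; the recursive walk is what pulls in the rest of the topology.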

    Figure 3

    6. On the Default Account page in Figure 4, click Create default Run As Account, as we need to create an account that will be used to discover the network devices.

    Figure 4

    7. On the Introduction page of the Create Run As Account Wizard in Figure 5, click Next.

    Figure 5

    8. On the General Properties page of the Create Run As Account Wizard in Figure 6, enter the display name of the Run As account and click Next.

    Figure 6

    9. On the Credentials page of the Create Run As Account Wizard in Figure 7, enter the SNMP community string and click Create.

    SNMP Community Strings

    We can configure read-only (RO) and read-write (RW) SNMP community strings. With an RO community string we have read access to the network device. Operations Manager 2012 needs only an RO SNMP community string to access the device, so it should be easy to convince the network guys ;-)
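    To make the RO/RW distinction concrete, here is a toy Python model; the community strings and access map are made up, and this is not a real SNMP implementation:

    ```python
    def snmp_allowed(operation: str, community: str, config: dict) -> bool:
        """Toy check: an RO community may only read; RW may read and write."""
        access = config.get(community)
        if access == "RW":
            return operation in ("get", "set")
        if access == "RO":
            return operation == "get"
        return False  # unknown community string: no access at all

    # Hypothetical device configuration.
    config = {"public": "RO", "private": "RW"}

    print(snmp_allowed("get", "public", config))   # True  - monitoring reads
    print(snmp_allowed("set", "public", config))   # False - RO cannot write
    ```

    Since Operations Manager only issues reads for monitoring, handing it the RO string grants everything it needs while keeping write access off the table.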

    Figure 7

    10. On the Default Account page in Figure 8, select the newly created Run As account and click Next.

    Figure 8

    11. On the Devices Page, click on Add Button

    Figure 9

    12. In the Add a device window in Figure 10, enter the IP address or name of the device we want to monitor; select the access mode as ICMP and SNMP (you can also select ICMP only or SNMP only); select the SNMP version as v1 or v2; select the created Run As account; and then click OK.


    • We use ICMP only in the scenario where we need to know the availability of the gateway router from the ISP to verify if the interface is up or down.
    • We use SNMP only in the scenario where we want to monitor a Firewall on which ICMP is blocked.
    • If we specify that a device uses both ICMP and SNMP, Operations Manager must be able to contact the device by using both methods or discovery will fail.
    • If you specify ICMP as the only protocol to use, discovery is limited to the specified device and monitoring is limited to whether the device is online or offline.
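    The rules in the bullets above can be summarized as a small decision function. This Python sketch only models the logic described here; it performs no real probing:

    ```python
    def device_discovered(mode: str, icmp_ok: bool, snmp_ok: bool) -> bool:
        """Model how the configured access mode affects discovery (illustrative)."""
        if mode == "ICMP":
            return icmp_ok              # availability-only monitoring
        if mode == "SNMP":
            return snmp_ok              # e.g. a firewall that blocks ICMP
        if mode == "ICMP and SNMP":
            return icmp_ok and snmp_ok  # both must succeed or discovery fails
        raise ValueError(f"unknown access mode: {mode}")

    # A device that answers ping but whose SNMP agent is unreachable:
    print(device_discovered("ICMP and SNMP", icmp_ok=True, snmp_ok=False))  # False
    print(device_discovered("ICMP", icmp_ok=True, snmp_ok=False))           # True
    ```

    This is why a device that only answers ping will fail discovery under "ICMP and SNMP" but succeed under "ICMP only" (with monitoring limited to online/offline).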

    Figure 10

    13. Click Next on the Devices page, as in Figure 11.

    Figure 11

    14. On the Schedule Discovery page in Figure 12, select the discovery schedule and click Next.


    You may also select to run the discovery manually.

    Figure 12

    15. Click Create on the Summary page

    Figure 13

    16. Click Yes in the warning box shown in Figure 14. We need to distribute the created Run As account to the management server for discovery, and to the management server resource pool (selected in the General properties, Figure 2) for monitoring.

    Figure 14

    17. Click Close on the Completion page.

    Figure 15

    18. Now, in the Administration workspace, go to the Discovery Rules node under the Network Management node. You will be able to see the discovery rule that was created. Click Run if you want to run the discovery manually. See Figure 16.

    Figure 16

    19. See Figure 17 for the Task Status window that appears when we run the discovery manually. The Success status indicates that the discovery request was submitted successfully, not that the devices have been discovered. Click Close.

    Figure 17

    20. We will see a Probing status on the discovery rule when it has actually found the device. See Figure 18.

    Figure 18

    21. The discovery rule starts processing the discovered components, as in Figure 19.

    Figure 19

    22. The status of the discovery rule will go to Pending, and the rule will run again as per the discovery schedule that we selected in the wizard. If we had selected the manual discovery option in the wizard, the status would go to Idle. See Figure 20.

    Figure 20

    23. Go to Network Devices under Network Management to see the discovered device. See Figure 21.

    Figure 21

    24. Double click the Network device to view the properties page and more information about that discovered device. See Figure 22.

    Figure 22

    B. Network Monitoring:

    We will now look at some of the views that are relevant to the network device that we discovered in the previous step.

    1. Go to Monitoring Workspace; double click the Network Monitoring Folder to see the Network views. See Figure 23.

    Figure 23

    2. Select the Network Devices view to see the Network Devices being monitored.

    Figure 24

    3. Click on the Health Explorer to see the subcomponents of the switch. See Figures 25 & 26.

    Figure 25

    Figure 26

    4. Click on the VLANs view to see the VLANs in which the switch is participating. See Figure 27

    Figure 27

    5. Click on the ICMP Ping Response performance view or the Processor Utilization performance view to see the corresponding performance graph. See Figures 28 & 29.

    Figure 28

    Figure 29

    C. Dashboard:

    1. To see the connections between the connected nodes and the network device, click on the Network Vicinity view. See figure 30.

    Figure 30

    2. Click on the show computers check box to see the connections. See figure 31.


    By default we can see connections that are one hop away from the network device.

    We can select at most 5 hops. In environments with a large number of network devices, selecting five hops can take a while for Operations Manager 2012 to render, and the resulting view might not be useful to you.

    Figure 31

    3. Now, coming back to the Network Devices view in the Monitoring workspace, click on the Network Node Dashboard. We will be able to view all the information related to the network device in just one window. See Figures 32, 33, 34 and 35.

    Figure 32

    Figure 33

    Figure 34

    Figure 35

    D. Reporting: [See Figure 36]

    Processor Utilization Report: It displays the processor utilization of a particular network device in a specified period of time.

    Memory Utilization Report: It displays the percentage of free memory on a particular network device in a specified period of time.

    Interface Traffic Volume Report: It displays the rate of inbound and outbound traffic that goes through the selected port or interface in a specified period of time.

    Interface Error Packet Analysis Report: It displays the percentage of error packets or discarded packets, both inbound and outbound, for the selected port or interface.

    Interface Packet Analysis Report: It displays the types of packets (unicast or non-unicast) that traverse the selected port or interface.

    Figure 36

    Additional Resources

  • System Center Operations Manager 2012 Installation Walkthrough

    (Post courtesy Rohit Kochher)

    System Center Operations Manager 2012 setup has changed significantly from Operations Manager 2007: installation has become simpler and easier.

    If you want to follow along on a test server, you can download Beta version of SCOM 2012 from here.

    Note: The Root Management Server (RMS) concept from Operations Manager 2007 R2 has been removed from Operations Manager 2012. All Operations Manager 2012 servers are management servers. However, there is an RMS emulator to support those management packs which target the RMS. Architecturally, servers in Operations Manager 2012 have a peer-to-peer relationship, not a parent-child relationship like Operations Manager 2007 R2.

    In this blog we will discuss the setup of Operations Manager 2012, with some screenshots of the installation wizard. Microsoft SQL Server 2008 SP1 or 2008 R2 should be installed prior to running SCOM 2012 setup. You can get more information on SCOM 2012 supported configurations here.

    Now, once we run setup.exe we will see the following screen:


    You can click Install to set up the management server, management console, web server, and reporting server. Under Optional installations you can choose to install the local agent, Audit Collection Services, a gateway management server, and ACS for Unix/Linux.

    Once you click Install you will get the screen to accept the license agreement. Once you accept it, you will see the screen below.


    You can select the components that you want to install. Clicking the down arrow in front of each role gives brief information about that role. There is no explicit option to install the operations database and data warehouse, as they are integrated. After selecting the features, you will get a screen for the location of the program files. The default location is C:\Program Files\System Center Operations Manager 2012.


    The next step will show you prerequisite failures (if any). You will get information for failures along with download links to install any missing prerequisites.

    Next you get a screen to input information about the management server. You can specify whether it is the first management server in a new management group or an additional management server in an existing management group.


    You can specify the name of the management group here. You will also get the screen to specify the operations database. We need to install both the operations database and the data warehouse in Operations Manager 2012; installing the data warehouse is mandatory in 2012 (a change compared with Operations Manager 2007). The data warehouse is needed for features such as dashboards. If this is a second management server, you can click the Add a management server to existing management group option.


    After specifying the required information about the operations database and clicking Next, you will get a similar screen for the Operations Manager data warehouse.

    The next screen allows you to configure Operations Manager service accounts.


    You can specify the required accounts on this screen and click Next to complete the setup. Setup automatically assigns the local Administrators group on the server to the Operations Manager admin role. Once you enter account information here, it is automatically verified in the background. If an account cannot be verified (or the password is incorrect), you will get a red warning, as the picture above illustrates.

    After this, you will get the option to participate in the Microsoft Customer Experience Improvement Program (CEIP) and Error reporting. Finally, you will also get the option for configuring Microsoft Updates.


    The last screen will provide you with an installation summary. Clicking Install will start the installation. Once it finishes, you are all set to monitor your infrastructure! Some of the great features in Operations Manager 2012 are the new dashboards, network monitoring, and application monitoring, which will be covered in future posts.

    You can check the deployment guide for Operations Manager 2012 here.

    System Center Operations Manager 2012 Beta resources

  • Sending e-mails from Microsoft Dynamics CRM

    (post courtesy Sarkis Derbedrossian)

    I often meet Microsoft CRM users who don’t know how sending e-mail works within Microsoft Dynamics CRM. Most users think that when they create an e-mail in CRM and hit the Send button, the e-mail is sent automatically. In fact, neither Outlook nor CRM can send e-mail without a mail system, e.g. an Exchange server. Below you will learn how e-mail within CRM works, with and without Outlook.

    E-mail in relation to CRM

    Once you've created an e-mail activity in MS CRM and clicked the Send button, the mail is handled differently depending on each user's settings in MS CRM.

    E-mail can be handled through Outlook or directly through CRM, but neither Outlook nor MS CRM performs the physical delivery of the e-mail. This is done by a mail server (Microsoft Exchange Server or another mail system).

    Sending an E-mail in MS CRM

    Do not jump to the conclusion that MS CRM can neither receive nor send e-mail; it simply needs an e-mail system to accomplish the task.

    When you send e-mail from MS CRM, it usually happens in the following steps:

    1. The user creates an e-mail activity and clicks on the send button. The e-mail is now saved with the user as the recipient
    2. The e-mail gets synchronized to the user’s Outlook
    3. The user's Outlook sends the e-mail to the mail server (Exchange)
    4. Exchange sends the e-mail through the internet

    What if the user does not have the Outlook client open? In that case the mail will not be sent until the user logs into Outlook. In some situations this is insufficient; fortunately, installing the E-mail Router can solve it.
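    The dependency on Outlook can be sketched with a toy Python model: mail created in CRM sits in a pending queue until the "Outlook" side synchronizes and hands it on. The class and method names here are made up for illustration:

    ```python
    class OutlookSync:
        """Toy model: CRM mail stays pending until Outlook runs and syncs."""

        def __init__(self):
            self.pending = []  # e-mail activities created in CRM
            self.sent = []     # e-mail actually handed to the mail server

        def create_crm_email(self, subject):
            # Clicking Send in CRM only saves the activity; nothing is delivered yet.
            self.pending.append(subject)

        def outlook_sync(self):
            # When Outlook opens, it picks up pending CRM mail and hands it to Exchange.
            self.sent.extend(self.pending)
            self.pending.clear()

    mailer = OutlookSync()
    mailer.create_crm_email("Quote for Contoso")
    print(mailer.pending)  # the mail waits here until the user opens Outlook
    mailer.outlook_sync()
    print(mailer.sent)
    ```

    The E-mail Router described next removes exactly this waiting step, since it runs as a service rather than inside the user's Outlook session.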

    Sending e-mail via the e-mail router

    If you want to be independent of Outlook, and thus be able to send e-mail directly from MS CRM without using Outlook, you can do so by installing and configuring an E-mail Router.

    The E-mail Router is free software that comes with MS CRM. It can be installed on any server that has access to both a mail server (Exchange Server or another POP3/SMTP mail system) and MS CRM.

    When you send e-mail from MS CRM using an E-mail Router, the following steps typically occur:

    1. The user creates an e-mail activity and clicks on the send button. The e-mail is now saved with the user as the recipient
    2. The e-mail is sent to the e-mail router
    3. The email router sends the e-mail to the mail server (Exchange)
    4. Exchange sends the e-mail through the internet
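    Conceptually, in step 3 the router builds the message and hands it to the mail server over SMTP. The Python sketch below constructs such a message with the standard library; the addresses are made up, and the actual SMTP hand-off is shown commented out so the example runs anywhere:

    ```python
    from email.message import EmailMessage

    # Hypothetical values; a real router takes these from CRM and its configuration.
    msg = EmailMessage()
    msg["From"] = "crmuser@fabricam.com"
    msg["To"] = "customer@example.com"
    msg["Subject"] = "Order confirmation"
    msg.set_content("Sent from CRM via the e-mail router; no Outlook involved.")

    # The router would now hand the message to the mail server, e.g.:
    # import smtplib
    # with smtplib.SMTP("mailserver.fabricam.com", 25) as smtp:
    #     smtp.send_message(msg)

    print(msg["Subject"])
    ```

    Because this hand-off happens server-side, delivery no longer depends on any user having Outlook open.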

    E-mail settings in CRM

    Depending on how you want your organization to send e-mails, remember to check the following settings:

    1. In CRM, Settings, Users
    2. Open the user form
    3. In the configuration section of the e-mail access, select the desired setting


    Configuring e-mail access

    It is possible to choose one of the following settings from the option list:

    None
    Outlook cannot be used for sending and receiving e-mails related to MS CRM

    Microsoft Dynamics CRM for Outlook
    Outlook is responsible for sending/receiving e-mail. MS CRM for Outlook must be installed and configured. E-mails are sent/received only when Outlook is active (open)

    E-mail router
    E-mail is sent and received by the MS CRM E-mail Router. If this option is selected, a dialog box allows you to enter credentials; check the box if you want to specify credentials

    Forwarded mailbox
    E-mail forwarded from another e-mail address. The e-mail Router is responsible for sending / receiving e-mails.

    More Information:

  • Capture a Windows® Image from a Reference Computer Using Capture Media—for IT Pros

    (This post courtesy of Simone Pace)

    In order to use System Center Configuration Manager 2007 to distribute the Windows 7 operating system to our managed clients, we need to provide the OS bits to the site server somehow. One of the methods we can use is capturing a Windows 7 WIM image from a previously prepared reference computer.

    System Center Configuration Manager 2007 offers standard and easy ways to deploy software in our IT Infrastructure. One of the most relevant features we can take advantage of is the highly customizable Operating System Deployment capability built in the product.

    The new WIM image format introduced with Windows Vista® and Windows 7 further simplifies OS distribution by being independent of the destination client’s hardware, so we can use a single image to target different computers and keep our image repository less complex and more easily managed. This post shows the steps we can follow to successfully capture a WIM image of Windows 7 Ultimate Edition x64 from a reference computer.

    Note: Further posts will follow that illustrate the specific tasks required to upgrade a Windows XP computer.

    The testing lab description, screenshots, and computer names used in this article refer to a virtual scenario running on a Hyper-V R2 host:

    • Domain: (single site)
    • All servers are Windows Server 2008 R2 Enterprise.
    • Server CON-001:
      • SCCM with almost all roles installed
      • SQL Server 2008
      • Windows Automated Installation Kit 2.0
      • WDS Transport Server role installed
    • Server CON-002:
      • Active Directory Domain Controller role installed
      • DNS Server role installed
      • DHCP Server role installed
    • SCCM Primary Site: C01
    • Reference client: a clean Windows 7 Ultimate x64 setup

    1. Create a Capture Media iso file.

    The ISO image we create in this section will be used to boot the reference machine and start the OS WIM image capture sequence.

    a. Log on CON-001 and open the Configuration Manager console.

    b. Go to Task Sequences node.

    c. Click on “Create Task Sequence Media” in the action panel.

    d. Select Capture Media and click Next on the welcome page.


    e. On the “Media file” page click Browse, select the folder where you are going to save the media ISO file, give it a name (for example MediaCapture), and click Next.

    f. On “Boot Image” click Browse, and select the boot image suitable for your reference computer.

    Note: Two boot images (x86 and x64) are automatically added when you install the WDS role on the system.

    g. On the Distribution Point page leave \\CON-001 (or select your preferred DP), and click Next.

    h. Review the summary and click Finish.

    i. The server starts building the iso image.


    j. Click Close to close the wizard.

    2. Prepare the reference computer.

    a. Log on to CON-Ref7Client with the local Administrator account

    b. Check the following requirements

    i. The computer must be a workgroup member.

    ii. The local Administrator password must be blank.

    iii. The local system policy must not require password complexity.

    iv. Apply the latest Service Pack and updates.

    v. Install the required applications.

    3. Capture the image using the Capture Media.

    a. Mount the MediaCapture.iso you created in Step 1 in the virtual DVD drive of the reference PC (if it is a VM), or

    b. Burn the MediaCapture.iso on a DVD and insert it in the computer.

    c. Boot the reference computer normally.

    d. Start the autoplay DVD and launch the Capture Image Wizard.


    e. Click Next.

    f. Set the path where you want to save the WIM file, give the image a name, and enter the appropriate credentials to access and write to that path.

    g. Click Next.

    h. Fill in the required data in the Image Information window.


    i. View the summary and launch the capture by clicking Finish.


    j. The program will start executing the sysprep phase.


    k. After sysprep, the computer will restart in WinPE to start the capture.


    l. (Reboot).


    m. Computer restarts in WinPE and starts the Capture.


    n. Capturing first Partition (1-2)


    o. And capturing second partition (2-2).


    Note: The number of partitions captured depends on the reference PC’s disk partitions. In the case shown, the VM had a 100 MB partition for BitLocker® capability (Partition 1 of 2).

    p. When finished, press OK to quit and restart.


    q. On the Server we can see the captured image file.


    4. Add the file to the image repository in SCCM 2007.

    a. Share a folder and move the image file there (for example \\ServerName\Images).

    b. Open the SCCM console, navigate to Site Database > Computer Management > Operating System Deployment > Operating System Images.

    c. Import the image by clicking Add Operating System Image in the task panel.

    d. Type or browse to the network path of the image you want to import, and click Next.


    e. Fill in the required information, then click Next.


    f. Review the summary and complete the wizard.



    5. Distribute the image to Distribution Point.

    a. In the SCCM console, navigate to the image you uploaded in step 4 (Site Database > Computer Management > Operating System Deployment > Operating System Images) and select it.

    b. Click Manage Distribution Points in the action panel.


    c. Click Next on the wizard starting page.

    d. As the DP doesn’t have the image deployed yet, leave the default selection (copy) and click Next.

    e. Select the DPs you want to deploy the image to, and include their PXE DPs’ hidden shares.

    f. Click Next and Next again in the Completion page.



    g. Check the copy progress in the Package Status folder until you see the status Installed.


    h. You are now ready to distribute the Windows 7 Ultimate x64 image to client computers, either by upgrading existing machines or installing new ones.

  • WSUS not configured error during Configuration Manager 2012 Software Update Point Installation

    (Post courtesy Anil Malekani)

    Recently I tried configuring Software Update Management in Configuration Manager 2012. After installing WSUS on the Configuration Manager 2012 box, I tried to install Software Update Point as a site role.


    The Software Update Point role installed successfully, as per the SUPSetup.log file (under C:\Program Files\Microsoft Configuration Manager\Logs).

    However, my updates still did not appear in the console. After checking the site component status for SMS_WSUS_SYNC_MANAGER and SMS_WSUS_CONFIGURATION_MANAGER, I noticed errors as below.

    SMS_WSUS_SYNC_MANAGER: Message ID 6600




    I checked WCM.log (under C:\Program Files\Microsoft Configuration Manager\Logs) and found the following proxy error:


    SCF change notification triggered.        SMS_WSUS_CONFIGURATION_MANAGER        1/1/1601 12:00:00 AM        5408 (0x1520)

    This SCCM2012.CORP80.COM system is the Top Site where WSUS Server is configured to Sync from Microsoft Update (WU/MU) OR do not Sync.        SMS_WSUS_CONFIGURATION_MANAGER        1/1/1601 12:00:00 AM        5408 (0x1520)

    Found WSUS Admin dll of assembly version Microsoft.UpdateServices.Administration, Version=3.0.6000.273, Major Version = 0x30000, Minor Version = 0x17700111        SMS_WSUS_CONFIGURATION_MANAGER        1/1/1601 12:00:00 AM        5408 (0x1520)

    Found WSUS Admin dll of assembly version Microsoft.UpdateServices.Administration, Version=3.1.6001.1, Major Version = 0x30001, Minor Version = 0x17710001        SMS_WSUS_CONFIGURATION_MANAGER        1/1/1601 12:00:00 AM        5408 (0x1520)

    The installed WSUS build has the valid and supported WSUS Administration DLL assembly version (3.1.7600.226)        SMS_WSUS_CONFIGURATION_MANAGER        1/1/1601 12:00:00 AM        5408 (0x1520)

    System.Net.WebException: The request failed with HTTP status 502: Proxy Error ( The host was not found. ).~~ at Microsoft.UpdateServices.Administration.AdminProxy.CreateUpdateServer(Object[] args)~~ at Microsoft.UpdateServices.Administration.AdminProxy.GetUpdateServer(String serverName, Boolean useSecureConnection, Int32 portNumber)~~ at Microsoft.SystemsManagementServer.WSUS.WSUSServer.ConnectToWSUSServer(String ServerName, Boolean UseSSL, Int32 PortNumber)        SMS_WSUS_CONFIGURATION_MANAGER        1/1/1601 12:00:00 AM        5408 (0x1520)

    Remote configuration failed on WSUS Server.        SMS_WSUS_CONFIGURATION_MANAGER        1/1/1601 12:00:00 AM        5408 (0x1520)

    STATMSG: ID=6600 SEV=E LEV=M SOURCE="SMS Server" COMP="SMS_WSUS_CONFIGURATION_MANAGER" SITE=CM1 PID=2424 TID=5408 GMTDATE=Fri Oct 14 00:20:03.092 2011 ISTR0="" ISTR1="" ISTR2="" ISTR3="" ISTR4="" ISTR5="" ISTR6="" ISTR7="" ISTR8="" ISTR9="" NUMATTRS=0        SMS_WSUS_CONFIGURATION_MANAGER        1/1/1601 12:00:00 AM        5408 (0x1520)

    Waiting for changes for 46 minutes        SMS_WSUS_CONFIGURATION_MANAGER        1/1/1601 12:00:00 AM        5408 (0x1520)


    I validated that the proxy had been configured correctly, and that my browser contained the same settings.
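    When digging through WCM.log for this kind of failure, a quick filter helps. This hypothetical Python snippet scans saved log text for the tell-tale strings from the excerpt above:

    ```python
    def proxy_errors(log_text: str):
        """Return WCM.log lines that mention WebException/proxy failures."""
        keywords = ("WebException", "Proxy Error", "Remote configuration failed")
        return [line for line in log_text.splitlines()
                if any(k in line for k in keywords)]

    # Abbreviated sample lines modeled on the WCM.log excerpt above.
    sample = "\n".join([
        "SCF change notification triggered.",
        "System.Net.WebException: The request failed with HTTP status 502: "
        "Proxy Error ( The host was not found. ).",
        "Remote configuration failed on WSUS Server.",
    ])
    print(len(proxy_errors(sample)))  # 2
    ```

    Point it at the full log file contents to jump straight to the failing connection attempts instead of scrolling past the routine entries.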

    Resolution: After spending some time on this, I found that Configuration Manager 2012 uses the system account’s proxy settings, which were set to Automatically detect settings.

    1. Using the excellent PsExec utility, I opened a command prompt under the system account (using the –s parameter).
    2. Within this command prompt running as system, I launched Internet Explorer and removed proxy settings.
    3. Finally, updates started appearing in the console.


  • Automatically Collect Windows Azure Storage Analytic Logs

    This video shows you how to automatically collect Windows Azure Storage Analytics logs. Storage Analytics is key to diagnosing issues with blob, table, and queue storage. You can run the Windows Azure Storage Analytics Diagnostics package (.DiagCab) to automatically collect the logs previously generated.

    Before using this package, you will need to enable Windows Azure Storage Analytics.


    Download .DiagCab

    How to use .DiagCab

    Storage Analytics Video

    Storage Analytics Documentation

    Storage Analytics Billing

    Storage Analytics Log Format

    Storage Analytics Logging - How to Enable and Where to Find the logs.


    This package will only work on Windows 7 or later, or Windows Server 2008 R2 or later. You will need Microsoft Excel installed on the machine where you run the package in order to see the charts.

  • How to fix the ACS Error ACS50008 in Windows Azure

    This video shows how to fix the Error ACS50008 in the context of Windows Azure Access Control Service.

    This error is usually displayed as an inner message like this:

    An error occurred while processing your request.
    HTTP Error Code: 401
    Message: ACS20001: An error occurred while processing a WS-Federation sign-in response.
    Inner Message: ACS50008: SAML token is invalid.
    Trace ID: 903f515f-3196-40c9-a334-71277700aca6
    Timestamp: 2014-03-02 10:16:16Z


    How to fix Error ACS50008

    ACS Error Codes

    ACS Documentation

  • Configuring SharePoint 2013 Search with PowerShell

    Post courtesy Partner Solution Consultant Priyo Lahiri:

    I wrote this script for a demo during our Practice Accelerator for SharePoint 2013. If you attended the session, you have already seen it in action. If not, here is the script for you to try out in your lab.

    Disclaimer: before we proceed, you should know that this script has been tested in my lab to work in a very specific scenario. If you wish to use it, your environment should look exactly like mine. In other words, we are not responsible if this script ruins your farm :-)

    Take note of the environment first:

    SharePoint Farm:

    • 2 web front end servers
    • 2 application servers
    • 1 SQL Server

    Following are the services running on the server:


    This script will provision Search on all the servers and configure our WFEs to host Query Processing Role.

    Very important: if your environment doesn’t look like this, STOP here.

    If you already have Search Configured, which would be your default setting if you have run the Farm Configuration Wizard, don’t use this script.

    Follow this TechNet guidance to understand more:

    As you can see from the above screenshot, my environment doesn’t even have the Search Service started, so we are good to go with this script. It’s OK to modify the script for a 3-server or 2-server environment, as long as Search has never been configured on your farm, or was configured and has since been removed.

    Let’s walk through the script piece by piece:

    Section 1: set up the environment and gather the inputs: server names, loading the SharePoint snap-in, the managed account to use, etc.

    Set-ExecutionPolicy unrestricted
    # Start Loading SharePoint Snap-in
    $snapin = (Get-PSSnapin -name Microsoft.SharePoint.PowerShell -EA SilentlyContinue)
    IF ($snapin -ne $null){
    write-host -f Green "SharePoint Snap-in is loaded... No Action taken"}
    ELSE  {
    write-host -f Yellow "SharePoint Snap-in not found... Loading now"
    Add-PSSnapin Microsoft.SharePoint.PowerShell
    write-host -f Green "SharePoint Snap-in is now loaded"}
    # END Loading SharePoint Snapin
    $hostA = Get-SPEnterpriseSearchServiceInstance -Identity "SP13App"
    $hostB = Get-SPEnterpriseSearchServiceInstance -Identity "SP13-App2"
    $hostC = Get-SPEnterpriseSearchServiceInstance -Identity "SP13WFE01"
    $hostD = Get-SPEnterpriseSearchServiceInstance -Identity "SP13WFE02"
    $searchName = "Fabricam Search Service"
    $searchDB = "SP_Services_Search_DB"
    $searchAcct = "fabricam\spService"
    $searchAcctCred = convertto-securestring "pass@word1" -asplaintext -force
    $searchManagedAcct = Get-SPManagedAccount | Where {$_.username-eq 'fabricam\spService'}
    $searchAppPoolName = "Search Services Application Pool"
    IF((Get-SPServiceApplicationPool | Where {$_.Name -eq $searchAppPoolName}).Name -ne $searchAppPoolName){
    $searchAppPool = New-SPServiceApplicationPool -Name $searchAppPoolName -Account $searchManagedAcct} 

    Section 2: starting the Search Service on all servers. You will notice there is some error handling in this script: for example, instead of just firing off the commands, I actually wait for the Search Service to respond before moving on to the next step. I have always found this approach very stable.

    ## Start Search Service Instances
    Write-Host "Starting Search Service Instances..."
    # Server 1
    IF((Get-SPEnterpriseSearchServiceInstance -Identity $hostA).Status -eq 'Disabled'){
    Start-SPEnterpriseSearchServiceInstance -Identity $hostA 
    Write-Host "Starting Search Service Instance on" $hostA.Server.Name
    Do { Start-Sleep 5;
    Write-host -NoNewline "."  } 
    While ((Get-SPEnterpriseSearchServiceInstance -Identity $hostA).Status -ne 'Online')
    Write-Host -ForegroundColor Green "Search Service Instance Started on" $hostA.Server.Name
    } ELSE { Write-Host -f Green "Search Service Instance is already running on" $hostA.Server.Name  }
    #Server 2
    IF((Get-SPEnterpriseSearchServiceInstance -Identity $hostB).Status -eq 'Disabled'){
    Start-SPEnterpriseSearchServiceInstance -Identity $hostB 
    Write-Host "Starting Search Service Instance on" $hostB.Server.Name
    Do { Start-Sleep 5;
    Write-host -NoNewline "."  } 
    While ((Get-SPEnterpriseSearchServiceInstance -Identity $hostB).Status -ne 'Online')
    Write-Host -ForegroundColor Green "Search Service Instance Started on" $hostB.Server.Name
    } ELSE { Write-Host -f Green "Search Service Instance is already running on" $hostB.Server.Name  }
    #Server 3
    IF((Get-SPEnterpriseSearchServiceInstance -Identity $hostC).Status -eq 'Disabled'){
    Start-SPEnterpriseSearchServiceInstance -Identity $hostC 
    Write-Host "Starting Search Service Instance on" $hostC.Server.Name
    Do { Start-Sleep 5;
    Write-host -NoNewline "."  } 
    While ((Get-SPEnterpriseSearchServiceInstance -Identity $hostC).Status -ne 'Online')
    Write-Host -ForegroundColor Green "Search Service Instance Started on" $hostC.Server.Name
    } ELSE { Write-Host -f Green "Search Service Instance is already running on" $hostC.Server.Name  }
    #Server 4
    IF((Get-SPEnterpriseSearchServiceInstance -Identity $hostD).Status -eq 'Disabled'){
    Start-SPEnterpriseSearchServiceInstance -Identity $hostD 
    Write-Host "Starting Search Service Instance on" $hostD.Server.Name
    Do { Start-Sleep 5;
    Write-host -NoNewline "."  } 
    While ((Get-SPEnterpriseSearchServiceInstance -Identity $hostD).Status -ne 'Online')
    Write-Host -ForegroundColor Green "Search Service Instance Started on" $hostD.Server.Name
    } ELSE { Write-Host -f Green "Search Service Instance is already running on" $hostD.Server.Name  }

    Section 3: starting the Search Query and Site Settings Service Instance on the application servers. If you don’t wait for a response from the Search Service on the application servers in the step above and try to run this step, it is very likely to fail.

    Additional Reference:

    This service provides the creation of a SearchServiceApplication or a SearchServiceApplicationProxy to a Search service application. This allows the caller to establish effective load balancing of Search queries across query servers.

    ## Start Query and Site Settings Service Instance
    Write-Host "
    Starting Search Query and Site Settings Service Instance on" $hostA.server.Name "and" $hostB.server.Name
    Start-SPEnterpriseSearchQueryAndSiteSettingsServiceInstance $hostA.server.Name
    Do { Start-Sleep 3;
    Write-host -NoNewline "."  } 
    While ((Get-SPEnterpriseSearchQueryAndSiteSettingsServiceInstance | Where {$_.Server.Name -eq $hostA.server.Name}).status -ne 'Online')
    Write-Host -ForegroundColor Green "
        Query and Site Settings Service Instance Started on" $hostA.Server.Name
    Start-SPEnterpriseSearchQueryAndSiteSettingsServiceInstance $hostB.server.Name
    Do { Start-Sleep 3;
    Write-host -NoNewline "."  } 
    While ((Get-SPEnterpriseSearchQueryAndSiteSettingsServiceInstance | Where {$_.Server.Name -eq $hostB.server.Name}).status -ne 'Online')
    Write-Host -ForegroundColor Green "
        Query and Site Settings Service Instance Started on" $hostB.Server.Name 


    Section 4: Create the Search Service Application

    ## Create Search Service Application
    Write-Host "
    Creating Search Service Application..."
    $searchAppPool = Get-SPServiceApplicationPool -Identity "Search Services Application Pool"
    IF ((Get-SPEnterpriseSearchServiceApplication).Status -ne 'Online'){
    Write-Host " Provisioning. Please wait..."
    $searchApp = New-SPEnterpriseSearchServiceApplication -Name $searchName -ApplicationPool $searchAppPool -AdminApplicationPool $searchAppPool -DatabaseName $searchDB
    DO {start-sleep 2;
    write-host -nonewline "." } While ( (Get-SPEnterpriseSearchServiceApplication).status -ne 'Online')
    Write-Host -f green " 
        Provisioned Search Service Application"
    } ELSE {  write-host -f green "Search Service Application already provisioned."
    $searchApp = Get-SPEnterpriseSearchServiceApplication }

    Section 5: creating the Admin Component. The initial Search topology is created at this point as well.

    ## Set Search Admin Component
    Write-Host "Set Search Admin Component..."
    $AdminComponent = $searchApp | Get-SPEnterpriseSearchAdministrationComponent | Set-SPEnterpriseSearchAdministrationComponent -SearchServiceInstance $hostA 

    Section 6: get the Initial Search Topology and Clone it

    ## Get Initial Search Topology
    Write-Host "Get Initial Search Topology..."
    $initialTopology = Get-SPEnterpriseSearchTopology -SearchApplication $searchApp
    ## Create Clone Search Topology
    Write-Host "Creating Clone Search Topology..."
    $cloneTopology = New-SPEnterpriseSearchTopology -SearchApplication $searchApp -Clone -SearchTopology $initialTopology 


    Section 7: here is where we define where the different Search components will live. We want to create a redundant topology for load balancing and high availability. As long as the same services are running on multiple servers, SharePoint will use its internal load balancer.

    We will deploy Admin Component, Crawl Component, Analytics Processing Component, Content Processing Component and Index Partition 0 on both the Application Servers and we will run Query Processing Component on both the Web Front End Servers.

    If you are new to the components mentioned above, refer to this TechNet article.

    ## Host-A Components
    Write-Host "Creating Host A Components (Admin, Crawl, Analytics, Content Processing, Index Partition)..."
    $AdminTopology = New-SPEnterpriseSearchAdminComponent -SearchServiceInstance $hostA -SearchTopology $cloneTopology
    $CrawlTopology = New-SPEnterpriseSearchCrawlComponent -SearchServiceInstance $hostA -SearchTopology $cloneTopology
    $AnalyticsTopology = New-SPEnterpriseSearchAnalyticsProcessingComponent -SearchServiceInstance $hostA -SearchTopology $cloneTopology
    $ContentProcessingTopology = New-SPEnterpriseSearchContentProcessingComponent -SearchServiceInstance $hostA -SearchTopology $cloneTopology
    $IndexTopology = New-SPEnterpriseSearchIndexComponent -SearchServiceInstance $hostA -SearchTopology $cloneTopology -IndexPartition 0
    ## Host-B Components
    Write-Host "Creating Host B Components (Admin, Crawl, Analytics, Content Processing, Index Partition)..."
    $AdminTopology = New-SPEnterpriseSearchAdminComponent -SearchServiceInstance $hostB -SearchTopology $cloneTopology
    $CrawlTopology = New-SPEnterpriseSearchCrawlComponent -SearchServiceInstance $hostB -SearchTopology $cloneTopology
    $AnalyticsTopology = New-SPEnterpriseSearchAnalyticsProcessingComponent -SearchServiceInstance $hostB -SearchTopology $cloneTopology
    $ContentProcessingTopology = New-SPEnterpriseSearchContentProcessingComponent -SearchServiceInstance $hostB -SearchTopology $cloneTopology
    $IndexTopology = New-SPEnterpriseSearchIndexComponent -SearchServiceInstance $hostB -SearchTopology $cloneTopology -IndexPartition 0
    ## Host-C Components
    Write-Host "Creating Host C Components (Query)..."
    $QueryTopology = New-SPEnterpriseSearchQueryProcessingComponent -SearchServiceInstance $hostC -SearchTopology $cloneTopology
    ## Host-D Components
    Write-Host "Creating Host D Components (Query)..."
    $QueryTopology = New-SPEnterpriseSearchQueryProcessingComponent -SearchServiceInstance $hostD -SearchTopology $cloneTopology

    Section 8: next we activate the new topology we just created and remove the initial Search topology, which should now be in the “Inactive” state.

    ## Activate Clone Search Topology
    Write-Host "Activating Clone Search Topology...Please wait. This will take some time"
    Set-SPEnterpriseSearchTopology -Identity $cloneTopology
    ## Remove Initial Search Topology
    Write-Host "Removing Initial Search Topology..."
    $initialTopology = Get-SPEnterpriseSearchTopology -SearchApplication $searchApp | where {($_.State) -eq "Inactive"}
    Remove-SPEnterpriseSearchTopology -Identity $initialTopology -Confirm:$false 

    Section 9: Last step is to create the Search Service Application Proxy

    ## Create Search Service Application Proxy
    Write-Host "Creating Search Service Application Proxy..."
    $searchAppProxy = New-SPEnterpriseSearchServiceApplicationProxy -Name "$searchName Proxy" -SearchApplication $searchApp

    And here is the end result:

    Here are the services running on all servers:


    And here is the topology:




    Note: The referenced PowerShell script is attached to this post.  Use at your own risk, modify to meet the parameters of your own environment, and test before using in production!

  • Microsoft Virtual Academy Presents: "Building Blocks" a pre //build event

    What do @geektrainer and @bitchwhocodes have in common?

    They both have awesome Twitter handles! And they’re both sharing their experience and insights in our upcoming “Building Blocks” Jump Start series. These entertaining and inspiring technology experts are teaming up with other seasoned pros, including @codefoster and @mlunes90, for three lively days of deep dives to help you gear up for next month’s //build conference. Whether you’re a web, app, C#, .NET, or JavaScript developer, you’re sure to stretch your dev muscles before the //build workout.

    We start the series on March 26 with “Initialize(),” which focuses on various paradigms comparing JavaScript and C# side by side on the Microsoft platform. We continue on March 27 with “Construct(),” where you learn how to create great layout and style with XAML and HTML5. And we wrap on March 28 with “Extend(),” a session on successful mobile app and smart device strategies.

    Sign up for one, two, or all three sessions, and be sure to bring questions for the Q&A!

    Register now! Building Blocks series:

    Initialize(), Wednesday, March 26, 9:00am‒5pm PDT

    Construct(), Thursday, March 27, 9:00am‒5pm PDT

    Extend(), Friday, March 28, 9:00am‒5pm PDT
    Where: Live, online virtual classroom
    Cost: Free!

  • Integrating Remote Desktop Services with SharePoint Server 2010

    Post courtesy of Yashkumar Tolia

    One of the first things that an IT administrator (and even an end user) dreams of is “integration”. For an administrator, the draw is integrating multiple technologies in the environment, consolidating various products, managing everything from a single place, and provisioning data in a secure manner. For an end user, a single point of access, anytime consumption of data, and single sign-on come to mind.

    Remote Desktop Services

    One of the prime technologies used by IT administrators for virtualization is Remote Desktop Services, previously known as Terminal Services. The TechNet article, Remote Desktop Services Overview, is a great starting point for understanding this technology. As that article mentions, the major advantages of adopting this technology are:

    • Application deployment: You can quickly deploy Windows-based programs to computing devices across an enterprise. Remote Desktop Services is especially useful when you have programs that are frequently updated, infrequently used, or difficult to manage.
    • Application consolidation: Programs are installed and run from an RD Session Host server, eliminating the need for updating programs on client computers. This also reduces the amount of network bandwidth that is required to access programs.
    • Remote access: Users can access programs that are running on an RD Session Host server from devices such as home computers, kiosks, low-powered hardware, and operating systems other than Windows.
    • Branch office access: Remote Desktop Services provides better program performance for branch office workers who need access to centralized data stores. Data-intensive programs sometimes do not have client/server protocols that are optimized for low-speed connections. Programs of this kind frequently perform better over a Remote Desktop Services connection than over a typical wide area network.

    SharePoint Server 2010

    SharePoint Server 2010 is no longer viewed as just a content sharing and access product, but as a business collaboration platform for the enterprise and the internet. With features like content management, workflows, and search, SharePoint Server 2010 helps you connect with colleagues and information; manage and govern enterprise content; balance user experience with policy and process; and help users find content, information, and people. A great guide to understanding SharePoint Server 2010 is found on TechNet: SharePoint Server 2010.

    Integration of Remote Desktop Services with SharePoint Server 2010

    Integration of these two technologies opens up great avenues for consolidation. The Remote Desktop Web Access server role can be taken over by the SharePoint Server 2010 deployment already present in the environment. This provides possibilities such as:

    • Single Website: as the SharePoint website is already present, it can be leveraged to publish the RemoteApps hosted on the Remote Desktop Session Host server. This reduces the number of URLs the end user has to remember to access company data and applications.
    • Customization: The SharePoint Website, unlike the Remote Desktop Web Access default web portal, can be customized as per company needs. This gives the company freedom to brand, color code, provide additional links or shortcuts as required.
    • Accessing content through RemoteApps: if content in SharePoint needs a particular RemoteApp, you can make a connection to the RemoteApp and then open the document in it. This also gives you the ability to access the data over the internet without having to worry about security.

    Steps to integrate Remote Desktop Services with SharePoint Server 2010

    The integration of SharePoint Server 2010 (hereafter SPS) with Remote Desktop Services (hereafter RDS) is divided into 5 steps:

    1. Installation of RDS Session host server role
    2. Installation of SPS
    3. Installation of RDS Web Access server role on SPS
    4. Configuration of the Terminal Services Web Part
    5. Publishing of RemoteApps

    1. Installation of RDS Session host server

    Perform these steps on the RDS Session Host server:

    a. Go to Server Manager -> Roles -> Add Roles. This will take you to the Add Roles Wizard. Click Next.


    Figure 1: Add Roles Wizard

    b. Select Remote Desktop Services. Click Next.


    Figure 2: Role Selection

    c. Click Next.


    Figure 3: Introduction to Remote Desktop Services

    d. Select Remote Desktop Services Session Host role. Click Next.


    Figure 4: Role Service Selection

    e. Click Next.


    Figure 5: Uninstall and Reinstall Application for compatibility warning

    f. Select Require Network Level Authentication. Click Next.


    Figure 6: Network Level Authentication Selection

    g. Select the appropriate licensing scheme. Click Next.


    Figure 7: Licensing Mode Selection

    h. Select the appropriate users you want to give access to the RDSH server. Click Next.


    Figure 8: User Group Definition

    i. Select any of the features that you want to include in the Desktop Experience. Click Next.


    Figure 9: Enabling Desktop Experience

    j. Click Install. Reboot the server.


    Figure 10: Installation summary

    2. Installation of SPS 2010

    Perform these steps on the SPS server:

    a. Install SPS 2010.


    Figure 1: SharePoint Installation

    b. Select the Run the SharePoint Products Configuration Wizard now check box. Click Close.


    Figure 2: SharePoint Installation completion and Run Configuration Wizard

    c. Click Next.


    Figure 3: Configuration Wizard

    d. Click Yes to restart the services.


    Figure 4: Restarting of Services

    e. Go to the SPS website by typing the following URL: http://<servername>/, to check if the SharePoint site is working fine or not.


    Figure 5: SharePoint Website Home Page

    3. Installation of RDS Web Access server role on SPS

    Perform these steps on the SPS server:

    a. Go to Server Manager -> Roles -> Add Roles. This will take you to the Add Roles Wizard. Click Next.


    Figure 1: Add Roles Wizard

    b. Select Remote Desktop Services. Click Next.


    Figure 2: Selection of Role Services

    c. Click Next.


    Figure 3: Introduction to Remote Desktop Services

    d. Click on Add required role services.


    Figure 4: Add required Role Services

    e. Click on Next.


    Figure 5: Introduction to IIS

    f. Click Next.


    Figure 6: Add role services

    g. Click Finish to complete the installation.

    4. Configuration of the Terminal Services Web Part

    Perform these steps on SPS server:

    a. Go to %SystemDrive%\inetpub\wwwroot\VirtualDirectories\80. Right-click Web.config and edit it in WordPad.


    Figure 1: Editing web.config file

    b. In the <SafeControls> section, add the following line under the other SafeControl Assembly entries (as a single line):

    <SafeControl Assembly="TSPortalWebPart, Version=, Culture=neutral, PublicKeyToken=31bf3856ad364e35" Namespace="Microsoft.TerminalServices.Publishing.Portal" TypeName="*" Safe="True" AllowRemoteDesigner="True" />


    Figure 2: Adding SafeControl Assembly

    c. Open an elevated command prompt. To do this, click Start, right-click Command Prompt, and then click Run as administrator:

    · Type mkdir "%SystemDrive%\Program Files\Common Files\Microsoft Shared\Web Server Extensions\wpresources\TSPortalWebPart\\images" and then press ENTER.

    · Type mkdir "%SystemDrive%\Program Files\Common Files\Microsoft Shared\Web Server Extensions\wpresources\TSPortalWebPart\\rdp" and then press ENTER.

    · Type cacls "%SystemDrive%\Program Files\Common Files\Microsoft Shared\Web Server Extensions\wpresources\TSPortalWebPart\\images" /T /E /P NetworkService:F and then press ENTER.

    · Type cacls "%SystemDrive%\Program Files\Common Files\Microsoft Shared\Web Server Extensions\wpresources\TSPortalWebPart\\rdp" /T /E /P NetworkService:F and then press ENTER.


    Figure 3: Adding files to the Web Parts


    d. Go to the SharePoint website as an administrator. In the upper-right corner, on the Site Actions tab, click Site Settings.


    Figure 4: Editing the Site Settings


    e. Under Galleries, click Web Parts.


    Figure 5: Adding Web Part to the Gallery


    f. Under the Web Part Gallery heading, click New.


    Figure 6: Adding the TSPortalWebPart to the list


    g. Select the check box next to Microsoft.TerminalServices.Publishing.Portal.TSPortalWebPart, and then click Populate Gallery.


    Figure 7: Adding the new Web Part

    5. Publishing of RemoteApps

    Perform these steps on the SPS server:

    a. On the Site Actions tab, click Edit Page.


    Figure 1: Edit Web Page

    b. Choose the location on the website where you want to add the Web Part, and then click Add a Web Part.


    Figure 2: Adding the Web Part to the site

    c. In the Add Web Parts -- Webpage Dialog dialog box, under the All Web Parts heading, select the TSPortalWebPart check box, and then click Add. The TSPortalWebPart Web Part will appear on the page.


    Figure 3: Select the TSPortal Web Part

    d. To configure the Web Part, click edit in the upper-right corner of the Web Part, and then click Modify Shared Web Part.


    Figure 4: Editing the Web Part

    e. In the RD Session Host server(s) or RemoteApp and Desktop Connection Management server name box, type <RDSservername> and then click OK.


    Figure 5: Adding the RDS Session Host Server

    f. Click Save icon in the top left corner of the website.


    Figure 6: Saving the Web Part to the site


    g. Test the application by running a RemoteApp.


    Figure 7: Selecting the RemoteApp


    Figure 8: Connecting to the RemoteApp


    Figure 9: Providing credentials


    Figure 10: Using the RemoteApp

    In this way, you can leverage your existing environment to provide a single website where users log in and get their RemoteApps. Combined with SharePoint’s search and content sharing capabilities, this gives users a seamless experience.

    Additional Information

    To learn more, check out the following articles:

    Customizing Remote Desktop Web Access by Using Windows SharePoint Services Step-by-Step Guide

  • Chalk Talk for Developers: How to use and debug Background Transfer API issues

    In this talk we will take a look at how to use the Background Transfer API and debug issues with it in your Windows Store app.  For a brief preview of the talk, check out this video:

    Please join us on Wednesday February 26th, 2014 from 10am-12pm PST.  We’ll be hosting a live chalk talk with technical experts from our developer support team.

    We’ll look at demos of real-world scenarios, how the technology works, best practices for implementation, and development troubleshooting tips. There will also be plenty of time for you to ask your questions.

    If you are interested, sign up today as space is limited.

    WHAT: How to use and debug Background Transfer API issues

    WHEN: February 26th, 2014. 10am-12pm PST

    WHERE: Online Meeting

    REGISTRATION: email with the subject “Feb 26th Chalk Talk”.

    COST: Free

  • Take advantage of your Office 365 Internal Use Rights

    In the past, only MPN partners in the Cloud Essentials and Cloud Accelerate programs had access to Internal Use Rights for Office 365.  Now, all partners with a Microsoft Action Pack Subscription (MAPS), as well as partners with a Silver or Gold competency, have access to free Internal Use Rights licenses for Office 365.  This gives you an opportunity to try out the service, so you can speak from experience when you discuss its benefits with your customers.  It also means that someone else takes care of running your servers, so you can spend more time working and less time patching and troubleshooting.

    I wanted to share a few resources to help get you started.  First, the page with all the information you need on your Internal Use Rights licenses, how to access them, how to earn more licenses, and how to activate your partner features is available at:

    In the following video, York Hutton walks through the Internal Use Rights (IUR) core benefits, discussing how they now give partners the power of choice to mix and match online services and on-premises software licenses. Microsoft partners can choose between work-alike solutions for productivity, demonstration, development, testing, and internal training purposes.

    In this video, York walks through the process of activating your IUR benefits, whether you are using them for the first time, or transitioning from a previous license grant:

    A few additional resources:
    KB2887467: Support Article: What are my internal use rights benefits?

    Office 365 partner features how-to guide (Learn about partner features available to help you sell to and manage your customers, including how to offer and use delegated administration, and how to send quotes and trials.)

    If you have program questions (how do I get my license, where is my key, how do I sign up for MAPS or renew my membership?) visit the Partner Membership Community

    If you have technical questions (why am I getting an error message when migrating my mailboxes? how do I resolve a DirSync error about an invalid attribute?), visit the Partner Online Services community.

    If you have a Silver or Gold competency, you have access to 20 or 50 hours, respectively, of advisory services consultation with a Partner Technical Consultant.  These consultants are a great resource for planning a deployment (even an internal one). Submit an advisory request via:

    All partners holding current internal-use software licenses obtained through a cloud program must transition to the new internal-use software license process and entitlements, available to Action Pack subscribers and competency partners, before June 30, 2014, or their internal-use software licenses will expire.
    Download the instructions to transition to the new system

  • Troubleshooting and Debugging Windows Azure Web Sites

    The Tech Support team for Windows Azure has put together an excellent series of short videos for those of you working with Windows Azure Web Sites.  The links are below.

    Diagnostics in Windows Azure Web Sites

    Remote Debugging in Windows Azure Web Sites

    Failed Request Tracing

    Application Logging and Crashes

    To be notified of future videos in the Azure Troubleshooting series, subscribe to the Channel 9 series.

  • SLA and Disaster Recovery Planning for Microsoft Business Intelligence

    (Post courtesy of a Partner Technical Consultant specializing in Data Analytics)

    Service Level Agreement Planning and Disaster Recovery Planning for the Microsoft Business Intelligence Stack with Microsoft SQL Server and Microsoft SharePoint Server

    For a fully implemented Microsoft Business Intelligence stack, which might comprise SQL Server, SSAS, SSIS, SSRS, SharePoint Server, MDS, and DQS, the question arises of how to ensure a consistent state across all of the applications in case of a total or partial failure, given that the different components may be subject to varying maintenance schedules.

    In the worst case, disaster recovery requires us to recover from a situation where the primary data center hosting the servers is unable to continue operation. Even smaller disruptions like power outages, data corruption, or accidental deletion of data can force us to restore data or configuration from our backups. It is well known that fault tolerance is achieved through redundancy, ideally not only at the data layer but also in every component of the network and all services (switches, servers, Active Directory, SMTP, and so on).

    In this blog post we focus on the Microsoft Business Intelligence stack, provide an overview of what you need to consider when defining Service Level Agreements, and show how to prepare for a fast resumption of operation after such an unwelcome event.

    Balancing cost and risk of downtime

    First, let's consider the two factors that determine the availability portion of a Service Level Agreement. The whitepaper "High Availability with SQL Server 2008 R2" explains it concisely:

    "The two main requirements around high-availability are commonly known as RTO and RPO. RTO stands for Recovery Time Objective and is the maximum allowable downtime when a failure occurs. RPO stands for Recovery Point Objective and is the maximum allowable data-loss when a failure occurs. Apart from specifying a number, it is also necessary to contextualize the number. For example, when specifying that a database must be available 99.99% of the time, is that 99.99% of 24x7 or is there an allowable maintenance window?"

    This means that both the RPO, i.e. the amount of data you are willing to lose, and the RTO, i.e. the duration of the outage, need to be determined individually, depending on your customer's specific needs. Their calculation follows an actuarial principle in that cost and risk need to be balanced. Do not forget that the RTO depends not only on how soon your services are back online; in certain circumstances it also encompasses the time needed to restore data to a given point in time from backups.

    Assuming 24x7x365 operation, the following figures apply, taken from "Create a high availability architecture and strategy for SharePoint 2013":

       Availability class      Availability measurement      Annual downtime

       Two nines               99%                           3.7 days
       Three nines             99.9%                         8.8 hours
       Four nines              99.99%                        53 minutes
       Five nines              99.999%                       5.3 minutes
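    The downtime figures in the table can be recomputed directly from the availability percentages. A quick sanity check (assuming 24x7x365 operation, as above; the table rounds the results):

```python
# Annual downtime implied by an availability percentage, assuming 24x7x365.
HOURS_PER_YEAR = 24 * 365  # 8,760 hours

def annual_downtime_hours(availability_pct: float) -> float:
    """Hours per year a service may be unavailable at the given availability."""
    return HOURS_PER_YEAR * (1 - availability_pct / 100)

print(f"99%     -> {annual_downtime_hours(99.0) / 24:.2f} days")      # 3.65 days   (~3.7)
print(f"99.9%   -> {annual_downtime_hours(99.9):.2f} hours")          # 8.76 hours  (~8.8)
print(f"99.99%  -> {annual_downtime_hours(99.99) * 60:.2f} minutes")  # 52.56 min   (~53)
print(f"99.999% -> {annual_downtime_hours(99.999) * 60:.2f} minutes") # 5.26 min    (~5.3)
```

    Note how each extra "nine" divides the allowable downtime by ten; this is why the jump from three to four nines is usually far more expensive than the jump from two to three.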

    So now we can appreciate what it means that Windows Azure offers a general SLA of 99.9% across services, and 99.95% for cloud services.

    And here is one more argument in favor of using Windows Azure as your secondary site and standby data center: if you back up your databases and transaction logs to Azure blob storage, and also transfer Hyper-V-based snapshots of your virtual machines there, you incur only the (cheap) storage cost, yet can turn the VMs on whenever you decide to bring them online and pay for them only while they are running. Windows Server 2012 Backup and Windows Azure Backup let you back up system state and files/folders to Windows Azure storage as well.

    Alternatively, for a repairable system, availability can be calculated from the expected time between failures and the expected time to repair, per the following formula:

       Availability = MTTF / (MTTF + MTTR)

    where MTTF = Mean Time To Failure and MTTR = Mean Time To Recover.
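    To make the formula concrete, here is a small illustration (the MTTF and MTTR values are invented for the example):

```python
def availability(mttf: float, mttr: float) -> float:
    """Availability = MTTF / (MTTF + MTTR); both in the same time unit."""
    return mttf / (mttf + mttr)

# A system that fails on average once every 1,000 hours and takes 1 hour
# to recover achieves roughly "three nines":
print(f"{availability(1000.0, 1.0) * 100:.3f}%")  # 99.900%

# Halving the repair time (better runbooks, tested restores, automated
# failover) lifts availability without touching the failure rate:
print(f"{availability(1000.0, 0.5) * 100:.3f}%")  # 99.950%
```

    The second print illustrates why disaster recovery preparation matters for the SLA: reducing MTTR is often cheaper than making failures rarer.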


    A disaster recovery concept needs to encompass the whole architecture and all of the technologies involved, and include best practices for functional and non-functional requirements (non-functional refers to software qualities such as performance, security, etc.).

    To summarize, partners need to define an RPO (recovery point objective) and an RTO (recovery time objective) in the SLA with their customers. For this, they need a disaster recovery concept that takes into account:

    - full, differential, and transaction log backups (assuming the database is in the full recovery model)

    - application backups

    - any add-on components of the software

    - Hyper-V virtual images of production servers

    With that let's take a detailed look at an end-to-end disaster recovery planning for a Microsoft BI solution.

    SQL Server databases

    To begin with, how do the above concepts apply to the SQL Server databases?

    Where would you look first to find out about the recovery time of all of your databases? Correct: the SQL Server error log, which can be read along a timeline.

    To estimate the roll forward rate for a standalone system, one could use a test copy of a database and restore a transaction log from a high-load time period to it. The application design plays an important role as well: Short-running transactions reduce the roll forward time.

    Upon failover in an AlwaysOn failover cluster instance, all databases need to be recovered on the new node, which means that transactions that are committed in the transaction log need to be rolled forward in the database, whereas transactions that got aborted have to be rolled back.

    Side note: In a Failover Cluster Instance, the time for switchover is furthermore impacted by factors like for example storage regrouping or DNS/network name provisioning. Regarding the client side, one can configure the connection timeout in order to accelerate the time needed to reestablish a broken connection.

    The new SQL Server 2012 Availability Groups make it easy to observe the RPO and RTO. For details, see "Monitor for RTO and RPO".

    Here are some tips for an efficient backup strategy of your SQL Server databases:

    - Use a separate physical location where you store the backups.

    - Have a schedule to carry out regular backups, for example nightly full backups, every 6 hours a differential backup, and every 30 minutes a transactional log backup, if you need a point-in-time recovery.

    - Enable CHECKSUM on backups. This is the default with backup compression, which is available in Standard, Business Intelligence and Enterprise Edition.
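As a sketch of the schedule and checksum tips combined (database name and backup paths are placeholders; compression implies CHECKSUM by default, but stating both makes the intent explicit):

```sql
-- Nightly full backup, compressed and checksummed:
BACKUP DATABASE [SalesDW]
TO DISK = N'\\backupserver\sql\SalesDW_Full.bak'
WITH COMPRESSION, CHECKSUM, INIT, STATS = 10;

-- Differential backup, scheduled e.g. every 6 hours:
BACKUP DATABASE [SalesDW]
TO DISK = N'\\backupserver\sql\SalesDW_Diff.bak'
WITH DIFFERENTIAL, COMPRESSION, CHECKSUM, INIT;

-- Transaction log backup, scheduled e.g. every 30 minutes:
BACKUP LOG [SalesDW]
TO DISK = N'\\backupserver\sql\SalesDW_Log.trn'
WITH COMPRESSION, CHECKSUM, INIT;
```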

    - Test your backups periodically by restoring them; otherwise you might unknowingly carry forward data corruption that renders your backups useless. Monitor the suspect_pages table in msdb to determine when a page-level restore is sufficient.
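A minimal integrity check of a backup file, plus a look at the suspect_pages table, might look like this (the file path is a placeholder):

```sql
-- Verify that the backup is readable and its checksums are valid,
-- without actually restoring it:
RESTORE VERIFYONLY
FROM DISK = N'\\backupserver\sql\SalesDW_Full.bak'
WITH CHECKSUM;

-- Pages flagged as suspect; a small number of damaged pages
-- may qualify for a page-level restore instead of a full one:
SELECT database_id, file_id, page_id, event_type, error_count, last_update_date
FROM msdb.dbo.suspect_pages;
```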

    - With regard to long-term archival, it is considered good practice to maintain three different retention periods. If you use three rotation schemes (for example, create full backups daily, weekly and monthly and store each on its own media set), then you can regenerate your data from these if necessary. This is called the grandfather-father-son principle and allows media sets to be reused after their retention period expires: a backup taken on a Monday overwrites that of some previous Monday, and so on.

    - Filegroups for historical partitions can be marked as read-only, hence require only a one-time filegroup backup. A piecemeal restore of read-write filegroups can accelerate recovery.
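A sketch of this approach, assuming a filegroup named History that holds closed historical partitions:

```sql
-- Mark the historical filegroup read-only; afterwards a single
-- one-time filegroup backup of it is sufficient:
ALTER DATABASE [SalesDW] MODIFY FILEGROUP [History] READ_ONLY;

BACKUP DATABASE [SalesDW]
FILEGROUP = N'History'
TO DISK = N'\\backupserver\sql\SalesDW_History.bak';

-- Piecemeal restore: bring the read-write filegroups online first,
-- so users can work while History is restored later:
RESTORE DATABASE [SalesDW]
FILEGROUP = N'PRIMARY'
FROM DISK = N'\\backupserver\sql\SalesDW_Full.bak'
WITH PARTIAL, NORECOVERY;
```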

    - Use "SQL Server Backup to Windows Azure" to upload the backup files for offsite storage, optimally with compression and encryption, even for versions earlier than SQL Server 2014; check out the "Microsoft SQL Server Backup to Microsoft Windows Azure Tool".
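For instances that support native backup to Azure blob storage (SQL Server 2012 SP1 CU2 and later), the upload can also be issued directly from T-SQL; the storage account, container and credential names below are placeholders:

```sql
-- A credential holding the storage account name and access key:
CREATE CREDENTIAL AzureBackupCred
WITH IDENTITY = N'mystorageaccount',
     SECRET = N'<storage account access key>';

-- Back up straight to a blob container, compressed and checksummed:
BACKUP DATABASE [SalesDW]
TO URL = N'https://mystorageaccount.blob.core.windows.net/backups/SalesDW_Full.bak'
WITH CREDENTIAL = N'AzureBackupCred', COMPRESSION, CHECKSUM;
```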

    - While the RBS FILESTREAM provider, which uses local disk storage, is integrated with SQL Server's backup and restore procedures, with a third-party RBS provider it is your responsibility to back up the Remote Blob Storage separately in a consistent manner; cf. "Plan for RBS in SharePoint 2013".

    Fortunately, all Microsoft products are built to scale for availability. With SQL Server Availability Groups in SQL Server 2012 and higher you get a highly available set of databases and secondary replicas for failover, for disaster recovery, or to load-balance your read requests. Availability Groups are a feature of SQL Server Enterprise Edition, which also offers more online features than the other editions for higher availability and faster recovery, notably online page and file restore and the Database Recovery Advisor. The latter is helpful for point-in-time restores across sometimes complicated backup chains; for a concise list, see the edition feature comparison table in the SQL Server documentation.

    With SQL Server Availability Groups spread out to Windows Azure virtual machines it is even possible to host your secondary database replicas in Azure and, for example, run your System Center Data Protection Manager and its agent in the cloud against them.

    Marked transactions allow you to restore several databases (for example the MDS database, the SSIS catalog and the corresponding user databases on the same instance of SQL Server) consistently up to the very same point in time, which can be advantageous around a major event such as a merger of two companies' databases. See "Use Marked Transactions to Recover Related Databases Consistently (Full Recovery Model)".
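A sketch of a marked transaction spanning related databases, and a restore to that mark (database, table and backup names are illustrative):

```sql
-- The mark is recorded in the transaction log of every database
-- the transaction touches:
BEGIN TRANSACTION NightlyLoadDone WITH MARK N'End of nightly load';
    UPDATE MDSDemo.dbo.LoadStatus SET Finished = 1;
    UPDATE SalesDW.dbo.LoadStatus SET Finished = 1;
COMMIT TRANSACTION NightlyLoadDone;

-- Later, each related database can be restored consistently
-- up to (and including) that mark:
RESTORE LOG [SalesDW]
FROM DISK = N'\\backupserver\sql\SalesDW_Log.trn'
WITH STOPATMARK = N'NightlyLoadDone';
```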


    Since SQL Server Analysis Services is mainly a read-only system, you can do without things like transaction logging or differential backups. If the metadata (.xmla files) is available, that is sufficient to recreate and reprocess your cubes. If you also have working database backups (.abf files), those can be restored and used directly.

    It is possible to run a separate SSAS server, which has the same configuration settings, in a remote location and supply it regularly with the latest data via database synchronization.


    - When running SSAS in SharePoint mode (as the POWERPIVOT instance), the SharePoint content and service application databases contain the application and data files.

    - If you host your secondary replica for read access in Windows Azure, you will want to place your SSAS instance running in an Azure VM within the same Availability Set.


    SQL Server Integration Services since version 2012 offers two deployment modes: package-based deployment for backward compatibility and the new project-based deployment. Backup and restore procedures depend on where the data is stored. The package store can be folders in the file system or the msdb database; any file-based packages should be copied off together with a dtutil script that can upload them again, plus any centrally managed configuration files. Starting with SQL Server 2012, it is strongly recommended to use project deployment to the Integration Services server. The SSISDB catalog is a database that stores projects, packages, parameters, environments and operational history, and as such can be backed up into a .bak file. You also need to back up the master key for the SSISDB database; the resulting file will be encrypted with a password you specify. Unless the key changes, this is a one-time operation.
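A sketch of the two backup steps for project deployment (paths and password are placeholders):

```sql
-- 1. Back up the SSISDB catalog database like any user database:
BACKUP DATABASE [SSISDB]
TO DISK = N'\\backupserver\sql\SSISDB.bak'
WITH COMPRESSION, CHECKSUM, INIT;

-- 2. Back up its database master key once; the file is protected
-- by the password chosen here and is needed when restoring the
-- catalog to another server:
USE SSISDB;
BACKUP MASTER KEY
TO FILE = N'\\backupserver\sql\SSISDB_MasterKey'
ENCRYPTION BY PASSWORD = N'<strong password>';
```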


    With SQL Server Reporting Services in native mode being a stateless service, it is the ReportServer database which contains the metadata and report snapshots with data. It can be protected as required for your SLA and RTO via full or differential backups; experience has shown that doing just full backups oftentimes works fast enough. The ReportServerTempDB database can be recreated anytime. Do not forget to back up the recovery key, which encrypts the database. This should be done at creation time, which suffices unless the service identity or computer name changes. If you use subscriptions, you need to back up the SQL Server Agent jobs as well, which can be accomplished via a simultaneous backup of the msdb database. For a backup of the Report Server configuration and custom assemblies, please refer to the corresponding links in the final section of this blog post.

    Concerning SQL Server Reporting Services in SharePoint mode, the SharePoint 2013 built-in backup does not take care of the SSRS databases (including the additional Reporting Services Alerting database), so the previous paragraph still applies: you must use SQL Server tools for SharePoint Server, or SQL Server (Express) tools for SharePoint Foundation. As for the application part, since SSRS in SharePoint mode is a true SharePoint service application, configuration occurs through Central Administration and SharePoint Server's own backup and recovery applies.

    SharePoint Server

    The BI (also called Insights) features of SharePoint Server, such as Excel Services, Access Services, Visio Services and PerformancePoint Services, benefit from SharePoint Server's backup options for service applications. A restore of a service application database has to be followed by provisioning the service application. Please find further details in the TechNet articles referenced below.


    Master Data Services consists of a database wherein all master data as well as MDS system settings are stored plus a Master Data Manager web application. Scheduling daily full backups and more frequent transaction log backups is recommended. MDSModelDeploy.exe is a useful tool for creating packages of your model objects and data.

    Side note: In our experience it is less the IIS-hosted website that tends to become a bottleneck under high load than the MDS database itself. Hence, a scale-out would not necessarily consist of just several MDS websites pointing to the same database, although that does allow for redundancy and increased availability while web servers are updated. Rather, it would separate out models into different MDS databases. On the one hand this increases the overhead for security accounts and administration, given that the metadata tables are completely isolated from each other; on the other hand, blocking is avoided and the databases can be managed independently.


    Data Quality Services keeps its information in three databases (DQS_MAIN, DQS_PROJECTS, and DQS_STAGING_DATA) and can therefore be neatly integrated into your SQL Server backup and restore processes. With the help of the DQSInstaller.exe command it is even possible to export all of the published knowledge bases from a Data Quality Server to a DQS backup file (.dqsb) in one go.

    Cross-cutting best practices

    - Making use of a SQL alias for connections to your SQL Server computer eases the process of moving a database, for example when a SQL virtual cluster name changes. For instructions see, for example, "Install & Configure SharePoint 2013 with SQL Client Alias", which shows how you gain flexibility over the SQL Server connection string by appropriately populating the SQL Server Network Configuration and the SQL Server Client Network Utility. This procedure has a significant advantage over a DNS A record or CNAME alias in that the SQL alias does not change the Kerberos SPN format for connections: you continue to use the registered DNS host name (A record) in the Service Principal Name when connecting. Furthermore, it allows you to specify more than one alias pointing to the same instance; for example, you can create one alias for your content databases, another for your search databases, and thereby plan ahead for future scale-out.

    - System Center Data Protection Manager can be used for both database backups and application server backups. For a list of protected workloads please see the left-hand navigation bar on the page "Administering and Managing System Center 2012 - Data Protection Manager"

    - In the context of private clouds, System Center comes into play with its Operations Manager to monitor SQL Server instances and virtual machines and its Virtual Machine Manager to quickly provision new virtual machines.

    Closing words

    SQL Server and SharePoint Server allow for robust disaster recovery routines as part of your business continuity plan. New hybrid and cloud based solutions enhance traditional possibilities greatly.

    As has become clear, configuration changes that occur outside of user databases should always happen in a controlled manner, requiring a tight Change Management process.

    Further reading

    "Microsoft SQL Server AlwaysOn Solutions Guide for High Availability and Disaster Recovery"

    With some good discussions: "Simple script to backup all SQL Server databases"

    "Back Up and Restore of System Databases (SQL Server)"

    "SQL Server AlwaysOn Architecture Guide: Building a High Availability and Disaster Recovery Solution by Using AlwaysOn Availability Groups"

    "SQL Server AlwaysOn Architecture Guide: Building a High Availability and Disaster Recovery Solution by Using Failover Cluster Instances and Availability Groups"

    "Backup and Restore of Analysis Services Databases"

    "Disaster Recovery for PowerPivot for SharePoint"

    "Package Backup and Restore (SSIS Service)"

    "Backup, Restore, and Move the SSIS Catalog"

    "Backup and Restore Operations for Reporting Services"

    "Migrate a Reporting Services Installation (Native Mode)"

    "Migrate a Reporting Services Installation (SharePoint Mode)"

    "Backup and Restore Reporting Services Service Applications"

    "Planning Disaster Recovery for Microsoft SQL Server Reporting Services in SharePoint Integrated Mode"

    "Overview of backup and recovery in SharePoint 2013"

    "Plan for backup and recovery in SharePoint 2013"

    "Backup and restore SharePoint 2013"

    "Supported high availability and disaster recovery options for SharePoint databases (SharePoint 2013)"

    "Database Requirements (Master Data Services)"

    "Web Application Requirements (Master Data Services)"

    "Export and Import DQS Knowledge Bases Using DQSInstaller.exe"

    "Using AlwaysOn Availability Groups for High Availability and Disaster Recovery of Data Quality Services (DQS)"

    "Install SQL Server 2012 Business Intelligence Features"

    "SQLCATs Guide to High Availability and Disaster Recovery", "SQLCAT's Guide to BI and Analytics"

    Case Study for failover to a standby data center: "High Availability and Disaster Recovery at ServiceU: A SQL Server 2008 Technical Case Study"

    "Business Continuity in Windows Azure SQL Database"

    "SQL Server Managed Backup to Windows Azure"

    "SQL Server Deployment in Windows Azure Virtual Machines"

    Hybrid storage appliance "StorSimple cloud integrated storage"

    This posting is provided "AS IS" with no warranties, and confers no rights.

  • New Windows Azure Training for Partners

    Seems like everywhere you turn these days, Windows Azure is a hot topic.  We get questions daily about where to find some great Windows Azure training.  Well, we’re excited to let our Partners know that a new set of Windows Azure training is available for you.  The training, Partner Practice Enablement: Windows Azure Technical Training, is available in multiple formats (more on that below).

    What Kind of Training is it?  What will it cover?

    The level 200-300 training starts with an introduction to Windows Azure Virtual Machines and Virtual Networks (Infrastructure Services). It delivers the foundational knowledge needed for users intending to run new workloads in Windows Azure or migrate existing workloads from on-premises.

    Students will be introduced to the rich features of Windows Azure Active Directory and see how it can be used to achieve single sign-on across cloud applications, protect application access, enforce multi-factor authentication, and integrate with Windows Server Active Directory. The 8 modules in the training are:




    Introduction to Windows Azure Infrastructure Services


    Windows Azure Infrastructure Services Networking


    Windows Azure Active Directory


    Windows Azure Active Directory Integration


    Cloud Services, Websites and Infrastructure Services


    Development and Test


    SQL Server and SharePoint Server in Windows Azure


    Management and Monitoring of Virtual Machines

    Each module includes an instructional session, Q&A, and self-study guides for additional hands-on learning.

    Who should attend?

    Anyone who is new to, or has not worked with Windows Azure at all, can benefit from this training.  The training will have a technical focus, so technical sellers, implementers, and support experts are encouraged to participate.

    When is the training and how do I sign up?

    Web-based live training will run weekly from March 4th until April 24th.  The webcasts are offered twice a day, at 7AM and 5PM Pacific time.  The schedule is as follows:

    Module 1: March 4th
    Module 2: March 11th
    Module 3: March 18th
    Module 4: March 25th
    Module 5: April 8th
    Module 6: April 15th
    Module 7: April 17th
    Module 8: April 24th


    In addition to the live training, we are making self-study recordings of the training sessions available via an MPN Learning Path.


    What does this training cost?

    The self-study content of the Learning Path is provided at no cost.

    For the live webcasts, two Partner Advisory Hours will be deducted from your organization's balance per session independent of the number of attendees from your organization.


    If you have any questions, please send an email to

  • Windows Store App Development Training Videos (Spanish Versions)

    Recently we’ve released several short training videos around building Windows Store apps.  As a part of this process we’re now taking some of the most popular and offering them in additional languages.  In the coming weeks we’ll bring you French, German, Portuguese, and Turkish as a start.

    Today we’re excited to bring the first five in Spanish.

    Desarrollo de Windows 8.1 Store Apps en C#

    Certificación de Store Apps

    Plantillas de Proyecto de Visual Studio para Store Apps

    Consejos y Trucos para Implementación de Notificaciones en Store Apps

    Como Adaptar Store Apps a los Tamaños de Ventana

    Como Implementar Live Tile del Store App en menos de 10 minutos

  • Desarrollo de Windows 8.1 Store Apps en C#

    Windows 8.1 offers ISVs an opportunity to develop novel applications and monetize them through the Windows Store. Store Apps make it possible to quickly implement mobility solutions that require a touch interface and permanent data connectivity. This webcast series is aimed at C# developers who want to learn about the new functionality available to Store Apps in Windows 8.1.

    We have shared the material for this course on OneDrive.







    Windows 8.1 for Developers


    Level 100

    We introduce the Windows 8.1 operating system functionality of interest to developers. We discuss how Store Apps integrate with the OS, and introduce the Windows Store, the APIs and the development tools.




    XAML UI Controls


    Level 200

    We present an overview of the basic XAML controls for building the user interface: TextBox, RichTextBlock, ProgressRing, RichEditBox, DatePicker and TimePicker, Buttons, Flyouts, Shapes, Paths and Images. We also explain how to apply styles to the controls.


    10:00-11:00 GMT+1


    Modern List Controls


    Level 200

    We present the modern list controls: FlipView, GridView, ListView and SemanticZoom. We also discuss list performance improvements that can be achieved using UI virtualization and incremental data loading.


    10:00-11:00 GMT+1


    Store Apps Design Language


    Level 100

    We present the design principles of Store Apps. The goal is to help developers apply the design so that the application has a distinct brand and is perceived as an integral part of Windows 8.1. We explain the four design pillars: Principles, Personality, Patterns and Platform.


    10:00-11:00 GMT+1


    Navigation, Commands, Windows and Layout


    Level 200

    We discuss the page navigation mechanism and how to implement commands with the CommandBar. Finally, we explain how to adapt the user interface layout to changes in the size of the windows containing the application.


    10:00-10:40 GMT+1


    WebView and RenderTargetBitmap Controls


    Level 200

    We explain how to integrate HTML content into the app using the WebView control. We also discuss how to render a branch of the XAML visual tree to an image in a variety of formats and share it with other applications using RenderTargetBitmap.


    10:00-11:00 GMT+1


    Windows Store APIs

    Level 200


    This presentation focuses on the Windows Store APIs and on designing the application for monetization. We explain the available models (ad-funded, trial, in-app purchase, consumables), and how to handle a change in the application's license and enable the corresponding functionality. We also cover the process of publishing the application to the Windows Store.


    10:00-10:40 GMT+1


    Integrating with Contacts and Calendar

    Level 200


    In this presentation we explain how to integrate with the standard Windows 8.1 Contacts and Calendar apps using the contract APIs. An application that needs to handle data about people or appointments can thereby access the information managed by these apps.

    You can find additional videos on Store App development with XAML and C#, in English, at:

    “Designing Your XAML UI with Blend Jump Start”

    “XAML Deep Dive for Windows & Windows Phone Apps Jump Start”

    "Windows Store App Development Essentials with C# Refresh"

    "Advanced Windows Store App Development Using C# Refresh"

    We have also released Update 2 of Visual Studio 2013, which you can download from:

    “Microsoft Visual Studio 2013 Update 2”

  • Getting Started with Side Loading Windows Apps

    If you have worked with Windows 8 for any length of time, chances are your customer has wanted to deploy a custom app. The app isn’t something you want to publish to the Windows Store, so what is the best way to go about deploying it to all the customer’s devices?  Consider side loading.  This brief video will explain how to get started with side loading in test environments.

    After watching the video, if you want to go deeper or have specific questions, please don't hesitate to contact Partner Support.  We are here to help.