• Office 365 - Creating a Subsite (Web) using CSOM in SharePoint Online

    SharePoint Online has a number of PowerShell cmdlets - https://technet.microsoft.com/en-us/library/fp161374(v=office.15).aspx. These cmdlets include New-SPOSite, which provides the ability to create a site collection; unfortunately, it's not possible to create a subsite (web) using these cmdlets. It is, however, possible to do this using CSOM, and the script below demonstrates how to create a subsite and a site beneath that subsite (a SubSubSite?).

    The script below is broken into three sections. The first section connects to the SharePoint Online site within which you wish to create the subsite; simply update the highlighted $Site variable with the relevant URL.

    The second section creates a subsite within the site collection. Update the highlighted variables to match your requirements; I've used the Team Site template (STS#0) in this example.

    The third section creates a site beneath the subsite that was just created; again, update the highlighted variables to meet your requirements. The key thing to note when creating a subsite beneath an existing subsite is that you must include the relative path to the new subsite in the $WCI.Url variable. In this case I first created a subsite with the URL https://site.sharepoint.com/SubSite; to create a subsite beneath it, I must specify the relative path of the SubSubSite, therefore SubSite/SubSubSite.

    #Add references to SharePoint client assemblies and authenticate to Office 365 site
    Add-Type -Path "C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\16\ISAPI\Microsoft.SharePoint.Client.dll"
    Add-Type -Path "C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\16\ISAPI\Microsoft.SharePoint.Client.Publishing.dll"
    Add-Type -Path "C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\16\ISAPI\Microsoft.SharePoint.Client.Runtime.dll"
    $Username = Read-Host -Prompt "Please enter your username"
    $Password = Read-Host -Prompt "Please enter your password" -AsSecureString
    $Site = "https://site.sharepoint.com"
    $Context = New-Object Microsoft.SharePoint.Client.ClientContext($Site)
    $Creds = New-Object Microsoft.SharePoint.Client.SharePointOnlineCredentials($Username,$Password)
    $Context.Credentials = $Creds


    #Create SubSite
    $WCI = New-Object Microsoft.SharePoint.Client.WebCreationInformation
    $WCI.WebTemplate = "STS#0"
    $WCI.Description = "SubSite"
    $WCI.Title = "SubSite"
    $WCI.Url = "SubSite"
    $WCI.Language = "1033"
    $SubWeb = $Context.Web.Webs.Add($WCI)
    $Context.ExecuteQuery()

    #Create SubSubSite
    $WCI = New-Object Microsoft.SharePoint.Client.WebCreationInformation
    $WCI.WebTemplate = "STS#0"
    $WCI.Description = "SubSubSite"
    $WCI.Title = "SubSubSite"
    $WCI.Url = "SubSite/SubSubSite"
    $WCI.Language = "1033"
    $SubWeb = $Context.Web.Webs.Add($WCI)
    $Context.ExecuteQuery()
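
    To confirm that the webs were created, you can load the Webs collection and list what now sits beneath the root web. The following is a sketch that reuses the $Context object from the script above; note that calling the generic Load method directly like this requires PowerShell 3.0 or later.

    #Verify the new webs by listing the children of the root web
    $Webs = $Context.Web.Webs
    $Context.Load($Webs)
    $Context.ExecuteQuery()
    $Webs | ForEach-Object { Write-Host $_.Title "-" $_.ServerRelativeUrl }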

    Brendan Griffin - @brendankarl

    Steve Jeffery - @moss_sjeffery

  • Flush the SharePoint Configuration Cache

    From time to time you may need to flush the SharePoint configuration cache on servers within your farm; my colleague Joe Rodgers blogged about this many moons ago - http://blogs.msdn.com/b/josrod/archive/2007/12/12/clear-the-sharepoint-configuration-cache-for-timer-job-and-psconfig-errors.aspx. If you run into a scenario where you need to flush the configuration cache on every server within a farm, this can become a very boring and laborious job.

    I wrote the following script, which works with MOSS 2007, SharePoint 2010 and SharePoint 2013. Simply execute it from one server within the farm and the script will perform a configuration cache flush on all servers within the farm.

    $ErrorActionPreference = "Stop"
    #Load SharePoint assembly
    $Assemblies = [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint")

    #Retrieve SharePoint servers
    $Farm = [Microsoft.SharePoint.Administration.SPFarm]::Local
    $SPServers = $Farm.Servers | Where {$_.Role -eq "Application"} | Foreach {$_.Name}

    #Detect if running on MOSS 2007 and set the appropriate name for the service
    If (Test-Path "C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12")
        {
        $Timer = "SPTimerV3"
        }
    Else
        {
        $Timer = "SPTimerV4"
        }

    #Determine the configuration cache folder location based on the OS version
    If ([Environment]::OSVersion.Version.Major -lt 6)
        {
        $Folder = "Documents and Settings\All Users\Application Data\Microsoft\SharePoint\Config"
        }
    Else
        {
        $Folder = "ProgramData\Microsoft\SharePoint\Config"
        }

    #Loop through each server
    Foreach ($Server in $SPServers)
    {
    Try {
        #Stop the Timer Service
        Write-Host "-Stopping the Timer Service on $Server" -ForegroundColor Green
        $A = (Get-WmiObject Win32_Service -filter "name='$Timer'" -ComputerName $Server).StopService()
        While ((Get-WmiObject Win32_Service -filter "name='$Timer'" -ComputerName $Server).State -ne "Stopped")
                {
                Start-Sleep 5
                }

        #Clear the Config Cache
        Write-Host "-Clearing the Config Cache on $Server" -ForegroundColor Green
        $ConfigCache = Get-ChildItem \\$Server\c$\$Folder | Sort LastWriteTime | Select -last 1
        $ConfigCachePath = $ConfigCache.FullName
        Remove-Item -Path $ConfigCachePath -Include *.XML -Recurse
        "1" > "$ConfigCachePath\Cache.ini"

        #Restart the Timer Service
        Write-Host "-Starting the Timer Service on $Server" -ForegroundColor Green
        $C = (Get-WmiObject Win32_Service -filter "name='$Timer'" -ComputerName $Server).StartService()
        }
    Catch {
          Write-Host "Unable to flush the cache on $Server, please do this manually" -ForegroundColor Red
          }
    }
    Write-Host "-Configuration Cache Flush Complete" -ForegroundColor Yellow
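
    As a quick follow-up check, the Timer Service state can be queried on every server once the flush has completed. This is a sketch that reuses the $Timer and $SPServers variables from the script above:

    #Report the Timer Service state on each SharePoint server
    Foreach ($Server in $SPServers)
    {
    Write-Host "$Server - $Timer is" (Get-WmiObject Win32_Service -filter "name='$Timer'" -ComputerName $Server).State
    }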


    Brendan Griffin - @brendankarl

  • SharePoint Update Deployment - Automating Parallel Content Database Upgrades

    I recently helped a customer to deploy a Cumulative Update to their SharePoint environment. Due to the amount of content hosted within the farm, the customer uses the approach of detaching all content databases, upgrading the farm, and then re-attaching and upgrading all content databases afterwards. This potentially reduces the amount of downtime, as content databases can be upgraded in parallel (using separate PowerShell sessions).

    I've put together a script that automates the upgrade of content databases once they have been re-attached to the farm (where they will be running in compatibility mode until upgraded).

    The following script splits the content databases into two batches and then executes a PowerShell job on the server for each batch to upgrade the databases in parallel (2 at a time instead of 1). It has been tested on SharePoint 2010 but should also work on SharePoint 2013.

    #Specify two script blocks for the two batches of upgrades to perform
    $Batch1 = {
    asnp *SharePoint* -ea 0
    $CDB = Get-SPContentDatabase
    #Split point - this job takes the first half of the content databases
    $Split = [Decimal]::Round(($CDB.Count/2))
    $CDB[0..$Split] | Upgrade-SPContentDatabase -Confirm:$false
    }

    $Batch2 = {
    asnp *SharePoint* -ea 0
    $CDB = Get-SPContentDatabase
    #Same split point - this job takes the remainder of the content databases
    $Split = [Decimal]::Round(($CDB.Count/2))
    $CDB[($Split + 1)..($CDB.Count - 1)] | Upgrade-SPContentDatabase -Confirm:$false
    }

    #Start the two upgrade jobs in parallel
    Start-Job -ScriptBlock $Batch1
    Start-Job -ScriptBlock $Batch2

    #Report the status - re-run as needed
    Get-Job
    #Reports the job output once the job has completed
    Get-Job | Receive-Job
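
    Once both jobs have completed, it's worth confirming that no databases were missed. NeedsUpgrade is a property of each content database object, so a quick check along the following lines should do (a sketch - if everything upgraded successfully, this returns nothing):

    #List any content databases that still require an upgrade
    Get-SPContentDatabase | Where {$_.NeedsUpgrade -eq $true} | Select Name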

    Brendan Griffin - @brendankarl

  • SharePoint 2013 and Office 365 Hybrid

    I recently presented a session at SharePoint Saturday UK on configuring SharePoint 2013 and Office 365 hybrid; as promised to the attendees, attached is the presentation (including demo videos). This is an updated version of my previous session at the Yorkshire SharePoint User Group, which goes into greater depth on the identity requirements - http://blogs.technet.com/b/fromthefield/archive/2014/11/04/hybrid-search-with-sharepoint-2013-and-office-365.aspx.

    Brendan Griffin - @brendankarl

  • SP2010 workflow performance

    Today I want to talk about something quite odd that I ran across at one of my customers...

    They had some issues with one of their workflows not "performing" well... It took ages for the workflow to complete.

    One of the most important troubleshooting steps with workflows is usually quite simple...

    It's just "Wait..."

    But since that had already been done for a few days, we had to take a closer look to see what was going on. SharePoint workflows can run either in the W3WP process, or they can be handed over to the OWSTimer service. Usually this transition happens when you add a workflow action that calls for a "wait", or when some events need to be picked up as a reaction to edits... So we checked the timer service and reviewed the running timer jobs. The odd thing was that the workflow timer job had already been running on every server in the farm for several hours... That is very odd, since usually that job only runs for a few minutes per run, so we suspected a hung workflow. The workflow engine writes a lot of ULS entries if it fails somewhere, but none of those showed up... A good collection of the events that are output on failure can be found here - http://blogs.technet.com/b/sharepointdevelopersupport/archive/2013/03/12/sharepoint-workflow-architecture-part-3.aspx (this also provides some very good information on how to troubleshoot workflows in general).

    So we increased the ULS logging, and that helped us to identify what was happening in the job. The next step was to take the data, port it to a single server, and analyze it (Merge-SPLogFile). But since it was a busy production farm, the amount of data gathered was very high. We found that the timer jobs on the different servers were constantly updating list items in the same site collection, but there were no obvious workflow errors in the ULS logs. So now we at least had a place where we could take a closer look.

    As it turned out, the site collection in question was already known to the customer... The site collection was Nintex Workflow-enabled (only users with special in-house training are allowed to use the Nintex workflow components at that customer).

    Now let's go a bit off topic here... The power user on that site collection had a small issue, which he actually managed to solve...

    He had a list and wanted to utilize [Today] in a calculated field. But SharePoint does not support the use of dynamic data (like [Me] or [Today]) in a calculated field; we can, however, use other fields in a calculated field. So the user outsmarted SharePoint by adding a field called "Today" to his lists... But in order for this to work, the field had to be updated each day. So he found a workflow action that helped him out:

    Armed with this knowledge, he went on and created a site workflow, which should be executed daily... Here is a picture of the workflow the user designed:

    Basically it was a very simple workflow. The first action was to activate the correct permissions to edit the lists, and then there were 12 x the action "Update multiple items" for 12 different lists in the farm.

    And since this workflow had to do its work each day, they scheduled it to run every day... The lists themselves are not really "huge", and the total number of items across all lists is only about 10,000, so there are no throttling exceptions or anything else...

    So back to our workflow issue... Why did this cause a problem?

    Well... There are two timeouts when it comes to workflows. One of these is the event delivery timeout... This timeout determines how long a workflow can run. It is documented here: http://technet.microsoft.com/en-us/library/cc262968(v=office.12).aspx

    Note:

    If you create a workflow solution that has a very long processing time to start your workflows, complete tasks, or modify workflows, you should consider increasing this value. View the ULS logs and watch the Microsoft SQL Server table ScheduledWorkItems to determine if the workflow jobs are timing out. The default folder location for the ULS log is Program Files\Common Files\Microsoft Shared\Web server extensions\12\Logs. In the ULS log file, you can use "workflow" or "workflow infrastructure" as search keywords.

    There is also a second timeout, which determines when a workflow is considered to have died. This timeout kicks in after 20 minutes and resets the state of the workflow, so that it will be processed again.
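
    For reference, the event delivery timeout mentioned in the note above can be read and changed per web application with stsadm; the property name is workflow-eventdelivery-timeout and the value is in minutes. The commands below are a sketch - substitute your own web application URL:

    stsadm -o getproperty -pn workflow-eventdelivery-timeout -url http://yourwebapp
    stsadm -o setproperty -pn workflow-eventdelivery-timeout -pv "10" -url http://yourwebapp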

    Now there was a point when the server tipped over, and the processing for all those lists exceeded the second timeout, which caused the workflows to be reset. Each day a new workflow was added to the queue... None of these workflows managed to complete, and over time we piled up a total of about 100 never-ending workflows. The issue was never noticed, since the workflows managed to update "most" items, and so everything looked fine... There was only one strange result: the workflow timer job kept increasing in duration and took longer and longer to complete... (100 x 20 minutes = 2,000 minutes; 5 servers in the farm means an average of 6-7 hours per workflow timer job run per server)

    This started to raise other alerts, since the normal workflows got "stuck" behind this one (processing is done one content database at a time), and things like approvals took too long to be processed, so users started to complain.

    How could this have been fixed?

    Assuming the user insists on keeping his solution, the easiest way to fix it would be to use the filter in the workflow action so that it only retrieves items whose "Today" field is NOT today's date. If the workflow fails the first time, it will then just restart and fix the items left over from the first run, instead of performing millions upon millions of update actions that set the same records' value to today...