Many fine folks were generous enough to point out to me this week that you can now (actually since June’ish, I’m told) take an access token you get from ADAL in conjunction with the o365 APIs and use it with the SharePoint REST API as well as CSOM. Shocking!! This may be what they call “asleep at the wheel” (or taking the red pill), but now that I’m wide awake again and aware of my surroundings it seemed like a good time to augment and update some of the content that’s out there.
Jeremy Thake did a nice write up on this concept originally at http://www.jeremythake.com/2014/06/using-the-sharepoint-csom-and-rest-api-with-office-365-api-via-azure-ad/. While much of it still applies, I decided to revisit it myself so a) I would appear to know what the hell I’m talking about and b) I could update the really awful goo that he was forced to use to get an access token. So in this little sample I’m going to use a standard flow to get an access token using ADAL (Active Directory Authentication Library). I’m also going to be a little verbose and do a few things that you can do “less manually” with some tools for Visual Studio, just to make sure you are clear on what those tools are actually doing for you (especially if you’re <gasp> not using Visual Studio). Here we go.
You can use the Office 365 Tools add in for Visual Studio, or just create the application yourself manually in the Azure portal. That’s what I’m doing in this case. I begin by opening the portal and selecting my Azure Active Directory instance that’s used with my Office 365 tenant. Then I click on the Applications tab, and then click on the ADD button at the bottom of the page to create a new application. Here’s how we do it:
Click the check button and you’ve completed the first step – your application has been created. Now go ahead and click on the Configure link so you can add the permissions you need to access your o365 content.
Click the Add Application button at the bottom of the page and a Permission to Other Applications dialog pops up. Click on the Office 365 SharePoint Online application and then click on the + sign next to it. When you’ve completed that click the check button to save your changes:
Now you’ll have an item in the list of applications that your application wants permissions to, and you can click on the Permissions drop down for it to configure the permissions you want your application to have in the Office 365 tenant where it’s used. Here’s what that looks like:
After you’ve made all of these changes, click the SAVE button at the bottom of the page but DON’T close the application configuration page just yet, there’s some info we need to grab out of there for connecting to it. Scroll up to the top of the page and copy the a) CLIENT ID and b) REDIRECT URIS values. You are going to use this when you ask a user to consent to having your application access his or her SharePoint content:
Okay, now we’re going to create a little winforms app that uses ADAL to get an access token, and then CSOM and REST to get data out of SharePoint. So to begin the next step I’m going to create a new winforms application, and dive right into the code-behind in my form. Once in there I’m going to plug in my Client ID and Redirect URI that I got from my application configuration in Azure.
Now…if you follow my last o365 API blog post here: http://blogs.technet.com/b/speschka/archive/2014/12/08/oauth-o365-apis-and-azure-service-management-apis-using-them-all-together.aspx, you’ll see I’m going to use nearly identical techniques that I described there to get my access token from Azure AD. To begin with, here are the standard set of application details I’m adding to my app:
Now I’m going to use NuGet to add in support for ADAL and o365; when I’m done it looks like this:
I’m also going to add references to Microsoft.SharePoint.Client and Microsoft.SharePoint.Client.Runtime so I can use CSOM from my application. Next I’m going to add that same helper method I described in my previous blog post to get the access token, so let’s use this code here:
private async Task<AuthenticationResult> AcquireTokenAsync(string authContextUrl,
    string resourceId)
{
    AuthenticationResult ar = null;

    try
    {
        //create a new authentication context for our app
        AuthContext = new AuthenticationContext(authContextUrl);

        //look to see if we have an authentication context in cache already
        if (AuthContext.TokenCache.ReadItems().Count() > 0)
        {
            //re-bind AuthenticationContext to the authority
            //source of the cached token.
            //this is needed for the cache to work when asking
            //for a token from that authority.
            string cachedAuthority =
                AuthContext.TokenCache.ReadItems().First().Authority;
            AuthContext = new AuthenticationContext(cachedAuthority);
        }

        //try to get the AccessToken silently using the
        //resourceId that was passed in
        //and the client ID of the application.
        ar = (await AuthContext.AcquireTokenSilentAsync(resourceId, ClientID));
    }
    catch (Exception)
    {
        //not in cache; we'll get it with the full oauth flow
        try
        {
            if (ar == null)
                ar = AuthContext.AcquireToken(resourceId, ClientID, ReturnUri);
        }
        catch (Exception acquireEx)
        {
            //utter failure here; we need to let the user know we just can't do it
            MessageBox.Show("Error trying to acquire authentication result: " +
                acquireEx.Message);
        }
    }

    return ar;
}
Okay, now we have the most important part of the guts done, let’s go back and look at the basics of what Jeremy was showing in his post:
This one is virtually identical to Jeremy’s code; it just a) uses my updated code for obtaining an access token and b) shows how to create the Resource ID that you want to use for getting content from your SharePoint sites. I tested this code against a couple of different o365 tenants and I can happily <small tear in eye> say that it worked in both.
private async void RestListBtn_Click(object sender, EventArgs e)
{
    try
    {
        AuthenticationResult ar = await AcquireTokenAsync(CommonAuthority,
            GetSharePointHost(SiteUrlTxt.Text));

        if (ar != null)
        {
            string requestUrl = SiteUrlTxt.Text + "/_api/Web/Lists";

            HttpClient hc = new HttpClient();

            //add the header with the access token
            hc.DefaultRequestHeaders.Authorization = new
                System.Net.Http.Headers.AuthenticationHeaderValue(
                "Bearer", ar.AccessToken);

            HttpResponseMessage hrm = await hc.GetAsync(new Uri(requestUrl));

            if (hrm.IsSuccessStatusCode)
                ResultsTxt.Text = await hrm.Content.ReadAsStringAsync();
            else
                MessageBox.Show("Unable to get the list information.");
        }
    }
    catch (Exception ex)
    {
        MessageBox.Show("Error: " + ex.Message);
    }
}

private string GetSharePointHost(string url)
{
    Uri theHost = new Uri(url);
    return theHost.Scheme + "://" + theHost.Host + "/";
}
The main thing to call out here really is the GetSharePointHost method that creates the Url that is used as the Resource ID when getting an access token. The net of this advice is that no matter where your SharePoint site lives, you really want to use the root site as the Resource ID. So if your o365 site is https://steve.sharepoint.com/sites/blazers, the Resource ID to get an access token should be https://steve.sharepoint.com. Doesn’t matter where the site is in that hierarchy – from the root all the way down as far as you want to go – you want to make sure you use the root site as the Resource ID. About a billion thanks to Dan K. for clearing that up for me; your Christmas card is in the mail! The other thing worth noting is that you MUST include a trailing slash on the Url that you use as the Resource ID. It causes ADAL no end of pain and suffering if you leave that out. Both of these little gems are captured in those two lines of code above. Yay!
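To make both rules concrete, here’s a quick sketch of what the GetSharePointHost method produces for a site collection nested below the root (the site URL is just an example):

```csharp
//example only: a site a couple of levels below the root
string siteUrl = "https://steve.sharepoint.com/sites/blazers";

//GetSharePointHost strips the path and keeps the trailing slash,
//which is exactly what ADAL wants for the Resource ID
Uri theHost = new Uri(siteUrl);
string resourceId = theHost.Scheme + "://" + theHost.Host + "/";
//resourceId is now "https://steve.sharepoint.com/"
```

Note that the path portion of the URL is thrown away entirely; only the scheme, host, and trailing slash survive.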
This one is also quite similar to Jeremy’s…again the main differences are that it uses my standard method to get the access token, and for some reason I get a somewhat different class naming structure for my WebRequestEventArgs. Here’s the relevant code:
private void CsomListBtn_Click(object sender, EventArgs e)
{
    ClientContext ctx = new ClientContext(SiteUrlTxt.Text);
    ctx.ExecutingWebRequest += ctx_ExecutingWebRequest;

    ctx.Load(ctx.Web.Lists);
    ctx.ExecuteQuery();

    string theLists = string.Empty;

    foreach (List lst in ctx.Web.Lists)
        theLists += lst.Title + Environment.NewLine;

    ResultsTxt.Text = theLists;
}

async void ctx_ExecutingWebRequest(object sender, WebRequestEventArgs e)
{
    //get the access token with the same helper method used for REST
    AuthenticationResult ar = await AcquireTokenAsync(CommonAuthority,
        GetSharePointHost(SiteUrlTxt.Text));

    e.WebRequestExecutor.RequestHeaders["Authorization"] =
        "Bearer " + ar.AccessToken;
}
So there you have it. I must say, I find this ridiculously cool. Many thanks to Jeremy for his original post and to the loads of people who basked in the glory of pointing out to me that I missed it. :-) Just kidding of course, I love having a legion of friends that can read on my behalf and get me pointed in the right direction as needed. Hopefully this post will help one of you in the same way. I’ve attached the complete source code to this post, so just change the ClientID and ReturnUri variables after you create your application and you should be good to go. As usual, I've also included the Word document from which this sorry mess of Notepad-like content was created.
I’ve been spending some time lately fooling around with the o365 APIs. Frankly, it has been a hugely frustrating experience; it’s been a long time since I’ve seen so many documentation gaps, examples that didn’t work, and sample code that flat out wouldn’t compile. So, once I finally stumbled upon the “right” collection of code that worked to get me the all important access token for the o365 APIs, I decided to take a quick detour to see if I could use the same approach to manage my Azure subscription with the Service Management APIs. Turns out you can, and actually do so in a way that leverages both of the APIs together in kind of a weird way. Here’s the story of how it works (with a special twist at the end, you’ll have to read all the way down to see it).
The first step in getting any of this working is to create an application in Azure. In that app, you can grant the rights that it is going to require – to things like Office 365 and the Azure Service APIs. The easiest way to do this scenario is to create the application using the Microsoft Office 365 API Tools for Visual Studio, which as of this writing you can find here: https://visualstudiogallery.msdn.microsoft.com/a15b85e6-69a7-4fdf-adda-a38066bb5155.
Once you have those installed, the next step is to create a new project – whatever kind of project type is appropriate for what you are trying to accomplish. Once you’ve done that you want to right-click on your project name and select “Add Connected Service”, like this:
When you add your Connected Service the first thing you’ll do is click on the Register your app link. It will first ask you to log in using your Azure organizational account. This account needs to be an admin account – one that has rights to create a new application in an Azure Active Directory:
After entering your credentials Visual Studio will take care of registering the app with Azure. It then presents you with a list of permission scopes, and for each one you can click on the scope to select which permissions in that scope you want your app to have. Remember that when a user installs your application it will ask if it’s okay for the app to have access to these things you’re defining in the permissions list.
In this case I’m just going to select the Users and Groups scope and select the permissions to Read directory data and Access your organization’s directory:
That should be enough permissions to use the o365 Discovery Service API; I’ll explain why we want to do that in a bit. For now just click the Apply button, then click the OK button to save your changes. When you do that Visual Studio will begin modifying the application with the permissions you requested. In the Output window in fact you should see it add the Discovery Service NuGet package to your project (“ServiceApi” is the name of my project in Visual Studio):
Adding 'Microsoft.Office365.Discovery' to ServiceApi.
...
When it’s done with that it should open up a page in the Visual Studio browser window that includes the interesting information about what you need to do next, and of course, no details on how to actually do it. That’s why you’re reading this blog post now, right?? :-)
The next thing we’re going to do is to go into the Azure Management Portal and add the other rights we want for our application, which is to use the Service Management API. So open up your browser and log into the Azure Management Portal at https://manage.windowsazure.com. Scroll down and click on the Active Directory icon in the left navigation, then click on your Azure Active Directory domain in the right pane. When you go into the details for your directory, click on the Applications tab, then find the application you just created. Click on it and then click on the Configure link.
One of the things you’ll see at the top of the page is the CLIENT ID field. If you look in Visual Studio at your project, you should have had something like an app.config file added to it when you set up the application (app.config is added to winforms project; the config file that is added will vary based on your project type). Open up the app.config file and you will see an appSettings section and an entry for ida:ClientId. The value for it should be the same as the CLIENT ID value you see for your application in the Azure portal.
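For reference, here’s roughly what that appSettings section looks like; the GUID is a placeholder, and apart from ida:ClientId the exact set of keys the tools generate may differ slightly from project to project:

```xml
<configuration>
  <appSettings>
    <!-- this value should match the CLIENT ID shown for your
         application in the Azure management portal -->
    <add key="ida:ClientId" value="00000000-0000-0000-0000-000000000000" />
  </appSettings>
</configuration>
```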
Scroll down to the bottom of the page for the application to the “permissions to other applications” section. You should see one entry in there already, and that’s for the permission we requested for working with the directory data. Now click on the drop down in that section that says “Select application”, and select Windows Azure Service Management API. On the drop down next to it that says “Delegated Permissions: 0”, click on it and check the box next to the item that says “Access Azure Service Management”.
Once you’ve done that click the SAVE button at the bottom of the page. You’re done now with all of the configuration you need to do in Azure, so let’s shift gears back into our project.
The next thing we need to do is plug the application key values we need into our application. That includes the client ID, redirect URI, the Discovery Service resource ID and the Azure Service Management resource ID. You can get the client ID and redirect URI values from the app.config file that was added to your project. If for some reason you can’t find it in your project, you can also find it in the Configure tab for the application in the Azure management portal. The client ID value is in the CLIENT ID field, and the redirect URI value is in the REDIRECT URIS list. Finally, we are also going to use what’s called the “Common Authority” for getting an authentication context – an ADAL (Active Directory Authentication Library) class that is used under the covers to get an access token to the o365 and Azure resources.
The resource IDs are fixed – meaning they are always the same, no matter what application you are using them from. So with that in mind, here’s what the values look like once I’ve added them to the code behind for my winforms application:
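Here’s a sketch of what those fields look like in the form’s code behind. The client ID and redirect URI shown are placeholders you’d swap for your own application’s values; the authority and resource ID strings are the well-known values described above:

```csharp
//placeholders: replace with the CLIENT ID and REDIRECT URIS values
//from your own application's configuration in the Azure portal
private const string ClientID = "00000000-0000-0000-0000-000000000000";
private static readonly Uri ReturnUri = new Uri("https://localhost/MyApp");

//the Common authority lets a user from any tenant sign in
private const string CommonAuthority = "https://login.windows.net/Common";

//fixed resource IDs - the same no matter what application uses them
private const string DiscoveryResourceId =
    "https://api.office.com/discovery/";
private const string AzureManagementResourceId =
    "https://management.core.windows.net/";
```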
Okay, we’re getting awfully close to actually writing some code…the last thing we need to do before we start though is to add the ADAL NuGet package to our application. So in Visual Studio open up the NuGet package manager, search for the ADAL package, and add it. When you’re done you should see it AND the Microsoft Office 365 Discovery Library for .NET, which was added when we configured our application previously:
Now we’ll add the last couple of helper variables we need – one which is the base Url that can be used to talk to any specific Azure tenant, and the other is a property to track our AuthenticationContext:
private const string BASE_TENANT_URL = "https://login.windows.net/{0}";
public static AuthenticationContext AuthContext { get; set; }
Okay, now let’s add the code that will get the access token for us, and then we’ll walk through it a little bit:
private async Task<AuthenticationResult> AcquireTokenAsync(string authContextUrl, string resourceId)
{
    AuthenticationResult ar = null;
    try
    {
        AuthContext = new AuthenticationContext(authContextUrl);
        //we would have gotten this when we authenticated previously
        if (AuthContext.TokenCache.ReadItems().Count() > 0)
        {
            //re-bind AuthenticationContext to the authority source of the cached token.
            //this is needed for the cache to work when asking for a
            //token from that authority.
            AuthContext = new AuthenticationContext(
                AuthContext.TokenCache.ReadItems().First().Authority);
        }
        //try to get the AccessToken silently using the resourceId that was passed in
        ar = await AuthContext.AcquireTokenSilentAsync(resourceId, ClientID);
    }
    catch (Exception)
    {
        //request the token using the standard oauth flow
        try
        {
            ar = AuthContext.AcquireToken(resourceId, ClientID, ReturnUri);
        }
        catch (Exception acquireEx)
        {
            //let the user know we just can't do it
            MessageBox.Show("Error trying to acquire authentication result: " +
                acquireEx.Message);
        }
    }
    return ar;
}
The first thing we’re doing is creating a new AuthenticationContext for a particular authority. We use an authority for working with an AuthenticationContext so that we can manage the cache of AuthenticationResults for an authority. ADAL provides a cache out of the box, so once we get an AuthenticationResult from an AuthenticationContext, we can just pull it from that cache without having to go through the whole oAuth flow all over again. The AuthenticationResult by the way is where we get the things we need to access a resource – an access token and a refresh token.
Once we’ve created our AuthenticationContext again then we can try and acquire the AuthenticationResult out of the cache, which is done with the call to AcquireTokenSilentAsync. If there’s an AuthenticationResult in cache then it will be returned to you. If not then it throws an exception, which we catch directly below that call.
Once we get through that part of the code we look to see if we were able to get an AuthenticationResult. If not, we’ll go ahead and use the oAuth flow to obtain one. That means a dialog will pop up and the user will have to enter their credentials and approve our application to access the content we said we needed when we configured the permissions for our application.
Now that we’ve got the code out of the way to get an AuthenticationResult, we can write the code to actually go work with our data. This is where you can see this kind of interesting intersection between the o365 APIs and the Azure Service Management API that I was describing at the start of this post. As I alluded to earlier, I really put this code together and wrote this post for two reasons: 1) there are SO MANY o365 API examples that either don’t include clear instructions on how to obtain an access token, or have code that either does not compile or does not work. This is proof, I guess, that APIs change, right? Now you have an example that works (at least for today). 2) the Service Management API requires the tenant ID to work with it. Well, as it turns out, when you get your AuthenticationResult from a call to the DiscoveryService through the o365 APIs, you will get the tenant ID back.
So you can use this pattern – call the DiscoveryService, use the CommonAuthority for the authContextUrl and authenticate, then you will have the tenant ID. Then take the tenant ID, use the login Url for the specific tenant as the authContextUrl and get an AuthenticationResult to use with the Service Management APIs (we’ll end up getting the access token silently). Once you have that you can take the access token from it to do whatever you’re going to do with Service Management APIs.
Now, let me explain one other behavior that you may not be aware of, and then explain how that impacts the code I described above. If you read my description of the pattern above, you may notice this: in the first call to create an AuthenticationContext I use the CommonAuthority for the authContextUrl, which is https://login.windows.net/Common. In the next call I use an authContextUrl of https://login.windows.net/myTenantIdGuid. So if I’m using two different Urls to create my AuthenticationContext, then how could there be something in the cache for me to use when as I say above, on the second call I’m going to get the AuthenticationResult out of the cache? Well, it turns out that when you create the AuthenticationContext with the CommonAuthority, after the user actually authenticates ADAL changes the Authority property of the AuthenticationContext from CommonAuthority to https://login.windows.net/myTenantIdGuid. This is actually pretty cool. The net result of all this is – I actually don’t need to authenticate into the DiscoveryService at all to get the tenant ID for the tenant being used. So what’s the net of everything I’ve given you so far? Well as I explained at the beginning, you now have some nice code that actually does work with the o365 APIs. However now that you understand how it works you can see you really don’t need to call into the DiscoveryService to get the tenant ID, so you learned a little something about how the AuthenticationContext works in the process.
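The rewriting behavior described above can be sketched in a few lines. This assumes the AcquireTokenAsync helper, AuthContext property, and field names shown earlier in this post; TenantId is a property ADAL exposes on AuthenticationResult:

```csharp
//sketch: authenticate once against the Common authority; after the
//user signs in, ADAL rewrites Authority to the tenant-specific one
AuthenticationResult ar = await AcquireTokenAsync(CommonAuthority,
    AzureManagementResourceId);

//both of these now carry the tenant ID GUID, so there's no need to
//call into the DiscoveryService just to discover it
string tenantAuthority = AuthContext.Authority;  //https://login.windows.net/{tenantIdGuid}
string tenantId = ar.TenantId;
```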
Okay, so now that we know that, let’s take a look at the code to get our subscription data from Azure. You’ll see that it makes just a single call to get an AuthenticationResult for working with the Service Management APIs. Note, this code uses the new HttpClient library, which is installed as a NuGet package; here’s the package I added:
The code looks like this:
private async void GetSubscriptionsBtn_Click(object sender, EventArgs e)
{
    try
    {
        //base Url to get subscription info from
        string subscriptionUrl = "https://management.core.windows.net/subscriptions";

        AuthenticationResult ar =
            await AcquireTokenAsync(CommonAuthority, AzureManagementResourceId);

        if (ar == null)
        {
            //yeah this is me just being lazy; should really do something here
            return;
        }

        string accessToken = ar.AccessToken;

        if (!string.IsNullOrEmpty(accessToken))
        {
            //create an HTTP request for the subscription info
            HttpClient hc = new HttpClient();
            hc.DefaultRequestHeaders.Authorization = new
                System.Net.Http.Headers.AuthenticationHeaderValue("Bearer", accessToken);

            //add the header with the version (REQ'D)
            hc.DefaultRequestHeaders.Add("x-ms-version", "2014-06-01");

            HttpResponseMessage hrm = await hc.GetAsync(new Uri(subscriptionUrl));
            SubscriptionsTxt.Text = await hrm.Content.ReadAsStringAsync();
        }
    }
    catch (Exception ex)
    {
        MessageBox.Show("Error trying to acquire access token: " + ex.Message);
    }
}
So just to recap what we’ve been talking about…the first thing I do is get an AuthenticationResult using the CommonAuthority. If the person that signs in does not have rights to use the Service Management APIs for the tenant they sign into, then they won’t get the chance to grant the app permissions to do what it wants. Otherwise the person will ostensibly grant rights to the app, and you’ll get your AuthenticationResult. From that we take our access token and we add two headers to our request to the subscriptions REST endpoint: the authorization header with our access token, and a special Microsoft version header, which is configured per the Microsoft SDK. Then we just make our request, hopefully get some data back, and if we do we stick it in the text box in our application. Voila – mission accomplished! Here’s an example of the XML for a subscription (and yes, the GUIDs shown here are not the actual ones from my tenant):
My biggest regret here is that even though in some ways this is “simplified”, it still takes me nine pages of a Word document to explain it. However I do think you have some good code that you can go out and run with today to start working with both the o365 APIs as well as the Azure Service Management APIs, so I think it’s worth it. Just bookmark this post for a rainy day, and then when you’re ready to start developing on either SDK you know where to find the info to get you started. I've also attached to this post the complete source code of the application I wrote here. You'll just need to update the client ID and return URI in order to use it. I've also included the original Word document from which this somewhat ugly post was pasted.
I recently spent HOURS looking for a silly little SQL Azure connection string in my Visual Studio project. When I created the project I configured it to be deployed to an Azure web site and I also connected it with a SQL Azure server where I'm storing the data. Unfortunately I had entered an old password when I created the project, and from that point every time after I deployed to Azure web sites my project bombed with malice whenever I attempted to do anything that touched the database.
I actually tried to fix it myself in a variety of ways before stumbling across its secret shelter. I changed the settings in the Package/Publish SQL part of the project properties; no joy. I tried adding the password to the project-dev.json settings file; no joy. I added web.config transforms for both debug and release; no joy. I even published the app to my desktop so I could make sure the web.config was getting transformed correctly. It was, the right connection string was in there, but it was not being used. Grr!
Finally a particularly brilliant friend of mine, David C., pointed me in the right direction. When you need to change that password, here's what you have to do:
1. Open up the Server Explorer in Visual Studio. Expand the Azure subscription (or connect to your subscription if you haven't already), then expand the list of Azure web sites.
2. Right-click on the Azure web site to which you are deploying and click on the View Settings menu.
3. Around the middle of the page you will see a Connection Settings section. Go in there, change your connection string value, then click the Save button at the top of the page.
You're good to go at that point. Just republish your application and give it a try - everything should be working at that point. Hopefully this saves someone some time trying to find this info.
Continuing on with the theme of SAML secured SharePoint sites and SharePoint Apps, this next posting looks at another common application model, which is using what I call a desktop app to connect to SharePoint. By “desktop”, I mean an app that doesn’t have an HttpContext, like a console application or winforms app. It could run on a desktop, on a server, in an Azure web job, etc. but the fundamental theme is that there is no HttpContext associated with the request and there may not be an active user context either (as is the case with a console app).
To create this solution we’re going to build upon two previous Share-n-Dipity posts: http://blogs.technet.com/b/speschka/archive/2013/06/12/converting-a-vs-net-web-for-a-provider-hosted-sharepoint-app-to-a-full-blown-web-or-console-application.aspx and http://blogs.technet.com/b/speschka/archive/2014/09/30/an-updated-claimstokenhelper-for-sharepoint-2013-high-trust-apps-and-saml.aspx. There are some slightly newer steps required to get your desktop application configured to be used as a SharePoint App, and then I’ve made some additions to the ClaimsTokenHelper code to facilitate this scenario. When you’re done you’ll be able to use a desktop app for the following scenarios:
High trust with an App + User context
High trust with an App Only context
Low trust with an App Only context
So in short we can address all of the primary app contexts except for low trust App + User; the reason we can’t do that one is that as described above, there is no HttpContext so we can’t get SharePoint to give us a low trust user context. There’s still a lot to work with though so let’s walk through the process.
As a starting point for this scenario, I always recommend that you build out a working “standard” provider hosted SharePoint App for your scenario. To do that, start with Visual Studio and create your application using the Visual Studio wizard. It creates the SharePoint App as well as a web application project that uses IIS Express. From this point forward this web application project that VS.NET creates shall be referred to as the "original web app". Verify that everything is working correctly.
Next create the new application you are going to use - a console project or winforms project. You now need to copy over the configuration information from the web.config file of the original web app to your application. The difference of course is that neither a console nor winforms project has a web.config, so you will put the configuration information in the application settings. To do so, go into the Properties of the application and click on the Settings link on the left. If you are in a console application then you will see a link in the middle of the page that says "This project does not contain a default settings file. Click here to create one."; click on it to create a settings file. A winforms project displays the settings grid by default so you can just begin creating properties. Go ahead and create properties for each of the appSetting properties from the original web app’s web.config file - copy in both the key (as the setting Name) and value (as the setting Value). Make sure you configure each setting as an Application scope property (it is User by default).
Next, add the following references to your project:
Microsoft.IdentityModel
Microsoft.IdentityModel.Extensions
Microsoft.SharePoint.Client
Microsoft.SharePoint.Client.Runtime
System.IdentityModel
System.ServiceModel
System.Web
System.Web.Extensions
System.Configuration
Now create a new folder called “App_Code” and copy into it the TokenHelper.cs and SharePointContext.cs files from the original web project. Also copy in the ClaimsTokenHelper.cs and SharePointContextExtensions.cs files that are included with this post. After you’ve added the files change the namespace attribute in each class to match the namespace attribute of your project. For example, if you create a console project called “SamlConsoleApp”, then you should change the namespace attribute in each class to be:
namespace SamlConsoleApp
{
    //rest of your class here
}
Next you need to update the properties in TokenHelper.cs and ClaimsTokenHelper.cs that currently look to the web.config file for their values. They should instead use the application settings, since that’s where the configuration is being stored in console and winforms apps. In TokenHelper.cs look for the following properties:
ClientId
ClientSigningCertificatePath
ClientSigningCertificatePassword
IssuerId
ClientSecret
In ClaimsTokenHelper.cs find these properties:
TrustedProviderName
MembershipProviderName
Replace the value of each of these properties with a call to the application settings, like this:
private static readonly string TrustedProviderName = yourAppNamespace.Properties.Settings.Default.TrustedProviderName;
Where yourAppNamespace is the namespace for your console or winforms application.
NOTE: If you did not create each of those properties in your application’s Settings file, then you will not have corresponding properties for each item listed above in TokenHelper.cs and ClaimsTokenHelper.cs. That’s okay – just leave them as is and change the ones you DO have.
Finally, you need to change the modifier on the TokenHelper class from public to public partial. You can do that by changing the class from “public class TokenHelper” to “public partial class TokenHelper”. At this point all of the modifications are complete. You should compile your code and verify that it completes without error. If you missed any of the steps above then you will likely get compiler errors or warnings now that should direct you to the areas that need to be fixed. Now you can finally start writing some code!
The actual code to connect to SharePoint and get a ClientContext varies depending upon the scenario you are using. Here’s an example in a console app of all three use cases I described above:
//this is the SharePoint site we want to work
//with; you must provide this since there is no HttpContext
Uri hostWeb = new Uri("https://samlpnp.vbtoys.com");

//request using:
//1. High Trust
//2. App + User context
using (var userContext =
    SharePointContextProvider.Current.CreateDesktopUserContext(hostWeb,
    TokenHelper.IdentityClaimType.SMTP,
    "sam.smith@contoso.com"))
{
    //your code here
}

//request using:
//1. High Trust
//2. App Only context
using (var highAppContext =
    SharePointContextProvider.Current.
    CreateDesktopHighTrustAppOnlyContext(hostWeb))
{
    //your code here
}

//request using:
//1. Low Trust
//2. App Only context
using (var lowAppContext =
    SharePointContextProvider.Current.
    CreateDesktopLowTrustAppOnlyContext(hostWeb))
{
    //your code here
}
Now for a couple of notes on the implementation. I made changes to both the ClaimsTokenHelper and SharePointContextExtensions classes from my previous post on this topic. For ClaimsTokenHelper I modified it so that you can pass it the identity claim you want to use for the user when using an App + User context. Again, this is because there is no HttpContext so you don’t have access to things like a claims collection or an authentication process you can plug into. You can use the current process identity, a configuration file, or even just hard code the identity you want to use into your application. You aren’t sending along credentials in this case; you’re just telling SharePoint the user context that should be used when it processes your CSOM calls. That is exactly what high trust was designed to do.
In addition, I added some new methods to the SharePointContextExtensions class. Again, the original SharePointContextProvider class creates a SharePointContext based on an HttpContext, and then you create a ClientContext from that. Since the HttpContext doesn’t exist, none of those methods work. To work around that, we can bypass creating the SharePointContext and go straight to creating a ClientContext. The extensions class was updated to add the methods you see demonstrated above: CreateDesktopUserContext, CreateDesktopHighTrustAppOnlyContext, and CreateDesktopLowTrustAppOnlyContext. As a side note, one of the other interesting “features” of doing this in a desktop app is that you can mix both low trust and high trust calls in the same application. Because of the way the SharePointContextProvider class that comes with Visual Studio uses session state to manage your context, this is not possible in a web application. I don’t know if that will ever matter to anyone, but it doesn’t hurt to have another capability in the tool belt.
Here are screenshots of the code executing successfully in a console app and then in a winforms app:
That’s it for this post. I’ve attached a zip file with the updated ClaimsTokenHelper.cs and SharePointContextExtensions.cs files, as well as two sample projects – one a console app and the other a winforms app that demonstrates using these new classes and techniques. Good luck!
Low trust provider hosted apps in a SAML secured SharePoint web application is a scenario that did not work when SharePoint 2013 was released. Fortunately, things have changed, so here's a quick rundown of what you need to do to build these apps on premises. The first thing you need to do is apply the April 2014 CU or later.
Once you’ve applied that you’ll need to decide how you want to configure authentication on your provider hosted apps. Generally you’ll find the best approach is to use a single host for your provider hosted apps, install each app into its own subdirectory, and use an authentication mechanism that is the same as you use for your SharePoint web applications. There may be cases where you might not want to do that; for example, if your SharePoint web applications use multi-factor authentication, you may not want your users to have to enter their credentials again – twice – when they use an app that is part of a SharePoint site to which they’ve already authenticated. As explained in the beginning, that’s okay – you can use a different authentication mechanism if needed for your low trust provider hosted app because the user identity is maintained by SharePoint even after the user authenticates to the app.
When the provider hosted environment is configured, the next thing you need to do is create a ServicePrincipal in your Office 365 tenant. The Office 365 tenant uses a special version of Access Control Services (ACS) that is responsible for providing the tokens (context, refresh and access) that low trust SharePoint apps use to authenticate and authorize with SharePoint. The process is described in greater detail in this blog post: http://blogs.technet.com/b/speschka/archive/2013/07/29/security-in-sharepoint-apps-part-3.aspx. More importantly, MSDN has published instructions and a script you can use to create the ServicePrincipal and configure it to allow the ServicePrincipal to issue tokens to a SharePoint web application. You can find the MSDN article here: http://msdn.microsoft.com/en-us/library/dn155905.aspx. When you create your ServicePrincipal, you need to add the hostname of every SharePoint web application in which you want to use apps to a collection of SPNs on the ServicePrincipal, i.e. ServicePrincipal.ServicePrincipalNames. The MSDN article includes a script at the bottom that will enumerate all of the web applications in your farm and add the hostname of each one to the ServicePrincipalNames collection. However, if you add a new web application in the future, you also need to remember to add its hostname to the ServicePrincipalNames collection. If you don’t, then apps installed in that web application will not be able to get a valid token.
Another option that can be used so you don’t have to revisit your ServicePrincipalNames collection is to add a wildcard. If all of your SharePoint web applications use a common domain for the host name then you can just add a wildcard for the domain. For example, if you have two web applications – portal.contoso.com and intranet.contoso.com – then you can just add a "*.contoso.com" wildcard to the ServicePrincipalNames collection. Here’s a PowerShell example of how to do that:
#you will be prompted to enter the credentials of an o365 Global Admin here
connect-msolservice
$spoid="00000003-0000-0ff1-ce00-000000000000"
$p = Get-MsolServicePrincipal -AppPrincipalId $spoid
$spns = $p.ServicePrincipalNames
$spns.Add("$spoid/*.contoso.com")
Set-MsolServicePrincipal –AppPrincipalId $spoid –ServicePrincipalNames $spns
After you’ve completed creating and configuring the ServicePrincipal, you can begin deploying your low trust provider hosted applications on SharePoint web applications that are secured with SAML authentication.
When Visual Studio 2013 came out, it introduced a new class and simplified methods for obtaining a ClientContext to use with the Client Side Object Model (CSOM) to access SharePoint 2013 sites. A new SharePointContext class was added to simplify the programming model, but internally it still called the TokenHelper class that originally shipped with Visual Studio 2012.
Shortly after SharePoint 2013 shipped, I provided an additional class – the ClaimsTokenHelper class – to be used when your SharePoint sites are secured using SAML authentication (http://blogs.technet.com/b/speschka/archive/2013/07/23/3539509.aspx). Neither the original TokenHelper class nor the new SharePointContext class provides a means to properly identify a SAML claims user. I decided to take a fresh look at the ClaimsTokenHelper implementation and see if I could find a way to update things to keep it more closely aligned with the development model used in the new SharePointContext class. What I ended up doing is creating a new SharePointContextExtensions class, and it allows you to use virtually the same exact programming model as you do now with the SharePointContext class.
Here’s an example of the code you use to access a site title using the SharePointContext class:
var spContext = SharePointContextProvider.Current.GetSharePointContext(Context);
using (var clientContext =
    spContext.CreateUserClientContextForSPHost())
{
    clientContext.Load(clientContext.Web, web => web.Title);
    clientContext.ExecuteQuery();
    Response.Write(clientContext.Web.Title);
}
Now, here’s an example of using the new SharePointContextExtensions class:
using (var clientContext =
    spContext.CreateUserClientContextForSPHost(TokenHelper.IdentityClaimType.SMTP))
As you can see, literally the only difference between the two now is that you need to specify which OAuth identity attribute you want to use as the identity claim – SMTP, SIP, or UPN. Also, there is a corresponding method for getting a ClientContext for an App Only token. Here’s the out of the box syntax:
spContext.CreateAppOnlyClientContextForSPHost();
And here’s the corresponding method when using SAML:
spContext.CreateAppOnlySamlClientContextForSPHost();
In this case I had to actually change the method name, because it would otherwise have the same exact method name and signature as the SharePointContext class. I’ve attached the SharePointContextExtensions and ClaimsTokenHelper classes to this posting. Using them is pretty straightforward:
That’s it – you should be up and running in no time!
NOTE: This is a sampling of some content we're preparing for working with SharePoint Apps and SAML authentication. More content will be coming, and once everything is packaged up and a distribution channel determined I'll post a general announcement on the Share-n-Dipity blog.
Using SAML authentication with SharePoint-hosted apps has been a painful proposition since SharePoint 2013 first shipped. The big stumbling block has been that because of the way the host names are created for SharePoint-hosted apps - each application installed gets its own unique host name - it required an identity provider that supported a wildcard reply Url. At the time SharePoint shipped, we did not have such an identity provider. Fortunately, when ADFS 3.0 came out with Windows Server 2012 R2, it included this functionality. That now enables us to configure an environment in which we can use SharePoint-hosted apps on web applications that are secured with SAML authentication. What I'm going to show here is a medium level overview of what needs to be done. I'm going to attach a Word document to this post that also includes pictures at each main step along the way, so download the document to get (literally) a clearer picture of how to do each of these things. Now, here are the steps to get this working:
POWERSHELL TO CREATE SPTRUSTEDIDENTITYTOKENISSUER:
$cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2("c:\adfs_tokensigning.cer")
New-SPTrustedRootAuthority -Name "ADFS Token Signing Certificate" -Certificate $cert
$map = New-SPClaimTypeMapping -IncomingClaimType "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress" -IncomingClaimTypeDisplayName "EmailAddress" -SameAsIncoming
$map2 = New-SPClaimTypeMapping -IncomingClaimType "http://schemas.microsoft.com/ws/2008/06/identity/claims/role" -IncomingClaimTypeDisplayName "Role" -SameAsIncoming
$map3 = New-SPClaimTypeMapping -IncomingClaimType "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn" -IncomingClaimTypeDisplayName "UPN" -SameAsIncoming
$realm = "urn:sharepoint:spsamlapps"
$ap = New-SPTrustedIdentityTokenIssuer -Name "ADFS v3" -Description "ADFS v3" -realm $realm -ImportTrustCertificate $cert -ClaimsMappings $map,$map2,$map3 -SignInUrl "https://yourAdfsFarm.yourDomain.com/adfs/ls" -IdentifierClaim "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress"
POWERSHELL TO HAVE SPTRUSTEDIDENTITYTOKENISSUER USE WREPLY:
$ap = get-sptrustedidentitytokenissuer -identity "ADFS v3"
$ap.UseWReplyParameter = $true
$ap.Update()
NOTE: An important distinction in the configuration for this scenario is that, unlike when you have a second content web application use an SPTrustedIdentityTokenIssuer, you do not need to add the URI for the app’s listener web application to the ProviderRealms property of the SPTrustedIdentityTokenIssuer, nor do you need to create a second relying party in ADFS.
This post is an update to the original architectural guidance I published previously at http://blogs.technet.com/b/speschka/archive/2013/10/11/architecture-design-recommendation-for-sharepoint-2013-hybrid-search-features.aspx. If you read that post then you’ll recall that we had a “scenario problem” with hybrid search when SharePoint 2013 released. The problem, which I explain more fully in that post, is that there wasn’t a good way to publish both an endpoint for hybrid search and an endpoint for users outside of your firewall to access the SharePoint farm.
IMPORTANT: The features described in this post require that you install the April 2014 CU or later for SharePoint 2013. That introduced one breaking change that you will also need to fix for everything to work. Please see this post for details and the fix: http://blogs.technet.com/b/speschka/archive/2014/08/28/you-start-getting-a-401-unauthorized-error-when-using-the-sharepoint-hybrid-features-after-applying-april-2014-cu-or-later.aspx.
The good news is that the team has been able to add some new functionality to the hybrid features such that we can now support this scenario. In short what needs to be done is:
Here are a few more details on these steps. To help illustrate, let’s assume you have a SharePoint zone with a Url of https://portal.contoso.com and it is reachable on your corporate network at IP address 10.1.1.1. You have a reverse proxy in your DMZ and it is configured to listen for incoming requests on IP address 175.10.10.10. Now let’s see how this scenario would be implemented.
When https://portal.contoso.com was created it was added to the default zone. We’re going to add another incoming Url for the zone that will be used for hybrid search, so I’ll call it https://hybrid.contoso.com. Now in terms of how you add the incoming Url I’ll just say that there are a few ways of doing it, and a lot of documentation out there for how to do it. For my purposes I created my web application with this in mind, so I used hybrid.contoso.com as the Host Header value and https://portal.contoso.com as the Public Url. After the web app was created I had to a) go add an incoming Url for the zone of https://hybrid.contoso.com and b) add another HTTPS binding in IIS on my web application so that it listens for portal.contoso.com. Since I used hybrid.contoso.com as the Host Header value when I created the web application, that HTTPS binding was already created in IIS. I used the SNI feature in IIS so I could set both host header values and still use SSL.
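If you need to add the incoming Url after the web application has already been created, you can do it with PowerShell. Here’s a sketch using the example host names above (New-SPAlternateURL with the -Internal switch adds an incoming Url to an existing zone):

#add https://hybrid.contoso.com as an incoming (internal) Url
#on the Default zone of the existing web application
New-SPAlternateURL -Url "https://hybrid.contoso.com" `
    -WebApplication "https://portal.contoso.com" `
    -Zone Default -Internal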
Configuring DNS for users to access the Public Url of the zone can be done in one of two ways:
Configuring DNS for the incoming Url for the SharePoint zone is much easier; you’re just going to create one A record in your external DNS for “hybrid.contoso.com” and it will use the IP address of the reverse proxy server, which is 175.10.10.10.
The exact details of how you publish endpoints in a reverse proxy are going to vary by the proxy product being used. For an example of how to use WAP in Windows Server 2012 R2 you can see one of my prior posts here: https://blogs.technet.com/b/speschka/archive/2013/12/23/configuring-windows-server-2012-r2-web-application-proxy-for-sharepoint-2013-hybrid-features.aspx. At a high level though there’s really just a couple of concepts you need to know when you publish the endpoints:
The goal here is to have two unique hostnames for the same SharePoint content. By using the AAM feature, when the request comes in for hybrid.contoso.com, any search results that it returns will be rendered using the Public URL for the zone, which is portal.contoso.com. When a user clicks on a search result then they will be sent to whatever IP address resolves for portal.contoso.com and they will be able to access the SharePoint content using their credentials, without having to provide a client certificate like hybrid search does. In your Office 365 tenant that also means that when you create the Result Source for the on premises farm, you need to configure the Url to be https://hybrid.contoso.com so that it gets routed to the correct published application on the reverse proxy server.
This is the last and most important step, which was provided by the April 2014 CU. A new property was added to both the SPSecurityTokenServiceConfig as well as SPWebApplication. The property is called UseIncomingUriToValidateAudience and is set to False by default. In order to get the hybrid features to use the AAM lookup as we’ve configured above you need to set it to true. To make this change farm wide, use the SPSecurityTokenServiceConfig object; to set it on just one web application use the SPWebApplication. Here’s an example of the PowerShell needed to set it at the farm level:
$cfg = Get-SPSecurityTokenServiceConfig
$cfg.UseIncomingUriToValidateAudience = $true
$cfg.Update()
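If you’d rather scope the change to a single web application instead of the whole farm, a sketch of the equivalent (using the example Url from earlier) looks like this:

#set the property on just one web application
$wa = Get-SPWebApplication "https://portal.contoso.com"
$wa.UseIncomingUriToValidateAudience = $true
$wa.Update()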
Once you’ve completed all of these steps you should be able to have Office 365 issue inbound queries to your on premises farms and get search results back that are rendered using the Public Url of your SharePoint zone. This request will be securely authenticated using the client certificate you configure the Office 365 Result Source to use. You will also be able to have users outside your corporate network access the SharePoint zone using their corporate credentials, and they will not be required to present a client certificate to get to the SharePoint farm. This is a really nice improvement in the hybrid features so I hope you find it useful. For more details on the new property that was added please see this article on TechNet: http://technet.microsoft.com/en-us/library/dn751515(v=office.15).aspx.
This sounds scarily like a KB article, which I don't do, so we'll just jump straight to the facts. I noticed that after I applied the April 2014 CU to my farm, hybrid inbound search no longer worked. I subsequently confirmed with some other folks that they were seeing the same issue after applying that CU. What ends up happening is that you'll see a 401 unauthorized error if you try working with the query in the query rule editor. If you look on the on prem farm you'll see the following error messages in the ULS log after you try to issue a query from o365:
Error trying to search in the UPA. The exception message is 'System.ArgumentException: Exception of type 'System.ArgumentException' was thrown.The set of claims could not be mapped to a single user identity. Exception Exception of type 'System.ArgumentException' was thrown. Parameter name: value has occured.The registered mappered failed to resolve to one identity claim. Exception: System.InvalidOperationException: Exception of type 'System.ArgumentException' was thrown. Parameter name: value
Fortunately, some bright individual has come up with a work-around that will get your inbound queries working again. You just need to run the following PowerShell script once in your farm:
$config = Get-SPSecurityTokenServiceConfig
$config.AuthenticationPipelineClaimMappingRules.AddIdentityProviderNameMappingRule("OrgId Rule", [Microsoft.SharePoint.Administration.Claims.SPIdentityProviderTypes]::Forms, "membership", "urn:federation:microsoftonline")
$config.Update()
That should fix it for now. I don't know if all of this will be rolled into some future CU, but you at least can get back to working with hybrid in the meantime.
Today’s topic is one that came about after I heard some folks kind of unhappy about using Azure web sites as a platform for SharePoint 2013 provider-hosted apps. The unhappiness was really just about the fact that you only get 10 Azure web sites for free, and some folks were thinking this meant they could only deploy 10 SharePoint Apps for free. I had hoped for a better story, so I spent some time rooting around this problem until I got the right combination to fall in place to solve this issue. This post is going to read like one big list of instructions, but don’t just glance at it and decide it’s too complicated – it’s not! There are a number of things that you need to do up front the first time, but after that (and once you get in the rhythm of doing it once or twice) I think you’ll find it’s a pretty manageable thing. Plus, it’s free – that’s gotta be worth a LITTLE investment of time to understand it. Okay, so let’s get started.
Go to the Azure Management Portal and click on the Web Sites link in the left navigation.
Click on the name of the web site where you want to deploy your SharePoint App.
Click on the "Download the publishing profile" in the right task menu and save the file to your development server.
Open the publishing file in a text editor like notepad.
Scroll to the right until you see the publishUrl that starts with "ftp://". Copy that Url, open your browser, paste it in and navigate to it.
You will get prompted to enter a username and password. Go back to the publishing profile file and copy the FTP username and password out of it. When you're done, the browser should open an FTP session with the Azure web site.
Follow the instructions in the browser window to open the site in File Explorer: press Alt, click View, and then click Open FTP Site in File Explorer.
Right click in the right pane of File Explorer and select New...Folder. Name the folder whatever your virtual directory is going to be named; for this example I'm naming the folder "azurethree".
Go back to the browser and return to the web site dashboard for the Azure Web site.
Click on the Configure link in the top navigation for the web site.
Scroll to the bottom of the page and enter the values for your new virtual path. The "VIRTUAL DIRECTORY" name should be the name you want at the end of the Url for your website. For example, if your Azure web site is at steveapps.azurewebsites.net and you want to have your application Url to be https://steveapps.azurewebsites.net/azurethree, then you would enter "/azurethree" for the VIRTUAL DIRECTORY NAME. For the "PHYSICAL PATH RELATIVE TO SITE ROOT" you should enter "site\wwwroot\yourDirectoryName". Using our "azurethree" example, you would enter "site\wwwroot\azurethree". Finally, MAKE SURE you check the Application box.
Click the SAVE button on the bottom navigation to save your changes.
Go to a site in your o365 tenant and use /_layouts/15/appregnew.aspx to create a new client ID and secret; use the domain of your Azure web site for the App Domain value.
Copy the client ID and paste it into the AppManifest.xml file in your App project and web.config of your web site project.
Copy the client secret and paste it into the web.config of your web site project.
Right click on your web project and select Publish...
Click the Import... button on the first page of the publishing wizard and import the publishing profile you downloaded.
By default it selects the web deploy option but we want to use the FTP deploy option, so click the Previous button in the wizard. We don't use Web Deploy because no matter what you select it always deploys to the root of the site and we want to deploy to a subdirectory. This is also a good reason to NOT have an application in the root site, because someone could accidentally overwrite it.
Use the drop down list of deployment profiles and select the one with "FTP" in the profile name then click Next.
In the Site path field add "/yourFolder" name at the end. For example, we're using "azurethree", so the Site path is "site/wwwroot/azurethree". In the Destination Url at the bottom of the wizard add your virtual directory to the end, i.e. "azurethree", so it looks like https://steveapps.azurewebsites.net/azurethree.
Make whatever other configuration changes you'd like to the publishing job and click the Publish button when finished. Your web app should now deploy successfully. The browser window should open up and if you get a 403 Forbidden error, don't worry, things should be okay. You can always verify by going back to your open FTP window. Double-click on the folder you created ("azurethree" in the example I've been doing). Don't panic if you don't see any files; just right click and select Refresh. You should see all of the files for your app in there.
Go back to Visual Studio and right click on your App Project and select Publish...
Click the Package the app button.
Make sure the Url for the website points to your site with the virtual directory in the Url, i.e. https://steveapps.azurewebsites.net/azurethree, paste in the client ID, then click the Finish button.
Copy the published .app file to your app catalog
Add the app to your SharePoint site and try it out.
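As a point of reference for the steps above: the client ID and secret you generate on appregnew.aspx land in the appSettings section of your web project's web.config, where TokenHelper reads them. It looks something like this (the values shown are placeholders):

<appSettings>
  <add key="ClientId" value="your-client-id-guid" />
  <add key="ClientSecret" value="your-client-secret" />
</appSettings>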
Here's a collage of three different SharePoint Apps running in a single Azure web site; you can validate that by looking at the Url in the browser window.
In Part 5 of this series we looked at all of the different ways in which CloudTopia is integrated with Office 365 through our custom Web API REST controller. We’re going to wrap up this series with a little sizzle, and look at the custom Windows Phone app we wrote to work with CloudTopia using voice recognition, speech, and Cortana.
Here's some quick links to the whole series:
You heard it here first – a picture’s worth a thousand words. To start with I’d recommend watching the demo video of the Cortana Social Events app that you will find here: http://1drv.ms/1kPjjKf.
Okay, hopefully you watched the video and now your appetite has been whetted a little. So let’s talk Windows Phone 8.1, using voice and integrating with Cortana. The implementation itself consists of two parts:
Your custom app that does voice recognition and optionally speech.
Integration with Cortana
All of the coding and logic happens in your app. Integration with Cortana actually just happens via an Xml file that is both surprisingly powerful and easy. In fact that’s my theme for this entire post – the folks that have been doing the work on speech in Windows Phone need to take a friggin’ bow: their stuff works pretty dang good and the implementation is much simpler than I expected.
So how do we do it? Well first and foremost I would recommend that you download the MSDN sample app that demonstrates all this functionality from http://aka.ms/v4o3f0. When I first started looking around I didn’t find any kind of handy tutorial to get me on my way so I relied heavily on this app. With that as a resource for you to lean on, here are the basic steps to integrating voice recognition, speech and Cortana into your Windows Phone apps:
Create a new Windows Phone Silverlight App
Add a SpeechRecognizer instance in code to listen; add a SpeechSynthesizer to speak
Initialize the Recognizer
Tell the Recognizer to start listening
When the Recognizer completed action fires, take the recognized words and do “something”
Create New Windows Phone Silverlight App
This should be pretty obvious, but I just want to make sure I call it out. As of the time I am writing this post, all of this voice goodness is not integrated with Windows Universal Apps, so when you create a new project choose the Windows Phone Silverlight app project type as shown here:
Add Speech Recognizer and Synthesizer
Start by adding instances of SpeechRecognizer, SpeechSynthesizer, plus a few other classes that are used for maintaining state to the “main” page of your application (typically mainpage.xaml). This is the page where you’re going to do your voice recognition, and eventually connect to Cortana. The instances you add should look like this:
// State maintenance of the Speech Recognizer
private SpeechRecognizer Recognizer;
private AsyncOperationCompletedHandler<SpeechRecognitionResult> recoCompletedAction;
private IAsyncOperation<SpeechRecognitionResult> CurrentRecognizerOperation;
// State maintenance of the Speech Synthesizer
private SpeechSynthesizer Synthesizer;
private IAsyncAction CurrentSynthesizerAction;
In order to get these all to resolve you may need to add some using statements for Windows.Phone.Speech.Recognition, Windows.Phone.Speech.Synthesis, and Windows.Phone.Speech.VoiceCommands. Okay, step 2 complete.
Initialize the Recognizer
The next step is to initialize the Recognizer – this is what’s going to do the speech recognition for us. To start that process we’ll create a new instance of the Recognizer like so:
this.Recognizer = new SpeechRecognizer();
Once the Recognizer has been created we need to add grammar sets. The grammar sets are the collection of words the Recognizer is going to use to recognize what it hears. While this could potentially be a daunting task, it is – surprise – pretty easy in most cases. Here’s what I did to populate my grammar set:
this.Recognizer.Grammars.AddGrammarFromPredefinedType("search", SpeechPredefinedGrammar.WebSearch);
await this.Recognizer.PreloadGrammarsAsync();
Boom – there you go – done adding my grammar set. There are a couple of grammar sets that ship out of the box, and so far I have found the WebSearch set to do everything I need. You can also create your own grammar sets if you like; dev.windowsphone.com has documentation on how to do that. The last thing you want to do for the initialization is to define what to do when speech is actually recognized. To do that we’re going to add a handler for when speech recognition is completed. Remember above where I made this declaration – private AsyncOperationCompletedHandler<SpeechRecognitionResult> recoCompletedAction? We’re going to use that recoCompletedAction variable now, like this:
recoCompletedAction = new
    AsyncOperationCompletedHandler<SpeechRecognitionResult>
    ((operation, asyncStatus) =>
{
    Dispatcher.BeginInvoke(() =>
    {
        this.CurrentRecognizerOperation = null;
        switch (asyncStatus)
        {
            case AsyncStatus.Completed:
                SpeechRecognitionResult result =
                    operation.GetResults();
                //use the recognized text here
                LaunchSearch(result.Text);
                break;
            case AsyncStatus.Error:
                //respond to error; often the user
                //hasn’t accepted the privacy policy
                break;
        }
    });
});
So what we’re saying here is that when the speech recognition event happens, if we completed recognition successfully then we’re going to extract the collection of recognized words by getting a SpeechRecognitionResult and looking at its Text property. If there was an error, then we’ll need to do something else. Where I found the error condition happening was after building my app, when I tried the app out on the emulator. You always have to accept the privacy policy before it will do voice recognition and it’s an easy thing to forget when you start up a new emulator session. Other than that though, you are set – that’s all you need to do to initialize the Recognizer before you start listening for speech.
Use the Recognizer to Start Listening
Now that the Recognizer is configured you can start listening. It’s pretty easy – just start listening asynchronously and configure the listening completed handler you want to use, which will be the recoCompletedAction variable I was describing above:
this.CurrentRecognizerOperation = this.Recognizer.RecognizeAsync();
this.CurrentRecognizerOperation.Completed = recoCompletedAction;
As far as speech recognition goes, that’s it – you’re done! Pretty easy, huh? It’s amazingly simple now to add voice recognition to your apps so I recommend you go out and give it a whirl. For the CloudTopia application, when speech recognition was completed you’ll notice that I invoked a method called LaunchSearch. In that method I create a query term that I want to use for CloudTopia and then I send it off to another REST endpoint I created in my SharePoint App project. See, those Web API REST endpoints are very valuable!
The way I decided to implement the search capability was to make the spoken query search for Twitter tags that have been configured for an event. So after recognizing some spoken words I get a list of all the words that I think are not noise words. To do that, I just keep a list of words that I consider to be noise and I load those into a List<string> when my Windows Phone app starts up. It includes words like “event”, “Cortana”, “find”, etc. After I eliminate all the noise words I see what’s left and, assuming there are still one or more words there, I concatenate them together. They get concatenated because we’re going to be searching for hashtags, and hashtags are always going to be a single word. That makes it easy to say things like “yammer of july” and then have it concatenated into the hashtag “yammerofjuly” (if you read about this series on Twitter you’ll know exactly what I’m talking about).
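The actual implementation is C# in the Windows Phone app, but the noise-word filtering and concatenation idea can be sketched in a few lines (the noise-word list here is illustrative, not the app's real list):

```python
# words treated as noise and dropped from the spoken query;
# note "of" is kept so "yammer of july" stays intact
NOISE_WORDS = {"find", "search", "for", "me", "event", "events", "cortana"}

def get_search_term(spoken_text):
    """Drop noise words, then concatenate whatever remains into a
    single word, since hashtags are always a single word."""
    keepers = [w for w in spoken_text.lower().split() if w not in NOISE_WORDS]
    return "".join(keepers)

# "find yammer of july events" -> "yammerofjuly"
print(get_search_term("find yammer of july events"))
```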
If I have a hashtag to search for I use the SpeechSynthesizer to speak back to the person what I’m going to search for (I’ll cover that in more detail in just a bit) and then I go send off my query to my REST endpoint. The chunk of code to do all of that is here:
//we really only want to look for a single phrase, which could be several words
//but in a tag are represented as a single word.
//in order to do that we'll take all the words we're given,
//extract out the noise words, and then create a single word
//that's a concatenation of all the remaining non-noise words
string searchTerm = GetSearchTerm(queryTerms);

if (!string.IsNullOrEmpty(searchTerm))
{
    //create the template for speaking back to us
    string htmlEncodedQuery = HttpUtility.HtmlEncode(searchTerm);
    StartSpeakingSsml(String.Format(
        AppResources.SpokenSearchShortTemplate, htmlEncodedQuery));

    //update UI
    WaitTxt.Text = "Searching for \"" + searchTerm + "\" events...";

    //execute the query
    QueryEvents(searchTerm);
}
else
{
    WaitTxt.Text = "Sorry, there were only noise words in your search request";
}
That code sets us up to send the query to the REST endpoint, and here’s where we actually do the query; one of the things it hopefully demonstrates is just how easy it is to use a REST endpoint. Within the REST controller it takes the search term that was passed in and looks for a match against any of the Twitter tags in SQL Azure. It uses a LIKE comparison so you don’t need to have the exact tag to find a match. If it does find one or more events then it also uses the Yammer Search REST endpoint to query Yammer for matches as well. It then sends back both the events and hits from Yammer for us to display in our Windows Phone app. This is coolness…querying the Yammer cloud service by talking to my phone. Love it.
string searchUrl =
    "https://socialevents.azurewebsites.net/api/events/search?tagName=" + searchTerms;
string data = await hc.GetStringAsync(searchUrl);

//if we got some data back then try and load it into our set of search results of
//social events and Yammer messages
if (!string.IsNullOrEmpty(data))
{
    CortanaSearchResult csr = CortanaSearchResult.GetInstanceFromJson(data);

    //if we found some events plug them
    //into our UI by databinding to the
    //lists in the Panorama control
    if ((csr != null) && ((csr.Events.Count > 0) ||
        (csr.YammerMessages.Count > 0)))
    {
        EventsLst.DataContext = csr.Events;
        YammerLst.DataContext = csr.YammerMessages;
    }
    else
    {
        //update the UI to show that there were no search results found
        WaitTxt.Text = "Sorry, I couldn't find any results for \"" +
            searchTerms + "\"";
    }
}
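The server side of that call isn’t shown in this post. Based on the description above (a LIKE match against the Twitter tags in SQL Azure, plus a Yammer search when events are found), a rough sketch of the controller action could look like this; the helper names GetEventsByTagLike and SearchYammer are hypothetical stand-ins for the real data access code:

```csharp
[Route("api/events/search")]
[HttpGet]
public CortanaSearchResult Search(string tagName)
{
    CortanaSearchResult csr = new CortanaSearchResult();

    //LIKE comparison against the TwitterTags data in SQL Azure, so the
    //spoken term doesn't need to match a configured tag exactly
    csr.Events = GetEventsByTagLike("%" + tagName + "%");

    //if we matched one or more events, also query the Yammer Search
    //REST endpoint for messages that mention the term
    if (csr.Events.Count > 0)
    {
        csr.YammerMessages = SearchYammer(tagName);
    }

    //Web API serializes this to JSON for the phone app to consume
    return csr;
}
```
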
Add Speech to Your App
Adding speech to your application is even easier than adding voice recognition. The first thing you’re going to do is create a new instance of the SpeechSynthesizer, like this:
this.Synthesizer = new SpeechSynthesizer();
That’s all you do to set it up. To actually have my app say something I created a simple method for it that looks like this:
private void StartSpeakingSsml(string ssmlToSpeak)
{
    //Begin speaking using our synthesizer, wiring the
    //completion event to stop tracking the action
    //when it finishes.
    this.CurrentSynthesizerAction = this.Synthesizer.SpeakSsmlAsync(ssmlToSpeak);
    this.CurrentSynthesizerAction.Completed = new AsyncActionCompletedHandler(
        (operation, asyncStatus) =>
        { this.CurrentSynthesizerAction = null; });
}
There are different methods and overloads that you can use to have the phone speak. I chose to use SpeakSsml because of the control it gives you over the text that’s spoken. SSML stands for Speech Synthesis Markup Language, and it’s really just Xml that uses a schema to control things like pitch and rate at which words are spoken. Here’s an example of an Xml template I use to say back to the user what it is we are searching for:
<speak version='1.0' xmlns='http://www.w3.org/2001/10/synthesis' xml:lang='en-US'>
  <prosody pitch='+35%' rate='-10%'> Searching </prosody>
  <prosody pitch='-15%'> for </prosody>
  {0} events
</speak>
The way I use that is to create the string that it will say like this:
StartSpeakingSsml(String.Format(
    AppResources.SpokenSearchShortTemplate, searchTerm));
So if the search term is “yammerofjuly” what is spoken back to the user is “Searching for yammerofjuly”, but it’s said with different pitch and rate around the words “Searching” and “for”. VERY cool stuff.
Cortana Integration
Finally the last thing I did was “integrate” my app with Cortana. What does that mean exactly? Well I wanted someone to be able to use Cortana in Windows Phone 8.1 to execute a query using my Social Events app. The way you do that is you create a VoiceCommandDefinition file, which is just another Xml file. In the file you can configure things like the name of your app, examples that can be shown users who aren’t sure how to use voice recognition with your app, the things to listen for with your app, etc.
The first and most important thing I defined in my file is the CommandPrefix; this is really just the name by which my app will be known. The Xml looks like this:
<!-- The CommandPrefix provides an alternative to your full app name for invocation -->
<CommandPrefix> Social Events </CommandPrefix>
What this means now is when someone says something to Cortana that starts with “Social Events”, Cortana knows that it needs to use my app to get the results. For example, if I say “Social Events find yammer of july events”, Cortana will figure out that “Social Events” means my app, and it’s going to let my app know that the recognized words were “find yammer of july events”. The way it lets my app know is that it’s going to launch my app and redirect me to a page in the application (you can configure in the VoiceCommandDefinition file the page in your app where it redirects to as well; mine gets sent to mainpage.xaml). In the code behind for your page then you can override the OnNavigatedTo event and look at the query string.
If Cortana sent the user to your app, it will have used a query string variable that you have also defined in your Xml file, in the ListenFor element:
<!-- ListenFor elements provide ways to say the command as well as [optional] words -->
<ListenFor> Search [for] {dictatedSearchTerms} </ListenFor>
Note the “dictatedSearchTerms” in the curly brackets. There are actually multiple uses for it, but for now I’ll just explain the use case I have. When Cortana redirects a request to my application it will use “dictatedSearchTerms” as the query string variable that contains the recognized words. That means in my override of OnNavigatedTo I can look to see if the QueryString collection contains the key “dictatedSearchTerms”. If it does, I’ll extract the value and then just call my same LaunchSearch method that I described earlier, and pass in the value from the query string.
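The OnNavigatedTo override described above can be sketched like this for a Windows Phone Silverlight page; LaunchSearch is the same method covered earlier in this post:

```csharp
protected override void OnNavigatedTo(NavigationEventArgs e)
{
    base.OnNavigatedTo(e);

    //if Cortana launched the app, the recognized words arrive in the
    //"dictatedSearchTerms" query string key defined in the
    //VoiceCommandDefinition file's ListenFor element
    string spokenWords;
    if (NavigationContext.QueryString.TryGetValue(
        "dictatedSearchTerms", out spokenWords))
    {
        LaunchSearch(spokenWords);
    }
}
```
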
One of the other really useful aspects of the Xml file is the ability to create examples of how your app can be used with Cortana. When you say “what can I say” to Cortana, it will display a list of all the apps that it knows about that can use voice recognition; if you’ve registered your VoiceCommandDefinition file (more on how to do that in a moment), then your app will be displayed along with an example of what can be said with your app. The example that it displays is what you configured in your VoiceCommandDefinition file, like this:
<!-- The CommandSet Example appears in the global help alongside your app name -->
<Example> find blazer party events </Example>
So in this case it will show the small icon for my application, it will display my application name in big bold letters, and then it will say “find blazer party events” underneath it. Again, very cool!
If you click the icon of the application it brings it all together and will display something like this:
The last step now is to install your definition file. You can install it as many times as you want, and it just overwrites whatever copy it had previously. Because of that, I just install my definition file every time my Windows Phone app starts up. It’s one simple line of code:
await VoiceCommandService.InstallCommandSetsFromFileAsync("ms-appx:///MyVoiceDefinitionFile.xml");
Well there you go – that’s it. That is a wrap on this six part series on the CloudTopia app. Don’t forget to go grab the code from GitHub.com (the exact location is included in Part 1 of this series). I hope you can use this to help navigate your way through some of the very interesting connections you can make in a pure cloud-hosted world with SharePoint Apps and many other cloud services. When you get the chance, you can also add some dazzle to your apps with some voice recognition and Cortana integration. Finally…time to stop writing blog posts and go back to building apps.
In Part 4 of this series we looked at the integration with various Azure services in the CloudTopia app. In this part we are going to explore all of the integration that was done with Office 365 and how we did it.
Let’s start by looking at all of the different integration that was done with o365 and then we’ll examine each one in more detail:
SharePoint App that runs in the Events host site
Create hidden SharePoint list to track all of the events
Create new sites for each new event
Remove the out of box Site Feed web part as each new event site is created
Add script editor web part with JS to render Yammer Open Graph item discussion
Create, read, update and delete entries from the hidden SharePoint list of events
SharePoint App
Of course the piece that drives CloudTopia is the SharePoint App. Let’s look first at how everything got packaged, deployed and installed. I started out by going to one of the existing o365 sites I had and just navigating to the /_layouts/15/appregnew.aspx page to create a new client ID and secret for my application. I then updated my AppManifest.xml file with the client ID, and added the client ID and secret to the web.config file for my web project.
With that configuration data in hand, I deployed my web project to its Azure web site using the publishing profile, as I described in Part 4 of this series. I then published my SharePoint App, which really just created a .app file for me. I uploaded my .app file to the App Catalog for my o365 tenant and installed it into my Events site. My permissions in the app were for the full enchilada – Full Control rights in the Tenant. This is because I need to be able to create new site collections in the tenant, add and remove web parts, etc.
Create Hidden List
When the app installed it fired a remote event receiver that created the list for tracking events, made it hidden, and removed it from the left Quick Nav bar in o365. The complete steps and relevant code for doing that were described in my previous post here: http://blogs.technet.com/b/speschka/archive/2014/05/07/create-a-list-in-the-host-web-when-your-sharepoint-app-is-installed-and-remove-it-from-the-recent-stuff-list.aspx.
Create New Sites
The code for creating new sites borrowed liberally from the Office365 Development Patterns and Practices project (OfficeDev PnP) at https://github.com/OfficeDev/PnP. It’s part of the bigger process that I alluded to in Part 1 of this series that occurs when someone clicks a couple of buttons and says create me a new event site. Under the covers that makes a request to the REST endpoint we created for this action and these four activities happen:
Create a new site
Get the Url and create a new Open Graph item
Record the Url, event name, event date, and TwitterTag data in the SharePoint list
Record the Yammer Open Graph ID, o365 Url, and Twitter tags data in SQL Azure
I’ve already covered creating the new Open Graph item in Part 2 of this series, so I’ll just focus on the other items in the list. In terms of creating a new site, it’s probably better to just review the content from the OfficeDev PnP repo. In CloudTopia the primary modification I made to their code was to check to ensure that the Url was available, and if not add an incrementally larger number to the end of it until I found a Url that is available. For example if you had 10 “Crazy Sell-a-thon” events they couldn’t all have the same Url. In addition to that, even after a site is deleted it’s not really deleted – not at first – so you need to check deleted sites for Url availability as well. Here’s the code I used to ensure a uniquely available Url:
//create the client context for the admin site
//the token is obtained in code not shown here,
//webUrl is a parameter to this method, and
//tenantAdminUri is from code in this method
//that is not shown here, but represents the admin
//site for the tenant, i.e.
// https://yourTenantName-admin.sharepoint.com
//uniqueUrl is just an integer initialized to 0
//baseUrl is initialized to webUrl
using (var adminContext =
    TokenHelper.GetClientContextWithAccessToken(
    tenantAdminUri.ToString(), token))
{
    var tenant = new Tenant(adminContext);

    //look to see if a site already exists at that Url; if it does then
    //append a number to the Url until we find one that's available
    bool siteExists = true;

    while (siteExists)
    {
        try
        {
            //look for the site
            Site s = tenant.GetSiteByUrl(webUrl);
            adminContext.Load(s);
            adminContext.ExecuteQuery();

            //if it exists then update the webUrl and
            //do the while loop again;
            //if it doesn't exist it will throw an exception
            uniqueUrl += 1;
            webUrl = baseUrl + uniqueUrl.ToString();
        }
        catch
        {
            //doesn't exist, need to check deleted sites too
            try
            {
                DeletedSiteProperties dsp =
                    tenant.GetDeletedSitePropertiesByUrl(webUrl);
                adminContext.Load(dsp);
                adminContext.ExecuteQuery();

                //if it exists then update the webUrl
                //and do the while loop again
                uniqueUrl += 1;
                webUrl = baseUrl + uniqueUrl.ToString();
            }
            catch
            {
                //okay it REALLY doesn't exist, so go ahead
                //and grab this url and set the flag to
                //exit the while loop
                siteExists = false;
            }
        }
    }

    //now we can create the site using the webUrl;
    //follow OfficeDev PnP and use
    //SiteCreationProperties here
}
Once the site is created I can go ahead and add the Url and other information to the hidden list in SharePoint so that it shows up in my list of events. That code is pretty simple and looks like this:
using (ClientContext ctx = TokenHelper.GetClientContextWithAccessToken
    (hostUrl, accessToken))
{
    //get our event list
    List eventsList = ctx.Web.Lists.GetByTitle(LIST_NAME);

    //create the list item
    ListItemCreationInformation ci = new ListItemCreationInformation();
    ListItem newItem = eventsList.AddItem(ci);
    newItem["Title"] = eventName;
    newItem["SiteUrl"] = siteUrl;
    newItem["EventName"] = eventName;
    newItem["EventDate"] = eventDate;
    newItem["TwitterTags"] = twitterTags;
    newItem["ObjectGraphID"] = objectGraphID;

    //update the list item
    newItem.Update();

    //add the item to the list
    ctx.ExecuteQuery();
}
Now, finally, I’ll go ahead and add the data to SQL Azure. There’s absolutely nothing new here, this is basically ADO.NET code from earlier this century…but, for completeness here you go:
//record the OG ID, OG URL, and TwitterTag data in SQL
using (SqlConnection cn = new SqlConnection(conStr))
{
    cn.Open();

    SqlCommand cm = new SqlCommand("addEvent", cn);
    cm.CommandType = CommandType.StoredProcedure;
    cm.Parameters.Add(new SqlParameter("@ObjectGraphID", Double.Parse(gi.object_id)));
    cm.Parameters.Add(new SqlParameter("@ObjectGraphUrl", newUrl));
    cm.Parameters.Add(new SqlParameter("@TwitterTags", se.twitterTags));
    cm.Parameters.Add(new SqlParameter("@EventName", se.eventName));
    cm.Parameters.Add(new SqlParameter("@EventDate", se.eventDate));

    cm.ExecuteNonQuery();
    cn.Close();
}
Remove Site Feed Web Part
The code to remove the Site Feed web part was really pretty much just pulled from the OfficeDev PnP project. I’ll include an abbreviated version of it here so you get an idea of how it looks:
//create the client context
using (ClientContext ctx =
    TokenHelper.GetClientContextWithAccessToken(SiteUrl, token))
{
    ctx.Load(ctx.Web, w => w.RootFolder, w => w.RootFolder.WelcomePage,
        w => w.ServerRelativeUrl);
    ctx.ExecuteQuery();

    Microsoft.SharePoint.Client.File webPage =
        ctx.Web.GetFileByServerRelativeUrl(ctx.Web.ServerRelativeUrl +
        ctx.Web.RootFolder.WelcomePage);
    ctx.Load(webPage);
    ctx.Load(webPage.ListItemAllFields);
    ctx.ExecuteQuery();

    string wikiField = (string)webPage.ListItemAllFields["WikiField"];

    LimitedWebPartManager wpm =
        webPage.GetLimitedWebPartManager(
        Microsoft.SharePoint.Client.WebParts.PersonalizationScope.Shared);

    //remove the OOB site feeds web part
    WebPartDefinitionCollection allParts = wpm.WebParts;
    ctx.Load(allParts);
    ctx.ExecuteQuery();

    for (int i = 0; i < allParts.Count; i++)
    {
        WebPart almostDeadPart = allParts[i].WebPart;
        ctx.Load(almostDeadPart);
        ctx.ExecuteQuery();

        if (almostDeadPart.Title == "Site Feed")
        {
            allParts[i].DeleteWebPart();
            ctx.ExecuteQuery();
            break;
        }
    }
}
Add Script Editor Web Part to Render Yammer Open Graph Discussion
This chunk of code was pretty nice and really speaks to the flexibility that you get with the Yammer JavaScript Embed library. I actually created the code for it in a simple HTML page, and then copied it into a script editor web part I added to a page in a SharePoint site. Once I validated that it was all working there I exported the web part and copied everything out and plugged it into my CloudTopia code. This code runs in the same abbreviated code block I showed above for removing the web part, so the steps are: remove the Site Feed web part, then add the Script Editor web part and populate it with JavaScript that pulls from the Open Graph item discussion. Here’s what that looks like (it gets its ClientContext from the code shown above):
const string URL_REPLACE = "$URL$";
string partTxt = @"<webParts>
<webPart xmlns=""http://schemas.microsoft.com/WebPart/v3"">
<metaData>
<type name=""Microsoft.SharePoint.WebPartPages.ScriptEditorWebPart, Microsoft.SharePoint, Version=15.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c"" />
<importErrorMessage>Cannot import this Web Part.</importErrorMessage>
</metaData>
<data>
<properties>
<property name=""ExportMode"" type=""exportmode"">All</property>
<property name=""HelpUrl"" type=""string"" />
<property name=""Hidden"" type=""bool"">False</property>
<property name=""Description"" type=""string"">Allows authors to insert HTML snippets or scripts.</property>
<property name=""Content"" type=""string""> <script type=""text/javascript""
src=""https://assets.yammer.com/assets/platform_embed.js""></script>
<div id=""embedded-feed"" style=''height:400px;width:500px;''></div>
<script>
yam.connect.embedFeed({
  container: ""#embedded-feed"",
  network: ""yammo.onmicrosoft.com"",
  feedType: ""open-graph"",
  objectProperties: {
    url: ""$URL$""
  }
});
</script>
</property>
<property name=""CatalogIconImageUrl"" type=""string"" />
<property name=""Title"" type=""string"">Script Editor</property>
<property name=""AllowHide"" type=""bool"">True</property>
<property name=""AllowMinimize"" type=""bool"">True</property>
<property name=""AllowZoneChange"" type=""bool"">True</property>
<property name=""TitleUrl"" type=""string"" />
<property name=""ChromeType"" type=""chrometype"">None</property>
<property name=""AllowConnect"" type=""bool"">True</property>
<property name=""Width"" type=""unit"" />
<property name=""Height"" type=""unit"" />
<property name=""HelpMode"" type=""helpmode"">Navigate</property>
<property name=""AllowEdit"" type=""bool"">True</property>
<property name=""TitleIconImageUrl"" type=""string"" />
<property name=""Direction"" type=""direction"">NotSet</property>
<property name=""AllowClose"" type=""bool"">True</property>
<property name=""ChromeState"" type=""chromestate"">Normal</property>
</properties>
</data>
</webPart>
</webParts>";
partTxt = partTxt.Replace(URL_REPLACE, SiteUrl);
//add the new part
WebPartDefinition wpDef = wpm.ImportWebPart(partTxt);
WebPartDefinition wp = wpm.AddWebPart(wpDef.WebPart, "wpz", 1);
ctx.Load(wp);
//run some other clean up code from OfficeDev PnP to get the
//web part displaying correctly; see their project for those
//details
Read, Update and Delete Items from the Hidden SharePoint List
The code in the other REST controller methods is really so generic and mirrors the code I’ve already shown so closely that there’s not really a point in showing it as well. If you’re really interested then please go to the repo on GitHub for this project and review it up there.
Okay, we’ve now wrapped up all of the code in the CloudTopia application. If you’ve read everything so far, congratulations – good job! As it turns out though there is one other piece of application integration I did for CloudTopia, but it’s not in the cloud – it’s on a phone. In the next and final part of this series we’ll look at writing a Windows Phone app that uses speech, voice recognition, and of course, Cortana!
In Part 3 of this series we looked at the plumbing required to add support for Web API 2.x to your SharePoint Apps, as well as some of the integration needed to have it work with SharePoint and CSOM. In Part 4 we’re going to look at the integration with various Azure services in the CloudTopia app.
To begin with I started the CloudTopia project like any other SharePoint App – I cracked open Visual Studio 2013 Update 2, and then I created a new SharePoint App. I made it a provider-hosted app, and it created two projects for me – one with the application manifest needed for app registration and the other a web site project. Deployment was straightforward but I’ll cover that in the next post; I just wanted to set the stage for how we get things started. Now let’s look at how we took this application and integrated with a variety of Azure services.
Azure Web Sites
CloudTopia is deployed to an Azure web site. You get 10 free web sites with an Azure subscription so I used one of those to deploy my app. The process for doing so is quite simple of course – you go to the Azure Management Portal and click the Web Sites link in the navigation and Add a new one. There is some subtlety to adding a new one for this project, but I’ll cover that in the SQL Azure section (it will make sense why when we get there). Once my web site is created I just downloaded the publishing profile from the Azure management portal page for the web site, and then in Visual Studio I chose the option to publish my site. When the publish wizard ran I gave it the location of the publishing profile file that I had downloaded and away we went. Whenever I made changes I just republished the site and about 30 seconds later my latest code was up and running in the Azure web site. Good stuff.
One other thing worth noting here is debugging. Debugging is possible for web sites hosted in Windows Azure, you just do it a little differently than if you were running your web site locally. I’ve previously posted about this process for debugging your SharePoint Apps that are hosted in an Azure web site – you can find that post here: http://blogs.technet.com/b/speschka/archive/2013/11/25/debugging-sharepoint-apps-that-are-hosted-in-windows-azure-web-sites.aspx.
SQL Azure
I used SQL Azure in the CloudTopia app primarily to simplify the process of the daily task to go out and find matching tweets for social events. As I described earlier, I was able to take advantage of a free 20MB SQL Azure database that I get with my Azure subscription. You actually create and/or connect it to your application at the time you create your Azure web site – that’s why we’re covering site creation here in the SQL Azure topic. To connect these up you want to first do a custom create for your new Azure web site:
When you do that you’ll have the option of selecting a database. Click the drop down and if you have an MSDN subscription you should see an option to create a free 20MB database (assuming you have not created it already; if you have then you can just select the instance you already created):
Now I’m going to take what might seem like a brief detour but I’ll bring it back around when I’m done. One of the features of the CloudTopia app is that it will take a set of Twitter tags that have been defined for an event and go do a search to find tweets in the previous 24 hours that have used them. Every tweet that is found is added to the discussion on the Yammer Open Graph item that’s associated with the event. That’s how we get this nice integrated discussion in our events:
We’re just running this code once a day, so the process to gather these matching tweets is kicked off when an Azure Scheduler job makes a GET request to our REST endpoint that runs this code. So why am I sharing this information here? Because way back in Part 1 of this series I mentioned that I was using SQL Azure to store some of the CloudTopia data versus just keeping everything in a SharePoint list. Understanding this piece of functionality should help explain why SQL Azure.
As I also mentioned previously in this series, you always need the ID of a Yammer Open Graph item in order to read or write to the discussion that’s associated with it. Also, as I described above, this process kicks off once a day from an Azure Scheduler job. The distinction in this scenario is that there is no human present. That means that I don’t have a user context in order to make a call back into SharePoint. So if I wanted to store ALL of the CloudTopia metadata in a SharePoint list, I would need to configure my app to use an app only request. While I could certainly do that, it requires an elevated level of permissions versus a simple user-driven request and that was something I did not want to do. That’s how I landed on using SQL Azure for this purpose. Not only is it free for my application, I’m able to use it without any user context at all – I just use a connection string with a set of credentials for a SQL Azure user that has rights to my CloudTopia database. It’s also significantly easier for most developers to create SQL queries than to deal with the “sometimes mystical, sometimes magical, sometimes maddening” world of SharePoint CAML queries. SQL Azure makes it easy to retrieve the information needed for the CloudTopia app, and also doesn’t require a high level of permission from the application itself. Score one for SQL Azure!
Azure Scheduler
The final Azure service I used on CloudTopia is the Azure Scheduler service. This is a pretty straightforward service to use and configure so I’m not going to spend a ton of time talking about it. There are always several options when you are looking to schedule tasks; for CloudTopia though, as the name implies, I wanted it to be 100% hosted in the cloud – and cheap. The Azure Scheduler service is a great solution for these requirements. You get a set number of job iterations for free, and since I’m only making one job run a day – to get the tweets from the last 24 hours – this fits the bill perfectly. When you create your job you can choose an HTTP or HTTPS endpoint to invoke, and you can define whether you want to do a GET, POST, PUT or DELETE. For POST and PUT you can optionally provide a Body to send along with the request; for all of them you can add one to many custom Http headers to send as well. After you configure your job endpoint you set up your schedule – anything from a one time run to something that recurs on a regular basis. That is basically it, but here are some pictures to show you the UI that was used to create the CloudTopia Scheduler job:
When the Scheduler job runs here’s the REST endpoint that it invokes:
public async Task<HttpResponseMessage> Get()
{
    HttpResponseMessage result = Request.CreateResponse(HttpStatusCode.OK);

    try
    {
        await Task.Run(() => UpdateYammerWithTwitterContent());
    }
    catch (Exception ex)
    {
        result = Request.CreateErrorResponse(
            HttpStatusCode.BadRequest, ex.Message, ex);
    }

    return result;
}
So I just go off and run my code to update the Yammer Open Graph item with any tweets from the last 24 hours. If it works I return an HTTP status code 200, and if it fails I decided to return an HTTP status code 400. Yeah, it’s not really a bad request, but I’ve always wanted to return that to someone else for a change.
Here’s an abbreviated version of the code to actually go out and get the tweets and write them to Yammer. First I connect to SQL Azure and get the list of events and their associated Twitter tags and Open Graph IDs:
SqlCommand cm = new SqlCommand("getAllEvents", cn);
SqlDataAdapter da = new SqlDataAdapter(cm);
DataSet ds = new DataSet();
da.Fill(ds);
With my dataset of events I enumerate through each one and go get the event tweets. I start out by getting an access token for Twitter:
if (string.IsNullOrEmpty(TWT_ACCESS_TOKEN))
{
    //create the authorization key
    string appKey = Convert.ToBase64String(
        System.Text.UTF8Encoding.UTF8.GetBytes(
        (HttpUtility.UrlEncode(TWT_CONSUMER_KEY) + ":" +
        HttpUtility.UrlEncode(TWT_CONSUMER_SECRET))));

    //set the other data for our post
    string contentType = "application/x-www-form-urlencoded;charset=UTF-8";
    string postData = "grant_type=client_credentials";

    //need to get the oauth token first
    response = MakePostRequest(postData, TWT_OAUTH_URL, null,
        contentType, appKey);

    //serialize it into our class
    TwitterAccessToken accessToken =
        TwitterAccessToken.GetInstanceFromJson(response);

    //plug the value into our local AccessToken variable
    TWT_ACCESS_TOKEN = accessToken.AccessToken;
}
Now that I’m sure I have a Twitter access token I can go ahead and query twitter for the tags I’m interested in:
//now that we have our token we can go search for tweets
response = MakeGetRequest(TWT_SEARCH_URL +
    HttpUtility.UrlEncode(query), TWT_ACCESS_TOKEN);

//plug the data back into our return value, which is just a
//custom class with a list of SearchResult so I can work
//with it easily from my code
results = SearchResults.GetInstanceFromJson(response);

//trim out any tweets older than one day, which is how frequently
//this task should get invoked
if (results.Results.Count > 0)
{
    //retrieve items added in the last 24 hours
    var newResults = from SearchResult oneResult in results.Results
                     where DateTime.Now.AddDays(-1) <
                     DateTime.Parse(oneResult.Published)
                     select oneResult;

    results.Results = newResults.ToList<SearchResult>();
}
Once I get my search results back, I can add each one to the discussion on the Yammer Open Graph item:
foreach (SearchResult sr in queryResults.Results)
{
    string newPost = "From Twitter: " +
        sr.User.FromUser + " says - " + sr.Title + ". See the post and more at " +
        "https://twitter.com/" + sr.User.FromUser + ". Found on " +
        DateTime.Now.ToShortDateString();

    CreateOpenGraphPost(objectGraphID, newPost);
}
You may notice that I’m calling the same CreateOpenGraphPost that I described earlier in this series – I used it previously to create the initial post for new Open Graph items.
That’s it for this post. In Part 5 of the series we’ll look at all of the integration that was done with Office 365. It was a lot, so stay tuned.
In Part 2 of this series we looked at some of the details of working with Yammer Open Graph items in the CloudTopia app. In Part 3 we’re going to talk about adding and using Web API 2.1 functionality to a standard out of the box SharePoint App, and then look at what we do with that in CloudTopia.
In CloudTopia I use a Web API REST endpoint for my implementation of virtually everything. It allows me to create an end user experience free of postbacks, but still access all the functionality of CSOM as well as .NET. I can use JavaScript and jQuery to manage the front end interface because the code in my Web API controller is doing all the work. In addition to that, by adding a REST endpoint to my application I open it up to be consumed and/or provide services to other applications. A good example of this is the Twitter integration. That process is kicked off by an Azure Scheduler job that makes a GET request on a REST endpoint I set up. That’s a pretty key use case for Web API in your SharePoint Apps – without it, all of your application functionality is wrapped up in the fairly narrow confines of some browser based application. By adding a Web API endpoint on top of it, now I can integrate that same functionality into many other applications across my organization or even, as demonstrated with the Azure Scheduler, outside my organization if I wish.
Now, adding the plumbing to support Web API 2.1 is not necessarily easy to find, so let me give you the steps here:
Add the following two NuGet packages to your web application project: Microsoft ASP.NET Web API 2.2 Core Libraries and Microsoft ASP.NET Web API 2.2 Web Host (NOTE: the “2.2” refers to the current version at the time of this blog post; you may find a more current version).
Add a new class file to the root of your project and call it WebApiConfig.cs.
Add the following code to your WebApiConfig.cs file (NOTE: You can add a different default route for your REST endpoints if you wish, I’m just following common convention here):
public class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        config.MapHttpAttributeRoutes();

        config.Routes.MapHttpRoute("API Default", "api/{controller}/{id}",
            new { id = RouteParameter.Optional });
    }
}
Add this code to Application_Start in Global.asax:
protected void Application_Start(object sender, EventArgs e)
{
    //for WebApi support
    GlobalConfiguration.Configure(WebApiConfig.Register);
}
UPDATE 12/29/2014: IMPORTANT! Make sure you call the GlobalConfiguration.Configure method BEFORE you do any other configuration calls in the Start event. For example, if you are adding this to a full blown MVC app then you will already have this in your Start code: RouteConfig.RegisterRoutes(RouteTable.Routes);. If you add the call for WebApiConfig.Register AFTER RouteConfig.RegisterRoutes then your Web API routes will not be found and your controller actions will not be hit.
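A minimal sketch of that ordering in an MVC app’s Global.asax (only the two route registration calls are shown; a real MVC template also registers areas, filters, and bundles here):

```csharp
protected void Application_Start()
{
    //Web API routes must be registered first...
    GlobalConfiguration.Configure(WebApiConfig.Register);

    //...and the MVC routes after, or the Web API routes
    //will never be matched
    RouteConfig.RegisterRoutes(RouteTable.Routes);
}
```
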
Once you’ve done the configuration above, you can add a new Web API Controller Class v2 to your project and start creating your REST endpoints. Before you get started writing code, here are a few tips:
Make sure the name of your class contains the word “Controller” (i.e. EventsController.cs); otherwise it won’t be found. That one can drive you a little crazy if you don’t write much Web API and so aren’t familiar with this little nuance.
To create an overload inside your controller add a Route attribute and optionally an HTTP verb; that is how I added multiple POST endpoints to my single Web API controller class. For example I do a POST request to api/events/currentevents to get the list of current events, a POST to api/events to create a new social event, etc. Here’s an example of what the currentevents route looks like:
[Route("api/events/currentevents")]
[HttpPost]
public List<SocialEvent> Get([FromBody]string value)
{
    //code goes here
}
Make sure your Route attribute uses the same path as a route you defined in WebApiConfig.cs. For example, if your route definition uses “api/{controller}/{id}”, your Route attribute should also start with “api/”.
Once you have your controller added and configured as described above, calling it from jQuery is quite simple. Here’s an abbreviated look at the jQuery in my SharePoint App that gets the list of events:
//some SharePoint vars I get and will explain next
formData = JSON.stringify(formData);

$.post("/api/events/currentevents", { '': formData })
    .success(function (data) {
        //work with the list of events in data here
    })
    .fail(function (errMsg) {
        alert("Sorry, there was a problem and we couldn't get your events: " +
            errMsg.responseText);
    });
One of the biggest challenges when using a REST endpoint as part of your SharePoint App is getting an access token to work with (assuming you are doing more than just app only calls). I looked at three different options for this when I was writing CloudTopia:
Persist token info to storage, like we recommend for the CAM “permissions on the fly” scenario
Persist token info to ViewState
Write out tokens to hidden fields on page
Of these options #1 is the safest and most enterprise-worthy of the bunch. It also takes the most time and investment to do it right. Because of the limited time I had to build the CloudTopia application I did not take this approach. If I were though, I would consider having a method in my REST endpoint called something like RegisterRequest. I could imagine calling a method like that and passing in a SharePointContextToken as I described in this post here: http://blogs.technet.com/b/speschka/archive/2013/07/30/security-in-sharepoint-apps-part-4.aspx. With the SharePointContextToken I have a) a guaranteed unique cache key for it and b) a refresh token that I can use to obtain an access token. So when my RegisterRequest method was called I could then store that somewhere, like SQL Azure or whatever. Then I could require that any calls into my REST endpoints provide the cache key, and all I have to do is look up what I’ve stored and if there’s something there, use the refresh token to get an access token and go to work. This is just one idea I had, you may have others of your own.
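To make that RegisterRequest idea a bit more concrete, here's a very rough sketch. The TokenCacheEntry shape, the in-memory dictionary, and the method names are all my own invention for illustration; a real implementation would persist to SQL Azure (or similar) and protect the refresh token at rest:

```csharp
using System.Collections.Generic;

//hypothetical shape for one cached registration - a real version would
//also track expiration and be stored durably, not in memory
public class TokenCacheEntry
{
    public string CacheKey { get; set; }
    public string RefreshToken { get; set; }
}

public static class TokenCache
{
    private static readonly Dictionary<string, TokenCacheEntry> entries =
        new Dictionary<string, TokenCacheEntry>();

    //called once up front with the cache key and refresh token pulled
    //from the SharePointContextToken
    public static void RegisterRequest(string cacheKey, string refreshToken)
    {
        entries[cacheKey] = new TokenCacheEntry
            { CacheKey = cacheKey, RefreshToken = refreshToken };
    }

    //later REST calls pass just the cache key; if we have an entry we can
    //use its refresh token to go get a new access token and do the work
    public static TokenCacheEntry Lookup(string cacheKey)
    {
        TokenCacheEntry entry;
        return entries.TryGetValue(cacheKey, out entry) ? entry : null;
    }
}
```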
Given my limited time, I chose to try both option 2 and 3. Option 2 was really something I just wanted to play with to see if there would be any issues in using it. So in that case I took the access token I got and wrote it to ViewState, and then I used it for one button in the page, which cleans the app up (i.e. deletes the hidden list I use to track new events). I’m happy to report that it worked without issue, so it’s a viable approach if you are doing your code in post back events. That’s really the key – you switch to a post back model if you want to use values from ViewState.
Primarily what I did was option 3. I wrote both the SharePoint site Url as well as the access token to the page to some hidden fields, and then I pass those variables in to my REST calls. It all travels over SSL, the access token has a limited lifetime, etc. so it was a reasonable choice given what I had to work with.
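Server side, option 3 just amounts to writing those two values into the page. Here's a sketch assuming ASP.NET Web Forms HiddenField controls with the ids the client script reads; the variable names are illustrative:

```csharp
//Page_Load sketch: push the values into the hidden fields the jQuery reads;
//spHostUrl and accessToken would have been obtained via TokenHelper earlier
protected void Page_Load(object sender, EventArgs e)
{
    hdnHostWeb.Value = spHostUrl;
    hdnAccessToken.Value = accessToken;
}
```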
The end-to-end implementation of it then went something like this:
I had client side script that looks like this:
//get the hiddens
var hostUrl = $("#hdnHostWeb").val();
var accessToken = $("#hdnAccessToken").val();

//create the JSON string to post
var formData = "{hostUrl:" + hostUrl + "," +
    "accessToken:" + accessToken + "}";

//make it POST ready
formData = JSON.stringify(formData);

//call my REST endpoint
$.post("/api/events/currentevents", { '': formData });
In my REST endpoint I parsed out the JSON that was sent in like this (NOTE: ParseJson is just a custom method I wrote for my SocialEvent class):
SocialEvent se = SocialEvent.ParseJson(value);
The SocialEvent class has a hostUrl and accessToken property, so I just used them when calling methods that used CSOM to work with SharePoint:
List<SocialEvent> results = GetEventListItems(se.hostUrl, se.accessToken);
In my code that uses CSOM I created the ClientContext using the hostUrl and accessToken like this:
using (ClientContext ctx = TokenHelper.GetClientContextWithAccessToken(hostUrl, accessToken))
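For context, a method like GetEventListItems might look roughly like this. The list title, field names, and SocialEvent mapping are assumptions for illustration, not the actual CloudTopia code:

```csharp
//hypothetical sketch of GetEventListItems - list title and field names
//are placeholders, not the real CloudTopia implementation
private List<SocialEvent> GetEventListItems(string hostUrl, string accessToken)
{
    List<SocialEvent> results = new List<SocialEvent>();

    using (ClientContext ctx =
        TokenHelper.GetClientContextWithAccessToken(hostUrl, accessToken))
    {
        //read everything from the (hypothetical) hidden events list
        List eventsList = ctx.Web.Lists.GetByTitle("SocialEvents");
        ListItemCollection items =
            eventsList.GetItems(CamlQuery.CreateAllItemsQuery());
        ctx.Load(items);
        ctx.ExecuteQuery();

        //map the list items into the SocialEvent class
        foreach (ListItem item in items)
        {
            results.Add(new SocialEvent { eventName = item["Title"].ToString() });
        }
    }

    return results;
}
```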
There you have it – that’s a roadmap for how to add Web API 2.x support to your SharePoint Apps. I’m becoming a pretty big fan of this approach because it provides so much flexibility for making a client experience exactly like you want it, plus it opens up the capability to integrate all of your application functionality across other applications. Now that we’ve wrapped up this piece of plumbing, in Part 4 of this series we’ll take a look at the Azure integration points.
In Part 1 of this series, I introduced you to the CloudTopia app. In Part 2 we’re going to look at some of the work we did with Open Graph items in CloudTopia.
As I described in Part 1, Yammer Open Graph (OG) items are used in CloudTopia in a couple of different ways: 1) to provide a forum for discussion for the team working on an event and 2) as a way to bring external discussions from Twitter into the OG item so the team can get a good sense of the buzz that’s happening around their event…like #yammerofjuly (inside joke for those of you who have been following me on Twitter). As I also mentioned in Part 1, I covered the basic details of working with OG items in a previous post here: http://blogs.technet.com/b/speschka/archive/2014/05/29/using-yammer-open-graph-in-net.aspx. I won’t be covering that all over again; instead I’ll focus on some of the other implementation details and things you should be aware of when working with your own OG items. And that’s really the point of this entire series – not how to write this application per se, but to cover general purpose implementation details you can apply to your own work.
Okay, so we’ll use the blog post above as a starting point for working with OG items, now let’s look at some of the implementation details. In my previous posts on OG I talked about the object model that I created over an OG item and how to use it to create a new OG item in Yammer. For review, here’s what my object model looks like:
To use it you need to send a chunk of JSON to Yammer to create the OG item. What I did to facilitate this is let you create an OG item using my object model, and then when you call the ToString() method I’ve overridden that so that it produces the JSON you need. With that in hand you can use one of the methods I included in my original Yammer and .NET posting to create the item, by calling the MakePostRequest method. The net of this is that your code looks pretty straightforward:
YammerGraphObject go = new YammerGraphObject();
go.Activity.Action = "create";
go.Activity.Actor = new YammerActor("Steve Peschka", "speschka@yammo.onmicrosoft.com");
go.Activity.Message = "This is the discussion page for the " + eventName + " event on " + eventDate;
go.Activity.Users.Add(new YammerActor("Anne Wallace", "annew@yammo.onmicrosoft.com"));
go.Activity.Users.Add(new YammerActor("Garth Fort", "garthf@yammo.onmicrosoft.com"));
YammerGraphObjectInstance jo = new YammerGraphObjectInstance();
jo.Url = Url;
jo.Title = eventName;
jo.Description = "This is the discussion page for the " + eventName + " event on " + eventDate;
jo.Image = "https://socialevents.azurewebsites.net/images/eventplanning.png";
jo.Type = "document";
go.Activity.Object = jo;
string postData = go.ToString();
string response = MakePostRequest(postData, graphPostUrl, yammerAccessToken, "application/json");
The other thing I mentioned in part 2 of that series on OG items is how important it is to have the OG ID; it is required whenever you want to read from or write to the discussions for the OG. As I mentioned in that post there are basically three ways to get the OG ID:
Capture it when the OG item is created. You’ll get some JSON back if the create is successful and we can use that to extract out the ID of the newly created item.
Search for an OG item. You can search using the Url property of the OG item; however this only returns results if there is at least one discussion item created for the OG. This is another reason why the CloudTopia app creates an initial discussion item when creating the OG.
Do a “fake” update to the OG item. What I mean by fake is that I just pass in my own username and email address as the actor, I change the Action to “follow”, and I set the Private property of the Activity to true. When I do that I get back the same chunk of JSON as I do when I create an OG, so again I can extract out the ID from there.
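Using the object model shown earlier, the “fake” update in option 3 might look something like this; the existingUrl variable is illustrative, and the response handling is the same deserialization shown elsewhere in this post:

```csharp
//sketch of the "fake update" trick: a private "follow" action returns the
//same JSON as a create, so the OG ID can be extracted from the response
YammerGraphObject go = new YammerGraphObject();
go.Activity.Action = "follow";
go.Activity.Actor = new YammerActor("Steve Peschka", "speschka@yammo.onmicrosoft.com");
go.Activity.Private = true;

//the Url is the key that identifies the existing OG item
YammerGraphObjectInstance jo = new YammerGraphObjectInstance();
jo.Url = existingUrl;
go.Activity.Object = jo;

//post it the same way as a create and read the ID out of the response
string response = MakePostRequest(go.ToString(), graphPostUrl,
    yammerAccessToken, "application/json");
```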
In the case of CloudTopia I simply capture the ID at the time I create the item; that’s clearly the best option if you can do so. In CloudTopia I take the ID of the newly created OG item and I save it to SQL Azure so I can use it later when creating new discussion items for the OG based on tweets I found for the event. To extract the ID I built another class to serialize the JSON into and it looks like this:
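The class itself was originally shown in a screenshot; based on how it’s used here, a minimal sketch of it would be something like this (the real class may well carry more properties):

```csharp
//minimal sketch - object_id is the property the CloudTopia code actually
//uses; the property name matches the JSON Yammer returns
public class YammerGraphObjectItem
{
    public string object_id { get; set; }
}
```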
Using it is quite simple:
//create the OGO
if (!string.IsNullOrEmpty(response))
{
    YammerGraphObjectItem gi = JsonConvert.DeserializeObject<YammerGraphObjectItem>(response);
}
Finally, remember that you always need the OG ID when reading or writing discussion items for it. Also, the Url to use to read OG discussion items is currently undocumented “officially”, but I have covered all of this in greater detail in part 2 of my series on working with Yammer Open Graph from .NET (http://blogs.technet.com/b/speschka/archive/2014/05/29/using-yammer-open-graph-in-net-part-2.aspx).
Now let’s bring all the discussion of this back around to a concrete example – what we did in the CloudTopia app. Remember that we create a new OG item when a new o365 site is created, and we use the Url for the new o365 site as the key for the OG item. Here’s what that code looks like in CloudTopia:
//NOTE: WILL EXPLAIN MORE OF THIS CODE
//IN A SUBSEQUENT PART OF THIS SERIES
//create a new site
string newUrl = AddNewSite(se.accessToken, se.hostUrl, se.eventName, se.eventDate);
//if it works, plug in the new site url
//if it works, plug in the new site url
if (!string.IsNullOrEmpty(newUrl))
{
    //create a new GraphObject item
    YammerGraphObjectItem gi = CreateOpenGraphItem(newUrl, se.eventName, se.eventDate, se.twitterTags);
}
The CreateOpenGraphItem method looks like this:
YammerGraphObjectItem gi = null;

//serialize the results into an object with the OG ID
//(response is the JSON returned by the MakePostRequest call shown earlier)
gi = JsonConvert.DeserializeObject<YammerGraphObjectItem>(response);

string newMsg = "Welcome to the Yammer discussion for the " + eventName +
    " event, happening on " + eventDate + ". We'll also be tracking external " +
    "discussions about this event on Twitter by using the tags " + twitterTags + ".";

CreateOpenGraphPost(gi.object_id, newMsg);
One quick note about the code above. You may notice that I made the OG object Type a “document”. You can find the complete list of Types that are supported at http://developer.yammer.com/opengraph/#og-schema. There isn’t a type for “web site” so I just used document, but you can obviously choose one that works for you. Finally, assuming our OG was created successfully then we serialize the return JSON to get the YammerGraphObjectItem so we have the object_id we need to update it later on. Then we create the first discussion post to the OG item in the CreateOpenGraphPost method. It is extraordinarily simple, just the way I like it:
string msg = "body=" + Message + "&attached_objects[]=open_graph_object:" +
ID + "&skip_body_notifications=true";
//try adding the message
string response = MakePostRequest(msg, messageUrl + ".json", yammerAccessToken);
That’s it! We’ve talked about the semantics of working with Yammer Open Graph items, and we’ve looked at the specific implementation we used in the CloudTopia application. In the next post in this series we’ll look at adding a Web API component to your standard out of the box SharePoint App project. This can be extremely simple, I regularly use them in my SharePoint Apps now so I hope you’ll tune in and read more about it.
This is going to be a multi-part series to dissect and tear apart an application I tweeted about using the #yammerofjuly hashtag. This is an application I developed a couple of months ago as a means of illustrating some of the different complexities and options when building applications that span several cloud services. Through the course of this series I’ll start out by looking at the application overall, and then start breaking down some of the different components of it and how everything is tied together in the hopes that you can use it to integrate some of these same service connection points within your own applications.
NOTE: You can find ALL of the source code for everything in this series on GitHub at https://github.com/OfficeDev/CloudTopia-Code-Sample.
We’ll start by looking at the business requirements for the application. In our scenario we are working with a company that manages public events for their customers. They have the basic collaboration needs – for each event they create brochures, flyers, calendars, budgets, schedules, etc. In addition to that they want to be able to have a discussion forum around the event so they can talk through different aspects of the event with the team. They also want to have some way to get a feel for the buzz about the event that’s happening out in the public. They’re just looking for some way to get a feel for how their marketing of the event is working.
Given those requirements, I built the application I call CloudTopia. It uses several services and technologies that will be described in subsequent posts:
Office 365
Yammer Open Graph
SharePoint Cloud App Model in a provider hosted app
JavaScript and jQuery
Web API 2.1
Azure web sites
SQL Azure
Azure Scheduler service
Twitter
So as you can see, a lot of things working together here to make our solution. To start things off, I created a demo video of the application – you can watch it here: https://onedrive.live.com/redir?resid=96D1F7C6A8655C41!18616&authkey=!AC4csVj2oAU6B7M&ithint=video%2cmp4. After you’ve watched the video come back here and let’s talk a little bit about what you saw and how it was built.
Design and Architecture
Here’s a screenshot of the home page of the CloudTopia application with callouts for the different technologies and services being used:
Here’s a brief summary of each component:
Office 365 – the whole application starts out in an Office 365 site. I just created a new site in my o365 tenant and made it the home page for the application.
Cloud App Model – the SharePoint 2013 Cloud App Model (CAM) was the starting point that I used to create this application. It ultimately relied upon several other services and technologies, but it was all delivered through CAM. One of the main features of the application – the UI you see in the o365 site – is just the standard iFrame that you get with a provider hosted application. From where it says “Upcoming Social Events” on down is all part of the app.
Azure Web Sites – one of the great things about CAM and provider hosted apps is that you can host them pretty much anywhere. In my case I decided to use one of the free Azure web sites that I get with my Azure subscription to be the “host” in my provider hosted application. It also underscores an important theme with this application: it’s called “CloudTopia” because it is 100% running in the cloud; there are NO on premises components to this application.
SharePoint List Data – all of the events and some related metadata are stored in a hidden list in the o365 Events site. The list is created by a remote event receiver when the application is installed. The event receiver creates the list, makes it hidden, and removes the list from the quick navigation links on the left side of a standard team site page. For more information on creating a list this way in the host web of an application, see my previous post here: http://blogs.technet.com/b/speschka/archive/2014/05/07/create-a-list-in-the-host-web-when-your-sharepoint-app-is-installed-and-remove-it-from-the-recent-stuff-list.aspx.
More o365 sites and Yammer Open Graph Items – each time a new event is added, a new o365 site is created for the event and a Yammer Open Graph item is created that is used for discussions on the event. More details follow below where I describe everything that happens when you create a new site. In terms of working with Yammer Open Graph from .NET, I have already covered that in previous posts on my blog. You can go to the Open Graph specific post at http://blogs.technet.com/b/speschka/archive/2014/05/29/using-yammer-open-graph-in-net.aspx; it’s also part of the bigger Yammer toolkit for .NET that I’ve been building over time that started with this post: http://blogs.technet.com/b/speschka/archive/2013/10/05/using-the-yammer-api-in-a-net-client-application.aspx.
SQL Azure – the application uses SQL Azure to store certain data that is necessary to query twitter and add discussion items to Yammer Open Graph objects. In a later part in this series I’ll explain why I chose to use SQL Azure instead of SharePoint list data to store this information. As an added bonus, I used a free SQL Azure data instance that comes with an MSDN subscription. I’ll also cover that a little later.
Azure Scheduler – as you saw in the demo video, one of the things the application does is go find tweets that match the hashtags for our event and then add those tweets to the Open Graph discussion for the event. That work happens once a day and is triggered using a job in the Azure Scheduler service. Due to the low volume of these requests it falls within the number of free jobs that can be scheduled with Azure.
Web API 2.1 – a key component of the entire CloudTopia application is the use of Web API. Its primary job is doing all of the hard work behind the features of the application – creating o365 sites, creating Yammer Open Graph items, searching Twitter, etc. One of the big reasons it’s so valuable for SharePoint Apps is that it lets me control the UI entirely client side while still using all the power of CSOM and any other .NET API. You saw a lot of activity in the demo video and all of that was done through JavaScript and jQuery calling the custom REST endpoint that was created for the CloudTopia application. In addition to that, it allowed me to create a REST endpoint just for the Azure Scheduler job. A job can call an HTTP or HTTPS endpoint to do something, but there needs to be a listener that can “do something” when invoked. Adding an endpoint in my REST controller provides the interface for just that kind of automated integration with other applications.
What Happens When You Create a New Event
There are quite a few things that get triggered when you go through the seemingly simple process of creating a new event. Let’s take a look at what happens.
A new o365 site is created for the event.
The out of the box Site Feed web part is removed from the new o365 site.
A new Open Graph (OG) item is created in Yammer.
The Url link for the OG item is set to be the Url of the new o365 site
This also means you can go into Yammer and just search for your o365 site Url and you’ll find the Open Graph object. Pretty cool.
A welcome post is added to the Open Graph discussion.
A script editor web part is added to the new o365 site; it uses Yammer Embed to display the feed for the Open Graph item.
A new item is added to the hidden SharePoint list in the host site where the App part is installed; that’s how it shows up in the event list in the host site.
A new item is added to SQL Azure with the Open Graph ID along with the new Site Url and Twitter tags.
That’s a lot of stuff! That’s also good for the intro of this series. In Part 2 of this series I’ll talk about some of the work I did around Yammer Open Graph objects in CloudTopia.
As more folks are deploying the SharePoint 2013 Hybrid features we continue to pick up little tidbits that help make the journey easier. A couple of new ones have come up recently that are worth sharing at this point, so here goes:
Hi all, this post is really just for awareness about some fairly new Yammer content that was recently published. Our documentation team has prepared a poster for Yammer on Mobile Devices, which covers the applications and features we currently offer for mobile devices, and also has a nice section on authentication options with Yammer. There is still a little fleshing out needed around the single sign on option for Yammer, but I'm hoping that it will be further updated in the next couple of months or so. In the meanwhile it's a great starting point to learn quickly and simply what you can do with Yammer on a variety of different mobile devices and operating systems. You can check it out at http://technet.microsoft.com/en-us/library/dn635312(v=office.15).aspx.
This is a topic that seems to come up with some frequency, and when I needed to do it recently I could not find a good working sample of doing this from server-side code. The scenario here is: imagine you want to upload some very large files to SharePoint via CSOM. You have some code running "somewhere" - could be in your own Web API controller or something similar to it that can tolerate a long upload time. The upload time could be long because you want to upload large files, i.e. larger than the roughly 1.5MB that CSOM alone supports when uploading files to SharePoint. In that case you need to use the REST interface into SharePoint. In pulling together the code to do this I didn't find a complete sample anywhere, but I managed to cobble together the code I wrote along with random pointers here and there from about four other TechNet articles. Blech! I feel like I should get some kind of finder's fee for figuring this out and pulling it together. But I digress... ;-) Trust me when I say you're probably going to want to bookmark this posting because I think you will find it handy.
So let me start I suppose by hitting up some of the main challenges I hit when doing this and how I tackled each one, and then I'll finish with a fairly complete code sample (i.e. it will work for you when you plug in your own file you want to upload). Here we go:
Challenge #1 - I Need an App Only Token and I'm using ACS
Virtually every sample I've ever seen for developing SharePoint apps using ACS (i.e. low trust) assumes that there is some browser and user context present. Well if you are really doing this via some code running "somewhere" then chances are it may not be triggered directly from a browser request. Maybe it's a scheduled job, who knows, but for now let's just say that it was part of my scenario so I needed to solve it. The problem gets more complex when you are doing this from something like a WCF or REST solution because they don't expose an HTTP Context object. That means that you can't use the method in TokenHelper to get an access token because it requires the HTTP Context. The good news is there is another method in TokenHelper that we can use to get the access token that we'll use to talk to SharePoint. It looks like this:
//start out by getting a client context for o365
string o365ClientId = ConfigurationManager.AppSettings["o365ClientId"];
string o365ClientSecret = ConfigurationManager.AppSettings["o365ClientSecret"];
//get the Uri for the personal site
Uri siteUri = new Uri(siteUrl);
//this makes a connection to o365 using an App Only token since the user is not currently connected to o365
//NOTE: The 5 parameter version of this method is something custom I wrote, but not needed for the general
//use case; just use the standard 3 parameter version of this method that comes out of the box. Thanks!
var token = TokenHelper.GetAppOnlyAccessToken(TokenHelper.SharePointPrincipal, siteUri.Authority, TokenHelper.GetRealmFromTargetUrl(siteUri),
o365ClientId, o365ClientSecret).AccessToken;
So what I've done here is I had the ClientId and ClientSecret in my web.config for my SharePoint App anyways. For reasons that aren't important to this scenario, I copied those values into two new settings in web.config - one called o365ClientId and the other called o365ClientSecret. I take those values along with the Url to the site where I'm going to upload the file (which I put in the siteUri variable) and let SharePoint and ACS go do their authentication oauth token thing. When I'm done I have an access token I can use for an App Only call into the SharePoint site.
Challenge #2 - I need to get a Form Digest Value
Almost all of the example code you find assumes that you are in a SharePoint hosted app and you are running client side code. That's great because SharePoint emits a form digest value into every SharePoint page. However in our scenario we're not working with a SharePoint page, so how do we get this? Well you can just take the siteUri you created above, and make a POST to that site at the _api/contextinfo path, and you will get a bunch of JSON back that includes a form digest value. You don't include any content when you make your POST request, however. Here's an example of what that looks like:
HttpWebRequest restRqst = (HttpWebRequest)HttpWebRequest.Create(siteUrl + "_api/contextinfo");
restRqst.Method = "POST";
restRqst.Accept = "application/json;odata=verbose";
restRqst.Headers.Add("Authorization", "Bearer " + token);
restRqst.ContentLength = 0;
//get the response so we can read in the request digest value
HttpWebResponse restResponse = (HttpWebResponse)restRqst.GetResponse();
Stream postStream = restResponse.GetResponseStream();
StreamReader postReader = new StreamReader(postStream);
string results = postReader.ReadToEnd();
So you can see I use the access token I got in the previous chunk of code and use that in an authorization header to get access to the SharePoint site. I then make my 0 byte POST request and I get back the JSON string, which in my code above has been stuck in the "results" variable.
Challenge #3 - How Do I Dig out the Form Request Digest
This of course is not a huge problem, but I did come across an interesting solution so I'm sharing here. Typically when I work with JSON on the server side I'll design some classes into which I'll serialize the JSON so I have a nice object model that I can use to work with the results. Well some very sharp folks that know ASP.NET much better than me turned me on to a much simpler way of digging out that data than trying to do some string parsing. Here's what that code looks like:
JavaScriptSerializer jss = new JavaScriptSerializer();
var d = jss.Deserialize<dynamic>(results);
string xHeader = d["d"]["GetContextWebInformation"]["FormDigestValue"];
I really haven't deconstructed the "<dynamic>" feature enough to explain it well. For now I will just say "hey, look at my cool code sample!" The way I figured out the hierarchy to get down to FormDigestValue was just to look at the "d" variable in the debugger after it had been populated. It essentially has a series of Dictionary objects that it builds on the fly to map a pseudo object model to the JSON that was consumed. So for example it had a Dictionary with a key of "d". The value for that item was another Dictionary that had a key of "GetContextWebInformation". And so on, and so on it goes, until it maps out the JSON that was returned. It's pretty cool.
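To see the pattern end to end, here's the same technique run against an abbreviated, illustrative version of the contextinfo response (the real response contains much more than this):

```csharp
using System.Web.Script.Serialization;

//abbreviated, made-up contextinfo response just to show the pattern
string sample =
    "{\"d\":{\"GetContextWebInformation\":{\"FormDigestValue\":\"0xABCD,01 Jan 2015\"}}}";

JavaScriptSerializer jss = new JavaScriptSerializer();
var d = jss.Deserialize<dynamic>(sample);

//walk the Dictionary hierarchy the serializer built on the fly
string xHeader = d["d"]["GetContextWebInformation"]["FormDigestValue"];
//xHeader is now "0xABCD,01 Jan 2015"
```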
Challenge #4 - Where In the Blazes Do I Upload My File
This may have actually been the most challenging aspect of all. Pretty much every single upload example I saw for using the REST endpoint tells you to upload it to "/_api/web/GetFolderByServerRelativeUrl('" + serverRelativeUrl + "')/Files/Add...blah... No joy. Never. Ever. To be clear, all I was trying to do was to upload the file into the Documents library in a user's OneDrive site. I wasted LOTS of time trying different tweaks around that Url and got nowhere. Finally found kind of a random sample in one of the TechNet docs that got me pointed to the correct location, so here is where I POST'ed my file upload: siteUrl + "_api/web/lists/getByTitle('Documents')/RootFolder/Files/Add(url='" + fileName + "', overwrite=false)". Yeah...get the list by title and then go into its RootFolder. Yahooo, order is restored in my world.
So...assuming you have a stream of data from somewhere (like opening a local file, taking a file that's been uploaded, etc.) and you've stuck it in a byte array (I put mine in fileBytes), here is the complete chunk of code to do an upload for completeness; apologies in advance for the absurd formatting this blog site will put on the code:
string uploadUrl = siteUrl + "_api/web/lists/getByTitle('Documents')/RootFolder/Files/Add(url='" + fileName + "', overwrite=false)";
bool fileUploadError = false;
//now that we have the Url and byte array, we can make a POST request to push the data to SharePoint
//need to get the X-RequestDigest first by doing an
//empty post to http://<site url>/_api/contextinfo
//get the FormDigestValue node
//now create a new request to do the actual upload
//note this is also a POST and needs the same bearer token as before
restRqst = (HttpWebRequest)HttpWebRequest.Create(uploadUrl);
restRqst.Method = "POST";
restRqst.Accept = "application/json;odata=verbose";
restRqst.Headers.Add("Authorization", "Bearer " + token);
restRqst.Headers.Add("X-RequestDigest", xHeader);
restRqst.ContentLength = fileBytes.Length;
//take the document and get it ready to upload
postStream = restRqst.GetRequestStream();
postStream.Write(fileBytes, 0, fileBytes.Length);
postStream.Close();
//do the upload
restResponse = (HttpWebResponse)restRqst.GetResponse();
//assuming it all works then we'll get a chunk of JSON back
//with a bunch of metadata about the file we just uploaded
postStream = restResponse.GetResponseStream();
postReader = new StreamReader(postStream);
results = postReader.ReadToEnd();
There you go, hope you can find a good use for this.
This is yet another rather strange error that I ran across and couldn't find any info out on the interwebs about, so I thought I would document it here. Suppose you have a SharePoint App that needs to access some User Profile information. You will probably use the PeopleManager class and ask for user profile properties using the PersonProperties class or one of the methods off of the PeopleManager class. You write your code up using the standard TokenHelper semantics to get a user + app key to retrieve this information, i.e. something like var clientContext = spContext.CreateUserClientContextForSPHost(). In your AppManifest file you ask for (at a minimum) Read rights to User Profiles (Social). Works great, okay, good start.
Now you determine that you need to retrieve that same information but use an App Only token. So you use whatever method you want to get an App Only token. You use the same code but now you get an Access Denied error message. Why is that? App Only tokens are supposed to have the same or greater rights than user + app tokens. Well...for right now...I don't know why not. NOTE: I DO understand needing to be a tenant admin to install an app that requires access to User Profiles, but this is different; it happens after the app is installed. But I do know how I fixed it. I added Tenant...Read rights to my AppManifest file. Now my App Only token is able to read properties from the User Profile in o365. Just thought I would share this "not at all obvious" tip so that if you get stuck hopefully your favorite search engine will find this post. Happy coding!
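For reference, the permission requests in AppManifest.xml end up looking roughly like this; the Scope URIs shown are my best recollection of the User Profiles (Social) and Tenant scopes, so verify them against the manifest designer before relying on them:

```xml
<AppPermissionRequests AllowAppOnlyPolicy="true">
  <!-- User Profiles (Social) - Read -->
  <AppPermissionRequest Scope="http://sharepoint/social/tenant" Right="Read" />
  <!-- Tenant - Read: the extra right that made the App Only reads work -->
  <AppPermissionRequest Scope="http://sharepoint/content/tenant" Right="Read" />
</AppPermissionRequests>
```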
I've been working recently with some Microsoft Services teams and their customers who are changing the SharePoint STS token signing certificate. We are doing this as part of the set of steps required to set up the SharePoint Hybrid features for integrating search and BCS between on-premises SharePoint farms and Office 365. A couple of these folks got a little concerned because after making this change they had a seemingly random scattering of people who were unable to authenticate into SharePoint. The answer, as it turns out, was just that those users still had a valid fedauth cookie from a previous authentication into SharePoint, and that cookie became invalid when the STS token signing certificate was changed. So just in case, here are a couple of things to remember before you undertake this operation:
I spent waaayyyyy too much time trying to resolve this problem, so I am capturing it here in case any of the rest of you run up against it. I installed a new ADFS 3.0 instance on a Windows Server 2012 R2 machine in my environment, and then configured a new SharePoint SPTrustedIdentityTokenIssuer for it. Every time I tried to authenticate I entered my credentials, and then I would get a 400 bad request back and the whole thing came to a grinding halt. I was getting no errors in any of the event logs on the ADFS server. What was also weird is that if I configured ADFS to use forms based authentication instead of Windows authentication, I could log in just fine.
I suspected Kerberos SPN issues, but when I had tried to set the SPN after setting up ADFS (using setspn), it said that the SPN was already set. Well, guess what - turns out that was not true. I finally just went into adsiedit.msc on my domain controller and looked at my service account. If you go into the properties you can scroll down to servicePrincipalName and see exactly what's configured for it, and sure enough, my ADFS server was not listed there. So, I just added the SPN needed for it - http/yourFqdnAdfsServer - saved it, and authentication started working. As always, note that the SPN is NOT a Url, like http://myserver; it's just the protocol and host name, so http/myserver.
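If you'd rather stay in setspn than poke around in adsiedit.msc, the general approach looks like this (the account and host name below are made up - substitute your own ADFS service account and FQDN):

```
REM list the SPNs currently registered on the service account
setspn -L CONTOSO\svc-adfs

REM add the host SPN; -S checks for duplicates elsewhere in the forest first
setspn -S http/adfs.contoso.com CONTOSO\svc-adfs
```

Running the -L listing first is worth the extra step, since in my case setspn had previously claimed the SPN was set when it actually wasn't.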
Hopefully this will save you some time, I know a lot of folks build all this out in their labs at home so start by double-checking your service account SPNs.
Working with attachments to message postings in Yammer has been something that I've had a few questions on over time, and I just recently had a chance to take a look at it. I decided to blog about it only because there appears to be such a paucity of information from folks who have actually done this successfully. Unfortunately the Yammer developer documentation is skimpier than ever in providing you useful information and examples for getting this done. So...I've again updated my original .NET library for working with Yammer that I described here: http://blogs.technet.com/b/speschka/archive/2013/10/05/using-the-yammer-api-in-a-net-client-application.aspx. I've modified the MakePostRequest method so that you can pass in a local file name and content type, and it will upload the file to Yammer and add it as an attachment to a message posting. All this for the unbelievably low price of "free". :-)
So to include an attachment with your post in Yammer now, calling the MakePostRequest method looks something like this:
response = MakePostRequest(msg, messageUrl + ".json", accessToken, "", "C:\\Users\\speschka\\Desktop\\Yammer Sales Handbook.docx", "application/vnd.openxmlformats-officedocument.wordprocessingml.document");
In this case I got the long funky attachment content type from looking at a Fiddler trace while I was figuring this out. In practice, I haven't really determined how important it is to get this correct; it doesn't block the file upload itself, which uses a multipart/form-data content type. In terms of getting it to work, what I've found easiest to implement is the pending attachment method that Yammer describes in their documentation. This allows me to operate on the file upload(s) separately from posting to a group or whatever. When you do a pending attachment upload you send the file up to Yammer and, as you might expect, it sends you a big chunk of JSON back to tell you what it did with it. As with other features in my .NET examples, I wrote up some classes to deserialize the data that Yammer sends back so you can easily refer to properties of the upload. In this case, when you upload a pending attachment Yammer sends you back an ID, and then you need to include that ID when you make your message post. Here's a brief example of adding that to the body that I'm going to post to Yammer:
//upload the file so we can get the attachment ID
YammerFileUpload ya = UploadFile(attachmentName, authHeader, uploadContentType);
//add the attachment info to the form post message body
postBody += "&pending_attachment1=" + ya.id;
The other perfectly delicious part of this whole puzzle is the URL where you upload your files. This part I found NOWHERE in the Yammer documentation, so Fiddler to the rescue again. Turns out you can send a pending upload to https://files.yammer.com/v2/files and then just add your access token to the query string, i.e. ?access_token=blah.
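To make the shape of that pending upload a little more concrete, here's a minimal sketch of what an UploadFile helper along these lines could look like. To be clear, this is an illustration, not the exact code from my library - the multipart layout and the "file" form field name are assumptions based on what I saw in the Fiddler traces, and it assumes the usual System.IO, System.Net, System.Text and System.Web.Script.Serialization usings plus a YammerFileUpload class with an id property:

```
private YammerFileUpload UploadFile(string fileName, string accessToken, string contentType)
{
    //unique boundary for the multipart/form-data body
    string boundary = "----" + Guid.NewGuid().ToString("N");
    //the pending upload endpoint isn't documented; found via Fiddler
    HttpWebRequest rqst = (HttpWebRequest)WebRequest.Create(
        "https://files.yammer.com/v2/files?access_token=" + accessToken);
    rqst.Method = "POST";
    rqst.ContentType = "multipart/form-data; boundary=" + boundary;

    byte[] fileBytes = File.ReadAllBytes(fileName);
    //multipart body: part header, raw file bytes, then the closing boundary
    string header = "--" + boundary + "\r\n" +
        "Content-Disposition: form-data; name=\"file\"; filename=\"" +
        Path.GetFileName(fileName) + "\"\r\n" +
        "Content-Type: " + contentType + "\r\n\r\n";
    string footer = "\r\n--" + boundary + "--\r\n";

    using (Stream postStream = rqst.GetRequestStream())
    {
        byte[] headerBytes = Encoding.UTF8.GetBytes(header);
        byte[] footerBytes = Encoding.UTF8.GetBytes(footer);
        postStream.Write(headerBytes, 0, headerBytes.Length);
        postStream.Write(fileBytes, 0, fileBytes.Length);
        postStream.Write(footerBytes, 0, footerBytes.Length);
    }

    using (HttpWebResponse resp = (HttpWebResponse)rqst.GetResponse())
    using (StreamReader rdr = new StreamReader(resp.GetResponseStream()))
    {
        //deserialize the JSON Yammer sends back; we really only need
        //the id so we can add pending_attachment1 to the message post
        JavaScriptSerializer js = new JavaScriptSerializer();
        return js.Deserialize<YammerFileUpload>(rdr.ReadToEnd());
    }
}
```

The return value's id is what gets appended to the message post body as the pending_attachment1 parameter shown above.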
In any case, this should be enough to get you uploading files to Yammer in most cases, and serve as a code example for other scenarios not covered here (like uploading multiple files). I've broken the file upload process itself into a separate method to make it more easily extensible should you need to do so. Enjoy!
Today's post combines a few different topics that, dare I say, started as a dare (I think). The gauntlet thrown down was whether or not Remote SharePoint Index would work for SAML users as well as Windows users. For those of you not completely familiar with Remote SharePoint Index, I covered it in some detail in my blog post here: http://blogs.technet.com/b/speschka/archive/2012/07/23/setting-up-an-oauth-trust-between-farms-in-sharepoint-2013.aspx. The gist of it is that you set up an OAuth trust between two SharePoint farms, and it lets you get search results from both farms into a single search center results page. All of our examples in this area have been around using Windows users, but when the question was posed my best guess was "yeah, seems like that should work". As it turns out, it's true, it really does. Getting this to work requires the following:
After you've set all of that up, you also need to go into Manage User Properties in the UPA and map both the Claim User Identifier and (usually) the Work email properties. In my case I used email address as the identity claim, so I mapped both of those properties to the "mail" attribute in Active Directory. If you are using a different identity claim then you would map the Claim User Identifier accordingly and would not need to map Work email. By default, the Claim User Identifier is mapped to samAccountName, so you'll want to Remove that mapping, then add one for mail. There's nothing too helpful about doing this in the UI; you literally just type "mail" in the Attribute field and then click the Add button. When you're done it will look like this:
You'll probably want to do a full import after that (unless you have a really huge directory). The biggest gotcha here is that you can really only set up one profile connection per Active Directory domain (at least when using Active Directory import instead of FIM). The reason that may be a problem is that most folks do an import to get all their Windows users into the UPA. However, if you want to use the same domain to import users for both Windows and SAML auth into the UPA, it won't work. You'll get an error when you try to create a second connection to the same domain. So you basically have to be all in - all SAML or all Windows; otherwise you'll end up getting all sorts of seemingly random results.
Once you've got everything configured and you've run a profile import in both farms, you can set up the first part of this scenario, which is doing cross farm searches as SAML users using Remote SharePoint Index. I literally used the same exact steps that I described in the blog post I referenced at the start of this posting (setting up an OAuth trust between farms). Once that trust is set up, it's just a matter of creating a new result source that is configured as a Remote SharePoint Index, using the Url to the web application in the other farm (the one you used when creating the trust between the farms), and then creating a query rule to execute your query against the remote farm. When it's all said and done, you end up being able to authenticate into your local farm as a SAML user and get search results from the remote farm. In the screenshot here I'm logged in as a SAML user; you see his full display name because it was brought in when I did the profile sync:
Now, the real beauty of this solution is that we can actually get this to work when using the SharePoint 2013 hybrid search features with Office 365. In this case I have a farm where I already had the hybrid features configured and working. When I created my SPTrustedIdentityTokenIssuer, I made sure to include 3 of the 4 claim types (the only ones I populate) that are used in rehydrating users for OAuth calls - email, Windows account, and UPN. Since those values are also synchronized up to my Office 365 tenant (and thus put into the UPA for my tenant up there), I'm able to rehydrate the user in Office 365 and get search results from there as well, even though I'm logged in locally as a SAML user. Very cool!! Here's a screen shot of the same SAML user getting both local search results as well as hybrid search results from Office 365:
Hopefully this helps to clear up some of the mystery about SAML users and OAuth, as well as how they interact with some of the built-in features of SharePoint 2013 like Remote SharePoint Index and Hybrid Search. Clearly the complicating factor in all of this is the UPA and managing your user profile imports. That would be the first planning item I would think about if you want to use SAML authentication on premises in addition to SharePoint Hybrid. It's also clearly a sticking point if you want to use both Windows and SAML auth in the same farm with the same users from the same domain.
Those of you who follow the Share-n-Dipity blog know that I don’t really do much in the way of product endorsements. However, you have probably also figured out that I’m a big SAML fan, so when a friend of mine recently released a new product for this market it really caught my eye. If you’ve followed the blog for a while you may have seen me reference a friend of mine by the name of Israel Vega, Jr. He worked with me at Microsoft for many years and last year decided to head out on his own. “Iz”, as we call him, has done a ton of work with SharePoint over the years and was really one of the go-to guys for it in Microsoft Services when he left.
He has just released a new product called CloudExtra.NET, which is obviously a play on “cloud extranet”. What I really like about his solution is that it’s SAML in a box – basically a turn-key product that brings a lot of things customers frequently ask me about to a packaged product. He uses SAML authentication and a custom claims provider to enable you to use any number of cloud-based directories out of the box. That includes Azure Active Directory for internal users, and anything you can connect to ACS for your external users. That part is pretty cool because he has basically built “invited users” for on-premises SharePoint farms, a feature that currently is only available to Office 365. For those of you who aren’t ready to do that yet, this is a pretty slick way to do it with your own farms now too.
He’s added some nice features that round out his solution. He has a Terms and Conditions agreement for it so you can make invited users agree to your terms of use before gaining access to your farm. He also has a product that’s cool in its own right and is bundled with CloudExtra.NET, which is a business impact application. It lets you assign an impact rating to sites (like High, Medium and Low) and then also use that to apply a site policy to it when you do so. He ties it all together with farm-wide reporting on sites based on business impact. There are also a bunch of reporting and management features built in for invited users that make it a very solid offering.
I gleaned all this from about a 30 minute demo that Iz gave me, and I’m hoping to spend some time in the not too distant future doing a hands-on review of the product to check it out for myself. If it sounds like something that might be of interest to you, go visit his web site at http://cloudextra.net. Thanks Iz for the SAML love!