• SharePoint 2013 - Image Rendition

    Image renditions are a new capability in SharePoint 2013 that lets you optimize the images on your SharePoint site by serving the same image in different sizes. This enhances the user experience when browsing the site from different devices, such as a laptop or a mobile phone.

    How does it work?

    Simply put, when you upload an image, SharePoint gives you the ability to display it in any of the configured sizes.

    Implementation:

    1- Create a publishing site collection.

    2- Click on site contents

    3- Click on SETTINGS

    4- Under the Look and Feel section, click Image Renditions

    5- You will notice an error at the top of the page. This is because the BLOB cache is not configured

    6- Enable BLOB Cache:

    a. Go to web.config

    b. Search for BlobCache Section

    c. Change the value of the attribute enabled to “true”

    d. Save and close web.config
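    For reference, the relevant element in web.config looks roughly like this once enabled (the location is the SharePoint 2013 default and the path attribute is abbreviated here; keep your farm's existing values for everything except enabled):

    ```xml
    <BlobCache location="C:\BlobCache\14" path="\.(gif|jpg|jpeg|png|css|js)$" maxSize="10" enabled="true" />
    ```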

    7- Refresh the Image Renditions page.

    8- By default, there will be four predefined renditions configured on your SharePoint site.

    9- Create a new rendition by clicking the Add new item link

    10- Enter the Name, width and height, and then click Save.

    11- Each rendition has a unique ID. This ID is used as a URL query-string parameter to render the image at the required size.
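    For example, assuming a rendition whose ID is 5 and an image uploaded to /PublishingImages/sample.jpg (both hypothetical names), requesting the resized version looks like:

    ```
    http://yoursite/PublishingImages/sample.jpg?RenditionID=5
    ```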

    12- Upload an image to Image Library

    13- Hover over the uploaded image, click on …, and then click Edit Renditions

    14- Click "Click to change" for the 100 x 100 display template

    15- Move the selection area over the needed region and click Save

    16- Create a new Article Page

    17- Edit the page and add an image on the left

    18- Select the image on the left

    19- On the ribbon, click Image, then click Pick Rendition.

    20- Choose the required setting to show the image.


  • Error While Running Content Deployment Job in SharePoint 2010

    While configuring content deployment between the pre-production and live environments, we faced the following issue, which caused the content deployment job to fail. The list of content deployment jobs is accessed from Central Administration, in the General Application Settings section.

    During troubleshooting, we noticed that some features with the PWS prefix (related to Project Server) are displayed in $Resources format instead of their display names, as shown below:

    Another observation: in the top-level site collection there is a list called Issues, created by default as part of the publishing site template; all of its fields are displayed in $Resources format as well, as shown below:

    Looking at the columns of the Issues list, one corrupted column called $Resources:FieldTitle_Links is causing the problem. Clicking "Delete this invalid field" displays an error page reporting an internal error, without any details in the SharePoint logs or in Event Viewer!

    The display name of the problematic field appears to be "Related Issues"; the environment in question was upgraded from SharePoint 2007.

    We found that this can be fixed by deleting the Issues list and recreating it using the Issue Tracking list template, as shown below:
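    If you prefer to script the fix, a rough sketch in the SharePoint 2010 Management Shell could look like the following (the web URL is a placeholder and the list names are examples; try this on a non-production environment first):

    ```powershell
    # Delete the corrupted Issues list and recreate it from the Issue Tracking template
    $web = Get-SPWeb "http://pre-production/"   # placeholder URL

    $list = $web.Lists["Issues"]
    $list.Delete()

    # Recreate the list using the built-in Issue Tracking template
    $web.Lists.Add("Issues", "Recreated issues list",
        [Microsoft.SharePoint.SPListTemplateType]::IssueTracking)
    $web.Update()
    $web.Dispose()
    ```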

    The new list works correctly and the column names display properly. The list was empty in our case, so no information was lost.

    Re-running the content deployment job from pre-production to production then succeeded.

  • FAST Search for SharePoint 2010 - FQL and its Impact on Relevancy Ranking (Using Full Text Index Mapping)

    Introduction

    In this post I will describe a common mistake that occurs when you write your own FQL and that impacts search results that depend on relevancy ranking (using full-text index mapping).

     

    Scenario

    Let's consider a list of products that we would like to search through, with custom managed property weights defined to control relevancy. The full-text index mapping is configured as follows:

    • ProductName: Level 7 (High Importance)
    • ProductDesc: Level 1 (Low Importance)

    and let's say that other dynamic ranking options (such as freshness, boosts, authority weight, etc.) are disabled in this scenario to make things easier. So what happens if I search for the product "XBox" using the following FQL:

    (
        (
            ProductName:string("XBox", mode="simpleany")
        )
    ) or (
        ProductDesc:string("XBox", mode="simpleany")
    )

     

    What I expect to see is that the search results list the items that have the word "XBox" in ProductName first, due to my relevancy configuration. But that doesn't happen. Why?

     

     

    The Problem:

    To understand the problem, let's check the official definition of FQL from the Microsoft MSDN site:

    The FAST Query Language (FQL) is a powerful query language that enables developers to perform exact searches and to narrow the scope of your search to values that belong to a specific managed property or a full-text index.

    So what happened here is that the exact managed-property search kicked in and disabled dynamic ranking (using full-text index mapping).

     

    Solution 

    To solve this issue, you need to remove the managed property names from your query and rephrase it as follows:

     

    (
        (
            string("XBox", mode="simpleany")
        )
    )

     

    This enables full-text index mapping on your search results and brings back the items that match on ProductName first.
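    As an alternative (not part of the original fix above), FQL also offers the xrank operator, which adds a constant rank boost to items matching a secondary operand. A rough sketch that keeps the property scoping but explicitly boosts ProductName matches (the boost value here is arbitrary):

    ```
    xrank(
        or(
            ProductName:string("XBox", mode="simpleany"),
            ProductDesc:string("XBox", mode="simpleany")
        ),
        ProductName:string("XBox", mode="simpleany"),
        boost=5000
    )
    ```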

     

    Note:

    For the sake of this scenario, I disregarded the number of occurrences of the word "XBox" in the search results to make it easier to understand; please note that this issue will also affect the order of the search results.

     

     

     


  • Diagnostics of Cloud Services – Custom Trace Listener

    In this part of the series, I am going to talk about instrumenting diagnostics for cloud services (web and worker roles) with a custom trace listener implementation, in line with the new Windows Azure SDK for .NET (April 2013, version 2.0).

    Before starting the implementation, I suggest you get an overview of the new SDK. A perfect resource for this is Scott Guthrie’s blog here. As you can see, one of the areas improved noticeably since v1.8 is diagnostics; the key items are:

    • Built-in diagnostic enhancements for Azure web sites
    • The ability to diagnose cloud services directly from Visual Studio
    • The ability to change diagnostic configuration settings while the application is running.

     

    In the .NET world, to diagnose and monitor an application you need a listener, which listens to the source (the application itself) for events (debug, trace, etc.), and a writer, which persists the messages captured by the listener to the intended destination (a file, a database table, a storage table, etc.).

    Listener

    All listeners (custom or built-in) derive from System.Diagnostics.TraceListener. I am going to implement a table storage listener; as the name implies, it outputs the captured messages to an Azure storage table. Here is the whole class implementation:

     

    using System;
    using System.Collections.Generic;
    using System.Data.Services.Client;
    using System.Text;
    using System.Diagnostics;
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Table;

    namespace ToDoCommon
    {
        public class TableTraceListener : TraceListener
        {
            #region Fields
            readonly string _connectionString;
            readonly string _diagnosticsTable = "CustomLogTable";

            [ThreadStatic]
            static StringBuilder _messageBuffer;

            readonly object _initializationSection = new object();
            bool _isInitialized;

            CloudTableClient _tableStorage;
            readonly object _traceLogAccess = new object();
            readonly List<LogEntry> _traceLog = new List<LogEntry>();
            #endregion

            #region Constructors

            public TableTraceListener()
                : this("Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString")
            {
            }

            public TableTraceListener(string connectionString)
                : base("TableTraceListener")
            {
                _connectionString = connectionString;
            }

            public TableTraceListener(string connectionString, string tableName)
                : base("TableTraceListener")
            {
                _connectionString = connectionString;
                _diagnosticsTable = tableName;
            }
            #endregion

            #region Methods

            /// <summary>
            /// Flushes the entries to the storage table
            /// </summary>
            public override void Flush()
            {
                if (!_isInitialized)
                {
                    lock (_initializationSection)
                    {
                        if (!_isInitialized)
                        {
                            Initialize();
                        }
                    }
                }

                var context = _tableStorage.GetTableServiceContext();
                context.MergeOption = MergeOption.AppendOnly;
                lock (_traceLogAccess)
                {
                    _traceLog.ForEach(entry => context.AddObject(_diagnosticsTable, entry));
                    _traceLog.Clear();
                }

                if (context.Entities.Count > 0)
                {
                    context.BeginSaveChangesWithRetries(SaveChangesOptions.None,
                        (ar) => context.EndSaveChangesWithRetries(ar), null);
                }
            }

            /// <summary>
            /// Creates the storage table object
            /// </summary>
            private void Initialize()
            {
                var account = CloudStorageAccount.Parse(_connectionString);
                _tableStorage = account.CreateCloudTableClient();
                _tableStorage.GetTableReference(_diagnosticsTable).CreateIfNotExists();
                _isInitialized = true;
            }

            public override bool IsThreadSafe
            {
                get
                {
                    return true;
                }
            }

            #region Trace and Write Methods
            /// <summary>
            /// Writes the message to a string buffer
            /// </summary>
            /// <param name="message">the Message</param>
            public override void Write(string message)
            {
                if (_messageBuffer == null)
                    _messageBuffer = new StringBuilder();

                _messageBuffer.Append(message);
            }

            /// <summary>
            /// Writes the message with a line breaker to a string buffer
            /// </summary>
            /// <param name="message">the Message</param>
            public override void WriteLine(string message)
            {
                if (_messageBuffer == null)
                    _messageBuffer = new StringBuilder();

                _messageBuffer.AppendLine(message);
            }

            /// <summary>
            /// Appends the trace information and message
            /// </summary>
            /// <param name="eventCache">the Event Cache</param>
            /// <param name="source">the Source</param>
            /// <param name="eventType">the Event Type</param>
            /// <param name="id">the Id</param>
            /// <param name="message">the Message</param>
            public override void TraceEvent(TraceEventCache eventCache, string source, TraceEventType eventType, int id, string message)
            {
                base.TraceEvent(eventCache, source, eventType, id, message);
                AppendEntry(id, eventType, eventCache);
            }

            /// <summary>
            /// Adds the trace information to a collection of LogEntry objects
            /// </summary>
            /// <param name="id">the Id</param>
            /// <param name="eventType">the Event Type</param>
            /// <param name="eventCache">the EventCache</param>
            private void AppendEntry(int id, TraceEventType eventType, TraceEventCache eventCache)
            {
                if (_messageBuffer == null)
                    _messageBuffer = new StringBuilder();

                var message = _messageBuffer.ToString();
                _messageBuffer.Length = 0;

                if (message.EndsWith(Environment.NewLine))
                    message = message.Substring(0, message.Length - Environment.NewLine.Length);

                if (message.Length == 0)
                    return;

                var entry = new LogEntry()
                {
                    PartitionKey = string.Format("{0:D10}", eventCache.Timestamp >> 30),
                    RowKey = string.Format("{0:D19}", eventCache.Timestamp),
                    EventTickCount = eventCache.Timestamp,
                    Level = (int)eventType,
                    EventId = id,
                    Pid = eventCache.ProcessId,
                    Tid = eventCache.ThreadId,
                    Message = message
                };

                lock (_traceLogAccess)
                    _traceLog.Add(entry);
            }

            #endregion

            #endregion
        }
    }

    using Microsoft.WindowsAzure.Storage.Table.DataServices;

    namespace ToDoCommon
    {
        public class LogEntry : TableServiceEntity
        {
            public long EventTickCount { get; set; }
            public int Level { get; set; }
            public int EventId { get; set; }
            public int Pid { get; set; }
            public string Tid { get; set; }
            public string Message { get; set; }
        }
    }

    What we do here, in summary:

    1. created a storage table,
    2. converted the captured trace information (trace event cache, type, message, etc.) into a TableServiceEntity object called LogEntry,
    3. then overrode the Flush method so that it persists the buffered LogEntry collection to the storage table created in step 1.

    And now time to add the listener:

    public static void ConfigureTraceListener(string diagConn, string tableName)
    {
        Trace.Listeners.Clear();
        string traceName = typeof(TableTraceListener).Name;
        var listener =
            new TableTraceListener(diagConn, tableName)
            {
                Name = traceName
            };

        Trace.Listeners.Add(listener);
        Trace.AutoFlush = true;
    }

    In this code block, we construct the TableTraceListener with two strings:

    • connection string: set in the service configuration file, pointing to the storage account that hosts the table created in step 1 above. This is set to “UseDevelopmentStorage=true” when running locally (pointing to local development storage).
    • table name: the name of the table created in step 1 above.
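    In the service configuration (.cscfg), these two values would look roughly like this (the setting names match the ones read in the global.asax sample in this post; the values are examples for local development):

    ```xml
    <ConfigurationSettings>
      <Setting name="DiagConnection" value="UseDevelopmentStorage=true" />
      <Setting name="DiagTableName" value="CustomLogTable" />
    </ConfigurationSettings>
    ```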

     

    Logger

    Nothing much to say here.

    using System;
    using System.Text;
    using System.Diagnostics;

    namespace ToDoCommon
    {
        public class Logger : ILogger
        {
            static Logger _instance;
            public static ILogger Instance
            {
                get { return _instance ?? (_instance = new Logger()); }
            }

            public void LogError(Exception ex)
            {
                var sb = new StringBuilder();
                sb.AppendLine();
                // log the top-level message before walking the inner exceptions
                sb.AppendFormat("Message: {0}\r\n", ex.Message);
                var innerEx = ex.InnerException;
                while (innerEx != null)
                {
                    sb.AppendFormat("\t Inner Exc Message: {0}\r\n", innerEx.Message);
                    innerEx = innerEx.InnerException;
                }

                sb.AppendLine("StackTrace: " + ex.StackTrace);
                LogEntry(sb.ToString(), EventLogEntryType.Error);
            }

            public void LogEntry(string message, EventLogEntryType type)
            {
                switch (type)
                {
                    case EventLogEntryType.Error:
                        Trace.TraceError(message);
                        break;
                    case EventLogEntryType.Information:
                        Trace.TraceInformation(message);
                        break;
                    case EventLogEntryType.Warning:
                        Trace.TraceWarning(message);
                        break;
                }
            }
        }
    }

    Please note that the web role and its associated web application run in different processes. Therefore, you need to add the listener in both places.

    My Solution

    A picture says a thousand words, so here it is:

    … and Show Time

    All I need to do now, to enable the diagnostic feature in my solution, is call ToDoCommon.Util.ConfigureTraceListener(diagConnection, tableName) from the OnStart method of each role’s entry point class (web and worker) and from global.asax. The global.asax version is shared below:

    using System;
    using System.Web;
    using System.Web.Http;
    using System.Web.Mvc;
    using System.Web.Optimization;
    using System.Web.Routing;
    using Microsoft.WindowsAzure.ServiceRuntime;
    using ToDoCommon;

    namespace ToDoListWeb
    {
        // Note: For instructions on enabling IIS6 or IIS7 classic mode,
        // visit http://go.microsoft.com/?LinkId=9394801

        public class MvcApplication : HttpApplication
        {
            protected void Application_Start()
            {
                AreaRegistration.RegisterAllAreas();

                WebApiConfig.Register(GlobalConfiguration.Configuration);
                FilterConfig.RegisterGlobalFilters(GlobalFilters.Filters);
                RouteConfig.RegisterRoutes(RouteTable.Routes);
                BundleConfig.RegisterBundles(BundleTable.Bundles);
                AuthConfig.RegisterAuth();

                var diagConnection = RoleEnvironment.GetConfigurationSettingValue("DiagConnection");
                var tableName = RoleEnvironment.GetConfigurationSettingValue("DiagTableName");

                Util.ConfigureTraceListener(diagConnection, tableName);
            }

            protected void Application_OnError(object sender, EventArgs e)
            {
                var ex = Server.GetLastError();
                Server.ClearError();

                Logger.Instance.LogError(ex);
            }
        }
    }
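    For the role entry points (not shown above), the worker role side could look roughly like this; a minimal sketch assuming the same Util helper and configuration setting names (the namespace and class name are hypothetical):

    ```csharp
    using System.Diagnostics;
    using Microsoft.WindowsAzure.ServiceRuntime;
    using ToDoCommon;

    namespace ToDoListWorker
    {
        public class WorkerRole : RoleEntryPoint
        {
            public override bool OnStart()
            {
                // Read the diagnostics settings from the service configuration
                var diagConnection = RoleEnvironment.GetConfigurationSettingValue("DiagConnection");
                var tableName = RoleEnvironment.GetConfigurationSettingValue("DiagTableName");

                // Register the custom table trace listener for this process
                Util.ConfigureTraceListener(diagConnection, tableName);

                Trace.TraceInformation("Worker role starting");
                return base.OnStart();
            }
        }
    }
    ```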

     

    To test the solution, I created a dummy page and called the trace functionality from an action method:

    public ActionResult Details(int id = 0)
    {
        Logger.Instance.LogEntry("Dummy Informative log entry", EventLogEntryType.Information);
        Logger.Instance.LogEntry("Dummy Warning log entry", EventLogEntryType.Warning);
        Logger.Instance.LogEntry("Dummy Error log entry", EventLogEntryType.Error);

        var todoitem = _context.ToDoItems.Find(id);
        if (todoitem == null)
        {
            throw new ArgumentOutOfRangeException();
            //return HttpNotFound();
        }
        Trace.Flush();
        return View(todoitem);
    }

     

    And the result confirms the success!

     

    Conclusion

    In this post, I implemented a custom trace listener that captures the diagnostic events (trace, debug, and trace source) in the application and writes them to a Windows Azure storage table. The advantages of this custom tracing are being able to implement the instrumentation the way you want (lighter, perhaps) and being able to define the table name yourself.

    Please note that the code blocks shared here are for demonstration purposes; I suggest you review your needs (trace switches, buffering, application monitoring capability, etc.) before using them in live systems. Additionally, I recommend investigating the configure-before-customize approach, which I will be talking about in my next post. Stay tuned!

  • Install Active Directory in Windows Azure in four simple steps

    Introduction:

    There is a very long and complex way to install Active Directory in a Windows Azure environment, described in detail in the Windows Azure documentation here: http://www.windowsazure.com/en-us/manage/services/networking/active-directory-forest/. What I propose in this post is a simple way to install Active Directory in a new Windows Azure environment without going through all the steps listed in the Microsoft documentation, which include creating the Domain Controller virtual machine using PowerShell.

    Install Steps:

    The installation steps are as simple as the following:

    1. Create a storage account in the same region as the affinity group. More details on how to create a storage account in Windows Azure can be found here: http://www.windowsazure.com/en-us/manage/services/storage/how-to-create-a-storage-account/.

    2. Create a new virtual network associated with the storage account created in step 1, create a subnet (for example 172.16.0.0/24), and set the DNS server to the subnet address ending in “.4” (following the same example, 172.16.0.4). More details on how to create a virtual network in Windows Azure can be found here: http://www.windowsazure.com/en-us/manage/services/networking/create-a-virtual-network/.

    3. Create the first virtual machine and associate it with the storage account and network created in steps 1 and 2. This virtual machine will automatically take the IP address ending in “.4” (following the same example, 172.16.0.4). More details on how to create a virtual machine in Windows Azure can be found here: http://www.windowsazure.com/en-us/manage/windows/tutorials/virtual-machine-from-gallery/.

    4. Install Active Directory (including the DNS service) on the virtual machine created in step 3.

    Now your Windows Azure environment is ready with Active Directory, and all virtual machines created later will automatically point to the Domain Controller’s IP address as their primary DNS server.

    Conclusion:

    To conclude: there is the long, fully documented way to install Active Directory, and in this post you have a simple and very fast way to install it in your new Windows Azure environment.

    Hope that this will accelerate building your new Windows Azure environment in less than an hour.