MEA Center of Expertise

We are 120+ technology enthusiasts helping Microsoft customers across the Middle East & Africa region, bridging Microsoft tools and technologies to their businesses.

November, 2010

  • Get Manager approval in SharePoint Designer 2010 - Step by Step

    A fundamental requirement I always encounter when gathering workflow requirements is getting the approval of the user's manager: a tedious amount of coding to connect to Active Directory, retrieve user profile information, get the manager's login name, pass it as a parameter to the workflow, and create a custom task for the manager with a notification.

    SharePoint Designer 2010 to the rescue: you can do all of the above in about 15 clicks, with ZERO code involved, just by doing the following:

     

    1. Open SharePoint Designer 2010 and connect to your SharePoint site.
    2. Click Workflows and select the workflow type you need. For this demonstration I'll use a reusable workflow scoped to All content types, but this doesn't affect the logic below; the same steps apply to List and Site workflows.
    3. From the Workflow ribbon, select Actions > Collect Data from a User. You can also select Assign a To-do Item, but Collect Data from a User lets you create a custom task for the manager.

    4. This will show the action in the workflow editor.

    5. The action is constructed from three parts:
      1. data: the custom task whose data will be collected.
      2. this user: in our case, the manager.
      3. Output to Variable: collect. "collect" is the task ID; you can rename it, and use it to refer to the task whenever you need to pass a variable through this task.
    6. Clicking data starts the custom task wizard; open the wizard and click Next.
    7. Specify a task name and, optionally, a description. For now, let's call the task "Review Task". Click Next.

    8. Now specify the task's custom fields by clicking Add and selecting the field type. I will specify two fields:
      1. Approved, as a choice with Yes/No.
      2. Comment, as a text area.

     

    9. Provide the choice values and hit Finish.

    10. It should now look like this, with both fields listed.

    11. After you finish creating the task, the action in the editor changes to reflect the task name.

    12. Now for the fun part: getting the user's manager. Simply click this user, select "Workflow Lookup for a User...", and click Add.

    13. Now you need to specify the data source. In other situations you would have to connect to Active Directory, create a connection, and deal with the rest of that headache. Here, all you need to do is select "User Profiles" from the end of the list.
    14. This displays all the user profile attributes you can select from, such as Manager. Select "Manager" and specify "Login Name" as the output. This looks up the user's manager and returns his login name as a parameter, so the task can be assigned to that manager.

    15. The Find the List Item section lets you select the user whose manager's approval you need. In our case, we want the manager of the user who started the workflow, so I will select "Account Name" as the Field and click the formula button in the Value field.

     

    16. Specify the data source as Workflow Context, the field source as the workflow's "Current User", and the return field as "Login Name". Click OK, OK, and OK.

    17. The action should now look like this.

       

       

      Voilà! Now you have a custom task that will be sent to the user's manager, with a notification in his task list.

       

      Enjoy.

  • The Diagonal Warehouse Design

    Have you ever been in a situation, when designing your warehouse, where you had some measures that could not be grouped together due to dimensionality differences? A few measures that you can't put into one fact table, and of course, like me, you hate the idea of a separate fact table for each measure. I've been there, and I found a simple, effective design you can use to combine all the unrelated measures into one fact table, and "let the Analysis Services art handle the rest".

    The diagonal warehouse is based on the very well-known fact that

    NULL aggregates as 0

    (SQL aggregate functions such as SUM simply skip NULLs, so a NULL cell contributes nothing to the total). Yes, nothing new. So why not use it to fill in the spaces between the measures? Sounds weird? Let's see an example:

     

    Measure 1  uses Time and Geography

    Measure 2 uses Time, Geography and Dimension 2

    Measure 3 uses Time and Dimension 3

     

    How can we combine this non-homogeneous combination into one fact table?

    Simply put the dimension data into diagonal form and fill the gaps with NULLs, keeping in mind, of course, to hide the unknown member in the dimension properties. We'll get to that later; for now, let's look at the fact table.

     

    TimeID | GeographyID | Dimension2 | Dimension3 | Measure1 | Measure2 | Measure3
    -------|-------------|------------|------------|----------|----------|---------
    Time1  | Geo1        | NULL       | NULL       | Value11  | NULL     | NULL
    Time2  | Geo2        | NULL       | NULL       | Value12  | NULL     | NULL
    Time3  | Geo3        | Dim1       | NULL       | NULL     | Value21  | NULL
    Time4  | Geo4        | Dim2       | NULL       | NULL     | Value22  | NULL
    Time5  | Geo5        | Dim3       | NULL       | NULL     | Value23  | NULL
    Time6  | NULL        | NULL       | Dim1       | NULL     | NULL     | Value31
    Time7  | NULL        | NULL       | Dim2       | NULL     | NULL     | Value32

     

    Looking at the table above, the input may seem strange for a fact table. But think about it and you'll find that each and every dimension drills down correctly on its associated measure, neglecting the unrelated measures thanks to the NULLs filled in.
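    The trick is easy to verify in any SQL engine. Below is a minimal sketch using Python's built-in sqlite3 module, with hypothetical table and measure names modeled on the example above; summing one measure simply skips the NULL cells contributed by the other measures' rows:

    ```python
    import sqlite3

    # Hypothetical diagonal fact table: unrelated measures share one
    # table, with NULLs filling the gaps between them.
    conn = sqlite3.connect(":memory:")
    conn.execute("""CREATE TABLE Fact (
        TimeID TEXT, GeographyID TEXT, Dimension2 TEXT, Dimension3 TEXT,
        Measure1 REAL, Measure2 REAL, Measure3 REAL)""")
    rows = [
        ("Time1", "Geo1", None, None, 10.0, None, None),   # a Measure1 row
        ("Time2", "Geo2", None, None, 15.0, None, None),   # a Measure1 row
        ("Time3", "Geo3", "Dim1", None, None, 7.0, None),  # a Measure2 row
        ("Time6", None, None, "Dim1", None, None, 3.0),    # a Measure3 row
    ]
    conn.executemany("INSERT INTO Fact VALUES (?, ?, ?, ?, ?, ?, ?)", rows)

    # SUM skips NULLs, so the Measure2/Measure3 rows contribute nothing here.
    total_m1 = conn.execute("SELECT SUM(Measure1) FROM Fact").fetchone()[0]
    print(total_m1)  # 25.0
    ```

    Drilling down on Geography for Measure1 works the same way: the rows belonging to the other measures aggregate away silently.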

    When you build the cube in Analysis Services, go to each and every dimension related to the above fact table. Don't forget any of them, and set the unknown member to hidden, as in the screenshot below.

     

     

    So think of it as a diagonal and start combining as many measures as you can. This will save you a lot of time and design headache.

     

  • Desktop Virtualization

    Microsoft Desktop Virtualization solutions help companies to reduce their total cost of ownership, increase business agility and continuity, enable anywhere access, and improve security and compliance. For companies new to desktop virtualization, deploying Microsoft Application Virtualization as a first step can provide immediate cost savings.

    Desktop Virtualization offers a broad portfolio of solutions that empower companies to choose the technologies that best address their unique business and IT challenges while preserving their existing IT investments. Microsoft delivers desktop virtualization offerings for a wide range of situations, from always-connected workers to those requiring more flexibility, such as mobile workers. For connected workers, Microsoft and its partners, including Citrix, deliver a Virtual Desktop Infrastructure (VDI) solution that allows organizations to centrally manage desktops in the datacenter while providing a personalized desktop experience for end users.

    Virtual Desktop Infrastructure

    Virtual Desktop Infrastructure (VDI) is an alternative desktop delivery model that allows users to access desktops running in the datacenter.

    Microsoft offers comprehensive and cost effective technology that can help customers deploy virtual desktops in the datacenter. The Microsoft VDI Suites allow customers to manage their physical and virtual desktops from a single console, while providing great flexibility for deploying server-hosted desktops and applications.

    The Benefits of VDI include:

    • Integrated management
    • Enhanced security and compliance
    • Anywhere access from connected devices
    • Increased business continuity


    Session Virtualization

    Session Virtualization with Remote Desktop Services delivers session-based desktops or applications and is suitable for low-complexity or task-worker scenarios. It allows for high user density with a limited degree of personalization or isolation.

    Microsoft Enterprise Desktop Virtualization

    Microsoft Enterprise Desktop Virtualization (MED-V) removes the barriers to Windows upgrades by resolving application incompatibility with Windows Vista or Windows 7. MED-V delivers applications in a virtual PC that runs a previous version of the operating system (for example: Windows XP). It does so in a way that is completely seamless and transparent to the user. Applications appear and operate as if they were installed on the desktop, so that users can even pin them to the task bar. For IT administrators, MED-V helps deploy, provision, control, and support the virtual environments.

    Microsoft Application Virtualization

    In a physical environment, every application depends on its OS for a range of services, including memory allocation, device drivers, and much more. Incompatibilities between an application and its operating system can be addressed by either server virtualization or presentation virtualization; but for incompatibilities between two applications installed on the same instance of an OS, you need application virtualization.

    Microsoft Application Virtualization 4.6 is now available! App-V 4.6, together with Windows 7, Windows Server 2008 R2, and Office 2010, delivers a seamless user experience, streamlined application deployment, and simplified application management.

    • Broader Windows platform coverage: App-V now supports 32-bit and 64-bit applications on 32-bit and 64-bit operating systems.
    • Office 2010 virtualized with App-V 4.6 delivers key productivity enhancements and a seamless user experience with SharePoint, Outlook, and more.
    • Optimized server disk storage: using App-V in VDI with a shared cache reduces storage requirements on SANs.
    • Increased IT control, user productivity, and security with key Windows 7 features: App-V integrates seamlessly with AppLocker, BranchCache, and BitLocker To Go.

    Application Virtualization allows you to isolate a specific application from the OS and other applications, eliminates conflicts between applications and removes the need to install applications on PCs.

    Benefits

    • Streams applications on-demand over the internet or via the corporate network to desktops, Terminal servers and laptops
    • Automates and simplifies the application management lifecycle
    • Accelerates OS and application deployments
    • Reduces the end user impacts associated with application upgrades/patching and terminations
    • Enables controlled application use when users are completely disconnected

     

    User State Virtualization

    User state virtualization isolates user data and settings from PCs and enables IT to store them centrally in the datacenter, while also making them accessible on any PC, by using Windows Roaming User Profiles, Windows Folder Redirection, and Offline Files. Using these technologies, individually or in combination, enables easily replaceable PCs for business continuity, centralized backup and storage of user data and settings, end-user access to data from any PC, and simplified PC provisioning and deployment. The rich-client advantage: user data can be cached for offline access and then automatically synchronized with datacenter servers upon reconnection to the network.

     

     

  • MEA Center of Expertise presenting in Microsoft Egypt Open Door 2-3 November

    Microsoft Egypt is organizing its Open Door event on the 2nd and 3rd of November 2010. It is a big event around Microsoft technologies, targeting developers and IT pros.

    Microsoft MEA Center of Expertise will participate in the event with a number of brilliant speakers covering different technologies.

     

    Corinne Wasfi - Optimized Desktop Solution Specialist

    Sessions

    “Optimized Desktop As Enterprises See It”     Tuesday 2nd November at  11:30 to 12:30

     

    Mostafa Fathi - Optimized Desktop Specialist at Microsoft. Coming from an IT/telecom background built on 6 years of local and regional customer-facing experience, Mostafa specializes in using virtualization technologies to find IT solutions that overcome business challenges in enterprises.

    Sessions

    "Windows 7 + MDOP: Optimized Desktop Solutions"    Tuesday, 2nd of Nov    4:15 PM - 5:15 PM
    "Deploying Microsoft Enterprise Desktop Virtualization v1 to Solve Windows 7 Application Compatibility"    Wednesday, 3rd of Nov    10:00 AM - 11:00 AM

     

    Khaled Hnidk - Technology Specialist, Identity and Private Cloud. Khaled is a veteran IT architect with more than 12 years of experience in the software industry.

    Sessions

    “Economics of Cloud Computing”          Tuesday, 2nd of Nov.           5:30 PM : 6:30 PM
    “Business Ready Security: Exploring the Identity and Access Management Solution”     Wednesday, 3rd of Nov       11:15 AM : 12:15 PM

     

     

    Ayman El-Hattab - Regional Technology Solution Professional, SharePoint MVP, author & speaker. He is a co-founder of the MS3arab community and SharePoint4Arabs.com, and is always an active participant in Microsoft communities. Blog: http://www.aymanelhattab.com , Twitter: @aymanelhattab

    Sessions

    "Developing Office Business Applications with Microsoft Office 2010 and Microsoft SharePoint Server 2010"  Wednesday, 3rd of Nov    2:30 PM :  3:30 PM

     

     

    Marwan Tarek - SharePoint Technology Specialist, SharePoint MVP since 2008, working with SharePoint technologies since 2005. He is the founder of the Egypt SharePoint User Group, leads all SharePoint events in Egypt, and is a co-founder of SharePoint4Arabs.com, the first website to publish Arabic screencasts for SharePoint. Blog: http://www.marwantarek.com Twitter: @marwantarek

    Sessions:

    "Microsoft Office 2010: Developing the Next Wave of Productivity Solutions"    Wednesday, 3rd of Nov   10:00 AM - 11:00 AM

  • Windows 7 Features - Direct Access (Series 1)

    What is DirectAccess?
    DirectAccess allows remote users to securely access intranet shares, Web sites, and applications without connecting to a virtual private network (VPN). DirectAccess establishes bi-directional connectivity with the user's intranet every time the user's DirectAccess-enabled portable computer connects to the Internet, even before the user logs on. Users do not have to take any action to be connected to the intranet, and information technology (IT) administrators can manage remote computers outside the office, even when the computers are not connected to a VPN, anytime the computer has Internet access.

    How is DirectAccess different from current VPN solutions?
    Virtual private networks (VPNs) securely connect remote users to their network. While DirectAccess can also do that, it is only one of the many things that DirectAccess does well. Additionally, DirectAccess can ensure that users are connecting to the exact server they think they are connecting to (end-to-end authentication) and provide data encryption all the way to the server (end-to-end encryption). DirectAccess also allows IT professionals to service remote computers whenever the DirectAccess client has Internet connectivity. Finally, working together with Network Access Protection (NAP), DirectAccess can ensure that clients are always compliant with system health requirements, for a secure and healthy IT environment.

    What are minimum operating system requirements for DirectAccess?
    DirectAccess clients must run Windows 7 Enterprise Edition, Windows 7 Ultimate Edition, or Windows Server 2008 R2 and be joined to an Active Directory Domain Services (AD DS) domain. DirectAccess servers must run Windows Server 2008 R2 and be joined to an AD DS domain.

    Are there built-in limitations on the number of simultaneous DirectAccess connections that a DirectAccess server supports?
    No. Unlike built-in connection limits for the Routing and Remote Access service, DirectAccess has no built-in connection limitations.

    What gets installed on the client to enable DirectAccess?
    DirectAccess does not require any client-side installation. DirectAccess clients use Active Directory domain membership and Group Policy settings for their configuration. Once the Group Policy settings are applied while connected to the local area network (LAN) or through a VPN connection, there is no user interface on the DirectAccess client. When DirectAccess is operating effectively, it is transparent to the end user

    How much does it cost to buy DirectAccess?  
    DirectAccess requires two components: DirectAccess clients and a DirectAccess server. DirectAccess clients need to run Windows 7 Enterprise, Windows 7 Ultimate, or Windows Server 2008 R2. DirectAccess server functionality is included with Windows Server 2008 R2. There are no additional products or licenses that are required.
    Forefront Unified Access Gateway (UAG) extends the benefits of DirectAccess in the platform across the infrastructure by enhancing scalability and simplifying deployments and ongoing management.

    How does DirectAccess work?
    DirectAccess uses a combination of Internet Protocol version 6 (IPv6) end-to-end connectivity, Internet Protocol security (IPsec) protection of intranet traffic, separation of Domain Name System (DNS) traffic with the Name Resolution Policy Table (NRPT), and a network location server that DirectAccess clients use to detect when they are on the intranet.

     

  • Fun with Pipelining, Chaining & Linq – P1

    This is a long series of entries on rules processing & chaining. Bear with me :-)

     

    Pipelines are among my favorite methods to express a sequence of logic components to execute in response to business events, like a "Save New Customer" button or a "Process New Customer" service method. Developing pipelines has always been about creating a pipeline interface (preferably an abstract class) that expresses the following:

    • A component name
    • An Execute method that takes a known parameter.

     

    Then you develop a pipeline that stages these components in a sequence, with a mechanism to start execution, so your pipeline will probably have methods for:

    • Adding a pipeline component
    • Removing a pipeline component
    • Execute (this is where the real fun is: walking across all components and executing them one by one)
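    For illustration, the interface-plus-stages design described above might be sketched like this (a hypothetical Python rendition; the article's own samples below are C#):

    ```python
    class PipelineComponent:
        """Base class each stage implements: a name plus an execute
        method that takes a known parameter (here, a context dict)."""
        name = "component"

        def execute(self, context):
            raise NotImplementedError

    class Validate(PipelineComponent):
        name = "validate"
        def execute(self, context):
            context["validated"] = True

    class Save(PipelineComponent):
        name = "save"
        def execute(self, context):
            context["saved"] = True

    class Pipeline:
        def __init__(self):
            self._stages = []

        def add(self, component):      # add a pipeline component
            self._stages.append(component)

        def remove(self, component):   # remove a pipeline component
            self._stages.remove(component)

        def execute(self, context):    # run each stage in sequence
            for stage in self._stages:
                stage.execute(context)

    pipeline = Pipeline()
    pipeline.add(Validate())
    pipeline.add(Save())
    ctx = {}
    pipeline.execute(ctx)
    print(ctx)  # {'validated': True, 'saved': True}
    ```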

     

     

    This was fun to develop, and it was always interesting to optimize and add features, like persisting the pipeline's structure (to a database or a file) or adding shared services like database connectivity, service URLs, etc.

     

    Check out the BizTalk Server (BTS) pipeline interfaces; they are a well-implemented example of the above.

     

    With .NET anonymous methods and extension methods, this has become really easy to accomplish, as in this sample pseudo-code:

     

    var actionsArray = { Action1, Action2, Action3 }

    actionsArray.Execute(customerInstance)

     

     

    Now, enough talk; let's jump to the code (all the code is attached to this post).

     

     

    I have a customer type that looks like this:

     

    class Customer
    {
        public string Name { get; set; }
        public Address Address { get; set; }
        public readonly string Id;
    }

    class Address
    {
        public string Street { get; set; }
        public string City { get; set; }
        public string ZipCode { get; set; }
    }

     

    A sample initialized customer will probably look like this:

     

    Customer newCustomer = new Customer
    {
        Name = "Khaled Hnidk",
        Address = new Address
        {
            Street = "No Where Land",
            City = "Cairo, Egypt",
            ZipCode = "00000"
        }
    };

     

     

    I have a series of methods whose execution I want to chain, like the following:

     

    public static void SaveCustomer(Customer c)
    {
        Console.WriteLine("SAVE CUSTOMER");
    }

    public static void ValidateCustomer(Customer c)
    {
        // throw an exception here if the customer is not valid
        Console.WriteLine("Validate Customer");
    }

    public static void RefreshCache()
    {
        Console.WriteLine("REFRESH Cache");
    }

     

     

     

     

    Here is how I am going to build my pipeline:

     

    Action<Customer>[] pipeline =
    {
        c => ValidateCustomer(c),
        c => SaveCustomer(c),
        c => RefreshCache()
    };

     

    Discussion: what I am doing is building an array of the Action delegate (available in the System namespace), which is nothing other than a function pointer, and using anonymous methods to execute my validation methods in sequence. This array can be built from a configuration table in your database or from a configuration file. The interesting thing about this approach is that it isolates the processing sequence from your code.

     

     

    Then I execute the pipeline like this:

     

    pipeline.ExecuteForEach(act => act(newCustomer));

     

     

     

    How did this happen? I am using a mechanism inside .NET that was built organically to support Linq. Arrays by definition implement the IEnumerable interface, which I have extended (using extension methods) as follows:

     

    public static void ExecuteForEach<T>(this IEnumerable<T> source, Action<T> toExecute)
    {
        foreach (var item in source)
            toExecute(item);
    }

     

     

     

     

    Next, we will discuss how to load these actions from configuration files.

     

    Find me on Twitter: http://twitter.com/khnidk

  • Windows PowerShell: Why You Should Care?

    If you're an IT pro whose daily tasks are the administration of a set of desktops, servers, printers, and other IT infrastructure components, whether software or hardware, or you're a software developer who writes hundreds of lines of code daily, then you are surely always trying to automate as many of those tasks as you can. When we talk about task automation from the IT professional's perspective, we mean automating tasks such as managing and troubleshooting Active Directory, DNS, Microsoft Exchange Server 2007, Windows Server, and other tools and products. From the software developer's perspective, we mean creating user interface (UI) test automation scenarios to test your own application more easily and quickly.

    So, how do you think we can do that? Yes, you're right: the most appropriate approach for this kind of automation is scripting. Let's recall what a script is: a set of computer instructions, written in a scripting language (e.g. VBScript, JavaScript, Perl, and of course PowerShell), that is executed sequentially to perform a specific task. For example, you can write a script to join a computer to a domain and create a computer account for it in Active Directory. The point is that scripting is a very powerful and important tool for any system administrator searching for a way to complete his tasks in less time, with less effort, and with the best performance.

    What is PowerShell (previously named Monad)?

    Windows PowerShell is a new command-line interface, dynamic scripting environment, programming language, and interactive shell, designed specifically for the Windows platform and Microsoft technologies in order to provide a common management automation platform. Windows PowerShell is also designed to drastically reduce time-to-results with a consistent vocabulary and syntax, and to reduce the semantic gap; as Jeffrey Snover, the PowerShell architect, said: "We wanted to close the semantic gap between what admins thought and what they had to type."

    Windows PowerShell is a powerful automation technology designed not only for IT professionals but also for developers. It is available by default in Windows Server 2008 and Windows 7, and in previous versions of Windows (XP, Vista, and Server 2003) through optional Windows updates. Unlike traditional text-based shells, which treat every output as a string that must be parsed before it can be used as input to another command, PowerShell deals with everything as an object (a .NET object), since it is built on the .NET Framework and the .NET Common Language Runtime (CLR). Windows PowerShell has been part of the Microsoft Common Engineering Criteria (CEC) since 2009, which means that any new release of a Microsoft product will support Windows PowerShell and have it built in by default.
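    To see why an object pipeline matters, compare the two styles outside PowerShell. In this hypothetical Python sketch (the process listing and field names are made up for illustration), the text-based approach has to parse columns out of strings before filtering, while the object-based approach filters on named properties directly:

    ```python
    # Hypothetical process listing, as a text-based shell would emit it.
    text_output = "PID NAME  CPU\n12  init  0.1\n340 sshd  2.5"

    # Text-based pipeline: downstream commands must parse columns by hand.
    rows = [line.split() for line in text_output.splitlines()[1:]]
    busy_text = [r[1] for r in rows if float(r[2]) > 1.0]

    # Object-based pipeline: downstream commands receive structured
    # records and filter on named properties, with no string parsing.
    processes = [{"pid": 12, "name": "init", "cpu": 0.1},
                 {"pid": 340, "name": "sshd", "cpu": 2.5}]
    busy_objects = [p["name"] for p in processes if p["cpu"] > 1.0]

    print(busy_text, busy_objects)  # ['sshd'] ['sshd']
    ```

    Both reach the same answer, but the second form never breaks when the output layout changes, which is exactly what PowerShell's object pipeline buys you.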

    Why a new scripting language? [Limitations of existing scripting languages]

    • They do not provide as many components as full programming languages.
    • They do not supply graphical interface elements (forms, text boxes, combo boxes, buttons, etc.).
    • They do not provide advanced programming features like object orientation, data binding, or threading.
    • They require a lot of third-party tools to deal with WMI, COM+, and ADSI components.

    Why Windows PowerShell?

    The power of any shell is measured by its support for pipelining and for conditional and recursive programming (conditionals like the if...else statement, recursion like the for loop). Compared to the UNIX shell, the Windows shell was very limited and weak: just a command-line console that performed some basic tasks, which didn't fit administrators' needs or the complexity of their systems.

    In the past, before Windows PowerShell and going back to MS-DOS (the first Microsoft shell), the number of technical users was limited compared to today, and the growth of technology and of users' needs was not as complex as it is now. For this reason, Microsoft focused on non-technical users (the majority of computer users) and worked on developing and enhancing the graphical user interface (GUI). The GUI was introduced in Windows 3.1x with the MS-DOS console included, providing a DOS shell in addition to the GUI. Over time, users' needs increased and the complexity of IT problems grew day by day. Since the basic shell and the GUI didn't fit administrators' needs, administrators started looking for alternative ways to solve their problems: scripts and scripting languages.

    Finally, Microsoft decided that the GUI and the limited "CMD.exe" shell were not enough and that their platform needed a powerful shell designed specifically for Windows environments. Accordingly, they introduced Windows PowerShell.

     

    Why should you (developer) care?

    • PowerShell is based on the .NET Framework and the .NET Common Language Runtime (CLR).
    • PowerShell is an object-based shell.
    • PowerShell can deal easily with WMI, COM+, XML, and ADSI.
    • PowerShell is part of the Microsoft Common Engineering Criteria (CEC).
    • You can run .NET code inside PowerShell.
    • You can run PowerShell inside managed code.
    • You can extend PowerShell and write your own cmdlets, modules, and providers via the PowerShell APIs.

    What makes PowerShell a different shell?  (Key features)

    • PowerShell Remoting: the Remoting feature allows the execution of PowerShell cmdlets on remote systems, which helps you manage a set of remote computers from one single machine. Remote execution relies on the Windows Remote Management (WinRM) technology.
    • Background Jobs: PowerShell introduced the concept of background jobs, which run cmdlets and scripts asynchronously in the background without affecting the interface or interacting with the console. Background jobs can run on local and remote machines.
    • Steppable pipeline: allows splitting script blocks into a steppable pipeline in order to ease control of the execution sequence.
    • Script debugging: as in Visual Studio, you can now set breakpoints on lines, columns, functions, variables, and commands. You can also specify actions to run when a breakpoint is hit. Stepping into, over, or out of functions is also supported, and you can get the call stack.
    • Error handling: PowerShell provides an error-handling mechanism through Try { }, Catch { }, and Finally { } statements, as in .NET languages.
    • Constrained runspaces: allow the creation of PowerShell runspaces with a set of constraints, including the ability to restrict access to, and execution of, commands, scripts, and language elements.
    • Tab expansion: an implementation of autocompletion that completes cmdlets, properties, and parameters when the Tab key is pressed. It is similar to the IntelliSense feature in Visual Studio.
    • Integrated Scripting Environment (ISE): a graphical version of the PowerShell console that provides a set of features:
      • Syntax color highlighting and Unicode support
      • Multiple runspaces open at once, up to eight at the same time

    Windows PowerShell is now part of:

    • Windows 7
    • Windows Server 2008 including:
      • Active Directory (AD)
      • Active Directory Rights Management Services (AD RMS)
      • AppLocker
      • Best Practice Analyzer (BPA)
      • Background Intelligent Transfer Service (BITS)
      • Failover Cluster
      • Group Policy
      • Server Manager
      • Windows Server Backup
      • Windows Server Migration Tools
      • Internet Information Services (IIS)
      • DNS
      • Remote Registry
      • Terminal Services
      • Hyper-V
    • Windows Embedded
    • Exchange Server 2007, 2010
    • Lync
    • SQL Server 2008
    • SharePoint Server 2010
    • Microsoft FxCop
    • Team Foundation Server 2010
    • System Center suite
    • Forefront
    • VMware (PowerCLI)
    • Citrix Workflow Studio

    Conclusion

    Of course, Windows PowerShell has a lot of new features that make it a different shell. In this short article I just wanted to focus on the story of Windows PowerShell: why it was created, and its most important features. Now it's your turn to visit the PowerShell official website, the team blog, and the PowerShell Survival Guide to download, install, learn more, practice, and enjoy.

  • Fun with Pipelining, Chaining & Linq – P2 (Actions from Text)

    In a previous post I created a pipeline and executed it using extension methods and the Action<T> delegate. From a quick read of the code you will realize that you are bound in two ways:

    1) You are bound at design time: all pipelines have to be structured during development.

    2) You cannot dynamically load actions from text (configuration files, databases, etc.).

     

     

    I modified the code as follows:

     

    I created a class for the validation methods (just to give my code a bit of structure):

     

    class CustomerValidationMethods
    {
        public static void SaveCustomer(Customer c)
        {
            Console.WriteLine("SAVE CUSTOMER");
        }

        public static void ValidateCustomer(Customer c)
        {
            // throw an exception here if the customer is not valid
            Console.WriteLine("Validate Customer");
        }

        public static void RefreshCustomerCache(Customer c)
        {
            Console.WriteLine("REFRESH Cache");
        }
    }

     

     

    Now, in addition to the classical way of creating the pipeline, as discussed in part 1:

     

    Action<Customer>[] normalPipeline =
    {
        c => CustomerValidationMethods.ValidateCustomer(c),
        c => CustomerValidationMethods.SaveCustomer(c),
        c => CustomerValidationMethods.RefreshCustomerCache(c)
    };

     

     

     

    You can also create the pipeline using strings, as follows:

        Action<Customer>[] strBasedPipeline =
        {
            "ValidateCustomer".ToAction<Customer>(typeof(CustomerValidationMethods)),
            "SaveCustomer".ToAction<Customer>(typeof(CustomerValidationMethods)),
            "RefreshCustomerCache".ToAction<Customer>(typeof(CustomerValidationMethods))
        };

    These strings can be loaded from any external configurable source: you could have two tables in a database, one defining pipelines and the other defining their stages. You can also load assemblies on the fly and call validation methods on them.
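    As a minimal, self-contained sketch of that idea (the stage names, Stages class, and Customer class below are illustrative and use modern C# syntax; they are not from the attached code), a pipeline can also be assembled from strings using plain reflection via Delegate.CreateDelegate, a simpler alternative to the expression-tree approach used in this article:

```csharp
using System;
using System.Linq;

// Stage names as they might arrive from a configuration file or database table.
string[] stageNames = { "Validate", "Save" };

// Resolve each name to an Action<Customer> with reflection.
Action<Customer>[] pipeline = stageNames
    .Select(n => (Action<Customer>)Delegate.CreateDelegate(
        typeof(Action<Customer>), typeof(Stages).GetMethod(n)))
    .ToArray();

var customer = new Customer();
foreach (var stage in pipeline) stage(customer);

class Customer { public string Name = "contoso"; }

static class Stages
{
    public static void Validate(Customer c) => Console.WriteLine("Validate " + c.Name);
    public static void Save(Customer c) => Console.WriteLine("Save " + c.Name);
}
```

    The reflection route is easier to read, though the expression-tree route shown later compiles to a delegate in the same way.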

     

    How does it work?

    When the .NET team created Linq, lambda expressions alone were not enough: a means had to be created by which a downstream query evaluator can walk through the query and translate it into native data-source calls. Think of how your Linq queries are translated into efficient WHERE clauses in SQL when you use Linq to SQL or Linq to Entities. This mechanism is called Linq Expressions & Expression Trees, all encapsulated in System.Linq.Expressions. I am piggybacking on that mechanism to use it for something else.

    Check out http://msdn.microsoft.com/en-us/library/bb397951.aspx and http://blogs.msdn.com/b/charlie/archive/2008/01/31/expression-tree-basics.aspx
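    As a tiny standalone illustration of the expression API (this example is mine, not from the linked posts), the following builds the lambda x => x * 2 by hand and compiles it into a delegate:

```csharp
using System;
using System.Linq.Expressions;

// Build the expression tree for x => x * 2, then compile and invoke it.
ParameterExpression x = Expression.Parameter(typeof(int), "x");
BinaryExpression body = Expression.Multiply(x, Expression.Constant(2));
Func<int, int> doubler = Expression.Lambda<Func<int, int>>(body, x).Compile();

Console.WriteLine(doubler(21)); // prints 42
```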

     

     

     

    To encapsulate the code, I created another extension method on type String which creates an expression tree representing our action:

    public static Action<tActionTarget> ToAction<tActionTarget>(this string str, Type ActionContainer)
    {
        // We know we will receive our actions as one method call that takes one parameter (the customer).
        // We create this parameter in our expression.
        ParameterExpression pExp = Expression.Parameter(typeof(tActionTarget));

        // Our expression is one method call: a static method on a specific type (ActionContainer).
        // The string we are extending is the method name.
        MethodCallExpression mcExp = Expression.Call(ActionContainer, str, null, pExp);

        // Generate our lambda.
        Expression<Action<tActionTarget>> exp = Expression.Lambda<Action<tActionTarget>>(mcExp, pExp);

        return exp.Compile();
    }

     

     

     

    Both pipelines can be executed using the extension method I created previously:

        normalPipeline.ExecuteForEach(act => act(newCustomer));
        strBasedPipeline.ExecuteForEach(act => act(newCustomer));

     

     

    In this discussion and the previous one I skipped other parts a typical pipeline would have, such as a result object passed from call to call or a property bag that provides some sort of context; that is to keep the focus on the core idea. The above implementation can be generalized to include result objects, property bags & log writers as needed.
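    As a sketch of that generalization (the PipelineContext type and stage bodies here are hypothetical, not part of the attached code), each stage can take a context object carrying the subject plus a shared property bag:

```csharp
using System;
using System.Collections.Generic;

// Each stage receives the same context and can read or write the shared bag.
var pipeline = new Action<PipelineContext>[]
{
    ctx => ctx.Bag["validated"] = true,                             // validation stage
    ctx => Console.WriteLine("Saving " + ctx.Subject +
                             ", validated=" + ctx.Bag["validated"]) // save stage
};

var context = new PipelineContext { Subject = "customer #1" };
foreach (var stage in pipeline) stage(context);

// Hypothetical context: the subject being processed plus a property bag.
class PipelineContext
{
    public string Subject;
    public Dictionary<string, object> Bag = new Dictionary<string, object>();
}
```

    A result object or log writer would slot into the same context type rather than changing every stage signature.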

     

    Code files attached.

    Next: enter the Rules World!

     

     

     

     Find me on Twitter http://twitter.com/khnidk

  • Windows 7 Features - BranchCache (Series 2)

    Driven by challenges of reducing the costs and complexity of Branch IT, organizations are seeking to centralize applications. However, as organizations centralize applications, the dependency on the availability and quality of the wide-area network (WAN) link increases. The increased utilization of the WAN link is a direct result of centralization, as is the degradation of application performance. Recent studies have shown that despite the reduction of costs associated with WAN links, WAN costs are still a major component of enterprises’ operational expenses.

    BranchCache in the Windows 7 and Windows Server 2008 R2 operating systems can help increase network responsiveness of centralized applications when they are accessed from remote offices, giving users in those offices the experience of working on the local area network. BranchCache also helps reduce WAN utilization.

    When BranchCache is enabled, a copy of data accessed from intranet Web and file servers is cached locally within the branch office. When another client on the same network requests the file, the client downloads it from the local cache without downloading the same content across the WAN.

    Watch a video about BranchCache

    BranchCache can operate in one of two modes:

    • Distributed Cache. Using a peer-to-peer architecture, Windows 7 client computers cache copies of files and send them directly to other Windows 7 client computers, as needed. Improving performance is as easy as enabling BranchCache on your Windows 7 client and Windows Server 2008 R2-based computers. Distributed Cache is especially beneficial for branch offices that do not have a local server.

    • Hosted Cache. Using a client/server architecture, Windows 7 client computers cache content to a computer on the local network running Windows Server 2008 R2, known as the Hosted Cache. Other clients who need the same content retrieve it directly from the Hosted Cache. The Hosted Cache computer can run the Server Core installation option of Windows Server 2008 R2 and can also host other applications.

    The following diagram illustrates these two modes:

    Hosted cache and distributed cache

    BranchCache can improve the performance of applications that use one of the following protocols:

    • HTTP and HTTPS. The protocols used by Web browsers such as Internet Explorer and by many other applications, such as Windows Media

    • SMB (including signed SMB traffic). The protocol used for shared folders

    BranchCache only retrieves data from a server when the client requests it. Because it is a passive cache, it will not increase WAN utilization. BranchCache only caches read requests and thus will not interfere with a user saving a file.

    BranchCache improves the responsiveness of common network applications that access intranet servers across slow links. Because it does not require any infrastructure, you can improve the performance of remote networks simply by deploying Windows 7 to client computers, deploying Windows Server 2008 R2 to server computers, and enabling BranchCache.
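    For illustration, on a Windows 7 client BranchCache can be inspected and switched between these modes from an elevated command prompt with netsh (Group Policy is the usual route for large deployments); the Hosted Cache server name below is a placeholder:

```
netsh branchcache show status
netsh branchcache set service mode=distributed
netsh branchcache set service mode=hostedclient location=BRANCHSRV01
```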

    BranchCache works seamlessly alongside network security technologies, including SSL, SMB Signing, and end-to-end IPsec. You can use BranchCache to reduce network bandwidth utilization and to improve application performance, even if the content is encrypted.

  • Microsoft leading the Magic Quadrant for Social Software in the Workplace through SharePoint

    Microsoft moved forward considerably on the 2010 Social Software for the Workplace Magic Quadrant.  Microsoft maintained its #1 position on the Ability to Execute axis and moved forward rather substantially on the Completeness of Vision axis, overtaking IBM and several other competitors there and coming very close to Jive.

    Also it’s worth mentioning that there are two competitors who are NOT present in the 2010 Social Software for the Workplace MQ: Google and Cisco.

    Google was dropped from this Magic Quadrant for 2010 due to the demise of Wave and Gartner’s lack of confidence in Google’s commitment to the enterprise. Cisco, on the other hand, failed to meet the bar for market presence despite a year of heavy marketing of Quad to analysts and key customers.

    Click here to download the full report.