• Desktop Virtualization

    Microsoft Desktop Virtualization solutions help companies to reduce their total cost of ownership, increase business agility and continuity, enable anywhere access, and improve security and compliance. For companies new to desktop virtualization, deploying Microsoft Application Virtualization as a first step can provide immediate cost savings.

    Desktop Virtualization offers a broad portfolio of solutions that empower companies to choose the technologies that best address their unique business and IT challenges while preserving their existing IT investments. Microsoft delivers desktop virtualization offerings for a wide range of situations—from always-connected workers to those requiring more flexibility, such as mobile workers. For connected workers, Microsoft and its partners, including Citrix, deliver a Virtual Desktop Infrastructure (VDI) solution that allows organizations to centrally manage desktops in the datacenter while providing a personalized desktop experience for end users.

    Virtual Desktop Infrastructure

    Virtual Desktop Infrastructure (VDI) is an alternative desktop delivery model that allows users to access desktops running in the datacenter.

    Microsoft offers comprehensive and cost effective technology that can help customers deploy virtual desktops in the datacenter. The Microsoft VDI Suites allow customers to manage their physical and virtual desktops from a single console, while providing great flexibility for deploying server-hosted desktops and applications.

    The Benefits of VDI include:

    • Integrated Management
    • Enhanced security and compliance
    • Anywhere access from connected devices
    • Increased business continuity


    Session Virtualization

    Session Virtualization with Remote Desktop Services delivers session-based desktops or applications and is suitable for low-complexity or task-worker scenarios. It allows for high user density with a limited degree of personalization or isolation.

    Microsoft Enterprise Desktop Virtualization

    Microsoft Enterprise Desktop Virtualization (MED-V) removes the barriers to Windows upgrades by resolving application incompatibility with Windows Vista or Windows 7. MED-V delivers applications in a virtual PC that runs a previous version of the operating system (for example: Windows XP). It does so in a way that is completely seamless and transparent to the user. Applications appear and operate as if they were installed on the desktop, so that users can even pin them to the task bar. For IT administrators, MED-V helps deploy, provision, control, and support the virtual environments.

    Microsoft Application Virtualization

    In a physical environment, every application depends on its OS for a range of services, including memory allocation, device drivers, and much more. Incompatibilities between an application and its operating system can be addressed by either server virtualization or presentation virtualization; but for incompatibilities between two applications installed on the same instance of an OS, you need application virtualization.

    Microsoft Application Virtualization 4.6 is now available! App-V 4.6 with Windows 7, Windows Server 2008 R2, and Office 2010 delivers a seamless user experience, streamlined application deployment, and simplified application management.

    • Broader Windows platform coverage. App-V now supports 32-bit and 64-bit applications on 32-bit and 64-bit operating systems.
    • Office 2010 virtualized with App-V 4.6 delivers key productivity enhancements and a seamless user experience with SharePoint, Outlook, and more.
    • Optimized server disk storage. Using App-V in VDI with a shared cache reduces storage requirements on SANs.
    • Increased IT control, user productivity, and security with key Windows 7 features. App-V integrates seamlessly with AppLocker, BranchCache, and BitLocker To Go.

    Application Virtualization allows you to isolate a specific application from the OS and other applications, eliminating conflicts between applications and removing the need to install applications on PCs.

    Benefits

    • Streams applications on-demand over the internet or via the corporate network to desktops, Terminal servers and laptops
    • Automates and simplifies the application management lifecycle
    • Accelerates OS and application deployments
    • Reduces the end user impacts associated with application upgrades/patching and terminations
    • Enables controlled application use when users are completely disconnected

     

    User State Virtualization

    User state virtualization isolates user data and settings from PCs and enables IT to store them centrally in the datacenter while also making them accessible on any PC, using Windows Roaming User Profiles, Windows Folder Redirection, and Offline Files. Using these technologies, individually or in combination, enables easily replaceable PCs for business continuity, centralized backup and storage of user data and settings, end-user access to data from any PC, and simplified PC provisioning and deployment. The rich-client advantage: user data can be cached for offline access and then automatically synchronized with datacenter servers upon reconnection to the network.

     

     

  • The Diagonal Warehouse Design

    Have you ever been in a situation, when designing your warehouse, where some measures could not be grouped together due to dimensionality differences? A few measures that you can't put into one fact table, and of course, like me, you hate the idea of a separate fact table for each measure. I've been there, and I found a simple, effective design that lets you combine all the unrelated measures into one fact table, and "let the Analysis Services art handle the rest."

    The diagonal warehouse is based on a very well-known fact:

    NULL aggregation is 0

    Yes, nothing new. So why not use it to fill in the gaps between the measures? It sounds weird, so let's look at an example:

     

    Measure 1  uses Time and Geography

    Measure 2 uses Time, Geography and Dimension 2

    Measure 3 uses Time and Dimension 3

     

    How can we combine this non-homogeneous combination into one fact table?

    Simply put the dimension data into diagonal form and fill the gaps with NULLs, keeping in mind, of course, to hide the unknown member in the dimension properties. We'll get to that later; for now, let's look at the fact table.

     

    TimeID | GeographyID | Dimension2 | Dimension3 | Measure1 | Measure2 | Measure3
    -------|-------------|------------|------------|----------|----------|---------
    Time1  | Geo1        | NULL       | NULL       | Value11  | NULL     | NULL
    Time2  | Geo2        | NULL       | NULL       | Value12  | NULL     | NULL
    Time3  | Geo3        | Dim1       | NULL       | NULL     | Value21  | NULL
    Time4  | Geo4        | Dim2       | NULL       | NULL     | Value22  | NULL
    Time5  | Geo5        | Dim3       | NULL       | NULL     | Value23  | NULL
    Time6  | NULL        | NULL       | Dim1       | NULL     | NULL     | Value31
    Time7  | NULL        | NULL       | Dim2       | NULL     | NULL     | Value32

     

    At first glance, the table above looks like strange input for a fact table. But think about it and you'll find that each dimension drills down correctly on its associated measure, ignoring the unrelated measures thanks to the NULLs filled in.
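    To see why the NULL padding is harmless, here is a small C# sketch (a hypothetical in-memory stand-in for the fact table, not SSAS itself): summing a nullable measure skips the NULL rows, so each measure rolls up exactly as it would in its own fact table.

    ```csharp
    using System;
    using System.Linq;

    class DiagonalFactDemo
    {
        static void Main()
        {
            // Hypothetical in-memory stand-in for the combined fact table;
            // NULLs pad the dimensions and measures a row does not use.
            var fact = new (string Time, string Geo, string Dim2, int? Measure1, int? Measure2)[]
            {
                ("Time1", "Geo1", null,   10,   null),
                ("Time2", "Geo2", null,   20,   null),
                ("Time3", "Geo3", "Dim1", null, 5),
                ("Time4", "Geo4", "Dim2", null, 7),
            };

            // Summing a nullable column ignores the NULL padding rows,
            // so each measure aggregates as if it had its own fact table.
            int total1 = fact.Sum(r => r.Measure1) ?? 0;  // 10 + 20 = 30
            int total2 = fact.Sum(r => r.Measure2) ?? 0;  // 5 + 7 = 12

            Console.WriteLine($"Measure1 total: {total1}, Measure2 total: {total2}");
        }
    }
    ```

    The same reasoning holds for any additive measure: the padding rows contribute nothing to the aggregate, so drilldowns on each dimension stay correct.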

    When you build the cube in Analysis Services, go to each and every dimension related to the above fact table. Don't forget any of them: mark the unknown member as hidden via the dimension's UnknownMember property, as shown in the screenshot below.

     

     

    So think of it as a diagonal, and start adding as many measures as you can to combine. This will save you a lot of time and design headaches.

     

  • Microsoft leading the Magic Quadrant for Social Software in the Workplace through SharePoint

    Microsoft moved forward considerably on the 2010 Social Software in the Workplace Magic Quadrant. Microsoft maintained its #1 position on the Ability to Execute axis and moved forward rather substantially on the Completeness of Vision axis, overtaking IBM and several other competitors and coming very close to Jive.

    It is also worth mentioning that two competitors are NOT present in the 2010 Social Software in the Workplace MQ: Google and Cisco.

    Google was dropped from the 2010 Magic Quadrant due to the demise of Wave and Gartner’s lack of confidence in Google’s commitment to the enterprise. Cisco, on the other hand, despite a year of heavily marketing Quad to analysts and elite customers, failed to meet the bar for market presence.

    Click here to download the full report.

  • Windows 7 Features - BranchCache (Series 2)

    Driven by challenges of reducing the costs and complexity of Branch IT, organizations are seeking to centralize applications. However, as organizations centralize applications, the dependency on the availability and quality of the wide-area network (WAN) link increases. The increased utilization of the WAN link is a direct result of centralization, as is the degradation of application performance. Recent studies have shown that despite the reduction of costs associated with WAN links, WAN costs are still a major component of enterprises’ operational expenses.

    BranchCache in the Windows 7 and Windows Server 2008 R2 operating systems can help increase the network responsiveness of centralized applications when they are accessed from remote offices, giving users in those offices the experience of working on the local area network. BranchCache also helps reduce WAN utilization.

    When BranchCache is enabled, a copy of data accessed from intranet Web and file servers is cached locally within the branch office. When another client on the same network requests the file, the client downloads it from the local cache without downloading the same content across the WAN.

    Watch a video about BranchCache

    BranchCache can operate in one of two modes:

    • Distributed Cache. Using a peer-to-peer architecture, Windows 7 client computers cache copies of files and send them directly to other Windows 7 client computers, as needed. Improving performance is as easy as enabling BranchCache on your Windows 7 client and Windows Server 2008 R2-based computers. Distributed Cache is especially beneficial for branch offices that do not have a local server.

    • Hosted Cache. Using a client/server architecture, Windows 7 client computers cache content to a computer on the local network running Windows Server 2008 R2, known as the Hosted Cache. Other clients who need the same content retrieve it directly from the Hosted Cache. The Hosted Cache computer can run the Server Core installation option of Windows Server 2008 R2 and can also host other applications.

    The following diagram illustrates these two modes:

    Hosted cache and distributed cache

    BranchCache can improve the performance of applications that use one of the following protocols:

    • HTTP and HTTPS. The protocols used by Web browsers and many other applications, such as Internet Explorer or Windows Media, among others

    • SMB (including signed SMB traffic). The protocol used for shared folders

    BranchCache only retrieves data from a server when the client requests it. Because it is a passive cache, it will not increase WAN utilization. BranchCache only caches read requests and thus will not interfere with a user saving a file.

    BranchCache improves the responsiveness of common network applications that access intranet servers across slow links. Because it does not require any infrastructure, you can improve the performance of remote networks simply by deploying Windows 7 to client computers, deploying Windows Server 2008 R2 to server computers, and enabling BranchCache.

    BranchCache works seamlessly alongside network security technologies, including SSL, SMB Signing, and end-to-end IPsec. You can use BranchCache to reduce network bandwidth utilization and to improve application performance, even if the content is encrypted.

  • Fun with Pipelining, Chaining & Linq – P2 (Actions from Text)

    In a previous post, I created a pipeline and executed it using extension methods and Action<T> delegates. From quickly reading the code, you will realize that you are bound in only two ways:

    1)      You are bound at design time. All the pipelines have to be structured during development.

    2)     You cannot dynamically load actions from text (configuration files, databases, etc.).

     

     

    I modified the code as follows.

     

    First, I created a class for the validation methods (just to give my code a bit of structure):

     

    class CustomerValidationMethods
    {
        public static void SaveCustomer(Customer c)
        {
            Console.WriteLine("SAVE CUSTOMER");
        }

        public static void ValidateCustomer(Customer c)
        {
            // throw an exception here if the customer is not valid
            Console.WriteLine("Validate Customer");
        }

        public static void RefreshCustomerCache(Customer c)
        {
            Console.WriteLine("REFRESH Cache");
        }
    }

     

     

    Now, in addition to the classical way of creating the pipeline, as discussed in part 1:

     

    Action<Customer>[] normalPipeline =
    {
        c => CustomerValidationMethods.ValidateCustomer(c),
        c => CustomerValidationMethods.SaveCustomer(c),
        c => CustomerValidationMethods.RefreshCustomerCache(c)
    };

     

     

     

    you can also create the pipeline using strings, as follows:

     

    Action<Customer>[] strBasedPipeline =
    {
        "ValidateCustomer".ToAction<Customer>(typeof(CustomerValidationMethods)),
        "SaveCustomer".ToAction<Customer>(typeof(CustomerValidationMethods)),
        "RefreshCustomerCache".ToAction<Customer>(typeof(CustomerValidationMethods))
    };

     

    These strings can be loaded from any external, configurable source. For example, you could have two tables in a database: one defining pipelines, the other defining their stages. You can also load assemblies on the fly and call validation methods on them.
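    As a concrete (and hypothetical) illustration of that idea, here is a self-contained sketch that builds the pipeline from a plain configuration string; the ToAction<T> extension is a compact version of the one described in this post, and the stage-name string stands in for a config file or database row:

    ```csharp
    using System;
    using System.Linq;
    using System.Linq.Expressions;

    class Customer { }

    static class CustomerValidationMethods
    {
        public static void ValidateCustomer(Customer c) => Console.WriteLine("Validate Customer");
        public static void SaveCustomer(Customer c) => Console.WriteLine("SAVE CUSTOMER");
        public static void RefreshCustomerCache(Customer c) => Console.WriteLine("REFRESH Cache");
    }

    static class StringActionExtensions
    {
        // Compile a static-method name into an Action<T> via an expression tree.
        public static Action<T> ToAction<T>(this string methodName, Type container)
        {
            ParameterExpression p = Expression.Parameter(typeof(T));
            MethodCallExpression call = Expression.Call(container, methodName, null, p);
            return Expression.Lambda<Action<T>>(call, p).Compile();
        }
    }

    class ConfiguredPipelineDemo
    {
        static void Main()
        {
            // Stage names as they might come from a config file or database row.
            string configuredStages = "ValidateCustomer;SaveCustomer;RefreshCustomerCache";

            Action<Customer>[] pipeline = configuredStages
                .Split(';')
                .Select(name => name.ToAction<Customer>(typeof(CustomerValidationMethods)))
                .ToArray();

            var customer = new Customer();
            foreach (var stage in pipeline)
                stage(customer);
        }
    }
    ```

    Swapping the string for a database query or a config-file read changes nothing downstream: the pipeline is still just an array of compiled delegates.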

     

    How does it work?

    When the .NET team created LINQ, lambda expressions alone were not enough; the team also needed a means by which a downstream query evaluator could walk through a query and translate it into native data-source calls. Think of how your LINQ queries are translated into highly efficient WHERE clauses on SQL when you use LINQ to SQL or the Entity Framework. This mechanism is called LINQ expressions and expression trees, all encapsulated in System.Linq.Expressions. I am piggybacking on that mechanism for something else.

     

    Check out http://msdn.microsoft.com/en-us/library/bb397951.aspx and http://blogs.msdn.com/b/charlie/archive/2008/01/31/expression-tree-basics.aspx

     

     

     

    To encapsulate the code, I created another extension method on String that builds an expression tree representing our action:

     

     

    public static Action<tActionTarget> ToAction<tActionTarget>(this string str, Type ActionContainer)
    {
        // We know we will receive our action as a single method call that takes
        // one parameter (the customer); we create that parameter in our expression.
        ParameterExpression pExp = Expression.Parameter(typeof(tActionTarget));

        // Our expression is one method call: a static method on a specific type
        // (the action container). The string we are extending is the method name.
        MethodCallExpression mcExp = Expression.Call(ActionContainer, str, null, pExp);

        // Generate our lambda.
        Expression<Action<tActionTarget>> exp = Expression.Lambda<Action<tActionTarget>>(mcExp, pExp);

        return exp.Compile();
    }

     

     

     

    Both pipelines can be executed using the extension method I created previously:

     

                normalPipeline.ExecuteForEach(act => act(newCustomer));

                strBasedPipeline.ExecuteForEach(act => act(newCustomer));
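    The ExecuteForEach extension itself lives in part 1 and is not reproduced here; a minimal stand-in consistent with the calls above might look like this (an assumption on my part, not the original implementation):

    ```csharp
    using System;
    using System.Collections.Generic;

    static class PipelineExecutionExtensions
    {
        // Hypothetical sketch of ExecuteForEach: run the supplied invoker
        // over every stage in the pipeline, in order.
        public static void ExecuteForEach<T>(this IEnumerable<T> pipeline, Action<T> invoke)
        {
            foreach (var stage in pipeline)
                invoke(stage);
        }
    }
    ```

    Because the element type T here is Action<Customer>, the lambda passed in (act => act(newCustomer)) is what actually invokes each stage.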

     

     

    In this discussion and the previous one, I skipped other parts a typical pipeline would usually have, such as a result object passed from call to call or a property bag providing some sort of context; this was to keep the focus on the core idea. The implementation above can be generalized to include result objects, property bags, and log writers as needed.
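    One way that generalization could look (a hypothetical sketch, with PipelineContext and its members invented for illustration): stages share a context object carrying a property bag and a log, instead of a bare Customer.

    ```csharp
    using System;
    using System.Collections.Generic;

    // Hypothetical context object shared by all stages: a property bag
    // for intermediate results plus a simple log.
    class PipelineContext
    {
        public Dictionary<string, object> Items { get; } = new Dictionary<string, object>();
        public List<string> Log { get; } = new List<string>();
    }

    class ContextPipelineDemo
    {
        static void Main()
        {
            // Stages now read and write shared state through the context.
            Action<PipelineContext>[] pipeline =
            {
                ctx => { ctx.Log.Add("validated"); ctx.Items["IsValid"] = true; },
                ctx => { ctx.Log.Add("saved"); },
                ctx => { ctx.Log.Add("cache refreshed"); }
            };

            var context = new PipelineContext();
            foreach (var stage in pipeline)
                stage(context);

            Console.WriteLine(string.Join(" -> ", context.Log));
        }
    }
    ```

    The string-to-action trick above still applies unchanged; only the delegate's parameter type moves from Customer to the context class.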

     

    Code files attached.

    Next enter the Rules World!

     

     

     

     Find me on Twitter http://twitter.com/khnidk