GD Bloggers

This is the blog site for the Microsoft Global Delivery Communities, focused on sharing technical knowledge about devices, apps, and the cloud.

May, 2012

  • How to add Desktop Experience feature on Windows Server 2012 “Server 8 Beta”

    Because the Desktop Experience feature has been relocated under User Interfaces and Infrastructure, many people might believe it is no longer available. It is still there, and the screenshot below shows the exact location of the Desktop Experience feature.

    image

  • Windows Server 2012 Direct Access – Part 1 What’s New

    Direct Access feature was introduced with Windows Server 2008 R2 and Windows 7 Client computers. Direct Access overcomes the limitations of VPNs by automatically establishing a bi-directional connection from client computers to the corporate network so users never have to think about connecting to the enterprise network and IT administrators can manage remote computers outside the office, even when the computers are not connected to the VPN.

    In this blog post series I’ll cover the Direct Access feature in Windows Server 2012 and design a complex Windows Server 2012 lab to implement it. In my next post I will outline the lab requirements and design considerations. In the third post we will install the Direct Access feature, and later posts will cover configuration details and troubleshooting steps.

    In this first post, let’s look at the Direct Access feature in Windows 7 / 2008 R2 and compare it with Windows Server 2012.

    The Direct Access feature in Windows Server 2008 R2 had the following goals for organizations:

    • Direct Access enhances the productivity of mobile workers by connecting their computers automatically and seamlessly to their intranet any time Internet access is available
    • With Direct Access, IT staff can manage mobile computers by updating Group Policy settings and distributing software updates any time the mobile computer has Internet connectivity
    • Direct Access separates intranet from Internet traffic.
    • When an application on a Direct Access client attempts to resolve a name, it first compares the name with the rules in the Name Resolution Policy Table (NRPT). If there is a match, the name is resolved using the intranet DNS servers specified in the rule; if there are no matches, the Direct Access client uses Internet DNS servers to resolve the name
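    On Windows 8 / Windows Server 2012 clients, the NRPT can be inspected and populated from PowerShell with the DnsClientNrptRule cmdlets. A minimal sketch — the namespace and DNS server address below are illustrative examples, not values from this article:

```powershell
# List the NRPT rules currently applied to this client
Get-DnsClientNrptPolicy

# Add a rule: names under .corp.contoso.com (hypothetical suffix) are sent
# to the intranet DNS server instead of Internet DNS servers
Add-DnsClientNrptRule -Namespace ".corp.contoso.com" `
                      -NameServers "2001:db8::1" `
                      -Comment "DirectAccess intranet namespace"
```

    In a real deployment these rules are pushed to clients through the DirectAccess Group Policy objects rather than created by hand.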

    From a connectivity perspective:

    Direct Access clients create two tunnels to the Direct Access server. The first tunnel, the infrastructure tunnel, provides access to intranet Domain Name System (DNS) servers, Active Directory Domain Services (AD DS) domain controllers, and other infrastructure and management servers. The second tunnel, the intranet tunnel, provides access to intranet resources such as Web sites, file shares, and other application servers.

    The Direct Access feature relies on an IPv6 network infrastructure. For those who do not have a native IPv6 network infrastructure, ISATAP can be used to make intranet servers and applications reachable by tunneling IPv6 traffic over your IPv4-only intranet. Computers running Windows 7 or Windows Server 2008 R2 support ISATAP host functionality.

    To configure ISATAP you add an ISATAP host (A) record to your DNS; all machines can then resolve this name and configure their ISATAP adapters.

    By default, DNS servers running Windows Server 2003 SP2 and later do not answer queries for the names WPAD and ISATAP, so you will need to remove ISATAP from the global query block list on these servers.
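    The block list can be adjusted with dnscmd. A minimal sketch, to be run on each DNS server — the zone name and IPv4 address are examples:

```powershell
# Show the current global query block list (typically: wpad, isatap)
dnscmd /info /globalqueryblocklist

# Re-create the block list with WPAD only, so ISATAP queries are answered
dnscmd /config /globalqueryblocklist wpad

# Register the ISATAP router's A record (example zone and address)
dnscmd /recordadd corp.contoso.com isatap A 10.0.0.1
```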

    Here is a simple diagram that shows how the Direct Access feature works on Windows Server 2008 R2:

    image

    Notice that the Direct Access client establishes two IPsec tunnels:

    • An IPsec Encapsulating Security Payload (ESP) tunnel authenticated and encrypted using the machine certificate. This tunnel provides access to the DNS server and domain controller, allowing the computer to download Group Policy objects and to request authentication on the user’s behalf.

    • An IPsec ESP tunnel authenticated using both the machine certificate and user credentials. This tunnel authenticates the user and provides access to internal resources and application servers. For example, this tunnel would need to be established before Microsoft Outlook could download e-mail from the internal Microsoft Exchange Server.

    Also, a Direct Access server running Windows Server 2008 R2 requires two network adapters: one connected directly to the Internet and one connected to the intranet. The Internet-facing adapter must also be assigned two consecutive public IPv4 addresses.

    Another major challenge for IT administrators deploying Direct Access in Windows Server 2008 R2 was the need for a PKI environment to issue computer certificates.

    And if you do not have Forefront UAG, a separate NAT64 device is required to provide Direct Access clients with access to IPv4-only resources.

    image

    As these complex requirements show, implementing the Direct Access feature was not an easy task for IT departments.

    The Direct Access feature has been redesigned in Windows Server 2012 and now delivers better connectivity with better manageability.

    In brief, Windows Server 2012 includes the following improvements over the Windows Server 2008 R2 Direct Access and RRAS features:

    • Direct Access and RRAS coexistence

    In Windows Server 2008 R2, combining RRAS and Direct Access might cause conflicts for remote client connectivity. Since Direct Access relies on IPv6 and RRAS implements IKEv2 IPsec, Direct Access traffic can end up being blocked if RRAS is installed and VPN access is deployed with IKEv2. In Windows Server 2012, Direct Access and RRAS are combined within a new unified server role.

    • Simplified Direct Access management for small and medium organization administrators

    One of the most important simplifications in Windows Server 2012 is the removal of the need for a full PKI deployment. One major deployment blocker for Windows 7 Direct Access is the requirement of a Public Key Infrastructure (PKI) for server and client certificate-based authentication. In Windows Server 2012, client authentication requests are instead sent to a Kerberos proxy service running on the Direct Access server, and the Kerberos proxy sends requests to domain controllers on behalf of the client.

    A new Getting Started wizard, which will be covered in upcoming posts, also allows for an automated setup in a few simple steps.

    • Built-in NAT64 and DNS64 support for accessing IPv4-only resources

    In Windows Server 2008 R2, UAG could be used for NAT64 and DNS64 translation:

    image

    Now Windows Server 2012 Direct Access server includes native support for NAT64 and DNS64 translations that convert IPv6 communication from the client to IPv4 internal resources.

    • Support for Direct Access server behind a NAT device

    The Teredo IPv6 transition technology is used typically when the client system is assigned a private IP address (and for modern Windows clients, will be used when the client is assigned a public IP address and 6to4 isn’t available). A Windows Server 2008 R2 Direct Access server requires two network interfaces with two consecutive public IPv4 addresses assigned to the external interface. This is required so that it can act as a Teredo server.

    Now in Windows Server 2012, the Direct Access server can be deployed behind a NAT device with a single network interface, which removes the public IPv4 address prerequisite.

    • Load balancing support

    One of the most important enhancements is the ability to design a fully highly available Direct Access solution. In Windows Server 2012, Direct Access has built-in Windows Network Load Balancing support to achieve high availability and scalability, and this can be configured in the new deployment wizard with a couple of clicks.

    • Support for multiple domains

    Now you can configure the Direct Access server to allow remote clients located in different domains.

    • Support for OTP (token based authentication)

    For organizations that need a higher security level, Windows Server 2012 supports two-factor authentication with smart cards or OTP token-based solutions from vendors such as RSA SecurID.

    • Automated support for force tunneling

    image

    http://blogs.technet.com/b/tomshinder/archive/2011/04/19/url-and-antivirus-filtering-for-directaccess-clients.aspx

    By default, only specific network traffic (as defined by the name resolution rules) goes through the Direct Access tunnel. If you want to route all traffic from the client computer over the Direct Access tunnel instead, you can configure force tunneling.

    Force tunneling is a feature in Windows Server 2008 R2 that forces all network traffic to be routed over the Direct Access IPsec tunnel, but it requires manual steps to enable via Group Policy. In Windows Server 2012, force tunneling is integrated into the setup wizard.

    • Multisite support

    Now in Windows Server 2012, you can configure multiple Direct Access entry points across remote locations. This ensures that clients locate the closest IP-HTTPS server, Teredo server, DNS server, and so on, regardless of their physical location.

    • Windows PowerShell support

    Direct Access in Windows Server 2008 R2 lacks a complete scripting and command-line interface for configuration. Windows Server 2012 provides full Windows PowerShell support for the setup, configuration, management, monitoring, and troubleshooting of the Remote Access server role.
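    For example, the Windows Server 2012 RemoteAccess module covers the whole lifecycle. A sketch — the address and GPO name are illustrative examples:

```powershell
# Install DirectAccess with the simplified defaults (example values)
Install-RemoteAccess -DAInstallType FullInstall `
                     -ConnectToAddress da.contoso.com `
                     -ClientGpoName "contoso.com\DirectAccess Client Settings"

# Review the configuration and the health of the server components
Get-RemoteAccess
Get-RemoteAccessHealth

# Review connection statistics for currently connected clients
Get-RemoteAccessConnectionStatistics
```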

    • User and server health monitoring

    The Monitoring Dashboard shows a summary of remote client connectivity status for the following items. The information is generated from the relevant Performance Monitor counters and accounting data.

    • Total number of active remote clients connected – includes both Direct Access and VPN remote clients
    • Total number of active Direct Access clients connected – only the total number of clients connected using Direct Access
    • Total number of active VPN clients connected – only the total number of clients connected using VPN
    • Total unique users connected – includes both Direct Access and VPN users, based on the active connections
    • Total number of cumulative sessions – the total number of connections serviced by the Remote Access Server since the last server restart
    • Maximum number of remote clients connected – the maximum concurrent remote users connected to the Remote Access Server since the last server restart
    • Total data transferred – the total inbound and outbound traffic from the Remote Access Server for both Direct Access and VPN since the last server restart

    With the above enhancements, it’s now much easier to implement this great remote access feature in your organization.

    In the second blog post of this series, I will design a Windows Server 2012 Direct Access lab that will guide us through the next posts.

  • Prepare SharePoint Farm – Part 3 Prepare NLB for SharePoint Web Front End WFE Servers

    Part 1 - Prepare Windows Cluster

    Part 2 - Install and configure SQL Cluster

    Part 4 - Install and configure SharePoint farm (3-tier)

     

    This post demonstrates a step-by-step NLB configuration: how to prepare an NLB cluster to be used later for the SharePoint WFE servers.

    Throughout this walkthrough I have tried to pause on some steps and explain the NLB configuration in as much detail as possible. Although this series configures NLB for the SharePoint WFE servers, I have tried to make this post generic enough to accommodate any NLB configuration for any purpose.

    Before you start

    IP addresses:

    • You need a virtual IP. This IP is called the cluster (public) IP and must be the same on all cluster nodes.
    • On the other hand, each cluster node has a dedicated (private) IP address, which must be unique among the nodes.
    • How these IPs interact with NLB differs depending on whether a single network adapter or multiple network adapters are used.
    • In the case of a single adapter: the dedicated IP address is always entered first, so that outgoing connections from the cluster host are sourced from this IP address instead of a virtual IP address. Otherwise, replies to the cluster host could be inadvertently load-balanced by Network Load Balancing and delivered to another cluster host.
    • The dedicated (private) IPs and the cluster IP must be on the same subnet to function properly.

     

    General Observations

    • Multicast is slower than unicast
    • As a general rule, use unicast with two adapters and multicast with a single adapter
    • The best performance is obtained from either multiple-adapter unicast or multiple-adapter multicast, although multiple multicast needs a more complex configuration

     

    1- If you are using virtual machines on Hyper-V, it’s important to enable MAC address spoofing on the NLB network adapters

    image

    2- NLB configuration: from Server Manager –> add the NLB feature on all WFE servers:

    image

    3- Open NLB Manager from Administrative Tools and choose Cluster –> New, or from Start –> Run –> NlbMgr

    image

    4- In this step we are going to add the first server node to the NLB cluster (the first web front end server, WFE01). Enter the server name, then click Connect.

    5- Select the network adapter that you wish to participate in the NLB cluster, then click Next:

    image

     

    6- On this screen the dedicated IP (private IP) is displayed, with the ability to add more private IPs. Leave the default and click Next.

    image

    • Priority (unique ID)
      • Each host takes a unique ID.
      • The host with the lowest priority value (1) is called the default host and handles all cluster traffic that is not covered by a port rule.
    • Dedicated IP address:
      • Must be configured in the TCP/IP properties first.
      • Must be identical to the IP entered in the TCP/IP properties.
    • Initial state: determines whether the node joins the cluster when the operating system starts

     

     

    7- On this screen you will be prompted to add the NLB cluster IP (public IP) that will be used to communicate with the front end servers. Click Add

    image 

    Enter the cluster IP as required, then click Next

    image

    8- On this screen enter the NLB cluster name: select the NLB cluster IP, then enter the name (SPSFENLB). This IP will be used as the cluster IP that is accessed by external traffic, and incoming traffic will be distributed among the host nodes by the NLB load-balancing algorithm

    image

    IP address: the virtual IP address (public IP) set for the cluster. It must be identical on all cluster hosts; all applications use this IP to communicate with the cluster.

    Full Internet name: ClusterName.DomainName. It must be identical on all cluster hosts; users type this name in their browsers to access the web server cluster. This name must be registered in DNS and mapped to the cluster IP.

    Cluster Operation Mode

    · Multicast:

    • Choose this option if you want cluster nodes to be accessed through both their public IP and private IP addresses.
    • This option is optimal if you have one network card installed, because the private IP remains functional and no application using the private IP is affected.
    • The MAC address is changed into a multicast MAC address.
    • If clients access the cluster through a router (from another LAN), make sure that the router supports ARP mapping of more than one IP address to one MAC address.
    • IGMP can be enabled, which eliminates switch flooding (traffic passes only to cluster ports)

    · Unicast

    • Choose this option if you want cluster nodes to be accessed only through the public IP. If you have one network card and you choose unicast, your server can no longer be reached through its private IP; you access it only through its public IP.
    • This option is optimal if you have two network cards, where you can configure one with the public IP and the other with the private IP.
    • Using unicast provides more of a performance gain than multicast.
    • The cluster MAC address overrides the built-in MAC address (some adapters don’t allow this; in that case you need to replace the adapter with another one).

    When you use the unicast method, all cluster hosts share an identical unicast MAC address. Network Load Balancing overwrites the original MAC address of the cluster adapter with the unicast MAC address that is assigned to all the cluster hosts.

    When you use the multicast method, each cluster host retains the original MAC address of the adapter. In addition to the original MAC address of the adapter, the adapter is assigned a multicast MAC address, which is shared by all cluster hosts. The incoming client requests are sent to all cluster hosts by using the multicast MAC address.

    As a rule of thumb, select the unicast method for distributing client requests, unless only one network adapter is installed in each cluster host and the cluster hosts must communicate with each other.

    For more info refer to Multicast vs Unicast section at the end of this post.

    9- Define port rules. This is an optional step; by default a single rule covering all ports is enabled.

    image

    However, if you want to limit the traffic handled by this NLB cluster, select the port rule –> Edit

    image

    · Filtering modes: there are three filtering modes, which determine the host(s) responsible for handling network traffic for this rule (this helps to distribute network traffic among hosts):

    o Multiple host: all hosts handle network traffic over the specified port range. This filtering mode provides scaled performance in addition to fault tolerance by distributing the network load among multiple hosts.

    • Affinity:
      • None:
        • Allows multiple connections from the same client IP to be handled by different cluster hosts.
        • Although setting affinity to None can improve performance, since it allows connections from the same client to be handled concurrently by different cluster hosts, don’t choose None when UDP or Both is selected; this prevents NLB from handling IP fragments properly.
      • Single:
        • Directs multiple connections from the same client IP to the same cluster host.
        • This option is useful when client state must be kept on a single host for the duration of a session.
      • Class C:
        • Directs multiple connections from the same Class C address range to the same cluster host.
        • This option is efficient when clients access the NLB cluster through multiple proxies located within the same Class C address range, which might otherwise cause requests from a single client to appear to originate from different computers.
    • Single host: only a single host handles the network traffic for the rule, according to host priority.
    • Disable this port range: all network traffic for the associated port rule is blocked

     

    Notes:

    • To improve load balancing, set affinity to None when possible, bearing in mind that None can’t be chosen when UDP or Both is selected under Protocols.
    • Single affinity performs better than Class C affinity.
    • When Single host is selected, the host with the highest priority (priority value 1) handles all network traffic, and the load weight option is disabled.
    • To specify a single port in a port rule, enter the same port in both the From and To fields.
    • The same port rules must be configured on all involved hosts, or an error is generated when trying to add another host to the cluster.
    • You can set the network load weight between hosts when the multiple hosts option is chosen. The load weight is set per host: after finishing the configuration, go to Host Name –> Host Properties –> Port Rules –> uncheck the Equal option –> choose the proper load weight for this rule.

    TCP vs UDP:

    TCP: the connection between sender and receiver persists until sending is finished, and then the connection is closed (the sender can guarantee delivery; somewhat heavy on the network).

    UDP: the sender packages the data and releases it onto the network to reach the receiver (no delivery guarantee; very light on the network).

    Application              Application-layer protocol    Underlying transport protocol
    electronic mail          SMTP                          TCP
    remote terminal access   Telnet                        TCP
    Web                      HTTP                          TCP
    file transfer            FTP                           TCP
    remote file server       NFS                           typically UDP
    streaming multimedia     proprietary                   typically UDP
    Internet telephony       proprietary                   typically UDP
    network management       SNMP                          typically UDP
    routing protocol         RIP                           typically UDP
    name translation         DNS                           typically UDP

     

    10- After you finish, add the second WFE host following the same steps

    image

    11- The two servers are now joined to the NLB cluster

    image

    12- Go to your DNS server (Active Directory-integrated DNS) and create a host (A) record for the NLB cluster name
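    The GUI steps above can also be scripted with the NetworkLoadBalancingClusters PowerShell module. A sketch — the interface names, host names, and IP address below are example values for this lab:

```powershell
Import-Module NetworkLoadBalancingClusters

# Create the cluster on the first WFE, bound to its LAN interface (example values)
New-NlbCluster -HostName "WFE01" -InterfaceName "LAN" `
               -ClusterName "SPSFENLB" -ClusterPrimaryIP 192.168.1.50 `
               -OperationMode Unicast

# Join the second WFE to the cluster
Get-NlbCluster -HostName "WFE01" |
    Add-NlbClusterNode -NewNodeName "WFE02" -NewNodeInterface "LAN"

# Optionally restrict traffic to HTTP/HTTPS, mirroring the port-rule step
Get-NlbCluster | Get-NlbClusterPortRule | Remove-NlbClusterPortRule -Force
Get-NlbCluster | Add-NlbClusterPortRule -StartPort 80  -EndPort 80  -Protocol TCP -Affinity Single
Get-NlbCluster | Add-NlbClusterPortRule -StartPort 443 -EndPort 443 -Protocol TCP -Affinity Single
```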

     

    Multicast vs Unicast:

    Multicast: NLB adds the new virtual MAC address to the network card, but also keeps the card’s original MAC address.
    Unicast: NLB replaces the network card’s original MAC address with the newly entered one (for the cluster IP).

    Multicast: not all routers support having two MAC addresses on one network card; a router might reject replies from NLB hosts, since a unicast IP must map to one MAC address, not two.
    Unicast: works with all routers, since each network card has only one MAC address.

    Multicast: servers can communicate with each other in NLB Manager via the original addresses of their NLB network cards.
    Unicast: since all hosts in the cluster have the same MAC and IP address, they cannot communicate with each other via their NLB network card. A second network card is required for communication between the servers.

    Note: be aware that some routers don’t support multiple MAC addresses for a unicast IP; special router configuration might be needed.

    The IP addresses from 224.0.0.0 to 239.255.255.255 (class D) are reserved for multicast networks. For example, 192.168.0.158 is an address that belongs to a unicast network.
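    A quick way to check whether an address falls in the class D range is to look at its first octet (224 through 239). A small PowerShell sketch (the function name is mine, not a built-in cmdlet):

```powershell
function Test-MulticastAddress([string]$Ip) {
    # Class D (multicast) addresses have a first octet of 224 through 239
    $firstOctet = [int]($Ip.Split('.')[0])
    return ($firstOctet -ge 224 -and $firstOctet -le 239)
}

Test-MulticastAddress "239.255.255.250"   # True  (multicast)
Test-MulticastAddress "192.168.0.158"     # False (unicast)
```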

  • Prepare SharePoint Farm – Part 4 Install and Configure SharePoint farm (3 -tier)

     

    In previous posts , we have configured windows cluster, SQL cluster, and NLB for the WFE servers.

    Part 1 - Prepare Windows Cluster

    Part 2 - Install and configure SQL Cluster

    Part 3 - Install and Configure NLB on WFE

    In this post we will perform a farm installation and configuration to meet the 3-tier farm topology shown in the figure above.

    1- Make sure all servers have internet connectivity

    2- Turn off firewall on all Application and web front end servers

    3- Install windows updates on all servers.

    4- Add the following Roles on each server

    • IIS Role
    • Application Server

    image
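    On Windows Server 2008 R2 these roles can also be added from PowerShell with the ServerManager module — a sketch; in practice you would enable only the role services that the SharePoint prerequisite installer requires:

```powershell
Import-Module ServerManager

# Add the Web Server (IIS) and Application Server roles on this box
Add-WindowsFeature Web-Server, Application-Server -IncludeAllSubFeature
```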

    5- Run the SharePoint installation setup on all servers (except the DB servers). Start with the App server and configure Central Administration there, then continue with the WFEs as below.

    6- Run default.hta –> prerequisites installer

    image

    7- The setup will run the following:

    image

    8- After the prerequisites installation completes, run the SharePoint setup:

    9- Enter the SharePoint product key

    10- Make sure to select Server Farm, as we are installing SharePoint on multiple servers for scalability. Click Next

    image

    11- Make sure to select Complete, as a Standalone installation would install SharePoint with SQL Server Express

    image

    12- Determine the installation path and the path for the search index files. These index files store search index information; the application server regularly propagates the index files to this location, which helps the WFE servers answer search queries and return results faster.

    image

    13- After installation, we need to run the configuration wizard. This wizard should be executed first on the application server, where SharePoint Central Administration is going to be hosted

    14- On the application server, run the configuration wizard and select the Create a new server farm option, since this App server is the first server in the farm.

    image

    15- Select the clustered SQL instance name (created in Part 2 of this series). The SharePoint admin account (SPSadmin) should be an administrator on the database server.

    image

    Note: the SQL instance name above comes from the Windows cluster, as below (refer to Parts 1 & 2):

    image

    16- Enter a passphrase and make sure to save it somewhere reachable; this passphrase is needed whenever a new server is going to be joined to the farm:

    image

    17- Select a port to host the Central Administration web application

    image

    18- Review the summary page, then click Next

    19- The Central Administration URL is: http://[AppServerName]:100/ (using the port selected in the previous step)
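    For reference, the same farm-creation step can be scripted with the SharePoint 2010 cmdlets. A sketch — the database, server, account, and passphrase values are examples from this lab, not fixed names:

```powershell
Add-PSSnapin Microsoft.SharePoint.PowerShell

# Create the farm configuration database against the clustered SQL instance
New-SPConfigurationDatabase -DatabaseName "SP_Config" `
    -DatabaseServer "SPSSQLCLUSTER" `
    -FarmCredentials (Get-Credential "CONTOSO\SPSadmin") `
    -Passphrase (ConvertTo-SecureString "P@ssphrase-Example" -AsPlainText -Force)

# Provision Central Administration on the chosen port
New-SPCentralAdministration -Port 100 -WindowsAuthProvider NTLM
```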

    20- Open Central Administration and run the configuration wizard to configure the SharePoint service applications:

    image

    Note: in more advanced scenarios you can create the service applications manually, one by one, as the farm requires. In this post we will walk through creating the service applications with the default settings.

    21- Check the services you need for this Farm, to create the required Service Applications:

    image

    22- Skip the web application creation step, as we want to create the web application for the WFE NLB servers, not on the application server

    image

    Note: we will perform some additional, more detailed configuration of the service applications once we finish the installation on the rest of the servers.

    23- Now go to the WFE servers and run the configuration wizard on both WFE servers, this time choosing to join the farm rather than create a new one. Make sure to select Connect to an existing server farm

    image

    24- Enter the SQL cluster instance name, then click Retrieve Database Names

    image

    25- Make sure to supply the passphrase you entered previously while creating the farm on the application server

    image

    26- On the summary page, you can click Advanced Settings to make sure that the WFE servers are not used to host the Central Administration web application (it is hosted on the application server only). Then click Next:

    image

    image
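    The join can likewise be scripted on each WFE. A sketch — the database and server names are the same example values used above:

```powershell
Add-PSSnapin Microsoft.SharePoint.PowerShell

# Join this WFE to the existing farm using the farm passphrase
Connect-SPConfigurationDatabase -DatabaseName "SP_Config" `
    -DatabaseServer "SPSSQLCLUSTER" `
    -Passphrase (ConvertTo-SecureString "P@ssphrase-Example" -AsPlainText -Force)

# Equivalent of the remaining configuration wizard steps
Install-SPHelpCollection -All
Initialize-SPResourceSecurity
Install-SPService
Install-SPFeature -AllExistingFeatures
Install-SPApplicationContent
```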

     

     

    Configure SharePoint Farm Server Roles

    Now that we have installed and configured SharePoint on all servers, we need to configure one of them as the App server and two of them as WFE servers. There is actually no dedicated configuration page to identify the server roles explicitly; instead, the process of assigning server roles is dynamic. The trick is that you identify a server’s role through the Manage Services on Server page by starting the appropriate services on each server, as below:

    1. Open Central Administration

    2. Click Manage Services on Server for each server in the farm and make sure to perform the following

     

    • The WFE servers should have the following services started on them, to serve user requests and optionally to act as query servers for search. This is why we started the search services; in a later step we will limit the search role here so that these servers handle search queries only.

    image

    Note: the most important service to start on the WFE servers, and the one that identifies them as WFEs, is Microsoft SharePoint Foundation Web Application

    When starting the Search service for the first time, you might be prompted to configure the search service, as below:

    image

     

     

    • The application server should have all the required services started on it, as below (some services are intentionally stopped, as they are not needed in this farm)

    image

    Note: the Microsoft SharePoint Foundation Web Application service is stopped here
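    Service instances can also be started and stopped per server with PowerShell, which makes the role assignment scriptable. A sketch — the server names WFE01 and APP01 are example values:

```powershell
Add-PSSnapin Microsoft.SharePoint.PowerShell

# Start the web application service on a WFE (identifies it as a web front end)
Get-SPServiceInstance -Server "WFE01" |
    Where-Object { $_.TypeName -eq "Microsoft SharePoint Foundation Web Application" } |
    Start-SPServiceInstance

# Stop the same service on the application server
Get-SPServiceInstance -Server "APP01" |
    Where-Object { $_.TypeName -eq "Microsoft SharePoint Foundation Web Application" } |
    Stop-SPServiceInstance
```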

     

     

    Search Service Topology configuration

    In this section we will define the search service role on each server. It is true that the application server should be responsible for search, but some components need to be present on the WFE servers to handle client search queries.

    As a best practice, it is better to install search components on both the web and application tiers in order to optimize performance by distributing the load placed on the servers in the farm. To distribute the search components across the farm tiers, follow these steps:

    • Move the query components to the web tier
    • Move the crawl components to the App tier
    • Keep the database components on SQL Server.

    Search is complicated enough that it has its own topology configuration settings. The services that need to be tuned are:

    • Search Query and site settings
    • SharePoint Server Search

    You can use a special configuration page to place the query functionality on the WFE and to place the crawling\indexing functionality on the Application Server.

    1. Define the search topology: go to Manage Service Applications and click the Search Service Application.

    image

    Click Modify and make sure to have the following:

    • App server: change its role to crawler by clicking the default crawl component and reassigning it to the application server.
    • WFE server: change its role to query server by clicking the default query component –> Add Mirror.

    After you finish, click Apply Topology Changes

    2. Click Content Sources in the left menu –> click the content source you want to schedule:

    image

    3. Scroll down to Crawl Schedule –> create schedule

    image

    4. For low search latency and near-instant search crawling, the configuration is set as below; it can be changed later by the admin as needed

    • Full Crawl setting:

    image

    • Incremental crawl settings:

    image

    Note: the above configuration runs a full crawl once every night and an incremental crawl every 5 minutes, 24 hours a day. The interval may vary in case no dedicated application server is available for search.

     

    Create your First Intranet Web Application in the Farm

    In this step we will create our intranet web application on the NLB URL

    1. Go to Central Administration , click manage web applications

    2. Create a new web application (with the NLB considerations) as follows:

    image

    Make sure to supply the NLB cluster name in the host header (created in Part 3 of this series) and a proper content database name. This content database will contain the intranet content (documents, lists, etc.).

    image

    3. After the web application is created, create a site collection as follows:

    image

    Note how the URL appears with the NLB cluster name instead of a server name; this is because of the host header we supplied in the previous step.

    Note: make sure that your NLB cluster name and IP are added as a host record in DNS.
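    The equivalent PowerShell for this step, as a sketch — the URLs, database name, application pool, and account below are example values matching this lab:

```powershell
Add-PSSnapin Microsoft.SharePoint.PowerShell

# Web application bound to the NLB cluster name, with its own content database
New-SPWebApplication -Name "Intranet" -Port 80 `
    -HostHeader "SPSFENLB.contoso.com" -Url "http://SPSFENLB.contoso.com" `
    -DatabaseName "SP_Intranet_Content" `
    -ApplicationPool "IntranetAppPool" `
    -ApplicationPoolAccount (Get-SPManagedAccount "CONTOSO\SPSadmin")

# Root site collection on the new web application (Team Site template)
New-SPSite -Url "http://SPSFENLB.contoso.com" -Name "Intranet" `
    -OwnerAlias "CONTOSO\SPSadmin" -Template "STS#0"
```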

     

    Configure Alternate Access Mapping

    In some situations users might ask for an easy name for the portal, so that whenever they want to access it, the URL is easy to memorize and type (e.g. myPortal)

    1. Go to Central Administration –> Application management –> Configure Alternate Access Mapping

    2. Click Edit Public URLs

    image

    3. Add a new alternate access mapping

    image

    Make sure to supply the name that you want users to use when accessing the portal (e.g. myPortal). This URL is then resolved to the NLB cluster URL automatically because of the configuration we are doing now.

    4. Add a binding for the web application in IIS

    image

    5. Make sure to add “myPortal” as a host record in DNS
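    The mapping and binding steps can also be sketched in PowerShell — the site name and URLs are the example values used earlier:

```powershell
Add-PSSnapin Microsoft.SharePoint.PowerShell

# Map the short name to the web application as an additional public URL
New-SPAlternateURL -WebApplication "http://SPSFENLB.contoso.com" `
    -Url "http://myPortal" -Zone "Internet"

# Add the matching IIS binding on each WFE (WebAdministration module)
Import-Module WebAdministration
New-WebBinding -Name "Intranet" -Protocol http -Port 80 -HostHeader "myPortal"
```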

     

     

  • Converged Fabric in Windows Server 2012 Hyper-v

     

    Converged fabric tends to simplify data center management by consolidating all communication (management, live migration, iSCSI, cluster heartbeat and CSV) onto a single fabric, for ease of management and better utilization of high-availability features like teaming.

    In this post we will create multiple virtual communication networks over two 10 Gbps physical NICs teamed to form "LAN01-Team", as shown in the following diagram:

     

    Before proceeding with converged network creation and QoS configuration, some terminology has to be described first:

    • MaximumBandwidth <Int64>: Specifies the maximum bandwidth, in bits per second, for the virtual network adapter. The specified value is rounded to the nearest multiple of eight. Specify zero to disable the feature.

    • MinimumBandwidthAbsolute <Int64>: Specifies the minimum bandwidth, in bits per second, for the virtual network adapter. The specified value is rounded to the nearest multiple of eight. For predictable behavior, you should specify a number larger than 100 Mbps.

    • MinimumBandwidthWeight <UInt32>: Specifies the minimum bandwidth, in terms of relative weight, for the virtual network adapter. The weight describes how much bandwidth the virtual network adapter intends to have relative to other virtual network adapters connected to the same virtual switch. The range of the value is 0 to 100. Specify zero to disable the feature.

    • DefaultFlowMinimumBandwidthAbsolute <Int64> (used with virtual switches only, via Set-VMSwitch): Specifies the minimum bandwidth, in bits per second, allocated to a special bucket called "default flow". Any traffic sent by a virtual network adapter that is connected to this virtual switch and does not have minimum bandwidth allocated will be filtered into this bucket. Specify a value for this parameter only if the minimum bandwidth mode on the virtual switch is Absolute. By default, the virtual switch allocates 10% of the total bandwidth (which depends on the physical network adapter it binds to) to this bucket; for example, if a virtual switch binds to a 1 GbE network adapter, this bucket can use at least 100 Mbps. If the value is not a multiple of 8, it is rounded down to the nearest one that is; for example, 1234567 will be converted to 1234560. The weight-mode counterpart is DefaultFlowMinimumBandwidthWeight, which is the parameter used in this post since our switch is created in weight mode.

    Now we will create the virtual switch and the different virtual networks, with a different QoS bandwidth allocation for each network, as per the table below:

     

    Setting                            VMSW01   Management   Guest Access   Live Migration   iSCSI        Cluster

    MaximumBandwidth                   -        1000000000   5000000000     1000000000       2000000000   500000000

    MinimumBandwidthAbsolute           -        -            -              -                -            -

    MinimumBandwidthWeight             -        20           40             20               20           20

    DefaultFlowMinimumBandwidthWeight  20       -            -              -                -            -

    1. The first step to configure the above network requirements is to create the virtual switch VMSW01 and set DefaultFlowMinimumBandwidthWeight to 20. To do so, run the following commands in Windows PowerShell (this switch will be used by virtual machines for guest networking access):

    New-VMSwitch "VMSW01" -MinimumBandwidthMode Weight -NetAdapterName "LAN01-Team" -AllowManagementOS $true

     

    Set-VMSwitch "VMSW01" -DefaultFlowMinimumBandwidthWeight 20

    2. Now we will create the Management, Live Migration, iSCSI and Cluster networks. To do so, run the following commands in Windows PowerShell:

    Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName "VMSW01"

      

    Add-VMNetworkAdapter -ManagementOS -Name "Live Migration" -SwitchName "VMSW01"

     

    Add-VMNetworkAdapter -ManagementOS -Name "iSCSI" -SwitchName "VMSW01"

     

    Add-VMNetworkAdapter -ManagementOS -Name "Cluster" -SwitchName "VMSW01"

     

    3. The final step is to set the QoS bandwidth allocation limits on each of the created networks. To do so, run the following commands in Windows PowerShell:

    Set-VMNetworkAdapter -ManagementOS -Name "Management" -MinimumBandwidthWeight 20 -MaximumBandwidth 1000000000

      

    Set-VMNetworkAdapter -ManagementOS -Name "Live Migration" -MinimumBandwidthWeight 20 -MaximumBandwidth 1000000000

     

    Set-VMNetworkAdapter -ManagementOS -Name "iSCSI" -MinimumBandwidthWeight 20 -MaximumBandwidth 2000000000

     

    Set-VMNetworkAdapter -ManagementOS -Name "Cluster" -MinimumBandwidthWeight 20 -MaximumBandwidth 500000000

     

    To get all of the created virtual network adapters settings run the following PowerShell command:

    Get-VMNetworkAdapter -ManagementOS | Format-List
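Once the host vNICs exist, you would typically tag each one with a VLAN and assign it an IP address. A sketch, where the VLAN IDs and addresses are purely hypothetical examples:

```powershell
# Tag host vNICs with VLANs (IDs are examples only)
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Management" -Access -VlanId 10
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Live Migration" -Access -VlanId 20

# Assign a static IP to the Management vNIC, using the adapter name
# as it appears in the host OS ("vEthernet (<vNIC name>)")
New-NetIPAddress -InterfaceAlias "vEthernet (Management)" `
    -IPAddress 192.168.10.11 -PrefixLength 24 -DefaultGateway 192.168.10.1
```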

    Other Examples of configuring virtual networks while separating the management interface

    Example1: No high availability for any type of traffic.

    Example2:


     

    I hope you found this post helpful.

     

     

  • Creating fine grained password policies through GUI Windows server 2012 “Server 8 beta”

    A quick description of fine-grained password policies: they let you specify multiple password policies within a single domain, and apply different password and account lockout restrictions to different sets of users in that domain.

    One of the nice features introduced in Windows Server 2012 "Server 8 beta" AD DS is the ability to configure fine-grained password policies through the GUI.

    In this post we will walk through the configuration steps to create and assign different password policies to different user groups within the same Active Directory domain. The table below gives an example of different password policy requirements:

    Setting                    Group1        Group2        Group3

    Policy Name                Poli-Group1   Poli-Group2   Poli-Group3

    Minimum password length    2             6             19

    Minimum password age       1             2             14

    Enforce password history   24            15            none

     

    To configure password policies as per the table above

    1. Log in using a domain admin account to a machine that has the Active Directory administration tools, and open Server Manager.

    2.       Go to tools and open Active Directory Administrative Center.

    clip_image002

    3.       Click on Tree View.

    clip_image004

    4.       Navigate to System container then Password Settings Container.

    clip_image006

    5. Right-click Password Settings Container, then New > Password Settings

    clip_image008

    6.       Specify the password policy settings for each of the required policies

    clip_image010

    7. Click Add to link the created policy to the users security group "Group1"

    clip_image012

    clip_image014

     

    clip_image016                                                

    8.       Repeat steps 5-7 for the remaining policies.
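The same policies can also be created with the Active Directory PowerShell module, using the values from the table. A sketch for two of the policies; the Precedence values are arbitrary choices of mine, and the ages are day-based timespans:

```powershell
Import-Module ActiveDirectory

# Poli-Group1: min length 2, min age 1 day, history 24
New-ADFineGrainedPasswordPolicy -Name "Poli-Group1" -Precedence 10 `
    -MinPasswordLength 2 -MinPasswordAge "1.00:00:00" -PasswordHistoryCount 24
Add-ADFineGrainedPasswordPolicySubject -Identity "Poli-Group1" -Subjects "Group1"

# Poli-Group3: min length 19, min age 14 days, no history enforcement
New-ADFineGrainedPasswordPolicy -Name "Poli-Group3" -Precedence 30 `
    -MinPasswordLength 19 -MinPasswordAge "14.00:00:00" -PasswordHistoryCount 0
Add-ADFineGrainedPasswordPolicySubject -Identity "Poli-Group3" -Subjects "Group3"
```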

  • Windows Server 2012 Direct Access Part 2–How to Build a Test Lab

    Here is the second part of Windows Server 2012 Direct Access blog series.

    Part1: http://blogs.technet.com/b/meamcs/archive/2012/05/03/windows-server-2012-direct-access-part-1-what-s-new.aspx

    In the first post we discussed what's new and the design differences between the new and previous versions of the Direct Access feature.

    In this blog post, we'll discuss the lab configuration that will be the basis for the next parts and will help us design and test the Direct Access feature within a virtual environment.

    To build a reliable Direct Access lab, Microsoft provides Base and Test Lab guide documentation.

    Base Lab: http://www.microsoft.com/en-us/download/details.aspx?id=29010

    Test Lab: http://www.microsoft.com/en-us/download/details.aspx?id=29029

    Using the base lab guide, you can build a base lab that includes infrastructure servers (DNS, Active Directory), an application server (intranet IIS site), a simulated Internet (DNS server) and a single Direct Access server.

    After you build the base virtual machines, you should follow the Test Lab guide to configure and test the Direct Access feature.

    Let’s look at the lab details and introduce virtual machines & roles.

     

    image

     

    - First of all you must build a domain controller acting as the intranet domain controller, DNS server and DHCP server. This server will be responsible for authentication and will act as the main identity store for the lab environment. A DNS server is also a must to build a healthy Active Directory environment. DHCP is another role you have to install; it will be used to configure Client1's IP address automatically. Since you will change Client1's subnet frequently during testing, providing IP addresses automatically will help.

    - One intranet member server running Windows Server 2012 named APP1. It will be configured as a general application and web server. When a client resides on the Internet and successfully connects to the intranet through the IPsec tunnel (Direct Access server), being able to access real intranet resources makes for a more useful test of the client-side functionality. On the application server, a file share and an intranet IIS web site will be created.

    - One member client computer running Windows 8 Consumer Preview named Client1. You will use that client machine for testing purposes. I recommend giving it three network interfaces to try Internet, intranet and behind-NAT communications.

    - One intranet member server running Windows Server 2012 named EDGE1. That will be our Direct Access server. The most important point is that it should have two different network cards to access both the intranet and Internet networks. This server will also act as a DNS64: it will receive IPv6 DNS requests from Windows 8 clients residing on the Internet and make IPv4 DNS requests to the intranet DNS server on behalf of DA clients.

    - The last required server for the base lab is INET1. It's required to simulate the Internet. You will have to create DNS zones to answer DNS queries from Internet clients.

     

    I'm sure that if you want to build this lab, you will download the base and test lab guides and follow the steps, so I will only highlight the important steps, which are also covered in the documents.

    - Since this is a limited lab environment, you can minimize the hardware requirements; 1 GB of RAM will be enough for each VM.

    - Unlike the previous Windows 7 Direct Access Test Lab guide, this guide includes a PowerShell script for each step. You do not have to follow 15-20 steps one by one; just copy the provided PowerShell script and run it within an elevated PowerShell console.

    image

     

    After you complete the Base Lab guide and before starting the Test Lab guide, if you want to test Direct Access functionality behind a NAT device, you also have to build the following HomeNet lab.

    Optional mini-module: Homenet subnet

    It's an optional step that will help you fire up one more Windows 8 virtual machine, which will act as a NAT device.

    Before you start to install Direct Access Feature and test connectivity, you must have following environment:

    image

    I know it seems a little crowded, but once you build this kind of virtual lab, you can also use it to test other new Windows Server 8 features.

    In the next part we will assume that you have a working lab environment, and will start to install and configure the Direct Access feature.

  • Prepare SharePoint Farm – Part 1 Prepare Windows Cluster

    This part demonstrates how to configure a Windows cluster for two servers, to be used as a SQL cluster.

    image

    Before you start

    · You need two network adapters on each node: one public and one private (for heartbeat communication).

    · Shared storage (like SAN storage) should be present and connected to both cluster nodes  with at least:

      • Quorum Disk (5GB)
      • DTC Disk (1GB)
      • SQL data files and log file disk(s)

    · Domain user account (SPSadmin): add the SPSadmin user as an administrator on both servers.

    · Prepare a reserved static IP and cluster name to be used.

    · Prepare a reserved static IP and DTC name to be used.

     

    Windows Cluster Configuration

    1. Install the latest Windows updates on all server nodes.

    2. Install the Application Server role and IIS role on both SQL DB server nodes.

    image

    3. Install the Failover Clustering feature on both SQL DB server nodes.

    image

    4. Provide a Cluster Name and Cluster IP for the database nodes:

    image

    Note: make sure that the public network is used here, not the private (heartbeat) one.

    5. Below are the servers info

    image

    6. Cluster Disk files are configured as the following:

    image

    7. Configure DTC as a clustered service; this is a prerequisite for the SQL cluster installation.

    image

    8. DTC cluster configuration

    image

    9. Assign the DTC a cluster disk

    image

    10. Create a SQL group, which is a logical group to include all SQL resources in:

    image
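For reference, the feature install and cluster creation can be sketched in PowerShell (Windows Server 2008 R2 and later); the node names, cluster name and IP below are placeholders for your own values:

```powershell
# On each node: install the Failover Clustering feature
Import-Module ServerManager
Add-WindowsFeature Failover-Clustering

# From either node: validate the configuration, then create the cluster
Import-Module FailoverClusters
Test-Cluster -Node SQLNODE1, SQLNODE2
New-Cluster -Name SQLCLUSTER -Node SQLNODE1, SQLNODE2 -StaticAddress 192.168.1.50
```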

     

     

    Part 2 - Install and configure SQL Cluster

    Part 3 - Install and Configure NLB on WFE

    part 4 - Install and configure SharePoint farm (3-tier)

     

  • Reset the DSRM Administrator Password

    To Reset the DSRM Administrator Password

    1. Click Start, click Run, type ntdsutil, and then click OK.
    2. At the Ntdsutil command prompt, type set dsrm password.
    3. At the DSRM command prompt, type one of the following lines:
      • To reset the password on the server on which you are working, type reset password on server null. The null variable assumes that the DSRM password is being reset on the local computer. Type the new password when you are prompted. Note that no characters appear while you type the password.

        -or-
      • To reset the password for another server, type reset password on server servername, where servername is the DNS name of the server on which you are resetting the DSRM password. Type the new password when you are prompted. Note that no characters appear while you type the password.
    4. At the DSRM command prompt, type q.
    5. At the Ntdsutil command prompt, type q to exit.
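For the local-server case, the console session looks roughly like this (the prompts and messages are illustrative, not verbatim output):

```
C:\> ntdsutil
ntdsutil: set dsrm password
Reset DSRM Administrator Password: reset password on server null
Please type password for DS Restore Mode Administrator Account: ********
Please confirm new password: ********
Reset DSRM Administrator Password: q
ntdsutil: q
```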
  • System Center 2012 Configuration Manager–Part7: Software Updates (Deploy)

    In our last article, Part6: Software Updates (SUP), we configured the Software Update point and ran the synchronization with the Microsoft Update server.

    As a result of this process, the software updates metadata was synchronized, and the result can be viewed from the Configuration Manager console

    16

    Throughout this article, we will select a few updates and deploy them to a collection of Windows 7 machines. Before we do that, it is worth reviewing the Software Updates client policy to make sure its properties satisfy our business needs.

    From the Client Settings in the Administration tab, Click Software Update

    image

    If you are planning to use Software Update point to patch your environment, make sure you do not configure domain policy for client computers to receive updates from WSUS through Group Policy Settings. The group policy settings used by Windows Update Agent (WUA) on client computers will override any machine policy sent from Configuration Manager and hence the client agent will retrieve the updates specified by the “unmanaged” WSUS.

    Deploying software updates to client machines is simply the process of adding software updates to a software update group and then deploying that group to clients. There are actually two methods to deploy updates: the first is a manual process, where we select updates from the console and deploy them to a collection of machines; the second is automatic, using an automatic deployment rule or by adding software updates to an update group that has an active deployment.

    At your initial install, you might first need to use the manual method to get your devices up to date with required software updates, and then create an automatic deployment rule to manage your ongoing monthly software update deployments.

    As you’ve seen in our first screenshot, there are hundreds of updates in the console. The first step here would be to filter the updates by criteria.

    To do so, from the Configuration Manager console, click Software Library.

    Expand Software Updates and click All Software Updates.

    In the search pane, click Add Criteria and select the criteria that you want to use to filter software updates and click Add

    22

    Click Search to filter the Software Updates

    23

    Select the updates you wish to deploy, right click on your selection and click Deploy

    27

    On the General page, specify the name of the deployment, the software update group name and the collection where the updates will be deployed

    28

    On the Deployment Settings page, make sure Required is selected as the Type of deployment to make sure the updates will be mandatory with an installation deadline and Minimal for Detail level.

    On the Scheduling page, select Client local time. For Software available time, select As soon as possible so that clients are notified of the updates at their next policy polling cycle, and for Installation deadline, specify the time when the software updates will be installed automatically

    30

    On the User Experience page, you can keep the default settings and click Next

    31

    On the Alerts page, configure how Configuration Manager and Operations Manager will generate Alerts

    32

    The Download Settings page controls three behaviors: whether a client connected to a slow network or using a fallback content location will download and install the software updates; whether, when the content for the software updates is not available on a preferred distribution point, the client may download and install them from a fallback distribution point; and whether to enable BranchCache for content downloads (Allow clients to share content with other clients on the same subnet)

    33

    On the Deployment Package page, select to create a new deployment package and specify its properties

    34

    On the Distribution point page, select the distribution point to host the software update files.

    35

    On the Download location page, select to Download software updates from the internet

    36

    On the Language selection page, select the languages for which the selected software updates are downloaded.

    On the Summary page, review the settings and click Save As Template to save the settings for a future deployment

    38

    Click Next and on the Completion screen click Close.

    At this stage, you would need to wait for the next policy polling cycle on the client machine, or you can force the client to retrieve the machine policy by double-clicking the Configuration Manager client agent found in Control Panel.

    From the Actions tab, select Machine Policy Retrieval & Evaluation Cycle and click Run Now

    image

    After a few seconds, you will notice a notification message

    40

    From the Software Center, you can check the Software Updates deployment settings

    41

    Once the updates get installed, you will be able to view the installed updates with a description of each update

    42

    This brings us to the end of this article, where we've discussed the steps required to deploy software updates to devices. In a future article we will discuss automatic deployment rules in the context of Endpoint Protection.

    This article can also be viewed from my blog.

  • Internet Facing SharePoint 2010 Site with Windows Live ID–Part 1

    In this series of posts I will talk about how to allow Windows Live users to log in to an internet-facing SharePoint 2010 site. Most of the information can be found online, but I am putting it here in the form of a series to make it easy for whoever wants to implement the integration to find the information.

    The first part of the series covers how to register and configure the site to use Windows Live ID as an authentication provider.

    Registering with Windows Live ID

    To use Windows Live ID authentication, the site should first be registered using the Microsoft Service Manager web application located at http://msm.live.com. Below are the steps needed to register the website for Windows Live ID authentication:

    1. In your browser, browse to http://msm.live.com

    2. Login to the website using an already registered Windows Live ID. This Windows Live ID will be the main ID that will be used to manage the Windows Live ID registration.

    3. In the left menu, click on Register Your Site

    4. A page will open where you need to enter the name of the site and the DNS Name

    image

    5. Choose Windows Live ID

    6. Click Submit

    7. On the confirmation page, click Yes.

    8. The below screen will appear

    image

    9. Click on Go to Manage your Site link.

    10. In the drop-down list, select the site that was just registered and then click Modify Editable Site Properties

    image 

    11. In the next screen, check the Show advanced properties check box to enable more options
     image

    12. Enter the values in the fields that appear on the screen (Replace the below with your own domain)
    Domain Name: contoso.com
    DNS Name: urn:contososharepoint:int
    Default Return URL: https://contoso.com/_trust/default.aspx
    Expire Cookie URL: https://contoso.com/wlid/expirecookie.aspx

    image

    13. Scroll down until you see the Override Authentication Policy. Select MBI_FED_SSL from the dropdown.

    image

    14. Scroll up to the page and click the Submit button.

    15. On the next screen, note down all the information on the screen, and then click the Yes button.

    image

     

    Certificates

    Claims-based authentication uses certificates for encryption and signing, and we have to trust the identity provider's certificate on the SharePoint servers. The following steps must be done on all WFEs in the farm.

    1. To get the IP Certificate, browse to the federation metadata URL
    https://nexus.passport-int.com/federationmetadata2/2007-06/federationmetadata.xml

    2. Select and copy the text from the first X509Certificate node

    image

    3. Make sure to select only the inner text, excluding the <X509Certificate></X509Certificate> tags.

    4. Open Notepad application, paste the text and then save the file as LiveID-INT.cer. We now have the certificate in a file and we need to import it to the correct locations on the SharePoint Servers.

    5. On the WFE server, press Window Key + R on the keyboard and then type mmc.exe and press enter to open up the management console.

    6. Add the Certificates snap-in to the management console. Choose Computer Account to manage and then select to use the Local Computer as computer to manage

    7. Expand the tree until you reach SharePoint > Certificates. Right-click the node and select All Tasks > Import…

    image

    8. In the import wizard, locate the LiveID-int.cer file we just created and then click Next > Next > Finish.

    9. Repeat same procedure above to import the certificate to the Trusted Root Certificate Authority and Trusted People.

    image

    10. These procedures should be done on all WFE servers.

     

    Create the STS Provider in SharePoint 2010

    We need to create a Trusted Identity Token Issuer in SharePoint which will be configured to be used as the login provider for our Web Application.

    1. On one of the SharePoint servers, fire up the Powershell console.

    2. Execute the below powershell script

    asnp microsoft.sharepoint.powershell
    $realm = "urn:contososharepoint:int"
    $certfile = "C:\Temp\LiveID-INT.cer"
    $rootcert = Get-PfxCertificate $certfile
    New-SPTrustedRootAuthority "Live ID INT Root Authority" -Certificate $rootcert
    $emailclaim = New-SPClaimTypeMapping -IncomingClaimType "http://schemas.xmlsoap.org/claims/EmailAddress" -IncomingClaimTypeDisplayName "http://schemas.xmlsoap.org/claims/EmailAddress" -SameAsIncoming
    $upnclaim = New-SPClaimTypeMapping -IncomingClaimType "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier" -IncomingClaimTypeDisplayName "UPN" -LocalClaimType "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn"
    $authp = New-SPTrustedIdentityTokenIssuer -Name "LiveID INT" -Description "LiveID INT" -Realm $realm -ImportTrustCertificate $rootcert -ClaimsMappings $emailclaim,$upnclaim -SignInUrl "https://login.live-int.com/login.srf" -IdentifierClaim "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier"

    3. After running the script, verify that it executed correctly by going to Central Administration > Security > Manage Trust.

    image

     

    Create the Web Application

    1. From Central Administration, go to Application Management > Manage Web Applications.

    2. Click on Create a new Web Application

    3. In the Create New Web Application page, choose Claims Based Authentication from Authentication providers list.

    4. Choose Create a new IIS website. Use the following values to fill the IIS Web Site fields (Replace the below with values that correspond to your website)
    Name: Contoso Public Portal
    Port: 443
    Host Header: contoso.com

    5. Under Security Configuration, select Yes under Use Secure Sockets Layer (SSL)

    image

    6. Under Claims Authentication Types, check the Trusted Identity provider checkbox

    7. Check the LiveID INT checkbox from the list

    image

    8. Click on Create Web Application button.

    After following the above steps, your web application will be ready to authenticate against Windows Live ID.

  • System Center 2012 Configuration Manager–Part2: Discovery Methods

    In part 1 of this series, we went over the steps to deploy a stand-alone primary site.

    Throughout this article, we will configure discovery methods for our primary site.

    From the Administration tab, expand Hierarchy Configuration and click Discovery Methods

    1

    As you have noticed, Active Directory System Group discovery has been removed and Active Directory Security Group has been renamed to Active Directory Group Discovery and discovers the group memberships of resources.

    By default, Heartbeat Discovery is the only enabled discovery method. Heartbeat Discovery is pre-configured to run every 7 days on every computer, and it creates a discovery data record (DDR) which contains the network location, NetBIOS name and operational status details. The DDR (about 1 KB in size) is submitted to the management point and processed by the primary site to maintain the active client's record in the database, or to force the discovery of an active client that has been removed from the database, or that has been manually installed and not discovered yet.

    As a best practice, keep the Heartbeat Discovery method enabled at all times, and if you need to modify the schedule on which the discovery runs, make sure the value is always less than that of the Delete Aged Discovery Data task, which deletes inactive client records from the site database.

    This task can be viewed from Site Maintenance

    image

    From the Discovery Methods page, double click Active Directory Forest Discovery.

    Check to enable the forest discovery and check the other two options to Automatically create Active Directory site boundaries when they are discovered and to Automatically create IP address range boundaries for IP subnets when they are discovered

    2

    Active Directory Forest Discovery is a new method which will discover the IP subnets and the Active Directory sites and add them as boundaries. We will be covering later how we can use the discovered information for site boundaries.

    This method is scheduled by default to run every 7 days and doesn't support Delta Discovery. You can always run it manually by right-clicking it and selecting Run Full Discovery Now

    image

    If you go to Boundaries, you will notice the automatic creation of boundaries

    image

    The above Forest Discovery ran on the top-level site of the hierarchy.

    You can also run this method on other Active Directory Forests. To do so, go to Active Directory Forests from the Administration tab and select the Forest you want to discover.

    On the General page, you need to specify an account from the designated forest that has the privileges to discover Active Directory sites and IP subnets and to publish information to Active Directory. To do this, the account must have full permissions on the System Management object in Active Directory.

    Alternatively, the site server computer account can be used if it has the permissions to do so.

    image

    On the Publishing page, you can select the site to be published to the designated forest.

    You can monitor the Discovery Status and the Publishing Status from the lower right pane

    image

    You can also check further information on the status by checking the ADForestDisc.log file found in the <InstallationPath>\Logs

    image

    Next, double click Active Directory System Discovery and check to enable this method on the contoso.com domain

    4

    On the Polling Schedule tab, click to enable Delta Discovery.

    Delta Discovery is not a full discovery, but instead a method that searches Active Directory Domain Services (AD DS) for specific attributes that have changed since the last full discovery cycle. Although Delta Discovery will discover new resources and changes, it won't detect when a resource is deleted from AD DS. If Delta Discovery is enabled for Active Directory Group Discovery, it will detect when computers or users are added to or removed from a group.

    On the Active Directory Attributes page, you can check the attributes that are selected for default discovery and you can select from the Available Attributes list a custom attribute such as “location” attribute and add it to the discovery method. This option has been improved from Configuration Manager 2007 R3

    7

    On the Option page, select the options to filter out stale computer records from the discovery. This is a new feature in the product which will help keep the site database up to date with active client records.

    8

    Let us enable Active Directory User Discovery on the contoso.com domain

    image

    Similar to Active Directory System Discovery, on the Polling Schedule page you can enable Delta Discovery and on the Active Directory Attributes page you can select additional attributes to be added to the default discovered attributes.

    Those are the only discovery methods I will enable for my current environment.

    This brings us to the end of part 2, where we've configured discovery methods and discussed the new and improved discovery functionality.

    In our next article, we will be discussing Boundaries and Boundary Groups.

    This article can also be viewed from my blog.

  • Extracting NPS / RADIUS Accounting & Logging Information to SQL Server

    Introduction

    In this post I would like to go through quick steps to configure Network Policy Server (NPS) to extract accounting data to SQL Server, and describe the minimum settings needed to accomplish this task. This post references the following technologies:

    • SQL Server 2008 R2
    • Microsoft Windows Server 2008 & NPS (RADIUS)

    Configuration Steps

    To keep this post short I will only include the steps needed to extract the data; in addition I will include some links that provide additional references.

    1. Create SQL Server Database:

    To extract the NPS data you need to create a centralized repository to store the logging & accounting information (a useful database name would be something like NPSDB). The database needs to have at least the following objects created:

    • NPS_Packets Table: This table will store the data coming from NPS.
    • Report_Event Stored Procedure: This procedure will be used to insert the data into the NPS_Packets table. Keep in mind that you must use this exact name for the stored procedure, as NPS will call it by that name.

    Sample table:

    CREATE TABLE [dbo].[NPS_Packets](
        [PacketTime] [datetime] NOT NULL,
        [NPS_Attributes] [xml] NOT NULL
    ) ON [PRIMARY]
    GO

    ALTER TABLE [dbo].[NPS_Packets] ADD DEFAULT (getdate()) FOR [PacketTime]
    GO

    Sample stored procedure:

    CREATE PROCEDURE [dbo].[Report_Event]
        (@doc nvarchar(max))
    AS
    INSERT INTO NPS_Packets ([PacketTime], [NPS_Attributes])
    VALUES (GETDATE(), @doc)
    GO

    Additional Information:

    I found a nice post by Jeff Sigman that has additional samples (http://blogs.technet.com/b/nap/archive/2008/07/08/nps-nap-logging-bsu-edu-style.aspx).

    2. Configure NPS Accounting Settings:

    After creating the database, you need to connect NPS to SQL Server, which is straightforward:

    • Log in to the NPS server
    • Start NPS (from Control Panel -> Administrative Tools)
    • Select Accounting (from the left-side menu)
    • Click the option to configure SQL Server logging
    • Provide the SQL Server information and the database name


    General Considerations:

    There are a few recommendations I found useful as a heads-up before planning to implement this solution:

    • NPS accounting information is sent to SQL Server in XML format, so you will need to extract and interpret that data if you plan to use it somewhere else.
    • If one of the attributes (columns) sent from NPS has a null value, it will not appear in the XML.
    • NPS usually sends a huge amount of data to SQL Server, so you need to consider the performance of the database.
    • If NPS fails to connect to the database, records will be lost and cannot be retrieved, so consider providing a suitable error-handling technique.
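    Because NPS omits null attributes entirely, any consumer of the stored XML must treat every attribute as optional. The sketch below illustrates this in Python; the element names and the flat <Event> layout are simplified assumptions for illustration, not the exact NPS schema.

```python
# Illustrative sketch: read attributes out of one NPS accounting event
# (as stored in the NPS_Attributes XML column). Element names and the
# flat <Event> layout are assumptions; in the real NPS XML, attributes
# with null values are simply absent, so every lookup must tolerate a
# missing element.
import xml.etree.ElementTree as ET

sample_event = """<Event>
  <Computer-Name data_type="1">NPS01</Computer-Name>
  <Packet-Type data_type="0">4</Packet-Type>
</Event>"""

def read_attribute(event_xml, name, default=None):
    """Return the text of the named attribute, or `default` when the
    attribute was null and therefore omitted from the XML."""
    node = ET.fromstring(event_xml).find(name)
    return node.text if node is not None else default

print(read_attribute(sample_event, "Computer-Name"))     # present
print(read_attribute(sample_event, "User-Name", "n/a"))  # omitted -> default
```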

     

  • Create SharePoint 2010 Web Application using FBA (Forms Based Authentication)

    This blog post illustrates how to configure Forms Based Authentication for SharePoint 2010 using a SQL database as the user repository; users will be authenticated from the SQL database instead of Active Directory.

    1. From Central Administration –> Manage Web Applications, create a new web application

    image

    In the options below, make sure to check NTLM and Forms Based Authentication; this will allow users from Active Directory and from SQL to log in.

    image

    Note: enter a name for the membership provider and the role manager; these names will be used later on (you can choose any name).

    2. Create the SQL database. From any server (such as an app server) that has access to the database server, run aspnet_regsql.exe:

    image

    · Select the option to configure a database

    image

    · Select the SQL server and provide a proper name for the database; this database will be used to host the SharePoint users:

    image

    3. Create an admin account in this database to be used to connect to the FBA_Users DB. From inside SQL Server Management Studio –> Security –> New Login

    image

    · From the General tab, enter a password

    image

    · From User Mapping, make FBAadmin a db_owner of the FBA_Users DB:

    4. Configure the FBA web application to use the SQL database as the user repository for user management. In IIS, on each web front-end (WFE) server, do the following:

    a. Select the web application which you wish to configure:

    image

    b. Click Connection Strings and create a connection string that points to the database we have just created:

    image

    c. Click Providers to create a new SQL provider:

    image

    Note: ignore the warning message.

    d. Make sure that Roles is selected in the Features dropdown, then click Add

    image

    e. Make sure to enter the same role name you entered when creating the web application in a previous step:

    image

    Note: application name should be “/” to indicate the root web.

    f. To add a new membership provider, select Users from the Features dropdown, then click Add:

    image

    g. Enter the membership provider name:

    image

    5. Repeat the same steps (a–g) for the Secure Store Service; this directs the secure store to authenticate users from the SQL database as well.

    image

    6. You can optionally download the User Management web parts from CodePlex at http://sharepoint2010fba.codeplex.com/, which help site admins add/edit/delete users from inside SharePoint pages

    image

     

    7. With this, the configuration is done. Log in to the site collection and assign permissions to the SQL users from Site Permissions

    image

    8. You can assign permissions to users from SQL membership provider as shown below:

    image

  • Prepare SharePoint Farm – Part 2 Prepare SQL Cluster

    Part 1 - Prepare Windows Cluster

    Part 3 - Install and Configure NLB on WFE

    Part 4 - Install and configure SharePoint farm (3-tier)

     

    1. Run the SQL Server 2008 R2 setup and make sure to select New SQL Server failover cluster installation

    image

    2. Click Next until you reach the screen below, then select all features

    image

    3. Provide a new SQL Server cluster network name as below:

    image

    4. Place SQL resources in the SQL Group created previously

    image

    5. Assign SQL disks to the SQL Group

    image

    6. Provide a proper IP address for the clustered SQL instance:

    image

    7. For the service accounts, use a dedicated service account for the SQL services

    image

    Note: SQL Server Reporting Services cannot be clustered.

    8. Make sure to add the current user and to select mixed mode:

    image

    9. Make sure to point to the proper disk drives in the Data Directories tab:

    image

    10. Make sure to point to the proper disk drives in Analysis screen

    image

    11. Click Next through to Finish.

    12. Make sure you can connect to SQL Instance from management studio:

    image

    13. Join the other node by running the setup and selecting

    image

    14. Click Next until reaching the page below; make sure the correct SQL cluster name is provided

    image

    15. Install SP1 on each node (for high availability, make sure to install SP1 on the passive node). From SQL Server Management Studio –> Help –> About, check the SQL version; it should be as below:

    image

  • System Center 2012 Configuration Manager–Part4: Client Settings

    In my last blog article Part3: Boundaries and Boundary Groups, we’ve covered how to automatically discover and create boundaries and how to use these boundaries in boundary groups for site assignment and content location.

    Throughout this article, we’ll cover Client Settings which was known as Client Agent Settings in Configuration Manager 2007.

    One of the major changes in this area is that client settings are now configured at the hierarchy level. In ConfigMgr 2007, client agent settings were configured at the site level; as a result, you did not have the option to configure different client agent settings for agents within the same Configuration Manager site.

    In System Center 2012 Configuration Manager, client settings are hierarchy based. The default client settings policy is applied to all agents within the hierarchy and additional client settings policies can be created and applied to collections. These collections could be a group of computers or a group of users.

    The following client settings can be applied for devices (click on the policy to know more information):

    The following client settings are for users:

    To create new custom client settings for a user or a device, go to the Administration workspace in the console, right-click Client Settings, and select the option to create a new policy setting

    image

    Select a custom setting such as Remote Tools and click on Remote Tools from the upper left box to configure settings

    image

    Click Configure, check the box to Enable Remote Control on client computers and check the box of the Domain profile to automatically configure the Remote Control port and program exceptions for clients.

    image

    Set your other settings as desired and click on Set Viewers

    image

    Type the permitted viewers such as a user or a group and click OK

    image

    Once done, you will notice the newly created device settings

    image

    If two policies contain the same setting, such as Remote Control settings, and both are applied to the same collection, the policy with the lower priority value takes precedence over any other policy.
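    The “lower value wins” rule can be sketched as follows. This is a toy model for illustration only; the policy names and setting values are made up and this is not how ConfigMgr stores policies internally.

```python
# Toy model of client-settings conflict resolution: when several custom
# policies applied to the same collection carry the same setting, the
# policy with the LOWEST priority value is the one that takes effect.
# Policy names and values below are illustrative only.
policies = [
    {"name": "Helpdesk Remote Tools", "priority": 3, "remote_control": True},
    {"name": "Kiosk Lockdown",        "priority": 1, "remote_control": False},
]

def effective_policy(applied_policies):
    """Pick the policy whose priority value is lowest."""
    return min(applied_policies, key=lambda p: p["priority"])

winner = effective_policy(policies)
print(winner["name"])  # the priority-1 policy wins
```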

    You can increase or decrease the client settings priority by right clicking the policy and selecting to increase the priority

    image

    To deploy the newly created policy to a device collection, right click the policy and click Deploy

    image

    Select the device collection and click OK

    image

    From the properties of the device collection, you will notice that the custom settings now appear as being applied to the collection

    9

    This article can also be viewed on my blog.

  • Web Performance and Load test in VS 2010 Ultimate, Part1

    Visual Studio 2010 Ultimate enables you to measure, improve, and verify web application performance. Web performance tests and load tests ship only with the VS 2010 Ultimate edition. I will divide this topic into a series of blog posts to cover all the important technical points of web performance and load testing:

    Part 1: Overview of the web performance and Load test

    Part 2: How to configure your visual studio environment to start web performance and Load test

    Part 3: Run results and how to analyze the load test results

    Part 4: Tips and Tricks

     

    Part1: Overview of the web performance and Load test

    Web performance tests enable you to verify that a Web application’s behavior is correct. They issue an ordered series of HTTP/HTTPS requests against a target Web application and analyze each response for expected behaviors. You can use the integrated Web Test Recorder to create a test by recording your interaction with a target Web site through a browser window. Once the test is recorded, you can use that Web performance test to consistently repeat those recorded actions against the target Web application. You can then customize the test by adding extraction and validation rules, context parameters, data sources (used to read randomized parameters), and calls to other web tests.

    Load Test is used to verify that your application will perform as expected while under the stress of multiple concurrent users. You configure the levels and types of load you wish to simulate and then execute the load test. A series of requests will be generated against the target application, and Visual Studio will monitor the system under test to determine how well it performs.

    Load testing is most commonly used with Web performance tests to conduct smoke, load, and stress testing of ASP.NET applications. However, you are certainly not limited to this. Load tests are essentially lists of pointers to other tests, and they can include any other test type except for manual tests and coded UI tests. When you create a new Load Test, you are presented with a wizard that guides you through the necessary steps.

    The wizard lets you configure your load test with the following settings:

    · Scenario Information

    o Load pattern (constant load, step load)

    o Test Mix Model based on (total no. of tests, no. of virtual users, user pace or sequential test order)

    o Test Mix (add the previously created web tests and determine the distribution %)

    o Network Mix (add multiple network types and determine the distribution %)

    o Browser Mix (add multiple browser types and determine the distribution %)

    · Counter sets

    o You add the computers that you want to monitor and the performance counters you are interested in (you can monitor your Database Server and IIS)

    · Run Settings

    o You can specify the test duration or test iterations

     

    Prerequisite steps to start your Load test:

    · Record several web performance tests that reflect your test cases and cover the user business cases

    · Each web test should cover only one test case (Flow)

    · Check and validate each flow separately

    · Existence of “LoadTest” DB

    When you run the load test, you will see the Load Test Monitor; by default it will show the Graphs view.

    How to prepare your scenarios:

    Before you start working with Visual Studio, you need to define the test scenarios and workflows that will be used in your web performance tests, and then build your load test from these scenarios. Below, I designed a table to help you prepare your flows, construct your different scenarios, and determine the weight for each scenario.

    image

     

    The above table contains the following information:

    Flow category: in this column you will list all the flow categories which group all your flows; I put one example for the first flow category (Login) which contains one flow (User Login flow).

    Flow Name: in this column you will list all the flows that make up each flow category; for example, if we have a flow category named “Internet banking transaction list” for an internet banking portal, it would include the following flows:

    · View latest Transaction for Current accounts

    · View latest Transaction for Saving accounts

    · View latest Transaction for Loan accounts

    · View latest Transaction for card accounts

    Scenarios: each scenario should be composed of selected flows, depending on the business cases from the customer’s perspective.

    Weights: you need to set two weights: the user % weight (which indicates the percentage of usage of this flow by each user) and the scenario weight (which indicates the percentage of users executing this scenario).

    After completing the above table, you are ready to start working with Visual Studio and create your first test project.
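    As a rough sanity check on the table, the overall share of load a flow receives is its weight within a scenario multiplied by that scenario’s weight. A small sketch (all scenario/flow names and numbers are made up for illustration):

```python
# Sketch of the weighting table: a flow's overall share of the load is
# its weight within a scenario multiplied by the scenario's weight.
# Scenario/flow names and all numbers here are made up for illustration.
scenarios = {
    "Login":        {"weight": 0.40, "flows": {"User Login": 1.00}},
    "Transactions": {"weight": 0.60, "flows": {"View Current": 0.70,
                                               "View Savings": 0.30}},
}

def overall_flow_weights(scenarios):
    """Combine scenario weights and per-flow weights into one table."""
    out = {}
    for s in scenarios.values():
        for flow, w in s["flows"].items():
            out[flow] = out.get(flow, 0.0) + s["weight"] * w
    return out

weights = overall_flow_weights(scenarios)
print(weights)  # e.g. "View Current" gets 0.60 * 0.70 = 42% of the load
```

    If the per-flow weights within each scenario sum to 1 and the scenario weights sum to 1, the overall flow weights will sum to 1 as well, which is an easy check that the table is consistent.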

    Will continue in (Part 2: How to configure your visual studio environment to start web performance and Load test)

  • Hyper-V Resource Metering in Windows server 2012 “Server 8 Beta”

     

    IT organizations need tools to charge back business units that they support while providing the business units with the right amount of resources to match their needs. For hosting providers, it is equally important to issue chargebacks based on the amount of usage by each customer.

    To implement advanced billing strategies that measure both the assigned capacity of a resource and its actual usage, earlier versions of Hyper-V required users to develop their own chargeback solutions that polled and aggregated performance counters. These solutions could be expensive to develop and sometimes led to loss of historical data.

    To assist with more accurate, streamlined chargebacks while protecting historical information, Hyper-V in Windows Server 2012 “Server 8 Beta” introduces Resource Metering, a feature that allows customers to create cost-effective, usage-based billing solutions. With this feature, service providers can choose the best billing strategy for their business model, and independent software vendors can develop more reliable, end-to-end chargeback solutions on top of Hyper-V.

    Metrics collected for each virtual machine

    §  Average CPU usage, measured in megahertz over a period of time.

    §  Average physical memory usage, measured in megabytes.

    §  Minimum memory usage (lowest amount of physical memory).

    §  Maximum memory usage (highest amount of physical memory).

    §  Maximum amount of disk space allocated to a virtual machine.

    §  Total incoming network traffic, measured in megabytes, for a virtual network adapter.

    §  Total outgoing network traffic, measured in megabytes, for a virtual network adapter

    To enable Hyper-V resource metering on the Hyper-V host HV01, run the following PowerShell command:

    Get-VM -ComputerName HV01 | Enable-VMResourceMetering

    By default, the collection interval for Hyper-V metering data is one hour. To change this interval, the following PowerShell command can be used (the value in the command below is one minute):

    Set-VMHost -ComputerName HV01 -ResourceMeteringSaveInterval 00:01:00

    To get all VMs metering data run the following PowerShell command:

    Get-VM -ComputerName HV01 | Measure-VM

    To get a particular VM “test01” metering data run the following PowerShell command:

    Get-VM -ComputerName HV01 -Name "test01" | Measure-VM

     

  • System Center 2012 Configuration Manager–Part1: Installing Stand-Alone Primary Site

    Throughout this blog series, we will go through the installation and configuration of the site server and the site system while exploring the existing and the new features in the product.

    In part 1, I’ll be driving you through the installation of a stand-alone primary site. For guidelines on installing the prerequisites, you can refer to my previous article Building Configuration Manager 2012 Hierarchy – Part 1 Installing prerequisites

    Once you have completed the prerequisites installation, double click the Install button from the Configuration Manager media.

    On the Before You Begin page, click Next

    On the Getting Started Page, select Install a Configuration Manager primary site

    2

    On the Product Key page, select to evaluate the product for 180 days or type in your product key

    3 

    On the next two pages, accept the Microsoft Software License Terms and the Prerequisite Licenses, and click Next

    On the Prerequisite Downloads page, select a UNC path where you’ve downloaded the updates and click Next

    6

    On the Server/Client Language selection pages, keep the default value (English) and click Next

    On the Site and Installation Settings page, fill in the Site Code, the Site Name and click Next

    9

    On the Primary Site Installation page, select to Install the primary site as a stand-alone site and click Next.

    On the pop-up Configuration Manager informational message, click Yes

    10

    On the Database Information page, keep the default values and click Next

    On the SMS Provider Settings page, make sure the site server name FQDN is selected and click Next

    On the Communication page, select the second option and click Next

    12

    On the Site System Roles page, check to install a management point (MP) and a distribution point (DP) and click Next

    13

    Click Next on the CEIP page and on the Settings page

    On the Prerequisite Check page, review the warnings and click Begin Install (you can ignore the WSUS warning message as we will be installing it in a future article)

    14

    Once the installation completes, make sure there are no errors in the core setup

    15

    This article can also be viewed on my blog.

  • SharePoint 2010 Replication Scenarios

    In some disconnected environments with low or unstable network bandwidth, it might be necessary to deploy two or more identical SharePoint farms and perform content replication/synchronization between them.

    It’s important to mention that SharePoint 2010 natively supports only one-way content replication, through Content Deployment. Content deployment is a one-way process: content is deployed from a source site collection to a destination site collection. The content deployment feature does not support round-trip synchronization from source to destination and back again. Creating new content or changing existing content on the destination site collection can cause content deployment jobs to fail.

    If you are interested in Content Deployment, I highly recommend this TechNet article.

    To perform two-way replication, a third-party application must be used; the drawback is that Microsoft does not provide support or guarantees for third-party applications. However, I have listed below some points to consider when evaluating third-party replication software for SharePoint 2010:

    • Item-level replication: the ability to replicate a web application, site collection, or site.
    • Replicating SharePoint content across limited-bandwidth connections while minimizing the network bandwidth used (data compression).
    • The ability to replicate only changed items or perform a full replication.
    • Replicating permissions.
    • Scheduling: providing manual or timer-job scheduling.
    • Conflict resolution capabilities between source and destination.
    • Reporting capability: generating different types of reports to describe the replication status.
    • Integrated administration and operations pages within the SharePoint administration pages, with no need for a separate administration console.

    I personally recommend having a look at one of the two vendors below (again, this is my own personal recommendation and does not reflect Microsoft’s point of view in any way):

     

     

     

  • Modular Web Applications with MEF

    A lot of times we are faced with a challenge of building a web application that should evolve with time. As the web application grows, maintaining and adding new functionality becomes harder and harder. Also allowing multiple teams to work on the same web application has its own challenges.

    A number of solutions were created to tackle this problem, an example would be ASP.NET webparts, which can be plugged into a web application to add functionality.

    In my previous post, I gave a small introduction about the Managed Extensibility Framework (MEF), and how it can be used to build an application that supports plugins. In this post I am going to talk about using MEF to create a pluggable web application.

    As explained in the previous post, MEF uses the concept of “Exports” and “Imports”. Each export exports an interface type, and each import imports an interface type. Exports and imports are matched at runtime by their interface types.

    We can define an interface that will be used by the web application and the modules to agree on what information is required to be supplied by the module so that the web application can load it and make it available for the website users.

    An example of this interface could be as follows

    public interface IModule
    {
        string ModuleName { get; }
        string DefaultPage { get; }
        string Version { get; }
        bool ShowInMenu { get; }
        List<string> ModulePermissions { get; }
    }

     

    The interface defines some metadata about the module: the module name, the landing (default) page of the module, the version, whether it should be available through the web application’s main menu, and who can access the module. Of course, more data can be added depending on the web application being implemented.

    A sample implementation of the above would be:

    [Export(typeof(IModule))]
    public class StudentsModule : MCS.Common.Interfaces.IModule
    {
        public string ModuleName
        {
            get { return "StudentsModule"; }
        }

        public string DefaultPage
        {
            get { return "~/Students/Default.aspx"; }
        }

        public string Version
        {
            get { return "1.0"; }
        }

        public bool ShowInMenu
        {
            get { return true; }
        }

        public List<string> ModulePermissions
        {
            get { return new List<string> { "AddStudent", "DeleteStudent", "UpdateStudent" }; }
        }
    }
    

     

    Loading these modules into the main web application would be as simple as importing the types that implement the IModule interface. This could be done in a base page class that is used by all the site pages

    [ImportMany]
    public List<IModule> Modules { get; set; }

    After the application loads, MEF will look into the bin folder of the web application for any assemblies that contain a class that implements the IModule type, and load them into the collection.

    From this point on the Modules collection can be used to create a site menu and do other custom logic to allow the modules to be accessible to site users.

    Deploying a module would be as easy as copying the website files into a subfolder inside the web application, and the DLLs into the main application’s bin folder.

    This method, although very simple, allows different teams to work on separate modules independently.

  • Configuring IIS 7.5 Shared Configuration

    Making a website highly available requires that it gets deployed on multiple servers so that if one server fails for some reason, the other servers can still serve the website for the users. This setup is usually called a server farm.

    In IIS 7, each website requires a set of configuration on the web server in order to function: application pool settings, host header settings, security settings, and so on. Having multiple web servers means this configuration must always be kept in sync on all servers, which requires effort.

    IIS 7 includes a feature called Shared Configuration that makes this task easier: the IIS configuration can be shared by multiple servers, so changing the configuration on one server automatically changes it on the other servers in the farm.

    Below are the steps required to configure Shared Configuration on multiple IIS servers.

    We assume here that we have two servers in the farm; let’s call them Server1 and Server2.

    1. On Server1, open IIS Manager and click on the server name in the Connections pane on the left. You will see the IIS server features in the center pane.

    2. If you can’t see the Management section, scroll down. You will see the Shared Configuration icon in that section.

    image

    3. Double click on the Shared Configuration icon to open the settings screen

    image

    4. In the Actions pane, click on Export Configuration. This will open up the Export Configuration dialog box

    image

    5. In the Physical path text box, enter the path where you want the configuration to be saved. Enter a password for the encryption key; it will be saved along with the configuration files.

    6. Click the Ok button to do the export.

    7. In the Shared Configuration properties, check the Enable shared configuration checkbox.

    image

    8. In the Physical Path text box, enter the path where you saved the exported configuration files.

    9. In the User name and Password text boxes, enter the username and password of a user that has read/write access to the shared folder.

    10. In the Actions pane, click the Apply button. When you do so, a dialog box will appear asking you to provide the encryption key password that you set when you exported the configuration.

    image

    11. Enter the password and click the Ok button. With this step you have exported the configuration files and configured Server1 to use the shared configuration files.

    12. Now you need to make the shared configuration folder accessible to all other servers in the farm. To do so you need to share the folder and give other servers access to it.

    13. Perform steps 7 to 11 on Server2, but this time providing the UNC path of the Shared Configuration folder.

  • Web Performance and Load test in VS 2010 Ultimate, Part 2

     Part 2 : How to configure your visual studio environment to start web performance and Load test

    In part 1 of this post series we went through an overview of the web performance and load tests. You can review part 1 using the following link:

    (http://blogs.technet.com/b/meamcs/archive/2012/05/07/web-performance-and-load-test-in-vs-2010-ultimate-part1.aspx)

    Prerequisite Configuration and Setup for using Load test:

    · Create the load test DB (“LoadTest2010”) in your SQL Server. This database will host all the load test results; you cannot run a load test before connecting Visual Studio to this database.

    · You will find the script for creating this database at “C:\Program Files\Microsoft Visual Studio 10.0\Common7\IDE” with the name “loadtestresultsrepository.sql” (the exact path depends on your installation options); you will also find this file attached to my blog post.

    The following snapshots give the detailed steps needed to configure the load test DB with Visual Studio:

    1. Run the script “loadtestresultsrepository.sql” in SQL Server Management Studio

    image

    2. From visual studio click on the Test Menu, then click on Manage test controllers… sub-menu

    image

    3. In the Manage Test Controller window, define the controller name (the default controller is your local machine), and set the connection string to connect to the load test DB created by the previous script.

    image

    4. Now that the configuration is complete, we can create a new web performance test:

    a. Create a new test project with the three following steps

    b. From Visual Studio, click File –> New –> Project…

    image

    c. Add a new item to the test project by right-clicking on the project and choosing Web Performance Test

    image

    d. Once you choose Web Performance Test, IE opens automatically with the Web Test Recorder plug-in so you can start recording your scenario. When you complete your scenario, press Stop; Visual Studio saves the recording as a series of requests so you can run it multiple times later or include it in a load test.

    image

    5. Now you can create new Load test:

    a. Right-click on the test project node, then click Add, then Load Test…

    image

    b. Visual Studio starts the New Load Test Wizard

    image

    1. Determine the scenario name (the name of your load test scenario). The second option on this page is to configure think times: the delay between requests, which can be used to approximate how long a user will pause to read, consider options, and enter data on a particular page. The think time profile panel enables you to turn these on or off.

    image

    2. Determine the load pattern. The next step is to define the load pattern for the scenario; you have two options: Constant and Step. A constant load defines a number of users that remains unchanged throughout the duration of the test; use a constant load to analyze the performance of your application under a steady load of users. A step load defines a starting and a maximum user count, along with a step duration and a step user count.

    image
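    The difference between the two patterns can be sketched as simple functions of elapsed time (the parameter names below are illustrative, not the wizard's exact field names):

    ```csharp
    using System;

    static class LoadPatterns
    {
        // Constant load: the user count never changes during the run.
        public static int ConstantLoad(int userCount)
        {
            return userCount;
        }

        // Step load: every stepDurationSeconds, stepUserCount users are added,
        // until the maximum user count is reached.
        public static int StepLoad(int startUsers, int maxUsers, int stepUserCount,
                                   int stepDurationSeconds, int elapsedSeconds)
        {
            int steps = elapsedSeconds / stepDurationSeconds;
            return Math.Min(startUsers + steps * stepUserCount, maxUsers);
        }
        // e.g. StepLoad(10, 200, 10, 10, 60) == 70 virtual users after one minute
    }
    ```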

    3. Choose the test mix model. The test mix model determines the frequency at which tests within your load test are selected relative to the other tests, and offers several options for realistically modeling user load.

    image

    4. Define the test mix, which includes the number of different scenarios and their distribution percentages.

    image

    5. Define the network mix, which lets you specify the kinds of network connectivity you expect your users to have (such as LAN, 3G, and Cable-DSL).

    image

    6. Define the browser mix, which lets you specify the distribution of browser types you wish to simulate.

    image

    7. Define the counter sets, which let you add different computers and assign a counter set to each (for example Application, ASP.NET, .NET Application, IIS, and SQL).

    A counter set is a group of related performance counters. All of the contained performance counters are collected and recorded on the target machine when the load test is executed.

    image

    8. Define the run settings. The final step is to specify the length of the load test; there are two options: load test duration or test iterations.

                         Load test duration: specify two parameters, a warm-up duration and a run duration.

                        Sample rate: determines how often performance counters are collected and recorded; a higher frequency (lower number) produces more detail.

                       Validation level: indicates which web performance test validation rules are executed.

    Finally, click Finish to complete the wizard and create your new load test.

  • WCF Proxy Caching

    Introduction

    One way of tuning the performance of WCF services is to use WCF proxy caching. There has been a long discussion on this subject; the bottom line is that .NET Framework 4.0 supports caching proxies under certain conditions, and you may see the details here.

    In order to call WCF services, we need a proxy first. We can create one either at design time or at run time. Let's look at each.

    Design Time Proxy Creation

    Once the services are ready and running, you can use svcutil.exe or Visual Studio's service reference feature to create a proxy. For example, in my prior post here, when I added a service reference to the project, VS 2010 created the proxy and contract classes behind the scenes, as shown below:

    [System.Diagnostics.DebuggerStepThroughAttribute()]
    [System.CodeDom.Compiler.GeneratedCodeAttribute("System.ServiceModel", "4.0.0.0")]
    public partial class Service1Client : System.ServiceModel.ClientBase<ServiceApplicationTest.TargetService.IService1>, ServiceApplicationTest.TargetService.IService1 {

        public Service1Client() {
        }

        public Service1Client(string endpointConfigurationName) :
                base(endpointConfigurationName) {
        }

        public Service1Client(string endpointConfigurationName, string remoteAddress) :
                base(endpointConfigurationName, remoteAddress) {
        }

        public Service1Client(string endpointConfigurationName, System.ServiceModel.EndpointAddress remoteAddress) :
                base(endpointConfigurationName, remoteAddress) {
        }

        public Service1Client(System.ServiceModel.Channels.Binding binding, System.ServiceModel.EndpointAddress remoteAddress) :
                base(binding, remoteAddress) {
        }

        public string GetData(int value) {
            return base.Channel.GetData(value);
        }
    }

    Notice that it uses the Channel property of the ClientBase<T> abstract class; we don't own this channel or its lifetime, and once the call is done, the proxy is gone. After that, I can call the proxy from my unit test method as simply as:

    [TestMethod]
    public void GetDataTest()
    {
        //TargetService.IService1 proxy = new TargetService.Service1Client("targetEP");
        TargetService.IService1 proxy = new TargetService.Service1Client("routerEP");
        int value = 99;
        string expected = string.Format("You entered: {0}", value);
        string actual = proxy.GetData(value);
        Assert.AreEqual(expected, actual);
    }

    More details on this approach can be found at http://msdn.microsoft.com/en-us/library/ms733133.aspx
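    One caveat worth noting with design-time proxies: ClientBase<T> should not simply be wrapped in a using block, because Dispose() calls Close(), which can itself throw on a faulted channel and mask the original exception. The commonly recommended close/abort pattern is sketched below against the generated Service1Client from the sample above (adapt the names to your own contract):

    ```csharp
    // Sketch of the recommended close/abort pattern for a WCF client.
    var proxy = new TargetService.Service1Client("targetEP");
    try
    {
        string result = proxy.GetData(99);
        proxy.Close();   // graceful close; may throw if the channel is faulted
    }
    catch (System.ServiceModel.CommunicationException)
    {
        proxy.Abort();   // tear the channel down without throwing
        throw;
    }
    catch (System.TimeoutException)
    {
        proxy.Abort();
        throw;
    }
    ```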

    Run Time Proxy Generation

    As a second approach, you can create your own channel through a ChannelFactory. Here is code that creates a proxy for a contract type IType by using the factory pattern.

    internal class ProxyFactory<IType>
    {
        static System.ServiceModel.Channels.Binding binding;

        public static ChannelFactory<IType> Create()
        {
            #region Getting Dynamic Values
            EndpointAddress epa = null;
            // Naming convention: a contract "...Contracts.IFoo" maps to a service "...Services.Foo"
            string fullName = typeof(IType).FullName.Replace("Contracts", "Services").Replace(".I", ".");
            System.ServiceModel.Configuration.ServicesSection services = System.Configuration.ConfigurationManager.GetSection("system.serviceModel/services") as System.ServiceModel.Configuration.ServicesSection;

            System.ServiceModel.Configuration.ServiceElementCollection coll = services.Services;
            string strBinding = string.Empty;
            for (int i = 0; i < coll.Count; i++)
            {
                System.ServiceModel.Configuration.ServiceElement el = coll[i];
                if (el.Name.Equals(fullName))
                {
                    // Pick up the address and binding of the service's first endpoint
                    epa = new EndpointAddress(el.Endpoints[0].Address.ToString());
                    strBinding = el.Endpoints[0].Binding;

                    switch (strBinding)
                    {
                        case "basicHttpBinding":
                            binding = new BasicHttpBinding();
                            break;
                        case "wsHttpBinding":
                            binding = new WSHttpBinding();
                            break;
                        case "netTcpBinding":
                            binding = new NetTcpBinding();
                            break;
                        case "netmsmqBinding":
                            binding = new NetMsmqBinding();
                            break;
                        default:
                            throw new NotImplementedException(strBinding + " not implemented.");
                    }
                    break;
                }
            }
            #endregion

            return new ChannelFactory<IType>(binding, epa);
        }
    }

    As we know, WCF clients communicate with services based on contracts (a handshake), and that is the minimal piece to be shared between publisher and consumer. Here, what we do is build an assembly containing both the contracts and the proxy-creation class(es), and add a reference to it from the client.

    Pros and cons of using a run-time proxy

    • Pros
      • No need to refresh service references when the contract or address changes
      • You have your own transparent proxy, and you control it
      • Everything is implemented in an assembly or assemblies, which makes it easy to share
      • Clean config
    • Cons
      • Proxies are created per service contract

    Proxy Caching

    Either way, generating a proxy for each call is an expensive operation (constructing the channel, opening it, closing it, and so on). Instead, we can cache each channel factory once it is created and reuse it for subsequent calls; as pointed out earlier, there are some cases where caching is already supported within the framework.

    In the second option above, generating the proxy at run time does not call Open() explicitly, so we know it will not use the framework's caching by default. Remember, what we cache is the ChannelFactory that the proxy is created from. Here is a proxy manager class that adds or removes the underlying ChannelFactory to/from a dictionary object.

    internal static class ProxyManager<IType>
    {
        internal static ConcurrentDictionary<string, ChannelFactory<IType>> proxies = new ConcurrentDictionary<string, ChannelFactory<IType>>();

        internal static IType GetProxy(string key)
        {
            return proxies.GetOrAdd(key, m => ProxyFactory<IType>.Create()).CreateChannel();
        }

        internal static bool RemoveProxy(string key)
        {
            ChannelFactory<IType> proxy;
            return proxies.TryRemove(key, out proxy);
        }
    }
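    A client could then resolve a cached proxy roughly as follows. Treat this as a sketch rather than part of the original sample: IService1 stands for whatever shared contract interface you reference, and the cache key is arbitrary.

    ```csharp
    // Sketch: resolve a channel from the cached factory and call it.
    IService1 proxy = ProxyManager<IService1>.GetProxy("IService1");
    string answer = proxy.GetData(99);

    // Channels created from the factory should still be closed per call.
    ((System.ServiceModel.IClientChannel)proxy).Close();

    // If the cached factory faults, evict it so the next call rebuilds it.
    ProxyManager<IService1>.RemoveProxy("IService1");
    ```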

    Hope it suits your case.

    Conclusion

    WCF proxies are expensive to generate and dispose. .NET Framework 4.0 supports caching proxies under certain conditions. In this post, I leveraged the ConcurrentDictionary class (new in 4.0) to cache, in a thread-safe manner, each channel factory from which a proxy is generated, taking WCF service performance tuning one step further.

  • Proxy Caching with Lazy Loading

    We can optimize the proxy caching solution described here one step further by leveraging lazy loading; there is always room for improvement, isn't there?

    Lazy loading is a design pattern that delays initialization of an object until it is actually used. It is useful when resource-intensive objects are to be created. It comes in four forms: lazy initialization, virtual proxy, ghost, and value holder.

    With version 4.0, a thread-safe lazy pattern was adopted into the framework. You can visit here for details on the Lazy<T> implementation. It is important to point out that object locking (which may be required for concurrent use) is built in:

    “By default, all public and protected members of the Lazy<T> class are thread safe and may be used concurrently from multiple threads. These thread-safety guarantees may be removed optionally and per instance, using parameters to the type's constructors.”
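    As a minimal, self-contained illustration of this deferred-initialization behavior (independent of WCF), the factory delegate only runs on first access to Value:

    ```csharp
    using System;

    class LazyDemo
    {
        static void Main()
        {
            // The factory delegate does not run at construction time.
            var expensive = new Lazy<string>(() =>
            {
                Console.WriteLine("initializing");
                return "ready";
            });

            Console.WriteLine(expensive.IsValueCreated); // False
            Console.WriteLine(expensive.Value);          // prints "initializing" then "ready"
            Console.WriteLine(expensive.IsValueCreated); // True
        }
    }
    ```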

    In this post, I am going to use it for proxy/channel-factory caching. To see the whole picture, please visit the prior post in this series.

    To make it simple and clear, here is the code for each case:

    • Eager loading (non-lazy):
       internal static class ProxyManager<IType>
       {
           internal static ConcurrentDictionary<string, ChannelFactory<IType>> proxies = new ConcurrentDictionary<string, ChannelFactory<IType>>();

           internal static IType GetProxy(string key)
           {
               return proxies.GetOrAdd(key, m => ProxyFactory<IType>.Create()).CreateChannel();
           }

           internal static bool RemoveProxy(string key)
           {
               ChannelFactory<IType> proxy;
               return proxies.TryRemove(key, out proxy);
           }
       }
    • Lazy loading:
       internal static class ProxyManager<IType>
       {
           internal static ConcurrentDictionary<string, ChannelFactory<IType>> proxies = null;

           internal static IType GetProxy(string key)
           {
               // Note: this check-then-assign is not thread-safe; two threads can
               // race here. The framework's Lazy<T> (next variant) avoids this.
               if (proxies == null)
                   proxies = new ConcurrentDictionary<string, ChannelFactory<IType>>();

               return proxies.GetOrAdd(key, m => ProxyFactory<IType>.Create()).CreateChannel();
           }

           internal static bool RemoveProxy(string key)
           {
               ChannelFactory<IType> proxy;
               return proxies.TryRemove(key, out proxy);
           }
       }
    • Lazy loading using the framework's Lazy<T>:
       internal static class ProxyManager<IType>
       {
           internal static ConcurrentDictionary<string, Lazy<ChannelFactory<IType>>> proxies = new ConcurrentDictionary<string, Lazy<ChannelFactory<IType>>>();

           internal static IType GetProxy(string key)
           {
               return proxies.GetOrAdd(key, m => new Lazy<ChannelFactory<IType>>(() => ProxyFactory<IType>.Create())).Value.CreateChannel();
           }

           internal static bool RemoveProxy(string key)
           {
               Lazy<ChannelFactory<IType>> proxy;
               return proxies.TryRemove(key, out proxy);
           }
       }