Step 1: Create a new BizTalk project in Visual Studio. For example, let’s call it “ESB.Samples.Itinerary.Extenders”.
Step 2: Add references to the ESB assemblies needed, which are:
Microsoft.Practices.ESB.Adapter
Microsoft.Practices.ESB.ExceptionHandling
Microsoft.Practices.ESB.ExceptionHandling.Schemas.Faults
Microsoft.Practices.ESB.Itinerary
Microsoft.Practices.ESB.Itinerary.Schemas
Microsoft.Practices.ESB.Resolver
Microsoft.Practices.ESB.Resolver.BRE
Microsoft.Practices.ESB.Transform
All of these should be added from the server GAC.
Step 3: Add a new orchestration to this assembly and call it (for example) ProcessGWMsg.
Step 4: Add a direct-bound receive port to get the message from the MessageBox database.
Step 5: Add a new message with the expected message type you will be processing in this orchestration.
Step 6: Add a receive shape to this orchestration, choose the message variable you will receive the message in, link it to the direct receive port already created, and set the Activate property of this shape to True.
Step 7: Open the editor for the filter expression property of this orchestration.
Step 8: Add a filter expression like the one below. The service name should be unique across all your custom orchestrations to make sure there are no multiple subscriptions (unless that is needed, of course).
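The filter expression itself is not reproduced here; a typical ESB Toolkit subscription for an orchestration-based itinerary service looks like the following sketch (the “ProcessGWMsg” service name is an assumption based on the orchestration name from Step 3 — it must match the name you register in ESB.Config):

```
Microsoft.Practices.ESB.Itinerary.Schemas.ServiceName == "ProcessGWMsg" &&
Microsoft.Practices.ESB.Itinerary.Schemas.ServiceState == "Pending" &&
Microsoft.Practices.ESB.Itinerary.Schemas.ServiceType == "Orchestration"
```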
Step 9: Give the assembly a strong name, build it, and deploy the assembly to the GAC using the command gacutil /i ESB.Samples.Itinerary.Extenders.dll
Step 10: List the registered assembly using the command gacutil /l ESB.Samples.Itinerary.Extenders
Step 11: Note the full strong name of the assembly; you will need it for the configuration entry in the next steps.
Step 12: Open the ESB.Config file in the path “C:\Program Files (x86)\Microsoft BizTalk ESB Toolkit 2.1”
Step 13: Search for the section itineraryServices and add a new line at the end of this section as below:
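The configuration line itself is missing from the original; a typical entry has roughly the following shape (the GUID is an example of a newly generated id, and the version and public key token are placeholders you take from the strong name noted in Step 11):

```xml
<itineraryService id="8f8cbbb6-4d4f-4a7f-b297-59cfed46ab36"
                  name="ProcessMsg"
                  type="ESB.Samples.Itinerary.Extenders.ProcessGWMsg, ESB.Samples.Itinerary.Extenders, Version=1.0.0.0, Culture=neutral, PublicKeyToken=<your token>"
                  scope="Orchestration"
                  stage="None" />
```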
Where the id is a newly generated GUID and ProcessMsg is the name under which the orchestration service will be exposed to the Itinerary Designer.
Step 14: Close Visual Studio and restart it so that it picks up the configuration change.
Step 15: Create a new itinerary called, for example, “MsgProcessingIti”. Now add to this itinerary all the on-ramp, off-ramp, and itinerary services you need to process your messages. What we are interested in here is how to call the custom orchestration.
Step 16: Add a new itinerary service
Step 17: Change the extender to be an orchestration extender
Step 18: Open the service name property and you will find the ProcessGWMsg orchestration. Select that.
Step 19: Now add all the needed resolvers for this itinerary service as needed.
Step 20: Validate, save, and publish the itinerary to the itinerary database.
Step 21: Now go back to the orchestration and define new variables to handle the itinerary operations as below (the name of the ResolverDictionary variable is not given in the original; “resolverDictionary” is used here as an example):
itinerary – Microsoft.Practices.ESB.Itinerary.SerializableItineraryWrapper
resolverDictionary – Microsoft.Practices.ESB.Resolver.ResolverDictionary
resolvers – Microsoft.Practices.ESB.Itinerary.ResolverCollection
step – Microsoft.Practices.ESB.Itinerary.SerializableItineraryStepWrapper
Step 22: Define a new correlation type called “ItineraryAdvance” that includes the following properties:
BTS.OutboundTransportLocation
BTS.OutboundTransportType
Microsoft.Practices.ESB.Itinerary.Schemas.IsRequestResponse
Microsoft.Practices.ESB.Itinerary.Schemas.ServiceName
Microsoft.Practices.ESB.Itinerary.Schemas.ServiceState
Microsoft.Practices.ESB.Itinerary.Schemas.ServiceType
Step 23: Define a new correlation set of the previously created type in the orchestration, named “ItineraryAdvanceSet”. The real purpose of this correlation set is to promote the itinerary properties, allowing the message to be routed to the next itinerary service step.
Step 24: Add a new expression shape to initialize the itinerary variables as below.
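The expression code is not shown in the original; the usual ESB Toolkit pattern for initializing these variables from the received message is roughly the following sketch (InboundMessage is an assumed name for the message from Step 5, and the variable names follow the table in Step 21):

```
// Rehydrate the itinerary carried in the message context
itinerary.Itinerary = Microsoft.Practices.ESB.Itinerary.ItineraryOMFactory.Create(InboundMessage);

// Locate the itinerary step this orchestration is currently servicing
step.ItineraryStep = itinerary.Itinerary.GetItineraryStep(InboundMessage);

// Get the resolvers configured on that step in the Itinerary Designer
resolvers = step.ItineraryStep.ResolverCollection;
```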
Step 25: You can access the resolvers of the currently executing itinerary service. First make sure resolvers actually exist by adding a Decide shape as below.
Where the “ResolverValid” condition is as below.
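The condition is not reproduced in the original; a minimal check, along the lines of the ESB Toolkit samples, would be:

```
resolvers != null && resolvers.Count > 0
```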
Step 26: And to get the resolver dictionary
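The snippet is missing from the original; in the ESB Toolkit samples the resolver dictionary for the current resolver is obtained roughly like this (variable names are the assumptions from Step 21; ResolverMgr.Resolve is the documented ESB resolver entry point for orchestrations):

```
// Move to the first resolver on the step and resolve it against the message;
// loop over the collection instead if the step carries multiple resolvers
resolvers.MoveNext();
resolverDictionary = Microsoft.Practices.ESB.Resolver.ResolverMgr.Resolve(InboundMessage, resolvers.Current);
```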
Step 27: And to configure a dynamic send port using parameters coming out of the resolver
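The snippet is omitted in the original; addressing a dynamic send port from resolver output typically looks like the following sketch (the port name DynamicSendPort is an assumption; the Resolver.* keys are the ones populated by ESB resolvers):

```
// Configure the dynamic send port from the resolved endpoint information
DynamicSendPort(Microsoft.XLANGs.BaseTypes.Address) = resolverDictionary.Item("Resolver.TransportLocation");
DynamicSendPort(Microsoft.XLANGs.BaseTypes.TransportType) = resolverDictionary.Item("Resolver.TransportType");
```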
Step 28: Finally, when you are done processing and want to advance the itinerary, create a new message to pass to the next step, and while creating it add a Message Assignment shape with the code below.
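The assignment code is not shown in the original; the usual pattern is to copy the inbound message and advance the itinerary so that its promoted properties route the copy to the next step (message names are assumptions; the ItineraryAdvanceSet correlation set from Step 23 is initialized on the send shape, not in this code):

```
// Copy the processed message and advance the itinerary to its next step
OutboundMessage = InboundMessage;
itinerary.Itinerary.Advance(OutboundMessage, step.ItineraryStep);
```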
Step 29: Send this message through a direct-bound port to the MessageBox, initializing the correlation set already added to the orchestration previously.
Step 30: Now edit the orchestration to process the messages as you need.
You find yourself writing custom pipeline components in nearly all BizTalk-related projects, so this is a common activity. Usually it means you will eventually need to extract data from the message coming in on the wire using XPath statements.
The issues you usually face while trying to extract data from BizTalk messages are that the message stream may not be seekable, and that the extraction should be done with a minimal memory footprint (no XmlDocument!) and as fast as possible.
I created mainly two functions that I commonly use for these tasks. The first one gets the stream from the message and creates a seekable one to be used later on.
private Stream GetMessageStream(Microsoft.BizTalk.Message.Interop.IBaseMessage msg, Microsoft.BizTalk.Component.Interop.IPipelineContext context)
{
    Stream stream = msg.BodyPart.GetOriginalDataStream();
    if (!stream.CanSeek)
    {
        // Wrap the forward-only stream in a seekable one
        ReadOnlySeekableStream readStream = new ReadOnlySeekableStream(stream);
        if (context != null)
        {
            // Let the pipeline dispose of the stream when the message is done
            context.ResourceTracker.AddResource(readStream);
        }
        // Re-attach the seekable stream so downstream components can use it
        msg.BodyPart.Data = readStream;
        stream = readStream;
    }
    return stream;
}
The second method is the one that would perform the data extraction as follows.
private string ExtractDataValueXPath(Stream MsgStream, string MsgXPath)
{
    XmlReaderSettings settings = new XmlReaderSettings()
    {
        ConformanceLevel = ConformanceLevel.Document,
        IgnoreWhitespace = true,
        ValidationType = ValidationType.None,
        IgnoreProcessingInstructions = true,
        IgnoreComments = true,
        CloseInput = false
    };
    MsgStream.Seek(0, SeekOrigin.Begin);
    XmlReader reader = XmlReader.Create(MsgStream, settings);
    string strValue = null;
    if (!string.IsNullOrEmpty(MsgXPath))
    {
        if (reader.Read())
        {
            XPathDocument xPathDoc = new XPathDocument(reader);
            XPathNavigator xNavigator = xPathDoc.CreateNavigator();
            XPathNodeIterator xNodes = xNavigator.Select(MsgXPath);
            if (xNodes.Count != 0 && xNodes.MoveNext())
            {
                strValue = xNodes.Current.Value;
            }
        }
    }
    return strValue;
}
As you can see, I am using XPathDocument with an XmlReader to perform this as fast as possible.
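As a hedged illustration of how the two helpers fit together inside a pipeline component’s Execute method (the XPathExpression, PropertyName, and PropertyNamespace properties are hypothetical component design-time properties, not part of the original):

```csharp
public IBaseMessage Execute(IPipelineContext context, IBaseMessage msg)
{
    // Wrap the body stream in a seekable stream (also re-attached to the message)
    Stream stream = GetMessageStream(msg, context);

    // Pull a single value out of the body with XPath
    string value = ExtractDataValueXPath(stream, this.XPathExpression);

    // Rewind so downstream components see the stream from the beginning
    stream.Seek(0, SeekOrigin.Begin);

    // Write the value to the message context for routing or later use
    msg.Context.Write(this.PropertyName, this.PropertyNamespace, value);
    return msg;
}
```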
In this post series I will go through Microsoft System Center Orchestrator 2012 deployment and configuration in step-by-step format with screenshots, and will go into the details of the product to help you better understand how you can benefit from this powerful product.
In this post I will go through the Orchestrator system requirements, based on the RC version, which may change in future versions.
A basic deployment of Orchestrator consists of the following features:
Management server
The Management server is the communication layer between the Runbook Designer and the database.
Runbook Server
Orchestrator database
Runbook Designer
The Runbook Designer is the tool used to build, edit, and manage Orchestrator runbooks. For more information on the Runbook Designer, see Using Runbooks in System Center 2012 - Orchestrator Release Candidate.
Runbook Tester
The Runbook Tester is a runtime tool used to test runbooks developed in the Runbook Designer. For more information on the Runbook Tester, see How to Test a Runbook in Using Runbooks in System Center 2012 - Orchestrator Release Candidate.
Orchestration console
The Orchestration console enables you to start or stop runbooks and view real-time status from a web browser. For more information on using the Orchestration console, see the Using the Orchestration Console in System Center 2012 - Orchestrator Release Candidate
Orchestrator web service
The Orchestrator web service is a REST-based service that enables custom applications to connect to Orchestrator to start and stop runbooks, and retrieve information about operations using custom applications or scripts. The Orchestration console uses this web service to interact with Orchestrator.
Deployment Manager
The Deployment manager is a tool used to deploy integration packs (IPs), Runbook servers, and Runbook Designers. For more information on this tool, see the Deploying System Center 2012 - Orchestrator Release Candidate.
For system requirements we have two scenarios: a single server holding all Orchestrator roles, or individual features distributed across different servers. In this post I will focus on the first scenario, a single server.
The following minimum hardware configuration is required for a full installation of Orchestrator:
Windows Server 2008 R2 is the supported operating system for the Management server, Orchestrator web service, Runbook Designer, and Runbook server.
The following software is required for a full installation of Orchestrator on a single computer:
Notes:
In the coming post I will go through Orchestrator 2012 step by step in detail with screenshots.
----------------------------------------------------------------------------------------------------------
In this post I will go through Integration Packs to better understand the concept and learn how to download, import, and use Integration Packs in Orchestrator 2012. Again, we are still working with the RC version, which is subject to change in the next release.
System Center 2012 - Orchestrator includes over 41 built-in workflow standard activities that perform a wide variety of functions. You can expand Orchestrator’s functionality and ability to integrate with other Microsoft and third-party platforms and products by installing integration packs. Integration packs for Orchestrator contain additional activities that extend the functionality of Orchestrator.
You can download integration packs from the Microsoft Download Center. Each integration pack has a guide that provides installation instructions, describes any known issues, and includes reference information for all of the activities supported by the integration pack.
Microsoft provides integration packs for all of the System Center products, as well as other Microsoft and third party products and technologies.
The following integration packs are available:
IBM Tivoli Netcool/OMNIbus Integration Pack for System Center 2012 - Orchestrator Release Candidate
VMware vSphere Integration Pack for System Center 2012 - Orchestrator Release Candidate
Integration Pack for System Center Configuration Manager
Integration Pack for System Center Data Protection Manager
Integration Pack for System Center Operations Manager
Integration Pack for System Center Service Manager
Integration Pack for System Center Virtual Machine Manager
Integration Packs for Orchestrator 2012 are not ready yet and will be released when Orchestrator 2012 reaches RTM, expected before the end of this year. However, you can download prerelease IPs from here: http://www.microsoft.com/download/en/details.aspx?id=27842. Each Integration Pack has its own required configuration; in the example below I will focus on the Virtual Machine Manager 2008 R2 Integration Pack.
Step
Description
Select one of the following, then click Next:
a. Stop all running runbooks before installing the integration pack – stops all running runbooks before deploying the integration pack.
b. Install the Integration Packs without stopping the running Runbooks – installs the integration pack without stopping any running runbooks.
The execution policy in Windows PowerShell determines which scripts must be digitally signed before they will run. By default, the execution policy is set to Restricted. This prohibits loading any configuration files or running any scripts.
To run the scripts in this integration pack, you must set the execution policy to RemoteSigned. Use the following command: <System Drive>:\PS>Set-ExecutionPolicy RemoteSigned. For more information about how to configure the Windows PowerShell execution policy, see Set-ExecutionPolicy in the Microsoft TechNet Library (http://go.microsoft.com/fwlink/?linkID=113394).
You can use WS-Management quotas in Windows PowerShell remoting to protect the Orchestrator server and VMM computers from excessive resource use, both accidental and malicious. The MaxConcurrentOperationsPerUser quota setting in the WSMan:\<ComputerName>\Service node provides this protection by imposing a limit on the number of VMM objects that can run concurrently.
By default, MaxConcurrentOperationsPerUser is set to 5. This means that you can run a maximum of five VMM objects (shells) concurrently across all VMM policies.
If this default setting does not meet the needs of your organization, see About_Remote_Troubleshooting in the Microsoft TechNet Library (http://go.microsoft.com/fwlink/?linkID=135188) for information about how to configure remote operations in Windows PowerShell.
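As a hedged sketch, the quota described above can be inspected and changed from an elevated PowerShell prompt using the WSMan provider (the value 25 is an arbitrary example — adjust it to your organization’s needs):

```powershell
# Inspect the current per-user concurrent operations quota (default is 5)
Get-Item WSMan:\localhost\Service\MaxConcurrentOperationsPerUser

# Raise the quota, then restart the WinRM service so the change takes effect
Set-Item WSMan:\localhost\Service\MaxConcurrentOperationsPerUser 25
Restart-Service WinRM
```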
At this stage we are able to deploy and configure the integration pack, and we can see and use all SCVMM activities from the Orchestrator 2012 Runbook Designer in any new runbook, as shown in the screenshot below:
------------------------------------------------------------------
In this post I will start with an overview of Orchestrator 2012 and the Orchestrator 2012 architecture, based on the RC version, which is subject to change in future releases.
With Orchestrator, you can connect different systems from different vendors together without any scripting or programming.
To automate processes with Orchestrator, you use the Runbook Designer to drag and drop Activities and Link them together to create workflows called Runbooks. These runbooks become the encapsulation of your automated procedures.
The graphic below illustrates a simple runbook that monitors an event log and then, based on the event detected, checks the status of a Windows process, ends the process and restarts it, and then sends an email to an administrator.
You can create runbooks that contain just a few steps, or you can create runbooks that contain hundreds of steps, including running other runbooks. Each activity publishes data to the Databus, which can be used by any later activity in the runbook. You can use the Published Data to enable dynamic decision-making in your runbooks or to provide for automated creation of emails, alerts, accounts, and more.
The Standard Activities included with Orchestrator provide a number of monitors, tasks, and runbook controls. In addition, you can extend the capabilities of Orchestrator through Integration Packs, which include additional functionality for third party products, technologies, or complex actions.
Before using new runbooks in your production environment you can quickly test the runbooks in a safe environment using the Runbook Tester. Also, with the browser-based Orchestration Console you can keep track of runbooks that are currently running and control them.
Using the Deployment Manager, runbooks are deployed to and run on one or more Runbook Servers within the infrastructure, providing a scalable, highly-available process automation architecture. Linking all of the components together is the Management Server and the Data Store, which is the database where configuration information, runbooks and logs are stored.
Multiple Runbook Servers are used to provide load balancing and failover capabilities. If a Runbook Server goes down, runbooks will automatically begin to process on another Runbook Server.
The following terms will be used during the course of this lab. Familiarizing yourself with their definitions will help you progress through the lab more easily.
Runbook – These are the workflows that are created and used with Orchestrator. They contain exactly one starting point, but can contain many branches, many different actions, and many potential stopping points.
Activities – The individual actions within a runbook, available within the Activity Pane.
Standard Activities – The activities that are included with the default installation of Orchestrator; they include the basic activities used to build runbooks.
Integration Pack (IP) – This is a collection of custom activities that are specific to a product or a technology. An example is the Integration Pack for System Center Operations Manager.
Link – The connection between two activities in a runbook. Links provide the flow from one activity to another based on criteria defined in the filters.
Monitors – A special type of activity designed to run continuously and trigger action based on a specific condition. Monitors are often used to start a runbook.
Variables – Global values that can be used in runbooks to define frequently-used settings, such as directory paths to common files, server names, or other strings.
Counters – Global integer variables you can use within runbooks, with the ability to modify the value within a runbook, such as an incremental number of attempts.
Schedules – Global settings that allow you to define a set of date/time criteria for use within a runbook. Schedules can be used to define when a runbook will run an action, or when it cannot run an action.
Check In / Check Out – When you have finished updating a runbook, you must check in your changes to commit them to the Orchestrator Data Store. This is the way Orchestrator saves the runbooks.
Trigger – Causing a Runbook Activity to activate, either by satisfying a monitor condition or by linking to a previously-run activity.
Databus – The Databus is a mechanism used within Orchestrator to pass information in a runbook from one activity to later activities. The data flowing in the Databus for a runbook is called Published Data, and each activity in a runbook publishes its own data to the Databus.
Published Data – The foundation of a working runbook. It is the data resulting from each activity in a runbook. This data is published to the Databus so that activities in the runbook can subscribe to it and use it in their configuration. Link conditions also use this information to add decision-making to the runbook.
Pipelining – A term used to describe the process of moving from one activity to the next in the runbook based on links between the activities and conditions placed on the links.
Job – When you start a runbook, a job is created and added to a queue in the data store. Runbook servers poll this queue and pick up any available jobs that they can work on.
Instance – When a runbook server picks up a job, an executing instance of the runbook is spawned on the server.
This is the Runbook Designer. This is the interface where a designer builds, edits, and manages runbooks in Orchestrator. The interface is organized into three core areas:
1. The Connections pane on the left side is the folder structure where you can organize runbooks in the Orchestrator system. These folders are saved to the Orchestrator central database. You can also edit permissions on folders.
2. The Design Workspace is where you create and manage runbooks.
3. The Activity Pane contains all the activities (either Standard Activities or activities available from Integration Packs) available for insertion into runbooks. You drag activities from the Activity Pane into the workspace then link them together to form runbooks.
4. A fourth area, the Log pane, shows you the activity of your runbooks and actions.
Orchestrator includes the following main components:
Runbook Server – The engine that runs runbooks. Runbook Servers communicate with the Data Store. Runbook Servers do not require a Management Server to be online to be able to run runbooks. You can deploy a single Runbook Server or multiple.
Management Server – The central manager of Runbook Designer, Runbook Servers, Runbooks, Runbook Tester, and the Self-Monitoring functionality. The Management Server deploys Orchestrator Integration Packs to Runbook Servers and Runbook Designers, deploys runbooks to Runbook Servers, and acts as a communication link between the Runbook Designers, the Runbook Servers, and the Data Store.
Runbook Designer – The tool used by designers to create, modify, and publish runbooks.
Runbook Tester – The tool used by designers to test runbooks that are developed in the Runbook Designer before they are deployed.
Orchestration Console – A web-based application that enables you to see which runbooks are available in the system, view their real-time status, and start or stop them.
Data Store – The Data Store is the SQL Server database where configuration information, runbooks, and logs are stored.
Web Service – The Web Service is the web interface to the Data Store. The Orchestration Console uses the web service to access data.
The following diagram illustrates each of the Orchestrator features and the communication between each:
The database is the center of the Orchestrator installation containing all runbooks, configuration settings, and logs. The Management server is required as a communication layer between the Runbook Designer and the database. One or more Runbook servers communicate directly with the database to retrieve runbooks to run and store information about the jobs created from the runbooks. The web service also communicates directly with the database and provides a connection from client browsers for the Orchestration console.
In the coming post I will go through Orchestrator 2012 System Requirements for Orchestrator 2012 and Installation Steps.
In this post we will start the installation steps on a single computer to install the Orchestrator 2012 RC version (subject to change in future releases), assuming all hardware and software requirements are already covered.
Steps are as the following:
Screenshot
In the coming post I will go through the steps to build a Runbook.
----------------------------------------------------------------
In this post we will start the configuration steps starting from building, testing and monitoring a Runbook,
In the Runbook example below, a new runbook is designed to monitor a folder for the addition of a new .TXT file; when that occurs, it triggers the Send Event Log Message action, which logs events in the Events tab of the Logs pane. Following that, another runbook is triggered to start. The next exercise will build the second runbook, which is triggered by the first. To do this, we will create two test folders on the C: drive called “LAB_Monitor1” and “LAB_Source1”, create a test text file called “Setup.txt”, and type some line entries inside this test file as follows:
domain=domain
account=account
password=password
Save it inside the “LAB_Source1” folder. Later in this post we will go through the steps to test the created tasks and runbook, and show how to monitor these tasks in the Orchestrator console.
Double-click the first Find Text to open the Find Text Properties dialog.
This post will detail how to configure the ESB management portal on a multi-machine environment with minimal access configuration in mind. The environment consists of mainly three server roles:
1. An active directory server role.
2. A SQL server role.
3. An application server role hosting both the BizTalk server along with the management portal itself.
Assume that we have the domain name as: esbtest.local
The ESB Management Portal, included as part of the Microsoft BizTalk ESB Toolkit, is a sample site that demonstrates the use of metrics and the possibilities that exist for extending the BizTalk ESB Toolkit. The portal provides a starting point from which you can build your own customized portal.
The ESB Management Portal is also a highly capable and useful tool that can help to maximize the use and efficiency of the ESB exception management system. In addition, the portal provides a user interface for an underlying Universal Description, Discovery, and Integration (UDDI) registry server and graphical configuration capabilities for features such as auditing, viewing history information, configuring alerts and notifications, and allowing users to subscribe to alerts that indicate faults occurring in ESB applications.
These steps need to be performed prior to installing the ESB management portal as per the next section. These include creating the domain passwords and granting them the proper rights.
During the setup process you will need to have created a service account that will be used for the following.
1- Web applications (Portal, Exceptions service, UDDI service, and BAM service).
2- The Alerts windows service.
3- The UDDI publisher windows service.
This account will not be used to access any SQL Server databases, as all SQL access will be done by impersonating the portal user account. Therefore, no access rights need to be created on the SQL Server side for this account.
This account will be used as the application pool identity, so it should be added to the local IIS_WPG group. It will also write to the ESB Toolkit event log, so it should have write access to the “Application” event log.
Assume we created the account as “saESBPortalAppPool”.
We will create also a new group that will include all the ESB portal administrators. Call this group “gEsbPortalAdmins” for example.
Since this account will be used to impersonate the portal user’s identity and will try to access SQL Server resources under that identity, a double hop will occur. To allow this, delegation rights should be granted to the application pool account. The steps to perform this are listed below.
1. Log on to the domain controller.
2. Open a command prompt.
3. Run the following set of commands:
setspn -A HTTP/<ApplicationServer1Name> ESBTEST\<ESBPortalAppPool>
setspn -A HTTP/<ApplicationServer1Name>.esbtest.local ESBTEST\<ESBPortalAppPool>
setspn -A HTTP/<ApplicationServer2Name> ESBTEST\<ESBPortalAppPool>
setspn -A HTTP/<ApplicationServer2Name>.esbtest.local ESBTEST\<ESBPortalAppPool>
By running these commands, the user can use any of the names when calling the Web or Windows Communication Foundation (WCF) service. If you are accessing the portal through a load balancer, you instead need to run the commands:
setspn -A HTTP/<LoadBalancerName> ESBTest\<ESBPortalAppPool>
setspn -A HTTP/<LoadBalancerName>.esbtest.local ESBTEST\<ESBPortalAppPool>
For our test environment we have only one server with no load balancer so we issue the commands:
setspn -A HTTP/ESB-Test-Bts ESBTEST\saESBPortalAppPool
setspn -A HTTP/ESB-Test-Bts.esbtest.local ESBTEST\saESBPortalAppPool
· If SQL is running under a domain identity, run the following commands:
setspn -A MSSQLsvc/<Database Cluster Name>:<Named Instance Port Number> ESBTest\<SQL server service account>
setspn -A MSSQLsvc/<Database Cluster Name>.esbtest.local:<Named Instance Port Number> ESBTest\<SQL server service account>
The following provides more information about the preceding commands:
a. Named Instance Port Number is the port number that the named instance hosting the application databases is using.
b. MSSQLsvc is the SQL service for which you are registering the Service Principal Name (SPN).
By running these commands, the user can use any of the names when connecting to an instance of Microsoft SQL Server.
An example of these commands in our test environment is as follows, assuming that we have a named instance working on port 55738:
setspn -A MSSQLsvc/ESB-Test-SQL:55738 ESBTest\saSQLSvcESB
setspn -A MSSQLsvc/ESB-Test-SQL.esbtest.local:55738 ESBTest\saSQLSvcESB
2. Start the Microsoft Management Console (MMC) Active Directory Users and Computers snap-in.
3. In the left pane of the MMC snap-in click Users.
4. In the right pane, double-click the account identity under which your application pool runs (ESBTEST\<ESBPortalAppPool>) to display the Properties dialog box.
5. On the Delegation tab of the Properties dialog box, Do not trust this user for delegation is selected by default. Select Trust this user for delegation to any service (Kerberos only), as shown in the screenshot below.
Install the MSCHARTS package on the server.
You need to create the ESB portal management database ESBAdmin on the SQL server cluster.
You need to install the ESB management portal web application and the supporting WCF services on both application servers that will host the portal. Follow these steps to complete the installation and configuration of the web applications.
1- Open the IIS management console by running the command INETMGR
2- Expand the server node.
3- Right click on “Application Pools” and click add application pool.
4- Enter the name to be “ESBPortalAppPool”.
5- Select .NET framework 4.
6- Make sure “Classic” mode is used.
7- Click ok.
8- Click on the “Application Pools” node from the left.
9- Right click on the application pool “ESBPortalAppPool” and click advanced settings.
10- Make sure you set Enable 32-Bit Applications to False.
11- Under identity click on the ellipses button.
12- Enter the application pool domain account already created previously in the preparation steps.
1- Open the folder where you have built the ESB management portal installation package.
2- Run the Setup.exe
3- Select the ESBPortalAppPool as the web application pool.
4- Finish the installation.
3- Expand the node “Default Web Site”.
4- Click on the web application “ESB.Portal”.
5- Double click in the right panel on the “Authentication” icon under IIS category.
6- Make sure that it is configured as the below screen:
7- Right click on “Windows Authentication” and click “Advanced”.
8- Make sure you uncheck the check box for the Kernel mode authentication.
9- Click ok.
10- Right click on “Windows Authentication” and click “Providers”.
11- Remove all providers added by default and only add the “Negotiate:Kerberos” provider.
1- Open the Web.config file from the folder “C:\inetpub\wwwroot\ESB.Portal” in notepad
2- Search for the DB connection string “AdminDatabaseServer” near the top of the file.
3- Change the connection string to the proper value to connect to the SQL server database already created ESBAdmin.
4- Make sure that we have an authorization line for the “gEsbPortalAdmins” group.
5- Search in this file for any occurrence of “Ntlm” and replace that with “Negotiate”.
6- Save the file and close it.
7- Open the Web.config file from the folder “C:\inetpub\wwwroot\ESB.Portal\Admin” in notepad
8- Make sure that we have an authorization line for the “gEsbPortalAdmins” group.
9- Save the file and close it.
1- Open the Web.config file from the folder “C:\inetpub\wwwroot\ ESB.Exceptions.Service” in notepad
2- Search for the DB connection string “EsbExceptionDbConnectionString” near the top of the file.
3- Change the connection string to the proper value to connect to the ESB exceptions SQL Server database (the default name is “EsbExceptionsDb”).
4- Search in this file for any occurrence of “Ntlm” and replace that with “Negotiate”.
5- Save the file and close it.
1- Open the Web.config file from the folder “C:\inetpub\wwwroot\ESB.UDDI.Service” in notepad
2- Search for the UDDI server URL string “http://esbwssit.esbtest.local/uddi” and replace all occurrences with the proper location where the UDDI server was deployed.
3- Save the file and close it.
1- Open the Web.config file from the folder “C:\inetpub\wwwroot\ESB.BAM.Service” in notepad.
2- Search for the DB connection string “BAMEventSource” near the top of the file.
3- Change the connection string to the proper value to connect to the BAM SQL Server database (the default name is “BAMPrimaryImport”).
4- Save the file and close it.
Make sure that the services installed “ESB.BAM.Service”, “ESB.Exceptions.Service”, and “ESB.UDDI.Service” are all using the application pool created earlier for the ESB portal.
This will be done for the web applications “ESB.BAM.Service”, “ESB.Exceptions.Service”, and “ESB.UDDI.Service”.
4- Click on the web application to change. (this should be done for the three listed applications above)
7- Right click on “Windows Authentication” and click “Providers”.
8- Remove all providers added by default and only add the “Negotiate” provider.
You need to go through the below steps to verify that it is configured as shown.
3- Click on the node “Default Web Site”.
4- Click on “Configuration Editor”.
5- Open the Section combo box and select the handlers node under "system.webServer".
6- Make sure the actions pane says "Lock" and not "Unlock", as per the screen below.
7- If it says "Unlock", press it to unlock this section.
8- Open the Section combo box and select the modules node under "system.webServer".
9- Make sure the actions pane says "Lock" and not "Unlock", as per the screen below.
10- If it says "Unlock", press it to unlock this section.
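If you prefer the command line over the Configuration Editor, the same two sections can be unlocked with appcmd (a sketch; run from an elevated prompt, and note the path assumes a default IIS installation):

```
%windir%\system32\inetsrv\appcmd unlock config -section:system.webServer/handlers
%windir%\system32\inetsrv\appcmd unlock config -section:system.webServer/modules
```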
This is needed because the portal impersonates the identity of the user, and if an error happens it will need write access to the server event log. To perform this, follow these steps on all servers hosting the portal.
1- Download the PsTools from the link: http://technet.microsoft.com/en-us/sysinternals/bb896649
2- Extract the download to any folder.
3- Open a new command prompt window.
4- Change to the PsTools folder.
5- Execute the command “PsGetSid.exe gEsbPortalAdmins”.
6- Note the SID returned by this command as it will be needed later.
7- Execute the command “wevtutil gl Application”.
8- Under the section “channelAccess:” copy the entire string starting with “O:BAG:SYD:”.
9- Execute the command "wevtutil sl Application /ca:<Copied Channel Access string from step 8>(A;;0x3;;;<Group SID from step 6>)" (the copied string already starts with "O:", so it is not repeated in the command).
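Putting steps 5-9 together, the sequence would look like the following transcript. The SID and the channel access string shown here are made-up examples only, not values to copy; use the ones returned on your own server. The "(A;;0x3;;;<SID>)" entry appended at the end grants the group read and write (0x1 + 0x2) access to the Application log.

```
PsGetSid.exe gEsbPortalAdmins
    (returns, for example, S-1-5-21-1111111111-2222222222-3333333333-1234)

wevtutil gl Application
    (under channelAccess: copy, for example, O:BAG:SYD:(A;;0xf0007;;;SY)(A;;0x7;;;BA))

wevtutil sl Application /ca:O:BAG:SYD:(A;;0xf0007;;;SY)(A;;0x7;;;BA)(A;;0x3;;;S-1-5-21-1111111111-2222222222-3333333333-1234)
```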
This is needed to allow the operators group to get the list of BizTalk applications from the portal.
1- Open “Computer Management”.
2- Expand “Services and Applications”
3- Left click on “WMI Control”.
4- Then right click on “WMI Control”.
5- Click “Properties”.
6- Click “Security” tab
7- Expand the root one level.
8- Search for “Microsoft BizTalk Server” then click “Security”
9- Click “Advanced”
10- Make sure that the check box below is checked and click “Add”
11- Search for the operators’ group “gEsbPortalAdmins” and give it “Allow” access to everything as below.
12- Click ok and close the computer management.
This is needed to give the operators group the required database access to the BizTalk server databases. This can be done in two ways.
1- Either give the operators group the same access level as the BizTalk operators group using SQL Server Management Studio. This should be done on all BizTalk databases.
a. Add a new login for the group "gEsbPortalAdmins" to the SQL Server.
b. Under the tab “User Mapping” make sure you select the databases BizTalkDTADb, BizTalkMgmtDb, and BizTalkMsgBoxDb
c. Under each database make sure that the user has the database role “BTS_OPERATORS” selected.
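Option 1 can also be scripted in T-SQL along these lines (the domain name is a placeholder; repeat the per-database block for each of the three databases):

```sql
-- Create the login once at the server level (skip if it already exists)
CREATE LOGIN [MYDOMAIN\gEsbPortalAdmins] FROM WINDOWS;
GO
-- Map the login into each BizTalk database and add it to BTS_OPERATORS
USE BizTalkMgmtDb;  -- repeat for BizTalkDTADb and BizTalkMsgBoxDb
CREATE USER [MYDOMAIN\gEsbPortalAdmins] FOR LOGIN [MYDOMAIN\gEsbPortalAdmins];
EXEC sp_addrolemember 'BTS_OPERATORS', 'MYDOMAIN\gEsbPortalAdmins';
GO
```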
2- Or add the group as a sub-group of the BizTalk operators group after converting the BizTalk operators group to a universal group.
This is needed to allow proper operation on the portal functionality. This should be done using the SQL management console.
1- Open the SQL management console.
2- Open the database ESBExceptionsDb and expand the security node.
3- Add a login for the group “gEsbPortalAdmins”. If not already added.
4- Give that group the database roles ESBPortal and ESBPortalAdmin.
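The same steps expressed in T-SQL would be roughly (domain name is a placeholder):

```sql
USE ESBExceptionsDb;
-- Step 3: map the group into the database (assumes the server-level login exists)
CREATE USER [MYDOMAIN\gEsbPortalAdmins] FOR LOGIN [MYDOMAIN\gEsbPortalAdmins];
-- Step 4: grant the two portal roles
EXEC sp_addrolemember 'ESBPortal', 'MYDOMAIN\gEsbPortalAdmins';
EXEC sp_addrolemember 'ESBPortalAdmin', 'MYDOMAIN\gEsbPortalAdmins';
```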
2- Click on "Application Pools" on the left.
3- Click on the management portal application pool.
4- In the right panel, click "Recycle".
I faced a strange issue where I was trying to deploy a new BAM activity and view using the bm.exe tool. Whenever I executed the deploy-all command, it simply reported that it deployed the activity and then failed while deploying the view with an error like this:
OLE DB error: OLE DB or ODBC error: The SELECT permission was denied on the object ‘bam_<Some view name>_RTATable’, database ‘BAMPrimaryImport’, schema ‘dbo’.; 42000.
I then go to the BAMPrimaryImport database and find that nothing is deployed!
Actually this is not entirely true: when I opened a SQL Profiler trace I found that it really is deploying the activity and then retracting everything once the view deployment fails. What I found is that the utility executes a SELECT statement on the given table in the BAMPrimaryImport database using the identity of the SQL Server service account! And as this account really has no access to the database, it fails. I do not know why it tries to use the service account, but I was able to resolve this by giving the SQL service account db_datareader permissions on this database. It turned out that in this case I also needed to give this account the BTS_Admin_Users permissions on the BAMStarSchema database.
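The fix described above boils down to grants like the following (the service account name is a placeholder for whatever account your SQL Server service runs under):

```sql
-- Give the SQL Server service account read access to BAMPrimaryImport
USE BAMPrimaryImport;
CREATE USER [MYDOMAIN\SqlServiceAccount] FOR LOGIN [MYDOMAIN\SqlServiceAccount];
EXEC sp_addrolemember 'db_datareader', 'MYDOMAIN\SqlServiceAccount';
GO
-- And the BTS_Admin_Users role on BAMStarSchema
USE BAMStarSchema;
CREATE USER [MYDOMAIN\SqlServiceAccount] FOR LOGIN [MYDOMAIN\SqlServiceAccount];
EXEC sp_addrolemember 'BTS_Admin_Users', 'MYDOMAIN\SqlServiceAccount';
GO
```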
Although the requirement I had was to add a custom page and customize the BizTalk BAM portal, this post actually applies to any web application that is using the compiled web site template.
The BAM portal is the business analysis and monitoring portal that comes with BizTalk server. This is actually a portal that is using the compiled web site template.
The need was to add several custom pages to this portal so that they would be completely integrated with the portal. Meaning that they should be using the same master page and hence include all the main menu and links within this master page.
Now if this were a web application, that would be an easy task (right?) since you would already have the markup of the master page file. But since this is a compiled website, the Site.master is actually a placeholder file and contains nothing :(
Step 1: So what I did is that I created a new website with any name. In this new website you would get a complete solution with many out of the box files as below:
Step 2: Now what we need to do is to delete all things we don’t need. So basically you need to delete everything except:
1- Site.master
2- Web.config (although we don't need it, the site will not compile without it)
Please note that you MUST delete the file “Global.asax”.
Step 3: Now we need to mimic the structure of the BAM portal. So we create a new folder called “MasterPages” and move the Site.Master to this folder.
Step 4: Create a folder called Pages and create any needed custom ASPX pages in this folder. These pages should reference the master page already moved to the MasterPages folder, using the same relative path as in the original BAM website described above.
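A custom page in the Pages folder would then start with a directive like this. This is only a sketch: the ContentPlaceHolder ID shown is hypothetical and must match the one actually defined in the BAM portal's Site.master.

```
<%@ Page Language="C#" MasterPageFile="~/MasterPages/Site.master" Title="My Custom Page" %>
<asp:Content ID="MainContent" ContentPlaceHolderID="ContentPlaceHolder1" runat="server">
    <!-- your custom page markup goes here -->
</asp:Content>
```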
Step 5: If you want to refer to available classes and helper methods within the original web site then add references to the needed assemblies. For me that was “Microsoft.BizTalk.Bam.Portal.dll”
Step 6: Right click on the website and click properties. Change the properties to be as in the below image. Note that you must make the site not updatable and use fixed naming.
Step 7: Make sure that you are targeting the same .NET framework as the original website. For my case that’s .NET 4.0
Step 8: Right click on the web site and click deploy and select a valid deployment folder (not an IIS website)
Step 9: Now open the deployment PrecompiledWeb folder and copy the custom ASPX pages from the Pages folder to the Pages folder under the original website. Also copy the compiled page files and DLLs from the bin folder to the original website's bin folder.
That’s basically it, and now your pages will have the same master page as the BAM portal.
Please note that we did NOT copy the empty Site.Master file and did nothing to it.
A common scenario when using an STS (be it ADFS or a custom STS) is the requirement to cache the security token so it can be used repeatedly with requests to WCF services to authenticate the calls. This is usually easy in desktop applications, where most people go and cache the entire service proxy object in some global variable!
The recommended approach is to cache the security token itself and use it later on which has the following advantages...
I've created a small sample with an active web client with one page that first caches the token, then uses it to make the service calls.
The following method can be used to cache the token...
CacheToken()
// First, create a binding to the service. The URL below is used as the name of the binding
// It is important to note that this will cause the next calls to use WS-Trust 1.3. If you want to use the Feb 2005 standards, use WSHttpBinding, not the 2007 one
WS2007HttpBinding wsf = new WS2007HttpBinding(@"https://vs2010.contoso.com/ATMServicesSTS/Service1.svc/IWSTrust13");
// Now create a WS-Trust channel factory that will be used to create the communication channel with the STS
WSTrustChannelFactory trustChannelFactory = new WSTrustChannelFactory(wsf, new EndpointAddress(@"https://vs2010.contoso.com/ATMServicesSTS/Service1.svc/IWSTrust13"));
// I use User Name/Password for security
trustChannelFactory.Credentials.UserName.UserName = "My User Name";
trustChannelFactory.Credentials.UserName.Password = "My Password";
// Just to make sure no certificates are involved
trustChannelFactory.Credentials.ServiceCertificate.Authentication.CertificateValidationMode = X509CertificateValidationMode.None;
trustChannelFactory.Credentials.ServiceCertificate.Authentication.RevocationMode = X509RevocationMode.NoCheck;
// Specify the trust version
trustChannelFactory.TrustVersion = TrustVersion.WSTrust13;
// Now create the channel
WSTrustChannel channel = (WSTrustChannel)trustChannelFactory.CreateChannel();
// Specify the request parameters including the AppliesTo (audience) URI and lifetime
RequestSecurityToken rst = new RequestSecurityToken(WSTrust13Constants.RequestTypes.Issue) { Lifetime = new Lifetime(DateTime.UtcNow, DateTime.UtcNow.AddMinutes(5)) };
rst.AppliesTo = new EndpointAddress(@"https://vs2010.contoso.com/ATMServices/");
RequestSecurityTokenResponse rstr = null;
// Get the token
SecurityToken token = channel.Issue(rst, out rstr);
// Cache it in the session
Session["Token"] = token;
Now, use the token...
// Create the proxy object
ActiveClient.ATMServices.ServiceClient sc = new ActiveClient.ATMServices.ServiceClient();
// Configure the channel factory
sc.ChannelFactory.ConfigureChannelFactory<ActiveClient.ATMServices.IService>();
// Create the channel with the issued token
ActiveClient.ATMServices.IService serviceChannel = sc.ChannelFactory.CreateChannelWithIssuedToken<ActiveClient.ATMServices.IService>((SecurityToken)Session["Token"]);
// Call the service method
txtReturn.Text = serviceChannel.GetData(50);
Remember to add references to WIF to your project.
Happy Coding:)
Update 20/1/2012: Sample project added as an attachment.
The BizTalk ESB Toolkit is an implementation of the enterprise service bus messaging pattern. It allows for separation between message content, process implementation, and process configuration.
As part of the ESB samples you will find the ESB portal sample with some helper services. One of these services is the ESB exceptions notification service. This is a Windows service that checks, every specified timespan, for new exceptions happening on your platform and whether anyone has created a subscription to this type of exception. If it finds valid subscriptions to the new exception, it simply sends emails to the subscribing users notifying them of the exceptions.
This service is actually very useful and would make your ESB support team more responsive and performant.
The problem I found is that if someone adds a new subscription with an invalid email address, an exception is thrown while sending the email. And because this service processes emails in batches, it will consider the entire batch as not sent, although it might have already sent some messages within this batch. This makes the service keep sending the same messages over and over again until the request to send this email is removed from the database.
The solution is to change the behavior of the exceptions service by handling this condition. In my case all I needed in this case is to mark this email as being sent and just completely discard the error.
1- Open the Alerts visual studio project from the folder “C:\Projects\ESBSource\Source\Samples\Management Portal\ESB.AlertService”
2- Open the file "Notifier.cs"
3- Go to the function called Notify
4- Change the send lines to be like the below
try
{
    emailClient.Send(message);
    System.Diagnostics.Trace.WriteLine("Email successfully sent");
}
catch (SmtpFailedRecipientsException)
{
    // Discard the error but still mark the email as sent so it is not reprocessed
    alertEmail.Sent = true;
}
Please note that I kept the marking of the email as sent, as I do not want it to reprocess the same email again.
This is going to be the first part of a series of posts on Team Foundation Server (TFS) 2010 and will include an overview of the TFS 2010 product and an installation scenario. I am going to be as practical as possible. Alright, let's start.
Why TFS
TFS is a platform to create a repository well aligned with the functions and roles (project manager, developers, testers, analysts, team lead, etc.) in an IT project life cycle. Even though it is mostly used for source control, it is more than that; it is an integrated and scalable platform for Application Lifecycle Management (ALM). Core features:
− Work Item Tracking: tracks any traceable unit (bugs, tasks, issues, etc.)
− Version Control: code repository with storage, retrieval, comparison, access rights features
− Build Automation: customizable build processes that can be scheduled & automated
− Reporting: Built-in report templates to track the team project status
− Project Management: ALM methodology adoption (CMMI, MSF Agile, SCRUM), monitoring team progress
Architecture:
As seen in the above image, TFS 2010 has 3 tiers: data (repository), application (services), and client (user interface).
Installation Scenario:
In this installation, we will follow the TFS 2010 installation guidelines specific to a dual-server topology. Here are the servers and their features:
I would recommend each server to have a 2.40 GHz CPU, 6 GB RAM, and 1 VHD at 7.2k RPM with 100+ GB (300 GB for the database server) for moderate-size teams (20-50 members).
As you’ve seen, in this installation application and data tiers are physically separated for the sake of scalability.
Also please note that we will use SharePoint Foundation 2010, the new version of Windows SharePoint Services (WSS 3.0), which is available for mid-size and pilot solutions at no charge.
Installation Steps
Conclusion:
In this part, we have briefly covered an overview of TFS 2010 and the installation steps for TFS 2010 according to the dual-server topology, or TFS advanced installation mode, in which the data and application tiers are separated onto 2 servers. Hope you like it and stay tuned for the coming parts.
For more information:
· Team Foundation Server 2010
· Team Foundation Server 2010 Product
· Team Foundation Installation Guide 2010