I started this blog series with an overview of Exchange 2010 SP1 Hosting in Exchange 2010 SP1 Hosting – Part 1 "Overview", followed by Exchange 2010 SP1 Hosting – Part 2 "Hosting Description" and Exchange 2010 SP1 Hosting – Part 3 "Hosting Setup". In this post I will cover the multi-tenant setup available in Exchange 2010 SP1 hosting mode and its features.
First I will start with some definitions:
Service Plan - specifies a list of organization features, a set of mailbox plans, organization-wide resource limits, and the RBAC permissions delegated to the customer.
Service Plan template - based on requirements, these templates specify the features and predefined permissions that need to be provisioned for the customer organization and its mailboxes.
Mailbox Plan - defines the set of Exchange features that need to be enabled on a mailbox. A mailbox plan is created by using a service plan template.
RBAC (Role Based Access Control) - a permission model that defines and grants access to Exchange management tasks.
When the Exchange 2010 CAS role is installed in hosting mode, it also creates an additional folder on the CAS server: "C:\Program Files\Microsoft\Exchange Server\V14\ClientAccess\ServicePlans". In this folder you will find a file called "ServicePlanHostingRemap.csv"; this file, together with the .serviceplan files, contains all the available service plans and mailbox plans. When you open a .serviceplan file, you will find an XML file stating the appropriate features. The available service plan templates are the following:
Creating Service Plan:
1) Locate the available service Plans “C:\Program Files\Microsoft\Exchange Server\V14\ClientAccess\ServicePlans”.
2) Determine which service plan template meets your needs and open the template using Notepad.
3) Save the service plan template with a new name in the same service plan location.
4) If you are going to create multiple Mailbox Plans, copy the mailbox plan section starting with MailboxPlanName and ending with MailboxPlan and paste it after the MailboxPlan end section. Make sure that the mailbox plan is within the MailboxPlans section. You will need to change the following properties for the new mailbox plan:
MailboxPlanName This property specifies the name of the mailbox plan, for example Gold, Silver, or Bronze.
MailboxPlanIndex This property must be unique for each mailbox plan.
ProvisionAsDefault This property specifies that this mailbox plan is the default mailbox plan. When new users are created and you do not specify a mailbox plan, the default mailbox plan is applied to the mailbox. You can have only one default mailbox plan.
5) Save the new service plan.
6) Add the service plan to the service plan map, using the following procedure.
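As a rough sketch of the mailbox plan section copied in step 4 — the element and attribute names here are inferred from the properties described above and may not match the actual .serviceplan schema, so compare against your template before editing:

```xml
<MailboxPlans>
  <!-- Hypothetical layout: one MailboxPlan element per plan. -->
  <MailboxPlan MailboxPlanName="Gold" MailboxPlanIndex="1" ProvisionAsDefault="true">
    <!-- feature list for the Gold plan -->
  </MailboxPlan>
  <!-- The copied plan: new name, new unique index, not the default. -->
  <MailboxPlan MailboxPlanName="Silver" MailboxPlanIndex="2" ProvisionAsDefault="false">
    <!-- feature list for the Silver plan -->
  </MailboxPlan>
</MailboxPlans>
```

Whatever the exact schema in your template, the three properties above are the ones that must differ between plans, and only one plan may have ProvisionAsDefault set.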
Add a Service Plan:
1) Locate the “ServicePlanHostingRemap.csv” on “C:\Program Files\Microsoft\Exchange Server\V14\ClientAccess\ServicePlans”.
2) Open the CSV file using Notepad.
3) Add a new line and provide the following comma separated information for the new service plan:
4) Save and close the file.
5) Ensure that you have copied the service plan and the serviceplanhostingRemap file across all CAS servers.
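As an illustration of the new line in step 3 — the exact column order here is an assumption, so mirror the existing rows in your ServicePlanHostingRemap.csv — a mapping line tying a program ID and offer ID to a service plan file might look like:

```csv
HostingSample,5,MyCustomPlan.serviceplan
```

The ProgramId and OfferId in this line must match the values you later pass to New-Organization when creating tenants against this plan.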
Verify Service Plan:
After creating a new service plan, you can validate it by assigning it to a new organization with the -WhatIf parameter, by running the following command in the Exchange Management Shell:
New-Organization -Name "Contoso.com" -DomainName "Contoso.com" -Location "en-us" -ProgramId "Business" -OfferId "SmallOrg" -WhatIf
You should use the same ProgramId and OfferId that you used while adding the service plan in the “ServicePlanHostingRemap.csv” file.
Create New Tenant Organization:
Now we are ready to create a new tenant organization using the New-Organization cmdlet. The syntax is as follows:
New-Organization -Name <String> -DomainName <SmtpDomain> -Location <String> -OfferId <String> -ProgramId <String> [-Administrator <WindowsLiveId>] [-AdministratorNetID <NetID>] [-AdministratorPassword <SecureString>] [-AuthenticationType <Managed | Federated>] [-Confirm [<SwitchParameter>]] [-CreateSharedConfiguration <SwitchParameter>] [-EnableFileLogging <SwitchParameter>] [-ExternalDirectoryOrganizationId <Guid>] [-HotmailMigration <SwitchParameter>] [-IsDatacenter <SwitchParameter>] [-IsDirSyncRunning <$true | $false>] [-IsPartnerHosted <SwitchParameter>] [-LiveIdInstanceType <Consumer | Business>] [-PartnerObjectId <Guid>] [-WhatIf [<SwitchParameter>]]
And as an example to create new organization run the following PowerShell Command from CAS Server:
New-Organization -Name ProvTest -DomainName Provtest.com -Location en-US -ProgramId HostingSample -OfferId 5 -AdministratorPassword (Get-Credential).Password
You will be prompted for a user name and password, because this creates the administrator user for the newly created organization.
In the above example, the "ServicePlanHostingRemap" CSV file should include a line for ProvTest with its ProgramId "HostingSample" and OfferId "5", as shown below.
Once the new organization is created, you can verify the OU created in AD for the new tenant organization under Microsoft Exchange Hosted Organizations, as in the following diagram:
Under the new tenant organization there will be the organization administrator, the RBAC management roles, the default mailbox plan, and the system mailboxes required for this organization, as in the following diagram.
You can also find the created accepted domain, the built-in Exchange roles and role assignments, and the following security groups created under the tenant organization OU under "Hosted Organization Security Groups".
The tenant's administrator is also automatically added to the appropriate groups.
The administrator user is automatically mailbox-enabled, and the following objects are created under the domain naming context.
The tenant's organization configuration container is also created automatically.
To get information about a tenant organization, you can use the Get-Organization cmdlet. The syntax is as follows:
Get-Organization [-Identity <OrganizationIdParameter>] [-DomainController <Fqdn>] [-Filter <String>] [-ForReconciliation <SwitchParameter>] [-ResultSize <Unlimited>]
Finally, to remove a tenant organization, you can use the Remove-Organization cmdlet as follows:
Remove-Organization -Identity Contoso
In the coming post, I will go into some more provisioning tasks related to managing tenant mailboxes.
Finally, I want to mention that it is very important to know that all these manual tasks should be automated for any enterprise, using one of the available third-party control panels. In our region, the Middle East and Africa, we in Microsoft Services have built an MCS Control Panel that we are currently using as a supporting panel in our Microsoft Services Exchange 2010 SP1 Hosting projects in MEA. If you are already working on a Microsoft Services hosting project and are interested in the control panel, just let me know and I will direct you to the proper contact.
Related Posts:
A very interesting feature of SharePoint 2010 is Word Automation Services, which lets you convert between multiple document formats as a batch job. This is very useful in many server-side scenarios, for example when customers need to archive documents or protect them from editing.
I am sharing the following code snippet, which will allow you to convert between multiple document formats.
private void ConvertDocFileToPDF(SPFile filex)
{
    // The conversion job and its settings.
    ConversionJobSettings settings;
    ConversionJob job;

    // Set the conversion settings; the SaveFormat enumeration also allows
    // converting to RTF, XPS and other Word formats.
    settings = new ConversionJobSettings();
    settings.OutputFormat = SaveFormat.PDF;

    // Create the conversion job. The first argument must be the name of the
    // Word Automation Services application as defined in the SharePoint farm.
    job = new ConversionJob(ConfigurationHelper.Instance.DocumentConversionService, settings);

    // Most probably you will want this job to run with system credentials,
    // however, you can use other user tokens.
    job.UserToken = SPUserToken.SystemAccount;

    // Add the file to the conversion job. You can use the SPContext object
    // to get the root URL of your web site instead of hard-coding it.
    string pdfFile = filex.Url.Replace("docx", "pdf");
    job.AddFile("http://<your site URL>/" + filex.Url, "http://<your site URL>/" + pdfFile);

    // Start the conversion job.
    job.Start();
}
You will need to add a reference to the assembly "Microsoft.Office.Word.Server", which you will find in "C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\ISAPI\".
To ensure that your code runs without problems, make sure you have set up a Word Automation Services service application in your SharePoint farm by checking the service applications in Central Administration, or create a new one.
When you want to test it, run the "Word Automation Services Timer Job", which lets you test the conversion immediately. If you cannot find this job, the configuration is messed up and your code will probably not run, so check this job first before running your code.
Happy coding:)
In this post series I am going step by step through System Center Operations Manager 2007 R2, starting from "Pre-Build", continuing with "Installing Operations Manager Database", followed by "Installing Root Management Server" and "Configure GPO for SCOM".
In this post I will cover configuring SQL Reporting Services, which is required for SCOM Reporting, as a step before installing the Data Warehouse database. Remember that SQL Reporting Services requires IIS to be installed first.
Steps are as the following:
Log in as Domain\SQL Service, then:
1. Go to Start, Programs, Microsoft SQL Server 2008, Configuration Tools and click Reporting Services Configuration Manager.
2. Enter the SQL Server or SQL virtual server name as the machine name, select the instance name, and click Connect.
3. Under Report Server Status, the status should show as Started. Then click Web Service URL in the left-hand pane.
4. Confirm that the Report Server Web Service URL is working by clicking the blue URL, click Advanced to confirm the web site configuration, and then click OK to close the Advanced window.
5. Click Database in the left-hand pane, confirm the reporting database exists or create a new one, and then click Apply.
6. Click Report Manager URL in the left-hand pane, confirm the Report Manager URL and its virtual directory are working by clicking the blue Report Manager URL, click Advanced to confirm the site settings, and then click OK to close the Advanced window.
7. Confirm all other optional settings, confirm that SQL 2008 Reporting Services is healthy and ready, and close the Reporting Services Configuration Manager; then start the Data Warehouse database installation in the next step.
In the coming post I will go through the steps to install Data Warehouse Database for Reporting...
In IIS 7.5 there is a new feature called the auto start provider. This feature allows you to load any custom web application resources to allow the application to provide better performance right from the first request.
Usually implementing an auto start provider is a simple task, but what I think needs more clarification is how to write a setup module to configure it using code. So I will first start by implementing a custom auto start provider that implements the interface "IProcessHostPreloadClient". This interface has only one method, Preload, which is implemented as follows.
public class CustomAutoStartProvider : IProcessHostPreloadClient
{
    public void Preload(string[] parameters)
    {
        // Queue the heavy loading work on a background thread.
        ThreadPool.QueueUserWorkItem(new WaitCallback(PreloadThread), null);
    }

    public void PreloadThread(Object obj)
    {
        // Load your custom resources here.
    }
}
It is not required to start a new loading thread, but it is always preferable to put expensive loading code on a separate thread.
Now to configure this new auto start provider, you need to add its configuration to the ApplicationHost.Config file. You need to perform the following steps:
1- Register the new auto start in the section “system.applicationHost/serviceAutoStartProviders”.
2- Change the web application app pool field “startMode” to be “AlwaysRunning”.
3- Change the web application field “serviceAutoStartEnabled” to be “true”.
4- Change the web application field “serviceAutoStartProvider” to the name you used when you registered the auto start provider.
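For orientation, the end state of ApplicationHost.config after these four steps looks roughly like the following fragments from the relevant sections (the app pool name and site name here are illustrative; the provider name and type match the code sample in this post):

```xml
<serviceAutoStartProviders>
  <add name="CustomAutoStartProvider"
       type="My.Custom.Providers.CustomAutoStartProvider,My.Custom.Providers" />
</serviceAutoStartProviders>

<applicationPools>
  <add name="MyAppPool" startMode="AlwaysRunning" />
</applicationPools>

<sites>
  <site name="MySite">
    <application path="/" applicationPool="MyAppPool"
                 serviceAutoStartEnabled="true"
                 serviceAutoStartProvider="CustomAutoStartProvider" />
  </site>
</sites>
```

You can of course make these edits by hand, but the code below automates them so they can run as part of a setup module.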
Here is how you do it in code.
using (ServerManager serverManager = new ServerManager())
{
    Configuration config = serverManager.GetApplicationHostConfiguration();

    // 1- Register the provider in serviceAutoStartProviders, if not already there.
    ConfigurationSection serviceAutoStartProvidersSection = config.GetSection("system.applicationHost/serviceAutoStartProviders");
    ConfigurationElementCollection serviceAutoStartProvidersCollection = serviceAutoStartProvidersSection.GetCollection();
    ConfigurationElement addElement = FindElement(serviceAutoStartProvidersCollection, "add", "name", @"CustomAutoStartProvider");
    if (addElement == null)
    {
        addElement = serviceAutoStartProvidersCollection.CreateElement("add");
        addElement["name"] = @"CustomAutoStartProvider";
        addElement["type"] = @"My.Custom.Providers.CustomAutoStartProvider,My.Custom.Providers";
        serviceAutoStartProvidersCollection.Add(addElement);
    }

    // 2- Set the application pool startMode to AlwaysRunning.
    // (Here the app pool is assumed to have the same name as the site.)
    ConfigurationSection applicationPoolsSection = config.GetSection("system.applicationHost/applicationPools");
    ConfigurationElementCollection applicationPoolsCollection = applicationPoolsSection.GetCollection();
    addElement = FindElement(applicationPoolsCollection, "add", "name", siteName);
    if (addElement == null) throw new InvalidOperationException("Element not found!");
    addElement["startMode"] = @"AlwaysRunning";

    // 3- and 4- Enable auto start on the root application and point it to the provider.
    ConfigurationSection sitesSection = config.GetSection("system.applicationHost/sites");
    ConfigurationElementCollection sitesCollection = sitesSection.GetCollection();
    ConfigurationElement siteElement = FindElement(sitesCollection, "site", "name", siteName);
    if (siteElement == null) throw new InvalidOperationException("Element not found!");
    ConfigurationElementCollection siteCollection = siteElement.GetCollection();
    ConfigurationElement applicationElement = FindElement(siteCollection, "application", "path", @"/");
    if (applicationElement == null) throw new InvalidOperationException("Element not found!");
    applicationElement["serviceAutoStartEnabled"] = true;
    applicationElement["serviceAutoStartProvider"] = @"CustomAutoStartProvider";

    serverManager.CommitChanges();
}
The helper function FindElement code is as below.
private static ConfigurationElement FindElement(ConfigurationElementCollection collection, string elementTagName, params string[] keyValues)
{
    foreach (ConfigurationElement element in collection)
    {
        if (String.Equals(element.ElementTagName, elementTagName, StringComparison.OrdinalIgnoreCase))
        {
            // Check every attribute name/value pair passed in keyValues.
            bool matches = true;
            for (int i = 0; i < keyValues.Length; i += 2)
            {
                object o = element.GetAttributeValue(keyValues[i]);
                string value = null;
                if (o != null)
                    value = o.ToString();
                if (!String.Equals(value, keyValues[i + 1], StringComparison.OrdinalIgnoreCase))
                {
                    matches = false;
                    break;
                }
            }
            if (matches)
                return element;
        }
    }
    return null;
}
Have fun coding.
Using the ASP.NET SQL membership provider to authenticate and authorize calls to WCF services is not an uncommon scenario, but the problem is in the authorization part. Usually, access to WCF service methods is authorized using static, hard-coded attributes decorating the method definitions: you declare that a method is accessible to anyone in a specific role. This is shown below.
[PrincipalPermission(SecurityAction.Demand, Role = "Managers")]
public string GetData(int value)
{
    return string.Format("You entered: {0}", value);
}
Now the more challenging scenario is when you need to configure the access rules in the runtime like from a database or configuration files rather than in the service code.
To implement this scenario you would need to implement a custom WCF service behaviour along with a custom service authorization manager.
First the behaviour goes like this.
public class SqlProviderSecurityBehavior : IServiceBehavior
{
    private ISqlProviderSecuritySettings settings;

    public SqlProviderSecurityBehavior(ISqlProviderSecuritySettings settings)
    {
        this.settings = settings;
    }

    #region IServiceBehavior Members

    public void AddBindingParameters(ServiceDescription serviceDescription, System.ServiceModel.ServiceHostBase serviceHostBase, System.Collections.ObjectModel.Collection<ServiceEndpoint> endpoints, System.ServiceModel.Channels.BindingParameterCollection bindingParameters)
    {
        if (this.settings.MembershipProvider != null)
        {
            ServiceCredentials cr = bindingParameters.Find<ServiceCredentials>();
            if (cr == null)
            {
                cr = new ServiceCredentials();
                bindingParameters.Add(cr);
            }

            // Set the membership provider.
            SqlMembershipProvider sqlMembership = new SqlMembershipProvider();

            if (!string.IsNullOrEmpty(this.settings.MembershipProvider.ConnectionString))
            {
                // If specifying the actual connection string, use reflection to set the value.
                Type t = typeof(SqlMembershipProvider);
                FieldInfo fi = t.GetField("_sqlConnectionString", BindingFlags.NonPublic | BindingFlags.Instance);
                fi.SetValue(sqlMembership, this.settings.MembershipProvider.ConnectionString);
                sqlMembership.ApplicationName = this.settings.MembershipProvider.ApplicationName;
            }
            else
            {
                // Use the Initialize method when specifying a connection string name; no reflection needed.
                NameValueCollection config = new NameValueCollection();
                config.Add("connectionStringName", this.settings.MembershipProvider.ConnectionStringName);
                config.Add("applicationName", this.settings.MembershipProvider.ApplicationName);
                sqlMembership.Initialize("SqlMembershipProvider", config);
            }

            cr.UserNameAuthentication.UserNamePasswordValidationMode = UserNamePasswordValidationMode.MembershipProvider;
            cr.UserNameAuthentication.MembershipProvider = sqlMembership;
        }

        if (this.settings.RoleProvider != null)
        {
            // Set the role provider.
            serviceHostBase.Authorization.PrincipalPermissionMode = PrincipalPermissionMode.UseAspNetRoles;
            SqlRoleProvider sqlRoleProvider = new SqlRoleProvider();

            if (!string.IsNullOrEmpty(this.settings.RoleProvider.ConnectionString))
            {
                Type t = typeof(SqlRoleProvider);
                FieldInfo fi = t.GetField("_sqlConnectionString", BindingFlags.NonPublic | BindingFlags.Instance);
                fi.SetValue(sqlRoleProvider, this.settings.RoleProvider.ConnectionString);
                sqlRoleProvider.ApplicationName = this.settings.RoleProvider.ApplicationName;
            }
            else
            {
                NameValueCollection config = new NameValueCollection();
                config.Add("connectionStringName", this.settings.RoleProvider.ConnectionStringName);
                config.Add("applicationName", this.settings.RoleProvider.ApplicationName);
                sqlRoleProvider.Initialize("SqlRoleProvider", config);
            }

            serviceHostBase.Authorization.RoleProvider = sqlRoleProvider;
            serviceHostBase.Authorization.ServiceAuthorizationManager = new SqlAuthorizationManager(sqlRoleProvider);

            // Store as a service host extension.
            RoleProviderServiceHostExtension ext = new RoleProviderServiceHostExtension(sqlRoleProvider);
            serviceHostBase.Extensions.Add(ext);
        }
    }

    public void ApplyDispatchBehavior(ServiceDescription serviceDescription, System.ServiceModel.ServiceHostBase serviceHostBase)
    {
    }

    public void Validate(ServiceDescription serviceDescription, System.ServiceModel.ServiceHostBase serviceHostBase)
    {
    }

    #endregion
}
As you can see, this WCF behaviour sets the service authorization manager to our custom class, which is implemented as follows.
public class SqlAuthorizationManager : ServiceAuthorizationManager
{
    private SqlRoleProvider sqlRoleProvider;

    public SqlAuthorizationManager(SqlRoleProvider _sqlRoleProvider) : base()
    {
        sqlRoleProvider = _sqlRoleProvider;
    }

    protected override bool CheckAccessCore(OperationContext operationContext)
    {
        bool baseResult = base.CheckAccessCore(operationContext);

        // For mex support (starting the WCF service, etc.).
        // NOTE: Other than for service startup this will NOT be true, because the WCF
        // configuration dictates that WindowsCredentials must be sent and anonymous
        // users are NOT allowed.
        if (operationContext.ServiceSecurityContext.IsAnonymous) return true;

        // Extract the identity of the current context user making the call to this service.
        IIdentity identity = operationContext.ServiceSecurityContext.PrimaryIdentity;

        // Before proceeding, throw an exception if the user has not been authenticated at all.
        if (!identity.IsAuthenticated)
            throw new SecurityTokenValidationException("Authenticated user is required to call this service.");

        string[] roles = sqlRoleProvider.GetRolesForUser(identity.Name);
        if (roles.Length <= 0)
            throw new System.ServiceModel.Security.SecurityAccessDeniedException("User is not part of the service account role.");

        if (!roles.Contains("The role you need to check comes here or can be dynamic"))
        {
            // Look up the custom authorization rules in a custom table of your choosing.
            ASPProvidersEntities entities = new ASPProvidersEntities();
            var userrule = (from serviceAuth in entities.ServiceAuthorizations
                            where serviceAuth.ServiceContractName == operationContext.EndpointDispatcher.ContractName
                                  && serviceAuth.Username == identity.Name
                            select serviceAuth).SingleOrDefault();
            if (userrule == null)
                throw new System.ServiceModel.Security.SecurityAccessDeniedException("User is not authorized to call this service.");
        }

        return baseResult;
    }
}
Happy coding :)
Symptoms:
When you install Forefront Protection for Exchange on an Exchange 2010 Hub Transport server, the Microsoft Exchange Transport service stops after the installation, and when you open the Forefront console, the console cannot be loaded.
Reason:
If the WinHTTP Web Proxy Auto-Discovery Service is not running during the installation, Forefront Protection for Exchange does not get installed successfully.
Workaround:
Uninstall Forefront Protection for Exchange, start all Exchange services, and make sure that the "WinHTTP Web Proxy Auto-Discovery Service" is not disabled and is running. Then install Forefront Protection for Exchange again.
When you try to migrate a Windows XP SP2 computer with ADMT, the migration fails and the error "ERR2:7711 Unable to retrieve the DNS hostname for the migrated computer 'v-test-mig.source.com'. The ADSI property cannot be found in the property cache. (hr=0x8000500d)" is logged in the migration log.
Make sure that the patch discussed at http://support.microsoft.com/kb/944043 is installed on the target XP computer and start the migration of computer again.
Cheers .....
Migrating forms-based authentication sites from SharePoint 2007 to SharePoint 2010 using the database attach method applies the general principles of a database attach upgrade; however, it includes some additional steps to cater for the membership and role providers.
The steps for this migration are as follows:
There are already several standard and well-documented methods for creating an NLB cluster for the BAM Portal, like the one here. The problem with all these methods is that they tend to be very complex and long; you usually end up missing some steps, and it will not work.
What I was able to find is a very simple and short way to configure the BAM Portal on several servers using Windows NLB.
Use the BAM Management Utility to get the current BAM configuration. To do this, click Start, click Run, and type drive:\Program Files\Microsoft BizTalk Server 2010\Tracking\bm get-config -FileName:MyConfig.xml.
Replace the local host name with the name of the NLB cluster. To do this, click Start, click Run, and type notepad drive:\Program Files\Microsoft BizTalk Server 2010\Tracking\MyConfig.xml.
For hardware-based NLB only, verify the configuration file has the following:
<GlobalProperty Name="BAMVRoot">http://<NLB IP Address>:portname/BAM</GlobalProperty>
Modify the following line to point to the NLB cluster by replacing the computer name (machinename) with the cluster name:
<GlobalProperty Name="BAMVRoot">http://machinename:portname/BAM</GlobalProperty>
Save the new configuration. To do this, click Start, click Run, and type drive:\Program Files\Microsoft BizTalk Server 2010\Tracking\bm update-config -FileName:MyConfig.xml.
In my previous post, I mentioned that sometimes you need to create dynamic queries for SQL Server, and that one of the options is to write queries that dynamically create the needed queries for you.
Although this might seem unclear when described in words, an example can clarify the point.
Example:
You need to create a set of insert statements that insert values in the “Navnodes” table. These statements should include an insert statement for each row in another table “Webs” if that row has a specific value in “SiteID” column.
The insert statements should look like this:
insert into [navnodes] values ('6DD2A42C-28D3-4026-801A-BF34B1102472','F34195CC-6DB1-40C1-B688-014BD519887B',1025,0,0,0,1,'',NULL,'Quick Launch','01-01-2011',NULL,1,1,0)
insert into [navnodes] values ('6DD2A42C-28D3-4026-801A-BF34B1102472','F34195CC-6DB1-40C1-B688-014BD519887C',1025,0,0,0,1,'',NULL,'Quick Launch','01-01-2011',NULL,1,1,0)
insert into [navnodes] values ('6DD2A42C-28D3-4026-801A-BF34B1102472','F34195CC-6DB1-40C1-B688-014BD519887D',1025,0,0,0,1,'',NULL,'Quick Launch','01-01-2011',NULL,1,1,0)
insert into [navnodes] values ('6DD2A42C-28D3-4026-801A-BF34B1102472','F34195CC-6DB1-40C1-B688-014BD519887E',1025,0,0,0,1,'',NULL,'Quick Launch','01-01-2011',NULL,1,1,0)
insert into [navnodes] values ('6DD2A42C-28D3-4026-801A-BF34B1102472','F34195CC-6DB1-40C1-B688-014BD519887F',1025,0,0,0,1,'',NULL,'Quick Launch','01-01-2011',NULL,1,1,0)
insert into [navnodes] values ('6DD2A42C-28D3-4026-801A-BF34B1102472','F34195CC-6DB1-40C1-B688-014BD5198880',1025,0,0,0,1,'',NULL,'Quick Launch','01-01-2011',NULL,1,1,0)
Note that the number of insert statements you need depends on how many rows in Webs match the SiteID value, which could be in the hundreds or thousands; thus, it is not practical to create them manually.
A simple query that would return the list of the insert statements needed is the following:
select 'insert into [navnodes] values(''<Site Collection ID>'',''' + cast(id as varchar(70)) + ''',1025,0,0,0,1,'''',NULL,''Quick Launch'',getdate(),NULL,1,1,0)'
from Webs where SiteId = '<Site Collection ID>'
Notice that by running this select statement, the output will be rows of the required format containing the needed insert statements. We can then redirect the output to a text file and use it as our SQL script.
This method can be used to create any queries you want, whether insert, update, or delete, with little effort, and it scales to as many records as you need.
According to the TechNet article http://technet.microsoft.com/en-us/library/cc424952.aspx, "Host-named site collections provide a scalable Web hosting solution with each site collection assigned to a unique DNS name. In a Web hosting deployment, each host-named site collection has its own vanity host name URL, such as http://customer1.contoso.com, http://customer2.contoso.com, or http://www.customer3.com".
A host-named site collection is a means to give each site collection its own DNS name, rather than having all site collections follow the URL of the web application (path-based site collections). The advantages and limitations of this approach, as well as an overview of host-named site collections, can be found at http://technet.microsoft.com/en-us/library/cc424952.aspx
A sample PowerShell script to create a web application for host-named site collections, and to create a host-named site collection under it, is as follows:
$webTemplate = "STS#1"
$hostHeaderWebApp = "http://<WebAppName>:<WebAppPort>"
$hostHeaderWebAppPort = <WebAppPort>
$AppPoolName = "<AppPoolName>"
$AppPoolAccount = "<AppPoolAccount>"
$firstHostNamedSiteCollection = "<URL of Host Name Site Collection>"
$firstHostNamedSiteCollectionContentDB = "<Content Database for Site Collection>"
$webApp = New-SPWebApplication -ApplicationPool $AppPoolName -ApplicationPoolAccount $AppPoolAccount -Name $AppPoolName -Port $hostHeaderWebAppPort -AuthenticationMethod NTLM
New-SPContentDatabase -Name $firstHostNamedSiteCollectionContentDB -WebApplication $hostHeaderWebApp
$SpSite = New-SPSite $firstHostNamedSiteCollection -ContentDatabase $firstHostNamedSiteCollectionContentDB -OwnerAlias <OwnerDomainAccount> -HostHeaderWebApplication $hostHeaderWebApp -Template $webTemplate
Users might lose access to their EFS private keys in cases like corrupted user profiles, hard disk failure, OS reinstalls, etc. Here are some important facts about EFS keys before we start the comparison:
1- A time-valid certificate and private key need to be available for encryption/decryption of data.
2- The client only checks the revocation status of the certificate when encrypting, or when adding users to the EFS ACL. For decryption or data access, revocation is not checked. Accordingly, revoking a certificate does not mean that the user has lost access to data; it only means that he/she cannot encrypt more files with it. The only way for it to become unusable is if it expires.
The Key Recovery Agent (KRA) is, I would say, the most systematic and controlled key fault tolerance method available. The reason is that for key recovery to occur, the following conditions need to be fulfilled:
1- One or more users need to have a valid KRA certificate and private key.
2- Key archival needs to be enabled on the issuing CA and on the certificate template.
3- By default, a certificate manager needs to approve the issuance of the KRA certificate.
4- The CA manager(s) needs to manually extract a file from the CA database that contains the intended private key, using the command Certutil -getkey.
5- The KRA needs to take that file and decrypt it using his/her KRA private key.
From an operational perspective, it is easy to apply governance to the KRA process especially if proper segregation of duties is applied. However, it is definitely the most time consuming method, and practically speaking, it might take the EFS Key owners too long to access files when they don’t have access to their private keys. The downside of KRA is that, data encrypted with old EFS keys cannot be recovered since they are not archived in the CA.
The DRA is a shadow user that has access to EFS encrypted data, along with data owners and the designated users who have access to the data.
The issue with a DRA is that it is hard to govern the data recovery process. Practically speaking, if the DRA has access to the data location, he/she can access it maliciously. One way to govern the DRA is to use Forefront Identity Manager, or to use smart card n-to-m authentication for the DRA user. The upside of the DRA is that it provides the fastest recovery time.
Credential roaming could, in a way, be looked at as a key fault tolerance option, because it replicates the local key store from the user profile on the computer to the user object in AD DS. This is a good option and involves very fast key recovery. The downside, however, is that it does not help when key deletion is intentional, since the local key store will replicate the deletion action. In cases of hard disk failure or profile corruption, on the other hand, there is a big chance that the key on the user object will be retained, since the deletion would most probably not be replicated.
If you are developing an ASP.NET application and need to place a password in the configuration file, you have seen this condition. Usually we want to store the password encrypted, and one way that has existed since ASP.NET 1.1 is to place the password in the registry, encrypted with the aspnet_setreg tool, and then refer to it as a registry key in the configuration file. I have used this before with no problems. BUT today I was faced with the weirdest issue I could ever think of: I used this in an environment and it kept telling me that the user name or password was not valid, although I was using the local machine administrator. I banged my head against the wall several times until I tried changing the user's password, and it worked.
So I traced the error and found that the problem happens if the password contains a " (double quote).
So, bottom line: do not use double quotes in your password if you intend to use an encrypted version of this password.
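For context, the pattern in question works by first encrypting the credentials into the registry with something like aspnet_setreg -k:SOFTWARE\MyApp\identity -u:MYDOMAIN\appuser -p:MyPassword (the registry key path here is hypothetical; pick your own), after which the configuration file references the registry location instead of the clear-text values:

```xml
<identity impersonate="true"
          userName="registry:HKLM\SOFTWARE\MyApp\identity\ASPNET_SETREG,userName"
          password="registry:HKLM\SOFTWARE\MyApp\identity\ASPNET_SETREG,password" />
```

The double-quote caveat above applies to the password value you pass to aspnet_setreg, so sanitize the account password before going down this route.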
After creating host-named site collections as per the script in my previous article, you might need to create a managed path and have separate site collections under it. [Note the TechNet article on managed paths for host-named site collections, as they differ from managed paths under path-based site collections: http://technet.microsoft.com/en-us/library/cc288637.aspx#section5]
The script to create a managed path under a host-named site collection and to create a site collection under it is as follows:
$ManagedPathLocation = "sites"
$ManagedHostNamedSiteCollection = "http://<URL of Host Name Site Collection>/Sites/TestManaged"
New-SPManagedPath $ManagedPathLocation -HostHeader
New-SPSite $ManagedHostNamedSiteCollection -OwnerAlias <OwnerDomainAccount> -HostHeaderWebApplication $hostHeaderWebApp -Template $webTemplate