I often see customers struggle to make use of the export mailbox features in Exchange Server 2010. I suspect this is because it is an area that has changed in the last few years and there is a lot of conflicting information online that doesn't clearly state which version of Exchange the information is for. I thought I would write this to explain how to export mailbox data in Exchange Server 2010 RTM and the changes that were made in SP1.
With Exchange 2010 we introduced RBAC to control access rights, which means a management role must be assigned before the export cmdlet can be used. Exchange 2010 also moved to 64-bit only code, so a 64-bit installation of Outlook is required before a mailbox can be successfully exported…
Requirements
New-ManagementRoleAssignment -Role "Mailbox Import Export" -User "<username>"
New-ManagementRoleAssignment -Role "Mailbox Import Export" -SecurityGroup "<usergroup>"
After assigning permissions it may be necessary to log off and back on, or to close and re-open your PowerShell window, before the new role assignment takes effect.
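To confirm the assignment has been picked up, you can list the current holders of the role from the shell (a quick check, assuming the default role name):

```powershell
# List who currently holds the Mailbox Import Export role
Get-ManagementRoleAssignment -Role "Mailbox Import Export" |
    Format-Table RoleAssigneeName, RoleAssignmentDelegationType
```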
Export-Mailbox -Identity john@contoso.com -PSTFolderPath C:\PSTFiles\john.pst
With the release of Exchange Server 2010 SP1 the Export-Mailbox cmdlet was replaced with New-MailboxExportRequest. It has similar permission requirements to Export-Mailbox, but it does not require Outlook to be installed.
New-MailboxExportRequest -Mailbox AylaKol -FilePath "\\SERVER01\PSTFileShare\Ayla_Recovered.pst"
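Export requests run asynchronously on the Mailbox Replication Service, so you will usually want to check on their progress after submitting one. A quick sketch of the sort of follow-up commands I use:

```powershell
# List all export requests and their current state
Get-MailboxExportRequest

# Detailed progress (percent complete, items transferred) for one mailbox's request
Get-MailboxExportRequest -Mailbox AylaKol | Get-MailboxExportRequestStatistics

# Clean up completed requests once the PST has been verified
Get-MailboxExportRequest -Status Completed | Remove-MailboxExportRequest
```

Remember that completed requests are not removed automatically, so tidying them up is worth building into any export routine.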
I have been involved with a few escalations recently (I don't even work in support!). The cases have mostly been due to Jetstress tests failing – typically the relationship between customer and storage vendor then becomes more hostile, and eventually someone points the finger at Jetstress or Exchange… which is often where I get involved.
A recent trend has been to blame Jetstress for not taking passive workload into account, i.e. Jetstress has no concept of passive databases and treats all databases as active. The theory goes that since a passive database generates a much lower IOPS workload than an active database, testing them as if they were all active is overkill.
I generally respond to this by reminding them that we need to validate that every database copy that could become active has sufficient storage performance should that event arise, and we validate this by testing each copy in its active state.
Having said all of that I decided to test to see what the difference actually is between active and passive database workload requirements. To do this I used a 3 node DAG in my test lab with a single database configured. I made MBX2 the active copy of the database and MBX1 and MBX3 replica copies. I then configured a loadgen test to generate some workload and observed the results…
You can clearly see here that the IOPS workload on all of the DAG nodes is similar – it is certainly not a viable proposition to suggest that replica database copies always require significantly lower IOPS than active copies.
Food for thought? The moral of the story is simple: if you are going to go to the expense of configuring a database copy, make sure that it has sufficient storage performance.
Recently one of our partners highlighted an issue in the current version of Jetstress that means a test may run in performance mode when in fact it should be running in stress mode.
Put simply, performance mode validates the storage performance against the standard parameters recommended by the Exchange product group. A stress test is intended for longer duration tests, most commonly in shared storage environments where the impacts of other infrastructure may interfere with Exchange performance.
Performance Test “Strict Mode” (<= 6 hour test)
Stress Test “Lenient mode” (> 6 hour test)
With the current version of Jetstress (14.01.0225.017) if you configure your test via the GUI to run for >6 hours, the test type may appear as a performance test and not a stress test. You can see this in the test summary before you run the test.
This means that your “stress” test will be subjected to the strict mode validation criteria and may fail due to the tighter test parameters.
Note: This issue does NOT affect tests run via JetstressCmd.exe
There are two workarounds for this issue. The underlying problem is that the GUI does not accurately change the test type setting based on test duration. When a new test is defined (i.e. an existing XML file is not loaded), the test will always run in performance mode, regardless of the test duration you set. However, if you load an existing XML file in which the duration was set to 6 hours or more, the test will run in stress mode, even if you modify the duration to be less than 6 hours in the GUI after loading the file.
To ensure that the test is running in stress mode rather than performance mode, verify that the Database read latency thresholds show as…
To verify that your test is running as a stress test, ensure that the Test Type shows as “Stress” on the summary page…
Our engineering team has identified the cause of this issue and we expect a fix to be included in the next release of Jetstress.
Neil.
Over the past 18 months I have been dealing with more and more cloud engagements. One thing that crops up during the planning phase of all of these engagements is network link latency and bandwidth requirements.
Bandwidth prediction is something I am working on in the background, but network link latency seemed like a mystery! Everywhere I looked and everyone I spoke to had a different opinion on the “maximum latency” for Outlook clients.
So… I decided to devise a test that would let me model the impact of network link latency on Outlook client performance. My lab already had a network emulator installed that was capable of simulating link latency, so all I needed was a script to automate some common tasks in Outlook and record how long the script took to run – easy, right?
I already had some scripts for Outlook 2010 from my bandwidth prediction work, so I decided to make use of them. The scripts were written with AutoIT which is a regular part of my lab testing toolkit these days!
Software versions in use for all tests…
Note: All tests began from a cold start, i.e. Outlook was not running.
Note: All clients were tested from 0ms added latency to 1000ms in steps of 100ms, and then from 1000ms up to the failure point in steps of 500ms. Failure was deemed to have occurred when any single test operation failed.
Test 1. Sending
Test 2. Opening
Test 3. Delegates
Test 4. Out of Office
Test 5. Calendar
Notes:
I decided to run tests 1-5 sequentially for three types of client connections
Note: All latency values quoted are round-trip times (RTT) as reported by ping.exe from a command line.
What is evident from this set of results is that the impact of network link latency on Outlook client performance is broadly linear, certainly when we consider a wide range of functions. Cached mode dramatically reduces the user wait time for send/receive functions, but it was obvious watching the test run that whenever actions were requested that ran outside of the OST file, such as delegate access or out-of-office requests, large delays were experienced at higher latency values. I decided to set the maximum test run time limit at 200 seconds, which was double the test run time with no additional latency applied. Client behaviour at a 200 second test duration was the poorest performance level that I perceived to be acceptable for this test: I judged that if any single operation took 10 seconds or more to display a dialog box, or the client appeared to hang, then it was unacceptable for the end user.
In this combined test the limiting task was working with the delegate permissions, which was taking approximately 9 seconds to complete at the maximum test run time. Accessing delegate permissions was similar amongst all connection modes.
The results for Outlook Anywhere were confusing initially, although the majority of the difference between Cached and Cached-OA is due to the additional time it takes for an Outlook Anywhere connected client to establish its initial connection. Typically an OA client will take around 10 seconds longer to establish a connection compared to a normal RPC MAPI connection, rising to around 30 seconds at 800ms latency. Once connected, Outlook Anywhere and Cached mode performed very similarly.
The test results for me were quite interesting. Prior to this testing I had assumed that Outlook's tolerance for link latency was much worse than the test results show. This is no doubt due to changes in adaptive TCP windowing on Windows 7 plus improvements in Outlook 2010. The bottom line is clear though: Outlook 2010 on Windows 7 is able to tolerate quite high network link latency values before performance begins to suffer noticeably. In the case of Cached mode and Outlook Anywhere, both clients were able to tolerate 500ms round-trip-time latency without problem, and if your main tasks are sending and receiving mail only, cached mode actually provided a good experience at up to 1000ms round-trip-time latency. Online mode was far more sensitive to network link latency, as expected, and performance became noticeably slower at 190ms round-trip-time and above. Online mode was still usable above 200ms latency but felt sluggish.
One thing that I didn't account for during this test was that the clients would behave differently as latency increased. My automated script initially didn't cope very well when Outlook in online mode was subjected to high latency values: the long pauses before dialog boxes were displayed made things quite tricky at times. AutoIT is extremely capable though, which really helped in this testing!
The following failure points were noted..
Note: Although the cached mode clients failed at 2000ms for delegate access, the client continued to provide usable mail performance. I plan to run further tests in cached mode to see where the break point is for mail-only users, since this represents the majority of users. Keep watching my blog for more information on this test.
This test was designed to isolate the effects of network link latency on client performance, however the reality is that network link latency is only one part of the Outlook client performance experience. In practice it is more likely that poor client performance is caused by local hardware limitations or insufficient resources allocated on the Exchange Server itself. This testing does show however, that consolidating Exchange Server resources into central locations is viable with Windows 7 and Outlook 2010.
So… this is an interesting question that came to mind while I was performing this testing. We can provide a good user experience in normal day-to-day tasks over quite latent network links, but what happens if I need to rebuild my OST file? OST file resync is a fairly common occurrence; even if we only consider SATA HDD failure (typically around 5% AFR), an organisation of 30,000 laptop users will see approximately 1,500 OST resyncs per year (7.5 per working day!) due to HDD failure alone. I was curious about how long it would take to synchronise an OST file under specific latency conditions, so I ran some tests…
I created a 100MB mailbox then timed how long that took to synchronise down as latency increased, I then calculated how many seconds it took per 1MB of data. My plan was to use this to extrapolate how long larger mailboxes would take to synchronise down.
Note: I had initially planned on doing this experimentally, but then the reality of having to run 10 or more tests, each taking 20 hours or longer, set in and I decided to deal with it theoretically. I may run a few tests to validate the theory when I get time though.
Using this data I was able to extrapolate how long it would take to fully synchronise down a 25GB mailbox (default Office 365 quota limit). Given the assumption that there is always sufficient network bandwidth available, the data looks as follows…
This data suggests that even if a cached mode client were running over a connection with 500ms round-trip-time latency, it would be possible to fully resynchronise a 25GB OST file within 43 hours. This would require a throughput of 1.34Mbits/sec.
One of the customers I am working with is considering a 5GB quota within their Office 365 tenant so I promised them I would consider their case specifically, so here is the data for a 5GB OST sync.
Taking the same 500ms point, this prediction suggests that their 5GB OST file would be fully synchronised after 8.6 hours, assuming the client had 1.33Mbits/sec of network bandwidth available.
To perform your own predictions using this data, use the following formula…
T = M × (( 0.0114 × L ) + 0.3208 )
Where T is the predicted total synchronisation time in seconds, M is the mailbox size in MB, and L is the round-trip network latency in ms.
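As a sanity check, the prediction can be sketched as a small function (units as above; the coefficients come from my 100MB test data):

```powershell
# Predict full OST synchronisation time from mailbox size and link latency
function Get-OstSyncEstimate {
    param(
        [double]$MailboxSizeMB,   # M: mailbox size in MB
        [double]$LatencyMs        # L: round-trip latency in ms
    )
    $seconds = $MailboxSizeMB * ((0.0114 * $LatencyMs) + 0.3208)
    [math]::Round($seconds / 3600, 1)   # return hours
}

Get-OstSyncEstimate -MailboxSizeMB (25 * 1024) -LatencyMs 500   # ~42.8 hours
Get-OstSyncEstimate -MailboxSizeMB (5 * 1024)  -LatencyMs 500   # ~8.6 hours
```

These two calls reproduce the 25GB and 5GB figures discussed above.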
This testing sparked a few thoughts that I just didn't have time to follow up (and I was being harassed by various colleagues and customers to release the data I had!)… anyway, the following are areas I plan to research further when time allows…
For me this test data suggests that network latency is not as critical for client performance on Windows 7 with Outlook 2010 cached mode as it used to be on legacy operating systems. It also shows that the adaptive TCP windowing in Windows 7 goes some way to alleviating the negative effects of network latency. Certainly for my current projects it has allowed me to consolidate more mailboxes into a central location than I would have initially thought possible, and the client involved tells me that end user performance is good.
If you are considering running Outlook clients over high latency connections I would urge you to perform a full user pilot to establish your users' tolerance of client performance. Also, some Outlook plugins can have a dramatic effect on how the client performs under very latent conditions.
Updated: 17/06/2011
Last Updated : 21/06/2011
The field guide is released at major intervals on the Exchange Team Blog, however I have decided to keep dot release updates here to ensure that the very latest information is available to those that do a lot of Jetstress testing (yes, there are some of us that do a lot of stress testing!)
I should point out that the version released on EHLO will have been through technical review by the Exchange product group and the CXP team to validate that it is accurate, so if this is your first time running through a Jetstress test I would recommend using the version on EHLO.
Note: I have moved the un-reviewed version to SkyDrive since that's where I keep the primary copy for edit. If you have problems accessing this copy please let me know directly.
Un-reviewed changes since last issue
Neil Johnson [neiljohn@microsoft.com] Senior Consultant, Microsoft Consulting Services, UK
I have been involved with a number of Office 365 deployments over the last 12 months, and one thing that they have all had in common is struggling to find good sources of information. Things are generally documented… somewhere… it's just a matter of finding what you need!
I spotted that the deployment guide for Office 365 Enterprise Plans was issued again this month and I wanted to raise awareness. If you are beginning an Office 365 deployment and need a starting point… this is it
Throttling policies in Exchange Server 2010 have been an often discussed and much misunderstood part of Exchange over the last few years. Especially with Exchange Online! However, Michael Mainer has written a great article about Exchange Throttling policies on his blog.. well worth a read if you want to understand what all of those policy options really do
http://blogs.msdn.com/b/exchangedev/archive/2011/06/23/exchange-online-throttling-and-limits-faq.aspx
So… with all of the launch fuss I failed to notice that the service descriptions for Office 365 had also been released as non-beta for the first time! When I am working with new customers who want to know what Office 365 is in more detail than the shiny marketing slides can explain, I generally point them at the service descriptions – they contain enough information (at a high level) to determine if Office 365 is right for you and your company. In my opinion these documents are a must-read for anyone and everyone considering Office 365… avoid the marketing and competitive FUD with Google and arm yourself with the real information you need…
One thing that I often waffle on about is gathering good quality user profile data before beginning an Exchange design. I was out in Redmond recently and a new Exchange consultant asked me a great question: “OK, so we know that gathering good quality user profile data is important… but where is it documented how to go about gathering the correct data?”
I thought about the question for a while and after searching online and finding pretty much nothing I decided it would be a good topic for a blog article… so here it is
Basically we need sufficient information to complete our performance and scaling design work. Currently this usually consists of filling out the Mailbox Role Calculator (and another calculator that is coming soon uses the same values). The primary values we need are as follows…
Now, that is an interesting question. Historically we would use the Exchange Server Profile Analyzer (EPA) for Exchange Server 2003 and Exchange Server 2007, however the EPA relies on WebDAV and that's obviously missing in Exchange Server 2010… gathering user profile data from an Exchange 2010 server is involved though and is a topic for another day
If you are analyzing an Exchange 2003 or 2007 organisation, the EPA is the easiest way to gather user profile data..
The following prerequisites must be met
The EPA is available for download in two versions
Install the chosen MSI file on a suitable server. Often the best solution is to install the EPA onto an Exchange Server or Admin Console.
To run the EPA an account is required that has the following rights
Instructions to create this account are detailed below…
Creating the EPA Account on Exchange Server 2003
The following two procedures are for configuration tasks that use the Exchange System Manager (Exchange Server 2003). They outline how to configure an account as an Exchange View-Only Administrator and an account that has full mailbox permissions.
To configure an account as an Exchange View-Only Administrator
To configure an account that has full mailbox permissions on all mailboxes stored on an Exchange server or on an individual mailbox store
Creating EPA Account on Exchange Server 2007
In Exchange Server 2007, tasks are performed through the Exchange Management Console and the Exchange Management Shell.
The following cmdlets will assign the account "Send As" and "Receive As" permission on the server.
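The original command listing was not reproduced here; the following is a sketch of the sort of command involved (the server name EX2K7E02 and account epauser are the example names used later in this article):

```powershell
# Grant the EPA account Send-As and Receive-As rights at the server level
# so that it can open every mailbox on EX2K7E02
Get-MailboxServer EX2K7E02 |
    Add-ADPermission -User epauser -AccessRights ExtendedRight `
        -ExtendedRights "Send-As", "Receive-As"
```

Run this once per mailbox server that will be included in the scan.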
NOTE: It may take a couple of hours before the new account rights are reflected by the Exchange server.
The Exchange Server Profile Analyzer should be available from the Windows Start Menu, under Exchange Server. Once started, the GUI will walk you through configuring a scan.
To begin click “Connect to Active Directory”
Configure the correct DC and user account to connect…
Configure the scan appropriately. For user profile analysis it is typically not required to gather individual mailbox statistics and this data just makes the report difficult to open once it has been generated. It can also be worthwhile to scope the scan timeframe to the last few months or weeks to reduce the amount of time that the scan takes.
Once scanning begins, it may take several hours or days to complete…
Speeding up the EPA scanning process
By default the EPA will scan servers one at a time and mailboxes one at a time, at a rate of between 500KB/sec and 1MB/sec. This default behaviour is designed to reduce any performance impact on the Exchange servers during the scan; however, on large deployments the scan may take a very long time to complete.
The configuration options to increase the number of servers and mailboxes that are scanned in parallel are hidden within the XML configuration file (to generate this XML file, simply configure the scan via the GUI and then save the configuration without running the scan).
A default scan configuration file…
<UserSetting>
<System StatisticsPerMailbox="No"/>
<Account DC="EX2K7DC1" User="epauser" />
<Log Level="Info" />
<TimeFrame From="01/08/2011 00:00:00" To="02/08/2011 11:59:59" />
<Servers>
<Server Name="EX2K7E02">
<MailboxStore Name="Mailbox Database" StorageGroup="First Storage Group" />
</Server>
</Servers>
</UserSetting>
To increase the scan speed, modify the following attributes on the “System” element…
<System
StatisticsPerMailbox="No"
ServerThread="2"
MailboxThreadPerServer="2" />
Once the scan is completed the EPA will save a copy of the report. The best way to view the report is via a web browser. Navigate to %appdata%\Microsoft\EPA and open your report HTML file.
The report screen shows the following information…
From this report, we need to expand the “Message Size” section to expose the value recorded for “Aggregates of message size across all messages: avg: “
Additionally we need to expand the “Message Frequency” section and record the “Aggregates of messages received per weekday including dumpster” and “Aggregates of messages sent per weekday including dumpster”
Another often useful metric is for calendaring and availability..
In this example, my lab users' profile is as follows…
Running the Exchange Profile Analyzer is a fairly trivial and painless task; however, it can often take several days or weeks to complete the scan of a large environment. It is absolutely vital to discover this information as accurately as possible before beginning any Exchange scaling work with the Mailbox Role Calculator.
Since Exchange 2010 launched I have had many discussions with customers over the use of archiving technology that removes data from Exchange and replaces it with a message stub. This type of technology is fairly common throughout the messaging world and used to be pretty much mandatory back with Exchange 2003 when storage was complex and expensive.
The Microsoft recommendation for Exchange Server 2010 is to use large mailboxes (10GB+) and to store this data on low cost storage. The I/O improvements in store.exe for Exchange Server 2010 allow the use of quite low end disk spindles and also storage solutions without raid given enough database availability group copies are deployed. This is obviously a fairly significant shift from the days of Exchange Server 2003 where we did not even have a robust HA technology, never mind data availability built in to Exchange!
So, my experience is that most customers buy into the large mailbox and cheap storage message pretty quickly and design their Exchange solutions accordingly. However, where things get less straightforward are those customers with a significant and potentially quite recent investment in archiving technology. These customers want to move to Exchange Server 2010 and bring their archiving technology with them…
The problem here is that Exchange Server 2010 received the largest change to the ESE store schema since Exchange 5.5. These changes encourage fewer, larger I/Os rather than many smaller I/Os, to make deployment on SATA spindles viable; one of the ways this was achieved was to increase the database page size to 32KB, from 8KB in Exchange 2007 and 4KB in Exchange 2003. This change in database page size altered the behaviour of archive stubbing technology: chiefly, the archiving process fails to recover the expected amount of white space after removing data from the mailbox databases. James Carroll has written a great blog post explaining this issue here.
So… where does this leave us? The archiving software vendors are looking into this issue and are trying to resolve it – I would assume that they will simply set a larger minimum threshold for message size before stubbing the message, however it does raise the question if this practice of removing data from Exchange is actually adding any significant benefit or just adding complexity and cost?
Way back when I was responsible for running a messaging service, I liked archiving technology because it reduced my storage costs and improved my search capability. Sure, it was seriously expensive software and was never exactly reliable, but it served a purpose, and I was happy to pay both the monetary and operational costs since I calculated it was still better than trying to scale my messaging system to cope with the storage demands I had at the time. Given the changes in Exchange Server 2007, and even more so in Exchange Server 2010, I don't think the same approach remains valid: my storage is much cheaper, I no longer require additional indexing for client search, and I have legal hold and online archive to keep my internal auditors happy. Where there are significant compliance requirements, my view is that a third-party journal solution would still be required to capture envelope data in Exchange Server 2010. However, since journaling does not make changes to the Exchange databases, it does not suffer from the same database page fragmentation that message stubbing does. Even more importantly, the journal data is kept separately, so my compliance data is safely secured and my Exchange data is kept in Exchange, where end users can access it from whichever devices they like.
Should you use archiving technology with Exchange Server 2010? Well, as with lots of things in the technology game, it's your choice. I would, however, urge caution before simply carrying over your archiving technology from previous versions of Exchange to Exchange Server 2010.
Steps
Actions
Add the online tenant to the on-premise EMC
In the on-premises EMC right click ‘Microsoft Exchange’ and select add forest
In the drop down list specify ‘Exchange Online’ as the external exchange forest and specify your Office 365 administrator credentials if prompted
Verify Exchange Online has been added to your on-premise EMC
Written by Daniel Kenyon-Smith
Action
Enable free/busy calendar sharing on-premise
Run the following command
Set-SharingPolicy 'Default Sharing Policy' -Domains '*:CalendarSharingFreeBusySimple', 'Company.com:CalendarSharingFreeBusyReviewer, ContactsSharing', 'Company.onmicrosoft.com:CalendarSharingFreeBusyReviewer, ContactsSharing', 'Office365.Company.com:CalendarSharingFreeBusyReviewer, ContactsSharing'
Confirm the settings have been applied in the EMC, under Organisational configuration, Mailbox, Sharing Policies
Enable free/busy calendar sharing for Office 365
Open a remote PowerShell session by running the following commands and set the sharing policy
$LiveCred = Get-Credential
$Session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://ps.outlook.com/powershell/ -Credential $LiveCred -Authentication Basic -AllowRedirection
Import-PSSession $Session
Set-SharingPolicy 'Default Sharing Policy' -Domains '*:CalendarSharingFreeBusySimple', 'Company.com:CalendarSharingFreeBusyReviewer, ContactsSharing', 'Company.onmicrosoft.com:CalendarSharingFreeBusyReviewer, ContactsSharing'
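To confirm the policy change took effect, the configured domains can be read back from the shell (a quick check):

```powershell
# Display the domain/action pairs configured on the default sharing policy
Get-SharingPolicy 'Default Sharing Policy' | Format-List Name, Domains, Enabled
```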
Enable move mailboxes for the organization relationship
Run Set-OrganizationRelationship -id "Office 365 Tenant" -MailboxMoveEnabled $True
Note:
The MailboxMoveEnabled parameter specifies that the organization relationship is used to provide the credentials for moving mailboxes to Office 365. If you don’t set this parameter you are required to provide admin credentials for the remote move.
Access the Mailbox Replication Service Proxy (MRSProxy) service config file
By default MRSProxy is disabled and must be enabled to facilitate cross-forest moves. On the Client Access Server browse to
C:\Program Files\Microsoft\Exchange Server\V14\ClientAccess\exchweb\ews\web.config
Enable the MRSProxy setting on all CAS servers
Locate the ‘Mailbox Replication Proxy Service configuration’ section in the web.config file and enable it by setting ‘IsEnabled’ to true
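The relevant section of web.config looks something like the following (the attribute values other than IsEnabled are illustrative defaults and may differ in your build):

```xml
<!-- Mailbox Replication Proxy Service configuration -->
<MRSProxyConfiguration
    IsEnabled="true"
    MaxMRSConnections="100"
    DataImportTimeout="00:01:00" />
```

Remember to make the change on every Client Access Server that will service move requests.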
Enable on-premise Mailtips
Set-OrganizationRelationship -id "Office 365 Tenant" -MailTipsAccessEnabled $True -MailTipsAccessLevel all
Enable Office 365 Mailtips
Set-OrganizationRelationship -id "On-Prem" -MailTipsAccessEnabled $True -MailTipsAccessLevel all
Configure the ‘Office 365 Tenant’ Organization Relationship
Get-OrganizationRelationship "Office 365 Tenant" | fl
Run - Set-OrganizationRelationship "Office 365 Tenant" -ArchiveAccessEnabled $true
Start a remote PowerShell session
Run
$LiveCred = Get-Credential
$Session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://ps.outlook.com/powershell/ -Credential $LiveCred -Authentication Basic -AllowRedirection
Import-PSSession $Session -AllowClobber
Configure the ‘On Prem’ Organization Relationship
Run - Get-OrganizationRelationship "On Prem" | fl
Then enable the ArchiveAccessEnabled attribute by running - Set-OrganizationRelationship "On Prem" -ArchiveAccessEnabled $true
Enable user archive
In the EMC select the mailbox you want to enable, right click and select ‘Enable Hosted Archive’
Select Yes when prompted with the confirmation message ‘The archive will be created in the online tenant specified. An archive will be created for ‘MAILBOXNAME’. Would you like to proceed?’
Note the icon changes for the mailbox when the archive is enabled
Login to the user mailbox
Ensure the archive appears in the user's profile (in either Outlook 2010 or OWA)
Login to the portal
Sign into https://portal.microsoftonline.com as the tenant administrator
Add the user to the Discovery Management Role
Select ‘Manage’ Under Exchange Online
Select ‘Roles and Auditing’, then select the ‘Discovery Management’ role group and click Details to add a user to the role group
Add the relevant user
NOTE: This user will have access to search information within a user’s mailbox
Note the new membership
Perform a new search and estimate the search results first
When I login to the portal with the assigned user we specified above I see the ‘Discovery’ tab under ‘Mail Control’.
When I click on New Search I need to specify the search criteria, such as keywords and the mailboxes to search; you can also estimate the search results before you run an actual search
Note: If no results are returned you can go back into details and redefine your search
Perform a search and select ‘copy search results’
Select details again
You can now save the search results to a destination mailbox (Discovery Search Mailbox)
Note the number of items found, click open to see the search results
Select ‘Open’ to open the discovery mailbox and view the search results
View the search results
The first step is to understand the different types of retention policies you can apply and plan appropriately
See http://help.outlook.com/en-us/beta/gg271153.aspx for more details on retention policies.
First create Retention Tags
Create retention tags under the online tenant (for mailboxes that have been migrated to O365): under ‘Organisation configuration’ and ‘Mailbox’, select the ‘Retention Policy Tags’ tab and create a new retention tag based on the settings you planned in the previous step, e.g.
This tag will apply to all folders and is therefore a default policy tag
In this example I also created a personal tag
This tag applies to personal folders or individual items that the user can select e.g. emails
Create a retention policy and assign the retention tags to the policy
In the online EMC click ‘Retention Policies’ tab and create a new retention policy and assign the retention tags to the policy
Assign a mailbox to the policy
Process the retention policy immediately if required
Start-ManagedFolderAssistant -Identity "Mailbox added the previous step" (using remote PowerShell)
Check the retention policy is applying to the user
Get-Mailbox "Mailbox added the previous step" | fl ret* (using remote PowerShell)
Login to the user's mailbox to see if the retention policies have applied and are available (this will depend on the retention policy types you have created)
Login to a user’s mailbox that has the retention policies assigned and notice the available policies in the ‘Home’ ribbon in Outlook 2010. The ‘Use Folder Policy’ option is the default policy tag we created, called ‘Move To Archive’, which is applied to all folders. The 14 days/2 weeks policy is the ‘Custom – Move To Archive’ retention tag we created earlier; it applies to personal folders and is therefore a personal tag, which users can apply to custom folders and individual items, e.g. emails
Apply the personal tag to an individual item
I recently wrote about gathering user profile data for Exchange Server 2003 and 2007 by using the Exchange Server Profile Analyzer tool. As a re-cap the EPA tool uses WebDAV to interrogate the mailboxes and generates user profile data, including..
Note: This information is vital for performing good quality Exchange server scaling.
The problem of course is that Exchange Server 2010 does not include WebDAV and so the EPA tool will not work. This poses an interesting problem, however I am happy to report that we have a solution…
One of the nice things about Exchange 2007 and Exchange 2010 is that we can interrogate the message tracking logs via PowerShell. This provides us with a nice way to query what the Exchange Server is doing. Usefully the message tracking logs include sufficient information for us to approximate our user profile data, without needing the EPA.
After asking around internally within Microsoft about how to gather EPA data for Exchange Server 2010, it became apparent that PowerShell would be the best way to interrogate the message tracking logs. I mentioned to a few people that I was going to write something up over the next few weeks, however before I had a chance to even put any significant thought into the task, someone sent me a copy of the following script which I have uploaded here.
Now, I must confess that despite my best efforts I have been unable to track down the original author! As such it is provided here without credit (if you wrote this, then please get in touch!).
The script basically works by parsing the message tracking logs of your Exchange Servers and then tabulating the information into a CSV file for analysis in Excel. To provide some data to parse I configured a loadgen test against 10 mailboxes with a heavy profile, which should approximate to around 80 messages received and 20 sent per user.
The MessageStats script has a single command line parameter which controls how many days back it will look in the tracking logs. For my lab test I only wanted a single day’s worth, so I just tagged “1” on the end of the PS1 script.
[PS] C:\>.\MessageStats.ps1 1
Started processing EX2K7E02
Processed 869 log records in 0.6540909 seconds
Average rate: 1322 log recs/sec.
Run time was 0.7500096 seconds.
Email stats file is email_stats_2011_09_08.csv
DL usage stats file is DL_stats.csv
So, now we have our CSV file that we can open in Microsoft Excel; however, the data requires some work before we can get our EPA values. The following screenshot shows the raw data open in Excel.
The best way to process the data is to convert it into a table..
You should now have a table with the following columns…
Note: Due to my test lab being very small I have added a filter to remove any non-loadgen accounts from the data analysis.
In the Total row at the bottom of your table add “AVERAGE” subtotals for “Received Total” and “Sent Unique Total”.
In the “Received MB Total” column total cell, add in an “AVERAGE” subtotal, then edit the formula in the cell and divide that value by the Total Row average for “Received Total”, then multiply the result by 1024 – this will report the average message size in KB.
In the “Sent Unique MB Total” column total cell, add in an “AVERAGE” subtotal, then edit the formula in the cell and divide that value by the Total Row average for “Sent Unique Total”, then multiply the result by 1024 – this will report the average message size in KB.
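If you would rather skip Excel, the same averages can be computed straight from the CSV in PowerShell. This is a sketch: the column names are assumed to match the table headings described above, so adjust them to whatever your email_stats file actually contains.

```powershell
# Load the MessageStats output (filename from my lab run; substitute your own)
$rows = Import-Csv .\email_stats_2011_09_08.csv

# Helper: average a numeric column (CSV values import as strings, hence the cast)
function Get-ColumnAverage($data, $column) {
    ($data | ForEach-Object { [double]$_.$column } | Measure-Object -Average).Average
}

$avgReceived   = Get-ColumnAverage $rows 'Received Total'
$avgSentUnique = Get-ColumnAverage $rows 'Sent Unique Total'
$avgReceivedMB = Get-ColumnAverage $rows 'Received MB Total'
$avgSentMB     = Get-ColumnAverage $rows 'Sent Unique MB Total'

"Average messages received per user per day : {0:N1}" -f $avgReceived
"Average messages sent per user per day     : {0:N1}" -f $avgSentUnique
# MB-per-user divided by messages-per-user, x1024 = average message size in KB
"Average received message size (KB)         : {0:N1}" -f ($avgReceivedMB / $avgReceived * 1024)
"Average sent message size (KB)             : {0:N1}" -f ($avgSentMB / $avgSentUnique * 1024)
```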
We now have all of the information that we require…
So, using this technique we have managed to approximate our user profile to a fair degree of accuracy without needing to logon to any mailboxes! I suspect that this method is accurate to around +/- 10%, which is totally acceptable in this context.
Obviously there is a caveat here that I have only performed some rudimentary testing in a fairly small lab environment, so if you do run this in production and find that it generates weird results, or that it validates your already proven EPA data, then feel free to drop me a note to let me know
NOTE: This is reposted here from HMLee’s blog back in 2007 – he is no longer posting to the blog and I wanted to keep this post just in case his blog gets deleted – after all, it is useful information, I have a brain like a sieve and this is probably as good a place to store it as any…
This /report command allows you to regenerate the Jetstress HTML report given just the BLG file and some information about the test system. This can be extremely useful in certain situations
I couldn't publish a neat command interface due to a tight schedule for this function. Ideally the program would be able to find all of the arguments itself, such as machine name, process name, process id, and start and end times; but it currently requires you to specify all of those arguments about the host machine.
D:\Jetstress\JetstressCmd.exe /c JetstressConfig.xml /report "ROSWELL; JetstressWin; 0; Performance_2007_5_30_10_13_54.blg; 5/30/2007 10:15:49 AM; 5/30/2007 12:15:51 PM"
5/31/2007 12:59:31 AM -- Command Line: D:\Jetstress\JetstressCmd.exe /c JetstressConfig.xml /report "ROSWELL; JetstressWin; 0; Performance_2007_5_30_10_13_54.blg; 5/30/2007 10:15:49 AM; 5/30/2007 12:15:51 PM"
5/31/2007 12:59:31 AM -- Database read latency thresholds: (average: 0.02 seconds/read, maximum: 0.05 seconds/read).
5/31/2007 12:59:31 AM -- Log write latency thresholds: (average: 0.01 seconds/write, maximum: 0.05 seconds/write).
5/31/2007 12:59:32 AM -- Creating test report ...
5/31/2007 12:59:37 AM -- Volume F: has 0.0000 for Avg. Disk sec/Read.
5/31/2007 12:59:37 AM -- Volume G: has 0.0000 for Avg. Disk sec/Read.
5/31/2007 12:59:37 AM -- Volume H: has 0.0000 for Avg. Disk sec/Write.
5/31/2007 12:59:37 AM -- Volume H: has 0.0000 for Avg. Disk sec/Read.
5/31/2007 12:59:37 AM -- Volume I: has 0.0000 for Avg. Disk sec/Write.
5/31/2007 12:59:37 AM -- Volume I: has 0.0000 for Avg. Disk sec/Read.
5/31/2007 12:59:37 AM -- Test has 0 Maximum Database Page Fault Stalls/sec.
5/31/2007 12:59:37 AM -- Test has 0 Database Page Fault Stalls/sec samples higher than 0.
5/31/2007 12:59:37 AM -- Performance_2007_5_30_10_13_54.xml has 479 samples queried.
ROSWELL is the machine name that is part of performance counters and instances.
JetstressWin is the process name that is part of the Jet database performance counter instance names. Specify JetstressCmd instead if the performance log was generated by JetstressCmd.
0 is the process id that is part of Jet database performance counter instance names. You don’t have to specify the process id unless you used NAS (network attached storage).
Announcing the re-launch of the MCS UK Unified Communications blog at http://blogs.technet.com/b/msukucc/
It has been some time since this blog was updated, but that's about to change. What is this blog for? As it says at the top: All things UC related from Microsoft Consulting Services UK : architecture, best practice, news for Exchange and Lync, both on-premise and cloud. A number of my fellow Exchange and Lync consultants here in MCS UK and I will be adding posts to this blog. I have given the blog a makeover and it now has the Microsoft Services branding applied. You will see on the right-hand side that we have a number of links for social media and I detail those here:
Twitter : http://twitter.com/msukucc
LinkedIn : http://www.linkedin.com/groups/?gid=3994667
A Facebook page is coming very soon. At the moment, the current link directs you to the Facebook page for Microsoft Consulting Services UK, so why not have a look around that?
Also down the right-hand side, you will find links to other MCS UK blogs such as Solution Development, The Sharepoint Guys, Business Intelligence, The Deployment Guys and The CRM Guys.
So, bookmark the blog, subscribe to the RSS feed, follow the Twitter account, join the LinkedIn group and keep an eye out for the Facebook page for all things Exchange and Lync related from Microsoft Consulting Services UK.
I have been working with a number of customers and consultants recently who have been keen to explain to me just how difficult they are finding the configuration of Exchange Rich Coexistence, or Hybrid Deployment as it's now known, with Office 365. To be fair I agree, it's definitely not as simple as we would like. We do have improvements coming in Exchange Server 2010 SP2 that will simplify this process, but I thought that I would post this to attempt to help out with some basics…
During the early adopter programs and beta this was known as Exchange Rich Coexistence and it is essentially a way to share availability data between your on premises Exchange Organisation and your Office 365 tenant. This type of deployment is extremely useful both for large migrations and where organisations wish to split their users permanently into a hybrid configuration, where some users are provisioned in the Office 365 service and some remain on premises. The basic idea behind the solution is that users shouldn't need to know where their mailbox is located and instead should just be able to arrange meetings and see availability data for everyone, regardless of their location.
To keep this post relatively brief I have decided not to walk through tasks that are relatively well understood, such as installing Exchange Server 2010 and publishing EWS. Instead I have assumed that several tasks have already been completed…
Cross organisation availability sharing uses the availability service, which is built on Exchange Web Services (EWS). This means that your on premises Exchange organisation must have a published EWS endpoint with a valid public certificate attached, plus a few other things.
So.. with that in mind, this is what we really need…
In reality of course most Office 365 enterprise deployments require ADFS and Directory Synchronisation to meet design requirements, which adds to our list..
Note: Since all of my customers are working with ADFS and Directory Sync the rest of this post assumes that you have already configured them and they are working correctly…
It is assumed that the following tasks have been completed…
Before we begin it is important that we verify that a few things are working….
Microsoft Federation Gateway Trust: To test the MFG trust we need to issue the “Test-FederationTrust” PowerShell cmdlet from our on premises Exchange Server 2010 server… it is vital that all of the test outputs show as “Success”
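As a minimal sketch, the trust test looks like this (the user identity is an example; use any on-premises mailbox in your organisation):

```powershell
# Run from the on-premises Exchange 2010 Management Shell.
# Every step in the output should report "Success".
Test-FederationTrust -UserIdentity user@groovycloud.co.uk
```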
Exchange Web Services: To test Exchange Server 2010 EWS on premises use the Exchange Remote Connectivity Analyzer…
Complete the “Microsoft Exchange Web Services Connectivity Tests”
TIP: The EWS test requires an empty mailbox, so create a new “ewstest” mailbox and logon to it via OWA or Outlook prior to running the test to ensure that it is functioning properly… once you have logged on to the mailbox and checked that it is empty, then progress on to the RCA EWS test…
OK, so now we are ready to begin some configuration… we will follow this order to get things up and running…
Create service domain
The service domain is used primarily for forwarding SMTP E-mail from on premises to the Office 365 tenant. We cannot use the *.onmicrosoft.com namespace given to all users since that name is blocked from the Office 365 DIRSYNC process. This is a problem since we need to stamp the service domain as a proxyAddress for all on premises users to ensure that when we migrate a user and set the service domain to be the targetAddress it matches the right user in the Office 365 tenant. The service domain also acts as the targetAddress for availability requests for Office 365 mailboxes.
TIP: To make things easier it is recommended to use a subdomain of your primary SMTP domain for the service domain. In my lab the primary smtp domain is groovycloud.co.uk, so I will use service.groovycloud.co.uk as my service domain.
Use the following blog to establish a remote PowerShell session to your Office 365 Tenant –
NOTE: This is NOT the same as connecting to your Exchange tenant PowerShell in Office 365..
Tip: This is a useful thing to remember, so save the blog URL for future administration tasks with Office 365…
Once you have established your Office 365 remote PowerShell session, let’s check some settings by running…
We are looking primarily for the authentication type of the parent domain, in my case it is a federated domain that passes authentication requests back to an on premises ADFS 2.0 infrastructure.
Now we can create our service domain. Note that you need to replace the Authentication type to be the same as reported in the previous step.
Note: Since this is a subdomain of a previously verified domain, Office 365 will not require that you re-verify the domain.
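A sketch of the checks and domain creation described above, assuming the Microsoft Online Services Module cmdlets (the domain name is from my lab; substitute your own, and match the authentication type that Get-MsolDomain reports for the parent domain):

```powershell
# Check the authentication type of the parent (verified) domain
Get-MsolDomain

# Create the service subdomain with a matching authentication type
# (use 'Managed' instead of 'Federated' if that is what was reported)
New-MsolDomain -Name service.groovycloud.co.uk -Authentication Federated
```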
Now we have our service domain, we need to provide it with DNS information for MX record to ensure that SMTP traffic destined for the domain is routed appropriately and an Autodiscover CNAME record to ensure that Autodiscover works correctly…
Contact your DNS Registrar and ask for the following records to be created…
Note: You can continue the configuration while waiting for these records to be created.
The final things we need to do with our service domain are to add it as a Remote Domain and an Accepted Domain, and to add a proxyAddress for our on premises Exchange users.
Adding the Service Domain as an Accepted Domain…
Adding the Service Domain as a Remote Domain and setting it as our “BPOS” Domain (TargetDeliveryDomain $true)
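These two steps can be sketched in the on-premises Exchange Management Shell as follows. The object names are examples; the domain name is from my lab:

```powershell
# Accepted Domain for the service namespace
New-AcceptedDomain -Name "Service Domain" -DomainName service.groovycloud.co.uk

# Remote Domain, marked as the target delivery ("BPOS") domain
New-RemoteDomain -Name "Service Domain" -DomainName service.groovycloud.co.uk
Set-RemoteDomain "Service Domain" -TargetDeliveryDomain $true
```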
Adding the service domain to all users proxyAddresses…
The easiest way to achieve this is to edit the E-mail Address Policy. In my case I only have a single “Default Policy”, so I will add the service domain in there…
Tip: If this is the first time you have attempted to edit your E-mail Address policy since installing Exchange Server 2010 you may need to upgrade it. If like me you only have a single policy you can upgrade with the following Exchange Server 2010 PowerShell command…
Once upgraded, edit the policy and apply to all users.
Note: If you cannot do this, then you will need to either script the proxyAddress update or perform a manual update on each user.
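The policy upgrade and edit described above might look like this in the Exchange Management Shell. This is a sketch under assumptions: the policy name, recipient filter and address templates are examples, and the template list must retain all of your existing address templates, not just the two shown:

```powershell
# Upgrade the legacy default policy to the Exchange 2010 version
Set-EmailAddressPolicy "Default Policy" -IncludedRecipients AllRecipients

# Add the service domain as a secondary proxy address template (%m = alias)
Set-EmailAddressPolicy "Default Policy" -EnabledEmailAddressTemplates "SMTP:%m@groovycloud.co.uk","smtp:%m@service.groovycloud.co.uk"

# Re-apply the policy to stamp the new proxyAddress on all users
Update-EmailAddressPolicy "Default Policy"
```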
Once all of these tasks are completed, either wait for your scheduled directory synchronisation to complete or force directory synchronisation by following the instructions here..
This process is performed on premises in the Exchange 2010 Management Console. However, before we continue we need to know what our Office 365 tenant POD name and EWS namespace are…
Configuring the Org Relationship…
Once the Org Relationship has been created we need to modify a few settings in PowerShell…
Firstly we need to see what the settings are…
Set TargetSharingEPR… This value overrides the autodiscover URL and instead hard codes the EWS endpoint that will be used. This is not required, however hard coding this URL has proven to be more reliable in most deployments. Note, use the “POD” server name that you recorded earlier via OWA.
Enable Mailtips… These settings enable MailTips to work between Exchange Org’s
Check that the correct domain names are listed on the Organisation Relationship… As a general rule the following should be listed as a minimum…
Use the following command to write the correct domain names on the Org Relationship…
Set the TargetOwaURL to enable OWA redirection… Note that the URL should end with your federated namespace to ensure that users are directed to the correct authentication platform. This should match the domain portion of the UPN that users use to log in to Office 365.
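Putting the settings above together, the on-premises relationship can be configured with something like the following. The relationship name, POD server, domains and OWA URL are the values from my lab; substitute your own:

```powershell
# Hard-code the EWS endpoint (use the POD server name recorded earlier via OWA)
Set-OrganizationRelationship "Office 365" -TargetSharingEpr "https://pod51007.outlook.com/EWS/Exchange.asmx"

# Enable MailTips between the organisations
Set-OrganizationRelationship "Office 365" -MailTipsAccessEnabled $true -MailTipsAccessLevel All

# Make sure the service, primary SMTP and tenant domains are all listed
Set-OrganizationRelationship "Office 365" -DomainNames "service.groovycloud.co.uk","groovycloud.co.uk","neiljohn.onmicrosoft.com"

# OWA redirection, ending with the federated namespace
Set-OrganizationRelationship "Office 365" -TargetOwaURL "http://outlook.com/owa/groovycloud.co.uk"
```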
Example of a working OnPrem Organisation Relationship…
[PS] C:\Windows\system32>Get-OrganizationRelationship | fl
RunspaceId : 75c03ec6-47bb-4070-807d-ec2a09d112f1
DomainNames : {service.groovycloud.co.uk, groovycloud.co.uk, neiljohn.onmicrosoft.com}
FreeBusyAccessEnabled : True
FreeBusyAccessLevel : LimitedDetails
FreeBusyAccessScope :
MailboxMoveEnabled : False
DeliveryReportEnabled : True
MailTipsAccessEnabled : True
MailTipsAccessLevel : All
MailTipsAccessScope :
TargetApplicationUri : outlook.com
TargetSharingEpr : https://pod51007.outlook.com/EWS/Exchange.asmx
TargetOwaURL : http://outlook.com/owa/groovycloud.co.uk
TargetAutodiscoverEpr : https://autodiscover-s.outlook.com/autodiscover/autodiscover.svc/WSSecurity
OrganizationContact :
Enabled : True
ArchiveAccessEnabled : False
AdminDisplayName :
ExchangeVersion : 0.10 (14.0.100.0)
Name : Office 365
DistinguishedName : CN=Office 365,CN=Federation,CN=GroovyCloud,CN=Microsoft Exchange,CN=Services,CN=Configuration,DC=groovycloud,DC=local
Identity : Office 365
Guid : 3da3da88-22fa-444a-ac4e-4eaafe84d917
ObjectCategory : groovycloud.local/Configuration/Schema/ms-Exch-Fed-Sharing-Relationship
ObjectClass : {top, msExchFedSharingRelationship}
WhenChanged : 15/08/2011 10:39:28
WhenCreated : 15/08/2011 07:23:48
WhenChangedUTC : 15/08/2011 09:39:28
WhenCreatedUTC : 15/08/2011 06:23:48
OrganizationId :
OriginatingServer : DC1.groovycloud.local
IsValid : True
Now we need to create a relationship between our Office 365 tenant and our Exchange on premises infrastructure. This process is very similar to the previous Org Relationship…
First we need to create a remote PowerShell session to our Exchange tenant in Office 365, follow the instructions in Mike’s post here to sort that out..
Once we have that session established we need to create a new Org Relationship…
Once created we need to check the same settings as previously..
Again, we will need to configure some settings here to make the relationship work…
TargetSharingEPR Set this to be your published on premises EWS endpoint that you verified with the Exchange Remote Connectivity Analyzer earlier in the process…
Set TargetApplicationURI to delegated OnPrem Namespace If you followed the documentation to establish your federation trust this will be "federation.<primary SMTP domain>”
Enable Mailtips: This setting enables MailTips to work between Exchange Org’s
Enable Free/Busy Access
Check that the correct domain names are listed on the Organisation Relationship. As a general rule the following should be listed as a minimum…
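In the Office 365 tenant remote PowerShell session, the relationship and the settings just described can be sketched as follows. The relationship name, URLs and domains are the values from my lab; replace them with your published EWS endpoint and delegated namespace:

```powershell
# Create the relationship from the tenant back to on premises
New-OrganizationRelationship "Exchange OnPrem" -DomainNames groovycloud.co.uk

# Hard-code the published on-premises EWS endpoint verified with the RCA
Set-OrganizationRelationship "Exchange OnPrem" -TargetSharingEpr "https://mail.groovycloud.co.uk/EWS/Exchange.asmx"

# Delegated on-premises federation namespace
Set-OrganizationRelationship "Exchange OnPrem" -TargetApplicationUri "exchangedelegation.groovycloud.co.uk"

# MailTips and free/busy access
Set-OrganizationRelationship "Exchange OnPrem" -MailTipsAccessEnabled $true
Set-OrganizationRelationship "Exchange OnPrem" -FreeBusyAccessEnabled $true -FreeBusyAccessLevel LimitedDetails
```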
Example of a working Office 365 Organisation Relationship…
PS C:\LiveMesh\Tools\RemotePS> Get-OrganizationRelationship -Identity "Exchange OnPrem" | fl
RunspaceId : 94f72750-98a0-495d-91cc-bb26a88611da
DomainNames : {groovycloud.co.uk}
TargetApplicationUri : exchangedelegation.groovycloud.co.uk
TargetSharingEpr : https://mail.groovycloud.co.uk/EWS/Exchange.asmx
TargetOwaURL :
TargetAutodiscoverEpr :
Name : Exchange Onprem
DistinguishedName : CN=Exchange Onprem,CN=Federation,CN=Configuration,CN=neiljohn.onmicrosoft.com,CN=ConfigurationUnits,CN=Microsoft Exchange,CN=
Services,CN=Configuration,DC=eurprd02,DC=prod,DC=outlook,DC=com
Identity : Exchange Onprem
Guid : e3f00a9d-1534-479d-a439-d187aa02e05a
ObjectCategory : eurprd02.prod.outlook.com/Configuration/Schema/ms-Exch-Fed-Sharing-Relationship
WhenChanged : 15/08/2011 09:48:18
WhenCreated : 15/08/2011 07:39:43
WhenChangedUTC : 15/08/2011 08:48:18
WhenCreatedUTC : 15/08/2011 06:39:43
OrganizationId : eurprd02.prod.outlook.com/Microsoft Exchange Hosted Organizations/neiljohn.onmicrosoft.com - eurprd02.prod.outlook.com/Config
uration/Services/Microsoft Exchange/ConfigurationUnits/neiljohn.onmicrosoft.com/Configuration
OriginatingServer : AMSPRD0202DC004.eurprd02.prod.outlook.com
OK, so now we're all set up, so the next step is to log on as some users to see what happens…
I am going to use the following accounts
For this test I am going to log on to my ewstest account via Outlook 2010 and attempt to view availability data for my Office365User1 account. I have created a test meeting in each mailbox and set the default calendar permission level to “Free/Busy time, subject, location”.
A new meeting request running on premises logged on as user ewstest with Office 365 user Office 365 User 1 added as an attendee. You can clearly see that both users are returning rich availability data… yay!
For this test I am going to log on to my Office365User1 account via Outlook 2010 and attempt to view availability data for my ewstest account. I have created a test meeting in each mailbox and set the default calendar permission level to “Free/Busy time, subject, location” the same as before…
As you can clearly see, the experience is exactly the same for an Office 365 user collaborating with an on premises user…. double yay!
Hopefully this article shows that it is possible to get availability to work cross premises. It does show however that even in this extremely simple example, where I only have a single Exchange 2010 server and a handful of users, there were still a number of steps to complete and plenty of scope to get something wrong. I for one can't wait for Exchange Server 2010 SP2 to come along and simplify the whole thing!
In my experience working in labs and with customers the following are the most common areas of misconfiguration…
For further information and detail around more complex configurations please see
All the way back in May 2009 I had just completed my Exchange 2007 Microsoft Certified Master rotation in Redmond, on my return I decided that I would write up my experience… it was actually quite emotional re-reading this post, many of the people I went through this experience with are now good friends, which is one thing I didn't expect when I turned up at building 40 in 2009…
The reason for this post is that we have had a few new starters into the MCS UC and Messaging team recently and all except one (who is already an MCM) want to go through the Microsoft Certified Master program. This has given me cause to reflect on the program itself and what it has meant to me over the last two years… firstly to assess if it was worthwhile me attending and secondly to help me figure out if it's necessarily right for everyone…
Well, I have certainly been involved in some things that I don't think that I would have been without being a part of the MCM Exchange community. From a community perspective I was able to write the Jetstress Field Guide, which was only really opened up to me by having a number of storage performance cases in the UK escalated and as the only Exchange Server 2010 MCM in MCS UK at the time I was the person tasked with sorting them out… working through these cases I was able to work with the MCM Exchange community and the Exchange product group and eventually the field guide was born… likewise working with Ross Smith IV on the database copy distribution section of his Mailbox Role Calculator mostly came about from discussions within the MCM community (of which Ross is a long standing member).
In addition to the community work I have been involved with some of the largest and most interesting Exchange challenges in MCS UK – it is a common practice now for our more demanding enterprise customers to request a “Ranger/MCM” for their projects. Being a part of such a community helps to keep you in demand… which is always good in consulting!
I also attended the Exchange Server 2010 MCM upgrade back in June 2010 – this was an experience I will never forget and made the main rotation seem quite relaxed in comparison.. essentially the upgrade rotation ran through the full 2010 rotation materials and exams, but condensed it into 5 days rather than the usual 15… by the end of it we were all like zombies!
Recently I also went through the MCM Office 365 training… despite working with Office 365 since the very early CTP days almost 12 months ago, this 5 day training course in Redmond was extremely rewarding. It was run at the usual very high MCM level and the class only consisted of other Exchange MCM’s. Access to this kind of training is invaluable – it simply does not exist anywhere else in the world…
When I think about the cost of the program it's obviously expensive, both in terms of the outright cost and in terms of your own emotional and personal commitment. At the time I went through MCM in 2009 it was provided free for internal candidates; this is no longer the case and we have to generate a business case for attending the rotation. Additionally, I think that many people do not realise just how much personal commitment is required to be successful with this qualification. Simply having some Exchange experience and attending the course is unlikely to result in a happy ending… strangely, most of the criticism that I see aimed at the MCM program seems to suggest that you are buying a qualification… I can assure you that this is most certainly not the case and I know of a number of candidates that attended an MCM rotation and are still working on passing the exams months later…
So… given the expense and commitment required is it worth attending? I think the answer here depends on how you view your career. For me it was definitely worthwhile, I wanted to step up from the background and establish myself as a technology leader. Passing MCM definitely provides a level of kudos with your colleagues and customers that very few other qualifications can match. Sure, it was tough to convince my management to pay for me to attend (lost revenue, flights + expenses, etc.) and it was even tougher to actually pass, but I think it has repaid me hundreds of times over by making it possible for me to engage on larger, more complex projects because I was now seen as a more established consultant due to having the MCM badge. On the other hand, if you are not aspiring to take on larger, more complex projects and engage with your community peers at a worldwide level, then I would question if MCM is the right path to take. Fundamentally you need to ask yourself why you want to join the MCM community and how much effort are you prepared to put in to join?
One of the common questions I am asked is how to best prepare for a Microsoft Certified Master rotation? On the surface this seems like a sensible question, but the problem is that the answer is often different for each person.
For sure the most difficult aspect of the Exchange MCM is the practical lab at the end of the course – sure, some people find this easier than the exam process, but for the majority it seems to be the hardest part. So, I would say that not only do you need to know the technology, you need to be competent at hands-on configuration and troubleshooting… you also need to be able to do it quickly, very quickly!
One bit of advice I do give is that before you attempt a rotation you should be in a position to talk about almost any area of Exchange confidently, in front of your peers, and be able to demonstrate your understanding practically. For example, could you explain how cross forest availability works, and its requirements, to your colleagues right now? And then actually configure it in less than an hour? Take this and apply it to pretty much any area of Exchange, and when you get to a topic you think you couldn't do, that's something you need to learn before turning up in rainy Redmond…
On balance I think that attending the Microsoft Certified Master program was the right thing for me to do. It was an extremely demanding, but rewarding experience and the returns it has provided me in terms of career growth have been immense. So, on that basis I would say that if you were looking to move up to the next level in your career and were prepared to put the effort in, then MCM is a great way to go. However, it is not necessarily right for everyone… I tend to view MCM as the PhD of the Exchange Server world… and just as in every other field, not everyone needs to hold a Doctorate in their area of expertise to be successful…
And of course keep up to date on the MCM program by following the Master blog.
As you will now know, the MCS UK Unified Communications team blog has been re-launched and can be found here. As an update to this, note that the associated Facebook page can be found here.
The Microsoft Online Services Module allows you to manage your tenant directly and in some cases change settings you can’t change in the GUI (note this can only be achieved if you’re managing accounts that have been created in the tenant, i.e. not created using Dirsync/ADFS). To access Remote PowerShell to the Service Portal you will need to install the following prerequisites:-
Download the Microsoft Online Services Module
The Microsoft Online Services Module for Windows PowerShell is a download that comes with Office 365. This tool installs a set of cmdlets to Windows PowerShell (you run those cmdlets to set up single sign-on for Office 365).
In this case I want to stop users from being prompted to change their password. In order to do this you can run the Microsoft Online Services Module from the shortcut menu and connect to your Office 365 tenant by running the following commands:-
You will need to enter your tenant credentials; once you have done this you can check what the current settings are by running
Note that PasswordNeverExpires is set to False; you can then change the setting for either that individual user or all users
Run this command again to ensure that the settings have taken effect and that PasswordNeverExpires is set to True
Also, if you don’t want the user to be prompted when they log in, you can run the following command
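The whole sequence described above can be sketched as follows. The UPN and the password value are examples, not real accounts; replace them with your own:

```powershell
# Connect to the tenant (prompts for your tenant credentials)
Connect-MsolService

# Check the current setting for a user
Get-MsolUser -UserPrincipalName user1@groovycloud.co.uk | fl DisplayName,PasswordNeverExpires

# Change it for that individual user...
Set-MsolUser -UserPrincipalName user1@groovycloud.co.uk -PasswordNeverExpires $true

# ...or for all users in the tenant
Get-MsolUser -All | Set-MsolUser -PasswordNeverExpires $true

# Clear the change-password-at-next-logon prompt (sets a known password at the same time)
Set-MsolUserPassword -UserPrincipalName user1@groovycloud.co.uk -NewPassword "Pass@word1" -ForceChangePassword $false
```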
If you want to see a list of available commands, run
This is the output (as you can see it’s a pretty powerful tool; you can, for example, automate the provisioning of licenses):-
Add-MsolGroupMember
Add-MsolRoleMember
Confirm-MsolDomain
Connect-MsolService
Convert-MsolDomainToFederated
Convert-MsolDomainToStandard
Convert-MsolFederatedUser
Get-MsolAccountSku
Get-MsolCompanyInformation
Get-MsolContact
Get-MsolDomain
Get-MsolDomainFederationSett.
Get-MsolDomainVerificationDns
Get-MsolFederationProperty
Get-MsolGroup
Get-MsolGroupMember
Get-MsolPartnerContract
Get-MsolPartnerInformation
Get-MsolRole
Get-MsolRoleMember
Get-MsolSubscription
Get-MsolUser
Get-MsolUserRole
New-MsolDomain
New-MsolFederatedDomain
New-MsolGroup
New-MsolLicenseOptions
New-MsolUser
Remove-MsolContact
Remove-MsolDomain
Remove-MsolFederatedDomain
Remove-MsolGroup
Remove-MsolGroupMember
Remove-MsolRoleMember
Remove-MsolUser
Set-MsolADFSContext
Set-MsolCompanyContactInform.
Set-MsolCompanySettings
Set-MsolDirSyncEnabled
Set-MsolDomain
Set-MsolDomainAuthentication
Set-MsolDomainFederationSett.
Set-MsolGroup
Set-MsolPartnerInformation
Set-MsolUser
Set-MsolUserLicense
Set-MsolUserPassword
Set-MsolUserPrincipalName
Update-MsolFederatedDomain
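The cmdlet list above can be regenerated in your own session with a simple wildcard query:

```powershell
# List every Msol* cmdlet made available by the Online Services Module
Get-Command *Msol*
```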
The Microsoft Online Services Diagnostics and Logging (MOSDAL) tool has been around for a while, and certainly most BPOS customers will have come across it before… however it has now been updated to provide AD FS authentication diagnostic information, so it’s now useful for Office 365 deployments as well!
Deploying ADFS for Office 365 is a relatively trivial process, although when things do go wrong they can be complex to troubleshoot. If you are in this position, then get the new version of MOSDAL and see if it can help you out…