On The Wire

A detailed look inside the Ethernet Cable

June, 2014

Posts
  • Top 10 Tips for Optimising & Troubleshooting your Office 365 Network Connectivity

    Having performed numerous Office 365 network assessments and reactive visits to resolve issues for customers, it's apparent that the vast majority of issues crop up time and time again. So, from this experience, here are my top 10 tips to optimise your O365 network performance and prevent issues occurring in the future.

    Some of these issues, if present, will cost you seconds, others considerably more; but by eliminating them all and getting your proverbial network ducks in a row, you'll ensure you are providing the best possible Office 365 experience for your users.

    As some of these issues are complex, rather than provide a detailed explanation for each scenario in this blog post, I've linked to separate blog posts which cover each in detail as and when you need them.

    1. TCP Window Scaling

    This tops my list of things to check, due to the impact it can have on performance and the number of times I see it disabled on legacy network equipment. Unfortunately there is no simple way of checking this from a client without taking a network trace, but I've outlined how to do this, and the setting itself, in much more detail in a separate blog post here.

    If you check only one thing on your O365 network link, then I'd advise it's this!
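    As the only reliable check is in a trace, here's a minimal sketch of what that check involves: the Window Scale option (kind 3) lives in the TCP options of the SYN and SYN/ACK. The option bytes below are invented for illustration, not taken from a real capture.

    ```python
    # Sketch: walk the TCP options of a SYN looking for Window Scale (kind 3).
    # Sample option bytes below are hypothetical, not from a real trace.

    def find_window_scale(tcp_options: bytes):
        """Return the window scale shift count, or None if the option is absent."""
        i = 0
        while i < len(tcp_options):
            kind = tcp_options[i]
            if kind == 0:        # End of Option List
                break
            if kind == 1:        # NOP padding, single byte
                i += 1
                continue
            length = tcp_options[i + 1]
            if kind == 3:        # Window Scale: kind, length, shift count
                return tcp_options[i + 2]
            i += length
        return None

    # Hypothetical SYN options: MSS 1460, NOP, Window Scale 8, NOPs, SACK permitted
    opts = bytes([2, 4, 5, 180,   # MSS = 0x05B4 = 1460
                  1,              # NOP
                  3, 3, 8,        # Window Scale, shift count 8
                  1, 1,           # NOPs
                  4, 2])          # SACK permitted
    print(find_window_scale(opts))  # 8 -> receive window multiplied by 2**8
    ```

    If the option is missing from either the SYN or the SYN/ACK, window scaling is off for that connection, which is exactly the condition to look for in the trace.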

    2. TCP Idle time settings

    This issue is another very common one, and is caused by settings on the egress points of corporate networks not being adjusted for the Outlook traffic running through them. Problems caused by this can include hangs within Outlook, especially when switching mailboxes/calendars, and unexpected auth prompts. It's relatively easy to fix, and I've outlined the problem and solution in much more detail here.

    3. Latency/Round Trip Time (RTT)

    Network latency can cause real issues with O365 and its usability. Checking your RTT to O365 is a worthwhile task regardless of whether you're having issues, as it provides you with a great baseline should performance issues occur in future, allowing you to isolate where the delay is occurring.

    The detailed guide on how to do this is here.
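    If you want a quick RTT baseline without firing up a capture tool, timing a TCP handshake gets you close, since a connect is one round trip plus a little overhead. A rough sketch (the endpoint is illustrative; if you're behind a proxy, measure to the proxy instead, as that's the hop your client actually sees):

    ```python
    # Rough RTT estimate: time a full TCP handshake (SYN -> SYN/ACK -> ACK).
    import socket
    import time

    def tcp_connect_ms(host: str, port: int = 443, timeout: float = 5.0) -> float:
        """Time a TCP connect to host:port, in milliseconds."""
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=timeout):
            pass  # only the handshake time matters, so close straight away
        return (time.perf_counter() - start) * 1000.0

    # Usage (requires Internet access; endpoint is illustrative):
    #   samples = [tcp_connect_ms("outlook.office365.com") for _ in range(5)]
    #   print("best RTT estimate: %.1f ms" % min(samples))
    ```

    Take the minimum of several samples; a single reading can be inflated by resolver or scheduler noise.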

    4. Proxy Authentication

    On numerous occasions I've run into unnecessary delays on connecting to O365 caused by proxy authentication. With Outlook this can cause a delay on start-up, when switching mailboxes/calendars and so on: anything that requires a new TCP connection to be spun up. With SharePoint this will manifest itself as slow initial page loads.

    The detailed guide on how to check this is here.

    5. DNS performance

    A simple one but often forgotten. DNS performance should be checked to ensure it isn't adding additional delays to your Office 365 connections. A detailed guide to checking DNS performance can be found here.

    6. Proxy Scalability

    This issue can both affect performance and cause problems further down the line when you least expect it. Proxies are invariably in place before the move to Office 365 and are often used without much reconfiguration for the Office 365 traffic. It's worth checking your numbers here, as you may find you're sailing closer to the wind than you realise.

    The more detailed description of the problem and guidance can be found here.

    7. TCP Max Segment size

    A simple one to check, but worth a look nonetheless. To ensure maximum throughput on the link between yourself and Office 365, we should be transferring data with a TCP segment size as close as possible to the maximum.

    More detail on how to check this is here.

    8. Selective Acknowledgement

    Whilst you're digging around in the TCP Options in your 3-way handshake, it's worth checking Selective Acknowledgement (SACK) is enabled. This feature enables your TCP stack to deal with dropped packets more efficiently.

    A slightly more detailed explanation of SACK and how to check it can be found here.

    9. DNS Geo location

    One of the most important checks you can make, and one that can make a big difference to O365 performance, is ensuring your DNS calls are made in the same geographic location as the user. Getting this wrong means the routing of your traffic to O365 could be sub-optimal and thus affect performance. It's thankfully an easy one to check, and it's outlined further here.

    10. Application Level troubleshooting

    My final tip isn't so much about network troubleshooting as the application layer. This blog post will give you some tips on how to look at Outlook and SharePoint in conjunction with network tracing, to both baseline and troubleshoot application-level issues even when the traffic is encrypted in an SSL session.

    The blog post is here.

    I'll add more as and when I find them/get time; hope the first ten are of help!

  • Preventing proxy authentication from delaying your O365 connection

    A quick and easy check you can do to ensure your O365 connections complete quickly is to check proxy authentication is completing quickly, or better still not being done at all. If you're not using a proxy and are going direct, then you can move along…nothing to see here!

    It's surprisingly common, and I've run into numerous customers experiencing unnecessary delays on connecting to O365 caused by proxy authentication. With Outlook this can cause a delay on start-up, or the 'polo mint' hang when switching mailboxes/calendars and so on: anything that requires a new TCP connection to be spun up. With SharePoint this will manifest itself as slow initial page loads.

    To view this proxy authentication stage of a TCP connection to Office 365 (if indeed there is one), I'd recommend using a packet capture tool on the client, such as Netmon or Wireshark. If enabled, proxy authentication needs to occur with every TCP session setup and is the first thing which has to complete after the TCP 3-way handshake, usually triggered by the first GET or CONNECT request. We should expect this process to complete in milliseconds. This is what it looks like in Netmon:

    In Netmon it's wise to add the 'NTLM SSP Summary' column to show what stage of authentication we're at; also add the 'Time Delta' column to show the time delay from the previous packet shown.

    The easiest way to find the session is to:

    • Close all browser windows and open a single one on a new tab.
    • For Outlook close the application completely
    • Start Netmon
    • Connect to your SharePoint page or start Outlook
    • On the left hand side, Netmon should show your browser (or Outlook) with its TCP connections to your site as follows:

     

    For each of these examples, you'll see multiple TCP sessions per process, and each will perform proxy authentication if enabled. The number will differ depending on the version of Outlook or browser you are using.

    Initially we'll connect with no authentication and be told 'proxy authentication required'. In this example the response takes 0.02 seconds to come back, indicating no network performance issue and no proxy performance issue per se.

    Here is an example of this problem occurring, showing the stages following the TCP 3-way handshake. To see just these packets and ignore the pure TCP ones, use the filter 'HTTP'.

     

    Initial connect:

    14:12:24.6483418 19.0046514 0.0003578 iexplore.exe 10.200.30.40 MyProxy-01.Contoso.sig HTTP:Request, CONNECT Contosoemeamicrosoftonlinecom-3.sharepoint.emea.microsoftonline.com:443 , Using NTLM Authorization NTLM NEGOTIATE MESSAGE

    Proxy Response:

    14:12:24.6876389 19.0439485 0.0283000 iexplore.exe MyProxy-01.Contoso.sig 10.200.30.40 HTTP:Response, HTTP/1.1, Status: Proxy authentication required, URL: Contosoemeamicrosoftonlinecom-3.sharepoint.emea.microsoftonline.com:443 NTLM CHALLENGE MESSAGE

    We then send the request again, this time with NTLM authentication for the proxy as requested:

    Second request with NTLM Auth:

    14:12:24.6883198 19.0446294 0.0004838 iexplore.exe 10.200.30.40 MyProxy-01.Contoso.sig HTTP HTTP:Request, CONNECT Contosoemeamicrosoftonlinecom-3.sharepoint.emea.microsoftonline.com:443 , Using NTLM Authorization NTLM AUTHENTICATE MESSAGE Version:NTLM v2, Domain: headoffdom, User: paul.collinge, Workstation: W7TEST20

    The 200 OK response then comes back from the proxy, but it takes 3 seconds:

    14:12:27.7859643 22.1422739 3.0878394 iexplore.exe MyProxy-01.Contoso.sig 10.200.30.40 HTTP HTTP:Response, HTTP/1.1, Status: Ok, URL: Contosoemeamicrosoftonlinecom-3.sharepoint.emea.microsoftonline.com:443

    Subsequent call:

    Once the above is complete, subsequent calls are back to millisecond response times.

    14:12:27.7868062 22.1431158 0.0008419 iexplore.exe 10.200.30.40 MyProxy-01.Contoso.sig TLS TLS:TLS Rec Layer-1 HandShake: Client Hello.

    14:12:27.8445642 22.2008738 0.0485304 iexplore.exe MyProxy-01.Contoso.sig 10.200.30.40 TLS TLS:TLS Rec Layer-1 HandShake: Server Hello.; TLS Rec Layer-2 Cipher Change Spec; TLS Rec Layer-3 HandShake: Encrypted Handshake Message

     

    As the delay is only seen when performing the proxy authentication stage of the session setup, this indicates the delay is caused by the process of authentication itself.

    As we're using NTLM in this example, and the proxy is in a different domain to the users, it's feasible this delay could be caused by congestion on the secure channels between the proxy and its DC, or its DC and the users' DC.

    This would have to be investigated separately to see if this, or something else, is causing the delay, but here is some information on the issue and the fix (the MaxConcurrentApi registry key):

    http://blogs.technet.com/b/ad/archive/2008/09/23/ntlm-and-maxconcurrentapi-concerns.aspx

    http://support.microsoft.com/kb/975363

    Regardless of the root cause, this behaviour, if occurring, will cause intermittent performance issues with both Office 365 and Internet browsing. When the above is the cause, you may find that response times are good at times and very bad at others. The slow times may well coincide with periods of high utilisation, such as first thing in the morning and after lunch, so you should run these tests at various times of the day.

    The issue could also be occurring due to loading issues on the proxy itself; however, in that case delays would be apparent in more than just the authentication packets.

     

    From an Office 365 perspective, we can easily remove this problem by following the recommended setup and making an exception in the proxy for authentication on the Office 365 URLs, as per: http://support.microsoft.com/kb/2637629

     

    Firewall or proxy servers require additional authentication

    To resolve this issue, configure an exception for Microsoft Office 365 URLs and applications from the authentication proxy. For example, if you are running Microsoft Internet Security and Acceleration Server (ISA) 2006, create an "allow" rule that meets the following criteria:

    • Allow outbound connections to the following destination: *.microsoftonline.com
    • Allow outbound connections to the following destination: *.microsoftonline-p.com
    • Allow outbound connections to the following destination: *.sharepoint.com
    • Allow outbound connections to the following destination: *.outlook.com
    • Allow outbound connections to the following destination: *.lync.com
    • Allow outbound connections to the following destination: osub.microsoft.com
    • Ports 80/443
    • Protocols TCP and HTTPS
    • Rule must apply to all users.

    HTTPS/SSL time-out set to 8 hours

    With these bypassed, we've removed one possible cause of delay on your Office 365 connections, and also taken some load away from both your proxies and your DCs.

    So in summary, if you're using a proxy, ensure there is no authentication performed on the TCP sessions out to Office 365, by whitelisting the URLs above. If you must use proxy auth, make sure it's completing quickly, especially at peak times. It should complete in milliseconds, i.e. not much more than the time between the initial SYN and SYN ACK. Even a delay as small as 2-3 seconds, like the one demonstrated, will have a noticeable impact for your users.
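    As a rough way to measure the delay described above without a full trace, you can time the proxy's response to a CONNECT request from a script. This is a sketch with placeholder proxy details, not a production tool:

    ```python
    # Sketch: time the proxy's first response to a CONNECT, which is where
    # proxy authentication delays surface. Proxy host/port are placeholders.
    import socket
    import time

    def time_connect_request(proxy_host: str, proxy_port: int,
                             target: str = "outlook.office365.com:443",
                             timeout: float = 10.0):
        """Send a CONNECT through the proxy; return (status line, delay in ms)."""
        with socket.create_connection((proxy_host, proxy_port), timeout=timeout) as s:
            request = ("CONNECT %s HTTP/1.1\r\nHost: %s\r\n\r\n"
                       % (target, target)).encode("ascii")
            start = time.perf_counter()
            s.sendall(request)
            status_line = s.recv(4096).decode(errors="replace").splitlines()[0]
        return status_line, (time.perf_counter() - start) * 1000.0

    # Usage (proxy name/port are placeholders for your own environment):
    #   status, ms = time_connect_request("myproxy.contoso.com", 8080)
    ```

    A 407 'Proxy Authentication Required' arriving in milliseconds is healthy; a multi-second wait, like the gap before the post-auth 200 in the trace above, is the behaviour to chase. Run it at different times of day, as the delay is often load-dependent.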

  • Checking your TCP Packets are pulling their weight (TCP Max Segment Size or MSS)

    This is a quick one to check to ensure your TCP packets are able to carry the maximum amount of data possible; low values here will severely affect network performance.

    Maximum Segment Size, or MSS, is a TCP-level value: the largest segment of data which can be sent on the link, excluding the headers. To obtain this value, take the IP-level Maximum Transmission Unit (MTU) and subtract the IP and TCP header sizes.

    So for a standard Ethernet connection with minimum-size IP and TCP headers, we subtract 40 bytes from the 1500-byte standard packet size (which excludes the Ethernet header), leaving us with an MSS of 1460 bytes for data transmission.

    So to get the most efficient use of a standard Ethernet connection we want to see an MSS of 1460 bytes being used on our TCP sessions.

    This setting is agreed in the TCP 3-way handshake when a TCP session is set up. Both sides send an MSS value and the lower of the two is used for the connection.
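    The arithmetic above is simple enough to sketch in a few lines; the helper names are mine, not from any tool:

    ```python
    # MSS = MTU minus IP and TCP headers; the connection then uses the lower
    # of the two values advertised in the SYN and SYN/ACK.

    def mss_for_mtu(mtu: int, ip_header: int = 20, tcp_header: int = 20) -> int:
        """MSS implied by a given MTU with minimum-size headers."""
        return mtu - ip_header - tcp_header

    def negotiated_mss(client_mss: int, server_mss: int) -> int:
        """Both sides advertise an MSS; the lower one wins."""
        return min(client_mss, server_mss)

    print(mss_for_mtu(1500))            # 1460 on standard Ethernet
    print(negotiated_mss(1460, 1380))   # 1380: a lower-MTU hop (e.g. a tunnel) wins
    ```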

    It's easy to check this: take a Netmon or Wireshark trace and find the connection you're interested in; Netmon will filter the connections by process on the left-hand side for you.

    Once you've found the connection (ensuring you've started tracing before initiating the connection), you just need to open the first two frames of the connection, the SYN and SYN ACK, indicated by an S followed by an A..S in the description of the frame. To capture the 3-way handshake, make sure you start tracing, then start Outlook or connect to your SharePoint site in a new browser window.

    Once you've clicked on the first packet, the SYN, open up TCP Options in the frame details pane at the bottom, and the MSS can be clearly seen.

    Here we see the MaxSegmentSize shown as 1460.

     

    Repeat this with the SYN ACK which should be the second frame if you've filtered the connection away from other traffic. The lower of the two values will be your MSS. If it's 1460 then you're configured to use a full sized data payload.

    One caveat: this doesn't mean the value can actually be used, as it's possible a network segment along the route has a lower MTU than we're aware of. If this is the case, and all is well, we'll get an ICMP message back from the router at the edge of that link when we send a 1460-byte packet with the do-not-fragment bit set. This message tells us the MTU of the link, and we'll adjust accordingly. However, it's always worth checking this value is set high, and that the TCP payload throughout the trace stays at 1460 (on full packets) and hasn't dropped down to a lower value.

    It's common to see this value somewhat lower than the Ethernet maximum of 1460, if for example a network segment along the route has a lower MTU (one with an encryption overhead, say), but the value shouldn't be significantly lower. 576-byte packets are a sure sign we've hit problems and dropped down to the minimum packet size, so keep an eye out for those.

    Also remember, if you're using a proxy, you'll have to check this both on the client and in a trace on the proxy or NAT device if used, as there will be two distinct TCP sessions in play, and you won't see a problem beyond the proxy/NAT unless you trace that second TCP connection there.

    It's rare to see an issue with this, but it's always worth a quick check to ensure it's working as expected.

  • DNS geolocation for Office 365, connecting you to your nearest Datacenter for the fastest connectivity

    One of the main things we need to get right to ensure the most efficient and speedy connectivity to O365 is where in the world your DNS call is being completed. You'd think this wouldn't matter, you do a DNS lookup for your O365 tenant, get the address then connect right? Well, normally yes, but with O365, especially with Outlook, we do some pretty clever stuff to utilise our worldwide array of datacenters to ensure you get connected to your data as efficiently as possible.

    Your Outlook connection will do a DNS lookup and we use the location of that lookup to connect you to your nearest Microsoft Datacenter. With Outlook we'll connect to a CAS server there and use our fast Datacenter to datacenter backbone network to connect you to the datacenter where your exchange servers (and data) are located. This generally works much quicker than a direct connection to the datacenter where your tenant is located due to the speed of the interconnecting networks we have.

    http://technet.microsoft.com/en-us/library/dn741250.aspx outlines this in more detail but a diagram nicked from this post shows how this works for Outlook/Exchange connectivity when the Exchange mailbox is located in a NA datacenter but the user is physically located in EMEA. Therefore the DNS lookup is performed in EMEA, we connect to the nearest EMEA datacenter, which then routes the connection through to your mailbox over our backbone network, all in the background and your Outlook client knows nothing about this magic going on behind the scenes.

     

    If your environment is making its DNS calls in a location on a different continent from where the user is physically located, then you are going to get really bad performance with O365. Take an example where the user and mailbox are located in EMEA. Your company uses DNS servers located in the USA for all calls, or the user is incorrectly set to use a proxy server in the USA, so we're given the IP address of a USA-based datacenter as that's where we think your user is located. The client will then connect to the USA-based datacenter, which will route the traffic to the EMEA datacenter, which will then send the response back to the USA-based datacenter, which will then respond to the client back in EMEA. So in this scenario we've got several unnecessary trips across the pond with our data.

    It is therefore vitally important to get the DNS lookup right for when you move to Outlook on Office 365.

    So how do you check this? Well it could be a bit tricky as although we release a list of IP addresses used for O365, we don't tell you which ones map to where, for many reasons including the fact they change regularly. Thankfully one of my Microsoft colleagues has shown me an easy way to check you're connecting to a local datacenter.

    All you need to do is open a command prompt on the client and ping outlook.office365.com, and the response will tell you which datacenter you'll connect to. So, sat here at home in the UK, I get EMEAWEST.

     

    If I connect to our Singapore VPN endpoint and turn off split tunnelling and force the DNS call down the VPN link (our Internal IT do a great job of making these things configurable for us techies) then I get directed to apacsouth.

    And if I connect via VPN to the mothership in Seattle, my DNS call is completed there and thus I get directed to namnorthwest.

    So it's a quick and easy check, just make sure the datacenter returned is in the same region as you're physically located in.
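    The same check can be scripted: resolving the Outlook endpoint and looking at the canonical name your resolver returns typically reveals the region token (the EMEAWEST/apacsouth/namnorthwest seen above). A sketch, with the caveat that the exact naming is Microsoft's to change at any time:

    ```python
    # Resolve a name via the OS resolver and return the canonical name,
    # any aliases, and the addresses. The region token, where present,
    # appears in the names returned.
    import socket

    def resolve_chain(name: str):
        """Return (canonical_name, aliases, addresses) for a hostname."""
        return socket.gethostbyname_ex(name)

    # Usage (requires Internet access):
    #   canonical, aliases, addrs = resolve_chain("outlook.office365.com")
    #   print(canonical)   # check the region matches where the user sits
    ```

    Because this goes through the OS resolver, it reflects exactly what the client would get, which is the point of the check.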

    SharePoint is currently directed to the datacenter where your tenant is located so it doesn't matter so much where the call is made for this (although it should still preferably be local to the user for the portal connection). Lync is slightly different and is outlined in this article in more detail.

    It's also worth ensuring all your clients are using a proxy in the same region as where they are located, as if not, they could hit the problem outlined above and thus be getting unnecessarily poor O365 performance.

  • Ensuring your Proxy server can scale to handle Office 365 traffic

    Proxy servers are often in place at customer sites, happily ticking away handling Internet traffic for years before Office 365 came along. As Office 365 generally travels over port 443 (for Outlook and SharePoint at least) then what's to think about? Your proxy can handle this like any other SSL traffic right?

    Well, technically speaking this is indeed the case, but one thing you need to consider is the way Office 365 connects: it uses multiple, long-lived connections. This is not the same as normal web browsing, where sessions also tend to be multiple but not long-lived; they are generally torn down once the page is loaded or finished with, and they aren't all going to the same remote IP address. So we've got to take into account both that each user will be using more concurrent TCP sessions than previously, and that those sessions will in some cases be kept open for an extensive period of time (i.e. Outlook connections).

    This article outlines the expected number of TCP connections for older versions of Outlook. You can see in the table below that in cached mode 8 connections per client is possible; I've seen more than this when you add multiple mailboxes and calendars (think of your Exec PAs). Generally the newer versions of Outlook use a lower number of connections, as they are designed with the cloud in mind, but again power users can push the number of connections up above the norm.

     

    Let's take an example. Contoso has a single proxy with a single IP, which has been working fine for years. They introduce Office 365 gradually for 6000 clients, including Outlook and SharePoint.

    Whilst the proxy server is able to cope with the load at present, it is presenting itself to Office 365 via a single IP address.

    Using the calculations outlined in this article, we believe an absolute maximum of 6000 clients can be supported by the current setup, although I would err on the side of caution and estimate this to be nearer 4000. The issue stems from the ephemeral ports available to connect to Office 365: Outlook can, and does, open many connections per user.

     

    • Maximum supported devices behind a single public IP address = (64,000 – restricted ports)/(Peak port consumption + peak factor)
    • For instance, if 4,000 ports were restricted for use by Windows and 6 ports were needed per device with a peak factor of 4:
    • Maximum supported devices behind a single public IP address = (64,000 – 4,000)/(6 + 4)= 6,000
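    The capacity formula above is easy to wrap in a helper for your own numbers (the function name is mine, not from the article):

    ```python
    # Capacity formula from the article: how many devices a single public
    # IP can support, given restricted ports and per-device port demand.

    def max_devices_per_ip(restricted_ports: int,
                           peak_ports_per_device: int,
                           peak_factor: int) -> int:
        """(64,000 - restricted ports) / (peak port consumption + peak factor)."""
        return (64_000 - restricted_ports) // (peak_ports_per_device + peak_factor)

    # The article's worked example: 4,000 restricted ports, 6 ports per
    # device, peak factor of 4.
    print(max_devices_per_ip(4_000, 6, 4))  # 6000
    ```

    Plugging in your own peak Outlook connection counts (8+ for older cached-mode clients) shows quickly how much headroom a single egress IP really gives you.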

       

    So Contoso here would find that with 6000 clients running Outlook 2007, not only would Office 365 connections start to fail at random as the limit is approached, but general Internet connections would also start to fail as no resources are available, and the proxy would be under enormous load. This is because the normal Internet traffic is going through the proxy while we're also using many thousands of long-lasting connections to Office 365, all from a single IP. Using a more modern Outlook client may give you some more leeway in this scenario, but you're still sailing close to the wind with the proxy's limitations when handling Outlook and SharePoint plus normal web traffic.

    Although Microsoft recommends that a proxy is not used and Office 365 traffic is sent direct, due to this and other performance concerns, we are aware this is not an easy solution for many customers who prefer to use a proxy.

    The article below outlines a solution to this problem by segmenting the network across multiple proxies. Another option might be to load balance multiple proxies; however, the load balancer would have to ensure stickiness to the client, as every connection from Outlook to Office 365 needs to come from a single IP.

    http://technet.microsoft.com/en-us/library/hh852542.aspx

    So in summary, it's wise to check how many clients you've got connecting to Office 365 and ensure you have enough proxies, and IP addresses on those proxies, to scale to the number of ports required whilst still efficiently serving normal Internet traffic. Don't presume your faithful old proxy is going to be able to handle the load and the new type of long-standing TCP connections that Office 365 uses, alongside its normal handling of other web traffic.

  • Ensuring your TCP stack isn’t throwing data away


    In my previous blog post, I discussed checking the MSS to ensure full-sized packets are used. Well, whilst you're digging around in the TCP Options of the SYN-SYN/ACK packets, it's worth checking another option: SACK, or Selective Acknowledgement.

    As you most likely know, TCP is a reliable protocol, in that it ensures delivery of all data. It does this by the ACKs indicating it has received up to a certain point in the data stream. This data stream is essentially tracked as a series of numbers, called... the sequence numbers.

    As an example, if we send 1460 bytes and our last sequence number was 40000, then the ACK sent back to the machine which sent those 1460 bytes will be 41460, and so on: the sequence number is incremented by the number of bytes received, and thus the sender knows the data arrived safely.

    However, we generally send a burst of these packets and the receiver ACKs every other one, so what happens if we send 6 packets and packet 3 goes missing en route? Let's call these packets 1, 2, 3, 4, 5 and 6. If we receive packets 1, 2, 4, 5 and 6, then without SACK we'd have to drop packets 4, 5 and 6 and ACK 2, to indicate to the sender that that's the point we'd got up to when we noticed a packet missing. The sender would then have to retransmit packet 3 followed by 4, 5 and 6, which obviously isn't efficient as we'd already received them but had to drop them. This also takes time and thus slows data transfer.

    With SACK enabled, we're able to tell the sender we're missing a packet and also which other packets we have received. So in essence we can say to the sender, "Hey, I've got packets 1-2, and also 4, 5 and 6"; the sender can therefore retransmit just packet 3, and we save having to retransmit 4, 5 and 6 (and any other subsequent packets which arrive before the retransmission of 3 does).
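    A toy model of the retransmission behaviour described above makes the saving concrete. This is deliberately simplified (real TCP works in byte ranges and SACK blocks, not packet numbers):

    ```python
    # Toy model: which segments must be resent after a loss, with and
    # without SACK. Segments are numbered 1..6 for illustration only.

    def retransmissions(sent, received, sack_enabled):
        """Return the list of segments the sender must resend."""
        missing = [seg for seg in sent if seg not in received]
        if not missing:
            return []
        if sack_enabled:
            return missing                                # resend only the gaps
        first_gap = missing[0]
        return [seg for seg in sent if seg >= first_gap]  # resend from the gap on

    sent = [1, 2, 3, 4, 5, 6]
    received = [1, 2, 4, 5, 6]          # segment 3 lost en route
    print(retransmissions(sent, received, sack_enabled=False))  # [3, 4, 5, 6]
    print(retransmissions(sent, received, sack_enabled=True))   # [3]
    ```

    The gap between those two answers grows with the burst size, which is why losing SACK on a high-latency link hurts so much.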

    Hope that explanation makes sense for the purposes of this post; obviously the real implementation is a little more detailed, and if you can't sleep, the detailed RFC is here.

    This greatly increases the efficiency of the TCP protocol and is therefore enabled by default in Windows and most other TCP implementations. However, there can be occasions where devices disable this feature, so it's always worth a quick check.

    As with the MSS and Window Scale factor, this setting is negotiated in the SYN and SYN/ACK packets and can be found in the TCP Options area of those packets. If you're using a proxy or NAT device, it's worth tracing at the egress point to ensure the TCP connection outside your environment also has the setting enabled.

    Ensure this is enabled on both the SYN and SYN ACK, and you're good to go!

  • Checking your DNS performance isn’t delaying your O365 connections

     

    One of the initial things which should be checked is name resolution, a point often forgotten when doing performance tests. If name resolution takes time, this will manifest itself as slow initial page loads in SharePoint. It's less visible with Outlook, but little delays in DNS, proxy authentication and so on, when added up, can mean a poorly performing O365 infrastructure.

    Checking this is easy, and again involves a quick network capture on a test client.

    It is always advisable to flush the DNS cache by running ipconfig /flushdns before taking any traces. The steps are:

     

    • Install Netmon or Wireshark on a test client
    • Start tracing
    • Run ipconfig /flushdns to clear the DNS cache
    • Start Outlook or connect to your SharePoint site
    • Once connected stop the trace
    • Use the filter 'DNS' to show all DNS traffic in the capture tool.
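    Alongside the trace, a quick resolver timing loop from the client gives a first-pass number. This goes through the OS resolver, so flush the cache first for a cold measurement; a sketch:

    ```python
    # Time name resolution through the OS resolver, in milliseconds.
    import socket
    import time

    def dns_lookup_ms(name: str) -> float:
        """Time a single name resolution, in milliseconds."""
        start = time.perf_counter()
        socket.getaddrinfo(name, 443)
        return (time.perf_counter() - start) * 1000.0

    # Usage (requires Internet access; run after ipconfig /flushdns):
    #   for name in ("login.microsoftonline.com", "outlook.office365.com"):
    #       print(name, "%.1f ms" % dns_lookup_ms(name))
    ```

    Bear in mind a warm (cached) lookup will come back near-instantly, so only the first, cold lookup per name reflects your DNS servers.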

     

    Netmon handily gives each DNS call and response (and any other protocol for that matter) a unique ID number which we can use to filter if we wish. Here it's DNS conversation ID 124, so I'd write conversation.DNS.ID==124 in the Netmon filter to see just this DNS call and its response. Alternatively, you can right-click a frame of interest in a saved trace and click Find Conversations > DNS. Or we could use the DNS query ID; in this case it'd be 'DNS.QueryIdentifier == 0x5b9f'.

     


    In the following example we can see the Contoso DNS servers taking up to 3.7 seconds to respond to a DNS call. This would undoubtedly manifest itself as slowness in an initial connection to Office 365.

     

    13:52:52 16/04/2013 31.2765664 0.0000000 10.200.30.40 10.214.2.129 DNS:QueryId = 0xE41, QUERY (Standard query), Query for Contosoemeamicrosoftonlinecom-3.sharepoint.emea.microsoftonline.com of type A on class Internet

    13:52:56 16/04/2013 35.0579179 3.7813515 10.214.2.129 10.200.30.40 DNS:QueryId = 0xE41, QUERY (Standard query), Response - Success, 10.123.123.124 ...

    Another DNS server can be seen here responding slowly:

    13:52:54 16/04/2013 33.3042446 0.0000000 10.200.30.40 uk1.headoffdom.uk.Contoso.com DNS:QueryId = 0xE41, QUERY (Standard query), Query for Contosoemeamicrosoftonlinecom-3.sharepoint.emea.microsoftonline.com of type A on class Internet

    13:52:56 16/04/2013 35.0583415 1.7540969 uk1.headoffdom.uk.Contoso.com 10.200.30.40 DNS:QueryId = 0xE41, QUERY (Standard query), Response - Success, 10.123.123.124

    However, some other queries can be seen answered by the DNS server in a much faster manner:

    13:52:57 16/04/2013 35.6045648 0.0000000 10.200.30.40 10.214.2.129 DNS :QueryId = 0xE77C, QUERY (Standard query), Query for login.microsoftonline.com of type A on class Internet

    13:52:57 16/04/2013 35.6049028 0.0003380 10.214.2.129 10.200.30.40 DNS:QueryId = 0xE77C, QUERY (Standard query), Response - Success, 49, 0

    Under optimal conditions I would expect a DNS call to return in less than 100ms, ideally much less. Any delay in this phase will manifest itself as poor initial performance when loading a page. In theory (presuming we don't need to resolve any further addresses), connectivity should be quicker once the initial page is loaded.

    If you see a slow response like the one above, it's worth first checking the psping times to the DNS server on TCP port 53 (most calls will be over UDP, but the server should be listening on TCP 53 too). The method to do this is outlined here. If the psping time is similar to that seen in the DNS response, then it's possibly a network delay between you and the server. If it's consistently much quicker, it's more likely an application (DNS) level issue which you should investigate on the server, and on any forwarders if used.