• Bookmark - SMTP NDR codes (TechNet)

    4.3.1

    Out-of-memory or out-of-disk space condition on the Exchange server. Potentially also means out-of-file handles on IIS.

    4.3.2

    Message deleted from a queue by the administrator via the Queue Viewer interface in Exchange System Manager.

    4.4.1

    Host not responding. Check network connectivity. If problem persists, an NDR will be issued.

    4.4.2

    Connection dropped. Possible temporary network problems.

    4.4.6

    Maximum hop count for a message has been exceeded. Check the message address, DNS address, and SMTP virtual servers to make sure that nothing is causing the message to loop.

    4.4.7

    Message expired. Message wait time in queue exceeds limit, potentially due to remote server being unavailable.

    5.0.0

    Generic failure: no route is available to deliver the message. If it is an outbound SMTP message, make sure that an address space is available and that the proper routing groups are listed.

    5.1.0

    Message categorizer failures. Check the destination addresses and resend the message. Forcing a rebuild of the Recipient Update Service (RUS) may resolve the issue.

    5.1.1

    Recipient could not be resolved. Check the destination addresses and resend the message. Potentially the e-mail account no longer exists on the destination server.

    5.1.3

    Bad address.

    5.1.4

    Duplicate SMTP address. Use LDIFDE or script to locate duplicate and update as appropriate.

    5.2.1

    Local mail system rejected the message as over size. Check the recipient’s limits.

    5.2.3

    Message too large. Possibly the recipient mailbox is disabled because it exceeded its mailbox limit.

    5.3.3

    The remote server has run out of disk space to queue messages, possible SMTP protocol error.

    5.3.5

    Message loopback detected.

    5.4.0

    Authoritative host not found. Check message and DNS to ensure proper entry. Potential error in smarthost entry or SMTP name lookup failure.

    5.4.4

    No route found to next hop. Make sure connectors are configured correctly and address spaces exist for the message type.

    5.4.6

    Categorizer problems with recipient. Recipient may have alternate recipient specified looping back to self.

    5.4.8

    Looping condition detected. Server trying to forward the message to itself. Check smarthost configuration, FQDN name, DNS host and MX records, and recipient policies.

    5.5.0

    Generic SMTP protocol error.

    5.5.2

    SMTP protocol error caused by receiving out-of-sequence SMTP protocol command verbs. Possibly due to low disk space or memory on the remote server.

    5.5.3

    Too many recipients in the message. Reduce number of recipients in message and resend.

    5.7.1

    Access denied. Sender may not have permission to send message to the recipient. Possible unauthorized SMTP relay attempt from SMTP client.

    Source:

    https://www.microsoft.com/technet/prodtechnol/exchange/guides/ExMgmtGuide/fb7830cf-23c6-48a6-8759-526b7854ae39.mspx?mfr=true
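
    If you script against bounce logs, the codes above can be folded into a small lookup table. The sketch below is an illustrative Python helper; the `describe_ndr` function and the abbreviated descriptions are my own shorthand for the list above, not part of the TechNet source:

```python
# Quick-reference lookup for the Exchange SMTP NDR codes listed above.
# Descriptions are abbreviated; see the TechNet source for full details.
NDR_CODES = {
    "4.3.1": "Out-of-memory or out-of-disk-space condition on the Exchange server",
    "4.3.2": "Message deleted from a queue by the administrator",
    "4.4.1": "Host not responding",
    "4.4.2": "Connection dropped",
    "4.4.6": "Maximum hop count exceeded (possible loop)",
    "4.4.7": "Message expired in queue",
    "5.0.0": "Generic failure / no route available",
    "5.1.0": "Message categorizer failure",
    "5.1.1": "Recipient could not be resolved",
    "5.1.3": "Bad address",
    "5.1.4": "Duplicate SMTP address",
    "5.2.1": "Local mail system rejected message (over size)",
    "5.2.3": "Message too large / recipient mailbox over limit",
    "5.3.3": "Remote server out of disk space to queue messages",
    "5.3.5": "Message loopback detected",
    "5.4.0": "Authoritative host not found",
    "5.4.4": "No route found to next hop",
    "5.4.6": "Categorizer problem; possible alternate-recipient loop",
    "5.4.8": "Looping condition; server forwarding to itself",
    "5.5.0": "Generic SMTP protocol error",
    "5.5.2": "Out-of-sequence SMTP command verbs",
    "5.5.3": "Too many recipients in the message",
    "5.7.1": "Access denied / possible unauthorized relay attempt",
}

def describe_ndr(code: str) -> str:
    """Return a short description for an NDR status code."""
    # A leading "4" means a transient failure (retried), "5" means permanent.
    kind = "transient" if code.startswith("4") else "permanent"
    desc = NDR_CODES.get(code, "unknown code")
    return f"{code} ({kind}): {desc}"
```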
  • How to take Exchange traces and dumps for Microsoft Engineers – Example for STORE and MSExchange Transport components

    Prerequisites

    We’ll use the ExTra.exe and ProcDump tools: ExTra to take traces that will capture the issue, and ProcDump to take memory dumps of Store.exe and EdgeTransport.exe for deep code-level analysis.

    Download ProcDump: http://technet.microsoft.com/en-US/sysinternals/dd996900

    The instructions

    Scenario

    Mail gets stuck on some servers, not always the same ones, but the common point between the servers is that it’s always queues pointing to the same database (messages for users in a particular database).

    Event Log analysis does not show any issues, even with Event Logging set to “Expert” for the MSExchange Transport and MSExchangeIS store delivery components.

    We need to take traces that will capture what Exchange is trying to do, and that will enable Microsoft to tell what object is blocking the queue(s).

    Traces

    Traces: take both ExTra traces and dumps from Store and Transport (or another process if it’s for a different scenario).

    These will be analyzed by an accredited Microsoft Engineer.

    1 of 2 > ExTra.exe

    Launch ExTra.exe (available by default on Exchange 2010 and Exchange 2013) and choose the “Trace Control” option


    Then click on “Set manual trace tags” on the next screen:


    Then select the components we wish to trace (for our scenario example, Transport and StoreDriver)


    and click the “Start Tracing” link below.

    To stop the tracing, once the issue has occurred and the dumps below have been taken, click “Stop Tracing now”.


    2 of 2 > Process memory dump - NOTE: do not copy/paste the lines below; retype them instead, as copied text can pick up invisible special characters:

    • On one HUB, run this from the ProcDump directory:

    procdump.exe -ma edgetransport.exe -n 3 -s 15 -accepteula c:\dumps

    • On the Mailbox Server, run this from the ProcDump directory - NOTE the "-mp" parameter instead of "-ma"; otherwise we'd get tens of GBs of data that we don't necessarily need for store.exe debugging:

    procdump.exe -mp store.exe -n 3 -s 15 -accepteula c:\dumps

    Note: if you don't want to place the dump output files on the C:\ drive, you can specify any other path by replacing c:\dumps with the directory of your choice.
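
    The invisible-character problem mentioned above typically comes from typographic dashes (e.g. “–” instead of “-”) and non-breaking spaces picked up when copying from rich-text pages. As an illustrative sketch (the `sanitize_command` helper is my own, not a ProcDump feature), a pasted command line can be normalized before running it:

```python
# Normalize a pasted command line: replace typographic dashes, curly
# quotes, and non-breaking spaces that break ProcDump argument parsing.
def sanitize_command(cmd: str) -> str:
    replacements = {
        "\u2013": "-",   # en dash
        "\u2014": "-",   # em dash
        "\u2212": "-",   # minus sign
        "\u00a0": " ",   # non-breaking space
        "\u201c": '"',   # curly left quote
        "\u201d": '"',   # curly right quote
    }
    for bad, good in replacements.items():
        cmd = cmd.replace(bad, good)
    return cmd

pasted = "procdump.exe -ma edgetransport.exe \u2013n 3 \u2013s 15 -accepteula c:\\dumps"
print(sanitize_command(pasted))
# -> procdump.exe -ma edgetransport.exe -n 3 -s 15 -accepteula c:\dumps
```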

    More great and very useful information at this link:

    Procdump: How to PROPERLY gather dump (.dmp) files for crashes and hangs, CPU spikes, etc, including gathering PERF data, for Exchange issues – by Kris Waters, Premier Field Engineer in Microsoft US.

    http://blogs.technet.com/b/kristinw/archive/2012/10/03/procdump-how-to-properly-gather-dump-dmp-files-for-crashes-and-hangs.aspx

    Sam.

  • Exchange 2007, 2010, 2013 on Windows 2008 / 2008 R2 – Check TCP Chimney Windows settings and status

    Hey,

    If you ever encounter messages that take time to be delivered from HUB servers to Mailbox servers (a step we call "MAPI Submission" or "MAPI Delivery"), or even from one HUB to another HUB in a remote site, and your network doesn't have bandwidth, latency, or general reliability problems, or other configuration issues such as a wrong MTU or wrong speed set on equipment, then check the Windows TCP Chimney and RSS settings.

    These are part of what we call the "Network Scalability Pack" (http://technet.microsoft.com/en-us/network/bb545631.aspx); their primary purpose is to offload network packet processing from the CPU down to the network card, removing that processing load from the CPU and giving the work to the network card instead.

    In the past, when Exchange was installed on Windows 2003 servers, we had issues with these settings (see http://support.microsoft.com/kb/948496), and we recommended deactivating the TCP Chimney, RSS, and sometimes NetDMA features. But experience showed that we can also encounter sporadic issues on Windows 2008 with Exchange 2007 and 2010 (especially for HUB submission queues) under certain circumstances: the network card driver's TCP offload features may not be very compatible with how Exchange uses the network, or the driver may simply be too old to handle TCP offloading properly... it's hard to tell exactly why without a full raw debugging, which I didn't do.

    Anyways, when we encountered network issues on Windows 2008 with Exchange 2007/2010 and had troubleshooted thoroughly to ensure they were not related to bandwidth, latency, general reliability, or other configuration issues (wrong MTU, wrong speed set on equipment, bad network driver, etc.), disabling TCP Chimney and RSS solved our issues. AND, no reboot is theoretically necessary after changing these settings using netsh (but as we always say, when in doubt, reboot).

    • To check the current TCP Chimney, RSS and NetDMA settings:

    Netsh int tcp show global

    For Windows 2008 R2 it will give you output similar to the following:

    PS C:\Users\Administrator.CONTOSO> netsh int tcp show global
    Querying active state...

    TCP Global Parameters
    ----------------------------------------------
    Receive-Side Scaling State          : enabled
    Chimney Offload State               : automatic
    NetDMA State                        : enabled
    Direct Cache Access (DCA)           : disabled
    Receive Window Auto-Tuning Level    : normal
    Add-On Congestion Control Provider  : ctcp
    ECN Capability                      : disabled
    RFC 1323 Timestamps                 : disabled
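
    When checking these settings across many servers, the `netsh int tcp show global` output can be collected and parsed into a dictionary. A minimal illustrative sketch (the `parse_tcp_global` helper is my own, and the sample text mirrors the output above):

```python
def parse_tcp_global(output: str) -> dict:
    """Parse 'netsh int tcp show global' output into {parameter: value}."""
    settings = {}
    for line in output.splitlines():
        # Parameter lines look like "Chimney Offload State : automatic";
        # skip headers and the dashed separator line.
        if ":" in line and not line.lstrip().startswith("-"):
            name, _, value = line.partition(":")
            name, value = name.strip(), value.strip()
            if name and value:
                settings[name] = value
    return settings

sample = """\
TCP Global Parameters
----------------------------------------------
Receive-Side Scaling State          : enabled
Chimney Offload State               : automatic
NetDMA State                        : enabled
"""
print(parse_tcp_global(sample)["Chimney Offload State"])   # -> automatic
```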

    • To print out the network cards that actually have the TCP Chimney functionality activated:

    Netsh int tcp show chimneystats

    In the example below, the “Supp” column shows the value “No”, meaning that TCP Chimney is not supported or not enabled in the network card driver settings; otherwise you’d see “Yes” with values under “TMax” and “PMax”.

    PS C:\Users\Administrator.CONTOSO> netsh int tcp show chimneystats

    Idx  Supp  Atmpt    TMax    PMax     Olc              Failure Reason
    ---  ----  -----    ----    ----     ---              --------------
      11    No  -n/a-       0       0       0                       -n/a-

      Idx           - Interface (NIC) index used by the system
      Supp          - Interface (NIC) supports TCP chimney offload
      Atmpt         - System has attempted TCP connection offload
      TMax          - Offload capacity advertized by the NIC
      PMax          - Offload capacity observed by the system
      Olc           - Number of currently offloaded connections
      FailureReason - Reason why last attempt to offload a connection failed

    Use `netsh int tcp show chimneystats <Idx>' for more details.

    • To disable TCP Chimney and RSS:

    netsh int tcp set global chimney=disabled

    netsh int tcp set global rss=disabled

    For details and reference about the above commands, see http://support.microsoft.com/kb/951037