Blog du Tristank

So terrific that 3 of 4 readers rated it "soporific"

MaxUserPort - what it is, what it does, when it's important


What can we say about MaxUserPort that hasn't already been said? Not a lot, it would seem. He's a beautiful dancer, perhaps? Ahh, such gentle humour, and nary a kitten drowned anywhere.

But TCP port shenanigans are fairly frequently misunderstood, so let's talk about the very basics of MaxUserPort.

NB: This is all pre-Vista behaviour - applicable from NT4 through to Windows Server 2003, including all the little NT-flavoured stops on the way.

 

MaxUserPort controls "outbound" TCP connections

MaxUserPort is used to limit the number of dynamic ports available to TCP/IP applications.

(I don't know why it works this way, I just know it does. Probably something to do with constraining resource use on 16MB machines, or something.)

It's never going to be an issue affecting inbound connections. MaxUserPort is not the right answer if you think you have an inbound connection problem.

To further simplify: it's typically going to limit the number of outbound sockets/connections that can be created. Note: that's really a big fat generalization, but it's one that works in 99% of cases.

If an application asks for the next available socket (a socket is a combination of an IP address and a port number), it'll come from the ephemeral port range allowed by MaxUserPort. Typically, these "next available" sockets are used for outbound connections.
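As a quick aside, the "next available socket" request is easy to see in code. A minimal Python sketch - binding to port 0 asks the stack to pick an ephemeral port, and the exact number you get back depends on the OS's ephemeral range (1024 up to MaxUserPort on the Windows versions discussed here):

```python
import socket

# Ask the OS for "the next available" port by binding to port 0.
# On pre-Vista Windows the port comes from 1024 up to MaxUserPort
# (5000 by default); other OSes use their own ephemeral ranges.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("127.0.0.1", 0))   # port 0 = "pick one for me"
addr, port = s.getsockname()
print(f"OS assigned ephemeral port {port}")
s.close()
```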

The default dynamic port range governed by MaxUserPort is 1024-5000, but the value can be raised as high as 65534.
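If you do end up raising it, the value lives in the registry. A sketch of the usual approach (double-check the key against the KB articles below, and note that TCP/IP parameter changes on these OS versions need a reboot to take effect):

```shell
REM Raise MaxUserPort to its maximum (REG_DWORD, decimal 65534), then reboot.
reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v MaxUserPort /t REG_DWORD /d 65534 /f
```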

 

When You Fiddle MaxUserPort

So, why would you change MaxUserPort?

In the web server context (equally applicable to other application servers or even client programs), you'd usually need to look at MaxUserPort when:

- your server process is communicating, as a client, with some other type of system (like a back-end database, or any TCP-based application server - quite often HTTP web servers)

And:

- you are not using socket pooling, and/or

- your request model is something like one request = one outbound TCP connection (or more!)

In this type of scenario, you can run out of ephemeral ports (between 1024 and MaxUserPort) very quickly, and the problem will scale with the load applied to the system, particularly if a socket is acquired and abandoned with every request.

When a socket is abandoned, it'll take two minutes to fall back into the pool.

Discussions about how the design could scale better if it pooled and reused sockets rather than abandoning them tend to be unwelcome when the users are screaming that the app is slow, or hung, or whatever. So at this point, you'd have established that new request threads are hung waiting on an available socket, and you just turn up MaxUserPort to 65534.

 

What Next? TcpTimedWaitDelay, natch

Once MaxUserPort is at 65534, it's still possible for the rate of port use to exceed the rate at which they're being returned to the pool! You've bought yourself some headroom, though.

So how do you return connections to the pool faster?

Glad you asked: you start tweaking TcpTimedWaitDelay.

By default, a connection can't be reused for two times the Maximum Segment Lifetime (MSL), which works out to 4 minutes - or so the docs claim. According to The Lore O' The Group here, we reckon it's actually just the TcpTimedWaitDelay value, with no doubling of anything.

TcpTimedWaitDelay lets you set a value for the Time_Wait timeout manually.
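It's a registry value in the same Tcpip Parameters key as MaxUserPort; a sketch of setting it (again, verify against the KBs below, and budget for a reboot):

```shell
REM Shorten TIME_WAIT to 30 seconds - the documented minimum; the valid range is 30-300.
reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v TcpTimedWaitDelay /t REG_DWORD /d 30 /f
```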

As a quick aside: the value you specify has to take retransmissions into account - a client could still be transferring data from a server when a FIN is sent by the server, and the client then gets TcpTimedWaitDelay seconds to get all the bits it wants. This could be sucky in, for example, a flaky dial-up networking scenario, or, say, New Zealand, if the client needs to retransmit a whole lot... and it's sloooow. (and this is a global option, as far as I remember).

30 seconds is a nice, round number that either quarters or eighths (depending on who you ask - we say quarters for now) the time before a socket is reusable without the programmer doing anything special (say, setting SO_REUSEADDR).
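For the curious, "doing something special" looks like this in Python - a minimal sketch of setting SO_REUSEADDR before bind(), which lets a socket bind an address that still has an old connection sitting in TIME_WAIT:

```python
import socket

# Set SO_REUSEADDR *before* bind(), so the address can be reclaimed
# even if a prior connection on it is still in TIME_WAIT.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
reuse = s.getsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR)
s.bind(("127.0.0.1", 0))
print("SO_REUSEADDR set:", bool(reuse))
s.close()
```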

If you've had to do this, you should at this point be thinking seriously about the architecture - will it scale to whatever load requirements you have?

The maths is straightforward:

If each connection is reusable only after a minimum of N (TcpTimedWaitDelay) seconds
and you are creating more than X (the number of ports allowed by MaxUserPort) connections in any N-second period...

Your app is going to spend time "waiting" on socket availability...

Which is what techy types call "blocking" or "hanging". Nice*!
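To make that arithmetic concrete, a quick back-of-envelope sketch using assumed figures (the full MaxUserPort range and TcpTimedWaitDelay turned down to 30 seconds):

```python
# Back-of-envelope: the sustainable outbound connection rate before
# threads start blocking on port availability.
# Assumed figures: full MaxUserPort range, TcpTimedWaitDelay = 30s.
low, max_user_port = 1024, 65534
tcp_timed_wait_delay = 30          # seconds in TIME_WAIT

available_ports = max_user_port - low + 1        # ports in the pool
rate = available_ports / tcp_timed_wait_delay    # connections/second
print(f"{available_ports} ports -> ~{rate:.0f} new connections/sec sustained")
```

Exceed that rate for long enough and the pool drains faster than TIME_WAIT refills it.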

Fun* KB Articles:
http://support.microsoft.com/kb/319502/
http://support.microsoft.com/kb/328476

Comments
  • Hi,

    do you happen to know the default / need of this in 2008 R2 and Win 2012?

    Is this correct?

    "The new default start port is 49152, and the default end port is 65535"

Yes, you still need to adjust the port range if your app software isn't doing it for you; the outbound port range is still limited by default.

    The legacy MaxUserPort registry value is still respected, so if an app sets it while installing, it should still work. These days, if you're configuring it manually, you should use NetSH to set the dynamic port range (see support.microsoft.com/.../929851) with netsh int ipv4 set dynamicport tcp (etc).

    At least as far up as Windows Server 2008 R2, the reg key works; I haven't tried it with Windows Server 2012 yet (just due to a lack of need), but I'd guess it would.

  • Hi,

Where do these registry entries need to be set - on the client (which is making the requests) or on the server (which is responding to them)? Also, does it require a reboot?

We are facing this issue when we try to access a SharePoint web service, so where do we need to set these registry values - on the client machine or the SharePoint server?

    Thanks in advance
    Francis.

  • Ah, the comment box is at the top.

    @Francis: As the text implies, it's a client-side setting, i.e. whatever's making the requests you think are bottlenecking.

    To use your example and chat a bit about it:

    - If you think a single client is exceeding 5000 ports when talking to Sharepoint, I'm dubious that your client is written correctly :)

    - If you think your Sharepoint server is making 5000 outbound connections to an upstream Web Service, that's much more plausible, especially with a large number of clients.

    But *just based on the description*, it doesn't mean that this is the limit you're hitting. There are many possible choke points, including the target web server. This one's easy to "set and forget" to test, though, so by all means give it a try, and if it's not this, then it might be a Sharepoint or .Net level throttle (like maxconnection or similar http://stackoverflow.com/questions/7849884/what-is-limiting-the-of-simultaneous-connections-my-asp-net-application-can-ma)

    Happy hunting!
