I am somewhat new to networking. I have interest in running a
Bittorrent tracker. I ran one for a bit, and my one Linux box
running Opentracker gets overloaded. My connection is good, and
most of it isn't being used. Just a lot of people connect, and use
up all the 65k "free connections". I tried messing with the
sysctls, but it didn't help too much (and just degraded the
connection quality for everyone). It is not a malicious attack
either, as there are only a few connections per IP and they are
sending proper Bittorrent tracker requests...
So what can I do? How can I have more than 65k concurrent
connections open on standard GNU/Linux?
Thanks for any ideas and suggestions.
You have only 16 bits for port numbers.
This is not a networking (= moving IP packets) problem; this is a Linux problem. I'm sure it can be done, but NANOG is not the place to look for it.
Jorge Amodio (jmamodio) writes:
You have only 16 bits for port numbers.
65k port numbers != number of connections.
The number of open connections (if we're talking TCP) is
limited by the maximum number of file descriptors in the kernel.
You could have hundreds of thousands of connections to
the same (destination IP, destination port).
In practice, there are other limitations; the C10K problem is
good reading, even though it is a few years old.
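The file-descriptor ceiling mentioned above can be inspected, and partly raised, from inside a process; a minimal sketch in Python (the printed values will vary by system, and raising the hard limit itself requires privileges):

```python
import resource

# RLIMIT_NOFILE caps how many file descriptors (and therefore open TCP
# connections) this process may hold at once.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("soft limit:", soft)
print("hard limit:", hard)

# An unprivileged process may raise its soft limit up to the hard limit.
resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
print("new soft limit:", resource.getrlimit(resource.RLIMIT_NOFILE)[0])
```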
Hint: That gives you 65K connections *per interface*. You can listen
on more than one interface.
This is probably off topic for this list though. The OP needs to find
a network *programming* mailing list or forum.
An incoming connection chews up a file descriptor but does not require
an ephemeral port.
You can trivially have more than 65k incoming connections on a Linux
box, but you've only got 64511 ports per IP on the box to use for
outgoing connections.
I've seen boxes supporting more than a million connections with tuning
in the course of normal operation.
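The 64511 figure presumably counts ports 1025 through 65535, i.e. the 16-bit port space minus port 0 and the 1024 low "well-known" ports; a quick check:

```python
# Ports 1025..65535 remain after excluding port 0 and the low
# well-known range, which matches the 64511 figure above.
usable = 65535 - 1025 + 1
print(usable)  # 64511

# The kernel's real ephemeral range is narrower still; on Linux it is
# governed by the net.ipv4.ip_local_port_range sysctl.
```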
This has nothing to do with ports. As others have said, think of a web server: httpd listens on TCP 80 (maybe 443 too) and all the Facebookers on Earth hit that port. Could be hundreds of thousands of connections, and only one port. Available memory and open files will be the limiting factor as to how many established connections you can maintain with one host, provided there are no external limitations such as port speed.
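A loopback experiment makes this concrete; a sketch in Python, with 50 connections standing in for the hundreds of thousands:

```python
import socket

# One listening socket on a single port carries many simultaneous
# connections; each is distinguished by the client's side of the
# (src IP, src port, dst IP, dst port) tuple, not by the server's port.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))   # let the OS pick a free port
srv.listen(128)
port = srv.getsockname()[1]

clients = [socket.create_connection(("127.0.0.1", port)) for _ in range(50)]
accepted = [srv.accept()[0] for _ in range(50)]

server_ports = {c.getpeername()[1] for c in clients}   # one server port
client_ports = {c.getsockname()[1] for c in clients}   # 50 ephemeral ports
print(len(server_ports), "server port,", len(client_ports), "client ports")

for s in clients + accepted:
    s.close()
srv.close()
```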
And do not forget ulimit and the select() limit on maximum open descriptors; both can be tuned.
As long as you're not connecting to the same destination IP/port pair,
the same source IP/port pair can be reused. So even for outgoing
connections there is virtually no limit.
I suspect it has more to do with NAT connection tracking on his DSL router.
I believe the original poster was specifically asking how to increase the file descriptor limit (ulimit -n) past 65k. That is most likely where the limitation comes in for the number of connections he is talking about.
As someone else said, this is probably not the best place for this; however, you can look at /etc/security/limits.conf and play with the soft and hard nofile limits. Try unlimited, maybe.
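For reference, the nofile entries in limits.conf look something like this (the "tracker" username and the 1000000 value are placeholders to adapt):

```
# /etc/security/limits.conf -- raise open-file limits for the tracker user
tracker  soft  nofile  1000000
tracker  hard  nofile  1000000
```

Note that the hard limit here is still ultimately capped by the kernel's fs.nr_open and fs.file-max sysctls.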
On Thu, 14 Oct 2010 12:54:05 -0400
This has nothing to do with ports. As others have said, think of
a web server: httpd listens on TCP 80 (maybe 443 too) and all the
Facebookers on Earth hit that port. Could be hundreds of thousands,
and only one port. Available memory and open files will be the
limiting factor as to how many established connections you can maintain
with one host, provided there are no external limitations such
as port speed.
You are correct. Brain fart here. I actually had to pull Stevens off
the shelf for a quick refresher. Of course, every TCP connection is
different but includes only one port on the server. The five-tuple
that defines the connection includes the remote host (client) and port,
which is always unique at any one time. Other than local resource
limits, the total number of combinations is technically 256**6, i.e.
every IPv4 address times the number of ports. That's not even
including IPv6.
Still off-topic here though. The OP still needs to find the correct
group to figure out his real problem.
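That 256**6 figure can be sanity-checked in a line or two of Python:

```python
# 2^32 IPv4 client addresses x 2^16 client ports, per server socket.
combinations = (2 ** 32) * (2 ** 16)
assert combinations == 256 ** 6   # 256**6 == 2**48
print(combinations)  # 281474976710656
```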