>>> Operators are probably more interested in the "fairness" part of
>>> "congestion" than the "efficiency" part of "congestion."
>>
>> TCP's idea of fairness is a bit weird. Shouldn't it be per-user, not
>> per-flow?
>
> How would you define "user" in that context?
Operators always define the "user" as the person paying the bill. One
bill, one user.
It's easy to imagine a context where authentication at the application
layer determines "user" in a bill-paying context. Passing that
information into the OS, and having the OS try to schedule fairness
based on competing applications' "guidance," seems like a level of
complexity that adds little value over implementing fairness on a
per-flow basis. In any case, any such notion of "user" is lost once the
packet gets out on the wire - especially when the user is determined by
application-layer authentication - so I don't consider 802.1X or the
like to be helpful in this instance.
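To make the per-flow baseline concrete, here is a minimal sketch of deficit round robin keyed on an opaque flow identifier. The class, the host names, and the packet sizes are all invented for illustration; real schedulers live in the kernel or the forwarding hardware, not in Python.

```python
from collections import deque
from itertools import islice

class DRRScheduler:
    """Deficit round robin over flows: each backlogged flow key (e.g. a
    TCP 5-tuple) gets an equal byte share of the link, with no notion of
    which user owns the flow."""

    def __init__(self, quantum=1500):
        self.quantum = quantum      # bytes of credit added per round
        self.queues = {}            # flow key -> deque of packet sizes
        self.deficit = {}           # flow key -> unused byte credit
        self.active = deque()       # backlogged flows in round-robin order

    def enqueue(self, flow, pkt_len):
        if flow not in self.queues:
            self.queues[flow] = deque()
            self.deficit[flow] = 0
            self.active.append(flow)
        self.queues[flow].append(pkt_len)

    def transmit(self):
        """Yield (flow, pkt_len) in transmission order while backlogged."""
        while self.active:
            flow = self.active.popleft()
            self.deficit[flow] += self.quantum
            q = self.queues[flow]
            while q and q[0] <= self.deficit[flow]:
                self.deficit[flow] -= q[0]
                yield flow, q.popleft()
            if q:
                self.active.append(flow)   # still backlogged: back of the line
            else:
                self.deficit[flow] = 0     # idle flows carry no credit

# Host A opens 9 flows, host B opens 1, all continuously backlogged.
sched = DRRScheduler()
for i in range(9):
    for _ in range(50):
        sched.enqueue(("hostA", i), 1500)
for _ in range(50):
    sched.enqueue(("hostB", 0), 1500)

sent = {"hostA": 0, "hostB": 0}
for (host, _), size in islice(sched.transmit(), 100):
    sent[host] += size
print(sent)   # ~90% of the first 100 packets belong to host A
```

Note that the scheduler never needs any "user" information passed down from the application layer; its unit of fairness is whatever key you hash on, which is exactly why a host that opens more flows gets more capacity.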
It's fun to watch network engineers' heads explode.
What if the person paying the bill isn't party to either side of the
TCP session?
Stephen
Money and congestion are aggregated at many different levels. At the dorm
level, money and congestion may be shared per student; at the institution
level, per department; and at the backbone level, per institution.
That's the issue with per-flow sharing: ten institutions may be sharing a
cost equally, but if one student in one department at one institution
generates 95% of the flows, should he be able to consume 95% of the
capacity?
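A quick back-of-the-envelope version of that example, with every number hypothetical:

```python
capacity = 10_000                # Mbit/s on the shared backbone link (made up)
total_flows = 2_000
student_flows = int(total_flows * 0.95)   # one student opens 95% of the flows

# Per-flow fairness: capacity follows flow count.
per_flow = capacity * student_flows / total_flows
print(f"per-flow fairness:        {per_flow:,.0f} Mbit/s to the student (95%)")

# Hierarchical fairness: divide per institution first; whatever the student
# does, he cannot exceed his institution's 1/10 share.
per_institution = capacity / 10
print(f"per-institution fairness: {per_institution:,.0f} Mbit/s ceiling (10%)")
```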
> It's fun to watch network engineers' heads explode.
> What if the person paying the bill isn't party to either side of the
> TCP session?
The person paying the bill is frequently not a party to either side of
individual TCP sessions; that is why you also frequently have disputes
over which TCP sessions should experience what level of congestion.
The big problem with this line of reasoning is that the student isn't visible at the network layer; at most, the IP address s/he is using is visible. If the student has an account at each of the universities, s/he might be using all of them simultaneously. To the network, at most we can say that some number of IP addresses were generating a lot of traffic.
One can do "interesting" things in the network in terms of scheduling capacity. My ISP does that in front of my home; they configure my cable modem to shape my traffic up and down so that it doesn't exceed certain rates, and lo and behold my family's combined traffic doesn't exceed those rates. One could similarly do such things on a per-address or per-port basis in an enterprise network. That's where the discussion of per-address WFQ came from a decade ago - without having to configure each system's capabilities, make the systems using a constrained interface share it in some semi-rational manner automatically. That kind of thing is actually a lot harder on the end system; end systems don't talk with each other about such things. Can that be defeated? Of course; use a different IP address for each BitTorrent TCP session, for example. Will it be defeated widely in practice? My guess is "probably not on a widespread basis". That kind of statement might fall in the same category as "640K is enough", though.
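The cable-modem shaping mentioned above is typically some variant of a token bucket. Here is a minimal sketch; the class name and the rate and burst numbers are invented for illustration:

```python
import time

class TokenBucket:
    """Shape traffic so the long-run rate never exceeds `rate_bps`,
    while allowing bursts of up to `burst_bytes`."""

    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0          # token refill, bytes per second
        self.burst = burst_bytes
        self.tokens = float(burst_bytes)    # start with a full bucket
        self.last = time.monotonic()

    def allow(self, pkt_len):
        """True if the packet may be sent now; False means queue or drop."""
        now = time.monotonic()
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if pkt_len <= self.tokens:
            self.tokens -= pkt_len
            return True
        return False

# e.g. a 20 Mbit/s cap with a 64 KB burst allowance (numbers made up)
shaper = TokenBucket(rate_bps=20_000_000, burst_bytes=64_000)
print(shaper.allow(1500))   # True until the burst allowance is spent
```

Per-address WFQ is the same idea turned around: instead of a fixed cap per subscriber, hash on the source address and let backlogged addresses split the constrained interface evenly, much like the DRR sketch earlier in the thread but with the address rather than the 5-tuple as the flow key.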
Can you describe for me what problem you would really like solved? Are you saying, for example, that BitTorrent and similar applications should be constrained in some way so that the many TCPs from one system typically get no more bandwidth than the single TCP on the system next door? Or are you really trying to build constraints on a per-user basis?
Well, if you're being DDoSed at 1 gigabit/s, you'll use more resources in the backbone than most, by some definition of "you".
So my take is that this is impossible to solve in the core: routers can't keep track of individual conversations and act on them, and doing so would increase cost and complexity enormously.
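A rough sense of the scale involved, with every number below an assumption chosen only for illustration:

```python
# Per-conversation state in a backbone router, back of the envelope.
flows_per_link = 1_000_000     # concurrent 5-tuples on one busy 10G link
bytes_per_flow = 64            # key, counters, timers, queue pointer
links = 32

state_bytes = flows_per_link * bytes_per_flow * links
print(f"{state_bytes / 2**30:.1f} GiB of fast-path memory just to remember flows")

# And that memory is touched on every packet, at line rate:
pps = 10e9 / (8 * 500)         # 10 Gbit/s of 500-byte packets
print(f"{pps / 1e6:.1f}M state lookups/updates per second, per link")
```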