[lisp] Anybody can participate in the IETF (Was: Why is IPv6 broken?)

> Let me make sure I understand your point here. You don't seem to be
> disagreeing with the assertion that for most sites (even things like
> very large universities, etc.), their 'working set' (of nodes they
> communicate with) will be much smaller than the network as a whole?

Why would you assume this to be true if LISP also promises to make
multi-homing end-sites cheaper and easier, and independent of the
ISP's willingness to provide BGP without extra cost? You see, if
every SOHO network and "power user" can suddenly become multi-homed
without spending a great deal of money on a powerful router and ISP
services which support BGP, many of these networks will do so.

The working sets of a scaled-up, LISP future will make the BGP DFZ of
today look small.

> So only the very largest content providers (YouTube, etc.) will have
> 'working sets' which include a fairly large share of the entire Internet?

No, any end-site of interest to a DoS attacker must be able to deal
with a working set which includes the entire Internet. The reason is
obvious: forcing cache misses will be the best way to attack a LISP
infrastructure, and it will not be difficult for attackers to send
packets where each packet's source address appears to come from a
different mapping-system entry.
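To make the churn argument concrete, here is a toy simulation. None of this is real LISP code; the `MapCache` class, the cache size, the working-set size, and the 10:1 spoofed-to-legitimate traffic mix are all invented for illustration. It sketches an LRU map-cache that performs well when it only sees a site's normal working set, then has lookups for random (spoofed) mapping entries interleaved:

```python
# Toy sketch: why spoofed source addresses churn a map-cache sized
# for a site's normal working set. All sizes and rates are made up.
import random
from collections import OrderedDict

class MapCache:
    """Minimal LRU cache; keys stand in for EID-prefixes."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()
        self.hits = self.misses = 0

    def lookup(self, eid):
        if eid in self.entries:
            self.entries.move_to_end(eid)   # refresh LRU position
            self.hits += 1
        else:
            self.misses += 1                # would trigger a mapping-system query
            self.entries[eid] = True
            if len(self.entries) > self.capacity:
                self.entries.popitem(last=False)  # evict least-recently-used

random.seed(1)
working_set = [f"site-{i}" for i in range(500)]

# Legitimate traffic alone: the working set fits, so the hit rate is high.
quiet = MapCache(capacity=1000)
for _ in range(20000):
    quiet.lookup(random.choice(working_set))

# Same legitimate traffic with spoofed packets interleaved, each one
# appearing to come from a different mapping entry: legitimate entries
# are evicted before they can be reused.
attacked = MapCache(capacity=1000)
legit_hits = legit_misses = 0
for _ in range(20000):
    eid = random.choice(working_set)
    if eid in attacked.entries:
        legit_hits += 1
    else:
        legit_misses += 1
    attacked.lookup(eid)
    for _ in range(10):                     # 10 spoofed packets per real one
        attacked.lookup(f"spoofed-{random.randrange(10**9)}")

print("hit rate, no attack :", quiet.hits / (quiet.hits + quiet.misses))
print("hit rate, under DoS :", legit_hits / (legit_hits + legit_misses))
```

The point is not the exact numbers, which depend entirely on the invented parameters, but that the attacker does not need to overwhelm link capacity; a modest flood of unique-looking sources is enough to turn a cache built for a small working set into one that misses constantly.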

Some people have commented that LISP hopes to prevent source address
spoofing through technical means that have not been fully explored.
This is a good goal, but it requires the ETR doing address validation
to look up state from the mapping system. It will therefore have the
same cache churn problem as an ITR subject to a reflection attack (or
an outbound DoS flow meant to disable that ITR).

So there is no practical means of doing source address validation on
ETRs (under DoS). Even if you could do it, the ITR would still be
subject to the occasional large flow of outbound traffic from a
compromised host (dorm machine, open wireless, hacked server, etc.)
which is intended to disable the ITR.

> I have previously commented that such sites have lots of specialized
> infrastructure to handle their traffic loads - do you think it will be
> infeasible for them to have specialized LISP infrastructure too? (Leaving
> aside for a moment what that infrastructure would look like - it's not
> necessarily separate hardware; it might be integrated into existing boxes
> on the periphery of their site.)

Again, every content shop will need to have that specialized
infrastructure. Every site that someone might have a motive to launch
a DoS attack against must be able to withstand at least trivial DoS.
If you think only the super-huge sites will have a large working set,
you are again ignoring DoS attacks.

The same is true of ISP subscriber access platforms. If my ISP's BRAS
effectively goes down regularly, I won't keep that ISP's service for
long; I'll change to a competitor. The more subscribers on one BRAS,
the more likely it will receive frequent DoS attacks.

So in reality, the cache size needed to achieve a high hit rate in the
common case really does not matter, unless you wish to ignore DoS
(which you seem to want to do very badly).