The tragedy of the commons exists because there is a limited resource, an
incentive to do the wrong thing, and disincentives to do the right thing.
Until there are disincentives to do the wrong thing (e.g., filtering
routes, applying a charge to routes in the DFZ to encourage aggregation,
etc.), incentives to do the right thing, and/or the limitations in the DFZ
are removed, you _will_ get a tragedy of the commons.
Rgds,
-drc
Speaking only for myself
The limited resource is the fixed upper bound on numbers. There are
concerns with the current technological limitations on management of the
route table and with the weaknesses in the current routing assumptions. As
friend Bush has indicated, in the IRTF and in the IETF much thought is
being given to how to migrate from BGP to something new. Perhaps
hierarchical routing itself is flawed and we need something new. This
problem is not new.
As a data point, I would ask those who are allowed to participate in the
design discussions, and are willing to be active in them, to take this
request into those discussions. I would like to see the routing system
support 2^32 entries in the "DFZ" (whatever that is...
Your comments regarding hierarchical routing may well be valid, I
believe, especially considering that the whole subject of this thread is
just one of the drivers inevitably flattening the Internet topology.
But look at what's going on in MANET, for example (LANMAR being
particularly amazing). This at least suggests that some scenarios may
exist where you have neither strictly hierarchical routing nor 2^32
entries in the "DFZ".
4 billion routes is not impossible, although I don't think one out of
every two people on the entire planet is going to multihome. 100 million
seems more reasonable. In either case, this means we have to find a
completely new way to look at routes. The current paradigm is that every
route is very important, so we should store as much information about it
as possible. This will have to change. If we remove all non-essential
information from a route, we finally arrive at the single thing that must
always be encoded for each route individually: whether it is reachable or
not. If we assign a bit of memory to every possible route, it is possible
to store the reachability state of the entire Internet as /24s in just two
megabytes. Or as individual /32s in 512 MB.
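To make that arithmetic concrete, here is a minimal sketch (my own
illustration in Python, nothing proposed in this thread) of a
one-bit-per-/24 reachability table; the bytearray comes out to exactly
the two megabytes mentioned above:

import ipaddress

class ReachabilityBitmap:
    # One reachability bit per /24: 2**24 bits = 2 MiB total.
    def __init__(self):
        self.bits = bytearray(2 ** 24 // 8)  # 2,097,152 bytes, all zero

    def _index(self, prefix: str) -> int:
        # A /24's bit index is the top 24 bits of its network address.
        net = ipaddress.ip_network(prefix)
        return int(net.network_address) >> 8

    def set_reachable(self, prefix: str, reachable: bool = True) -> None:
        byte, bit = divmod(self._index(prefix), 8)
        if reachable:
            self.bits[byte] |= 1 << bit
        else:
            self.bits[byte] &= ~(1 << bit)

    def is_reachable(self, prefix: str) -> bool:
        byte, bit = divmod(self._index(prefix), 8)
        return bool(self.bits[byte] & (1 << bit))

table = ReachabilityBitmap()
table.set_reachable("192.0.2.0/24")
print(table.is_reachable("192.0.2.0/24"))  # True
print(len(table.bits))                     # 2097152 bytes = 2 MB

The same structure with one bit per /32 is 2**32 bits, i.e. the 512 MB
figure.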
Obviously, a lot of work has to be done to apply this to the real world.
An idea would be to assign /16s to geographic areas. Each ISP that has
customers in that area would announce the /16, just like they would do
now, but with an attached bitmap that indicates for which /24s this
announcement is valid and for which it isn't. Ten ISPs in one area would
each announce the /16 with their own 256-bit bitmap, so just 10 routes end
up in the default-free zone instead of 500.
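Here is a hypothetical sketch of that announcement format (again Python;
the names and encoding are made up for illustration, and BGP has no such
bitmap attribute today). Each /16 announcement carries 32 bytes marking
which of its 256 /24s the announcing ISP actually serves:

import ipaddress

class GeoAnnouncement:
    # A /16 announcement plus a 256-bit map of the /24s it covers.
    def __init__(self, prefix16: str):
        self.net = ipaddress.ip_network(prefix16)
        assert self.net.prefixlen == 16
        self.bitmap = bytearray(256 // 8)  # one bit per /24, 32 bytes

    def add_slash24(self, prefix24: str) -> None:
        sub = ipaddress.ip_network(prefix24)
        i = (int(sub.network_address) >> 8) & 0xFF  # third octet
        self.bitmap[i // 8] |= 1 << (i % 8)

    def covers(self, addr: str) -> bool:
        a = ipaddress.ip_address(addr)
        if a not in self.net:
            return False
        i = (int(a) >> 8) & 0xFF
        return bool(self.bitmap[i // 8] & (1 << (i % 8)))

ann = GeoAnnouncement("203.0.0.0/16")
ann.add_slash24("203.0.5.0/24")
print(ann.covers("203.0.5.1"))  # True: this ISP serves that /24
print(ann.covers("203.0.6.1"))  # False: another ISP's bitmap covers it

Ten such announcements per area add 32 bytes of bitmap each to the /16
route, rather than hundreds of separate more-specific /24 routes.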