Scott Huddle <firstname.lastname@example.org> writes:
> This is yet another problem/opportunity, in that providers want/need
> to implement policies (such as no default, or only accept traffic
> from the following ASs, or only accept traffic for the following ASs,
> or consistent announcement) that are difficult to enforce and/or
> automatically measure/detect with today's technology.
> Router/switch vendors and potential interconnect
> providers take note!
In fact, OFRV has taken note.
Remember that rate limiting exists and is workable.
You should be able to rate limit now based on MAC address.
Yakov Rekhter and others correctly observed some time ago
that it is possible to forge MAC addresses to get around
that. Enter the wonderful world of mixing RPF with rate limiting.
Gee, that IP address shouldn't have come from you, so I'll
toss it through the "not in profile" rule.
Then you either play fascist and drop not-in-profile
traffic, or you play Dave Clark and set off alarm bells and
drop any such traffic only if there is congestion.
I actually like the latter approach somewhat more (go figure).
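A toy sketch of that decision, strict RPF feeding the in/out-of-profile choice. The interface names, prefixes, and the congestion signal are all hypothetical, not anyone's actual implementation:

```python
# Strict-RPF classification with two out-of-profile policies:
# strict=True drops all out-of-profile traffic ("fascist"),
# strict=False forwards it unless the link is congested (Dave Clark style).
import ipaddress

# Reverse-path table: which source prefixes we expect on each interface.
rpf_table = {
    "fddi0": [ipaddress.ip_network("192.0.2.0/24")],      # peer A
    "fddi1": [ipaddress.ip_network("198.51.100.0/24")],   # peer B
}

def rpf_check(iface, src_ip):
    """True if src_ip is expected to arrive on iface."""
    addr = ipaddress.ip_address(src_ip)
    return any(addr in net for net in rpf_table.get(iface, []))

def classify(iface, src_ip, congested, strict=False):
    """Return 'forward' or 'drop' for one packet."""
    if rpf_check(iface, src_ip):
        return "forward"
    # That IP address shouldn't have come from you: not in profile.
    if strict or congested:
        return "drop"
    return "forward"   # tolerated while there is no congestion
```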
My implementation idea was to give any particular peer at
a public exchange point a particular amount of "free"
bandwidth, perhaps 1.5Mbps or so across an FDDI, and
declare anything more than that "out of profile", and pass
the traffic through a queue against which one would apply
aggressive WRED, killing off anything that didn't look like
well-behaved TCP. Then if there was still lots of
traffic, one could begin negotiating terms for increasing
the "floor" bandwidth.
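A minimal token-bucket sketch of that "free bandwidth floor". The rate, burst depth, and labels are illustrative; a real box would hand out-of-profile packets to the aggressively WRED-managed queue rather than just tagging them:

```python
# Token bucket policing one peer at ~1.5 Mb/s of "free" bandwidth.
class TokenBucket:
    def __init__(self, rate_bps=1_500_000, burst_bytes=32_000):
        self.rate = rate_bps / 8.0      # refill rate in bytes/second
        self.burst = burst_bytes        # maximum bucket depth in bytes
        self.tokens = burst_bytes
        self.last = 0.0                 # timestamp of the previous packet

    def offer(self, now, size_bytes):
        """Classify one packet of size_bytes arriving at time now."""
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size_bytes:
            self.tokens -= size_bytes
            return "in-profile"
        return "out-of-profile"   # candidate for the aggressive-WRED queue
```

A burst well past the floor spills into "out-of-profile", which is where the negotiation over raising the floor would begin.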
This has several advantages. Firstly, you can avoid
completely or at least minimize the effects of overbooking
your connection to any given public interconnect, no
matter what the implementation is. (This is effectively
like using an ATM fabric and carefully laying out VCs,
only it probably works better.) Secondly, a tariff
for bandwidth settlements is much easier when you can
actually limit the bandwidth.
I believe more than one provider is doing this now. (We are.)
Moreover the rate limiting code is rather clever in that
you can apply different dampening for different traffic
profiles. You essentially want to be able to do something like:
-- if destination is me, use "peer" profile
-- if destination is a peer of mine, use "transit"
where the peer and transit profiles might be:
-- pass only N bits/second, M packets/second
with varying values for N and M, and likely higher prices
for higher values of N and M in the transit profile and
possibly in the peer profile.
Answering the questions, "is this for me" vs "is this for
a peer" is apparently about to become much easier.
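That two-way classification can be sketched roughly as follows. All prefixes and profile numbers are invented for illustration, with likely higher prices for higher N and M:

```python
# Pick a rate-limit profile by destination: my own prefixes get the
# "peer" profile, my peers' prefixes get the "transit" profile.
import ipaddress

my_prefixes   = [ipaddress.ip_network("203.0.113.0/24")]
peer_prefixes = [ipaddress.ip_network("198.51.100.0/24")]

profiles = {
    # N bits/second and M packets/second, varying per profile
    "peer":    {"bits_per_sec": 10_000_000, "pkts_per_sec": 5_000},
    "transit": {"bits_per_sec":  1_500_000, "pkts_per_sec": 1_000},
}

def profile_for(dst_ip):
    addr = ipaddress.ip_address(dst_ip)
    if any(addr in net for net in my_prefixes):
        return "peer"       # destination is me
    if any(addr in net for net in peer_prefixes):
        return "transit"    # destination is a peer of mine
    return "drop"           # not for me, not for a peer: no profile at all
```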
A long time ago I thought this was pretty clever; now I
think this kind of rate limiting is absolutely essential.
> a senior engineer at a well-known provider just pointed out to us that
> a weenie provider at mae-east was
> o not rewriting next-hop
> o sending our routes to others
> o sending others' routes to us
> o likely pointing default at us
Gee, this sounds awfully familiar...
So, the "others" you don't peer with who are sending you
traffic fall into the "drop" profile based on MAC address
or RPF or some other mechanism (offline talk if you like).
This was the original impetus for suggesting rate limiting
to Fred Baker (to whom I am eternally indebted for having
gotten it implemented pretty quickly), since this sort of
thing was happening to Sprint quite a lot...
> when the larger providers decline to peer with the smaller, there is a sad
> reason. traceroute -g is your friend.
That too. It is particularly enlightening to traceroute
-g towards a network that is not in anyone's routing
tables, to see if you are really being defaulted towards,
or if someone is simply providing transit and exploiting
the next-hop-is-on-the-same-LIS mechanism.
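A minimal sketch of that check (the hostname and the 192.0.2.1 target are hypothetical; the point is that the target sits in a prefix announced in no one's routing tables):

```
# Loose-source-route the probes through the suspect's router at the exchange:
traceroute -g router.suspect.example 192.0.2.1
```

If the probes keep moving past that gateway anyway, someone is either pointing default at you or leaning on next-hop-on-the-same-LIS behavior to get free transit.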