IP cannot provide reliability on the public Internet

The persistent furor over route filtering does make a depressing point:
in general, IP cannot be used to provide reliability or load distribution
over the public Internet.

It seems that every significant ISP filters routes more aggressively than
correctness requires. The cumulative effect is that the reliability features
inherent in IP and dynamic routing protocols cannot be enjoyed by the
Routing Underprivileged -- those organizations that are small enough (or
frugal enough) to live with a small IP space.
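To make the mechanism concrete, here is a minimal sketch (in Python, with hypothetical prefixes and AS numbers) of the kind of prefix-length filter in question: any announcement more specific than a chosen cutoff is simply dropped, so a small site's more-specific route never propagates, and with it goes the redundancy it was meant to provide.

```python
import ipaddress

# Hypothetical BGP announcements: (prefix, origin AS).
announcements = [
    ("203.0.113.0/24", 64500),    # fits under a common /24 cutoff
    ("198.51.100.192/26", 64501), # a small site's more-specific route
]

MAX_PREFIX_LEN = 24  # typical policy: ignore anything longer than /24

def accepted(routes, max_len=MAX_PREFIX_LEN):
    """Keep only routes whose prefix length does not exceed max_len."""
    return [(prefix, asn) for prefix, asn in routes
            if ipaddress.ip_network(prefix).prefixlen <= max_len]

# The /26 announcement is silently discarded by the filter.
print(accepted(announcements))
```

This is only an illustration of the policy's effect, not of how any particular router implements it; real filters are expressed in vendor configuration languages, not application code.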

And there is no real, fundamental reason for this. The practice is so
common that it doesn't seem to get its fair share of critical analysis.
(Just when *will* routers be capable?)

From my perspective as a developer, the widespread long-prefix filtering
undermines the value of much of the original work done to develop the
Internet. Problems that have already been solved at the IP layer have
to be solved again at a higher layer; we're likely to find ourselves
working in a soup of protocols where a problem may be solved well at one
layer, or kludged poorly at several other layers, and in which we're
forced to kludge poorly because of antiquated ideas of propriety.

I welcome comments.

The overall effect is a lot of extra packet movement in the Internet
(rather like _thermal_ motion): data packets usually do reach their
destination, but they travel by very strange paths.

All those who ran these experiments (I am not speaking of the similar
over-Internet filtering done in accordance with the RFCs) must understand
one truth: such filtering disturbs normal Internet functionality and
should be avoided where possible. I understand the reasons they filter
this way, but unless the extra prefixes in the routing table were actually
causing them trouble, they would do better not to filter. At the very
least, they should try to establish an RFC first.

Alex R.