IAB concerns against permanent deployment of edge-based filtering

> I think the IAB has a legitimate point.
>

  perhaps. but last I checked, it was the Internet Architecture Board,
  not the Internet Operations Board. So from an architectural purity
  perspective, sure, don't filter (and by extension, pull out firewalls
  and NATs.... :)

> There is a real danger that long-term continued blocking will lead
> to "everything on one port"

  fair amount of handwaving there.

  prudent/paranoid folk over the years have persuaded me that
  it makes the best sense to only run those applications/services
  that I need to and shut off everything else - until/unless there
  is a demonstrated need for it.

--bill

Question: Why was RFC3093 published? (Think(*) for a bit here...)

About a month later, there was a *major* flame-fest on the IETF list due to
this message:

http://www.ietf.org/mail-archive/ietf/Current/msg11918.html

Yes, the basic reason for this proposal was that many firewalls pass
HTTP but not BEEP.

What major P2P applications have included a "run over port 80" option to let
themselves through firewalls?

It's not just handwaving.
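
To make that concrete, here is a minimal Python sketch of the "run over
port 80" trick. Nothing about TCP port 80 obliges the traffic to be
HTTP, so a port-based filter that permits 80 passes whatever protocol
the peers choose to speak there (the banner string below is a made-up
stand-in, not any real P2P handshake):

    import socket

    # Serve an arbitrary, non-HTTP protocol on TCP port 80. A filter
    # that permits port 80 by number passes this traffic unexamined.
    # (Binding below port 1024 requires privileges.)
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("0.0.0.0", 80))
    srv.listen(1)
    conn, addr = srv.accept()
    conn.sendall(b"HELLO-PEER/1.0\n")   # plainly not HTTP
    conn.close()
    srv.close()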

(*) Remember - satire isn't funny if it isn't about something recognizable...

Date: Sat, 18 Oct 2003 11:14:42 -0700 (PDT)
From: bmanning@...

> perhaps. but last I checked, it was the Internet Architecture Board,
> not the Internet Operations Board. So from an architectural purity
> perspective, sure, don't filter (and by extension, pull out firewalls
> and NATs.... :)

Ports < 1024 are "privileged" and tend not to be used as a source
port for outgoing packets. This in turn affects packet filters.
Life might be easier if a port range had been reserved for
passive FTP connections.
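
As a small illustration of that source-port behavior (a sketch using
only Python's standard socket module): an unprivileged client that lets
the OS choose its source port lands in the ephemeral range, which is
exactly the split packet filters key on.

    import socket

    # Binding to port 0 asks the OS for an ephemeral source port
    # (>= 1024 on common systems); ports below 1024 are reserved
    # for privileged services, and packet filters rely on that split.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind(("127.0.0.1", 0))       # port 0: let the OS pick
    print(s.getsockname()[1])      # typically >= 1024
    s.close()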

It would seem architecture and operations are at least somewhat
coupled. Should there not be interaction between the two?

"Here is what we built; deal with it!" doesn't appeal to me.
(Judging from the wildcard threads, it doesn't seem to appeal to
others, either.) I'd like the arch folks to listen to the ops
crowd, and I see no reason why it shouldn't go the other way too.

Eddy

Valdis hits the nail on the head. And this boils down to something I believe is attributable to someone commenting on the old FSP protocol, perhaps Erik Fair:

    The Internet routes around damage.

Damage can take the form of a broken link, or it can take the form of an access-list. In the early '90s, NASA attempted to protect its links from "unauthorized use" (which in this particular case was porn). That caused a whole protocol to be developed (proving the old adage). Well, nowadays you don't even need to build a whole protocol - you can just use HTTP.

And that was the point of Keith's & Ned's RFC on HTTP as a substrate: excessive restrictions in firewalls bring about this use, which makes HTTP implementations fairly complex and subverts the intentions of network administrators.
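
As a hedged illustration of "just use HTTP", here is a minimal Python
sketch of the substrate pattern using only the standard library; the
host example.net and the /tunnel path are hypothetical stand-ins for
the far end of such a tunnel:

    import http.client

    # Wrap arbitrary non-HTTP payload bytes inside an ordinary HTTP
    # POST. To a port/protocol filter this is just web traffic; a
    # cooperating endpoint unwraps and forwards the inner protocol.
    payload = b"\x00 arbitrary inner-protocol bytes \x00"
    conn = http.client.HTTPConnection("example.net", 80)  # hypothetical host
    conn.request("POST", "/tunnel", body=payload,         # hypothetical path
                 headers={"Content-Type": "application/octet-stream"})
    inner = conn.getresponse().read()   # the far end's unwrapped reply
    conn.close()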

So as a temporary measure during an active attack, access-lists make sense. Over the long haul, however, unless you're going to block downstream TCP packets with only SYN set and ALL OTHER TRAFFIC, IP can run over HTTP anyway.
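
(For clarity on what "TCP packets with only SYN set" means in that
filter rule, a toy Python check over a raw TCP header, assuming only
the standard flag layout:)

    # Byte 13 of a TCP header holds the flag bits; a bare SYN (0x02)
    # marks a new inbound connection attempt, while e.g. SYN+ACK
    # (0x12) belongs to a handshake someone inside initiated.
    SYN = 0x02

    def syn_only(tcp_header: bytes) -> bool:
        return tcp_header[13] == SYN   # SYN set and nothing else

    assert syn_only(bytes(13) + b"\x02" + bytes(6))        # bare SYN
    assert not syn_only(bytes(13) + b"\x12" + bytes(6))    # SYN+ACK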

> prudent/paranoid folk over the years have persuaded me that
> it makes the best sense to only run those applications/services
> that I need to and shut off everything else - until/unless there
> is a demonstrated need for it.

very true for a host, even somewhat true for a site. very untrue
for a backbone.

randy