those who do not understand end-to-end are doomed to reimplement it,
poorly.
End-to-end requires that people writing the software at the end learn about
buffer overruns (and other data-driven access violations) or program using
tools that prevent such things. It is otherwise an excellent idea.
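To make the point concrete, here is a minimal, purely illustrative C sketch (not from the original message) of the kind of overrun being described, next to a bounded version of the same copy:

#include <stdio.h>
#include <string.h>

/* Illustrative only: the classic unchecked copy the poster is alluding to.
 * Copying attacker-controlled input into a fixed-size buffer without a
 * length check lets the input overwrite adjacent memory. */
void vulnerable(const char *input)
{
    char buf[16];
    strcpy(buf, input);          /* no bounds check: overruns buf if input is >= 16 bytes */
    printf("%s\n", buf);
}

/* Bounded version: truncates instead of overrunning. */
void safer(const char *input)
{
    char buf[16];
    strncpy(buf, input, sizeof(buf) - 1);
    buf[sizeof(buf) - 1] = '\0'; /* strncpy does not guarantee termination */
    printf("%s\n", buf);
}

int main(void)
{
    safer("a string much longer than sixteen bytes");
    return 0;
}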
Unfortunately, the day that someone decided their poorly-designed machine
and operating system would be safer sitting behind a "firewall" pretty much
marked the end of universal end-to-end connectivity, and I don't see it
coming back for a long long time. Probably not on this Internet. IPv6 or
not.
Combine that with ISP pricing models (helped by registry policy) that
encourage <=1 IP address per household, and the subsequent boom in NAT
boxes, and the fate is probably sealed.
Matthew Kaufman
matthew@eeph.com
End-to-end requires that people writing the software at the end learn about
buffer overruns (and other data-driven access violations) or program using
tools that prevent such things. It is otherwise an excellent idea.
A lack of end-to-end just obscures the problem; it does not remove it.
Host-based firewalling or tcpwrapper-style approaches address this point
just as well, if not much better, at least if systems shipped with them
defaulting to DENY.
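For illustration, here is a rough, hypothetical C sketch of what a tcpwrapper-style, default-DENY check looks like when done at the application level; the port number and allow-list addresses are made up, and error handling is omitted:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Hypothetical allow-list; any peer not listed here is denied. */
static const char *allowed[] = { "192.0.2.10", "192.0.2.11" };

/* Default-DENY: return 1 only if the peer appears on the allow list. */
static int peer_allowed(const struct sockaddr_in *peer)
{
    char addr[INET_ADDRSTRLEN];
    inet_ntop(AF_INET, &peer->sin_addr, addr, sizeof(addr));
    for (size_t i = 0; i < sizeof(allowed) / sizeof(allowed[0]); i++)
        if (strcmp(addr, allowed[i]) == 0)
            return 1;
    return 0;
}

int main(void)
{
    int srv = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in local = { .sin_family = AF_INET,
                                 .sin_port = htons(7777),
                                 .sin_addr.s_addr = htonl(INADDR_ANY) };
    bind(srv, (struct sockaddr *)&local, sizeof(local));
    listen(srv, 8);

    for (;;) {
        struct sockaddr_in peer;
        socklen_t len = sizeof(peer);
        int conn = accept(srv, (struct sockaddr *)&peer, &len);
        if (conn < 0)
            continue;
        if (!peer_allowed(&peer)) {   /* the default branch is DENY */
            close(conn);
            continue;
        }
        write(conn, "hello\n", 6);    /* only allow-listed peers get service */
        close(conn);
    }
}

The only point of the sketch is that the fall-through behaviour is DENY; in practice this check belongs in tcpd/inetd or the host's packet filter rather than in every application.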
Unfortunately, the day that someone decided their poorly-designed machine
and operating system would be safer sitting behind a "firewall" pretty much
marked the end of universal end-to-end connectivity, and I don't see it
coming back for a long long time. Probably not on this Internet. IPv6 or
not.
Unfortunate exceptions to the correct design methodology are not an
acceptable reason to ignore the correct solution.
Most NAT workaround methods currently used by applications fail horribly
when both endpoints are behind a NAT, so we are only beginning to feel the
initial impact of our slightly broken end-to-end reality. Imagine an
internet that never had end-to-end as a goal... where 'circuits' had to be
manually provisioned across multiple carriers' networks... where address
translation happens at every intra-agency link.
Oh wait, we call that the public switched telephone network... and I'm
sure we're all already well aware of the amount of innovation that
infrastructure affords us, and of its highly economical pricing model as well!
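The workaround alluded to above is usually UDP "hole punching": both peers learn each other's public address out of band (for example via a rendezvous server) and fire datagrams at it at roughly the same time. This only works when both NATs keep a stable, reusable mapping (cone-style behaviour); symmetric NATs defeat it. A rough, hypothetical C sketch, with made-up addresses and ports:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    /* Hypothetical peer public address, learned out of band; both sides
     * run this at roughly the same time. */
    const char *peer_ip = "198.51.100.7";
    unsigned short peer_port = 40000;

    int s = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in local = { .sin_family = AF_INET,
                                 .sin_port = htons(40000),
                                 .sin_addr.s_addr = htonl(INADDR_ANY) };
    bind(s, (struct sockaddr *)&local, sizeof(local));

    struct sockaddr_in peer = { .sin_family = AF_INET,
                                .sin_port = htons(peer_port) };
    inet_pton(AF_INET, peer_ip, &peer.sin_addr);

    /* Outbound datagrams create a mapping (a "hole") in our own NAT; if the
     * peer does the same, its later packets match that mapping and are let
     * through.  With symmetric NATs the mapping depends on a remote
     * address/port we cannot predict, and the trick fails. */
    for (int i = 0; i < 5; i++) {
        sendto(s, "punch", 5, 0, (struct sockaddr *)&peer, sizeof(peer));
        sleep(1);
    }

    char buf[64];
    ssize_t n = recvfrom(s, buf, sizeof(buf) - 1, 0, NULL, NULL);
    if (n > 0) {
        buf[n] = '\0';
        printf("got \"%s\" from peer: hole punched\n", buf);
    }
    close(s);
    return 0;
}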
I think it's amusing that I see the largest arguments against end-to-end
coming from people who ran the networks that the end-to-end internet made
largely obsolete.
Combine that with ISP pricing models (helped by registry policy) that
encourage <=1 IP address per household, and the subsequent boom in NAT
boxes, and the fate is probably sealed.
ISPs simply respond to demand. We're all market whores on this list. Where
there is a competitive advantage to offering multiple IPs per customer, the
ISPs will provide it: we already see this in highly competitive markets.
Providing many IPs per household requires no major change to
infrastructure; it's simply a policy decision.
Matthew Kaufman wrote:
End-to-end requires that people writing the software at the end learn about
buffer overruns (and other data-driven access violations) or program using
tools that prevent such things. It is otherwise an excellent idea.
There is supposedly some magic going into this in the next "Service Pack" of the aforementioned
major exploding Pinto. Not sure if it's just flipping the joke of a firewall on by default or something
more comprehensive/destructive like a non-executable stack. Or a completely new invention, like
bug-free code.
Unfortunately, the day that someone decided their poorly-designed machine
and operating system would be safer sitting behind a "firewall" pretty much
marked the end of universal end-to-end connectivity, and I don't see it
coming back for a long long time. Probably not on this Internet. IPv6 or
not.
Last I checked, most "firewalls" don't make these machines safe; they might make them safer,
so that only two out of three pieces of malware hit them. That does not really help much.
Combine that with ISP pricing models (helped by registry policy) that
encourage <=1 IP address per household, and the subsequent boom in NAT
boxes, and the fate is probably sealed.
Here I've observed the opposite trend: most ISPs are getting rid of NAT because it's failure-prone
and expensive to implement and keep running.
Pete
I think that if program design criteria changed toward coding secure applications, then
the problem would be reduced dramatically.
-Henry
and operating system would be safer sitting behind a "firewall"
pretty much marked the end of universal end-to-end connectivity,
and I don't see it
An OS-level (software) firewall doesn't preclude end-to-end connectivity,
and even a per-machine hardware firewall doesn't, given that it can pass inbound
traffic through. Most servers on the Internet are also behind hardware
firewalls, and they don't hinder end-to-end connectivity.
Last I checked, most "firewalls" don't make these machines safe; they
might make them safer, so that only two out of three pieces of malware hit them.
A properly configured firewall will protect a machine against everything but
its user, and therein lies a problem you will likely never solve.
Adam
The real problem is that we have an environment where the malware can figure
out how to disable the firewall but the user can't.
Hopefully, one day software will be good enough that the user won't have to be
smart.
And even clever malware will need some good luck to do its job.
DJ
Date: Tue, 28 Oct 2003 21:51:01 -0500
From: Valdis.Kletnieks@...
The real problem is that we have an environment where the
malware can figure out how to disable the firewall but the user
can't.
And that is part of why the current Internet has so much peer-to-peer
traffic on it.
Eddy