The End-To-End Internet (was Re: Blocking MX query)

It is regularly alleged, on this mailing list, that NAT is bad *because it
violates the end-to-end principle of the Internet*, where each host is a
full-fledged host, able to connect to any other host to perform transactions.

We see it now alleged that the opposite is true: that a laptop, say, like
mine, which runs Linux and postfix, and does not require a smarthost to
deliver mail to a remote server *is a bad actor* *precisely because it does
that* (in attempting to send mail directly to a domain's MX server) *from
behind a NAT router*, and possibly different ones at different times.

I find these conflicting reports very conflicting. Either the end-to-end
principle *is* the Prime Directive... or it is *not*.

Cheers,
-- jra

Just because something is of extremely high importance does not mean it can never be overridden when there is a good enough reason.

In this case, in the majority of "random computer on the internet" IP blocks, the ratio of spambots to legitimate mail senders is so far off balance that a whitelisting approach to allowing outbound port 25 traffic is not unreasonable. Unlike the bad kinds of NAT, this doesn't also indiscriminately block thousands of other uses; it affects only email traffic, in a way that is trivial for the legitimate user to work around while stopping the random infected hosts in their tracks.

Many providers also block traffic on ports like 137 (NetBIOS) in "consumer" space for similar reasons: the malicious or unwanted uses vastly outweigh the legitimate ones.

The reason bad NATs get dumped on is that there are better solutions, both known and available on the market. If you have an idea for a way to allow your laptop to send messages directly while still stopping, or at least minimizing, the ability of the thousands of zombies sharing an ISP with you to do the same, the world would love to hear it.

That's what firewalls *are for*, Jay. They intentionally break
end-to-end for communications classified by the network owner as
undesirable. Whether a particular firewall employs NAT or not is
largely beside the point here. Either way, the firewall is *supposed*
to break some of the end-to-end communication paths.

Regards,
Bill Herrin

From: "Owen DeLong" <owen@delong.com>

I am confused... I don't understand your comment.

It is regularly alleged, on this mailing list, that NAT is bad *because it
violates the end-to-end principle of the Internet*, where each host is a
full-fledged host, able to connect to any other host to perform transactions.

We see it now alleged that the opposite is true: that a laptop, say, like
mine, which runs Linux and postfix, and does not require a smarthost to
deliver mail to a remote server *is a bad actor* *precisely because it does
that* (in attempting to send mail directly to a domain's MX server) *from
behind a NAT router*, and possibly different ones at different times.

I find these conflicting reports very conflicting. Either the end-to-end
principle *is* the Prime Directive... or it is *not*.

The end-to-end design principle pushes application functions to
endpoints instead of placing these functions in the network itself.
This principle requires that endpoints be *capable* of creating
connections to each other. Network system design must support these
connections being initiated by either side - which is where NAT
implementations usually fail.

There is no requirement that all endpoints be *permitted* to connect to
and use any service of any other endpoint. The end-to-end design
principle does not require a complete lack of authentication or
authorization.

I can refuse connections to port 25 on my endpoint (mail server) from
hosts that do not conform to my requirements (e.g. those that do not
have forward-confirmed reverse DNS) without violating the end-to-end
design principle in any way.
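
To make that concrete, here is a minimal sketch of such a
forward-confirmed reverse DNS check in Python (illustrative only, not
any particular MTA's implementation; the helper name is made up):

    import socket

    def has_fcrdns(client_ip):
        # Forward-confirmed reverse DNS: the PTR name for client_ip must
        # resolve back to a set of addresses that contains client_ip.
        try:
            ptr_name, _, _ = socket.gethostbyaddr(client_ip)   # PTR lookup
            forward = socket.getaddrinfo(ptr_name, None)       # A/AAAA lookup
        except (socket.herror, socket.gaierror):
            return False
        return any(info[4][0] == client_ip for info in forward)

    # A mail server's policy hook could then refuse the connection:
    #   if not has_fcrdns(peer_ip): reject at SMTP time

The refusal happens entirely at the endpoint, as an administrative
choice; nothing in the network between the two hosts has to change.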

Thus it is a false chain of conclusions to say that:
- end-to-end is violated by restricting connections to/from certain hosts
[therefore]
- the end-to-end design principle is not important
[therefore]
- NAT is good

...which I believe is the argument that was being made? ...

Ref - http://web.mit.edu/Saltzer/www/publications/endtoend/endtoend.pdf

Correct, Bill.

Hopefully, everyone else here who thinks DNAT is the anti-Christ heard the
"largely beside the point" part of your assertion, with which I agree.

Cheers,
-- jra

The thing that has never sat well with me about ISPs' blanket port 25
blocking is that the fate sharing is not correct. If I have a mail server
and I refuse to take incoming connections from dynamic "home" IP
blocks, the fate sharing is correct: I'm only hurting myself if there's
collateral damage. When ISPs have blanket port 25 blocking, the two
parties to the intended conversation never get a say: things just break
mysteriously as far as both parties are concerned, but the ISP isn't
hurt at all. So they have no incentive to drop their false positive
rate. That's not good.

Mike

Which has had two basic results:

First, we've raised at least two generations of programmers who cannot
write a network-facing service able to stand up to so much as a stiff
breeze. "Well, it's behind the firewall, so no one will be able to see
it."

Second, we've killed -- utterly and completely -- countless promising
technologies and forced the rest to figure out how to either pretend
to be HTTP or tunnel themselves over it. That's just sad.

But even then, we're not talking about an end user choosing not to
permit certain kinds of inbound connectivity. We're talking about
carriers inspecting and selectively interfering with (and in some cases
outright manipulating) communication in transit. That's just plain
wrong.

It is regularly alleged, on this mailing list, that NAT is bad *because it
violates the end-to-end principle of the Internet*, where each host is a
full-fledged host, able to connect to any other host to perform
transactions.

Both true, and NAT inherently breaks the end-to-end principle for all
applications.
Blocking port 25 traffic also breaks the possibility of end-to-end
communications on that one port.

But not for the SMTP protocol. SMTP End-to-End is preserved, as
long as the SMTP relay provided does not introduce further
restrictions.

We see it now alleged that the opposite is true: that a laptop, say, like
mine, which runs Linux and postfix, and does not require a smarthost to
deliver mail to a remote server *is a bad actor* *precisely because it does
that* (in attempting to send mail directly to a domain's MX server) *from
behind a NAT router*, and possibly different ones at different times.

Ding ding ding... behind a NAT router. The end-to-end principle is
already broken.
The 1:many NAT router prevents your host from being specifically
identified in a way that would let abuse be efficiently and adequately
identified, reported, and curtailed; you can't "break" the end-to-end
principle in cases where it has already been broken.

And selectively breaking end-to-end in limited circumstances is OK.
You choose to break it when the damage can be mitigated and the
concerns that demand breaking it are strong enough.

The end-to-end principle as you suggest primarily pertains to the
Internet protocols, IP and TCP. I believe you are trying to apply the
principle in a way that is inappropriate for the layer you are
applying it to.

At the SMTP application layer, end-to-end internet connectivity means
you can send e-mail to any e-mail address and receive e-mail from any
e-mail address. For HTTP, that would mean you can retrieve a page
from any host, and any remote HTTP client can retrieve a page from
your hosts; that doesn't necessarily imply that the transaction will
be allowed, but if it is refused, it is for an administrative
reason, not due to a design flaw.

NAT would fall under design flaw, because it breaks end-to-end
connectivity, such that there is no longer an administrative choice
that can be made to restore it (other than redesign with NAT
removed).

At the transport layer, end-to-end means you can establish connections
on various ports to any peer on the internet, and any peer can connect
to any port you allow. It doesn't necessarily mean that
all ports are allowed; a remote host, or a firewall under their
control, deciding to block your connection is not a violation of
end-to-end.

At the internet layer, end-to-end means you can send any datagram to
any host on the internet and it will be delivered to that host, and
any host can send a datagram to you. It doesn't mean that none of
your packets will be discarded along the way because some specific
application or port has been banned.

At the link layer, there is no end-to-end connectivity; it is at IP
that the notion first arises.

Jimmy Hess wrote:

NAT would fall under design flaw, because it breaks end-to-end
connectivity, such that there is no longer an administrative choice
that can be made to restore it (other than redesign with NAT
removed).

End-to-end transparency can be restored easily, if an
administrator so wishes, with a UPnP-capable NAT and a modified
host transport layer.

That is, the administrator assigns a set of port numbers to a
host behind NAT and sets up port mapping.

  (global IP, global port) <-> (local IP, global port)

then, if the transport layer of the host is modified to perform
the reverse translation (the information for the translation can
be obtained through UPnP):

  (local IP, global port) <-> (global IP, global port)

Now, the NAT is transparent to the application layer.

The remaining restrictions are that only TCP and UDP are supported
by UPnP (see draft-ohta-e2e-nat-00.txt for a specialized NAT box
to allow more general transport layers) and that the set of port
numbers available to the application layer is limited (you may
not be able to run an SMTP server on port 25).
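
To make the mechanics concrete, here is a rough sketch of the
mapping-request half of this scheme, assuming the third-party
miniupnpc Python binding; the port block and description string are
made up, and the modified host transport layer that performs the
reverse translation is not shown:

    # Ask a UPnP IGD to map a block of "global" ports straight through
    # to the same port numbers on this host, as in the
    # (global IP, global port) <-> (local IP, global port) scheme above.
    import miniupnpc

    upnp = miniupnpc.UPnP()
    upnp.discoverdelay = 200              # ms to wait for IGD discovery replies
    upnp.discover()
    upnp.selectigd()

    global_ip = upnp.externalipaddress()
    assigned_ports = range(40000, 40016)  # made-up port block for this host

    for port in assigned_ports:
        # External port == internal port, so the host can hand out
        # (global_ip, port) and the NAT only rewrites the address.
        upnp.addportmapping(port, "TCP", upnp.lanaddr, port,
                            "e2e-nat demo mapping", "")

    print("reachable at %s, ports %d-%d"
          % (global_ip, assigned_ports[0], assigned_ports[-1]))

The reverse translation on the host side is the part that requires the
modified transport layer described above.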

The point of the end to end transparency is:

      The function in question can completely and correctly be
      implemented only with the knowledge and help of the application
      standing at the end points of the communication system.

quoted from "End-To-End Arguments in System Design", the original
paper on the end to end argument written by Saltzer et. al.

Thus,

      The NAT function can completely and correctly be
      implemented with the knowledge and help of the host
      protocol stack.

            Masataka Ohta

How does the *second* host behind the NAT that wants to use
global port 7719 do it?

Jimmy Hess wrote:

NAT would fall under design flaw, because it breaks end-to-end
connectivity, such that there is no longer an administrative choice
that can be made to restore it (other than redesign with NAT
removed).

End-to-end transparency can be restored easily, if an
administrator so wishes, with a UPnP-capable NAT and a modified
host transport layer.

This is every bit as much BS as it was the first 6 times you pushed it.

That is, the administrator assigns a set of port numbers to a
host behind NAT and sets up port mapping.

  (global IP, global port) <-> (local IP, global port)

then, if the transport layer of the host is modified to perform
the reverse translation (the information for the translation can
be obtained through UPnP):

  (local IP, global port) <-> (global IP, global port)

Now, the NAT is transparent to the application layer.

Never mind the fact that all the hosts trying to reach you have no
way to know what port to use.

http://www.foo.com fed into a browser gives the browser no way
to determine that it needs to contact 192.0.200.50 on port 8099
instead of port 80.

The remaining restrictions are that only TCP and UDP are supported
by UPnP (see draft-ohta-e2e-nat-00.txt for a specialized NAT box
to allow more general transport layers) and that the set of port
numbers available to the application layer is limited (you may
not be able to run an SMTP server on port 25).

You're demanding an awful lot of changes to the entire internet to
partially restore IPv4 transparency when the better solution is to deploy
IPv6 and have real full transparency.

The point of the end to end transparency is:

     The function in question can completely and correctly be
     implemented only with the knowledge and help of the application
     standing at the end points of the communication system.

That is one purpose. A more accurate definition of the greater
purpose of end-to-end transparency would be:

An application can expect the datagram to arrive at the remote
destination without any modifications not specified in the basic
protocol requirements (e.g. TTL decrements, MAC layer header
rewrites, reformatting for different lower-layer media, etc.)

An application should be able to expect the layer 3 and above
addressing elements to be unaltered and to be able to provide
"contact me on" style messages in the payload based on its own
local knowledge of its addressing.

quoted from "End-To-End Arguments in System Design", the original
paper on the end to end argument written by Saltzer et. al.

Thus,

     The NAT function can completely and correctly be
     implemented with the knowledge and help of the host
     protocol stack.

            Masataka Ohta

It could be argued, if one considers "contact me on" style messages
to be valid, that the function cannot be completely and correctly
implemented in the presence of NAT.
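
A toy illustration of that failure mode (the payload format here is
hypothetical, not FTP or SIP specifically): a referring host can only
advertise the address its own stack knows about, and behind a NAT
that is the untranslated private address.

    # Toy "contact me on" referral: the host embeds its own address in
    # the application payload -- exactly the pattern that breaks behind
    # NAT, because the stack only knows the private address, not the
    # translated one.
    import json
    import socket

    def local_address():
        # Best-effort discovery of the address the local stack would use
        # to reach the outside world (a private address when behind NAT).
        # A UDP connect() sends no packets; it just selects a source address.
        probe = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        probe.connect(("192.0.2.1", 53))
        addr = probe.getsockname()[0]
        probe.close()
        return addr

    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.bind(("", 0))                 # kernel picks a free port
    listener.listen(1)
    port = listener.getsockname()[1]

    # Behind NAT this advertises something like 192.168.x.x, which a peer
    # on the public Internet cannot reach; with real end-to-end addressing
    # it would simply be the host's globally reachable address.
    print(json.dumps({"contact_me_on": {"ip": local_address(), "port": port}}))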

Moreover, since NAT provides no benefit other than address
compression, and the kind of additional effort on NAT of which you
speak would be a larger development effort than IPv6 at this point,
why bother?

Owen

End-to-end transparency can be restored easily, if an
administrator so wishes, with a UPnP-capable NAT and a modified
host transport layer.

How does the *second* host behind the NAT that wants to use
global port 7719 do it?

In the previous mails, I wrote:

The remaining restrictions are that ...
and that the set of port numbers available to the application layer
is limited (you may not be able to run an SMTP server on port 25).

and Jimmy wrote:

At the transport layer, end-to-end means you can establish connections
on various ports to any peer on the internet, and any peer can connect
to any port you allow. It doesn't necessarily mean that
all ports are allowed; a remote host, or a firewall under their
control, deciding to block your connection is not a violation of
end-to-end.

            Masataka Ohta

Yep.

Owen DeLong wrote:

then, if the transport layer of the host is modified to perform
the reverse translation (the information for the translation can
be obtained through UPnP):

  (local IP, global port) <-> (global IP, global port)

Now, the NAT is transparent to the application layer.

Never mind the fact that all the hosts trying to reach you have no
way to know what port to use.

Quote from <draft-ohta-e2e-nat-00.txt>

   A server port number different from well known ones may be specified
   through mechanisms to specify an address of the server, which is the
   case of URLs.

http://www.foo.com fed into a browser gives the browser no way
to determine that it needs to contact 192.0.200.50 on port 8099
instead of port 80.

See RFC6281 and draft-ohta-urlsrv-00.txt.

But,

  http://www.foo.com:8099

works just fine.
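
Any URL-aware client already copes with an explicit port; for
example, in Python (same example hostname as above):

    from urllib.parse import urlsplit

    for url in ("http://www.foo.com", "http://www.foo.com:8099"):
        parts = urlsplit(url)
        # .port is None when the URL carries no explicit port; HTTP's
        # default of 80 applies in that case.
        print(parts.hostname, parts.port or 80)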

The remaining restrictions are that only TCP and UDP are supported
by UPnP (see draft-ohta-e2e-nat-00.txt for a specialized NAT box
to allow more general transport layers) and that the set of port
numbers available to the application layer is limited (you may
not be able to run an SMTP server on port 25).

You're demanding an awful lot of changes to the entire internet to

All that is necessary is local changes on the end systems of those
who want end-to-end transparency.

There are no changes on the Internet.

This is every bit as much BS as it was the first 6 times you pushed it.

As you love BS so much, you had better read your own mails.

            Masataka Ohta

You're basically redefining the term "end-to-end transparency" to suit your own
agenda. Globally implementing what is basically an application layer protocol
in order to facilitate the functioning of a lower layer protocol (i.e. IPv4)
is patent nonsense. The purpose of each layer is to facilitate the one it
encapsulates, not the other way around.

What you advocate is not end-to-end transparency but point-to-point opacity
hinging on a morass of hacks that constitute little more than an abuse of
existing technologies in an attempt to fulfil an unscalable goal.

Fortunately, it is exactly that fact which makes all of your drafts and
belligerent evangelising utterly pointless; you can continue to make noise and
attempt to argue by redefinition all you like, but the world will continue to
forge ahead with the deployment of IPv6 and the *actual* meaning of the
end-to-end principle will remain as it is.

Regards,
Oliver

Despite my scepticism of the overall project, I find the above claim
a little hard to accept. RFC 2052, which defined SRV as an
experiment, came out in 1996. SRV was moved to the standards track in
2000. I've never heard an argument why it won't work, and we know
that SRV records are sometimes in use. Why couldn't that mechanism be
used more widely?
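
For concreteness, an SRV-aware lookup is only a couple of queries; a
minimal sketch, assuming the third-party dnspython package (the
service and domain names are placeholders):

    import dns.resolver   # third-party: dnspython

    def srv_targets(service, proto, domain):
        # Return (host, port) pairs for a service, ordered by SRV priority
        # (lowest priority value first, heaviest weight first within it).
        answers = dns.resolver.resolve("_%s._%s.%s" % (service, proto, domain),
                                       "SRV")
        records = sorted(answers, key=lambda r: (r.priority, -r.weight))
        return [(str(r.target).rstrip("."), r.port) for r in records]

    # e.g. where does example.org say its submission service lives?
    for host, port in srv_targets("submission", "tcp", "example.org"):
        print("try %s port %d" % (host, port))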

Best,

A

Hi Andrew,

Because the developer of the next killer app knows exactly squat about
the DNS and won't discover anything about the DNS that can't be had
via getaddrinfo() until long after it's too late to redefine the
protocol in terms of seeking SRV records. Leaving SRV out of
getaddrinfo() means that SRVs will be no more than lightly used for
the duration of the current networking API. The last iteration of the
API survived around 20 years of mainstream use, so this one probably
has another 15 to go.

Also, there are efficiency issues associated with seeking SRVs first
and then addresses: the same kind of efficiency issues with reverse
lookups that lead high-volume software like web servers not to do
reverse lookups. But those pale in comparison to the first problem.

Regards,
Bill Herrin

Oh, sure, I get that. One of the problems I've had with the "end to
end NAT" argument is that I can't see how it's any more deployable
than IPv6, for exactly this reason. But the claim upthread
was (I thought) that the application _can't_ know about this stuff,
not that it's hard today. Because of the 20-year problem, I think now
would be an excellent time to start thinking about how to make usable
all those nice features we already have in the DNS. Maybe by the time
I die, we'll have a useful system!

Best,

Andrew "living in constant, foolish, failed hope" Sullivan

If browsers started implementing it, it could.

I suppose the more accurate/detailed statement would be:

Without modifying every client on the internet, there is no way for the
clients trying to reach you to reliably be informed of this port number
situation.

If you're going to touch every client, it's easier to just do IPv6.

Owen

My PS3 may want to talk to the world, but I have no control over Comcast's DNS.