> Our mail servers reject connections that don't follow the RFC. Am I
> wrong to do this?
Seth,
RFC 1122 (Requirements for Internet Hosts - Communication Layers)
section 1.2.2 (Robustness Principle):
"Be liberal in what you accept, and
That particular philosophy has done great wonders for e-mail and the spam
problem, been a key issue on both the penetration and implementation sides
of firewall design, etc.
"Liberal," when defined as "accept anything you reasonably can, and try to
deal with it," appears to be a policy that has had an overall negative
impact on protocol design and interoperability on the Internet.
"Liberal," if defined instead as "must accept anything in compliance with
RFC sender-side MUST/SHOULD/MAY's, and should reject as much of anything
else as you can figure out to," would be a better way to have defined
"liberal." I think I would have preferred the word "robust" instead of
"liberal."
This would have spared us the agony of systems that are "smart" enough to
go direct-to-MX, but not smart enough to send a valid FROM line.
conservative in what you send"
If only a more significant percentage of software was written in that
manner...
> That particular philosophy has done great wonders for e-mail and the
> spam problem
Joe,
I've heard similarly unsubstantiated versions of this claim over and
over. The fact is I've done quite a bit of development on anti-spam
systems and the only protocol violation that has been consistently
valuable for rejecting spam is the fire-and-forget violation. That's
the one where they pipeline the entire send-side of the conversation
in the first data packet without waiting for the banner or checking
each response as it comes back. It's a terribly tempting optimization
to the spam-sending process and not enough servers detect or reject
it.
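A minimal sketch of how a receiving server might detect the fire-and-forget violation described above: look at whether the client has already written anything into the socket before the 220 banner goes out (Postfix's postscreen runs a production-grade version of this "pregreet" test). The function name and return labels here are invented for illustration:

```python
def classify_prebanner(data: bytes) -> str:
    """Classify whatever the client sent before our 220 banner.

    A compliant SMTP client sends nothing until it has read the server's
    greeting (RFC 5321), so any pre-banner bytes are a protocol violation.
    """
    if not data:
        return "ok"                 # client waited, as the RFC requires
    first = data.split(b"\r\n", 1)[0].strip().upper()
    if first.startswith((b"EHLO", b"HELO")):
        return "pipelined-session"  # whole SMTP dialogue sent blind
    return "junk"                   # random probe or wrong protocol
```

A real server would combine this with a short deliberate delay before writing the banner, so impatient senders reveal themselves.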
Anti-spam activity at the protocol level is looking for behavioral
signatures unique to spammer software. Protocol-correct signatures are
just as valuable as protocol-incorrect ones, but it's all a game of
whac-a-mole. Once a signature is identified and promulgated, the
software exhibiting it is either revised or falls out of use. A few
months later the folks still nailed are the false positives.
> conservative in what you send"
> If only a more significant percentage of software was written in that
> manner...
I'll second that sentiment. Seth's customer is unambiguously wrong.
Unfortunately, that doesn't make Seth right. Omitting the angle brackets
around the address has been a common SMTP error since the inception of
the protocol, second only to incorrect line endings (bare LF instead of
CRLF). If you want your implementation to be robust, you have to
silently allow those common mistakes.
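To make that trade-off concrete, here is a hedged sketch of a liberal MAIL FROM parser that tolerates exactly the two common mistakes mentioned above: bare-LF line endings and a missing pair of angle brackets. The function name is invented for illustration:

```python
import re

def parse_mail_from(line: str):
    """Liberal parse of an SMTP 'MAIL FROM:<addr>' command.

    Tolerates bare LF as well as CRLF, and an address given without the
    RFC-required <> brackets. Returns the address string (the null
    reverse-path <> parses to ""), or None if the command is
    unrecognisable.
    """
    line = line.rstrip("\r\n")            # accept LF as well as CRLF
    m = re.match(r"(?i)MAIL\s+FROM:\s*(.+)", line)
    if m is None:
        return None
    addr = m.group(1).strip()
    if addr.startswith("<") and addr.endswith(">"):
        addr = addr[1:-1]                 # the RFC-compliant form
    return addr
```

A strict server would instead reject anything that doesn't match the bracketed form exactly; the liberal version above is what the robustness principle, as commonly read, pushes implementers toward.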
Because....we wouldn't have e-mail? Consider the pain of getting
worldwide interoperability for a "notmail" system that insisted on
strict validation...
The SMTP ship has already sailed, so trying to change the behavior of
deployed email software would be difficult.

I do, however, reject the notion that strict validation makes
interoperability painful to implement. If the specifications were
clearly defined, rather than left open to interpretation by the
implementer, interoperability would be almost assured. The problem is
that many specifications in RFCs are loose and open to interpretation by
individual software programmers.

But, to the original question: if the customer's email is important to
the business, then you may want to accept email that is not compliant
with a strict interpretation of the RFC.
Let's bring this one to closure. The author's question has been
answered, and this is backing itself into an endless thread with
arguments better suited to the IETF than to NANOG.
I completely agree. If it weren't for that philosophy, we wouldn't
have an email problem at all.
> Because....we wouldn't have e-mail? Consider the pain of getting
> worldwide interoperability for a "notmail" system that insisted on
> strict validation...
Consider the pain of getting worldwide interoperability for a "notmail"
system where following the specs was thrown to the wind.
There's another old saying: Quality, Schedule, Features: Pick 2. It
applies to specifications as well as implementations. This is why the
robustness principle is important (and IMO ought to be followed); it
recognizes that there might be communication in the absence of perfect
specification or implementation, and that it's valuable (in general)
to let that communication proceed. (An argument to the contrary is
that this principle was introduced at a time when there was a much
lower incidence of "unwanted traffic", particularly that which
strongly correlated to protocol violations.)