Question concerning authoritative bodies.

Here's the background:

> And this is what makes DNSBLs a good deal. Mark is asking for trouble with
> his theories. If every ISP and business issues its own scans, we only
> succeed in making scanning traffic worse than spam itself at a server
> resource level. We also increase the administration factor when mistakes are
> made. Instead of contacting 3-5 DNSBLs, one must contact every ISP that
> happened to do a scan during the outage period. Centralizing scanning for
> security issues is a good thing in every way. It is the responsible
> thing to do.

I must reluctantly agree. (The reluctance stems from my desire not
to intrude on others' networks. However, it's been overcome by the
reluctance to be abused by those networks.)

Centralized, or quasi-centralized, scanning with appropriate safeguards
(to minimize frequency) and appropriate assignment of responsibility,
beats the heck out of having thousands of independent entities repeating
the same scans and thus adding to the collective misery.

If we agree on this (and I don't know that we all do) then the debate
shifts to "who?" and "how?".

So I'm curious what people think. We have semi-centralized various things in
the past, such as IP assignments and our beloved DNS root servers. Would it
not also make sense to handle common security checks in a similar manner? In
creating an authority to handle this, we cut back on the amount of noise
issued. I bring this up because the noise is getting louder. More and more
networks are issuing their own relay and proxy checks. At this rate, in a
few years, we'll see more damage done to server resources by scanners than
we do from spam and those who would exploit such vulnerabilities.

I know that this is more service level than network level, except that the
arguments continue to escalate over the rights of people to scan a network.
These arguments would be diminished if an authoritative body handled it in a
proper manner. At what point do we as a community decide that something
needs to be done? Would it not be better to have a single test suite run
against a server once every six months than the constant bombardment we see
now?

-Jack

> So I'm curious what people think. We have semi-centralized various things
> in the past, such as IP assignments and our beloved DNS root servers.
> Would it not also make sense to handle common security checks in a similar
> manner?

IP assignments are factual things of record - AS1312 has 198.82/16 and
128.173/16, and no amount of value judgments will change that. And yet,
there are scattered complaints about what it takes to get a /19 to multihome.

DNS servers are similarly "things of record". This organization has this
domain, and their servers are found where the NS entries point. And the
dispute resolution process is, in a word, a total mess - how many *years*
has the sex.com debacle dragged on now?

So who do you trust to be objective enough about a centralized registry
of security, especially given that there's no consensus on what a proper
level of security is? And if there's a problem, what do you do? In our
case, do you ban an entire /16 because one chucklehead sysadmin forgot to
patch up IIS (or wasn't able to - I know of one case where one of our boxes
got hacked while the primary sysadmin was recovering from a heart bypass).
Dropping a note to our abuse@ address will probably get it fixed, but often
we're legally not *ABLE* to say much more than "we got your note and we'll
deal with the user" - Buckley Amendment is one of those laws that I'm glad
is there, even if it does make life difficult sometimes.

> At what point do we as a community decide that something needs to be done?
> Would it not be better to have a single test suite run against a server
> once every six months than the constant bombardment we see now?

I submit to you the thesis that in general, the sites that are able to tell
the difference between these two situations are not the sites that either
situation is trying to detect.

> So who do you trust to be objective enough about a centralized registry
> of security, especially given that there's no consensus on what a proper
> level of security is? And if there's a problem, what do you do? In our
> case, do you ban an entire /16 because one chucklehead sysadmin forgot to
> patch up IIS (or wasn't able to - I know of one case where one of our boxes
> got hacked while the primary sysadmin was recovering from a heart bypass).
There are private systems in use today like NJABL which act as centralized
resources. I believe that it is possible to come to an agreement on a
standardized test suite that can be used, and on how variables such as the
number and frequency of scans should be set. I'm not suggesting a full
security evaluation of networks, but a detection mechanism that can be used
as a resource to recognize standard issues, primarily protecting email,
which is one of our most utilized resources.

> I submit to you the thesis that in general, the sites that are able to tell
> the difference between these two situations are not the sites that either
> situation is trying to detect.

I agree for the most part (excluding RoadRunner given recent events).
However, the sites that are able to tell the difference suffer the costs of
scans just the same while everyone tries to detect those unable to tell the
difference. And as I mentioned, you always have situations like RoadRunner
arise where a detection was needed, but they are able to detect the scans
and issue complaints even when they were at fault. The goal is to provide a
service that many require, to limit the amount of noise currently generated.
I do not think that we can necessarily scan and analyze every security
problem. However, I do think that there are no-brainer security issues that
can be detected, from which the public demands protection: in particular,
open SMTP relays and unsecured proxy/SOCKS servers. Detection of, say, the
latest sendmail or Sapphire exploits is not as critical. We can passively
detect these things from their own abuse. We cannot passively detect open
proxies and SMTP relays.
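
(If it helps to make "a detection mechanism" concrete: a single open-relay
probe can be as small as the sketch below. This is just an illustration -
the host argument and both probe addresses are placeholders, and a real
central tester would wrap it in the frequency limits discussed above.)

    import smtplib

    def looks_like_open_relay(host: str) -> bool:
        # One probe: ask the server to carry mail between two outside
        # domains. Accepting the RCPT strongly suggests an open relay.
        try:
            s = smtplib.SMTP(host, 25, timeout=15)
            s.helo("probe.example.org")
            s.mail("probe@example.org")            # sender outside the host
            code, _ = s.rcpt("probe@example.net")  # recipient also outside
            s.quit()
            return code == 250                     # accepted for relaying
        except (smtplib.SMTPException, OSError):
            return False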

-Jack

> There are private systems in use today like NJABL which act as centralized

private systems. Plural. Because..

> resources. I believe that it is possible to come to an agreement on a
> standardized test suite that can be used, and on how variables such as the
> number and frequency of scans should be set. I'm not suggesting a full
> security evaluation of networks, but a detection mechanism that can be
> used as a resource to recognize standard issues.

Forgive my cynicism, but... you're saying this on the same mailing list
where it's possible to start a flame-fest by saying that ISPs should
ingress-filter RFC1918 source addresses so they don't pollute the net at
large? ;)

I've been participating in the Center for Internet Security development of
security benchmarks - it was hard enough to get me, Hal Pomeranz, and the
reps from DISA and NSA to agree on standards for sites to apply *to
themselves*. There are a lot of things that I think are good ideas that I
don't want other sites checking for, no matter how well intentioned they are.

I'd just *LOVE* to hear how you intend to avoid the same problems that the crew
from ORBS ran into with one large provider who decided to block their probes.
Failing to address that scenario will guarantee failure....

> I'd just *LOVE* to hear how you intend to avoid the same problems that the
> crew from ORBS ran into with one large provider who decided to block their
> probes. Failing to address that scenario will guarantee failure....

Run the probes from the DNS root servers. Problem solved. Go ahead and block
them. haha.

Seriously, I do understand that some networks would block the probes. This
is to be expected. Many of these same networks block probes from current
lists or issue "do not probe" statements. A network is more likely to
concede to tests from a central authority that limits what is tested and how
often if it means the reduction of scans from numerous sources for lists
such as DSBL. The only way such a resource would work is if the largest
networks back it. Blocking the scans at the TCP/IP level is easily
detectable: the provider received email from said server, the IP was
submitted for testing, and no connection can be established to said server.
Place it in the "wouldn't allow scan" list. Politely ask AOL to use the
"wouldn't allow scan" list for all inbound SMTP connections.
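
(A sketch of that bookkeeping, assuming nothing fancier than a plain TCP
connect; all names here are illustrative, not a real system:)

    import socket

    def probe_connect(ip: str, port: int = 25, timeout: float = 15.0) -> bool:
        # Can a TCP connection be opened to the server's SMTP port at all?
        try:
            with socket.create_connection((ip, port), timeout=timeout):
                return True
        except OSError:
            return False

    def classify(ip: str, received_mail_from_ip: bool) -> str:
        # The policy described above: a host that delivers mail to us but
        # refuses the tester's probe goes on the "wouldn't allow scan" list.
        if not received_mail_from_ip:
            return "not a candidate"
        return "testable" if probe_connect(ip) else "wouldn't allow scan"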

People want the abuse of unsecured SMTP relays stopped. I'm afraid it is
a choice of the lesser of two evils. The scans are going to happen no matter
what. The question is, will administrators accept that a single run of a
test suite on a server that has established connections to other servers is
better than the entire 'net issuing its own scans? Am I wrong in assuming
that a majority of networks use SMTP and do not wish their servers abused?

-Jack

> > made. Instead of contacting 3-5 DNSBLs, one must contact every ISP that
> > happened to do a scan during the outage period. Centralizing scanning for
> > security issues is a good thing in every way. It is the responsible
> > thing to do.

This, IMO, is where the real headache lies. If every provider (or just
every large provider) has their own private DNSBL, and worse, doesn't do
much to document how it works - i.e., how to check if IPs are in it, how to
get IPs out of it - then it becomes a major PITA to deal with these
providers when one of your servers gets into their list. I've personally
dealt with this several times over the past couple of years with Earthlink
and more recently with AOL. In each case, there was no way (other than
5xx errors or even connections refused) to tell an IP was listed. In each
case, there was no documented procedure for getting delisted. In AOL's
case, they couldn't even tell us why our mail was being rejected or our
connections to their MXs blocked, and I had to wait a week for their
postmaster dept. to get to my ticket and return my call to fill me in on
what was going on.

> More and more networks are issuing their own relay and proxy checks. At
> this rate, in a few years, we'll see more damage done to server resources
> by scanners than we do from spam and those who would exploit such
> vulnerabilities.

I doubt that's possible. If an average-sized ISP mail server receives
messages from, say, a few thousand unique IPs/day, and if that ISP wanted
to test every one of those IPs (with some sane frequency limiting of no
more than once per X days/weeks/months) then it doesn't take long at all
to get through the list. Suppose every one of those servers decided to
test you back. Now you're looking at a few thousand tests/day (really a
fraction of that if they do any frequency limiting). I've got servers
that each reject several hundred thousand (sometimes >1 million)
messages/day using a single DNSBL.
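
(To put rough numbers on that - all of them illustrative:)

    senders_per_day = 3000       # "a few thousand unique IPs/day"
    retest_interval_days = 30    # each tester's own frequency limit
    worst_case_tests = senders_per_day                        # all test back today
    amortized_tests = senders_per_day / retest_interval_days  # ~100 tests/day
    dnsbl_rejects = 500_000      # "several hundred thousand messages/day"
    print(worst_case_tests, amortized_tests, dnsbl_rejects)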

Also, I suspect consensus on a central authority and testing methods is
highly unlikely. People can't agree on "what is spam?" or how to deal
with providers who turn a blind eye to spammer customers (SPEWS). How
will a single central DNSBL bring all these people with opposing views
together?

Two obvious reasons for the existence of dozens of DNSBLs are:

1) not agreeing with the policies of existing ones...thus you start your
own
2) not trusting existing ones (not being willing to give up control over
what you block to some 3rd party), so you start your own

I suspect AOL and Earthlink run their own DNSBLs primarily for the second
reason. How would you convince them to trust and give up control to a
central authority?

Even if IANA were to create or bless some existing DNSBL and decree that
all IP address holders will submit to testing or have their space revoked
(yeah, that'll happen), there would still be those who weren't happy with
the central DNSBL, thus creating demand for additional ones.

> These arguments would be diminished if an authoritative body handled it in
> a proper manner. At what point do we as a community decide that something
> needs to be done? Would it not be better to have a single test suite run
> against a server once every six months than the constant bombardment we
> see now?

Parts of the community have already decided and have helped to create
central quasi-authoritative DNSBLs. If nobody uses a DNSBL, who cares
what's in it? If a sufficient number of systems use a DNSBL, that creates
authority.

> The only way such a resource would work is if the largest networks back
> it. Blocking the scans at the TCP/IP level is easily detectable: the
> provider received email from said server, the IP was submitted for
> testing, and no connection can be established to said server. Place it in
> the "wouldn't allow scan" list. Politely ask AOL to use the "wouldn't
> allow scan" list for all inbound SMTP connections.

Lots of people run outgoing mail servers that don't accept connections
from the outside. A scary number of people run "multihomed" mail servers
where traffic comes in on one IP, leaves on another, and the output IP
doesn't listen for SMTP connections.

> People want the abuse of unsecured SMTP relays stopped.

Some do. Some see absolutely nothing wrong with their running open
relays. You're going to need a serious authority figure with some
effective means of backing up their policy to change these minds.

BTW...these topics have been discussed before. Before we all get warnings
from the NANOG list police, have a look at the thread I started back in
8-2001: http://www.cctec.com/maillists/nanog/historical/0108/msg00448.html

Date: Sun, 9 Mar 2003 14:59:05 -0500 (EST)
From: jlewis

> In AOL's case, they couldn't even tell us why our mail was
> being rejected or our connections to their MXs blocked, and I
> had to wait a week for their postmaster dept. to get to my
> ticket and return my call to fill me in on what was going on.

Ewwww. Much better to put a semi-descriptive code in the 5.x.x
and give a contact phone number and/or off-net email box.

> Parts of the community have already decided and have helped to
> create central quasi-authoritative DNSBLs. If nobody uses a
> DNSBL, who cares what's in it? If a sufficient number of
> systems use a DNSBL, that creates authority.
True. It cracks me up when someone complains about being on
Selwerd XBL.

Eddy

You may find it funny, but I do not. I get literally dozens, possibly
hundreds of calls a year about that moron. He costs us real money in lost
cycles. His inclusion in the various "master lists" also hurts the validity
of those lists (something I've often wondered about: is it possible that
Selwerd is actually trying to point out the lunacy of [many] lists?).

> > In AOL's case, they couldn't even tell us why our mail was
> > being rejected or our connections to their MXs blocked, and I
> > had to wait a week for their postmaster dept. to get to my
> > ticket and return my call to fill me in on what was going on.
>
> Ewwww. Much better to put a semi-descriptive code in the 5.x.x
> and give a contact phone number and/or off-net email box.

There was a multiline message (when our connections weren't just refused
or immediately closed).

550-The information presently available to AOL indicates that your server
550-is being used to transmit unsolicited bulk e-mail to AOL. Based on AOL's
550-Unsolicited Bulk E-mail policy at http://www.aol.com/info/bulkemail.html
550-AOL cannot accept further e-mail transactions from your server or your
550-domain. Please have your ISP/ASP contact AOL to resolve the issue at
550 703.265.4670.

Trouble was, the people at 703.265.4670 can't help you. They just take
your name, number, and some other basic info, and open a ticket that the
postmaster group will "get to eventually".

On the affected system, I ended up changing the source IP for talking to
AOL's servers.

> True. It cracks me up when someone complains about being on
> Selwerd XBL.

xbl.selwerd.cx might be useful for a few points in a SpamAssassin setup.
I don't use it.

measl@mfn.org implied that some of the other DNSBLs include selwerd. I'm
not aware of any, but I'm sure there are lots of DNSBLs I've never heard
of and know nothing about.

We just had this same exact thing happen to us, but not by AOL - by
Comcast. We have a lot of aliases pointing to Comcast email addresses, so
my best guess is that one or more of them had enough spam or spam bounces
going to them to trigger something. Nobody there could tell me exactly
what happened, but after a bunch of calls, they delisted our mail server
about 3 days later. In the meantime, we just routed the mail going to
Comcast through another server.

Hmm...I would argue that every operator needs to run their own DNSBL.
Some operators will simply mirror a central authority, others will
ignore some or all of the data in the central authority, and the simple
case for some operators is to have an empty set.

It would be very difficult to convince any operator to give up control
of defining their own DNSBL (or even not having one at all).

-ron

> Hmm...I would argue that every operator needs to run their own DNSBL.

Can you elaborate on why? IMO, there are definite benefits to
centralized, shared DNSBLs, especially if testing is involved. Many can
benefit from the work done by a few and not have to duplicate the work.

If you only DNSBL IPs after you receive spam from them, you have to get
spammed by every IP before it's blocked. Why not reject mail from IPs
that have spammed others before they spam you and your customers? Though
I have problems with the way it's been run, I think that's the idea behind
bl.spamcop.net. If they could just restrict nominations to a more clueful
group of users, such a system could be very effective for blocking spam
everywhere as soon as one system gets hit. For spam from open relays and
proxies, a centralized DNSBL that tests the IPs that talk to servers using
it can be just as, if not more, effective.
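
(For what it's worth, the query side of a shared DNSBL is trivial; by the
usual convention, a listed IP resolves under the list's zone. A minimal
sketch of the check - the zone name here is illustrative:)

    import socket

    def dnsbl_listed(ip: str, zone: str = "bl.example.org") -> bool:
        # Reverse the IPv4 octets and look the name up under the zone; any
        # A record (typically 127.0.0.x) means listed, NXDOMAIN means not.
        name = ".".join(reversed(ip.split("."))) + "." + zone
        try:
            socket.gethostbyname(name)
            return True
        except socket.gaierror:
            return False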

> It would be very difficult to convince any operator to give up control
> of defining their own DNSBL (or even not having one at all).

You can use a central DNSBL without giving up total control. Shortly
after I configured servers to use a DNSBL for the first time, I recognized
the need for a local DNSWL and have continued to use one ever since. When
I set up other people's servers to use DNSBLs, I help them set up a DNSWL
and explain how to maintain it.
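
(The resulting check order is simply "local DNSWL first, shared DNSBL
second" - roughly this sketch, with illustrative names:)

    from typing import Callable, Set

    def accept_connection(ip: str, local_dnswl: Set[str],
                          listed: Callable[[str], bool]) -> bool:
        # The local whitelist always wins, so using a shared DNSBL never
        # means giving up the final say over what you block.
        if ip in local_dnswl:
            return True
        return not listed(ip)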

I expect this is different in Ron's case since in a single day he gets
enough spam to be equivalent to every IP address once. :) So what's an
extra day, right?

Now if AOL would allow their DNSBL to be mirrored...

andy

Hmm...copy of centralized DNSBL + local DNSWL = local DNSBL? I guess
the point is that centralized data is good in some sense, but ultimately
mirroring, copying, editing, or selective copying of that data will be done
by operators, in effect creating their own local DNSBLs.

-ron

So where can we get copies of the AOL DNSBL? :)
I wonder how many MB the zone file is.