Providers removing blocks on port 135?

> Why do you get to decide that, I can't, from a hotel room, call my ISP and
> put up a web server on my dialup connection so someone behind a firewall
> can retrieve a document I desperately need to get to them? Why
> _SHOULDN'T_
> I run a web server to do this over a dialup connection? Why do you get

  when scp or ftp over an ssh tunnel are much easier/lighter weight?
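A minimal sketch of what that lighter-weight route could look like, with hypothetical hostnames and filenames; the snippet only builds the command lines rather than executing them:

```python
# Hypothetical relay host that both parties can reach.
relay = "user@relay.example.net"

# Option 1: push the document to the relay with scp and let the
# recipient pull it from there.
push_cmd = ["scp", "urgent-document.pdf", f"{relay}:incoming/"]

# Option 2: a reverse ssh tunnel -- port 8080 on the relay forwards
# back to a web server on the dialup machine's port 80, so the person
# behind the firewall fetches from the relay, not from the dialup IP.
tunnel_cmd = ["ssh", "-N", "-R", "8080:localhost:80", relay]

print(" ".join(push_cmd))
print(" ".join(tunnel_cmd))
```

Either way, nobody needs to connect inbound to the dialup address itself.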

  or you could hand out ASNs and run third-party BGP from your
  hotel room back to the trusted core... there are lots of ways
  to get your critical content to the right party, some are more
  cost effective than others.

  The name "Rube Goldberg" comes to mind here...

The distinction may be blurrier these days, but there *is* a difference
between networking and internetworking.

  true enough.

--bill

Has anyone else noticed the flip-flops?

To prevent spam providers should block port 25.

If providers block ports, e.g. port 135, they aren't providing access to
the "full" Internet.

Should any dialup, dsl, cable, wi-fi, dhcp host be able to use any service
at any time? For example run an SMTP mailer, or leave Network
Neighborhood open for others to browse or install software on their
computers?

Or should ISPs have a "default deny" on all services, and subscribers need
to call for permission if they want to use some new service? Should new
services like Voice over IP, or even the World Wide Web, be blocked by
default by service providers?

As a HOST requirement, I think all hosts should be "client-only" by
default. That includes devices acting as hosts, such as routers,
switches, print servers, file servers, and UPSes. If a HOST uses a
network protocol for local host processes (e.g. X-Windows, BIFF, Syslog,
DCE, RPC), by default it should not accept network connections.

It should require some action, e.g. the user enabling the service, a
DHCP client enabling it in a profile, clicking things on the LCD display
on the front of the printer, etc.

If the device is low-end and only has a network connection (e.g. no
console), it may have a single (i.e. telnet or web, but not both)
management protocol enabled, provided it includes a default password which
cannot be discovered from the network (i.e. not the MAC address), is
different for each device (i.e. not predictable), and is only accessible
from the "local" network (e.g. the "internal" subnet interface). A
"proprietary" protocol is not an adequate substitute. Static passwords are
inherently insecure if you get enough guesses, so the device should block
use of the default password after N failed attempts until the device is
manually reset. I recognize this is a potential denial of service, and
for non-default passwords vendors may decide to do something else. But
if the user hasn't changed the default password, they probably aren't
using it anyway.
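A sketch of that lockout rule in Python; the class and method names are illustrative, not any vendor's actual interface:

```python
class DefaultPasswordGuard:
    """After N failed attempts the factory-default password stops
    working until a manual reset at the device itself."""

    def __init__(self, default_password, max_failures=5):
        self.default_password = default_password
        self.max_failures = max_failures
        self.failures = 0
        self.locked = False

    def try_login(self, password):
        if self.locked:
            return False              # default credential disabled
        if password == self.default_password:
            self.failures = 0
            return True
        self.failures += 1
        if self.failures >= self.max_failures:
            self.locked = True        # needs a manual reset to re-enable
        return False

    def manual_reset(self):
        # Only reachable via physical access, never over the network.
        self.failures = 0
        self.locked = False
```

The denial-of-service concern is visible in the sketch: anyone on the local net can force the lock, which is exactly the trade-off acknowledged above.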

SERVICE PROVIDERS do not enforce host requirements.

Has anyone else noticed the flip-flops?

To prevent spam providers should block port 25.

I still disagree with this. To prevent SPAM, people shouldn't run open
relays and the open relay problem should be solved. Breaking legitimate
port 25 traffic is a temporary hack.

If providers block ports, e.g. port 135, they aren't providing access to
the "full" Internet.

That would be my position, yes. Even though I personally have no real use
for that port (other than possibly a honeypot), I think that is true.
Generally, I want my net uncensored by my provider. If I want them to block
something, I'll tell them. Otherwise, I expect non-blocking to be the
default.

Should any dialup, dsl, cable, wi-fi, dhcp host be able to use any service
at any time? For example run an SMTP mailer, or leave Network
Neighborhood open for others to browse or install software on their
computers?

If the person running the system in question chooses to do so, yes, they
should be able to do so.

Or should ISPs have a "default deny" on all services, and subscribers need
to call for permission if they want to use some new service? Should new
services like Voice over IP, or even the World Wide Web, be blocked by
default by service providers?

Personally, I'm in the default permit camp with ISPs providing filtration on
demand to customer specs.

As a HOST requirement, I think all hosts should be "client-only" by
default. That includes devices acting as hosts, such as routers,
switches, print servers, file servers, and UPSes. If a HOST uses a
network protocol for local host processes (e.g. X-Windows, BIFF, Syslog,
DCE, RPC), by default it should not accept network connections.

It should require some action, e.g. the user enabling the service, a
DHCP client enabling it in a profile, clicking things on the LCD display
on the front of the printer, etc.

I could live with that, although having a printer reject LPD by default
doesn't make a lot of sense to me.

If the device is low-end and only has a network connection (e.g. no
console), it may have a single (i.e. telnet or web, but not both)
management protocol enabled, provided it includes a default password which
cannot be discovered from the network (i.e. not the MAC address), is
different for each device (i.e. not predictable), and is only accessible
from the "local" network (e.g. the "internal" subnet interface). A
"proprietary" protocol is not an adequate substitute. Static passwords are
inherently insecure if you get enough guesses, so the device should block
use of the default password after N failed attempts until the device is
manually reset. I recognize this is a potential denial of service, and
for non-default passwords vendors may decide to do something else. But
if the user hasn't changed the default password, they probably aren't
using it anyway.

I like that idea, although I don't like saying only one service. I think
one CLI and one GUI service is reasonable. I don't want to have to use a
web interface to get to the CLI, and I'm sure a lot of other customers don't
want to know what a CLI is.

SERVICE PROVIDERS do not enforce host requirements.

I REALLY like this.

Owen

Hi, NANOGers.

] I still disagree with this. To prevent SPAM, people shouldn't run open
] relays and the open relay problem should be solved. Breaking legitimate
] port 25 traffic is a temporary hack.

I suspect that most spam avoids open relays. The abuse of
proxies, routers, and bots for this purpose is far more in
vogue. Watch out for worms such as W32.Sanper, which also
provide a built-in spam relay network. Remove all of the
open mail relays and you are left with...lots of spam.

More at NANOG... ;)

Thanks,
Rob.

However, I'm not convinced blocking port 25 on dialups helps much with that.
What it does help with is preventing them from connecting to open relays.
The real solution in the long run will be two-fold:

  1. Internet hosts need to become less penetrable. (or at least
    one particular brand of software)

  2. SMTP AUTH will need to become more widespread and end-to-endish.

Owen

However, I'm not convinced blocking port 25 on
dialups helps much with that. What it does
help with is preventing them from connecting to
open relays.

We don't stop our dial customers from getting *to* anything.

What we do have though are (optional) *inbound* filters that make sure
no-one can connect to their privileged ports over TCP/IP, and a mandatory
filter that says only our network can deliver to their SMTP service.
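Modeled roughly in Python (the provider prefix is a placeholder), the two filters might look like:

```python
import ipaddress

OUR_NETWORK = ipaddress.ip_network("192.0.2.0/24")   # placeholder prefix

def inbound_allowed(src_ip, dst_port, privileged_filter=True):
    """Model of the filters described above: a mandatory rule that only
    our own network may deliver to a customer's SMTP port, plus the
    optional rule dropping all other inbound TCP to privileged ports."""
    src = ipaddress.ip_address(src_ip)
    if dst_port == 25:
        return src in OUR_NETWORK        # mandatory SMTP filter
    if dst_port < 1024 and privileged_filter:
        return False                     # optional privileged-port filter
    return True
```

A customer who opts out of the privileged-port filter still cannot receive off-net SMTP, which is what keeps the dialup pool free of reachable open relays.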

We don't get problems with open-relays on dialups. We didn't have any
problems with MS-Blaster on dialups either...

I'm considering adding privileged port filters for UDP/IP too, although
again it would be optional so that customers who run their own UDP/IP
services can get their responses (i.e. caching DNS, IKE, NTP, etc).

Ray

I still disagree with this. To prevent SPAM, people shouldn't run
open relays and the open relay problem should be solved. Breaking
legitimate port 25 traffic is a temporary hack.

Very little spam coming off dialups and other dynamically assigned,
"residential" type connections has anything to do with open relays.
The vast majority of it is related to open proxies (which the machine
owners do not realize they are running) and machines that have been
compromised by various viruses and exploits. These are machines that
should not be running outbound mailservers, and in most cases, the
owners neither intend nor believe that their systems are sending
mail. Merely stating that people shouldn't run open relays
didn't stop spam four years ago and it is less likely to do so now.

My guess is that you haven't heard of the current issue with various
servers running SMTP AUTH. These MTAs are secure by normal
mechanisms, but are being made to relay spam anyway.

It's hard enough to get mailservers secured when they are maintained
by real sysadmins on static IPs with proper and informative PTR
records. When the IP addresses sourcing the spam are moving targets,
with "generic" PTR records, and the machines are being operated by
end users with no knowledge that their computer is even capable of
sending direct to MX mail, the situation is impossible to solve
without ISP intervention via Port filtering, etc.

If the person running the system in question chooses to do so, yes,
they should be able to do so.

If the person running the system in question wants to run server
class services, such as ftp, smtp, etc, then they need to get a
compatible connection to the internet. There are residential service
providers that allow static IP addressing, will provide rDNS, and
allow all the servers you care to run. They generally cost more than
dial-ups or typical dynamic residential broadband connections. As a
rule, you tend to get what you pay for.

I'm not convinced blocking port 25 on dialups helps much with that.
What it does help with is preventing them from connecting to open
relays.

There are so few open relays now that spammers have moved on. They
now use, almost without exception, compromised Windows boxes acting as
open proxies, or on which a trojan spam-sender of some sort has been
installed - usually by one of the recent stream of viruses/worms.

Blocking outbound port 25, other than via a designated smarthost, would
at least prevent the direct-to-MX traffic from compromised boxes - which
currently seems to be the spammers' "method of choice".

The real solution in the long run will be two-fold:
1. Internet hosts need to become less penetrable.
   (or at least one particular brand of software)

2. SMTP AUTH will need to become more widespread and end-to-endish.

Right on both counts. But "end-to-end" may have to include the senders'
fingers: if bundled mail-client software contains the AUTH password,
it will be trivial for the spammers to hijack at the client level.

And users won't like having to key in their password each time, meaning
that trivial, guessable passwords will often be used. In recent weeks
one particular spammer seems to have perfected a knack of breaking SMTP
AUTH passwords on a widespread basis.
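Some rough arithmetic on why this works for the spammer; the guess rate is an assumed figure, not a measurement:

```python
GUESSES_PER_SECOND = 100          # assumed modest, single-connection rate

# A trivial password drawn from a common-words dictionary falls in minutes:
dictionary_size = 10_000
seconds_for_dictionary = dictionary_size / GUESSES_PER_SECOND

# Six random lowercase letters hold out for roughly a month:
lowercase_6 = 26 ** 6
days_for_lowercase_6 = lowercase_6 / GUESSES_PER_SECOND / 86400

# Ten random mixed-case letters and digits are effectively unbreakable
# at this rate:
mixed_10 = 62 ** 10
years_for_mixed_10 = mixed_10 / GUESSES_PER_SECOND / 86400 / 365
```

The gap between the first and last case is why guessable passwords, not the AUTH mechanism itself, are the weak point.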

Governments on both sides of the Pond may be reluctant to make spam
illegal, but the issue is not spam (or we couldn't be discussing it here).
This is a matter of system and network security, and if law enforcement
had the skills, resources and motivation to deal with what are clear
breaches of existing laws, admins' jobs would be significantly easier.

Until then, we have to deal with issues as they arise. Networks need to
be contactable quickly when compromised sites start to be misused, and
to respond immediately. Not just wait until "Monday Morning" in their
timezone ... if we can't deal with the incidents in real time, how can
we expect law enforcement to do anything?

Hello Comcast, Skynet, Ireland-onLine, NTL in the UK ... need I go on?
Where's Declan McC when we need him?

I would suggest instead that you have mandatory sending via your relays,
and allow inbound connections to port 25.

Sympatico, last I checked, didn't have any restrictions until you
tripped off their alarms, at which point you needed to configure your
smtpd to send mail via their relays. If they continued spewing copious
amounts of spam, cut them off entirely until they fix their
configuration.

There are a couple of pluses to this type of setup; people like me who
have dozens of (required) email addresses can forward them all to their
home machine. Some of my family also much prefer this even though
they've only got one or two email addresses. It also ensures that they
can't send spam directly no matter what the source; blocking inbound
connections will certainly stop open relays, but it won't stop trojans
and worms and whatnot that are really just spamware. (Note that I
consider spamware included in other applications and hidden from the
user "trojans.")

* rpb@community.net.uk (Ray Bellis) [Sun 21 Sep 2003, 00:25 CEST]:

What we do have though are (optional) *inbound* filters that make sure
no-one can connect to their privileged ports over TCP/IP, and a mandatory
filter that says only our network can deliver to their SMTP service.

There's an ISP in the Netherlands who do that too for their DSL
customers. Unfortunately, their mail servers are not that reliable to
begin with and also spool mail only for 4 hours, so if your connection
is down for the weekend (happens more often if you work for a company in
direct competition with the telco that owns this particular ISP) all
your mail bounces instead of getting spooled somewhere and delivered
later...

  -- Niels.

So if someone wants to run Outlook or Netbios from home, they need to get
a "server-class" connection to the Internet? If everyone buys a
server-class connection, we end up back where we started.

The problem is many "clients" act as servers for part of the transaction.
Remember X-Windows having ports 6000-6099 open on clients? IRC users
need to have Identd port 113 open. Microsoft clients sometimes need
to receive traffic on ports 135 and 137-139 as well as transmit it due to how
software vendors designed their protocols. Outlook won't receive the
"new mail" message, and customers will complain that mail is "slow."
And do we really want to discuss peer-to-peer networking, which, as
the name suggests, is peer-to-peer?

It costs service providers more (cpu/ram/equipment) to filter a
connection. And even more for every exception. Should service providers
charge customers with filtering less (even though it costs more), and
customers without filtering more (even though it costs less)? If the
unfiltered connection was less expensive, wouldn't everyone just buy
that; and we would be right back to the current situation?

In the old regulated days of telephony, service providers could get away
with charging business customers more for a phone line or charging for
"touch-tone" service. But the Internet isn't regulated. There is always
someone willing to sell the service for less if you charge more than what
it costs.

I would suggest instead that you have mandatory
sending via your relays, and allow inbound
connections to port 25.

We're a fairly big provider on the GRIC (global roaming) network.

That means that it's not feasible for us to prevent many of our POPs' users
from contacting off-net SMTP servers.

Running an enforced SMTP service via transparent proxying wouldn't stop the
spam problem, it would just shift it and probably get the proxy system
black-listed as an open-relay's relay.

Anyway, like I said, we don't *have* a spam problem on our dialups. By
virtue of our filters we don't have any open relays on dialup.

ADSL is a different matter and we do have occasional problems with open
relays and/or worms there.

Unfortunately the UK incumbent wholesaler(*) doesn't provide a way to filter
ADSL traffic within their ATM core. The only way to do it is to put another
router between our network and the "BT Central" router that connects their
ATM cloud to us. Of course that doesn't provide any inter-customer
filtering, since that traffic never reaches our network :(

Ray

(*) BT - they have a nearly complete monopoly on the local loop.

Would this be a reference to the qmail-smtp-auth patch that recently was
discovered, that if misconfigured, could allow incorrect relays? If so, I
would say that this was an isolated incident for a single patch for a
specific MTA and only when it was misconfigured. I'm not sure I would
describe that as "secure by normal mechanisms" nor quite the epidemic it
was the first week or two.

I'm not necessarily making a statement one way or the other on port 25
filtering, but SMTP Auth, when properly configured and protected against
brute force attacks is certainly a useful thing. YMMV of course.

andy

Would this be a reference to the qmail-smtp-auth patch that
recently was discovered, that if misconfigured, could allow
incorrect relays?

No, that was the tip of the iceberg.

If so, I would say that this was an isolated
incident for a single patch for a specific MTA and only when it was
misconfigured. I'm not sure I would describe that as "secure by
normal mechanisms" nor quite the epidemic it was the first week or
two.

We've seen the same behavior out of Postfix, QMail, Imail, Mdaemon,
Exchange, Sendmail, Mercury, Merak, NTMail, and others that I can't
recall off the top of my head.

In some cases, the relaying was fixed with the release of a new patch
or a conf change. In others, particularly Exchange, the guest account
was enabled, allowing open authentication. The big "BUT" is that
there is a not insignificant number of other machines that have
either been shown to have been brute forced or we've yet to determine
the mechanism that allows the relay.

The problem is not small.

I'm not necessarily making a statement one way or the other on port
25 filtering, but SMTP Auth, when properly configured and protected
against brute force attacks is certainly a useful thing. YMMV of
course.

Yes, it is a useful thing. It's not the ultimate answer.

A machine that tests secure by any test we are willing to run, that
requires fifteen-character passwords with multiple special
characters, and that is STILL relaying indicates there is a bad
thing happening somewhere.

This veers off the original topic. Of course I don't think any of us
recall what that was anyway... I remember back when I first started
using the DUL. Of all the DNSBLs I used at the time it blocked the most
spam of any of them, and by a long shot. About the time the DUL
and other MAPS lists went commercial is about the same time I noticed
fewer and fewer hits on the DUL. We still pay for an AXFR (IXFR) of it
but it doesn't block nearly as much as it used to.

The open proxy lists block an unbelievable amount of spam. In theory the
DUL would take care of this if it also listed residential dynamically
assigned cable/dsl lines (if it doesn't already, hmmm...). Still, the
open proxy DNSBLs seem to be more effective now. Bottom line: use every
DNSBL you possibly can and don't be afraid to pay for them. I strongly
recommend redirecting SMTP traffic for this same class of user as well.

Now I'm going to get even more off-topic. It occurs to me that major
changes to a protocol, such as SMTP gaining auth, should justify using a
different TCP/IP port. Think about it like this: if authenticated forms
of SMTP used a different TCP/IP port, we netadmins could justify leaving that
port open on these same dynamically assigned netblocks, on the theory that
they are only able to connect to other authenticated SMTP services.
Doesn't that seem logical?
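This is essentially the rationale behind the message submission port: RFC 2476 already designates port 587 for authenticated submission, separate from port 25 relaying. A sketch of the resulting egress policy for a dynamically assigned pool:

```python
SMTP_RELAY_PORT = 25       # unauthenticated, direct-to-MX delivery
SUBMISSION_PORT = 587      # authenticated message submission (RFC 2476)

BLOCKED_EGRESS = {SMTP_RELAY_PORT}

def egress_allowed(dst_port):
    """Block direct-to-MX port 25 from the dynamic pool while leaving
    the authenticated submission port (and everything else) open."""
    return dst_port not in BLOCKED_EGRESS
```

Compromised boxes spewing direct-to-MX spam hit the port 25 block, while legitimate roaming users submit through 587 with credentials.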

Justin

Absolutely. At least if the customer wants technical support or plans on
paying for their bandwidth. It costs *more* resources for an ISP to *not*
filter ports, and it costs them *less* resources to filter known ports that
are rarely used by the Joe Blow average user but are the cause of 99% of their
(our) headaches. How many people here have ever worked in a helpdesk with
hundreds of users calling you for help when they've been infected with the
latest greatest Netbios-enabled virus and lost their report, thesis,
archived email, pictures of the kids, you name it? I used to work at a
university helpdesk. Every single time the mail server hiccuped for whatever
reason, or the personal webserver was offline for a few minutes of
maintenance in the wee hours of the morning (no matter whether it was 2
minutes or 2 days), people would inundate us with complaints. All the real
problems had to be put on hold so we could answer the phones. Technical
support costs an ISP many times that of the necessary CPU and RAM
resources on an access server or border router needed to filter malicious
ports. Why don't we just wait until we identify that a user has been
infected or compromised (by whatever resource-hog of a method that
entails)? Then we can just disable their account and wait for them to
call. Those calls are always the most pleasant of the day.

When did proactive security measures become criminal? Was there a memo I
missed?

Justin

The majority of viruses still spread through port 25 and port 80.

I've asked other providers about their experiences. Based on their
experiences, the number of incidents for providers that filtered
netbios was essentially the same as providers that didn't. It didn't
significantly change the number of calls to their help desks over the
long-term (e.g. 6 months) either. They were hit with the same number of
drop-everything-all-hands-on-deck incidents. Microsoft Windows has
more than enough vulnerabilities. Blocking a few ports doesn't change
much. Deleting Outlook might help :)

I know how people working the help desk feel. But is this a case of "do
something" rather than figuring out what the problem is?

What data do people have to back up blocking specific ports? What were
your control groups? With Trojan proxies appearing on almost any port,
blocking anything less than every port will be ineffective.

At one time, signing up for "throwaway dial-up accounts" was a common
spammer MO. We got hit a couple times, and they were like a plague of
vermin [the spammers]. They'd sign up giving us bogus contact info and a
freshly stolen (active) credit card. When the account was activated,
they'd dial in using half a dozen or so lines and pump out as much spam
(direct-to-MX) as they could. The really annoying bit is, we'd terminate
them, they'd call right back, and sign up again, giving different bogus
info and card numbers. We'd block them by ANI, and they'd block caller-ID
when calling us. I ended up being forced to block access to some of our
dial-up numbers both by ANI and when there was no ANI, and then had to
set up exceptions for a few customers in those areas who we never got ANI
for. When I tried getting police in their area code to investigate, they
had no interest/were too busy...even though I could give them phone
numbers the accounts were used from and stolen credit cards.

To put a little operational spin in here...how many of you run dial-up
networks where you refuse logins unless you get ANI?...and if you do this,
do you also maintain an ANI blacklist?

Anyway...they moved on to proxy abuse, then outright theft by creating
their own proxies on compromised MS Windows boxes. Both methods have the
advantage of totally hiding the spammer from the recipients and bandwidth
amplification. I imagine you could utilize multiple spam proxies on
broadband connections pumping out your spam while connected via dial-up
yourself.

If you look at the numbers at http://njabl.org/stats, about 5% of the
hosts that have ever been checked are currently open relays (or nobody's
bothered to remove them). IIRC, at one point, this was nearly 20%.
13.6% are open proxies...and the disparity is definitely still growing,
with about 10x as many open proxies as relays being detected daily.
Unfortunately, the new breed of purpose-built spam proxies are generally
not remotely detectable, so the proxy percentage would be even higher if
it included the newer spam proxies.

Should any dialup, dsl, cable, wi-fi, dhcp host be able to use any service
at any time? For example run an SMTP mailer, or leave Network
Neighborhood open for others to browse or install software on their
computers?

As someone who has been using IP for a while now, I would very much like to be able to use any service at any time.

Or should ISPs have a "default deny" on all services, and subscribers need
to call for permission if they want to use some new service? Should new
services like Voice over IP, or even the World Wide Web, be blocked by
default by service providers?

Obviously not. Blocking services that are known to be bad or vulnerable wouldn't be entirely unreasonable, though. But who gets to decide which services should be blocked? Some services are very dangerous and not very useful, so blocking is a no brainer. Other services are only slightly risky and very useful. Where do we draw the line? Who draws the line?

As a HOST requirement, I think all hosts should be "client-only" by
default. That includes devices acting as hosts, such as routers,
switches, print servers, file servers, and UPSes. If a HOST uses a
network protocol for local host processes (e.g. X-Windows, BIFF, Syslog,
DCE, RPC), by default it should not accept network connections.

It should require some action, e.g. the user enabling the service, a
DHCP client enabling it in a profile, clicking things on the LCD display
on the front of the printer, etc.

Get yourself a Mac. :)

I think it would be useful to set aside a block of port numbers for local use. These would be easy to filter at the edges of networks but plug and play would still be possible.
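No such reserved block actually exists; purely as an illustration, with a made-up range, the edge filter would be trivial:

```python
# Hypothetical reserved "local use" port block -- no such IANA
# allocation exists; the range is invented for illustration.
LOCAL_USE_PORTS = range(48000, 48256)

def edge_filter_drops(dst_port):
    """Drop traffic to the reserved local-use block at the network edge,
    while plug-and-play services inside the site keep using it freely."""
    return dst_port in LOCAL_USE_PORTS
```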

SERVICE PROVIDERS do not enforce host requirements.

But someone has to. The trouble is that access to the network has never been considered a liability, except for local ports under 1024. (Have a look at Java, for example.) I believe that the only way to solve all this nonsense is to have a mechanism, preferably outside the host, or at least deep enough inside the system to be protected against application holes and user stupidity, which controls applications' access to the network. This must be based not only on application type and user rights (user www gets to run a web server that listens on port 80) but also on application version, so that when a vulnerability is found the vulnerable version of the application is automatically blocked.

I don't see something like this popping up overnight, though.
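A toy model of such a mechanism, with made-up applications and a hypothetical vulnerability list:

```python
# Illustrative policy table: which (application, port) pairs may listen,
# and which versions are known-vulnerable. All entries are invented.
ALLOWED_LISTENERS = {("httpd", 80), ("sshd", 22)}
VULNERABLE_VERSIONS = {("httpd", "1.3.27")}   # hypothetical advisory

def may_listen(app, version, port):
    """Grant network access by application and port, and revoke it
    automatically for versions with known vulnerabilities."""
    if (app, version) in VULNERABLE_VERSIONS:
        return False
    return (app, port) in ALLOWED_LISTENERS
```

The point of the sketch is the last check: the policy keys on version, so a new advisory changes behavior without touching the application itself.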

Iljitsch van Beijnum wrote:

But someone has to. The trouble is that access to the network has never been considered a liability, except for local ports under 1024. (Have a look at Java, for example.) I believe that the only way to solve all this nonsense is to have a mechanism, preferably outside the host, or at least deep enough inside the system to be protected against application holes and user stupidity, which controls applications' access to the network. This must be based not only on application type and user rights (user www gets to run a web server that listens on port 80) but also on application version, so that when a vulnerability is found the vulnerable version of the application is automatically blocked.

Go and count the Pintos on US101 or I-880. :)

I don't see something like this popping up overnight, though.

For this to be really effective, there needs to be an unbroken chain of authentication for code
from the author to your PC, and additionally the operating system needs to change to get rid
of the notion of "superuser". As has been said multiple times on this and other lists, most
consumer users expect their stuff to "just work", and unfortunately Microsoft translated this
requirement to "Always Local Administrator", which has catastrophic security consequences.

The chain above does not have to mean that there is central authority enabling the code to
run on your box, it can as well give the right to you or some place in the organization
where it makes sense.

Pete