Packet anonymity is the problem?

If you connect a dialup modem to the public switched telephone network, do
you rely on Caller ID for security? Or do you configure passwords on the
systems to prevent wardialers with blocked CLIDs from accessing your
system? Has a generation of firewalls and security practices distracted
us from the fundamental problem: insecure systems?

http://www.ecommercetimes.com/perl/story/security/33344.html
  Gartner research vice president Richard Stiennon confirmed that packet
  anonymity is a serious issue for Internet security.
[...]
  "Because of the way TCP/IP works, it's an open network," Keromytis
  said. "Other network technologies don't have that problem. They have
  other issues, but only IP is subject to this difficulty with abuse."

[...]
  Bellovin compared the situation to bank robberies. "[S]treets, highways
  and getaway cars don't cause bank robberies, nor will redesigning them
  solve the problem. The flaws are in the banks," he said. Similarly, most
  security problems are due to buggy code, and changing the network will
  not affect that.

: "Because of the way TCP/IP works, it's an open network," Keromytis
: said. "Other network technologies don't have that problem. They have
: other issues, but only IP is subject to this difficulty with abuse."

If networks properly filtered the source IPs of packets exiting or entering
their networks, permitting only the valid delegations for that network, this
would be far less of a problem: we could at least get *some* accountability going.
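The check Dan describes can be sketched in a few lines. This is a minimal
illustration (not anyone's actual implementation), using Python's stdlib
`ipaddress` module; the prefixes shown are hypothetical documentation ranges
standing in for a network's real delegations.

```python
import ipaddress

# Hypothetical prefixes delegated to "our" network
# (RFC 5737 documentation ranges, for illustration only).
OUR_PREFIXES = [
    ipaddress.ip_network("192.0.2.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

def egress_permitted(src_ip: str) -> bool:
    """BCP38-style egress check: forward a packet only if its source
    address falls inside one of our own delegations."""
    addr = ipaddress.ip_address(src_ip)
    return any(addr in net for net in OUR_PREFIXES)

# A packet sourced from our own space passes...
print(egress_permitted("192.0.2.17"))    # True
# ...while a packet with a spoofed, foreign source is dropped.
print(egress_permitted("203.0.113.5"))   # False
```

A border router applying this rule can't stop its hosts from attacking, but
it does make every attack packet traceable back to the originating network,
which is the accountability Dan is after.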

Of course, the still high number of bogon routes illustrates that very few
folks (if any) really care.

Worse, the registries make it trivial to steal registrations and
assignments, but nigh impossible to get them back to the rightful owners.

-Dan

: "Because of the way TCP/IP works, it's an open network," Keromytis
: said. "Other network technologies don't have that problem. They have
: other issues, but only IP is subject to this difficulty with abuse."

If networks properly filtered the source IPs of packets exiting or entering
their networks, permitting only the valid delegations for that network, this
would be far less of a problem: we could at least get *some* accountability going.

Of course, the still high number of bogon routes illustrates that very few
folks (if any) really care.

in another thread tonight i see subjects like "lazy network operators" and at
first glance, those are the people you're describing (who don't really care.)

however, that's simple-minded. "because of the way tcp/ip works..." is a very
good lead-in toward the actual cause of this apparent non-caring / laziness.

because of the way ip works, and because of the way human nature works, many
of the things that would have to be done to fix this problem have asymmetric
cost/benefit. if a network provider isn't lazy, then everyone except them
will benefit from that non-laziness. human nature says that ain't happening.

even though i try every day, it probably is too late to redesign human nature.

the asymmetric cost/benefit is an emergent property of fundamental design
principles in tcp/ip, so it's no surprise that ipv6 didn't do much about this
"weakness".

attempting to symmetrize cost/benefit without design changes in either human
nature or the tcp/ip protocol suite has had mixed results (e.g., MAPS).

so, the article sean quoted is all very entertaining, but says nothing new,
which is sad, because i for one would really like to hear something new.

If you connect a dialup modem to the public switched telephone network, do
you rely on Caller ID for security? Or do you configure passwords on the
systems to prevent wardialers with blocked CLIDs from accessing your
system? Has a generation of firewalls and security practices distracted
us from the fundamental problem: insecure systems?

http://www.ecommercetimes.com/perl/story/security/33344.html
  Gartner research vice president Richard Stiennon confirmed that packet
  anonymity is a serious issue for Internet security.
[...]
  "Because of the way TCP/IP works, it's an open network," Keromytis
  said. "Other network technologies don't have that problem. They have
  other issues, but only IP is subject to this difficulty with abuse."

Is IP really more insecure than, say, *nix? Back in the days of open mail relays and telnet and guest accounts and anonymous FTP sites, etc., hosts were at least as insecure as the "network" is today. Filtering source addresses is analogous to turning off telnet or applying TCP wrappers on a host. No one seems to think that securing your host is a bad idea, but securing your network seems to be way too much trouble.

Of course, the analogy only goes so far. Filtering source addresses costs you time & effort, and maybe even hardware if you are running old boxes. Not filtering doesn't really hurt you until someone launches an attack from your network, and even then you might not notice. Leaving telnet running on your host hurts you directly, so that option is not even considered.

Point is IP is not "inherently insecure". IP is just a transport mechanism. How you configure it, and what you do with it, is up to you.

[...]
  Bellovin compared the situation to bank robberies. "[S]treets, highways
  and getaway cars don't cause bank robberies, nor will redesigning them
  solve the problem. The flaws are in the banks," he said. Similarly, most
  security problems are due to buggy code, and changing the network will
  not affect that.

I've always liked that Bellovin guy. :-)

Another note: Today's attacks tend not to spoof source addresses. What's a few 10s of 1000s of zombies here or there? Let them be caught; it's not worth the time to put in source-spoofing code. Easier to just make them spew massive bits as fast as they can. Shouldn't we concentrate on the problem (hosts), not the transport?

  "Because of the way TCP/IP works, it's an open network," Keromytis
  said. "Other network technologies don't have that problem. They have
  other issues, but only IP is subject to this difficulty with abuse."

I don't think so. Non-IP networks such as the phone network, the (snail) mail network and the pizza delivery network are also subject to abuse. The difference is that there are far fewer convenient multipliers around that give an attacker an asymmetric advantage.

  Bellovin compared the situation to bank robberies. "[S]treets, highways
  and getaway cars don't cause bank robberies, nor will redesigning them
  solve the problem. The flaws are in the banks," he said. Similarly, most
  security problems are due to buggy code, and changing the network will
  not affect that.

Ok, then explain to me how removing bugs from the code I run prevents me from being the victim of denial of service attacks.

   It's the other way around in fact: if others were to run (more)
   secure code, there would be far less boxen used as zombies to launch
   ddos attacks against your infrastructure, to propagate worms, and to
   be used as spam relays.

   While it can sound a bit theoretical (to hope that the "others" will
   run secure code), as the vast majority of users run OSs from one
   particular (major) vendor, an improvement in said family of OSs
   would certainly benefit everyone. Just think about all the recent
   network havoc caused by worms propagating on one OS platform ...

      - yann

Ok, then explain to me how removing bugs from the code I run prevents
me from being the victim of denial of service attacks.

   It's the other way around in fact: if others were to run (more)
   secure code, there would be far less boxen used as zombies to launch
   ddos attacks against your infrastructure, to propagate worms, and to
   be used as spam relays.

You make two assumptions:

1. denial of service requires compromised hosts
2. good code prevents hosts from being compromised

I agree that without zombies launching a significant DoS is much more difficult, but it can still be done. Also, while many hosts run insecure software, the biggest security vulnerability in most systems is the finger resting on the left mouse button.

Also, waiting for others to clean up their act to be safe isn't usually the most fruitful approach.

   While it can sound a bit theoretical (to hope that the "others" will
   run secure code), as the vast majority of users run OSs from one
   particular (major) vendor, an improvement in said family of OSs
   would certainly benefit everyone. Just think about all the recent
   network havoc caused by worms propagating on one OS platform ...

I'm not all that interested in plugging individual security holes. (Not saying this isn't important, but to the degree this is solvable things are moving in the right direction.) I'm much more interested in shutting up hosts after they've been compromised. This is something we absolutely, positively need to get a handle on.

There are network equipment manufacturers who offer
last-mile protection at the chip level which forces
authentication or the packets get dropped. This has
been around for about 4 years now, and people should
seriously look at it as a solution. Fast, changeable
FPGA designs can accommodate such issues and can be
changed on the fly, long before someone has time to
effectively reverse engineer them to find out how they
work; attackers will always be behind by several years
and will not have access to source code to be able to
hack anything.

Forced identification for people who purchase Cisco
reseller equipment, and any other manufacturer of said
equipment, would put a dent in some of this nonsense
also. If there is to be security then you must look
at the entire issue, well beyond the ability to hack
stuff. Anyway, my 2 cents for the moment.

-Henry

>>Ok, then explain to me how removing bugs from the code I run prevents
>>me from being the victim of denial of service attacks.

> It's the other way around in fact: if others were to run (more)
> secure code, there would be far less boxen used as zombies to launch
> ddos attacks against your infrastructure, to propagate worms, and to
> be used as spam relays.

You make two assumptions:

1. denial of service requires compromised hosts

   I don't remember having made such an assumption :-) the assumption i
   made (and i still make) is that compromised hosts *are* used for
   dos attacks, as well as for other uses having major network impact
   (worms and spam, that is)

2. good code prevents hosts from being compromised

   yes, i think that good code can reduce the exposure to
   compromises. And then came the users ...

I agree that without zombies launching a significant DoS is much more
difficult, but it can still be done. Also, while many hosts run
insecure software, the biggest security vulnerability in most systems
is the finger resting on the left mouse button.

   I perfectly agree. But there are technical countermeasures available
   to limit the user willingness to help compromise his own box.
   Sandboxing, ingress *and* egress filtering, sensible security
   defaults and so on.
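The "ingress *and* egress filtering" countermeasure above has a second half
that complements the egress check discussed earlier in the thread: dropping
inbound packets whose claimed source can never legitimately appear on the
public Internet. A minimal sketch, again using only Python's stdlib
`ipaddress` module and a deliberately incomplete bogon list:

```python
import ipaddress

# A few well-known never-routable source ranges.
# NOT an exhaustive bogon list; real filters track current delegations.
BOGONS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
    ipaddress.ip_network("127.0.0.0/8"),
]

def ingress_permitted(src_ip: str) -> bool:
    """Ingress check: drop inbound packets claiming a source address
    that cannot legitimately originate on the public Internet."""
    addr = ipaddress.ip_address(src_ip)
    return not any(addr in net for net in BOGONS)

print(ingress_permitted("8.8.8.8"))    # True  (routable source, passes)
print(ingress_permitted("10.1.2.3"))   # False (bogon source, dropped)
```

The maintenance burden hinted at by the bogon-route complaint earlier in the
thread lives in that list: it has to track current delegations, or it starts
dropping legitimate traffic.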

   While it would not have been a panacea, i think that no unnecessary
   open ports on default installs + OSs not encouraging their users to
   run as Administrator would certainly have been a good thing (tm)

   We certainly can't expect anything from the user, but we should be
   able to expect sensible default settings from OS vendors

Also, waiting for others to clean up their act to be safe isn't usually
the most fruitful approach.

   I was not even suggesting something like that :-)

> While it can sound a bit theoretical (to hope that the "others" will
> run secure code), as the vast majority of users run OSs from one
> particular (major) vendor, an improvement in said family of OSs
> would certainly benefit everyone. Just think about all the recent
> network havoc caused by worms propagating on one OS platform ...

I'm not all that interested in plugging individual security holes. (Not
saying this isn't important, but to the degree this is solvable things
are moving in the right direction.) I'm much more interested in
shutting up hosts after they've been compromised. This is something we
absolutely, positively need to get a handle on.

   I think we mostly agree on all points, i just wanted to point out
   the fact that insecure code run by others certainly has repercussions
   on everyone's network.

   So now let's let this thread die, because it begins to sound like
   something we have seen so many times :-) I won't add _one_ word to
   these too-often-rehashed subjects

   Cheers,

      - yann

[snip]

in another thread tonight i see subjects like "lazy network
operators" and at first glance, those are the people you're
describing (who don't really care.)

however, that's simple-minded. "because of the way tcp/ip
works..." is a very good lead-in toward the actual cause of
this apparent non-caring / laziness.

because of the way ip works, and because of the way human
nature works, many of the things that would have to be done
to fix this problem have asymmetric cost/benefit. if a
network provider isn't lazy, then everyone except them will
benefit from that non-laziness. human nature says that ain't
happening.

I have heard the 'asymmetric cost/benefit' rationale for the
bad laziness (sloppiness, not the larry wall-esque 'good'
laziness of automation) on and off the last few years.
Similarly, I have heard about the tremendous cost of sloppiness
and human error in terms of root-cause for networking badness
for the past several years.

Seems that these items are related...

You make two assumptions:

1. denial of service requires compromised hosts
2. good code prevents hosts from being compromised

I agree that without zombies launching a significant DoS is much more
difficult, but it can still be done. Also, while many hosts run insecure
software, the biggest security vulnerability in most systems is the
finger resting on the left mouse button.

Prior to Windows I would have agreed with you. However, with the advent
of Windows, I think insecure software has surpassed the user as a source
of problems. This is not based on a belief that users have gotten any
better, but, rather that software is significantly worse.

Also, waiting for others to clean up their act to be safe isn't usually
the most fruitful approach.

This is very true. However, education and encouragement of others to fix
their insecure systems is a worth-while endeavor, and, the reality remains
that if we could find a way to solve that issue, it would significantly
reduce today's DDOS and SPAM environment.

   While it can sound a bit theoretical (to hope that the "others" will
   run secure code), as the vast majority of users run OSs from one
   particular (major) vendor, an improvement in said family of OSs
   would certainly benefit everyone. Just think about all the recent
   network havoc caused by worms propagating on one OS platform ...

I'm not all that interested in plugging individual security holes. (Not
saying this isn't important, but to the degree this is solvable things
are moving in the right direction.) I'm much more interested in shutting
up hosts after they've been compromised. This is something we absolutely,
positively need to get a handle on.

I think both efforts are necessary and worthy.

Owen

Joe Provo wrote:

I have heard the 'asymmetric cost/benefit' rationale for the bad
laziness (sloppiness, not the larry wall-esque 'good' laziness of
automation) on and off the last few years. Similarly, I have heard
about the tremendous cost of sloppiness and human error in terms of
root-cause for networking badness for the past several years.

Maybe there should be more "neighborhood intelligent" worms which would target resources that are within the vicinity of the compromised host. SMTP, WWW, etc. services. That way the effects would be most devastating for the lazy.

Pete

Petri Helenius wrote:

Joe Provo wrote:

I have heard the 'asymmetric cost/benefit' rationale for the bad
laziness (sloppiness, not the larry wall-esque 'good' laziness of
automation) on and off the last few years. Similarly, I have heard
about the tremendous cost of sloppiness and human error in terms of
root-cause for networking badness for the past several years.

Maybe there should be more "neighborhood intelligent" worms which would target resources that are within the vicinity of the compromised host. SMTP, WWW, etc. services. That way the effects would be most devastating for the lazy.

Pete

That raises what some would call an interesting viewpoint (not my own)

Since there will be a worm for X written by "bad" people, and the worse the worm, the faster the "lazy" administrators patch......

Therefore the "good" people should beat the bad people to the punch and write the worm first. Make it render the vulnerable system invulnerable or if neccessary crash it/disable the port etc..... so that the "lazy" administrators fix it quick without losing their hard drive contents or taking out the neighborhood.

Such "corrective" behavior as suggested by you might also be implemented in such a "proactive" worm.

How many fewer zombies would there be if this was happening?

Clearly the current model is not working.

As I understand it, Netsky is supposed to be such a worm. Doesn't seem to make much of a difference, does it?

I thought that Nachi/Welchia was supposed to be such a worm as well, and it ended up doing more harm than good.

-J

Jeff Workman wrote:

Therefore the "good" people should beat the bad people to the punch and
write the worm first. Make it render the vulnerable system invulnerable
or if necessary crash it/disable the port etc..... so that the "lazy"
administrators fix it quick without losing their hard drive contents or
taking out the neighborhood.

Such "corrective" behavior as suggested by you might also be implemented
in such a "proactive" worm.

How many fewer zombies would there be if this was happening?

As I understand it, Netsky is supposed to be such a worm. Doesn't seem to make much of a difference, does it?

I thought that Nachi/Welchia was supposed to be such a worm as well, and it ended up doing more harm than good.

One could argue that those were implementation failures, probably caused by people who did not know what they were doing.

I would be inclined to agree. However, how do we "verify" such a worm. Do we only allow signed worms to infiltrate our system? This is flawed because the worms in the wild are obviously penetrating systems without their owner's (or the operating system's) consent. And, even if it were possible to implement such a worm, who is going to assume the liability of signing it?

-J

[SNIP]

Interesting that I sent this on the 11th and it gets delivered on the 14th.

My reverse did not match the forward of that IP, which I just fixed. I figured that the mail was dead, but I guess it queued. Sorry for the ... strange reply timing.