Malformed SNMP packet log/trace

I need a trace or log file of the Malformed SNMP packets
recently tracked by CERT.
http://www.cert.org/advisories/CA-2002-03.html

Thanks,
BM

I need a trace or log file of the Malformed SNMP packets
recently tracked by CERT.

Go to the Oulu University page mentioned in the advisory.

Download the 4 .jar files that comprise the toolkit.

Unzip the jar files. There'll be a testcases/ dir in each of them.
Each file in this directory is one of their packets.

There are 53,000 of them. Have fun!
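
For anyone who wants to replay them: the jars are just zip archives, so a
rough Python sketch like the one below would walk the testcases/ dirs and
fire each file at an SNMP agent. The jar glob and the target address are
assumptions; point it only at a lab box you're allowed to break.

    import glob
    import socket
    import zipfile

    TARGET = ("192.0.2.10", 161)  # assumed lab SNMP agent, not production

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for jar in glob.glob("*.jar"):  # the four PROTOS jars, assumed in cwd
        with zipfile.ZipFile(jar) as zf:
            for name in zf.namelist():
                # every file under testcases/ is one raw malformed packet
                if "testcases/" in name and not name.endswith("/"):
                    sock.sendto(zf.read(name), TARGET)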

ericb

And to keep things exciting, programmers rarely make mistakes in
only one protocol. Turing still holds. It wouldn't be surprising
if other packets can do bad things when "this should never happen"
happens. Who is checking NTP, OSPF, ISIS, BGP, SSH, DNS, TELNET,
TACACS+, etc code paths?

A lot of those protocols have people looking at them on a regular basis,
and they still manage to come up with obscure exploits no one else noticed
(e.g., 23 MB of buffer overflows to exploit telnetd).

On the other hand, a lot of those protocols (and more specifically their
implementations in routers) have probably never seen the light of day, and
are so rotten we are all better off keeping them covered up. I'm certain
that more than enough people here can attest to the fact that it doesn't
take much in the way of "unexpected packets" before certain vendors' BGP
implementations start wigging out.

Of course, it is up to the user to decide whether they would rather have a
product with 50,000 holes that script kiddies don't know about, or a
product with 100 holes that they do. Most days security through obscurity
works just fine, but on the days it doesn't, it really sucks.

But SNMP is special. It has the distinct honor of being one of those
protocols which has daylight all around it and yet somehow manages to stay
under a rock. I attribute this to what I like to call the "Upchuck Code
Barrier", namely that very few people have the intestinal fortitude to
look at the existing implementations without hurling their lunch. This
severely limits the number of exploits which are written. :)

</rant>

Hello,

I think this question may have been asked before, but what is the minimum
latency and delay I can expect from a satellite connection? What kind of
delay have others seen in a working situation? What factors should be
considered in end-to-end connectivity architecture when utilizing a
satellite link?

Any help appreciated,

Tim

Also sprach Tim Devries

I think this question may have been asked before, but what is the
minimum latency and delay I can expect from a satellite connection?
What kind of delay have others seen in a working situation? What
factors should be considered in end-to-end connectivity architecture
when utilizing a satellite link?

Well, as a lower bound, geosynchronous orbit is between 22,000 and 23,000
miles up, and the speed of light is approximately 186,000 miles per second.
The math is not terribly complex from there.

Figure up and back is around 45,000 miles, then if you're doing a
round-trip, you're up to around 90,000 miles...that's on the order of a
half second right there.
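
A quick sanity check on that arithmetic, with assumed round numbers:

    altitude_mi = 22_300   # rough geosynchronous altitude
    c_mi_per_s = 186_000   # speed of light, miles per second

    up_and_down = 2 * altitude_mi / c_mi_per_s  # ~45,000 mi: about 0.24 s
    round_trip = 2 * up_and_down                # ~90,000 mi: about 0.48 s
    print(f"{up_and_down * 1000:.0f} ms up/down, {round_trip * 1000:.0f} ms round trip")

And that half second is pure physics, before any queueing or modem overhead
gets added on.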

That's why I laugh when BellSouth tries to tell the Kentucky PSC that
satellite is a competitor to DSL...ever tried to type over a 1/2 second
lag on telnet or ssh? Painful doesn't adequately describe it.

One of our customers tried Starband. Ping time from our mail server to his
was normally 800-900 ms.

Tim Devries wrote:

So what is the solution for a public network operator? I attended
a presentation last week where a Checkpoint reseller suggested the
client needed to buy eight Checkpoint firewalls to protect a single
web server. I was impressed; what about the undercoating and Scotchgard
fabric protector?

Is it time to fall back and punt? How would you architect a backbone if
you could do it over?

Enable BGP authentication
Enable NTP authentication (use more than GPS as a source)
Enable OSPF/ISIS authentication

Use TL1 on the Aux port for network management

Null-route (ip route ... null0) packets from outside containing
internal-only backbone source addresses (a toy sketch of this check
follows the list).

Is the complexity of SSH code worth the protection? Or is it better
never to access your routers through VTY ports, and always use a
reverse terminal server to the console from an out-of-band management
LAN?
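
As a toy illustration of the null-route item above (this is Python standing
in for what would really be a router config; the internal prefix is an
assumption):

    import ipaddress

    # assumed internal-only backbone space; substitute your own prefixes
    BACKBONE = [ipaddress.ip_network("10.0.0.0/8")]

    def should_drop(src_ip: str, from_outside: bool) -> bool:
        """Drop packets arriving from outside that claim an internal-only source."""
        src = ipaddress.ip_address(src_ip)
        return from_outside and any(src in net for net in BACKBONE)

    assert should_drop("10.1.2.3", from_outside=True)       # spoofed: drop
    assert not should_drop("10.1.2.3", from_outside=False)  # internal: keep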

I have a tachyon.net system. 600 ms is typical with a good aim.
Higher latitudes are more error-prone. They don't do TCP/IP over the
up/down link. They effectively terminate the TCP/IP session on the
local LAN (answering handshakes and ACKs), encapsulate, and
proxy for you on the other end. It makes a BIG difference in overall
performance for typical things like surfing and mail delivery. It
doesn't help much for interactive work. Type-ahead at
600 ms is much better than at twice that. vi is usable.
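
That local-termination trick is what's usually called split TCP, or a
performance-enhancing proxy. A very rough sketch of the shape of it, in
Python: a real PEP spoofs ACKs at the TCP layer inside the modem or gateway,
while this toy just terminates connections on the LAN and relays the bytes
to an assumed peer proxy on the far side of the link.

    import socket
    import threading

    def pipe(src, dst):
        # copy bytes one way until the sender closes
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
        dst.close()

    def serve(local_port=8080, peer=("peer-proxy.example", 9090)):
        listener = socket.create_server(("", local_port))
        while True:
            client, _ = listener.accept()              # handshake completes on the LAN
            upstream = socket.create_connection(peer)  # this leg crosses the satellite
            threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
            threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()

The client sees its handshakes and ACKs answered at LAN speed, so bulk
transfers flow well; interactive keystrokes still pay the full 600 ms, which
is why surfing and mail feel so much better than an ssh session.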

Our unit is available for demos and events.

DirecPC (Hughes) now has a business-grade service called
DirecWay. Haven't tried it. One would expect it
would be better than the ones intended for residential use only.
It claims to go to 1.5 Mbps down, like Tachyon.

Barb Dijker
NeTrack

They use particularly slow tachyons :)

--vadim

A lot of those protocols have people looking at them on a regular basis,
and they still manage to come up with obscure exploits no one else noticed
(e.g., 23 MB of buffer overflows to exploit telnetd).

So what is the solution for a public network operator? I attended
a presentation last week where a Checkpoint reseller suggested the
client needed to buy eight Checkpoint firewalls to protect a
single web server. I was impressed; what about the undercoating
and Scotchgard fabric protector?

That's actually a possibility, as soon as they support OC-192 interfaces
;)

Stay away from the undercoating, but the Scotchgard(tm) is definitely
worth it!

Is it time to fall back and punt? How would you architect a backbone if
you could do it over?

Security is not about making things foolproof. They'll always be able
to break you, no matter what you do. Security is about assuming
acceptable risk, and mitigating unacceptable risk.

This whole recent mess has actually gone over fairly cleanly. The
vast majority of public infrastructure seems to have been patched with
a fair amount of speed, and nobody's noticed any serious outages due
to it. Apparently, the risk we assumed was acceptable, and when it
became unacceptable, it was mitigated quickly enough.

If I could do it over? I'd get in my Tardis, and go back to 1969.
I'd teach everyone at DARPA how to spell security. Loose source
route, IP options in general, ICMP address mask requests, all these
things should go away.

Is the complexity of SSH code worth the protection? Or is it better
never to access your routers through VTY ports, and always use a
reverse terminal server to the console from an out-of-band management
LAN?

Console is slow; logs can easily DoS a 9600-baud line, and it only allows
one connection. It's a good fallback point, but operationally it does not scale.

SSH is worth the protection, as reference implementations are
available, and it requires very little in the way of system support.
As long as in-band access to routers is required, SSH (or HTTPS or
IPSec) will be with us. As time passes, the quality of the tools that
we have to work with improves, and our trust in them can grow.

The official answer is control plane separation. This worked for the
PSTN, and it's the way the Internet will go, eventually.

ericb

Since they move backwards in time, wouldn't "fast" tachyons take longer?

Eric :)

Security is not about making things foolproof. They'll always be able
to break you, no matter what you do. Security is about assuming
acceptable risk, and mitigating unacceptable risk.

10 years ago I suspect we would have been discussing software quality
control. The security label isn't always the best way to approach a problem.

Yes, car thieves will always be able to steal your car. That isn't the
same problem as having the wheels fall off the car because the
factory didn't tighten the lugnuts. Are buffer overflows an intrinsic
risk, or a symptom of bad software engineering?

I don't believe in unbreakable systems. But quality engineering can
make systems more stable and robust under all conditions, even the
unexpected. Yes, Murphy, Mother Nature and Malicious people will
still get you. But it's easier to fix a well-designed system than one
held together with lots of duct tape.

If I could do it over? I'd get in my Tardis, and go back to 1969.
I'd teach everyone at DARPA how to spell security. Loose source
route, IP options in general, ICMP address mask requests, all these
things should go away.

You wouldn't need to go all the way back to 1969. I debated loose
source routing with one of the authors of TCP/IP in the early
1980s :) I made an ass of myself in that debate. But it's not really
fair to say they didn't understand security. Security is one of those
words that means a lot of different things to different people. The
Internet is better at security than the NSA for some types of security,
and worse at other types of security.

What will be interesting is whether the Internet can add confidentiality on
top of a network more easily than other networks can add availability on
top of theirs. The Internet blew through Y2K without a hiccup; ask
the NRO how their super-secure network did.

SSH is worth the protection, as reference implementations are
available, and it requires very little in the way of system support.
As long as in-band access to routers is required, SSH (or HTTPS or
IPSec) will be with us. As time passes, the quality of the tools that
we have to work with improves, and our trust in them can grow.

SNMPv1 had reference implementations too. Our trust seems to have
been misplaced.

The official answer is control plane separation. This worked for the
PSTN, and it's the way the Internet will go, eventually.

Just because Bell Labs never released a paper on "Security Problems in
the SS7 Protocol Suite" doesn't make the telephone network secure.
PSTN security relies primarily on trust between telephone companies. Not
very scalable. The Internet has been the biggest improvement in
telephone security in the last 100 years. The Internet was a nice
bright shiny object which attracted most of the phreakers away from
the PSTN.

Control plane separation isn't a complete answer for the Internet
because it's a network of networks. Just like control plane separation
has problems scaling in the PSTN, you'll find a lot of "untrustworthy"
parties will end up with access to the control planes which extend
between networks.

... Are buffer overflows an intrinsic risk,
or a symptom of bad software engineering?

as long as stack memory is immutably executable, and software is written
in the C language, buffer overflows are an intrinsic (though manageable)
risk. one expects that they don't program interplanetary missions in C.

... But its easier to fix a well-designed system
than one held together with lots of duct tape.

and yet it's a lot harder to break 500 duct-taped systems of mixed and
varied heritage than to break one well designed system. monoculture is
intrinsically brittle no matter how strong the genes themselves may be.

> SSH is worth the protection, as reference implementations are
> available, and it requires very little in the way of system support.

and it's been broken a couple of times now, from buffer overflows to
integer overflows. and now i hear that mr. bernstein has a paper out
about how RSA isn't nearly as secure as we thought it was, and there's
a rumour of another ssh integer overflow attack in the offing.

there's no silver bullet or silver buzzword. programs written in languages
other than C are susceptible to buffer overflows, bitswizzlers other than
just RSA will turn out to be trivial in hindsight, IPsec and DNSsec and
XYZZYsec will each turn out to be crocks of dung at some point in the
future. but for some values of "now", each might have its place.

the best security-related records will be set and held by people and
companies who are unceasingly vigilant and who practice professional risk
management, and not by people or companies who depend on silver buzzwords.

(note that i hold the single-author record for total CERT advisories,
proving that in my copious youth i knew how to sling code but not how to
manage risk.)

Beware of any protocol named "Simple". In my experience, things that are
designed with unneeded complexity tend to be the most broken or breakable.

DirecPC (Hughes) now has a business-grade service called
DirecWay. Haven't tried it. One would expect it
would be better than the ones intended for residential use only.
It claims to go to 1.5 Mbps down, like Tachyon.

I use DirecWay presently as my primary link. I refer to it as "Dialup
Plus!" As a previous poster noted, ssh sessions are painful. I now resort
to typing e-mails out in a local text editor and cutting and pasting them
into my ssh window.

Here's a local traceroute, performed just now at 7 a.m. on a Wednesday, to a
random service like Google. Performance gets significantly poorer during
peak business hours. High download rates seem to be achievable only late
at night.

  1   797 ms   810 ms   810 ms  172.23.4.17
  2   742 ms   686 ms   742 ms  172.23.4.17
  3   796 ms   755 ms   742 ms  172.23.128.2
  4   920 ms   755 ms   742 ms  192.168.11.251
  5   741 ms   755 ms   687 ms  63.215.128.137
  6   742 ms   741 ms   687 ms  64.159.18.6
  7   755 ms   741 ms   742 ms  209.249.0.173
  8   796 ms   742 ms   755 ms  209.249.0.213
  9   742 ms   755 ms   741 ms  208.185.0.137
 10   783 ms   810 ms   824 ms  216.200.127.26
 11   810 ms   879 ms   810 ms  208.184.233.50
 12   810 ms   811 ms   810 ms  208.185.75.198
 13   742 ms   810 ms   810 ms  216.239.47.18
 14   811 ms   810 ms   810 ms  216.239.47.162
 15   783 ms   796 ms   824 ms  www.google.com [216.239.35.101]

/david

i regularly work across satellite links, and haven't found it intolerable.

if your expectations are those of terrestrial networks, then, yeah,
it is gonna seem slow. i tend to think "geez, this actually works!", and
thus my expectations are based on actual function, not throughput.

mind you, my first internet connections were 9600bps, and 2400bps over x.25,
so maybe my tolerance level is a bit higher. 8^)

actually, i think the problem you are seeing is more related to the
asymmetric nature of the connection (i think you were referring to directway,
which is a "one-way" satellite feed).

i generally only use such connections for front-ending a squid server.

the problem is that the round-trip routing of your packets gets pretty
diverse. when you type, the packets go out the modem, across a terrestrial
network to the uplink, then back down over satellite.

not much can be done to make that better.

if you "own" the network, there are some tunneling things you can do to make
things appear to be less asymetric, which might help, but if you are an
end-user, you'll just have to tough it out.

alternately, configure your router/etc to use the IP of your dial-up connection
for ssh/telnet, instead of the IP of the downlink. this will make your
telnet/ssh work using only the dial-up connection, which will eliminate the
asymetric routing.

(ie. interactive traffic uses the dial-up addr, "bulk" services use the
downlink).
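
A rough sketch of that last suggestion, in Python rather than router
config: binding the client socket to the dial-up interface's address before
connecting keeps the interactive session off the satellite path, provided
the local routing table actually sends traffic sourced from that address
out the dial-up link. The addresses below are made up.

    import socket

    DIALUP_IP = "198.51.100.7"  # assumed address of the dial-up PPP interface

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind((DIALUP_IP, 0))                 # choose the dial-up source address
    s.connect(("router.example.net", 22))  # interactive ssh/telnet session
    # replies are addressed to DIALUP_IP, so they return via the dial-up path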

Actually, it's a two-way connection, and I sent you a private e-mail, but
hey, in front of 10k folks is cool.... :>

Actually, it's a two-way connection,

hmmm, ok, my bad. if it is two-way, then the telnet/ssh buffering shouldn't
be that bad, at least in my opinion. i use ssh over (two-way) satellite
connections all the time, and usually don't have much trouble unless the link
is full.

and I sent you a private e-mail, but hey, in front of 10k folks is cool.... :>

i figured my response might be informative to others in a similar (albeit
misinterpreted) situation.