What could have been done differently?

They do have a lousy track record. I'm convinced, though, that
they're sincere about wanting to improve, and they're really trying
very hard. In fact, I hope that some other vendors follow their
lead. My big worry isn't the micro-issues like buffer overflows
-- it's the meta-issue of an overall too-complex architecture. I
don't think they have a handle on that yet.

    --Steve Bellovin, http://www.research.att.com/~smb (me)
    http://www.wilyhacker.com (2nd edition of "Firewalls" book)

Quite true - complexity is inversely proportional to security (thanks, Mr.
Schneier). Unfortunately, it seems like the Net as a whole, including the
systems, software and protocols running on it, only gets more complex as time
goes by. How will we reconcile this growing complexity and our increasing
dependency on the global network with the ever-growing need for security and
reliability? They seem to be accelerating at the same rate.

:They do have a lousy track record. I'm convinced, though, that
:they're sincere about wanting to improve, and they're really trying
:very hard. In fact, I hope that some other vendors follow their
:lead. My big worry isn't the micro-issues like buffer overflows
:-- it's the meta-issue of an overall too-complex architecture. I
:don't think they have a handle on that yet.

Excellent point. I have been saying this since the dawn of Windows
3.x. Obviously, software engineering for such a large project as an(y) OS
needs to be distributed. MS has long been remiss in facilitating
(mandating?) coordination between project teams pre-market. You're
absolutely correct that complexity is now the issue, and it could have
been mitigated early on. (Who knows what? Is "who" still
employed? If not, where are "who's" notes? Who knows if "who" shared
his notes with "what"? Who's on third?...)

Now, it's going to cost loads of $$ to get everyone on the same page (or
chapter), if that's even in the cards. For MS, it's a game of picking the
right fiscal/social/political tradeoff. It's extremely complex now, as
the project has taken on a life of its own.

Someone let the suits take control early on, and we all know the rest of
the story.

Any further discussion will likely be nothing more than educated
conjecture (as was the above).

cheers,
brian

> They do have a lousy track record. I'm convinced, though, that
> they're sincere about wanting to improve, and they're really trying
> very hard. In fact, I hope that some other vendors follow their
> lead.

Of course we need to be honest with ourselves and recognize this has
been going on since long before Microsoft was even a glimmer in
Bill Gates' eye.

Multics security. Bell Labs answer: Unix. Who needs all that "extra"
security junk in Multics. We don't need to protect /etc/passwd because
we use DES crypt and users always choose strong passwords. We'll make
the passwd file world readable so we can translate uid's to usernames.
Multi-level security? Naw, it's simpler just to make everything Superuser.

FORTRAN/COBOL array bounds checking. Bell Labs answer: C. Who wants
the computer to check array lengths or pointers. Programmers know what
they are doing, and don't need to be "constrained" by the programming
language. Everyone knows programmers are better at arithmetic than
computers. A programmer would never make an off-by-one error. The
standard C run-time library. gets(char *buffer), strcpy(char *dest, char
*src), what were they thinking?
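The gets()/strcpy() interfaces mocked above take a destination pointer with no length at all. As a minimal sketch of the bounded alternative (the name safe_copy is mine; the later BSD strlcpy() has the same contract, and none of this code is from the thread):

```c
#include <string.h>

/* Copy src into dst, always NUL-terminating and never writing more
 * than dstsize bytes. Returns strlen(src) so the caller can detect
 * truncation by comparing the return value against dstsize. */
size_t safe_copy(char *dst, const char *src, size_t dstsize)
{
    size_t srclen = strlen(src);

    if (dstsize > 0) {
        size_t n = (srclen < dstsize - 1) ? srclen : dstsize - 1;
        memcpy(dst, src, n);     /* copy at most dstsize-1 bytes */
        dst[n] = '\0';           /* terminate unconditionally */
    }
    return srclen;
}
```

Unlike strcpy(), a too-long source silently truncates instead of scribbling past the buffer, and the return value tells you it happened.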

> My big worry isn't the micro-issues like buffer overflows
> -- it's the meta-issue of an overall too-complex architecture. I
> don't think they have a handle on that yet.

The strange thing about complexity is it's much harder to design a "simple"
system than a Rube Goldberg contraption.

Possibly that bounds checking is an incredible cpu suck, there are a great
many powerful things you can do in C based on the fact that there is no
bounds checking (pointers ARE your friend god damnit :P), and in a world
before buffer overflow exploits it probably didn't matter if Joe Idiot's
program crashed because he goofed? (hindsight is 20/20)

> They do have a lousy track record. I'm convinced, though, that
> they're sincere about wanting to improve, and they're really trying
> very hard. In fact, I hope that some other vendors follow their
> lead.

Lest we forget, Microsoft did not originally design Windows for the
Internet, nor for a lot of what it does today.

> Of course we need to be honest with ourselves and recognize this has
> been going on since long before Microsoft was even a glimmer in
> Bill Gates' eye.

> Multics security. Bell Labs answer: Unix. Who needs all that "extra"
> security junk in Multics. We don't need to protect /etc/passwd because
> we use DES crypt and users always choose strong passwords. We'll make
> the passwd file world readable so we can translate uid's to usernames.
> Multi-level security? Naw, it's simpler just to make everything Superuser.

> FORTRAN/COBOL array bounds checking. Bell Labs answer: C. Who wants
> the computer to check array lengths or pointers. Programmers know what
> they are doing, and don't need to be "constrained" by the programming
> language. Everyone knows programmers are better at arithmetic than
> computers. A programmer would never make an off-by-one error. The
> standard C run-time library. gets(char *buffer), strcpy(char *dest, char
> *src), what were they thinking?

Unix and C were also not designed for the Internet.

More ramble ... but a point will emerge ...

> My big worry isn't the micro-issues like buffer overflows
> -- it's the meta-issue of an overall too-complex architecture. I
> don't think they have a handle on that yet.

The Internet magnifies relatively harmless conveniences into
major problems. Network access and "crack" made the world-readable
/etc/passwd into a major security hole. "C" is a vast improvement
over assembly and evolved into the language of choice for developers
over other languages. So we have a few buffer overflows now and then.

The formative Internet did a lot to spread C source code. Unix was
the primary platform for the Internet before ISPs spread the network
to small businesses and home computers. Some of us remember
downloading C code from ftp sites in the era before the web,
when you could count off the major source code archives on your
fingers.

> The strange thing about complexity is it's much harder to design a "simple"
> system than a Rube Goldberg contraption.

The complexity of Windows ... indeed of all our modern OSes ... has evolved
as they adapt themselves to network environments, complex graphics,
multimedia applications, and complex user interfaces. Microsoft has tended
to absorb applications into the core OS and, perhaps more than any
other vendor, softened the line between kernel and application to a point
where security suffers. Unix systems have the same problem when root
privileges are given to code ... often because it is less
complex to give a process privilege than to craft a secure sandbox.

I was just starting to use the Internet when the Morris worm chewed
its way through the net. The Morris worm was the first taste of what
a harmless back door and lapses in security could do on the Internet.
It has been almost 15 years since that incident and look at how far
we have come. Common code and lack of review contributed to that one.

Internet worms and viruses have a far greater impact when we all use
the same code, the same operating system, the same stack. If you
plant one genetic strain of corn you risk famine come the blight.

Having the BSDs, Linux, OS X, and Microsoft in the mix helps prevent
monoculture blights. Having Juniper, Cisco, and others in the
core is good for our networks.

Competition, variety and some level of complexity do act as safeguards
against the catastrophic failures exhibited by "monoculture" systems.

IMHO competition and diversity are necessary for healthy systems,
corporations, economies and societies. Any complex set of
structures that becomes dominated by a single technology, OS,
ideology or genotype becomes the ideal growth media for disease.

This is why, IMHO, mono-anythings are bad, no matter how benign
or well designed.

A world before buffer overflow exploits?

In the first (Fortran) programming course I ever took at MIT, on the first day of lab they said:

1.) If you set an array index to a sufficiently large negative number you would overwrite
the operating system and crash the system (requiring a reboot from punched paper tape).

and

2.) If you did that, they would be so pissed off at you, you would summarily fail the course.

This was back when the Fortran compiler was included in each run as part of the card deck.

I also remember overflow type hacks on the MIT Multics system, which was constantly being hacked.

So, I agree with Sean: what were they thinking?

I think the larger concern at that time was memory capacity. Remember that
only the very largest machines had over 128K.

In a message written on Wed, Jan 29, 2003 at 03:32:41AM -0500, Sean Donelan wrote:

> Multics security. Bell Labs answer: Unix. Who needs all that "extra"
> security junk in Multics. We don't need to protect /etc/passwd because
> we use DES crypt and users always choose strong passwords. We'll make
> the passwd file world readable so we can translate uid's to usernames.
> Multi-level security? Naw, it's simpler just to make everything Superuser.

A choice made what, 20 years ago? Almost every major form of unix
moved to shadow password files and/or stronger password protection
years ago.

> FORTRAN/COBOL array bounds checking. Bell Labs answer: C. Who wants
> the computer to check array lengths or pointers. Programmers know what
> they are doing, and don't need to be "constrained" by the programming
> language. Everyone knows programmers are better at arithmetic than
> computers. A programmer would never make an off-by-one error. The
> standard C run-time library. gets(char *buffer), strcpy(char *dest, char
> *src), what were they thinking?

Again, a choice made perhaps 20 years ago? New libraries and
languages make solving this problem much easier. New tools are
available to catch it when it does happen, even in traditional C.

We can't expect people to never make mistakes. Rather, the bar
must be set that once a mistake is made and understood we strive
never to make it again. The choices you cite were made at a very
different time, and for very different reasons. I highly doubt
that if Bell Labs had to make those choices today they would choose
the same outcome.

I said exploits, not ways to get outside your proper address space and
crash the OS. Any sufficiently powerful language presents an opportunity
to do bad things to an ill-prepared OS, but the answer isn't to make the
language less powerful.

Perhaps if we banned C and assembly, and made everyone use perl, we'd be
safe. :-)

Date: Wed, 29 Jan 2003 08:18:45 -0500
From: Richard A Steenbergen

> Possibly that bounds checking is an incredible cpu suck,

If you check before each byte. Checking for sufficient space
first ("is there room for a 245-byte string?") is much faster.
Besides, looking at all the bloated code using indirect function
calls[*] and crappy code using poor algorithms... is speed really
a concern?
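Eddy's point -- one up-front room check instead of a check on every byte -- might look like this in C (buf_append and its calling convention are my own illustration, not code from the thread):

```c
#include <string.h>

/* Append src to a buffer of total capacity cap, currently holding
 * *used bytes. A single up-front length check replaces per-byte
 * checks. Returns 0 on success, -1 if src would not fit. */
int buf_append(char *buf, size_t cap, size_t *used, const char *src)
{
    size_t n = strlen(src);

    if (n + 1 > cap - *used)   /* is there room, NUL included? */
        return -1;
    memcpy(buf + *used, src, n + 1);  /* bulk copy, no per-byte test */
    *used += n;
    return 0;
}
```

The inner memcpy() can run at full speed precisely because the bounds question was settled once, before the loop.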

[*] Try profiling indirect function calls on x86, especially
    newer cores. Such instructions carry a stiff penalty... but
    there's no shortage of virtual functions in certain software.
    (Think: OWL and MFC libraries.)

Eddy

Note I'm making a distinction between fixing the string libraries to
handle overflow situations better, and changing the entire OS to do array
bounds checking. One is good, the other is not.

Date: Wed, 29 Jan 2003 12:36:22 -0500
From: Richard A Steenbergen

> Note I'm making a distinction between fixing the string
> libraries to handle overflow situations better, and changing
> the entire OS to do array bounds checking. One is good, the
> other is not.

Okay. I'll buy that.

On a somewhat similar note, it's too bad x86 lacks native support
for disabling PROT_EXEC. That wouldn't solve everything, but it
would help. (I recall a paper on some funky asm-foo to implement
it, but only skimmed it...)

The real definition of layered security: We needn't worry about
that here, because another layer will take care of it.

Eddy

> > FORTRAN/COBOL array bounds checking. Bell Labs answer: C. Who wants
> > the computer to check array lengths or pointers. Programmers know what
> > they are doing, and don't need to be "constrained" by the programming
> > language. Everyone knows programmers are better at arithmatic than
> > computers. A programmer would never make an off-by-one error. The
> > standard C run-time library. gets(char *buffer), strcpy(char *dest, char
> > *src), what were they thinking?
>
> Possibly that bounds checking is an incredible cpu suck

It doesn't have to be, if your compiler is worth its salt. Take a look at the GNU Ada compiler implementation of bounds checking -- incredibly efficient. There are optimizations and inductive reasoning you can perform at compile time. Strongly typed programming languages make it easier to perform those optimizations, which is a major problem for C (everything's a pointer, right? :-) ). However, the current language fad is Java, which is somewhat more strongly typed.

> , there are a great
> many powerful things you can do in C based on the fact that there is no
> bounds checking (pointers ARE your friend god damnit :P), and in a world
> before buffer overflow exploits it probably didn't matter if Joe Idiot's
> program crashed because he goofed? (hindsight is 20/20)

Not sure if this was ever true for networked applications. The original Morris ARPANet worm exploited a buffer overrun vulnerability in the BSD Unix finger daemon. There's no excuse for failing to change behavior, or not re-visiting bounds checking in compilers & interpreters / virtual machines (e.g. JVM).

Finally, and rather off-topic, I have yet to come across a C programming technique that "can't be done" efficiently in, say, Ada -- a language that usually gives C programmers fits of apoplexy. You just have to know how to express the solution in that language, rather than forcing a literal translation of the way it's done in C.

Cheers,

Mathew

Date: Wed, 29 Jan 2003 11:07:59 -0800
From: Mathew Lodge

> It doesn't have to be, if your compiler is worth its salt.
> Take a look at the GNU Ada compiler implementation of bounds
> checking -- incredibly efficient.

s/compiler/programmer/

How about:

  struct buf_t {
    char *first;
    char *cur;
    char *last;
    size_t size;
    size_t remaining;
  };

  /* implement various buf-management macros */

Implement some or all, as needed. Replace the char* elements
with a union of various ptr types if desired. Keep the type
definition available; there's no need for an opaque struct.

Now write programs to toss around buf_t* instead of char*. It's
not that difficult.
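As one illustration of the "various buf-management macros" left as an exercise above (these particular macros and their names are my own sketch, not Eddy's):

```c
#include <stddef.h>
#include <string.h>

struct buf_t {
    char   *first;      /* start of storage */
    char   *cur;        /* next write position */
    char   *last;       /* one past the end of storage */
    size_t  size;       /* total capacity */
    size_t  remaining;  /* bytes left after cur */
};

/* Point the buf at caller-supplied storage of n bytes. */
#define BUF_INIT(b, mem, n) \
    ((b)->first = (b)->cur = (mem), (b)->last = (mem) + (n), \
     (b)->size = (n), (b)->remaining = (n))

/* One up-front room check. */
#define BUF_FITS(b, n)  ((b)->remaining >= (size_t)(n))

/* Append n bytes if they fit; 0 on success, -1 on overflow. */
#define BUF_PUT(b, src, n) \
    (BUF_FITS((b), (n)) \
        ? (memcpy((b)->cur, (src), (n)), (b)->cur += (n), \
           (b)->remaining -= (n), 0) \
        : -1)
```

Every write goes through BUF_PUT, so the overflow check lives in one place instead of being re-derived at each call site.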

Eddy

No, it isn't, as is doing buf_t[x] rather than pointer arithmetic, but the *practical* problem is that you really need

1,$s/compiler/programmer/

:-)

In other words, there are far fewer compilers, interpreters, Java Virtual Machines, libraries etc. in use than there are programmers using them. So working the tools angle gives you leverage across far more programmers and the programs they create.

Cheers,

Mathew

Richard A Steenbergen <ras@e-gerbil.net> writes:

> (pointers ARE your friend god damnit :P)

Most C programmers have no clue about the C pointer semantics, I'm
afraid, so this powerful feature is often abused.

Richard A Steenbergen <ras@e-gerbil.net> writes:

> I said exploits, not ways to get outside your proper address space and
> crash the OS. Any sufficiently powerful language presents an opportunity
> to do bad things to an ill-prepared OS, but the answer isn't to make the
> language less powerful.

The Burroughs B6700 had trusted compilers.

> Perhaps if we banned C and assembly, and made everyone use perl, we'd be
> safe. :-)

The Perl parser itself (written in C ;-) seems to have some issues (in
__DIE__ handlers). 8-(

Date: Wed, 29 Jan 2003 12:58:58 -0800
From: Mathew Lodge

> No, it isn't, as is doing buf_t[x] rather than pointer

True. I just like having a struct so I may pass a single
variable in function calls instead of a whole mess of them.

> arithmetic, but the *practical* problem is that you really
> need
>
> 1,$s/compiler/programmer/
>
> :-)

> In other words, there are far fewer compilers, interpreters,
> Java Virtual Machines, libraries etc. in use than there are
> programmers using them. So working the tools angle gives you
> leverage across far more programmers and the programs they
> create.

Yes, although there's a certain level of "bare minimum clue"
required, no matter what the tools. On comp.programming,
someone recently asked if they could write C++ programs in VB
6.0. Ummmm.... my guess is they'd make a rotten C++ programmer.
Hopefully it was a joe job or silliness, and not a bona fide
question.

Is it unreasonable to ask that programmers not assume memory is
initialized, to check bounds as needed, and to realize that
operations are NOT atomic without special protection[*]? I
don't think so. Sure, it's extra work; put it in a library.

[*] Special protection that, for obvious reasons, is unavailable
    in userland. Hence fstat(2), fchown(2), fchdir(2), etc for
    fs operations.

Eddy

[reader warning: diatribe following]

Gee, there once were a handful of people;
their principal goal was to make an OS for their own use.

They did it in such a way that it could be developed by its users while
they used it. Creeping featurism was held down successfully, at least
initially ;-(. It ran on platforms orders of magnitude cheaper than
what Multics ran on at the time. It taught a lot of people about
programming style. I hope I learned some things from it. And they
wrote up the shortcomings of the security architecture concisely at the
time this began to matter. They understood stuff that M$ with its
"creeping featurism", "low support cost defaults", and "undocumented API of the week"
cannot possibly begin to grok and deal with, because of ETOOBIG.

Now you and I use it because it does the job better than anything else.
Then you blame them for not designing in today's requirements 30
(not 20!) years ago. Give them a break ...

Daniel

PS: Worm? Virus? Who wrote this up concisely first?

PPS: Plan 9 anyone?