Bell Labs or Microsoft security?

They do have a lousy track record. I'm convinced, though, that
they're sincere about wanting to improve, and they're really trying
very hard. In fact, I hope that some other vendors follow their lead.

Of course, we need to be honest with ourselves and recognize that this had
been going on for a long time before Microsoft was even a glimmer in
Bill Gates' eye.

Multics security. Bell Labs answer: Unix. Who needs all that "extra"
security junk in Multics? We don't need to protect /etc/passwd because
we use DES crypt and users always choose strong passwords. We'll make
the passwd file world-readable so we can translate uids to usernames.
Multi-level security? Naw, it's simpler just to make everything superuser.

Yes and no. The password file situation is a failure of prediction.
They understood the problem -- password-guessing attacks on weak
passwords (see Morris and Thompson's Nov. 1979 CACM paper on password
security) -- but misjudged the effect of Moore's Law (much less
well-known then than today) and algorithms optimized for
password-cracking. It took a dozen years for the threat level to
return to the one they countered in the 1970's -- and that's a pretty
good lifespan for any program.

The superuser question is a more interesting one. The 7th Edition Unix
manuals included a short paper by Ritchie entitled "On the Security of
UNIX". The second paragraph starts, "the first fact to face is that
Unix was not developed with security, in any realistic sense, in mind;
this fact alone guarantees a vast number of holes." Later on, the
paper notes "It must be recognized that the mere notion of a super-user
is a theoretical, and usually practical, blemish on any protection
scheme." In other words, they understood what they were doing -- and
what they were doing wasn't designing a secure operating system,
because that wasn't very interesting at the time. Remember that the
machines weren't networked, and they had a well-controlled user base.
I submit that the threat environment has changed. (One of the worst
mistakes in system design is to use yesterday's answers for today's
questions. The technological environment has changed. I seem to
recall getting a Unix distribution on an RK05 disk pack. According to
my trusty 7th Edition manual, such a pack held 2.4M bytes. My PDA has
considerably more storage than that.)

The real sin was not rethinking the design in the late 1980s, when
machines were much larger and ubiquitous networking was clearly coming.
Of course, the Bell Labs folks did just that, and produced Plan 9.
Microsoft rethought some of these issues, but *not* from a security perspective.

FORTRAN/COBOL array bounds checking. Bell Labs answer: C. Who wants
the computer to check array lengths or pointers? Programmers know what
they are doing, and don't need to be "constrained" by the programming
language. Everyone knows programmers are better at arithmetic than
computers. A programmer would never make an off-by-one error. The
standard C run-time library: gets(char *buffer), strcpy(char *dest, char
*src) -- what were they thinking?

I wish I knew. McIlroy once told me that C was the best assembler
language he had ever used. I saw Kernighan at a Secure Software
conference a few weeks ago and gave him a hard time about the C
library; he blames Ritchie. Next time I see Dennis, I'll mutter at him, too.

My big worry isn't the micro-issues like buffer overflows
-- it's the meta-issue of an overall too-complex architecture. I
don't think they have a handle on that yet.

The strange thing about complexity is that it's much harder to design a "simple"
system than a Rube Goldberg contraption.

I don't agree. It's harder to design it per line of code. Overall,
though, complex systems are *very* hard to get right, especially from a
security perspective. That's what I think Microsoft doesn't
understand yet. For example, they have a security metric that they
apply to revised systems. Programs gain insecurity points for, among
other things, open network sockets -- but the penalty is the same for
the time-of-day service as for IIS... But of course, complexity is a
very hard thing to measure.

    --Steve Bellovin (me) (2nd edition of "Firewalls" book)