If you have nothing to hide

Mr. Clarke has been floating several trial balloons this week.

http://news.com.com/2100-1001-947409.html
  "Software makers and Internet service providers must share the blame for
  the nation's vulnerable networks, President Bush's special adviser on
  cyberspace security said Wednesday."

http://www.computerworld.com/mobiletopics/mobile/story/0,10801,73150,00.html
   "Why is it that companies have sold products that they know are
   insecure?" asked Richard Clarke, President Bush's chief cybersecurity
   adviser. "And why is it that people have bought them? We should all
   shut [wireless LANs] off until the technology gets better."

While Mr. Clarke was identifying groups to blame for the current state
of affairs, he seems to have left out the group which has historically
blocked many security improvements.

Gee, it seems like just last year the US Government had a policy of
futzing with international standards development to block strong
security (GSM), engaging in expensive legal investigations of people who
wrote things like Pretty Good Privacy, prohibiting companies from
exporting products with strong encryption, and generally making it a PITA
for companies that wanted to make more secure products (forcing
security research offshore or to Canada). Even attempts to include
default encryption in IPv6 hit government policy roadblocks. Anyone who
tried to make it more difficult to intercept communications was accused of
helping child pornographers, criminals, terrorists and hackers. The
refrain was "if you have nothing to hide, ..."

It took decades of government policy to reach this point. Does Mr.
Clarke's statement signal the end of the government's policy of
maintaining the status quo? If we secure wireless communications, that
means it will be possible for people to communicate without worrying
(excessively) about eavesdroppers. But that security improvement also
means the government may not be able to listen in on those communications
either. Have the FBI and NSA signed off on this apparent new policy of
securing our networks?

Finally, what role should network operators play in determining what
content subscribers can access, including "unsafe" content?

  "ISPs to step up
   Internet service providers also have to be more security conscious,
   Clarke said. By selling broadband connectivity to home users without
   making security a priority, telecommunications companies, cable
   providers and ISPs have not only opened the nation's homes to attack,
   but also created a host of computers with fast connections that have
   hardly any security."

Public network operators are very security conscious, about their own
networks. Should public network operators do things, common in private
corporate networks, such as blocking access to Hotmail, Instant
Messenger, peer-to-peer file sharing, and other
potentially risky activities? Should it be official government policy
for public network operators to prohibit customers from running their own
servers by blocking access with firewalls?

sean@donelan.com (Sean Donelan) writes:

  "ISPs to step up
   Internet service providers also have to be more security conscious,
   Clarke said. By selling broadband connectivity to home users without
   making security a priority, telecommunications companies, cable
   providers and ISPs have not only opened the nation's homes to attack,
   but also created a host of computers with fast connections that have
   hardly any security."

Public network operators are very security conscious, about the
public network operators network. Should public network operators do
things, common in private corporate networks, such as block access to
Hotmail, Instant Messenger, Peer-to-peer file sharing, and other
potentially risky activities? Should it be official government policy
for public network operators to prohibit customers from running their own
servers by blocking access with firewalls?

Don't dismiss this concern. We know why multipath (core) RPF is hard and
why most BGP speakers don't do it yet. But unipath (edge) RPF has been easy
for five years and possible for ten, and yet it is in use almost nowhere.

The blame for that lies squarely, 100%, no excuses, with the edge ISPs.
Whether Microsoft or the rest of the people CERT has named over the years
with various buffer overflows are also to blame for making hosts vulnerable
is debatable. But whether edge ISPs are grossly negligent for not doing
edge RPF since at least 1996 is not debatable. Cut Mr. Clarke *that* slack,
even if you must (righteously, I might add) blast him on other issues.
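
For anyone who hasn't looked at what edge RPF / RFC 2827 ingress filtering
actually asks of an access router, the check itself is trivial. Here is a
minimal sketch in Python (illustrative only, not any vendor's
implementation; the port names and prefixes are made up): accept a packet
on a customer port only if its source address falls inside the prefixes
provisioned for that port.

import ipaddress

# Hypothetical provisioning data: customer-facing port -> prefixes
# assigned (routed) to that port.
CUSTOMER_PREFIXES = {
    "dsl-port-17": [ipaddress.ip_network("192.0.2.128/26")],
    "cable-node-3": [ipaddress.ip_network("198.51.100.0/24")],
}

def permit_source(port, src_addr):
    """Strict (edge) RPF: permit only sources routed via the arrival port."""
    src = ipaddress.ip_address(src_addr)
    return any(src in net for net in CUSTOMER_PREFIXES.get(port, []))

# A legitimate customer source passes; a spoofed source is dropped.
assert permit_source("dsl-port-17", "192.0.2.130")      # inside 192.0.2.128/26
assert not permit_source("dsl-port-17", "10.1.1.1")     # spoofed, dropped

The hard part has never been this check; it is getting every edge ISP to
provision and enforce it on millions of customer ports.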

I encourage network operators (or IX operators, DNS operators, etc) to let
the government know what you think. Mr. Clarke's crew is writing the
plan, and taking input from many sources. If you think RPF (or some other
source address validation) is a solution, let them know. If you think
S-BGP is a solution, let them know. If you think network operator-managed
firewalls on every DSL/cable modem are a solution, let them know. On the
other hand, if you think some of those things are not a solution (or a
really bad idea), tell them that.

I have my opinion, and I've told the government what I think. But I'm
certainly not smart enough to get everything right (or even most things
right). It's not a matter of cutting Mr. Clarke some slack, but getting
good information from (many?) network operators.

These are technical operations matters. Seems like there might be some
benefit in formulating consensus views within the technical operations
community.

Any chance that an IETF BCP would be possible and helpful?

Diverse input to a government process can be good for learning about
choices, but consensus views should be helpful for making them.

d

: These are technical operations matters. Seems like there might be some
: benefit in formulating consensus views within the technical operations
: community.
:
: Any chance that an IETF BCP would be possible and helpful?

There is a difference between technical/operational matters and policy
matters. I respectfully disagree this can be treated as a technical
problem.

For example, Bellcore wrote the technical standard for Caller-ID, but
Caller-ID policy varies widely throughout the telephone system. Ever
notice how telemarketers never seem to have valid Caller-ID? That is
not really a technical problem. Likewise, Internet source address
validation has a technical part and a policy part. For the technical
part the IETF has RFC 2827. The policy questions are: how should it be
enforced, and by whom? Since the end of "connected status" there hasn't
really been a way to control who can use what addresses to connect to the
Internet.

One issue is the RFCs aren't written as regulations. It would be a bad
idea to attempt to enforce them as written. They are useful as guidance
to network operators, but as anyone who has ever tried to write a TCP/IP
stack from scratch using nothing but the RFCs can tell you (yes, people
have tried), it doesn't work.

: Diverse input to a government process can be good for learning about
: choices, but consensus views should be helpful for making them.

What group is the best forum for developing consensus views on
Internet operation policy issues?

One of Mr. Clarke's complaints in his speech was that there is no group
the government can go to in order to find out what the consensus view of
Internet operators is. IETF doesn't appear to want to take on that role.
NANOG isn't structured to develop policies for ISPs. IOPS, ICANN, ISPSEC,
etc. have issues. ATIS, ITU, NRIC, NSTAC would love to take on the role.

The National Cybersecurity Plan (or whatever the final name ends up) will
be announced in September. The next NANOG meeting is October 27-29. The
next IETF meeting is November 17-21.

: There is a difference between technical/operational matters and policy
: matters. I respectfully disagree this can be treated as a technical
: problem.

Yes, it's essential to be clear about technical spec vs. policy spec,
although they seem to have some overlap. Maybe a lot. But no, that does
not make them the same.

However, the list of questions you asked, in the note I was responding
to, looked like technical choices. My assumption was that the "policy"
issue was in choosing between technologies.

I consider the IETF Best Current Practices label as intended specifically
for guidance in operations matters. Hence the suggestion to consider it.

: What group is the best forum for developing consensus views on
: Internet operation policy issues?

In between pure tech specs and abstract policy discussion there is
technically based consideration of tradeoffs, etc., for technical
alternatives. That's not something to leave to purely policy folk and my
sense is that the IETF venue can work for such discussion.

: One of Mr. Clarke's complaints in his speech was that there is no group
: the government can go to in order to find out what the consensus view of
: Internet operators is. IETF doesn't appear to want to take on that role.

Hmmm. As soon as a policy becomes multi-operator, I'll bet it starts
looking like a technical spec.

d/

[snip]
: > Diverse input to a government process can be good for learning about
: > choices, but consensus views should be helpful for making them.
:
: What group is the best forum for developing consensus views on
: Internet operation policy issues?
:
: One of Mr. Clarke's complaints in his speech was that there is no group
: the government can go to in order to find out what the consensus view
: of Internet operators is. IETF doesn't appear to want to take on that
: role. NANOG
: isn't structured to develop policies for ISPs. IOPS, ICANN, ISPSEC, etc
: have issues. ATIS, ITU, NRIC, NSTAC would love to take on the role.
:
: The National Cybersecurity Plan (or whatever the final name ends up) will
: be announced in September. The next NANOG meeting is October 27-29. The
: next IETF meeting is November 17-21.

Invite him and/or members of his team to NANOG...

scott

: However, the list of questions you asked, in the note I was responding
: to, looked like technical choices. My assumption was that the "policy"
: issue was in choosing between technologies.

That's actually part of the problem. What happens when you put a bunch of
technical people in a room and ask them to solve a problem? You get
technical solutions without consideration of what the policy should be.
In this case I think we've got the technical version of Mr. Smith Goes to
Washington. Technical people who mean well, but don't understand that the
rules are different inside the Washington beltway. I put myself in the
same category.

Mr. Clarke and crew are coming up with a national policy. Technical
folks gave lots of technical suggestions. A firewall is a technical
tool, but a firewall may not be a good policy. If firewalls were the
answer to a national security policy, China would have one of the most
secure networks in the world.

: I consider the IETF Best Current Practices label as intended specifically
: for guidance in operations matters. Hence the suggestion to consider it.

IETF BCPs are great guidance for operational matters, but they are a
lousy basis for regulations or enforcement. Whether you are writing a new
TCP/IP stack or a contract with a vendor, just referencing the RFCs isn't
sufficient to get a working system. This is a good thing. OSI tried to
cover everything so there would be no doubt that products from different
vendors would work together. The IETF just tries to cover enough, and
leaves the rest up to interoperability "goodwill" between implementors.
But when that goodwill is missing, the IETF and BCPs run into problems.

: In between pure tech specs and abstract policy discussion there is
: technically based consideration of tradeoffs, etc., for technical
: alternatives. That's not something to leave to purely policy folk and my
: sense is that the IETF venue can work for such discussion.

Maybe, but for a few years the IETF has slowly been moving away from
anything that doesn't involve running code, bits and photons. There also
seem to be fewer network operators and more vendors at IETF.