data request on Sitefinder

A number of people have responded that they don't want to be forced to
pay for a change that will benefit Verisign. That's a policy issue I'm
trying to avoid here. I'm looking for pure technical answers -- how
much lead time do you need to make such changes safely?

    --Steve Bellovin, http://www.research.att.com/~smb

I think that the policy problem adds to the technical one. If the
community were behind Sitefinder and supported Verisign's design
goals, it would be possible to hammer out everything in a short period
of time. But because the hearts and minds of those who would make the
changes are not won, those responsible for implementing changes would
drag their feet, hoard necessary resources, and use the incomplete
state of their implementation as an obstacle to change and, should the
change happen anyway, use this "evidence" of Verisign's "bad behavior"
as an excuse to act openly against the service, in ways that have
already been demonstrated. Thus, the human factor will make any
purely technical estimate useless.

  Sadly, I do not feel qualified to give a detailed estimate on your
question, as presented, which I find intriguing from a purely
theoretical point of view, except to say that there are always going
to be one-offs, unique builds, etc. that will need to be changed
individually, and even without the sour feeling towards Sitefinder,
there will be procrastination and competing priorities. This is not,
and never will be, the only thing that needs working on. Even with
complete technical buy-in, I wouldn't expect the mass of users to be
covered by these changes until the middle of next year if work started
today.

-Dave

may i suggest another operational issue then?

how does verisign plan to identify and notify all affected parties when changes
are proposed?

for example, in the current case, how do they plan to identify every party running
postfix and inform them that they need to upgrade their MTA?

this seems non-trivial to me.

richard

OK, since you asked....

At least from where I am, the answer will depend *heavily* on whether Verisign
deploys something that an end-user program can *reliably* detect if it's been
fed a wildcard it didn't expect. Note that making a second lookup for '*.foo'
and comparing the two answers is specifically *NOT* acceptable due to the added
lookup latency (and to some extent, the attendant race conditions and failure
modes as well).
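
To make the rejected heuristic concrete: stripped of the actual lookups, it reduces to comparing two answer sets. A minimal sketch follows; the function name and logic are mine for illustration only (64.94.110.11 was the address widely reported as Sitefinder's target), not anything Verisign or the IETF has specified:

```python
# Sketch of the double-lookup heuristic the text rejects: query the name,
# then query the literal wildcard '*.<tld>', and compare the answers.
# Doing both lookups doubles latency and invites races, which is exactly
# why this approach is unacceptable in practice.

def looks_like_wildcard(name_addrs, wildcard_addrs):
    """Return True if the answer for a name is indistinguishable from the
    answer synthesized by a TLD wildcard record (same A-record set)."""
    if not name_addrs or not wildcard_addrs:
        return False
    return set(name_addrs) == set(wildcard_addrs)

# A nonexistent .com name and '*.com' would both resolve to the same
# synthesized address under Sitefinder:
print(looks_like_wildcard(["64.94.110.11"], ["64.94.110.11"]))  # True
print(looks_like_wildcard(["192.0.2.10"], ["64.94.110.11"]))    # False
```

Even ignoring latency, this only detects wildcards whose target matches the literal '*' query at the moment of the second lookup, so it is unreliable as well as slow.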

Also note that it has to be done in a manner that can be tested by an
application - there will be a *REAL* need for things like Sendmail to be
able to test for wildcards *without the assistance* of a patched local DNS.

And yes, this means the minimum lead time to deploy is 'amount of time to write
a "Wildcard Reply Bit" I-D, advance through IETF to some reasonable point on
standards track, and then upgrade DNS, end host resolvers, and applications'.

Purely from an operational standpoint, it would be a mark of efficiency to have a central repository of who is running what. That would mean that notifications would only be sent to those that need them, and also would provide objective information to determine how many organizations would be affected by a change. In other words, something that actually would be useful.

Unfortunately, we have seen Verisign constantly take the position that information they learn through operations is their intellectual property, to be used as they see fit, and generally to be kept proprietary.

So if we try to separate operational from policy, we see white-winged ships sail by, carrying data that might be useful, but then have them crash on the rocks of stewardship of the data.

You make an assumption here -- one with which I agree completely -- but that certainly wasn't followed during the Sitefinder debacle. The assumption is that the IETF provides a tested mechanism for disseminating information and making comments.

Verisign claims that they had tested their ideas with a Verisign-selected group of organizations, and made their commercial decisions based on the proprietary data it generated from those organizations.

A number of people have responded that they don't want to be forced to
pay for a change that will benefit Verisign. That's a policy issue I'm
trying to avoid here. I'm looking for pure technical answers -- how
much lead time do you need to make such changes safely?

  You can't separate them. How long something takes to do depends upon who is
paying for it and how much they are paying. With a blank check, I can do
almost anything in a week. On the other hand, if it's an unexpected expense
without an accompanying unexpected source of funds, the same task can take
years. If the beneficiary is not paying, things take very much longer.

  In any event, this question is currently impossible to answer from a purely
technical perspective because we don't know what Verisign intends to change.
For example, what will be the new correct way to determine whether or not a
domain exists in the DNS, say for purposes of spam filtering? Will the
wildcard A record be guaranteed to always point to the same, single address
or not?
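
Purely as a thought experiment: if Verisign did guarantee a single fixed wildcard address, the spam filter's existence check would collapse to the sketch below. The sentinel address and function name are assumptions of mine, and the point of the sketch is that without that guarantee the check cannot be written at all:

```python
# Hypothetical existence test under a TLD wildcard: a lookup never fails
# with NXDOMAIN anymore, so "the domain exists" has to mean "it resolves
# to something other than the wildcard's sentinel address".
SITEFINDER_SENTINEL = "64.94.110.11"  # assumed fixed wildcard target

def domain_really_exists(resolved_addrs):
    """resolved_addrs: list of A records returned for the domain
    (empty if the lookup failed outright)."""
    return bool(resolved_addrs) and SITEFINDER_SENTINEL not in resolved_addrs

print(domain_really_exists(["192.0.2.1"]))     # True
print(domain_really_exists(["64.94.110.11"]))  # False
print(domain_really_exists([]))                # False
```

If the wildcard target were instead a pool of addresses, or changed over time, every deployment of a check like this would silently break.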

  We don't know what the target is, so we don't know what's involved in
hitting it. When Verisign releases a specification, then we can talk about
what's needed to meet it.

  DS

i maintain that building this list is phenomenally difficult. the set of
people running mail servers is substantially larger than the set of
people who read nanog, run backbones, run regional ISPs, etc., etc.

i don't disagree that it would be useful, but how are you going to
build it without actively probing mail servers across the internet?
and it can't possibly ever be complete, with PIX firewalls obscuring
SMTP banners and sysadmins depending on security-by-obscurity
who change their banners to eliminate MTA identification.

richard

Oh boy. Well, first and foremost, the root servers and database are
owned by the public because they were paid for from the tax base.

Second, the technology to redirect web pages and IPs is not new or
innovative; kiddies used to do it on IRC to redirect to porn sites and
get paid for every redirected hit, starting in the 1996-1997 time frame.

Network Solutions on more than one occasion caused an incredible stir
when they adjusted pricing and had to roll back the price, court ordered.

I think some of you were here and remember that ruckus.

In the public interest and the interest of many businesses, Verisign
should divest itself completely and just become another company doing
business across the backbone.

I see serious troubles ahead. Imagine a client of a client who has,
let's say, 3,000+ servers on-line; a new list of clients is added,
there is a typo, and all 3,000 servers are redirected, with tens of
thousands of clients, each with the potential to sue in both directions.

Gentlemen and ladies, this is simply not a well-thought-out idea. I
don't care how many PR firms get involved; they are simply there for
the money, with no clue about the potential harm.

I think the leadership here needs to formulate a public posture and present its
case and its alternative solution that the NSP community can live with and
rapidly adapt to as a working, acceptable model.

Henry R Linneweh
Sr Design Systems Engineer

"Steven M. Bellovin" wrote:

A number of people have responded that they don't want to be forced to
pay for a change that will benefit Verisign. That's a policy issue I'm
trying to avoid here. I'm looking for pure technical answers -- how
much lead time do you need to make such changes safely?

Merely install a new version of postfix on all MX servers? Assuming
that postfix itself has been modified as desired by VeriSign?

Well, let's see, in an emergency with the master mail server crashing
20+ times a day, I was able to get the support folks to scavenge parts,
build another machine, essentially talk them through cloning one of the
old NS machines, update it to latest system and BIND 9, run a few
rudimentary tests, and physically swap it in, all in just about 6 days.

(I probably could have done it myself in under a day, but I'm in
Michigan and they are in rural Mississippi. Also, you have to consider
that it's a 3.5 hour drive round trip to Memphis for any parts needed
on an emergency basis, and POPs are spread about an hour apart. Quick
installation is not in the cards.)

Of course, that was for BIND, not postfix, which would take longer.

To order a faster postfix frontend MX machine (we did), await delivery,
install and test and physically swap -- oops, they still haven't
finished install and test ... in 4+ weeks so far.

When they finish that, the same process on the machine swapped out,
lather, rinse, repeat until all machines are finished.

(Since the VeriSign emergency went away, there was a lot less pressure
to divert support from the jobs they are paid to do, or work overtime.)

Really, no matter how you slice it, money is at least as important to
lead time as the "pure technical answers".

Richard -
Do they (Verisign) have any legal reason to??? - is there anything between
them and ANY of their clients that requires them to inform them before any
changes to protocol facilities are made - I think not.

Todd

>may i suggest another operational issue then?

>how does verisign plan to identify and notify all affected parties
>when changes
>are proposed?

>for example, in the current case, how do they plan to identify every
>party running
>postfix and inform them that they need to upgrade their MTA?

>this seems non-trivial to me.

Purely from an operational standpoint, it would be a mark of
efficiency to have a central repository of who is running what. That
would mean that notifications would only be sent to those that need
them, and also would provide objective information to determine how
many organizations would be affected by a change. In other words,
something that actually would be useful.

i maintain that building this list is phenomenally difficult. the set of
people running mail servers is substantially larger than the set of
people who read nanog, run backbones, run regional ISPs, etc., etc.

I don't really disagree with you, even ignoring that many providers would consider much of this information proprietary, much as they might for private peering arrangements. This is something of a thought experiment on what would have to be available for a Verisign or the like to make unilateral changes without presenting the idea for comment, well in advance.

The process of asking for comment through IETF and the operational forums has the proven benefit of getting major players to look at the issue and decide to comment. Now, as you point out, there are many people who run mail servers and the like, who don't follow any relevant mailing lists.

I would suggest, however, that the people who do read these lists run mail servers with more end users than the administrators of small systems who do not.

The absence of a list such as I've described, whose difficulty of creation you point out, makes it all the more unlikely to me that an organization can really assess the effects of unilateral design changes, especially when that assessment is shrouded in commercial secrecy.

I would suggest, however, that the people who do read
these lists run mail servers with more end users than the
administrators of small systems who do not.

true, but this can be interpreted as "they're small and clueless, so
screw 'em", a position which i find unattractive.

The absence of a list such as I've described, whose difficulty of
creation you point out, makes it all the more unlikely to me that an
organization can really assess the effects of unilateral design
changes, especially when that assessment is shrouded in commercial
secrecy.

agreed.

richard
  ("nine out of ten experts hand selected by Verisign agree...")

But clearly the problem there is in the UI for the application they used. After all, if the application had just looked up the host name the admin entered, and checked first to see if it had an A record, then the typo would have been detected immediately, instead of after deployment.

:-)

i'd say that their client is the Department of Commerce.

when the wildcard is inserted in the .com and .net zones, it affects many third
parties who are not direct clients of Verisign, some of whom are users of .org
or other tlds that verisign doesn't handle, so they in fact have no contractual
relationship with Verisign or with a Versign client.

what i had in mind, though, was that Verisign has apparently indicated that they
will give somewhere around 60 days (plus/minus) notice of any future changes
of this sort.

Steve is attempting to collect data which constitutes technical input about the
appropriateness of the interval.

what i am suggesting is that the sum total of people who courtesy dictates
ought to be notified is basically anyone who runs any sort of internet server.
i picked mail servers because Verisign themselves identified the postfix MTA
as an "issue".

after that, there's still the nagging issue of notification interval. many are thinking
in terms of their own, often large and busy ISP or backbone operation. there are
many, though, in the Enterprise or SMB spaces who are at risk of being left twisting
in the wind ("They're small and clueless, screw 'em").

cost is without question an operational issue. how fast an affected entity (ISP,
NSP, Enterprise, SMB) can adapt may be directly related to available manpower
or funding. i maintain that it is very difficult to separate the funding issue from the
time issue, given that Verisign apparently proposes to give the community 60
or 90 days notice of potentially significant changes to the infrastructure, affecting
unpredictable numbers of entities in ways unknown, and impossible to cost out
in advance.

for all the flaws of the IETF, it is infinitely preferable to this scenario.

richard

Hi,

We are getting a LOT of web requests containing what mostly looks like
gibberish.

[Mon Oct 20 21:13:42 2003] [error] [client 172.133.3.204] request
failed: erroneous characters after protocol string:
\xb8\xcf\xc235\x9f\xc4\x1c\xebj\xd7\xc5\x8e\xe9d>\xfdMe\xed\x16\xca\xd51\xcfReF\x82\xa3qi\x89\x832<\vJ5k\x15\xa2\x0c\x90\xed\x8bCT\xa3\xa2\x96\xd7\xe8\xa2`S#+W\xfc\xc2\xc2w*\xce\x1a<\xb9\xc3\x91\x14\xb0\x9e\xfe\x14\"7\xaa\xeaR\xd1\x9c\x13\x1a\xf0\x1aN\x8eklP\xdc\xc1\xe3\xb9w\xb0\x1aGt\x04|I4\xae\x06WC\x15NA\x80\xb1\xc5E~\xd59\x85+\xcc\x9e\xb8\xaf(\r\x1f\x97

But this is not the standard Microsoft worm stuff that I can tell. It is
coming from numerous IP addresses and nearly took down a few of our
servers until we started blocking them with the firewall. So I am trying
to find out as much as I can about what is happening, but I don't really
know where to start. I don't believe it is considered appropriate to
send a list of IPs to this list. So where should I start? The list so
far contains about 60 addresses.
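
As a starting point, one low-tech step is simply to tally offending client addresses out of the error log before deciding what to block. A minimal sketch (the regex assumes Apache-style '[client a.b.c.d]' error lines like the one quoted above; the sample line here is abbreviated, not the real payload):

```python
import re
from collections import Counter

# Pull client IPs out of Apache error_log lines and count hits per
# address, to get an ordered candidate block list.
CLIENT_RE = re.compile(r"\[client (\d{1,3}(?:\.\d{1,3}){3})\]")

def count_clients(log_lines):
    """Return a Counter mapping client IP -> number of error-log hits."""
    return Counter(m.group(1) for line in log_lines
                   if (m := CLIENT_RE.search(line)))

sample = ['[Mon Oct 20 21:13:42 2003] [error] [client 172.133.3.204] '
          'request failed: erroneous characters after protocol string: ...']
print(count_clients(sample))  # Counter({'172.133.3.204': 1})
```

Sorting the resulting counts (`count_clients(lines).most_common()`) gives the heaviest hitters first, which is also useful evidence when reporting the addresses to the owning networks' abuse contacts.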

Thanks,

Eric

todd glassey wrote:

Richard -
Do they (Verisign) have any legal reason to??? - is there anything between
them and ANY of their clients that requires them to inform them before any
changes to protocol facilities are made - I think not.

To inform? Not yet, although I have the feeling that this will be changed, given the historical record. However, changes that have an effect are always analyzed and a course of action chosen. I believe this is the job of ICANN. At some point, ICANN's power will need to be tested and set in stone. Only the community can create or strip that power. Yet if an organization is going to exist to serve the community and maintain order, then it needs the power to do it.

I think Vixie has alluded to this a few times, and I know there is much that goes on in the hallways concerning the overall problem of who controls what. Verisign is just helping to push the process along. I doubt it will end as they want it to.

-Jack

todd glassey wrote:

Richard -
Do they (Verisign) have any legal reason to??? - is there anything between
them and ANY of their clients that requires them to inform them before any
changes to protocol facilities are made - I think not.

To inform? Not yet, although I have the feeling that this will be changed, given the historical record. However, changes that have an effect are always analyzed and a course of action chosen. I believe this is the job of ICANN. At some point, ICANN's power will need to be tested and set in stone. Only the community can create or strip that power. Yet if an organization is going to exist to serve the community and maintain order, then it needs the power to do it.

Throughout this affair, I've been puzzled by what seems to be an assumption that once a contract exists, it cannot be changed or cancelled. Yet such changes and cancellations happen daily in business. They may require litigation, lobbying of the Congress or executive when government is involved, market/consumer pressures, etc., but change is not impossible.

Jack makes excellent points here, which I might restate that this is a defining moment for ICANN to establish its viability and relevance as an organization. If ICANN is to be meaningful in the future, it _must_ make a strong stand here.

Related issues include whether the IETF process, even if flawed, is the consensus means of proposing and discussing changes in the infrastructure. Whether or not the operational forums like NANOG have a role in this process, or even in presenting consensus opinions, also is a basic question for Internet governance.

Purely from my experience in journalism, media relations and lobbying, I have to respect the effectiveness of the Verisign corporate folk who largely have been setting the terms of debate, and managing the perception -- or misperception -- of this matter in the business and general press.

Apropos of that, lots of people equate "privatization" of the Internet to its "commercialization." Privatization isn't nearly that binary. If privatization, in general, is getting the US government out of Internet governance, we still have the options of:

    -- transferring such control as exists (and there may be no control
       mechanism) to a quasi-governmental body such as ICANN.
    -- transferring control, especially with regard to stewardship,
       to a not-for-profit corporation (e.g., ARIN)
    -- accepting that an organization such as IETF will manage a consensus
       process
    -- subcontracting, but closely monitoring, to a general for-profit
       enterprise.
    -- transferring control to a regulated technical monopoly, probably
       with a financial model of return-on-investment rather than maximizing
       shareholder value.
    -- transferring control, at least for a defined period, to a for-profit
       enterprise with a fiduciary responsibility to maximize shareholder value
    -- transferring control to competing for-profit organizations

Howard,
who is puzzled by what seems to be lots of tunnel vision (and I don't mean GRE).

To inform? Not yet, although I have the feeling that this will be
changed due to historic record. However, changes that have an effect
are always analyzed and a course of action chosen. I believe this is
the job of ICANN. At some point, ICANN's power will need to be
tested and set in stone. Only the community can create or strip that
power. Yet if an organization is going to exist to serve the
community and maintain order, then it needs the power to do it.

I will point out that it will be much easier for the community to strip
that power than to vest it in another entity. To strip that power only
requires one of two things:

  1. Enough of the community heading in a different direction
    and disregarding said entity (ICANN).

  2. An organization such as Verisign openly defying ICANN
    and ICANN failing to make a sufficiently strong response
    to enforce and protect the consensus will of the community.

I think item 1 is unlikely unless fueled by item 2. Verisign would do well
to notice that if they do implement the sitefinder wildcards again, and,
ICANN does not successfully put a stop to it, the single most likely outcome
is for the community to view ICANN as irrelevant and impotent. Once this
happens, the inevitable result is a fragmentation of the DNS, disparate
roots, and, loss of the convention of a single recognized authority at
the root of the tree. This convention is fundamental to the stability
of the current internet. Losing it would definitely have negative impact
on the end user experience.

In every forum to which I have convenient access, Verisign has repeatedly
attempted to restrict the discussion to the technical issues around the
wildcards. The reality is that the technical issues are the tip of the
iceberg and, while costly and significant, they are not the real danger.
The issues that must be addressed are the issues of internet governance,
control of the root (does Verisign serve ICANN or vice-versa), and
finally, whether the .com/.net zones belong to the public trust or to
Verisign. Focusing on the technical is to fiddle while Rome burns.

Related issues include whether the IETF process, even if flawed, is the
consensus means of proposing and discussing changes in the
infrastructure. Whether or not the operational forums like NANOG have a
role in this process, or even in presenting consensus opinions, also is a
basic question for Internet governance.

The IETF process is the consensus means of proposing and discussing changes
in the DESIGN of the infrastructure, not the construction or maintenance.
That _IS_ the role of the network operators and the operators forums. For
this to work, however, the operators have to be generally of good will and
cooperative for the greater good. This model is somewhat antithetical to
capitalism because for it to operate efficiently, it requires the long term
good of the community to take precedence over the short-term gains of the
individual or single organization. Capitalism is well optimized for the
short-term gains of the individual or single organization. This is one
of the growing pains that comes from the internet being originated as a
government-sponsored community research project. The design was done
assuming a collection of organizations whose primary motivation was to
cooperate. As we shifted to a privatized internet, that fundamental design
assumption was broken and we have seen some interesting changes as a result.
The fact that it still works at all is somewhat of a miracle. Its continued
stable operation will vitally require the continued good will and cooperation
of the entities playing vital roles. An ISP can be routed-around as damage,
although the larger the provider, the more painful the injury.

If it becomes necessary, significant portions of the internet will route around
Verisign in a similar manner. The difference is that absent ICANN providing for
this, there will be no agreed upon replacement, and, several alternatives will
emerge. The result will be fragmentation of the root, marginalization of ICANN
and a reduction in internet stability.

I believe much of ICANN's previous resistance to dealing with Verisign's abuses
of their role has been fear of the instability that could result. It has appeared
to me to be strategically and tactically very similar to the accommodations made
by the powers in Europe in the late 1930s. (No, I am not comparing Verisign's
actions to those of Hitler, but, the strategy and tactics are a match.)
If ICANN continues to give ground, Verisign's capabilities to commit further
abuses will continue to grow as well.

Purely from my experience in journalism, media relations and lobbying, I
have to respect the effectiveness of the Verisign corporate folk who
largely have been setting the terms of debate, and managing the
perception -- or misperception -- of this matter in the business and
general press.

Agreed. This is a big part of how the Nazis came to power in the 1930's as well.
I hate using that analogy because it is so emotionally charged and the scope of
the damage was so much more significant, but, again, I am comparing only the
strategy and tactics, not the ideology or the actions.

Owen

>
> A number of people have responded that they don't want to be forced to
> pay for a change that will benefit Verisign. That's a policy issue I'm
> trying to avoid here. I'm looking for pure technical answers -- how
> much lead time do you need to make such changes safely?

OK, since you asked....

At least from where I am, the answer will depend *heavily* on whether Verisign
deploys something that an end-user program can *reliably* detect if it's been
fed a wildcard it didn't expect. Note that making a second lookup for '*.foo'
and comparing the two answers is specifically *NOT* acceptable due to the added
lookup latency (and to some extent, the attendant race conditions and failure
modes as well).

It's not just wildcards. Although the IAB rejected VeriSign's previous
request to do specific response synthesis for IDN, it is conceivable that
someone else will do 'interesting' response synthesis, which applications
will be _unable_ to detect by querying for a wildcard.

( A similar problem to Randy's 'how do I tell which nameserver gave me
  this response, without requerying?' )

Also note that it has to be done in a manner that can be tested by an
application - there will be a *REAL* need for things like Sendmail to be
able to test for wildcards *without the assistance* of a patched local DNS.

Yes, which implies that many applications would need to change
'gethostbyname()' calls to 'getrealhostbyname()' (or whatever).
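
A sketch of what such a 'getrealhostbyname()' wrapper might look like, assuming applications could be taught a known set of wildcard target addresses. The function name comes from the text above; the sentinel set, the exception, and the injectable resolver are my assumptions for illustration:

```python
import socket

# Hypothetical 'getrealhostbyname()': resolve normally, then refuse
# answers that match a known wildcard target address. The resolver is
# injectable so the behavior can be exercised without the network.
WILDCARD_TARGETS = {"64.94.110.11"}  # assumed Sitefinder sentinel

class WildcardAnswer(Exception):
    """Raised when a lookup returns a synthesized wildcard answer."""

def getrealhostbyname(name, resolve=socket.gethostbyname):
    addr = resolve(name)  # may raise socket.gaierror as usual
    if addr in WILDCARD_TARGETS:
        raise WildcardAnswer(f"{name} resolved to wildcard target {addr}")
    return addr

# Fake resolvers demonstrate both paths:
print(getrealhostbyname("example.com", resolve=lambda n: "192.0.2.5"))
try:
    getrealhostbyname("no-such-name.com", resolve=lambda n: "64.94.110.11")
except WildcardAnswer as e:
    print("rejected:", e)
```

The obvious weakness, which the surrounding discussion makes plain, is that the sentinel set has to be maintained by hand and breaks the moment the wildcard target changes; hence the argument for an in-protocol signal rather than address comparison.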

Whilst many _popular_ applications can be patched in a relatively quick
timeframe, the more subtle implications of large scale synthesis
deployment will probably take much longer to be understood, let alone
patches being deployed, particularly with less popular applications.

And yes, this means the minimum lead time to deploy is 'amount of time to write
a "Wildcard Reply Bit" I-D, advance through IETF to some reasonable point on
standards track, and then upgrade DNS, end host resolvers, and applications'.

draft-bcampbell-non-wildcard-00 was submitted last Tuesday to the RFC
editor and should appear in time to be discussed during dnsext in Minnie.

Even if it's approved instantly (very unlikely, as I've suggested using the
last reserved header bit), and relevant authoritative nameservers are
upgraded in short order, there is a huge implied change to applications
and libraries, which extends the deployment timeline tremendously.

To answer Steve's question, it would be at least 3 months to patch my
employer's applications to work around a possible .com or .net wildcard,
and at least 6 months to do it in a fashion which does not break
established standards.