Sitefinder and DDoS

Let's assume for a moment that Verisign's wildcards and Sitefinder go back into operation.

Let's also assume someone sets up a popular webpage containing malicious HTML that causes visiting browsers, perhaps after a time delay, to issue rapid GETs to deliberately nonexistent domains.

What would be the effect on overall Internet traffic patterns if there were one Sitefinder site? (flashback to ARPANET node announcing it had zero cost to any route)

How many Sitefinder nodes would we need to avoid massive single-point congestion?
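To make the congestion question concrete, here is a back-of-envelope sketch. Every figure in it (visitor count, GET rate, response size) is an invented illustration, not a measurement; the point is only that the product gets large fast and that spreading it over N nodes only divides it linearly, assuming even distribution:

```python
# Illustrative back-of-envelope only: aggregate load on a single
# Sitefinder endpoint if a popular page tricks visitors' browsers
# into issuing GETs for nonexistent domains. All inputs are assumptions.

def aggregate_load_gbps(clients, gets_per_sec, resp_bytes):
    """Total response bandwidth in Gbit/s across all clients."""
    return clients * gets_per_sec * resp_bytes * 8 / 1e9

# Assumed: 100k concurrent visitors, 10 GETs/s each, ~15 KB landing page.
single_site = aggregate_load_gbps(100_000, 10, 15_000)
print(f"single endpoint: {single_site:.0f} Gbit/s")  # 120 Gbit/s

# Naive even spreading across N distributed nodes.
for nodes in (1, 10, 100):
    print(f"{nodes:>3} nodes: {single_site / nodes:.1f} Gbit/s each")
```

With those (made-up) numbers, even 100 nodes each still absorb over a gigabit of synthetic traffic, which is why the distribution question matters.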

AFAIK, the issues of distributing Sitefinder, or even fronting it with a formal content distribution network, were not discussed. I asked some general questions touching on this at the ICANN ISSC committee meeting, but I think they were interpreted as directed toward the reliability of the Sitefinder service in operation, rather than the potential vulnerabilities it might create.

I am NOT suggesting this simply as an argument against Sitefinder, and I'd like to see engineering analysis of how this vulnerability could be prevented.

Howard C. Berkowitz wrote:

I am NOT suggesting this simply as an argument against Sitefinder, and I'd like to see engineering analysis of how this vulnerability could be prevented.

With $100M annual revenue at stake, I would be willing to provide distributed solutions
to this problem if you send me a reasonable fraction of that money.

Pete

As long as I get a finder's fee! :-)

But can you do it without breaking the assumption that any lookup on *.TLD will always return the same value as badxxxdomain.TLD?

Kee Hinckley wrote:

Well, the problem space is that a wildcard is involved. Since RFC 1034
indicates that the answer for '*.something' is the same as
'otherwise-unmatched.something', I think this assumption is fairly safe.
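The wildcard equivalence being relied on here can be sketched as follows. The zone contents are made-up examples, and the matching is a deliberate simplification of RFC 1034 section 4.3.2 (only the leftmost label is replaced; real wildcard matching works from the closest enclosing name):

```python
# Minimal sketch of RFC 1034 wildcard synthesis. Zone names and
# addresses are hypothetical; this simplifies the real algorithm.

ZONE = {
    "www.example.tld": "192.0.2.1",
    "*.tld":           "192.0.2.80",  # wildcard, e.g. a Sitefinder A record
}

def lookup(name):
    """Return the A record for name, synthesising from the wildcard
    when no exact match exists."""
    if name in ZONE:
        return ZONE[name]
    # Replace the leftmost label with "*" and retry (simplified).
    parent = name.split(".", 1)[1]
    return ZONE.get("*." + parent)

print(lookup("www.example.tld"))   # explicit record: 192.0.2.1
print(lookup("badxxxdomain.tld"))  # synthesised from *.tld: 192.0.2.80
```

Any otherwise-unmatched name under the wildcard's parent gets the same synthesised answer, which is the assumption at issue.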

The assumption is not safe if the authoritative nameservers for the
underlying zone are not performing according to the DNS specs; i.e., they
synthesise answers that do not come from a wildcard (which can be
queried).

But that requirement simply says that if you query *.something
and otherwise-unmatched.something at time x, you get the same result. It
doesn't say that if you query *.something at time x and
otherwise-unmatched.something at time x+5, you will get the same result.
DNS servers can return different answers over time, and expecting them
not to change rapidly is an assumption not inherent in the protocol, much
like the assumption that *.net and *.com would not get arbitrarily
defined by the registry.
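The time-variance point can be sketched directly. The zone snapshots and timings below are invented; the sketch only shows that two queries for the same name, both answered from a legitimate wildcard, can legally differ when the registry changes the wildcard data between them:

```python
# Sketch of the time-variance argument. Snapshot times and addresses
# are hypothetical.

ZONE_AT = {
    0: {"*.tld": "192.0.2.80"},    # wildcard answer in effect at time x
    5: {"*.tld": "198.51.100.9"},  # registry repoints the wildcard at x+5
}

def lookup(name, t):
    """Resolve name against the zone snapshot in effect at time t."""
    snapshot = ZONE_AT[max(k for k in ZONE_AT if k <= t)]
    parent = name.split(".", 1)[1]
    return snapshot.get(name, snapshot.get("*." + parent))

# Same name, same wildcard mechanism, different times -> different
# answers, without violating anything RFC 1034 requires.
print(lookup("badxxxdomain.tld", 0))  # 192.0.2.80
print(lookup("badxxxdomain.tld", 5))  # 198.51.100.9
```

So any scheme for distributing Sitefinder that caches or pins the wildcard answer is relying on a stability guarantee the protocol never made.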

While I would agree these are reasonable assumptions, I think we need to
make some effort to get these assumptions codified into the protocol before
someone else breaks them again.

Owen