T3 or not to T3

I've been doing a lot of reading and thinking on what the best solution
is for an application that initially requires approximately 15Mbps
outgoing and 8Mbps incoming (as viewed from my router), and talks with
500000 unique hosts daily (i.e. has fairly wide coverage of the net).
The application involves at least thirty machines, so colocation is likely
to be cost-prohibitive. A single T3, or frac T3 isn't an option because
there isn't a single provider that I can trust for the availability
we want. Even ignoring availability, I seriously doubt that any
provider can consistently fill a single customer's T3 pipe these days.
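
For a rough sense of how that load maps onto circuits, here is a back-of-the-envelope sketch in Python. The T1 (1.544 Mbps) and DS3 (44.736 Mbps) line rates are standard; the ~70% per-link utilization headroom is purely an assumption of this sketch:

    # Rough circuit count for the stated load (assumption: you don't want to
    # run links much past ~70% sustained utilization).
    import math

    T1_MBPS = 1.544    # standard T1 line rate
    DS3_MBPS = 44.736  # standard DS3 line rate
    HEADROOM = 0.70    # assumed max sustained utilization per link

    peak_out_mbps = 15.0  # outgoing, as seen at the router
    peak_in_mbps = 8.0

    t1s_needed = math.ceil(peak_out_mbps / (T1_MBPS * HEADROOM))
    ds3_fill = peak_out_mbps / DS3_MBPS

    print(f"T1s needed for {peak_out_mbps} Mbps out: {t1s_needed}")        # ~14
    print(f"Fraction of a single DS3 this traffic fills: {ds3_fill:.0%}")  # ~34%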

From stuff I've seen here and elsewhere, I think the most important reason
for this is congestion at NAPs making it impossible to suck (or shove)
lots of bandwidth at anything but your provider's backbone. Taking all
of this into account, I'm really leaning towards a solution that involves
lots of small pipes to lots of providers. Essentially eliminating the
need for 90% of our packets to traverse NAPs by using each backbone
mostly for their own customers.

Now that I've done my homework, I'd like to hear comments from some
of the more experienced folk here.

I haven't considered yet the maintenance/logistical cost of managing
15 T1s to 6 or 7 providers vs. the "ease" of two frac-T3s to two providers.

From a provider's point of view, if a site wanted to connect, and was
willing to sign a use-policy saying they wouldn't use the connection
for transit to other providers (i.e. would only ask for customer BGP and
only route to the nets you provide in BGP updates), would that site have
lower costs associated with it? (that you could pass on?)

Thanks,
Dean

Are you sure you can't accomplish this with your own national backbone and
private interconnects with the major providers similar to what Sprint/MCI
are doing to keep traffic off the NAPs?

On the other hand, maybe you could be the customer that establishes the
distributed web server scenario I discussed earlier. If you have read
through http://www.ix.digital.com you will note that not only are they
running an exchange point but they are also running a web farm of sorts at
the same location. Chances are good that this web-farm-at-the-XP concept
will become the rule rather than the exception. Note that in Digital's
model it would be possible to connect to larger ISPs without requiring
traffic to flow through the XP itself.

Michael Dillon - ISP & Internet Consulting
Memra Software Inc. - Fax: +1-604-546-3049
http://www.memra.com - E-mail: michael@memra.com

> The application involves at least thirty machines, so colocation is likely
> to be cost-prohibitive. A single T3, or frac T3 isn't an option because
> there isn't a single provider that I can trust for the availability
> we want. Even ignoring availability, I seriously doubt that any
> provider can consistently fill a single customer's T3 pipe these days.

> of this into account, I'm really leaning towards a solution that involves
> lots of small pipes to lots of providers. Essentially eliminating the
> need for 90% of our packets to traverse NAPs by using each backbone
> mostly for their own customers.

> I haven't considered yet the maintenance/logistical cost of managing
> 15 T1s to 6 or 7 providers vs. the "ease" of two frac-T3s to two providers.

Given that it sounds like you are budgeting, in connectivity cost, for slightly
more than a DS3's worth of bandwidth, the way to do multi-T1 is to pick a set
of ISPs that comprises a good percentage of the net, and get N x T1 to each
based on your best-guess breakdown of traffic. If it's just the whole Internet
you are aiming for, then something like 4 T1s to MCI, 3 T1s to Sprint, 3 T1s to
UUNET, 2 to ANS, 1 to AGIS, and 2 to others is an example. Note that this is a
wild guesstimate of the percentage of Internet traffic sinks.
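
For what it's worth, here is a small sketch of that apportionment arithmetic.
The traffic-share fractions are placeholders back-derived from the 4/3/3/2/1/2
example above, not measurements of anything:

    # Apportion a T1 budget across providers in proportion to guessed traffic
    # shares (largest-remainder rounding). The shares below are placeholder
    # guesses mirroring the 4/3/3/2/1/2 example, not measured data.
    guessed_share = {
        "MCI": 0.27, "Sprint": 0.20, "UUNET": 0.20,
        "ANS": 0.13, "AGIS": 0.07, "others": 0.13,
    }
    budget = 15  # total T1s you can afford

    exact = {isp: share * budget for isp, share in guessed_share.items()}
    alloc = {isp: int(x) for isp, x in exact.items()}
    leftover = budget - sum(alloc.values())
    # hand the remaining circuits to the largest fractional remainders
    for isp in sorted(exact, key=lambda i: exact[i] - alloc[i], reverse=True)[:leftover]:
        alloc[isp] += 1

    print(alloc)  # {'MCI': 4, 'Sprint': 3, 'UUNET': 3, 'ANS': 2, 'AGIS': 1, 'others': 2}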

What you need to think about with this scenario are the following:

1) cost of procuring 15 T1s vs. DS3/fract DS3.
2) logistics of support.
3) infrastructure issues.

1) is pretty cut and dried.

2) is the largest "hidden" cost. With this setup, you need a competent net
engineer or two to babysit it so that the packets are flowing in the right
direction.

You also ideally need significant automation of your router configurations so
that you can pull correct info and generate configs that match the very fluid
reality of today's routing.
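
As a sketch of what that automation might look like, the snippet below stamps
out a basic per-neighbor BGP stanza from a small upstream table. The ASNs,
addresses, and access-list number are made-up placeholders, and a real setup
would pull current routing-registry data and apply far more filtering:

    # Minimal config-generation sketch: one BGP neighbor stanza per upstream.
    # All ASNs/addresses/filter numbers here are hypothetical placeholders.
    MY_ASN = 64512  # placeholder ASN

    upstreams = [
        # (name, neighbor IP, neighbor ASN)
        ("isp-a", "192.0.2.1", 64600),
        ("isp-b", "198.51.100.1", 64601),
    ]

    def neighbor_stanza(name, ip, asn):
        return "\n".join([
            f" neighbor {ip} remote-as {asn}",
            f" neighbor {ip} description uplink to {name}",
            # announce only our own nets via an outbound filter
            f" neighbor {ip} distribute-list 10 out",
        ])

    config = [f"router bgp {MY_ASN}"]
    config += [neighbor_stanza(*u) for u in upstreams]
    print("\n".join(config))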

You'll also need a decent NOC to deal with the 6-7 ISP NOCs and possibly
different carriers' NOCs when trouble hits. I find that much of
troubleshooting involves a live body at the end of a phone much more than a brain.

3) you need different equipment to do this. It costs more to provision
hardware for 15 T1s than it does for 1-2 DS3/fract DS3. That doesn't
even get into redundancy and sparing issues. Something like a Cisco 7200 might
do this better, but I'm not sure.

The other scenario of two fract DS3s alleviates problems #2 and #3, but
still doesn't make them go away altogether.

You also need to pick providers with interesting enough traffic sinks so that
you can load balance effectively (as effectively as you can get in that
situation, anyway) in a somewhat straightforward fashion (e.g. taking
internal customer routes from ISP A and sending the rest via ISP B).
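
As a toy illustration of that policy, the sketch below picks a next hop by
longest-prefix match over a table carrying ISP A's customer routes plus a
default via ISP B; the prefixes are reserved documentation ranges, purely
illustrative:

    # Toy next-hop selection: ISP A's customer routes plus a default via
    # ISP B, chosen by longest-prefix match. Prefixes are documentation
    # ranges, not real routes.
    import ipaddress

    routes = [
        (ipaddress.ip_network("198.51.100.0/24"), "ISP A"),  # an ISP A customer net
        (ipaddress.ip_network("203.0.113.0/24"), "ISP A"),   # another ISP A customer net
        (ipaddress.ip_network("0.0.0.0/0"), "ISP B"),        # everything else
    ]

    def next_hop(dst: str) -> str:
        addr = ipaddress.ip_address(dst)
        matches = [(net, isp) for net, isp in routes if addr in net]
        # longest prefix wins
        return max(matches, key=lambda m: m[0].prefixlen)[1]

    print(next_hop("203.0.113.9"))  # -> ISP A (customer route)
    print(next_hop("192.0.2.9"))    # -> ISP B (default)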

> From a provider's point of view, if a site wanted to connect, and was
> willing to sign a use-policy saying they wouldn't use the connection
> for transit to other providers (i.e. would only ask for customer BGP and
> only route to the nets you provide in BGP updates), would that site have
> lower costs associated with it? (that you could pass on?)

It would seem so to me, as long as the site is an interesting enough
traffic source and the ISP can recoup whatever it costs to offer that
connection, plus margin (or not).

Speaking only for CICNet, given that the site is an interesting traffic
source, we'd gladly offer a connection for what it costs to provide that
connection, if such a request came to us.

Hope this helps,

-dorian

I believe the current ante to get into this game is DS3 infrastructure with
presence at 3 exchange points. That is a slightly different order of magnitude
than what Dean seems to be talking about.

-dorian

Who else will do this? MCI, Sprint? This seems like a new and growing
market. Is it being added to the service menus of various backbones?
We'd love to talk to any backbone that will do this.

Chris Caputo
President, Altopia Corporation