Selfish routing

alex@yuriev.com wrote:
> Having capacity *always* makes a network better.

True enough; given massive over-capacity, you'll have a hard time finding any congestion. (Of course, you also won't find optimality without applying some kind of measurement.) But curiously, adding some incremental capacity to a network can, under some conditions, actually make it worse!

Leo Bicknell wrote:
> I mean, really, the fact that a secondary path is worse than a primary
> path with no capacity is a no brainer, couldn't these people be doing
> something more useful?

Roughgarden's point (as I see it) is that Braess' paradox applies. Most people find it obvious that adding more capacity to a network will always help, but as it happens, that's not true.

That is not to say that adding capacity is always wrong. It's usually right; it's just of some interest to note that there are conditions where harm can result, especially when independent actors act "selfishly". The basic result is quite old, as Roughgarden observes; the same phenomenon exists in road networks, and he gives a nice example of strings and springs in his paper, which Jeffrey Arnold cited earlier:
   <http://wisl.ece.cornell.edu/ECE794/Apr2/roughgarden2002.pdf>
This contains a good deal more detail than the NY Times writeup that started this thread :)
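For anyone who wants to poke at the paradox numerically, here is a minimal sketch of the textbook four-node instance (my numbers, not Roughgarden's strings-and-springs example): one unit of traffic from s to t, and links whose latency is either constant (1) or proportional to their load (x).

```python
# Braess' paradox, textbook four-node instance (illustrative numbers):
# 1 unit of traffic from s to t over two paths, s-A-t and s-B-t.
# Link latencies: s-A and B-t cost x (their load); A-t and s-B cost 1.

def latency_without_shortcut():
    # By symmetry, selfish traffic splits 50/50 across the two paths;
    # each flow sees x + 1 with x = 0.5.
    x = 0.5
    return x + 1          # 1.5

def latency_with_shortcut():
    # Add a zero-latency A-B link. The path s-A-B-t costs x + 0 + x,
    # which beats x + 1 and 1 + x whatever the others do, so every
    # selfish flow takes it and both load-dependent links carry x = 1.
    x = 1.0
    return x + x          # 2.0

print(latency_without_shortcut())  # 1.5
print(latency_with_shortcut())     # 2.0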

As Jeffrey observed, the assumptions in the model don't map well to the Internet we all know and love, but results like Braess' paradox come up again and again. If you want an optimal network, you can:
   1/ sit in the middle and play at being the God of TE
   2/ have the various actors optimize "selfishly"
   3/ count hops and assume that's close enough
(Oh, and if you're into that sort of thing, I suppose you can try dropping some packets to speed things up.)

Roughgarden's result is that 2/ is not quite as good as 1/ at making fully optimal networks, but in a bounded way. I think the bound is a really nice result, although it's a pity the model assumptions aren't all that close to the operating nature of the Internet.

alex@yuriev.com wrote:
> However, claims "we have a special technology that magically
> avoids problems in the networks that we do not control" is the
> engineering religion.

And I wouldn't evangelize that faith, as stated. I do happen to believe in "special" (or if you prefer, "selfish") technology that measures problems in networks I do not control, and if they can be avoided (say by using a different network or a different injection point), avoid them. In practice, that extra "if" doesn't change the equation much, since:

Richard A Steenbergen wrote:
> Random good luck just by having lots of paths to choose
> from and a way to detect which one "works"... it's possible.

Quite so. I won't comment on the degree of effectiveness, since that would be marketing :slight_smile: Nobody should be surprised at what I'd say about that anyway.

Mike

I find it odd that the reaction of nanog readers to the paper title
is as though the paper said the opposite of what it does. In brief,
the paper says that under certain assumptions the globally optimal
latency would be only 25% better than the selfish result. That's
actually a very pleasant conclusion, as the globally optimal case
would require global knowledge and infinite computing capacity, neither
of which is available on any real network.

I read the paper as vindicating the existing architecture of independent
self-interested players rather than as wishing for central control.
That's certainly not how it was reported, or how nanog folks reacted.

Barney Wolff wrote:

I find it odd that the reaction of nanog readers to the paper title
is as though the paper said the opposite of what it does.

Granted, but then, Roughgarden's work generally seems to get reported on backwards. What's a poor journalist to do? Write a story on how selfish routing helps the Internet? :)

In brief,
the paper says that under certain assumptions the globally optimal
latency would be only 25% better than the selfish result.

If I might, his model says the best is between 0% and a max of 25% better than the selfish result. Selfish routing may actually get us the best possible Internet, although that is not proven.
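The instance that makes the 25% figure tight is Pigou's two-link example; a quick sketch, using the standard formulation rather than anything specific in the paper (numbers are mine):

```python
# Pigou's two-link example -- the instance behind the 4/3 (i.e. up to 25%)
# bound: 1 unit of traffic over two parallel links, one with constant
# latency 1, one whose latency equals its load x.

def total_latency(x):
    # x = fraction of traffic on the load-dependent link
    return (1 - x) * 1 + x * x

# Selfishly, every flow prefers the x-link (x <= 1 always beats 1),
# so x = 1 at equilibrium. The optimum splits the traffic instead.
selfish = total_latency(1.0)
optimal = min(total_latency(i / 1000) for i in range(1001))

print(selfish)   # 1.0
print(optimal)   # 0.75 -- the optimum is the full 25% better here
```

At the other extreme, if every link had constant latency, the selfish and optimal flows would coincide, which is the 0% end of the range.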

Mike

Thus spake "Mike Lloyd" <drmike@routescience.com>

Roughgarden's work generally seems to get reported on
backwards. What's a poor journalist to do?

Learn how to present his ideas?

If I might, his model says the best is between 0% and a max of 25%
better than the selfish result. Selfish routing may actually get us the
best possible Internet, although that is not proven.

Selfish routing is the simplest and cheapest to implement, which are large
factors in evaluating the "best" dumb network.

S

Stephen Sprunk "God does not play dice." --Albert Einstein
CCIE #3723 "God is an inveterate gambler, and He throws the
K5SSS dice at every possible opportunity." --Stephen Hawking

Date: Sat, 26 Apr 2003 16:03:48 -0700
From: Mike Lloyd

What's a poor journalist to do?

"Poor journalist"... pfffft! Books are reviewed by technical
editors before publication, yes? Can journalists not exercise
the same level of caution, perhaps asking the interviewee "did I
get this correct?"

Eddy

Stephen Sprunk wrote:

Thus spake "Mike Lloyd" <drmike@routescience.com>

Roughgarden's work generally seems to get reported on
backwards. What's a poor journalist to do?

Learn how to present his ideas?

Perhaps I was overly diplomatic in wording the joke :) It seems likely to me that Roughgarden encourages the misreading. But I have a hard time blaming him for that; what harm was done in this case by emphasizing the "newsworthy" spin of this result? We wouldn't be talking about his interesting work if he'd tried to get the interest of the NYT with "hey, selfish routing is almost as good as perfect routing!"

Selfish routing is the simplest and cheapest to implement, which are large
factors in evaluating the "best" dumb network.

Simpler than a God of TE in the middle of the network, but not simplest. What we have today is about the simplest, and it's not what Roughgarden means by "selfish" routing. He assumes routing which promptly responds to congestion-induced latency, and that is not automated in much of the Internet today. It's also not simple to implement correctly.

The technology is available, and a perennial question (which Sean Donelan referred to at least obliquely at the start of this thread) is whether it's better to use smarter routing decisions, to add more bandwidth, or to just leave things as they are. Since we're awash in bandwidth we can't find enough uses for, and some users remain dissatisfied, it's nice to see academic results that suggest option one is (theoretically) effective.

Mike

Er, nothing in the paper said anything at all about the performance of
latency-influenced routing vs other, presumably dumber, schemes. Other
papers, maybe? References?

Barney Wolff wrote:

Er, nothing in the paper said anything at all about the performance of
latency-influenced routing vs other, presumably dumber, schemes. Other
papers, maybe? References?

Exactly my point. He's compared "perfect" latency regulation to "selfish" latency regulation. He doesn't offer a comparison of selfish regulation to conventional (lack of) regulation. I've presented some numbers on the latter at a NANOG meeting before:
   <http://www.nanog.org/mtg-0206/ppt/mike1/sld013.htm>
But much more of that, and I'll be judged to have transgressed the vendor taboo in these parts.

Other places to look include the Detour project from UWash and MIT's RON. These generally involve "bank shots" off intermediate nodes, not making better selections at a single node or edge site. I'm aware of one paper on the latter (not written by RouteScience), but it's under review, and hence not publishable, I'm afraid.

Mike

> Having capacity *always* makes a network better.

True enough; given massive over-capacity, you'll have a hard time
finding any congestion. (Of course, you also won't find optimality
without applying some kind of measurement.) But curiously, adding some
incremental capacity to a network can, under some conditions, actually
make it worse!

Oh, rubbish.

When you are moving OC-12 worth of traffic you don't add new DS3 backbone
links. When you are moving OC-48 worth of traffic, you don't add OC-3s;
any non-braindead network architect/engineer will see that.

Of course, no one said that those getting out of Chub or CIT that companies
hire have that ability, in which case the issue is caused by people with the
wrong skill set being hired to do the wrong job.

Of course, the sales people of yet another equipment vendor trying to sell
yet another useless technology that claims, in yet another way, to eliminate
the need for people with a clue on staff in exchange for major $$$ do not
want to admit it.

As Jeffrey observed, the assumptions in the model don't map well to the
Internet we all know and love, but results like Braess' paradox come up
again and again. If you want an optimal network, you can:
   1/ sit in the middle and play at being the God of TE
   2/ have the various actors optimize "selfishly"
   3/ count hops and assume that's close enough
(Oh, and if you're into that sort of thing, I suppose you can try
dropping some packets to speed things up.)

Oh how about "fire those who are ordering the wrong size of interconnects,
order right-sized interconnects, count the money that you did not waste".

And I wouldn't evangelize that faith, as stated. I do happen to believe
in "special" (or if you prefer, "selfish") technology that measures
problems in networks I do not control, and if they can be avoided (say by
using a different network or a different injection point), avoid them. In
practice, that extra "if" doesn't change the equation much, since:

So, the brilliant technology costs money but does not provide excellent
results under all circumstances? Simply not making stupid mistakes
designing the network *already* achieves exactly the same result for no
additional cost.

Alex

alex@yuriev.com wrote:

And I wouldn't evangelize that faith, as stated. I do happen to believe
in "special" (or if you prefer, "selfish") technology that measures
problems in networks I do not control, and if they can be avoided (say by
using a different network or a different injection point), avoid them. In
practice, that extra "if" doesn't change the equation much, since:

So, the brilliant technology costs money but does not provide excellent
results under all circumstances? Simply not making stupid mistakes
designing the network *already* achieves exactly the same result for no
additional cost.

And what, pray tell, governs stupid mistakes designing the network? For that matter, which network? I've run traffic through some networks for years without a problem. Then one day, that network makes a mistake and clobbers the traffic I send through it. Naturally, I redirect traffic via other networks, but the spare capacity via the other networks does not equate to the traffic I'm shifting, so while improving QoS for my customers, I have still shorted them.

It could be argued that more spare capacity should have been allotted for the other networks, yet then if the first network hadn't had a problem, money would have been wasted on capacity that wasn't needed. It is an art to establish enough bandwidth to handle redirects from networks having problems and yet keep costs at a reasonable level.

Hypothetical: You are interconnected with 3 networks pushing 100Mb/s through each. Slammer worm appears and makes 2 networks untrustworthy because of their internal policies. The third network is fine, but your capacity to it probably won't hold 300Mb/s. Do you a) spend the money to handle your full capacity out every peer and pay two - three times what your normal traffic is to the peer, or b) allot reasonable capacity for each peer, and recognize that there are times when the capacity will fall short?

Network planning is not just about whether you make a mistake or not. Performance is dependent upon the networks we interconnect with, issues they may have, and how well we can scale our network to circumvent those issues while remaining cost effective. My hypothetical is simplistic. Increase the number of peers, as well as the number of peering points to said peers, determine the cost and capacity necessary during multiple points of failure, plus the cost within your own network of redirecting traffic that normally takes more diverse routes, apply chaos theory and recalculate.
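To put rough numbers on the hypothetical above (mine, and obviously simplified), the sizing trade-off looks like this:

```python
# Back-of-the-envelope failover sizing for the hypothetical above:
# n peers each carrying `load` Mbit/s; to survive `k` of them failing
# at once, the surviving exits must absorb the orphaned traffic.

def per_peer_capacity(n, load, k):
    # total traffic stays n * load, but only n - k exits remain
    return n * load / (n - k)

print(per_peer_capacity(3, 100, 0))   # 100.0 -- no failures, no headroom
print(per_peer_capacity(3, 100, 1))   # 150.0 -- 50% headroom per port
print(per_peer_capacity(3, 100, 2))   # 300.0 -- pay for 3x on every port
```

Surviving one peer failure costs 50% headroom on every port; surviving two costs 200%, which is exactly the choice between options a) and b).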

-Jack

alex@yuriev.com wrote:

But curiously, adding some incremental capacity to a network can, under some conditions, actually make it worse!

Oh, rubbish.

Hmm? You dispute the result in Roughgarden's paper - that Braess' paradox can occur? Or are you just saying that if the Internet is run solely by people at your intelligence level, it'll never come up as an issue? (I've not said Braess' paradox is common; only that it's an interesting result.)

Of course, the sales people of yet another equipment vendor trying to sell
yet another useless technology that claims, in yet another way, to eliminate
the need for people with a clue on staff in exchange for major $$$ do not
want to admit it.

Glad to be accused of offering a technology that can only do what smart people can do (whether I agree or not). Since the supply of clue in this world is limited ...

If you want an optimal network, you can:
  1/ sit in the middle and play at being the God of TE
  2/ have the various actors optimize "selfishly"
  3/ count hops and assume that's close enough
(Oh, and if you're into that sort of thing, I suppose you can try dropping some packets to speed things up.)

Oh how about "fire those who are ordering the wrong size of interconnects,
order right-sized interconnects, count the money that you did not waste".

Certainly a reasonable addition to the list - I'd prefer it to those who believe smart packet loss will solve all our problems. Trouble is, firing staff and buying big cross connects does rather assume that all the people you hand packets to are as smart as you are (or can be stopped from misbehaving promptly).

And I wouldn't evangelize that faith, as stated. I do happen to believe
in "special" (or if you prefer, "selfish") technology that measures
problems in networks I do not control, and if they can be avoided (say by
using a different network or a different injection point), avoid them. In
practice, that extra "if" doesn't change the equation much, since:

So, the brilliant technology costs money but does not provide excellent
results under all circumstances? Simply not making stupid mistakes
designing the network *already* achieves exactly the same result for no
additional cost.

I certainly feel no need to defend a technology as perfect in all circumstances; it only need bring useful, cost-effective improvement. If you inhabit a part of the network where all links are over-provisioned, and no phenomena occur for which you'd like automatic re-route, great. I'm happy for you.

In your own parallel posts, you acknowledge all the murky reasons why other people don't build their networks in the way you'd like. OK; so I can make my own network and interconnects Yuriev-compliant, but that still doesn't solve all the issues as long as I want to talk to people across fabric that is not Y-c. It's a network of networks we live in.

Mike

>>But curiously, adding some
>>incremental capacity to a network can, under some conditions, actually
>>make it worse!
>
>Oh, rubbish.

To alex: It's not necessary to add a tiny link to the network
to make things worse. In fact, the actual Braess Paradox example
that Roughgarden uses arises from the addition of a high-capacity,
low-latency link in the wrong place. It presumes the existence of
a smaller capacity path through the network somewhere, but are you
arguing that those paths don't exist? I can show you a lot of them,
since it's what my software (the aforementioned MIT RON project) is
designed to exploit. The Internet is full of weird, unexpected paths
when you start routing in ways that the network designers didn't intend.
And that's what selfish routing _does_.

In fact, another of Roughgarden's results is that it's fundamentally
hard (in the NP sense) to tell whether or not you're going to have
an occurrence of suboptimal selfish routing on your network. There
may be simple guidelines that can help avoid them, but that remains
to be seen (yes, I asked).

>Of course, the sales people of yet another equipment vendor trying to sell
>yet another useless technology that claims, in yet another way, to eliminate
>the need for people with a clue on staff in exchange for major $$$ do not
>want to admit it.

Glad to be accused of offering a technology that can only do what smart
people can do (whether I agree or not). Since the supply of clue in
this world is limited ...

And to reiterate this in a different light, note that the Roughgarden
work only deals with how far away from a theoretical latency optimum
networks are. It may not apply at all when you're operating with
sub-optimal network information in the way that both current networks
and current selfish solutions (ron, routescience, sockeye, etc.) do.
And note that one of the big benefits that we observed from our own
software wasn't latency -- it was reliability, with improved time-to-fix
vs. BGP convergence. And that's something that's hard to engineer around
from a single provider perspective, because it's all about the interaction
of multiple -- and variably clued -- ASes.

In your own parallel posts, you acknowledge all the murky reasons why
other people don't build their networks in the way you'd like. OK; so I
can make my own network and interconnects Yuriev-compliant, but that
still doesn't solve all the issues as long as I want to talk to people
across fabric that is not Y-c. It's a network of networks we live in.

  Bingo.

  -Dave

> So, the brilliant technology costs money but does not provide excellent
> results under all circumstances? Simply not making stupid mistakes
> designing the network *already* achieves exactly the same result for no
> additional cost.
>

And what, pray tell, governs stupid mistakes designing the network?

Let's see.... This would be from a company that used to say "we are
basically a former Baby Bell, so you can count on us".

The data center has OC-3 in.

The data center has 4 customers.

Sales people pitch the business to a webfarm that, based on the MRTG graphs
that the owner shows the sales people, does 170Mbit/sec average and
320Mbit/sec peak over the last 5 months, with growth of about 10% a month.

The sales people take the order, the customer gets moved from a different
location to the data center. The customer who moved and the existing
customers are going ballistic trying to figure out why they have packet
loss. The company's engineering claims to the customer that there is no
congestion whatsoever, since the data center has 2xOC-12 coming into it.

Customer discovers the lie when someone shows him a 7206 that is carrying
the data center.

Shall I continue? A company that went into Chapter 11 and is now trying
really hard to rebrand itself has this great Fujitsu-made gear that they use
to hand off DS1s to collo customers, a lot of which are callback companies.
The techs of the carrier joke that every two months, no matter what they are
supposed to do in that collo, they always bring with them a spare power
supply for that gear, since at least once a month a power supply on the box
blows up.

How about this one? There are very few companies that exercise nearly
complete control over the rights of way for the fiber that leaves the island
of Manhattan. Of those, just a couple sell dark fiber. Those companies are
known. The majority of the rest would do everything possible not to sell the
fiber or even a lambda, preferring instead to sell you a lit circuit. The
buyers for the clients (most of which are large companies as well) know
that. They also know that buying the lit circuit would cost them a lot more
in $/Mbit than buying the fiber or lambda and lighting it up themselves.
However, they are so pleased with the Super Bowl tickets that they get from
the sales people of the non-fiber-selling companies that they would not
ever consider buying from the others.

For that matter, which network? I've run traffic through some networks for
years without a problem. Then one day, that network makes a mistake and
clobbers the traffic I send through it. Naturally, I redirect traffic via
other networks, but the spare capacity via the other networks does not
equate to the traffic I'm shifting, so while improving QoS for my
customers, I have still shorted them.

Do you have more than one exit right now? Do you push around 100Mbit/sec to
each of those providers? Since you apparently have the money, did you
negotiate deals around $50/Mbit/sec exit on giges and OC-12 with 100Mbit
CIR?

If the answer to these questions is "yes," I do not see why you should be
worried about that.

It could be argued that more spare capacity should have been allotted
for the other networks, yet then if the first network hadn't had a
problem, money would have been wasted on capacity that wasn't needed. It
is an art to establish enough bandwidth to handle redirects from
networks having problems and yet keep costs at a reasonable level.

I don't know which world you live in, but today the sales people will beg
for 100Mbit/sec CIRs on OC-12 links just to meet their quotas. So, why don't
you get those 100Mbit/sec CIRs on OC-12c?

Hypothetical: You are interconnected with 3 networks pushing 100Mb/s
through each. Slammer worm appears and makes 2 networks untrustworthy
because of their internal policies. The third network is fine, but your
capacity to it probably won't hold 300Mb/s.

Why is your 100Mbit/sec delivered over OC-3s when with 100Mbit/sec CIRs you
can get OC-12 ports from basically everyone?

Network planning is not just about whether you make a mistake or not.

Network planning *is* about not making mistakes.

Performance is dependent upon the networks we interconnect with, issues
they may have, and how well we can scale our network to circumvent those
issues while remaining cost effective.

Rubbish.

Performance does not depend on cost effective interconnects. They are NOT
related.

My hypothetical is simplistic. Increase the number of peers, as well as
the number of peering points to said peers, determine the cost and
capacity necessary during multiple points of failure, plus the cost within
your own network of redirecting traffic that normally takes more diverse
routes, apply chaos theory and recalculate.

Rubbish again. The fundamental problem with this entire industry is that
some very clever marketing and sales people managed to convince an entire
bunch of rather bright geeks that networks are complicated. The truth is,
they are not; however, since you have been told that they are over a million
times, you want to believe that they are.

Alex

>>But curiously, adding some
>>incremental capacity to a network can, under some conditions, actually
>>make it worse!
>
> Oh, rubbish.

Hmm? You dispute the result in Roughgarden's paper - that Braess'
paradox can occur? Or are you just saying that if the Internet is run
solely by people at your intelligence level, it'll never come up as an
issue? (I've not said Braess' paradox is common; only that it's an
interesting result.)

No, I simply read the paper without a need to mold it into my "product
vision".

The paper describes a perfect mathematical system with unlimited resources
available in it. Since we have neither the perfect system nor the resources,
it really does not apply to the real world, apart from being nice theoretical
background (similar to the nice background papers published around 1995 that
said there was no way CPUs the size of the P4 would be able to run at speeds
over 1 GHz).

Glad to be accused of offering a technology that can only do what smart
people can do (whether I agree or not). Since the supply of clue in
this world is limited ...

The problem is that the technology offered really does not do anything more
than what already exists. It is similar to going to Wawa and buying 18 rolls
of TP for 99c each, rather than driving to the Walmart 4 minutes away and
buying the same 18 rolls for $4.50.

>>If you want an optimal network, you can:
>> 1/ sit in the middle and play at being the God of TE
>> 2/ have the various actors optimize "selfishly"
>> 3/ count hops and assume that's close enough
>>(Oh, and if you're into that sort of thing, I suppose you can try
>>dropping some packets to speed things up.)
>
> Oh how about "fire those who are ordering wrong size of interconnects, order
> right sized interconnects, count the moneys that you did not waste".

Certainly a reasonable addition to the list - I'd prefer it to those who
believe smart packet loss will solve all our problems.

Actually, I said that using this magical technology, which somehow eliminates
the need for clueful staff, as the solution is equivalent to claiming that
since one uses QoS on packets, he/she does not need to address the problems
of packet loss.

Trouble is,
firing staff and buying big cross connects does rather assume that all
the people you hand packets to are as smart as you are (or can be
stopped from misbehaving promptly).

Rubbish.

SM fiber for OC-3, OC-12, OC-48, OC-192 costs the same.
It is the OC-3, OC-12, OC-48, OC-192 service that you are ordering from one
cage to the one next to you that is killing you.

Even accounting for the costs of cards to support those OC-x connections,
you will be much better off. Why are you using OC-12 and above anyway? Did
someone forget to tell you that gige actually works rather well for
interconnect applications?

In your own parallel posts, you acknowledge all the murky reasons why
other people don't build their networks in the way you'd like. OK; so I
can make my own network and interconnects Yuriev-compliant, but that
still doesn't solve all the issues as long as I want to talk to people
across fabric that is not Y-c. It's a network of networks we live in.

And no amount of technology on YOUR end is going to make the other side
Yuriev-compliant, because the moment the packet leaves your network, you
have exactly the same problem as the one I have described.

Did you not notice that people really do not care whether the problems they
see in a traceroute are on the 2nd, 3rd, 4th or later hops inside the other
network? It is all the same to them. They care about the presence of a
problem, no matter where it is.

Alex

To those who really dont get what I am saying:

If you do not have enough capacity, the selfish or non-selfish routing does
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                                not matter.
                                ^^^^^^^^^^

99.99999% of network problems are caused by CAPACITY issues, be that packet
loss or routers incapable of dealing with the traffic.

Addressing the 0.00001% of problems caused by selfish routing is not going to
make it better. Address the issues that cause 99.99999% of the problems
before addressing the 0.00001%.

Alex

alex@yuriev.com wrote:

Do you have more than one exit right now? Do you push around 100Mbit/sec to
each of those providers? Since you apparently have the money, did you
negotiate deals around $50/Mbit/sec exit on giges and OC-12 with 100Mbit
CIR?

You assume things not in fact, primarily that I have the money to increase capacity. Burstable costs more than full rate. One can use 3 lower-end routers to adequately handle 3 OC3 circuits much more cost effectively than buying a single high-end router to handle OC-12, gig-e, etc. In addition, if the circuits are handed off at different geographic locations, you will need 3 high-end routers to handle the OC-12, gig-e, etc., which increases the price dramatically.

You also assumed that OC-12 and Gig-e are available in the geographic region and that the interconnected networks can support such throughput in that portion of their network.

I don't know which world you live in, but today the sales people will beg
for 100Mbit/sec CIRs on OC-12 links just to meet their quotas. So, why don't
you get those 100Mbit/sec CIRs on OC-12c?

I live in rural America. I provide access for rural America. There is only one public exchange point close by, and it is over 100 miles from my nearest pop (and the exchange has its own problems). Capacity from various networks is limited, and it can take over 6 months to get the carriers upgraded to handle a new OC3, much less OC-12 or gig-e.

Why is your 100Mbit/sec delivered over OC-3s when with 100Mbit/sec CIRs you
can get OC-12 ports from basically everyone?

Your perspective is skewed. OC-12 ports are not available from everyone, everywhere. Obtaining, lighting, and maintaining fiber for long haul is not inexpensive.

Network planning *is* about not making mistakes.

Only in an ideal scenario. Cost has a lot to do with it. I've watched numerous companies enter Chapter 11 due to spending too much on capacity. The perfect network is not the perfect business model.

Performance does not depend on cost effective interconnects. They are NOT
related.

No, but if you do not have cost effective interconnects, you will not have a business. Operating at a guaranteed loss is stupid at best.

Rubbish again. The fundamental problem with this entire industry is that
some very clever marketing and sales people managed to convince an entire
bunch of rather bright geeks that networks are complicated. The truth is,
they are not; however, since you have been told that they are over a million
times, you want to believe that they are.

To use your word, "Rubbish." Your opinions are based upon specific business models and available resources. They do not take into account that costs limit available resources. In many cases, money itself is the resource that is lacking. Marketing and sales people do not hold sway over everyone, but many of those people are also shrewd in business and recognize that concessions must be made to maintain profitability.

-jack

>
> Do you have more than one exit right now? Do you push around 100Mbit/sec to
> each of those providers? Since you apparently have the money, did you
> negotiate deals around $50/Mbit/sec exit on giges and OC-12 with 100Mbit
> CIR?
>
You assume things not in fact, primarily that I have the money to
increase capacity. Burstable costs more than full rate. One can use 3
lower end routers to adequately handle 3 OC3 circuits much more cost
effectively than buying a single high end router to handle OC-12, gig-e,
etc.

Rubbish. One does not need to buy a high-end router to deal with OC-12 worth
of traffic. In fact, one can get OC-12 interfaces for not so high-end
routers. Not to mention that for a lot of applications discussed, giges
would do just fine.

In addition, if the circuits are handed off at different
geographic locations, you will need 3 high end routers to handle the
OC-12, gig-e, etc, which increases the price exponentially.

If one cannot afford to buy them outright, one leases or finances them. It
will still be cheaper than adding a magic bullet box, plus one will actually
add routing capacity that one needs anyway.

You also assumed that OC-12 and Gig-e are available in the geographic
region and that the interconnected networks can support such throughput
in that portion of their network.

OC-12s and giges are available nearly everywhere as long as one actually has
100Mbit/sec traffic levels. If a sales person from the company is unwilling
to accommodate one using OC-12 and gige fabric, then the odds are the person
wanting those OC-12s/giges does not have the traffic he/she claims to have
to begin with. If that traffic does, in fact, exist, then one simply needs
to find a different sales person or start doing business with a different
company.
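The "100Mbit/sec CIR on an OC-12" arrangement argued over throughout this exchange is burstable billing. A simplified sketch of one common scheme (95th-percentile of 5-minute samples, floored at the committed rate; the exact mechanics and every number here are assumptions for illustration, not terms from the thread):

```python
# Sketch of burstable CIR billing: sample traffic every 5 minutes, bill
# the 95th-percentile sample, but never less than the committed rate.
# All rates and prices are hypothetical.

def monthly_bill(samples_mbps, cir_mbps=100, dollars_per_mbit=50):
    """Bill = max(95th-percentile of samples, CIR) * per-Mbit rate."""
    ordered = sorted(samples_mbps)
    idx = max(0, int(0.95 * len(ordered)) - 1)  # discard the top 5% of samples
    p95 = ordered[idx]
    return max(p95, cir_mbps) * dollars_per_mbit

# Hypothetical month: mostly ~80 Mbit/s with occasional bursts to 300.
samples = [80] * 95 + [300] * 5
print(monthly_bill(samples))  # 5000 -- the bursts fall in the discarded 5%
```

This is why bursting above the CIR on an over-provisioned port can still cost nothing extra, which is the economic case for taking the OC-12 port.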

>> I don't know which world you live in, but today the sales people will beg
>> for 100Mbit/sec CIRs on OC-12 links just to meet their quotas. So, why don't
>> you get those 100Mbit/sec CIRs on OC-12c?
>>
> I live in rural America. I provide access for rural America. There is
> only one public exchange point close by and it is over 100 miles from
> my nearest pop (and the exchange has its own problems). Capacity from
> various networks is limited, and it can take over 6 months to get the
> carriers upgraded to handle a new OC3, much less OC-12 or gig-e.

Why are you getting carriers to upgrade to handle OC-3 as opposed to
getting your own dark fiber and lighting it up? In 6 months they will build
you the 100 miles in the middle of nowhere. It is actually a lot more
difficult to get them to cross Park Avenue in Manhattan than it is to get
fiber into a rural area.

>> Why is your 100Mbit/sec delivered over OC-3s when with 100Mbit/sec CIRs you
>> can get OC-12 ports from basically everyone?
>>
> Your perspective is skewed. OC-12 ports are not available from everyone,
> everywhere. Obtaining, lighting, and maintaining fiber for long haul is
> not inexpensive.

Do you *really* believe that metro gear works only in metro areas? I must
have been dreaming those nightmares of bringing up a PHL-DC span with
only metro equipment located in DC, BLT and PHL.

>> Network planning *is* about not making mistakes.
>>
> Only in an ideal scenario. Cost has a lot to do with it. I've watched
> numerous companies enter Chapter 11 due to spending too much on
> capacity. The perfect network is not the perfect business model.

Again, network planning *is* about not making mistakes.
Having a business model that does not start with "Cisco will finance us so
they get to make the quarterly numbers" is not a part of network planning.
The reason a lot of those companies are in Chapter 11 is because they were
buying long-haul gear when they could have used metro gear for 1/4 of the
price, because they insisted on paying nearly list price for gear, and
because they ordered 4xOC3 as opposed to 1xOC12 (even though the application
required gige to begin with).

>> Performance does not depend on cost effective interconnects. They are NOT
>> related.

> No, but if you do not have cost effective interconnects, you will not
> have a business. Operating at a guaranteed loss is stupid at best.

No, but buying a piece of gear that provides a routing service for which one
gets to pay thousands of dollars every month, when one does not have enough
routers, is equivalent to taking a daily-driven car that has engine problems
to a detailing shop instead of a mechanic while saying "I cannot afford
both". As far as operating at a loss goes, it is nearly guaranteed for a
company that sells broadband via Ethernet at a price less than it pays for
those megabits, gambling that the clients won't be using the bandwidth, as
opposed to calculating the real cost and marking it up.
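The gamble described here reduces to a single ratio: a seller who pays more per megabit than it charges stays solvent only while average customer utilization stays below price divided by cost. A minimal sketch, with entirely hypothetical figures:

```python
# Hypothetical figures: transit bought at $100 per Mbit/s per month,
# Ethernet broadband sold at $40 per Mbit/s per month.

def breakeven_utilization(transit_cost_per_mbit: float,
                          price_per_mbit_sold: float) -> float:
    """Highest average utilization (as a fraction of sold capacity) at
    which revenue still covers the transit bill."""
    return price_per_mbit_sold / transit_cost_per_mbit

u = breakeven_utilization(100, 40)
print(f"{u:.0%}")  # 40% -- if customers average more than this, every sold megabit loses money
```

Selling below cost is therefore a bet on the oversubscription ratio, not a calculated margin, which is the point being made above.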

>> Rubbish again. The fundamental problem with this entire industry is that
>> some very clever marketing and sales people managed to convince an entire
>> bunch of rather bright geeks that networks are complicated. The truth is,
>> they are not; however, since you have been told that they are over a
>> million times, you want to believe that they are.
>>
> To use your word, "Rubbish." Your opinions are based upon specific
> business models and available resources.

My opinions are based on observation of what happens to the companies whose
networks do not run well, companies that buy tons of gear to please the VCs
that finance them (because the VCs invested in those who sell them the
gear), companies that build networks and think the customers will come.

> They do not take into account
> that costs limit available resources. In many cases, money itself is
> the resource that is lacking. Marketing and sales people do not hold
> sway over everyone, but many of those people are also shrewd in
> business and recognize that concessions must be made to maintain
> profitability.

So how is spending money on a box that sometimes helps your routing
decisions, when you don't have enough routers, the concession that must be
made to maintain profitability?

Alex

Thus spake <alex@yuriev.com>

> Do you *really* believe that metro gear works only in metro areas? I
> must have been dreaming those nightmares of bringing up a PHL-DC
> span with only metro equipment located in DC, BLT and PHL.

...

> So how is spending money on a box that sometimes helps your routing
> decisions, when you don't have enough routers, the concession that
> must be made to maintain profitability?

Your arguments are as repetitive as they are incoherent. Your position,
distilled, is that everyone will go bankrupt if they listen to their
vendors, and instead should spend money like crazy to build excess capacity
into their network which they wouldn't need with traffic engineering.
Rubbish.

Lighting up dark fiber between pops is not as cheap, fast, or simple as you
pretend it is, nor is it necessarily less expensive than purchasing
circuits. I've witnessed the implosion of many large ISPs and CLECs, and
all failed due to incredibly stupid business plans and/or mismanagement --
not technical or engineering decisions. This includes all those Cisco
Funded Networks that provide us so much amusement.

Your personal anti-vendor crusade is adding much more heat than light to
this thread. There are many technical approaches which may be valid
depending on business factors; to claim any approach but yours is financial
suicide is incredibly naive, not to mention provably wrong.

S

Stephen Sprunk "God does not play dice." --Albert Einstein
CCIE #3723 "God is an inveterate gambler, and He throws the
K5SSS dice at every possible opportunity." --Stephen Hawking

Umm.. Alex? There are places around the US that would make a fiber pull
across Park Avenue look like a cakewalk. We're talking places that won't
get a fiber pull because no company will lay 50 to 100 miles of fiber before
having a *guarantee* of multiple customers to amortize the cost over.

Wyoming/Idaho... Maine.. Appalachia..

http://www.ecorridors.vt.edu is what we're doing to try to fix the
chicken-and-egg problem (basically, the same build-and-privatize model that
we already used for http://www.bev.net and http://www.networkvirginia.net).

http://www.ecorridors.vt.edu/papers/location/lenowisco/Demo%20Slide%20Show.pdf

Go look at page 3, and ask yourself what provider in their right mind would
start pulling dark cable to *THERE* - the closest reasonable city is
Knoxville, TN, around 100 miles away. We're talking about places that make
Blacksburg, VA look like suburbs.. :wink:

On the other hand, if you're a provider that thinks this makes sense, let
us know.. :wink:

> Your arguments are as repetitive as they are incoherent. Your position,
> distilled, is that everyone will go bankrupt if they listen to their
> vendors, and instead should spend money like crazy to build excess capacity
> into their network which they wouldn't need with traffic engineering.

If the vendor is sitting on the opposite side of the negotiating table, the
vendor's goal is fundamentally different from your goal. The sales people
of the vendors are paid based on commissions. This means that they are
interested in selling you $40 lightbulbs. That is what makes companies spend
money like crazy.

> Lighting up dark fiber between pops is not as cheap, fast, or simple as you
> pretend it is, nor is it necessarily less expensive than purchasing
> circuits.

Lighting fiber between pops is fast, easy, and cheap. It is clearly cheaper
than paying for OC-48 circuits.

> I've witnessed the implosion of many large ISPs and CLECs, and
> all failed due to incredibly stupid business plans and/or mismanagement --
> not technical or engineering decisions.

What those ISPs and CLECs did was buy tons of equipment and tons of
circuits, light up DS3s in addition to OC-12s for the backbone, and hire too
many people who told them that it is difficult to run networks well. Thanks
to those ISPs, one can now get a $3M lot of M40s and M160s for $750k cash.

> This includes all those Cisco Funded Networks that provide us so much
> amusement.

And somehow engineers are still buying into a magic box solution?

> Your personal anti-vendor crusade is adding much more heat than light to
> this thread. There are many technical approaches which may be valid
> depending on business factors; to claim any approach but yours is financial
> suicide is incredibly naive, not to mention provably wrong.

I do not have an anti-vendor crusade. I have a crusade against vendors who
are selling ice to Eskimos while molding the concept of what ice is to fit
their whitepapers.

Alex