The future of NAPs & IXPs

All is well, but you missed one of the most critical issues with the
current IXPs: lack of scalability. The private point-to-point
interconnects are at least as fast as backbones. Fixing IXP scalability
issues requires a somewhat radical departure from the current router
architecture, such as the one being pursued by terabit router vendors.

In other words, even if multi-party IXPs are more cost-effective, they
are currently (and in the near-term future) unable to handle the load.

Also, the number of interconnects (IXPs or direct) cannot be large,
because of the flap-amplification properties of inter-backbone
connections.
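
To make the amplification concrete, here is a back-of-envelope sketch
(mine, not from the post; the scaling rule is an illustrative
assumption): a flap learned at one interconnect is re-advertised over
the other BGP sessions a backbone holds, so per-flap UPDATE churn grows
with the total session count rather than with the number of backbones.

```python
# Back-of-envelope sketch of flap amplification (illustrative only).
# If n backbones peer pairwise at k interconnect points, the session
# count -- a crude upper bound on the UPDATEs a single flap can
# trigger -- grows as k * n*(n-1)/2.

def sessions(n_backbones: int, k_interconnects: int) -> int:
    """BGP sessions in a full mesh of backbones peering at k points."""
    pairs = n_backbones * (n_backbones - 1) // 2
    return pairs * k_interconnects

for k in (1, 5, 20):
    print(f"k={k:2d} interconnects per pair -> {sessions(10, k)} sessions")
# Keeping k in the neighborhood of 5 keeps the blast radius bounded.
```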

(BTW, O(5) can be an arbitrarily large fixed number, simply speaking :-)

--vadim

For what it's worth... I just finished a paper that highlights the
trade-offs between the direct circuit interconnect model and the
exchange point interconnection model for ISPs. The paper discusses the
operational and financial models (taking into account circuit costs,
the cost of exchange participation, the cost of dark fiber, etc.) and
the implications of these strategies across the number of
interconnection participants and the bandwidth utilization between the
participants.
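
As a rough illustration of the trade-off such a model captures (the
prices below are made up; they are not the paper's figures): a full
mesh of direct circuits costs each ISP one circuit per other
participant, so its per-ISP cost grows linearly with the number of
participants, while an exchange costs a single backhaul circuit plus a
port fee regardless of how many parties attend.

```python
# Hypothetical cost comparison (all prices invented for illustration):
# per-ISP monthly cost of full-mesh direct circuits vs. one exchange
# connection, as the number of participants n grows.

CIRCUIT_COST = 8_000    # hypothetical monthly cost of one direct circuit
EXCHANGE_PORT = 5_000   # hypothetical exchange participation fee
BACKHAUL_COST = 8_000   # hypothetical circuit from ISP POP to the exchange

def direct_cost_per_isp(n: int) -> int:
    """Each ISP provisions a private circuit to each of the n-1 others."""
    return (n - 1) * CIRCUIT_COST

def exchange_cost_per_isp() -> int:
    """Each ISP buys one backhaul circuit to the exchange plus the port fee."""
    return BACKHAUL_COST + EXCHANGE_PORT

for n in (3, 5, 10, 20):
    print(f"n={n:2d}: direct {direct_cost_per_isp(n):7,d}  "
          f"exchange {exchange_cost_per_isp():7,d}")
```

Under these made-up prices the exchange wins as soon as there are more
than a couple of participants; where the crossover actually falls is
exactly what the paper's financial model is meant to pin down.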

To cut to the chase, the major points from the paper:

> All is well, but you missed one of the most critical issues with the
> current IXPs: lack of scalability. The private point-to-point
> interconnects are at least as fast as backbones. Fixing IXP scalability
> issues requires a somewhat radical departure from the current router
> architecture, such as the one being pursued by terabit router vendors.

In both the direct circuit interconnection model and the exchange-based
interconnection model, point-to-point interconnection can be
accomplished with at least equal scalability. A private cross-connect
(a piece of fiber) within an exchange can be driven at the same speed
as a piece of fiber that travels many miles under the ground.

(I think you inferred that there was a switch involved in the model.
If so, I agree: there are alternative ways to interconnect within an
exchange (i.e., switch vs. terabit routing technology, etc.) that each
have different characteristics and scalability issues. I'm comparing
interconnection environments apples to apples.)

----- snip -----

> (BTW, O(5) can be an arbitrarily large fixed number, simply speaking :-)

OK - I'll restate: about, ~, roughly, and in the neighborhood of 5 ;-)

From a traffic engineering point of view, I suspect the direct circuit
interconnection (private peering) model is considerably simpler to deal
with than the exchange-based model.

In the direct circuit case, if the interconnect pipe does not have the
oomph to satisfy the performance characteristics of the peering
traffic, the problem can be detected relatively easily (as a percentage
of packet loss) and resolved between the two parties (buy more pipes,
or a fatter one).
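
A minimal sketch of that check (the counter source and the 1%
threshold are my assumptions, not from the post): poll the drop and
forward counters on the interconnect interface each interval, e.g. via
SNMP, and flag sustained loss.

```python
# Minimal sketch of the loss check described above. In practice the
# deltas would come from periodic SNMP polls of the interconnect
# interface; here they are passed in directly. The 1% threshold is a
# hypothetical choice.

LOSS_THRESHOLD = 0.01  # assumed: flag sustained loss above 1%

def loss_fraction(drops_delta: int, forwarded_delta: int) -> float:
    """Fraction of packets dropped over one polling interval."""
    total = drops_delta + forwarded_delta
    return drops_delta / total if total else 0.0

def needs_fatter_pipe(drops_delta: int, forwarded_delta: int) -> bool:
    """True if loss on the interconnect exceeds the agreed threshold."""
    return loss_fraction(drops_delta, forwarded_delta) > LOSS_THRESHOLD

# e.g. 2,400 drops against 150,000 forwarded packets (~1.6% loss):
print(needs_fatter_pipe(2_400, 150_000))   # True
```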

In the exchange case, assume the exchange box connects an OC-12 link
from each of four providers. If provider A is experiencing problems
with provider D, the cause could be a problem with D's pipe and the
other equipment associated with provider D, but it could just as likely
be too much concurrent traffic from providers B and C toward D. Such
overloading due to [temporal] traffic aggregation can be pretty tricky
to identify [particularly since provider A is unlikely to have access
to the traffic profiles/logs of providers B and C] and even trickier to
figure out what to do about.
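
A toy simulation of exactly that scenario (all rates invented): no
single flow toward D comes close to filling an OC-12 (~622 Mbps), yet
D's link saturates in the one interval where B's and C's bursts
coincide, and A, sending at a steady rate, has no data that explains
its loss.

```python
# Toy illustration of temporal traffic aggregation (rates made up).
# Each provider's traffic toward D over five intervals, in Mbps.

OC12_MBPS = 622

traffic_to_d = {
    "A": [100, 100, 100, 100, 100],  # steady
    "B": [100, 400, 100, 400, 100],  # bursty
    "C": [100, 100, 400, 400, 100],  # bursty
}

for t in range(5):
    total = sum(flows[t] for flows in traffic_to_d.values())
    status = "OVERLOADED" if total > OC12_MBPS else "ok"
    print(f"t={t}: {total:4d} Mbps into D's OC-12 -> {status}")

# Only t=3, where B's and C's bursts coincide, overloads D's link --
# and from A's vantage point (a constant 100 Mbps with no view of B's
# or C's traffic logs) the loss looks like a problem on D's side.
```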

Regards,
John Leong