ISP network re-design feedback requested

Hi everyone,

Hopefully my question is operational 'enough' to be asked here, as I
don't know of any other place to ask...

Still trying to redesign (as I go) our ISP network, I've realized that
we are not large enough to deploy a full three-layer approach (core,
dist, access), so I'm trying to consolidate, with the ability to scale
if necessary. I also want full network reachability if I need to take
any one router off-line for upgrade or replacement purposes.

Given the following diagram (forgive me, it was drafted rather quickly
with Visio, and just dumped onto a web box), I'm hoping for advice on
whether I'm leaning the right way.

http://ibctech.ca/p-ce.html

What I want:

- ability to take a router off-line for upgrade, and not be concerned
about reachability issues if the lab-tested procedure fails miserably on
production gear
- a relatively easy way to keep traffic control measures at the
access/edge (ACLs, uRPF, RTBH etc)
- the 'core' free of interface ACLs (if possible), only running
filtering ingress to the process-switch environment
- the ability to scale without having to have a full mesh with all PE
routers
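As a sketch of what those edge measures can look like (Cisco-style syntax; all addresses, tags, and names here are invented for illustration, not taken from Steve's network):

```
! strict uRPF on a customer-facing edge port
interface GigabitEthernet0/0
 ip verify unicast source reachable-via rx
!
! RTBH plumbing: every edge router carries a discard route for a
! well-known next-hop; the trigger router announces a victim /32
! tagged for blackholing
ip route 192.0.2.1 255.255.255.255 Null0
!
route-map RTBH-TRIGGER permit 10
 match tag 666
 set ip next-hop 192.0.2.1
 set community no-export
```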

What I have:

- numerous CPE routers connected to a CE switch that multi-homes into
two different routers at two different locations in our access layer
- an access layer that has no routers capable of a full BGP table (well,
v4 that is)
- a core layer that can handle full tables
- a network access layer on the north side of the diagram that you can't
see, with the same type of setup, but with full v4 routing tables being
announced in
- the access layer provides def-orig to CPE routers
- the PE protects the CE from becoming transit
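That last point (keeping the CE from becoming transit) is usually just an inbound filter on the PE; a minimal sketch, with invented AS numbers and prefixes:

```
! accept only the customer's own prefix from the CE session
ip prefix-list CE-IN seq 5 permit 198.51.100.0/24
!
router bgp 64512
 neighbor 198.51.100.1 remote-as 64499
 neighbor 198.51.100.1 prefix-list CE-IN in
```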

What I am thinking:

- use the core routers as route-reflectors to the PE access routers,
including a def-orig where it applies (to remain scalable, until PE can
be replaced to hold full routes)
- the PE routers send def-orig on to the CE sites
- stop thinking about every network like it is an 'enterprise' network
- look at most of my ISP environment as 'access clients', instead of
always seeing my ISP as everything in my buildings. See the ISP as a
'network provider', and then realize the rest are just access 'clients':

-- the 'hosting provider'
-- the 'collocation provider'
-- the 'Internet provider'
-- the 'email provider'
-- etc.
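The route-reflector-plus-default idea in the first bullet might look roughly like this on a core router (Cisco-style syntax; AS numbers, addresses, and list names are invented):

```
router bgp 64512
 ! PE that cannot hold full v4 routes yet
 neighbor 192.0.2.10 remote-as 64512
 neighbor 192.0.2.10 route-reflector-client
 neighbor 192.0.2.10 default-originate
 ! send only the default until the PE is upgraded for full tables
 neighbor 192.0.2.10 prefix-list DEFAULT-ONLY out
!
ip prefix-list DEFAULT-ONLY seq 5 permit 0.0.0.0/0
```

Note that default-originate generates the default toward that neighbor regardless of outbound filters, so the prefix-list only suppresses everything else.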

There is much, much more, but feedback on the above setup will get me
going on the proper path...

Steve

The Coalition of Internet Service Providers has filed a substantial contribution at the CRTC stating:

1) The CRTC should forbid DPI, as it cannot be proven to be 98.5% effective at trapping P2P, and so cannot guarantee congestion relief

2) The CRTC should allow for other forms of traffic management by ISPs, such as Flow Management

http://www.crtc.gc.ca/public/partvii/2008/8646/c12_200815400/1029835.zip

This is part of the public record at the following address:

http://www.crtc.gc.ca/PartVII/eng/2008/8646/c12_200815400.htm

The world will see Canada taking head-on the issue of the legitimacy of DEEP PACKET INSPECTION as a means of properly managing an incumbent's network behind the unbundling/peering interface.

NANOG cannot pretend that this debate does not take place and remain silent on this.

Best regards,

F.

Francois Menard wrote:

The Coalition of Internet Service Providers has filed a substantial
contribution at the CRTC stating:

1) The CRTC should forbid DPI, as it cannot be proven to be 98.5%
effective at trapping P2P, and so cannot guarantee congestion relief

2) The CRTC should allow for other forms of traffic management by ISPs,
such as Flow Management

http://www.crtc.gc.ca/public/partvii/2008/8646/c12_200815400/1029835.zip

This is part of the public record at the following address:

http://www.crtc.gc.ca/PartVII/eng/2008/8646/c12_200815400.htm

The world will see Canada taking head-on the issue of the legitimacy
of DEEP PACKET INSPECTION as a means of properly managing an
incumbent's network behind the unbundling/peering interface.

NANOG cannot pretend that this debate does not take place and remain
silent on this.

Francois:

Are you responding directly to my post via an automated filter, or am I
the only one seeing the hijacking of this thread?

Either way, great! I forgot to mention in my original post that I am
considering running HSRP (VRRP) on my core routers, for the
access-layer clients who are not multi-homed.

Given my setup, does RFC3768 at the 'core' make sense?
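For reference, an RFC 3768 (VRRP) gateway on one of the core-router pair might be as simple as the following (Cisco-style syntax; addresses, group number, and timers are invented):

```
interface GigabitEthernet0/1
 ip address 203.0.113.2 255.255.255.0
 ! shared gateway address for single-homed access clients
 vrrp 10 ip 203.0.113.1
 vrrp 10 priority 110
 vrrp 10 preempt delay minimum 60
```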

Steve

Francois:

Should your email have also included a French interpretation as well?

Sincerely,

Lorell Hathcock

If by French you are asking whether the time could be afforded to translate all of the filing into French and file in both languages, the answer is unfortunately no.

However, I would most certainly assist with any requested translation.

My take on this is that there is no difference between a peering router ignoring DSCP bits and queuing all of the traffic in the same lane, thereby disregarding the intended prioritization between the peering parties.

DPI gear, instead of ignoring DSCP bits, computes DSCP bits from the content of the packet (so-called application headers).

I agree with various party submissions that the 5-layer Internet model does not provide for such a concept of headers, which DPI vendors, ILECs, and incumbent cable operators call application headers.

I believe that any traffic management which purports to place its hook on application headers is, by definition, violating network neutrality.

However, traffic management in the form of 'pacing packets' based on their inter-arrival behavior, multiplicity of sources and multiplicity of destinations, remains legit in my humble opinion.

Just like ignoring DSCP bits at the peering interface is legit at this time.

It's like the post office getting envelopes by the truckload, then opening each envelope and reading the content to decide when to send the opened letter for delivery, either by foot or by car, claiming that such a decision process will prevent envelopes from flooding the post office as they come in for delivery in the last mile.

On the other hand, traffic management such as flow management deals with things differently, by ensuring that the envelopes do not get to the post office too fast, thus permitting the letters to always be dispatched by car, except those envelopes arriving at the post office exhibiting the behaviour of P2P, which are then sent for delivery by foot. In this latter case, the envelopes are never opened.
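The "car vs. foot" pacing described above can be sketched very loosely in Python; the classifier thresholds and the shape of the flow statistics are invented for illustration, the point being only that the decision uses header-level flow behaviour, never payload:

```python
import time

# Loose sketch of flow management: flows are identified only by header
# fields, never payload. Flows whose fan-out and packet inter-arrival
# gaps look P2P-like are paced by a slower token bucket ("delivery by
# foot"); everything else stays in the fast lane ("by car").
# Thresholds and the flow_stats shape are invented for illustration.

class TokenBucket:
    def __init__(self, rate_pps, burst):
        self.rate, self.burst = rate_pps, burst
        self.tokens, self.last = float(burst), time.monotonic()

    def allow(self):
        # refill tokens based on elapsed time, then try to spend one
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

def looks_like_p2p(flow_stats):
    # many distinct peers plus tightly spaced packets => P2P-like
    return flow_stats["peers"] > 20 and flow_stats["mean_gap_s"] < 0.01

fast = TokenBucket(rate_pps=10_000, burst=100)   # "by car"
slow = TokenBucket(rate_pps=100, burst=10)       # "by foot"

def admit(flow_stats):
    bucket = slow if looks_like_p2p(flow_stats) else fast
    return bucket.allow()
```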

F.

---------- see the small change in the sentence below ---- apologies.

My take on this is that there is no difference between a peering router ignoring DSCP bits and queuing all of the traffic in the same lane, thereby disregarding the intended prioritization between the peering parties AND FLOW management...

F.

There is, however, at least one more dimension with postal or package
delivery services. They offer different delivery priorities with different
pricing, may have surcharges or refuse large content that the physical
transport technically could carry, and offer sender-pays and receiver-pays
options.

A few specialized cases do apply as well, such as some package delivery
services accepting and handling hazardous materials only with declaration
and surcharges.

It seems that this discussion emphasizes technical capabilities, which
certainly are relevant, but does not necessarily consider economic
incentives or disincentives. We are probably in agreement that either DPI or
traffic analysis could identify high-volume P2P; how does one deal with the
customer assumption that they "should" be able to do whatever they like?
Content distribution networks and caches do allow a much cleaner economic
model, if not as convenient.

There is, however, at least one more dimension with postal or package
delivery services. They offer different delivery priorities with different
pricing, may have surcharges or refuse large content that the physical
transport technically could carry, and offer sender-pays and receiver-pays
options.

The starting hypothesis is based on envelopes of the same size, weight, colour, with no chemical powder in them, and all with the same stamp value...

The emphasis is on the need to open the envelope to decide how to route them...

F.

The emphasis is on the need to open the envelope to decide how to route
them...

and more of my margin goes to the folk who make envelope openers. and
this is a good thing? and it helps get the packets to the customer how?

pfui!

randy

Yah. I like what Mike O'Dell said at
http://www.listbox.com/member/archive/247/2009/03/sort/time_rev/page/1/entry/3:12/

  I admit to no debate on Deep Packet Inspection by ISPs,
  advertisers, or other assorted eavesdroppers.

  It is very, very simple and as black-and-white as they come:

  It is WRONG and Deeply Evil.

  There is no righteous purpose, period. Full Stop.

  The fact that various government agencies are very good at it
  is irrelevant.

  "When the President does it, it's STILL Wrong."

    --Steve Bellovin, http://www.cs.columbia.edu/~smb

go mo!

randy

With regards to DDoS mitigation, it's sometimes necessary to go above layers-3/-4 in the event of layer-7-targeted attacks.

In fact, it's sometimes important to have the ability to parse packet payloads and/or interact with traffic in some layer-3/layer-4 attacks, depending upon the type of traffic, source distribution, legitimate proxy intermediaries, spoofed vs. non-spoofed, and so forth.
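As a loose illustration of why layer-7 visibility matters here (the window, threshold, and request format are invented; this is a sketch, not any particular product's method):

```python
import time
from collections import Counter, deque

# Loose sketch of layer-7-aware flood detection: a flood aimed at one
# expensive URI can hide inside normal-looking layer-3/4 counters, but
# stands out once the HTTP request line is parsed. WINDOW, THRESHOLD,
# and the request format are invented for illustration.

WINDOW = 10.0      # seconds of history to keep
THRESHOLD = 100    # requests per URI per window before flagging

events = deque()   # (timestamp, uri) pairs, oldest first
per_uri = Counter()

def observe(payload, now=None):
    """Record one request; return the URI if it crossed the threshold."""
    now = time.monotonic() if now is None else now
    try:
        method, uri, version = payload.split(maxsplit=2)
    except ValueError:
        return None                    # not a parseable request line
    events.append((now, uri))
    per_uri[uri] += 1
    while events and now - events[0][0] > WINDOW:   # expire old samples
        _, old = events.popleft()
        per_uri[old] -= 1
    return uri if per_uri[uri] > THRESHOLD else None
```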

A reminder that political threads are prohibited from the mailing list under the AUP.

Participants in the "DPI or Flow Management" thread might wish to move their discussion to a more politically oriented forum such as the Network Neutrality Squad mailing list:

http://www.nnsquad.org/

Simon Lyall
(on behalf of) NANOG Mailing List Committee.

In short, the entire DPI debate is starting to run along the same
lines, and flog the same horses, as the gun control debate.

Yes, dpi has great, useful applications (ddos mitigation and other
security, for example). And it has bad / harmful applications
(dictatorships doing dpi to catch political dissent).

That says a lot more about inappropriate / appropriate use of dpi
rather than dpi itself.

Nothing at all in DPI that makes it wrong, deeply evil etc.

-srs

Suresh Ramasubramanian wrote:

In short, the entire DPI debate is starting to run along the same
lines, and flog the same horses, as the gun control debate.

Yes, dpi has great, useful applications (ddos mitigation and other
security, for example). And it has bad / harmful applications
(dictatorships doing dpi to catch political dissent).

That says a lot more about inappropriate / appropriate use of dpi
rather than dpi itself.

Nothing at all in DPI that makes it wrong, deeply evil etc.

Which is why the political debates over it bother me. Declaring DPI evil and regulating it could very well limit security in the future; not to mention that DPI tends to be extremely vague in definition, depending upon its implementation.

Jack

The issue is use of dpi to eliminate congestion stemming from p2p's natural unfairness behind the unbundling interface.

F.