Cogent service

Hi

Does anyone have any comments (good or bad) about Cogent as a transit
provider in New York?
They seem to be too cheap.

Arie

> Does anyone have any comments (good or bad) about Cogent as a transit
> provider in New York?

No. But we (ISC) are using them in San Francisco (at 200 Paul Street) and
they've been fine.

We use them. They work, they're reliable, they keep their promises, and
their NOC is incredibly responsive during denial of service attacks or other
problems.

  My only complaint is that if you need anything customized at all, they just
won't do it. They script and standardize everything. At one point, we needed
one slight configuration change to a machine colocated at our office that
wouldn't affect anyone but us. They admitted we needed it and couldn't get
the effect any other way, but just couldn't do it. "That's not the product we
offer."

  DS

> wouldn't affect anyone but us. They admitted we needed it and couldn't get
> the effect any other way, but just couldn't do it. "That's not the product we
> offer."

Yeah -- those types of things suck, but from their perspective this kind of
policy allows them to keep their service level consistent. One-offs can end
up being expensive in the long run. "If I do it for you, everyone will want
the same thing!"

There are many of us selling a Cogent service, or, in some cases, a
Cogent + extras service, in many cities.

You may want to consider said people when you want cheap-ass bandwidth,
but need some flexibility.

Actually, that would be OK - what they're worried about is if nobody else
wants the same thing... ;)

They seem to have above-normal congestion at their peering points. They
are prepending 2x on their Sprint and MFN (AboveNet) transit. I guess this
has shifted too much traffic to the peering they acquired through PSI and
NetRail. They also have very poor routing for some ASNs like 577
(preferring long peered routes over much shorter transit routes).

I was also very surprised to see they prepend on BGP announcements to
their own customers. If you're multihomed then it means a bit more work
to try to avoid paying for a mostly empty Cogent pipe.
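
To illustrate why those prepends push traffic onto peering: BGP prefers the
shortest AS_PATH when earlier tie-breakers are equal, so a 2x prepend toward
transit makes the peered path win. A minimal sketch in Python (Sprint's and
Cogent's ASNs are real; the customer AS 64512 and the exact policy are
hypothetical):

# Toy illustration, not Cogent's actual configuration.
COGENT = 174

def as_path_via(transit_asn, prepends=0):
    """AS_PATH a remote network sees for a customer route of AS 64512."""
    return [transit_asn] + [COGENT] * (1 + prepends) + [64512]

via_peer = [COGENT, 64512]                  # learned directly over peering
via_sprint = as_path_via(1239, prepends=2)  # via Sprint transit, 2x prepend

# Shorter AS_PATH wins: [174, 64512] (2 hops) beats
# [1239, 174, 174, 174, 64512] (5 hops), so inbound traffic
# concentrates on the (possibly congested) peering links.
best = min([via_peer, via_sprint], key=len)
print(best)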

If financial stability is a concern for you, then I suggest reading the
debt covenants in their SEC filings. On one hand I doubt they'll be able
to live up to them by Q2 2003, but then Cisco is their main investor so
the consequences may not be that bad if they fail to meet them.

-Ralph

It seems the posts weren't specific to your location, and you didn't specify whether any particular backbones were more important to you. Having been CTO at NetRail and dealing with Cogent lately, I'd say you should see zero problems reaching most backbones such as UUNET. The only negative routing comments I've heard are complaints about extra hop counts.

They are integrating three different ASNs, so you can expect some growing pains, but that is normal and to be expected. Outside of the extra hops, the people I know on the backbone are very happy. The capacity to most peers is very good, and if you build an on-net network with them you will see very good performance.

I think it's hard to complain when you are getting such an amazing price. They are building their backbone by providing very large pipes to customers and then managing the aggregate traffic levels. Balancing of traffic is a concern for peering, so they will probably ask what kind of traffic you have. If you have eyeball traffic, I'm willing to bet you could negotiate an even better deal. As their traffic levels grow they will be an important backbone to peer with, and that gives them more leverage in peering.

At the very least, if you are uncomfortable, go multihomed. At their pricing I would not be concerned at all about having them as one of my providers. They are also responsive to any routing concerns.

The only drawback from the customer perspective has already been mentioned: they do not want to be everything to everyone. What that tells me is that what they do, they are going to do very well, and they are process oriented. Anyone who tries to be everything to everyone usually isn't very good at anything.

If you are concerned about financials, name someone in better shape. It's simple: multihome, or go into a facility where you can quickly move a cross-connect to another provider (thanks to Jay Adelson for burning that idea into my head).

Dave

I must agree with everyone else's synopsis. Their bandwidth is cheap
and their connection is reliable. They do, however, have some congestion
issues and are not very flexible when it comes to special needs.

Dale Levesque

Dave - I know you know this, and you are referring to an issue that both of us have heard about...

The hidden assumption here is that extra hops imply worse performance. This is perception rather than reality. One could quite easily put in place a VPN or MPLS substrate and make all destinations appear "one hop away" without changing the underlying technology or performance of the network.

A network application with clear latency/jitter/packet loss characteristics would be a more effective way to evaluate network fitness. I suspect what really happens is
a) there is a performance problem somewhere in the path
b) a traceroute is done
c) the traceroute is misinterpreted - "the problem is packets go all over the place!"
d) the misinterpretation is generalized to "more hops is bad"

from what I've seen anyway.

Bill
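
A minimal Python sketch of the kind of measurement Bill suggests, using TCP
connect time as a stand-in probe (the target host, port, and sample count are
arbitrary; a real evaluation would use the application's own traffic):

import socket
import statistics
import time

def probe_ms(host, port=80, timeout=2.0):
    """One RTT sample in milliseconds, or None on timeout/error ("loss")."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return (time.monotonic() - start) * 1000.0
    except OSError:
        return None

def fitness(host, samples=20):
    rtts = [r for r in (probe_ms(host) for _ in range(samples)) if r is not None]
    loss = 1.0 - len(rtts) / samples
    if not rtts:
        print(f"{host}: 100% loss")
        return
    jitter = statistics.stdev(rtts) if len(rtts) > 1 else 0.0
    print(f"{host}: avg {statistics.mean(rtts):.1f} ms, "
          f"jitter {jitter:.1f} ms, loss {loss:.0%}")

fitness("www.example.com")  # hypothetical target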

Bill,

Thanks for assuming I know a bit about networking ;)

Actually I just received an off-list email, but the sender has an incorrect reply-to, so I will partly address his question here.

At NetRail we used ATM layer 2 to map express routes. We did this for many reasons. While reducing hop count doesn't always help with latency, it does keep customers happy. Where it does help is with routers like GRFs, where you do NOT want to piggyback load from one hop to another and then have those routers make decisions. It also didn't make sense to hand off all traffic from an ATM box to a router box and then back. Do we need to mention the ATM boxes were more reliable than those routers?

With Junipers or today's Ciscos we probably would not have done it that way. The other issue to consider, Bill, is that depending on your IP network design, you can add a great deal of complexity with a 67-city network where you have more than one router in many cities.

The off-list question was whether we had peering in LA as NetRail, and yes we did. 1 Wilshire is a good location, although much of the peering is international. Ren likes LA a lot as a peering center. A lot of people will tell you voice traffic to the Pacific Rim is big there. We did map most of this back to our regional center in that area, which is San Francisco, and exited the peering there. We had three peering exchanges there plus private peering, so people in LA did in fact enjoy good performance. Mario, if you are seeing packet loss or latency issues, you should address it with Cogent's NOC or with Chris, who is working on peering there. I'm sure they would appreciate the feedback. You may see issues as a result of integrating several backbones.

The hop count question is interesting. Is the consensus that it's mostly a customer service issue, where latency isn't affected but customer perception is? Or is it a real latency issue, as more routers take a few CPU cycles to make a routing decision?

Dave

Under the best possible circumstances, most of the extra delay is due to
the fact that routers do "store and forward" forwarding, so you have to
wait for the last bit of the packet to come in before you can start
sending the first bit over the next link. This delay is proportional to
the packet size and inversely proportional to the bandwidth. Since ATM
uses very small "packets" this isn't as much of an issue there.

However, the real problem with many hops comes when there is congestion.
Then the packet suffers a queuing delay at each hop. Now everyone is going
to say "but our network isn't congested" and it probably isn't when you
look at the 5 minute average, but short term (a few ms - several seconds)
congestion happens all the time because IP is so bursty. This adds to the
jitter. It doesn't matter whether those extra hops are layer 2 or layer 3,
though: this can happen just as easily in an ethernet switch as in a
router. Because ATM generally doesn't buffer cells but discards them, this
also isn't much of an issue for ATM.

However, when an ATM network gets in trouble it's much, much worse than
some jitter or even full-blown congestion, so I'll take as many hops as I
must to avoid ATM (but preferably no more than that) any day.

> Under the best possible circumstances, most of the extra delay is due to
> the fact that routers do "store and forward" forwarding, so you have to
> wait for the last bit of the packet to come in before you can start
> sending the first bit over the next link. This delay is proportional to
> the packet size and inversely proportional to the bandwidth. Since ATM
> uses very small "packets" this isn't as much of an issue there.

But doing SAR at the ends of the PVC you'll end up suffering the same
latency anyway, and since most people run their ATM PVCs at a rate
smaller than the attached line rate, this delay is actually larger in many
cases.

> However, the real problem with many hops comes when there is congestion.
> Then the packet suffers a queuing delay at each hop. Now everyone is going
> to say "but our network isn't congested" and it probably isn't when you
> look at the 5 minute average, but short term (a few ms - several seconds)
> congestion happens all the time because IP is so bursty. This adds to the

If you either do the math at OC48 or above, or just look at how many places
are able to generate severe, even subsecond bursts on any significant
backbone, you'll figure out that 99.9% of the time there aren't any. If you
burst your access link, then it's not a backbone hop-count issue.

> jitter. It doesn't matter whether those extra hops are layer 2 or layer 3,
> though: this can happen just as easily in an ethernet switch as in a
> router. Because ATM generally doesn't buffer cells but discards them, this
> also isn't much of an issue for ATM.

Most ATM switches have thousands of cell buffers for an interface, or tens
of thousands to a few million shared for all interfaces. There is one
legendary piece of hardware with buffer space for 32 cells. Fortunately
they didn't get too many out there.

> However, when an ATM network gets in trouble it's much, much worse than
> some jitter or even full-blown congestion, so I'll take as many hops as I
> must to avoid ATM (but preferably no more than that) any day.

Depends on your ATM hardware; most of the vendors fixed their ATM to make
decisions based on packets. Which kind of defeats the idea of having 53-byte
shredded packets in the first place.

Pete

From: owner-nanog@merit.edu [mailto:owner-nanog@merit.edu] On Behalf Of
David Diaz
Subject: Re: Cogent service

{portions deleted}

> The hop count question is interesting. Is the consensus that it's
> mostly a customer service issue, where latency isn't affected but
> customer perception is? Or is it a real latency issue, as more
> routers take a few CPU cycles to make a routing decision?
>
> Dave

An occasionally overlooked result of engineering your network
so that any point in your core is one hop away from any other
point is that you negate the second-to-last BGP path selection
criterion, which will take you down to the router-ID tiebreaker
from time to time.
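
A simplified Python sketch of that fall-through, assuming all earlier
selection steps (local-pref, AS_PATH length, MED, eBGP vs. iBGP) have
already tied:

def best_path(paths):
    lowest = min(p["igp_cost"] for p in paths)        # prefer lowest IGP metric
    survivors = [p for p in paths if p["igp_cost"] == lowest]
    if len(survivors) == 1:
        return survivors[0]
    return min(survivors, key=lambda p: p["router_id"])  # router-ID tiebreak

# With every exit one equal-cost hop away, the IGP step never discriminates:
paths = [
    {"exit": "A", "igp_cost": 1, "router_id": "10.0.0.9"},
    {"exit": "B", "igp_cost": 1, "router_id": "10.0.0.1"},
]
print(best_path(paths)["exit"])  # "B": chosen purely by lowest router ID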

>> Under the best possible circumstances, most of the extra delay is due to
>> the fact that routers do "store and forward" forwarding, so you have to
>> wait for the last bit of the packet to come in before you can start
>> sending the first bit over the next link. This delay is proportional to
>> the packet size and inversely proportional to the bandwidth. Since ATM
>> uses very small "packets" this isn't as much of an issue there.

> But doing SAR at the ends of the PVC you'll end up suffering the same
> latency anyway

Yes, but only once. With a layer 3 network (or non-ATM layer 2 network)
you get this at every hop.

>> However, the real problem with many hops comes when there is congestion.
>> Then the packet suffers a queuing delay at each hop. Now everyone is going
>> to say "but our network isn't congested" and it probably isn't when you
>> look at the 5 minute average, but short term (a few ms - several seconds)
>> congestion happens all the time because IP is so bursty. This adds to the

> If you either do the math at OC48 or above

OK, if you push your line to 99% utilization, your average queue size is
about 100 packets. Assuming those are 1500 bytes, this adds up to 1,200,000
bits, or some 480 microseconds of delay...

Not sure how many people do 99% or more over their 2.4 Gbps lines, though.
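
Checking that arithmetic with a simple M/M/1-style estimate (a sketch; it
assumes the queueing-theory reading of "average queue size" above):

rho = 0.99
oc48_bps = 2.488e9                      # nominal OC48 line rate
queue_pkts = rho / (1 - rho)            # mean queue ~ rho/(1-rho) = 99 packets
queue_bits = queue_pkts * 1500 * 8      # ~1.2 million bits queued
delay_us = queue_bits / oc48_bps * 1e6
print(f"~{queue_pkts:.0f} packets queued -> ~{delay_us:.0f} us of delay")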

> or just look at how many places are able to generate severe, even
> subsecond bursts on any significant backbone, you'll figure out that
> 99.9% of the time, there aren't any.

Well, I don't have an OC48 but I have seen slower lines that aren't
considered congested by any definition still experience tail drops from
time to time, so bursts happen. But that could be just 0.1% of the time,
yes.

>> However, when an ATM network gets in trouble it's much, much worse than
>> some jitter or even full-blown congestion, so I'll take as many hops as I
>> must to avoid ATM (but preferably no more than that) any day.

> Depends on your ATM hardware; most of the vendors fixed their ATM to make
> decisions based on packets. Which kind of defeats the idea of having 53-byte
> shredded packets in the first place.

I'm sure that after seeing jumboframes in gigabit ethernet, someone will
invent jumbocells for ATM to solve those shredding inefficiencies.

The problem with running IP over ATM is that both protocols need/use very
different ways to handle congestion, and those ways tend to conflict. So then
you have to cripple either one or the other. (Or buy more bandwidth.)

(apologies for the previous email being HTML)

> Yes, but only once. With a layer 3 network (or non-ATM layer 2 network)
> you get this at every hop.

About 40% of all packets are minimum size. Depending on your encapsulation,
these are usually less than 53 bytes on a POS link, so you suffer only the
few microseconds of switching latency.

> OK, if you push your line to 99% utilization, your average queue size is
> about 100 packets. Assuming those are 1500 bytes, this adds up to 1,200,000
> bits, or some 480 microseconds of delay...

Remember that in order to build a queue you have to receive packets faster
than you can send them. So you have to figure out how to get packets into the
box faster than your OC48 can sink them. The math gets complicated, but even
strict real-world applications tolerate ~10 ms of delay variance. So for 10
or 20 hops you can easily afford 500 microseconds per hop. And if you really
care about latency you can drop the frames on the packet train to give better
service to other people.

> Well, I don't have an OC48 but I have seen slower lines that aren't
> considered congested by any definition still experience tail drops from
> time to time, so bursts happen. But that could be just 0.1% of the time,
> yes.

You make my point. Bursts happen mostly on slow lines. That's because
there is a fast line somewhere which can burst your slow link.

> The problem with running IP over ATM is that both protocols need/use very
> different ways to handle congestion, and those ways tend to conflict. So then
> you have to cripple either one or the other. (Or buy more bandwidth.)

That discussion is very old news; we would be done with ATM if the
stupid and ignorant DSL equipment vendors weren't forcing it back on
us once again. (Though I like ATM over MPLS or Frame Relay.)

Pete


> Depends on your ATM hardware; most of the vendors fixed their ATM to make
> decisions based on packets. Which kind of defeats the idea of having 53-byte
> shredded packets in the first place.

Really? Then I guess Juniper made a mistake chopping every packet into 64-byte cells ;) . From a hardware standpoint, it speeds up the process significantly. Think of a factory with a cleaver machine: it knows exactly where to chop the pieces because there is a rhythm. It takes no "figuring out." By chopping everything into set sizes you don't need to "search" for headers or different parts of the packet; it's always at a "set" byte offset. I've done a poor job of explaining it, but it does speed things up.
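
A toy Python sketch of the fixed-size advantage being described: with fixed
cells every boundary is at a computable offset, while variable-length packets
must be parsed sequentially (the cell size and packet lengths here are
arbitrary examples):

def cell_boundaries(total_bytes, cell=64):
    """Fixed cells: boundary i is simply i * cell -- no parsing required."""
    return list(range(0, total_bytes, cell))

def packet_boundaries(lengths):
    """Variable packets: each boundary depends on the previous length field."""
    offsets, pos = [], 0
    for n in lengths:          # must read each length field in turn
        offsets.append(pos)
        pos += n
    return offsets

print(cell_boundaries(256))               # [0, 64, 128, 192]
print(packet_boundaries([40, 1500, 64]))  # [0, 40, 1540]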

What are these 'special needs' people keep mentioning? What special needs
might you have of your transit providers?

Speaking of special...having played around a little with the BGP
communities supported by C&W and Sprint, I'm wondering which other big
transit providers (it seems almost a waste to say Tier 1 anymore) support
community strings that will let you (the customer) cause them to
selectively prepend route announcements to their peers.

This seems to be a really handy tool for balancing (or at least trying to
balance) traffic across multiple transit providers without having to
resort to the sort of all-or-nothing results you'd get by prepending your
announcements to the transit provider or, worse, deaggregating your IP
space for traffic engineering.
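
As a rough Python sketch of the provider-side mechanics (the 65000:xxxx
community values and peer-group names here are invented for illustration,
not any provider's documented scheme):

PREPEND_ACTIONS = {
    # community: (peer group, number of prepends)
    "65000:1011": ("sprint", 1),
    "65000:1012": ("sprint", 2),
    "65000:2021": ("cw", 1),
}

def exported_as_path(base_path, communities, peer_group, my_asn=64500):
    """AS_PATH announced toward peer_group after applying the communities."""
    prepends = sum(
        n for c in communities
        for grp, n in [PREPEND_ACTIONS.get(c, (None, 0))]
        if grp == peer_group
    )
    return [my_asn] * prepends + base_path

route = [64500, 64512]  # transit AS + customer AS (both illustrative)
print(exported_as_path(route, {"65000:1012"}, "sprint"))
# [64500, 64500, 64500, 64512] - two extra prepends toward "sprint" only
print(exported_as_path(route, {"65000:1012"}, "cw"))
# [64500, 64512] - the announcement toward "cw" is untouched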

AFAIK, Genuity does not have this.

UUNet has a very rudimentary version which allows you to cause them to do
prepending to all or no non-customer peers.

Sprint and C&W do it very differently but allow you to select which peers
to prepend to (though you'll likely have to work with several Sprint
engineers or get lucky to get it working).

If there are others that support the sort of flexibility of Sprint and
C&W, and have decent T3 level pricing, I'd like to hear about/from them.

Date: Sun, 22 Sep 2002 23:16:20 -0400 (EDT)
From: jlewis

> Speaking of special...having played around a little with the
> BGP communities supported by C&W and Sprint, I'm wondering
> which other big transit providers (it seems almost a waste to
> say Tier 1 anymore) support community strings that will let you
> (the customer) cause them to selectively prepend route
> announcements to their peers.

1239, 3356, 3549, 3561

Other than that, I don't know of any positives. I was going to
compile a list and make a webpage... but I've had underwhelming
response to past posts on the subject.

4006 used to; I don't know what 174/4006/16631 does now.

> This seems to be a really handy tool for balancing (or at least
> trying to balance) traffic across multiple transit providers
> without having to resort to the sort of all-or-nothing results
> you'd get by prepending your announcements to the transit
> provider, or worse, deaggregating your IP space for traffic
> engineering.

Yes. Also handy for tuning latency/paths.

> AFAIK, Genuity does not have this.

I believe this is the case for 1, 209, 2914, 7018. My experience
with 6347 has been "you want prepends, you do 'em yourself".

> If there are others that support the sort of flexibility of
> Sprint and C&W, and have decent T3 level pricing, I'd like to
> hear about/from them.

I should have a link/email with 3549 communities... somewhere.
They also have a nice set of tags indicating where the route
originated (downstream, public peer, etc.; US city, international)
that help with outbound traffic engineering.

Kevin Epperson is the person to contact for L3 info. He monitors
NANOG-L, so you should hear from him... ping me for his email
addr if not.

Eddy

"E.B. Dreger" wrote:

> Date: Sun, 22 Sep 2002 23:16:20 -0400 (EDT)
> From: jlewis

>> Speaking of special...having played around a little with the
>> BGP communities supported by C&W and Sprint, I'm wondering
>> which other big transit providers (it seems almost a waste to
>> say Tier 1 anymore) support community strings that will let you
>> (the customer) cause them to selectively prepend route
>> announcements to their peers.

> 1239, 3356, 3549, 3561
>
> Other than that, I don't know of any positives. I was going to
> compile a list and make a webpage... but I've had underwhelming
> response to past posts on the subject.

There is an ongoing effort within the IETF's ptomaine working group to
define a special type of extended communities to support this kind of
interdomain traffic engineering in a standardized manner.

See http://www.ietf.org/html.charters/ptomaine-charter.html

Those redistribution communities have been implemented in Zebra:
http://www.infonet.fundp.ac.be/doc/tr/Infonet-TR-2002-03.html

There is also a detailed survey of the utilization of communities that
might be of interest:
http://www.infonet.fundp.ac.be/doc/tr/Infonet-TR-2002-02.html

Best regards,

Olivier Bonaventure