The market must be coming back

Everyone's so busy there hasn't been a peep on here in weeks.

Regards,
Christopher J. Wolff, VP CIO
Broadband Laboratories
http://www.bblabs.com

Actually, there has been a lot of peeping!

Peep peep!

I've been thinking about leasing some dark fiber and running one of the
new 10gigE blades for the Cat 6500 chassis. Throw in the Cisco
"Flamethrower" GBIC and I should be good for 50 miles. Has anyone tried
this?

C.

Chris:

I've been thinking about leasing some dark fiber and running one of the
new 10gigE blades for the Cat 6500 chassis.

Be careful here. The last time I tested (at one of our channels that also
resells Cisco), the 10GbE on the Catalyst 6500 hadn't broken 4G throughput
yet. Sort of like buying a GbE interface for a 7200 (it only gets 10%
throughput... why waste the money, just buy FE!). The GSR is up to about
8G throughput nowadays from what I've seen.

Foundry Networks (my company) can get a perfect clean 8G throughput on all
of our chassis with management modules M2 or above (we don't support 10GbE
on the legacy M1). Our NG chassis will be available later in the year for
those folks that want 4 X 10 GbE on each module (8 slot chassis). I expect
this will be a perfect 40G throughput since I've never seen us do anything
less than perfect (been working here since August).

Additionally, you would be the first customer I've heard about doing
standards based 10GbE on a Catalyst. (feel free to chime in if you're doing
this... Can I bring my SmartBits 600 to your site to test throughput?).
Good luck!

Foundry has a few references:

Deployed:
http://www.foundrynet.com/about/newsevents/releases/pr4_3_02.html
http://www.foundrynet.com/about/newsevents/releases/pr4_2_02.html
http://www.foundrynet.com/about/newsevents/releases/pr2_11_02.html

There are many others that we don't put out press releases for. We've got
these blades running in production networks here in Japan that I'm not
allowed to talk about, and in many other places besides.

Deploying:
http://www.foundrynet.com/about/newsevents/releases/pr5_8_02.html

Performance:
http://www.spirentcom.com/news/press.cfm?id=87

Throw in the Cisco "Flamethrower" GBIC and I should be good for 50 miles.
Has anyone tried this?

Foundry Networks' Long Haul (LHB: 150 km, LHA: 70 km) Ethernet optics exceed
Cisco's on GbE (ZX: 100 km). I'm sure we exceed them on the ER LAN PHY for
10GbE. We've only tested to 85 kilometers (ER). The 802.3ae standard is 40 km:

http://biz.yahoo.com/prnews/020508/nyw068_1.html

Cisco's website says they can do the 802.3ae standard 40 km on the 1550 nm
blade. I'm not sure if the optics are changeable either:

http://www.cisco.com/warp/public/cc/pd/ifaa/6500ggml/

I doubt there is a GBIC for 10GbE available. We use the same blade with
changeable optics; however, I would not call the SR (300 meters), LR (10
km), and ER LAN PHY optics GBICs...

Moral of this story is that BEFORE you buy these blades from Cisco (or
anybody), test them! If you don't have a 10GbE SmartBits or IXIA, you can use
1GbE interfaces and wrap them around until you get 8G (no need to produce
anything higher 'cause the Cat 6500 has an 8G throughput limitation). Don't
test latency with this method :-). I don't believe the marketing from any
company, not even my own. I test, then tell.
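
For anyone without 10GbE test gear, here is a rough back-of-the-envelope
sketch (Python) of the arithmetic behind that wrap-around approach. The
20-byte per-frame overhead (preamble + inter-frame gap) and the idea of
simply counting loops are my own assumptions for illustration, not a vendor
procedure:

    # Sizing a 1GbE "wrap-around" test against an 8G per-slot limit.
    # Assumes full line-rate 1GbE test ports and standard Ethernet
    # overhead of 20 bytes/frame (8B preamble + 12B inter-frame gap).

    TARGET_GBPS = 8.0       # Cat 6500 per-slot throughput limit cited above
    GBE_RATE_BPS = 1e9      # one GbE port, one direction

    def max_pps(frame_bytes: int, rate_bps: float = GBE_RATE_BPS) -> float:
        """Maximum frames per second on one port at a given frame size."""
        wire_bits = (frame_bytes + 20) * 8
        return rate_bps / wire_bits

    loops = int(TARGET_GBPS * 1e9 / GBE_RATE_BPS)
    print(f"1GbE loops needed to offer {TARGET_GBPS}G: {loops}")
    for size in (64, 512, 1518):
        print(f"{size:>5}-byte frames: {max_pps(size):,.0f} pps per GbE port")

At 64 bytes that works out to roughly 1.49 Mpps per GbE port, so eight
wrapped GbE ports get you to the 8G figure mentioned above.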

I've personally never seen a packet drop at a steady 8G rate for up to 72
hours; however, one of our customers evaluating the 10GbE blades reported two
64-byte packets dropped in a 12-hour line-rate test. I suspect they had bad
fiber.
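
For perspective, two drops over a 12-hour line-rate run at 64 bytes is a
vanishingly small loss rate. A quick Python calculation (it assumes the full
12 hours ran at 10GbE line rate, which is my reading of the test above):

    # Loss rate implied by 2 dropped 64-byte frames in a 12-hour
    # line-rate 10GbE test.

    LINE_RATE_BPS = 10e9
    WIRE_BITS_PER_FRAME = (64 + 20) * 8        # frame + preamble + inter-frame gap
    PPS = LINE_RATE_BPS / WIRE_BITS_PER_FRAME  # ~14.88 Mpps
    SECONDS = 12 * 3600

    frames_sent = PPS * SECONDS
    loss_rate = 2 / frames_sent
    print(f"Frames sent: {frames_sent:.3e}")
    print(f"Loss rate:   {loss_rate:.2e}")

That is on the order of 3e-12, which is why bad fiber (or a marginal optic)
is a perfectly plausible explanation.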

Gary Blankenship
Systems Engineer
Foundry Networks

From: owner-nanog@merit.edu [mailto:owner-nanog@merit.edu] On
Behalf Of Gary

that want 4 X 10 GbE on each module (8 slot chassis). I
expect this will be a perfect 40G throughput since I've never
seen us do anything less than perfect (been working here
since August).

Oh phuleeese.... Stop drinking your own Kool-Aid(tm). To honestly
suggest that Foundry, or any other vendor for that matter, never does
'anything less than perfect' is nothing less than idiotic. If Foundry
does things so 'perfectly', why do they have a TAC? Why do they have bugs?
Why do they even need to release new software ever again? Obviously what
is out now will solve every possible issue - it's 'perfect', right? The
only possible answer, according to your logic, is to support customers
who are 'doing it wrong' and need to be educated.

Go find the nice black shirts that were passed out at Foundry's last
Kool-Aid fest. You are in obvious need of one. This is NOT the place to
post vendor FUD. All you are doing is making Foundry look bad, and
making yourself look even worse.

My apologies to NANOG..

.chance

"Mommy, my Kool-Aid tastes funny."
  - Katie, Age 7
    Jonestown 10/18/78


> Sort of like buying a GbE interface for a 7200 (it only gets 10%
> throughput... Why waste the money, just buy FE!).

How did the Foundry test lab arrive at those figures, and what
substances were consumed at the time?

I'd say 300+ Mbit/sec on a PA-GE is a more accurate real-world limit,
assuming you've got plenty of spare CPU cycles to burn and no ACLs.

Besides, that's really an apples to oranges comparison. I don't think
anyone, including Cisco, has ever made the claim that it can do line
rate GbE; that's not to say it isn't useful for certain topologies
requiring slightly-faster-than-fast-e router<->switch uplinks, etc.

-a

I don't know... it's been fairly chatty on here, at times more so, and more
often on a single thread, than usual.

One report claims that the job boards have exploded in parts of the world
recently, with large numbers of new positions opening. Another report claims
that the market is getting better and that this is expected. I know that in
the UK a lot of departments would have got new budgets last month, which
would have caused the above effects there. Probably true of other parts of
the world as well.

Personally I would say that Foundry does EVERYTHING less than perfect.
Nearly everyone I'm aware of (including myself) who has had the misfortune
of trying to use their devices in a service provider environment in a layer
3 role has come away with a universal loathing of biblical proportions.

I really can't stress this enough, it DOES NOT MATTER how many gigabits
your box forwards. A router is ONLY as useful as the quality of its
software and support, if you can't login to it or have working routing
protocols, it's just a big paperweight. The only "wannabe cisco" company I
have seen learn this lesson is Juniper, and I am firmly convinced this is
the reason for their success in the core.

Whenever I read a press release about Foundry in the core, I stop and take
a moment to laugh uncontrollably. It has nothing to do with ISIS or MPLS,
it has to do with making your existing functionality work correctly and
behave in a sensible fashion. Nothing personal against Foundry, but the
people in charge couldn't possibly "not get it" any more than they do now.

Adam:

[...] Sort of like buying a GbE interface for a 7200 (it only gets 10%
throughput... Why waste the money, just buy FE!).

How did the Foundry test lab arrive at those figures, and what
substances were consumed at the time?

I used a Cisco 7200 VXR with an NPE-400. I used two different 7200s with the
exact same results. Bidirectional throughput on 1GbE is a fraction above
10%. Unidirectional is a bit better (23%). A single-line ACL (permit ip any
any) drops it to 8%. FE performance doesn't start to drop below line rate
until you put more than two in the box. I have a PowerPoint if you'd like
it, but it is not meant to slander Cisco, just to convince my customers NOT
to put GbE in a 7200! It is not a GbE platform!
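
To put those percentages in Mbit/s terms, a quick conversion (I'm assuming
the percentages are of 1 Gbit/s per direction, which may not be exactly how
the test was reported):

    # Rough Mbit/s equivalents of the quoted 7200 VXR / NPE-400 GbE results.
    # Assumes each percentage is a fraction of 1 Gbit/s per direction.

    GBE_MBPS = 1000
    results = {
        "bidirectional, per direction": 0.10,   # "a fraction above 10%"
        "unidirectional":               0.23,   # "23%"
        "with a one-line ACL":          0.08,   # "8%"
    }
    for label, fraction in results.items():
        print(f"{label:>30}: ~{fraction * GBE_MBPS:.0f} Mbit/s")

Which is roughly in line with Adam's 300+ Mbit/s real-world ceiling for a
PA-GE, give or take test conditions.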

I'd say 300+ Mbit/sec on a PA-GE is a more accurate real-world limit,
assuming you've got plenty of spare CPU cycles to burn and no ACLs.

Besides, that's really an apples to oranges comparison. I don't think
anyone, including Cisco, has ever made the claim that it can do line
rate GbE; that's not to say it isn't useful for certain topologies
requiring slightly-faster-than-fast-e router<->switch uplinks, etc.

My PowerPoint compares the 7200 with the FastIron 4802 Premium. The 4802 is
line rate with less than 7 us of latency on the two GbE ports. I tested this
myself. I can forward it to you if you like. It is a bunch of SmartApps
screen captures of the testing.

I really like the 7200 VXR. It is a good 10M and FE platform. It can switch
DS0s on the midplane and it supports a wide array of interfaces! I just
don't like to see it oversubscribed. Many of our customers use the 7200 and
have nothing bad to say about it when deployed properly.

Gary

I have personally seen a 7200 with PXF-chip and two PA-GE do NAT at
300megabit with a few (10-15) ftp streams going thru it. With more random
load it wouldn't go much above 100 meg, though.

And please, lab tests don't show it all. Does the Foundry have a route
cache? How many entries? I have seen equipment that performs perfectly in
the lab start to bog down when you put real traffic on it, because of
route cache limitations (for instance, 256,000 entries starts to be
problematic when you have thousands of customers running real internet
traffic through the device).
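
A toy estimate of why a fixed-size cache fills up (the destinations-per-customer
figure below is purely an assumption for illustration, not a measurement):

    # Why a 256,000-entry route cache gets tight with real customer traffic.
    # The "100 concurrent destinations per customer" figure is an assumption
    # for illustration only.

    CACHE_ENTRIES = 256_000
    DESTS_PER_CUSTOMER = 100

    for customers in (1_000, 5_000, 10_000):
        needed = customers * DESTS_PER_CUSTOMER   # ignores overlap, so worst case
        verdict = "over" if needed > CACHE_ENTRIES else "under"
        print(f"{customers:>6} customers: ~{needed:>9,} entries ({verdict} the cache size)")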

I have personally seen a 7200 with PXF-chip and two PA-GE do NAT at
300megabit with a few (10-15) ftp streams going thru it. With more random
load it wouldn't go much above 100 meg, though.

I have done 400Mbit with an NPE400, though that's pushing the box close to
its limits.

But really, a good engineer knows his tools and knows how to choose them
for the task. If you want to push 900Mbps, you don't pick a router with a
central software based route lookup system and PCI based backplane. On the
other hand, if you need to do "complex" things, a 7200 may be your best
bet simply because of its simplicity. All the nasty bugs that make using a
GSR so miserable almost never manifest themselves on a 7200. If you're
adventurous you can even install the "latest" code and probably not pay
for your transgression against the IOS gods within 48 hours. :)

And please, lab tests don't show it all. Does the Foundry have a route
cache? How many entries? I have seen equipment that performs perfectly in
the lab start to bog down when you put real traffic on it, because of
route cache limitations (for instance, 256,000 entries starts to be
problematic when you have thousands of customers running real internet
traffic through the device).

A classic Foundry flaw, which you can get around to some extent with ip
net-agg or dr-agg.

I've found it best to treat a Foundry doing layer 3 like you would a 7500.
You know: tiptoe when you walk by it, try not to give it any funny looks,
only log in to it when you REALLY need to, only make changes at 2am, etc.
It is usable in a customer aggregation role; anything more is tempting
fate. And if^H^Hwhen you run into a really fun issue, don't even think
about calling Foundry TAC after hours; all you'll get is someone's house
with their screaming kids in the background.

Chance:

> that want 4 X 10 GbE on each module (8 slot chassis). I
> expect this will be a perfect 40G throughput since I've never
> seen us do anything less than perfect (been working here
> since August).

Oh phuleeese.... Stop drinking your own Kool-Aid(tm). To honestly
suggest that Foundry, or any other vendor for that matter, never does
'anything less than perfect' is nothing less than idiotic. If Foundry
does things so 'perfectly', why do they have a TAC? Why do they have bugs?
Why do they even need to release new software ever again? Obviously what
is out now will solve every possible issue - it's 'perfect', right? The
only possible answer, according to your logic, is to support customers
who are 'doing it wrong' and need to be educated.

Topic is performance, not sugary beverages. Sorry for not making that
clear. Let me reword. My bad: "perfect performance on 10GbE". I believe
I also mentioned our 8G per-slot throughput limitation, so as not to mislead
people into thinking we do 10GbE non-blocking. Same limitation as the Cat
6500 once it gets up to speed.
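
For what it's worth, the arithmetic on that limitation is simple enough
(a sketch using only the figures quoted above):

    # What an 8G per-slot limit means for a single 10GbE port.

    SLOT_LIMIT_GBPS = 8
    PORT_RATE_GBPS = 10

    oversubscription = PORT_RATE_GBPS / SLOT_LIMIT_GBPS
    worst_case_drop = 1 - SLOT_LIMIT_GBPS / PORT_RATE_GBPS
    print(f"Oversubscription: {oversubscription:.2f}:1")
    print(f"Worst-case drop at sustained line-rate input: {worst_case_drop:.0%}")

In other words, 1.25:1 oversubscribed, so up to 20% loss if someone actually
drives the port at line rate, which is exactly why it isn't non-blocking.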

Go find the nice black shirts that were passed out at Foundry's last
Kool-Aid fest. You are in obvious need of one. This is NOT the place to
post vendor FUD. All you are doing is making Foundry look bad, and
making yourself look even worse.

Didn't you pass out those shirts? Everything I posted concerning
performance of 10GbE I saw for myself. All other information was publicly
available and concerns operators interested in 10GbE. Many of them are
unaware of their options and I wanted to bring Foundry to light.

Reading NANOG you would think that the only way to spot Nimda is NBAR
and the only MPLS is Juniper. The post I replied to was from a person
considering 10GbE on a 6500. I've seen the performance of this at a customer
site with SmartBits. The channel became a Foundry reseller because of this
specific issue.

Now the same configuration comes up on NANOG, and I wanted the person
thinking about the 6500/10GbE solution to be aware of what I saw. Perhaps
the performance is faster than 4G today (my info is a month old). If I were
to leave Foundry today (to make them look better) and go work for another
company (McDonald's?), I would still send the same post (would you like
fries with that?). You can't forget what you've seen. I have tested our
10GbE personally.

Gary

Richard:

Personally I would say that Foundry does EVERYTHING less than perfect.
Nearly everyone I'm aware of (including myself) who has had the misfortune
of trying to use their devices in a service provider environment in a layer
3 role has come away with a universal loathing of biblical proportions.

Not worth a response. Can't please everybody, and you CAN'T design everyone's network for them. Sort of like EIGRP: even the worst network engineer can look great with it. Perhaps you should read JANOG. Maybe they can help you. Search for ファウンドリ (note: if you cannot read this, it is Japanese for "Foundry").

I really can't stress this enough, it DOES NOT MATTER how many gigabits
your box forwards. A router is ONLY as useful as the quality of its
software and support, if you can't login to it or have working routing
protocols, it's just a big paperweight. The only "wannabe cisco" company I
have seen learn this lesson is Juniper, and I am firmly convinced this is
the reason for their success in the core.

Juniper is an OUTSTANDING company. Much better than many networking companies in many respects. I've also heard nothing but good things about Unisphere here in Japan, so perhaps this will be a good marriage, with benefits to service providers. I'll enjoy competing. We will compete.

Whenever I read a press release about Foundry in the core, I stop and take
a moment to laugh uncontrollably. It has nothing to do with ISIS or MPLS,
it has to do with making your existing functionality work correctly and
behave in a sensible fashion. Nothing personal against Foundry, but the
people in charge couldn't possibly "not get it" any more than they do now.

Remember what you said in this paragraph. I will refer to it later.

Yoroshiku,

Gary

Richard:

And if^H^Hwhen you run into a really fun issue, don't even think
about calling Foundry TAC after hours; all you'll get is someone's house
with their screaming kids in the background.

Our TAC is 24/7 and has been 24/7 for years. I work in the Support Center
for Japan. We have not gone 24/7 yet, but it is under investigation.
Sitting 2 feet from me is a gentleman who has been working with Foundry
products since '97. He has called almost every day since then and not once
has had the problem you described. I did not mention to him why I was
asking these questions and he is honest. Did you call the wrong number?
This looks a bit personal...

Gary

And please, lab tests don't show it all. Does the Foundry have a route
cache? How many entries?

I've been trying to use Linux for routing. I've had ~30Mbps going through
two 3Com cards without a hiccup. The "problem" I'm having is figuring out
when I'll hit the throughput limit. Interrupt time doesn't show in top
or uptime, so it looks like the CPU is 99% idle (this is with a Duron
750). I've also looked at the kernel routing code and found that there is
room for significant improvement by changing the route-cache code. I
figure increasing the route-cache hash table size from 256 entries and
changing from one entry per IP to one entry per route prefix would give
about an order of magnitude of improvement, based on the typical route-cache
sizes of 5000 entries that I see. However, without knowing the true CPU
utilization, I don't know if it is even necessary to try.
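
A quick model of what a bigger hash table would buy (this treats the 256
entries as 256 hash buckets and assumes uniform hashing, both of which are
simplifications on my part):

    # Average route-cache chain length walked per lookup for different
    # hash table sizes, assuming uniform hashing and ~5000 cached routes.

    CACHE_ENTRIES = 5000     # typical cache size mentioned above

    for buckets in (256, 1024, 4096, 16384):
        avg_chain = CACHE_ENTRIES / buckets
        print(f"{buckets:>6} buckets: ~{avg_chain:5.1f} entries per lookup on average")

Going from 256 to 4096 buckets cuts the average chain from about 20 to just
over 1, which is where the order-of-magnitude estimate comes from.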

-Ralph

I didn't say it wasn't 24/7, I just said it rang through to someone's house
with their screaming kids in the background on a regular basis. I do know
how to operate a telephone, thanks. :)

And it's nothing personal; I have actually been one of Foundry's biggest
supporters compared to almost every other engineer I know. Everyone else
gave up using them in layer 3 a long time ago.

I recall that early in my career I had the opportunity to build a new
LAN backbone for a 6-story office building. It was going to be Category
5! Woohoo. With a 12/24 fiber backbone.

ATM in a LAN environment was new at the time but I was going to make
sure I had an OC3 backhauling each of the floors to a central switch. I
thought this design was beautiful and marvelous. There was a neat new
company that made LAN-style ATM gear with performance specs that would
just blow your mind.

So when I took the design to the board, they loved the fast ethernet fiber
blah blah and gave approval. But when it came down to selecting vendors
for the hardware, I ran right into a brick wall with questions like:

How long has this company been in business?
Are they using open standards?
Do they have knowledgeable tech support?
..and so on.

So, regardless of whether the hardware is the fastest thing on the
block, pushing 10 nanobits at a megaflop, you can look like a fool if
you don't consider the business repercussions of the vendor you choose.
In the end, I didn't get my design approved until I chose Cisco. Was I
pissed? Sure! Did I ship off white papers and other propaganda to
support my case? Yes! But the company went bankrupt about 2 weeks after
I submitted the bid.

Just my .02,

Regards,
Christopher J. Wolff, VP CIO
Broadband Laboratories
http://www.bblabs.com

"No one gets fired for buying IBM."

/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
                               Patrick Greenwell
         Asking the wrong questions is the leading cause of wrong answers
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/

Good point! The other one is "Choose your battles wisely."

Patrick

How long has this company been in business?
Are they using open standards?
Do they have knowledgeable tech support?
..and so on.

Good startups make great partners, and a great partner will have crisp and compelling answers to these questions that CFO types like, even before you start to ask them. Even so, you might not have needed such performance anyway, since your situation might have been risk_of_brand_name < risk_of_better_performance. (There's always a risk to choosing the "safe" alternative, but established vendors go to great lengths to make sure you don't see it.)
This topic brings to mind a phrase I once read:

"Truth and Technology will Triumph over Bullshit and Bureaucracy."
-- PanAmSat's slogan (mantra?) as a startup, often accompanied by an image of Spot, dutifully lifting his leg to the competition (other interpretations abound)