-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
http://www.eetimes.com/showArticle.jhtml?articleID=53700951
regards,
/vicky
Vicky, I apologize if I am hijacking your thread.
Is it just me or does all this talk of Research (and other Public Interest) Networks and logical separation by layer 1/2 leave [everyone] nonplussed?
How is logical separation of a network [say via MPLS] much different from using a lambda to do the same thing? It seems kind of dumb to me that a network spending the money to buy capacity is selling a 2.5G or 10G wave to universities as any kind of improvement... I'm not even sure they could do it at a better price than a desperate telco selling the underlying fiber in the first place.
Engineering idea: all the constituent folks share the same network, built as a single logical network with, say, 40x10G lambdas on it. Everyone is given a 2.5G or 10G MPLS tunnel plus the ability to use whatever bandwidth is unused on the network at that time... That would at least have some legs and create some value in having more membership.
This smacks me as similar to Philadelphia wanting to deploy universal WiFi and charging $20-$25/month for it -- a free network to the city makes sense, after all they pay taxes -- but a pseudo-commercial service, what's the point? Do these government (and other so-called Public Interest) networks really make sense in the U.S., or is everyone still stuck in a timewarp when/where the NSFnet made sense because no one (commercially) could/would step up to perform the same function?
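A toy allocator makes the idea concrete: each member's guaranteed rate is honored first, then any capacity left idle is water-filled among members that still have demand. This is a hypothetical sketch (member names, rates, and demands are invented for illustration), not any vendor's bandwidth-sharing implementation:

```python
def allocate(capacity_gbps, members):
    """members: {name: (guarantee_gbps, demand_gbps)}.
    First satisfy each member up to its guarantee, then water-fill
    the leftover capacity among members with unmet demand."""
    alloc = {n: min(g, d) for n, (g, d) in members.items()}
    leftover = capacity_gbps - sum(alloc.values())
    while leftover > 1e-9:
        unmet = [n for n, (g, d) in members.items() if alloc[n] < d]
        if not unmet:
            break  # everyone satisfied; spare capacity stays idle
        share = leftover / len(unmet)
        for n in unmet:
            extra = min(share, members[n][1] - alloc[n])
            alloc[n] += extra
            leftover -= extra
    return alloc

# One 40x10G network, three members each guaranteed 10G;
# member A can burst well past its guarantee while B and C are idle:
print(allocate(400, {"A": (10, 25), "B": (10, 5), "C": (10, 10)}))
```

Under contention the leftover is split evenly among the hungry members, so nobody's guarantee is ever invaded.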
Hopefully there is some operational content in there... If you don't see an on-list response from me, you probably know why.
Deepak Jain
AiNET
Vicky wrote:
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
Hi Deepak!
You raise some interesting points from a bandwidth standpoint.
What really got me scratching my head is the idea of throwing bandwidth at the problem to conserve computing power. In this cat-and-mouse game, the mouse always wins:
"The OptiPuter project aims at learning how to 'waste' bandwidth and
storage in order to conserve 'scarce' computing in this new world of
inverted values," said Smarr.
I'm not even sure why one would implement MPLS where latency/congestion is not an issue, especially in this case, or why we're even talking about I2 for that matter.
regards,
/vicky
Deepak Jain wrote:
Hmmm. Maybe I need to be a little more explicit in my concerns....
I am not concerned with the applications of the bandwidth that research folks are doing. Of course, research for research's sake has a value. I guess I meant... what is the interest in building a new network from scratch when all they are doing is using commercially available equipment provided by Cisco and perhaps other vendors? Regen is probably handled by the fiber vendors too... so where is the research in running a network? It's trying to use the network as a service, and I am not sure there are many research outfits with more experience at that than the commercial folks.
By mentioning MPLS or another tunneling technology, I didn't mean to imply IPv4. Indeed, I meant that you can encapsulate whatever you want on an underlying network, or if you need raw access to the optics, you can always order wavelengths... The idea of building a network like this seemed like reinventing a dirt road next to an existing superhighway.
Likewise, with the Internet2 stuff, the underlying network is provided by commercial carriers... End equipment may be different, but that's the way it is with all commercial circuits today for standards-based communications/protocols.
So what is the value in dedicated research networks when the same facility can be provided by existing lit capacities by commercial networks? Is it a price delta? Or is it belief that the commercial folks don't meet the needs of the underlying applications? (if its the latter, I'd love to know what is being done).
To hash this out even more... In regards to regional academic networks, I completely understand that there are significant economies in operating as a single entity. The complexity of running dark fiber in a regional network isn't really bad at all, and capacity can be added in pretty dynamic increments. However, once you start expanding to connect regional networks to each other, it seems that the complexity increases far faster than the benefits -- and that is where universal/commercial carriers seem to have the greatest value offering.
What am I missing?
(P.S. have a nice holiday).
DJ
Hi Deepak and Vicky,
I can't resist comment even though at this point the additional questions that I can answer are very limited. For the past month I have been talking privately, and more recently on a private mail list, with folk at the heart of this. CANARIE, under Bill St. Arnaud, has been the global leader for more than the past five years. Tom DeFanti and Joe Mambretti in Chicago, with the STAR TAP and StarLight exchanges, were the main players in the US before National LambdaRail (NLR). NLR is perceived as quite important in that it will permit American universities to experiment with their own fiber and wavelengths as the Canadians have been doing with CA*net 4 for years. In Europe, Kees Neggers with SURFnet6 is doing the same thing.
As I understand it, Internet2 and DANTE/GÉANT in Europe are primarily carrier-dependent and therefore for the most part onlookers.
NLR, SURFnet6, and CA*net 4 are or will be experimenting with user-controlled lightpaths (UCLP). For more, put UCLP into Google. I interviewed Bill St. Arnaud two weeks ago on UCLP. One of the purposes here is to enable users at the edge to make connection-oriented links between each other WITHOUT A CARRIER IN THE MIDDLE, by partitioning a segment of a switch or switches between them. The concept is peer-to-peer and ad hoc. The goal is enabling customer-owned and -operated networks.
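To make the "no carrier in the middle" idea concrete: think of each partitioned switch segment as an object its owner controls, with end-to-end paths built by concatenating segments peer to peer. The sketch below is purely illustrative -- the class, owner, and endpoint names are invented, and this is not the actual UCLP web-services interface:

```python
class Segment:
    """A user-owned partition of a switch: one hop of a lightpath."""
    def __init__(self, owner, a_end, z_end):
        self.owner, self.a_end, self.z_end = owner, a_end, z_end

def stitch(segments):
    """Concatenate user-owned segments into an end-to-end lightpath.
    No carrier is involved: each owner contributes its own segment."""
    for s, t in zip(segments, segments[1:]):
        if s.z_end != t.a_end:
            raise ValueError(f"segments do not join: {s.z_end} != {t.a_end}")
    return segments[0].a_end, segments[-1].z_end

# Two institutions stitch their own segments into one ad hoc path:
path = stitch([Segment("SURFnet", "AMS", "CHI"),
               Segment("StarLight", "CHI", "SEA")])
```

The point of the sketch is only that provisioning becomes an operation the edge users perform on resources they own, rather than a service request to a carrier.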
As one of the players said on November 15th:
"UCLP is simply a layer 1 provisioning and configuration tool. Although we
use lightpaths it is not restricted to optical networks.
Although it seems paradoxical I am not a big believer in AON, ASON or
optical networking in general. I think the big benefit of DWDM networks
will be to increase the richness of meshing of IP networks and to allow
new business models of IP networking to evolve e.g customer owned and
managed IP networks."
Web services are being used to set up the lightpaths. If there is SONET underneath, the folk with access to the web services can groom the lightpath from a DS3 on up, and with further software development will be able to groom below a DS3.
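For scale, the grooming granularity follows the SONET hierarchy: a DS3 (~44.736 Mb/s) maps into one STS-1 (51.84 Mb/s) timeslot, and an OC-n carries n interleaved STS-1s. A quick sanity check of those numbers:

```python
STS1_MBPS = 51.84    # SONET STS-1 line rate
DS3_MBPS = 44.736    # one DS3 payload fits in one STS-1 slot

def oc_line_rate_mbps(n):
    """Line rate of an OC-n: n interleaved STS-1s."""
    return n * STS1_MBPS

def ds3_slots(n):
    """DS3s that can be groomed from an OC-n lightpath (one per STS-1)."""
    return n

# A ~10G OC-192 lightpath can be groomed into 192 DS3s:
print(oc_line_rate_mbps(192), ds3_slots(192))
```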
This stuff is not yet well understood outside these research network circles. I believe that it is hugely important, and I will be devoting most of my time in December and January to explaining to a broader audience what these folk are doing.
To the world of the best-effort public Internet it is utterly ALIEN. But my understanding is that it works, NOW; that it is a walled garden; and that a big unknown is how long it will remain a walled garden.
Greetings - The use of aliases or partial names is prohibited on the NANOG
mailing list. Please see our AUP:
We suggest that you either:
1. Configure your email agent to also insert the
real name field (i.e., my.pseudonym --> John Smith)
2. Include your .sig in each message.
We thank you for your cooperation in helping to maintain the content and
quality of the NANOG mailing list.
Susan Harris, Ph.D.
Merit Network/Univ. of Mich.
Date: Wed, 24 Nov 2004 21:33:24 -0500
From: Gordon Cook <cook@cookreport.com>
To: deepak@ai.net, nanog@merit.edu
Subject: Re: Public Interest Networks (try UCLP)
[ ... ]
In Europe Kees neggers with Surfnet6 is doing the same thing.
Well, some people over here in .NL might take offense. If you're
interested, check out http://www.gigaport.nl/info/en/home.jsp
Kees Neggers and Boudewijn Nederkoorn are the directors of SURFnet,
the party which is basically the educational ISP, where "educational"
should be viewed (nowadays) as running from K12 to academic levels.
SURFnet6 is the next version of the SURFnet network over here in
Holland. It is developed through a partnership comprising universities,
research institutes, etc. That partnership is called "GigaPort", and
the new effort is called "GigaPort NG" (Next Generation - we're not
that imaginative over here ;D).
[ ... ]
As I understand it Internet 2 and Dante/Geant in europe are primarily
carrier dependent and therefore for the most part onlookers.
I believe that DANTE/GÉANT are looking at the model which is at the
core of SURFnet5 and SURFnet6. SURFnet5 (the current network)
already incorporates a lot of dark fiber, but it's only used at L3.
SURFnet6 will basically make the ISP (SURFnet) also the carrier,
operating at L2 and semi-L1 (the cable is still rented via IRU).
[ ... ]
This stuff is not yet well understood outside these research network
circles. I believe that it is hugely important and I will be
devoting most of my time in december and january to explaining to a
broader audience what these folk are doing.
Well, apart from high-volume data sets like LOFAR (check out
http://www.lofar.org/), having 30+ 10G paths at your disposal as an
ISP would make for interesting cases. Look at the DSL
oversubscription model: over here, consumer DSL is usually 1/40,
and business DSL can be had from 1/20 to 1/1. For an ISP that could
allow for a much more flexible differentiation within its backbone
resources. Another much-cited possibility is that in case of an
overcrowded pipe, connections could be moved to another lightpath,
alleviating the pressure of a bandwidth-usurping event (DoS, severe
Slashdotting, etc.) on the regular path it would travel... Or
implementing QoS at the L1/L2 level.
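The oversubscription arithmetic is simple enough to sketch. Only the 1/40 and 1/1 ratios come from the text above; the subscriber count and access rate below are invented for illustration:

```python
def backbone_commitment_mbps(subscribers, access_mbps, ratio):
    """Backbone capacity needed for a pool of access lines
    oversubscribed at 1/ratio (e.g. ratio=40 for consumer DSL)."""
    return subscribers * access_mbps / ratio

# 10,000 hypothetical 8 Mb/s DSL lines:
consumer = backbone_commitment_mbps(10_000, 8, 40)  # 1/40 consumer pool
business = backbone_commitment_mbps(10_000, 8, 1)   # 1/1 business pool
```

The 40x spread between the two pools is exactly the flexibility a rack of spare 10G lightpaths would let an ISP shuffle around.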
to the world of the best effort public internet it is utterly ALIEN.
but my understanding is that it works. NOW. That it is a walled
garden and that a big unknown is how long it will remain a walled
garden.
GigaPort (which resulted in SURFnet5) had a bunch of R&D labs from
commercial companies on board. I believe they're also on board for
GigaPort NG. The usefulness of such a network -- or, better
formulated, the results from all the research on / with / about these
types of networks -- is clear from a scientific point of view. From an
"ISP world" point of view it is definitely something for the larger
carriers. But for all ISPs or network operators (getting back to the
'NOG' part of NANOG) it's definitely worth keeping tabs on. To quote
Erik-Jan Bos of SURFnet:
"The Paradigm Shift is upon us".
Kind Regards,
JP Velders
(working at a GigaPort NG partner ;D)