Fundamental changes to Internet architecture

I guess I'm not the only one who thinks that we could benefit from some
fundamental changes to Internet architecture.

http://www.wired.com/news/infostructure/0,1377,68004,00.html?tw=wn_6techhead

Dave Clark is proposing that the NSF should fund a new demonstration
network that implements a fundamentally new architecture at many levels.

I'm tired of the tax money wasted everywhere on "demonstration
networks" that cost huge amounts of money, have fat pipes for nothing,
and get lots of press coverage (and, if interconnected to the DFZ world,
often introduce routing problems for those who actually try to use IPv6
in production). Yet most ISPs still don't get native IPv6 service from
their v4 upstreams.

The real work is done elsewhere. There _are_ commercial ISPs nowadays
who have 30Gbps (30, not 3) of native IPv6 bandwidth US-EU and can
provide native IPv6 transit throughout Europe and the US. But no press
releases.

Some talk about deployment; some actually do it. The intersection between
those camps is relatively small.

Best regards,
Daniel

PS: and it's not a US ISP, nor a tier 1 :-)

> Dave Clark is proposing that the NSF should fund a new demonstration
> network that implements a fundamentally new architecture at many
> levels.
>
> The real work is done elsewhere. There _are_ commercial ISPs nowadays
> who have 30Gbps (30, not 3) of native IPv6 bandwidth US-EU and can
> provide native IPv6 transit throughout Europe and the US. But no press
> releases.

I think Dave Clark is talking about something more fundamental than
simply IPv6, and also more far-reaching. Also, the experience of
retrofitting most of IPv6's new features into IPv4 shows that it
is good to have role models, and that is what Dave is proposing.

More information on the new architecture work is here
http://www.isi.edu/newarch/

--Michael Dillon

> I think Dave Clark is talking about something more fundamental than
> simply IPv6, and also more far-reaching. Also, the experience of
> retrofitting most of IPv6's new features into IPv4 shows that it
> is good to have role models, and that is what Dave is proposing.

Indeed. Looks like I still had the IPv6-goggles on from the earlier
thread(s). :-)

> More information on the new architecture work is here
> http://www.isi.edu/newarch/

Thanks for the pointer.

Best regards,
Daniel

'"Look at phishing and spam, and zombies, and all this crap," said Clark.
"Show me how six incremental changes are going to make them go away."'

Well, I suppose it is a good sales pitch, but I'm not terribly sure that
these are network-layer problems.

We could move to a network layer with more security that makes it
impossible for network carriers to identify or intercept such dross,
which might at least deal with the crowd who think "filter port 25
outgoing" is the solution to all the Internet's woes ;-)
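
For what it's worth, here is a rough Python sketch (purely illustrative,
not from anyone's production setup; the MX host name is made up) of what
that port-25 filtering amounts to from the end user's side - a check of
whether the access network still lets you open outbound SMTP connections:

    # Illustrative only: does this network allow outbound TCP connections to
    # port 25 (SMTP)? A timeout or refusal suggests the provider filters
    # port 25 egress, as discussed above.
    import socket

    def outbound_smtp_allowed(mx_host="mail.example.org", timeout=5.0):
        # mail.example.org is a placeholder; substitute a real MX you operate.
        try:
            with socket.create_connection((mx_host, 25), timeout=timeout) as s:
                banner = s.recv(512)       # a real MTA greets with "220 ..."
                return banner.startswith(b"220")
        except OSError:
            return False                   # blocked, filtered, or unreachable

    print("outbound port 25:", "open" if outbound_smtp_allowed() else "filtered/closed")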

Why not create a special task force to research implementing
RFC 2549 - IP over Avian Carriers with Quality of Service -
considering the dodo or, alternatively, the archaeopteryx (both extinct)?

It is about wasting taxpayers' money while watching China deploy IPv9.

We do not need IPv6. We do not need P2P servers for everybody. We do
not need worldnews. IPv4 is good enough for us. 127.0.0.1 is the only IP
we really need. We need a strong government and free TV for everybody.
Let's go to China :-)

After toying a bit with Eudora I found out it was working - really!

How about IPv6? It is working. Everybody can use it - even with windows!
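
To put "it is working" in concrete terms, here is a small Python sketch
(illustrative only; the host name is just a well-known IPv6-enabled site,
swap in any dual-stacked server you like) that checks whether the local
stack and the upstream actually deliver native IPv6:

    # Illustrative only: try to open a TCP connection over IPv6. If this
    # succeeds, name resolution (AAAA), the local IPv6 stack, and the
    # upstream's IPv6 routing are all working end to end.
    import socket

    def ipv6_reachable(host="www.kame.net", port=80, timeout=5.0):
        try:
            infos = socket.getaddrinfo(host, port, socket.AF_INET6, socket.SOCK_STREAM)
        except socket.gaierror:
            return False                   # no AAAA record or resolver trouble
        for family, socktype, proto, _canon, sockaddr in infos:
            s = socket.socket(family, socktype, proto)
            s.settimeout(timeout)
            try:
                s.connect(sockaddr)
                return True
            except OSError:
                continue                   # try the next address, if any
            finally:
                s.close()
        return False

    print("native IPv6" if ipv6_reachable() else "no IPv6 connectivity")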

How about

http://www.inaic/index.php?p=manual-upgrade
http://www.inaic/index.php?p=internet2-tool

It is working! People use it! ISPs use it! Even countries are changing!

In the Middle Ages, if they ever existed in the first place, the church
was doing research. Today we have to do it ourselves.

Radio was not invented by government, only hindered. The automobile was
not invented by government, only licensed and taxed.

Regards - and have a nice weekend
Peter and Karin Dambier

> I guess I'm not the only one who thinks that we could benefit from some
> fundamental changes to Internet architecture.
>
> http://www.wired.com/news/infostructure/0,1377,68004,00.html?tw=wn_6techhead
>
> Dave Clark is proposing that the NSF should fund a new demonstration
> network that implements a fundamentally new architecture at many levels.

Not that I want to add any more fuel to this fire, but I think the article is
talking about National LambdaRail. From what I've seen, this is supposed to
be a next-generation Internet2 based on wavelengths instead of pipes. I don't
know a lot about it, but my impression is this. Keep in mind that I'm not
really involved with the NLR stuff directly, so my thoughts are really those
of an outsider looking in.

Internet2 is significantly underutilized, so we don't need a new network to
provide significantly more bandwidth (we're at about 10% utilization, I think).
Institutions on Abilene connect in through GigaPoPs and, in terms of the
"last mile", getting gigabit-rate metro Ethernet down to a GigaPoP isn't as
bad as it used to be, given that many fiber providers have gone under recently
and given the collective bargaining that's going on among members
(see the Quilt project - http://www.thequilt.net). Just as this situation
is being exploited to make purchasing fiber inexpensive for NLR, it's also
made it inexpensive for local consortia to purchase for themselves
(e.g. the Northern Crossroads metro ring purchase, NEREN, etc.).

NLR's scheme looks like the ideas floated in the 90s about creating a
massive ATM network (this time it's going to be DWDM and MPLS), where you
could provision all sorts of "services". When they built it (vBNS), my
impression is that no one wanted those services, and hence Abilene was built.
Again, the institutions really just wanted packet transport and didn't care
about the overlay services, yet it seems that one of NLR's big benefits is
mainly its ability to provide these overlay services.

Next, NLR seems to be supplying waves and fibers in the exact same locations as
Abilene, which is to say the same locations that already have both a glut of
fiber and high-speed connectivity. NLR isn't going to help me solve any of
the problems I have with connectivity today because it goes only where I
already go today. Another big drawback is that NLR has been a closed club with
a very high cost of entry. Again, if the cost of buying fiber assets directly
(alone, or with one or two schools collectively) is less than joining NLR, why
would I want to join?

These issues, and I'm sure others, have resulted in somewhat of a lack of
interest in NLR, which seems to be part of why NLR is looking to merge back
into the Internet2 project.

At least, that's my view from Boston...

Eric :-)

> It is about wasting taxpayers' money while watching China deploy IPv9.

Though I'm not positive, my impression is that NLR is currently being built not
by the NSF but by "member institutions" - which is to say by research
universities that are part of the Internet2 project. Because we're being asked
to pay for it directly, many of the institutions are balking because we don't
see the benefit.

It's likely that NLR is looking back to the NSF because of this lack of
"funding"...

Eric :-)

To clarify a bit: Dave Clark is talking about a newly proposed agenda for networking research that emphasizes the heck out of making the research real and relevant. He's not talking about building an NLR or an Internet2, though both NLR and I2 are resources that can and probably will be used as part of the demonstration network, if the project really takes off.

In fact, Fergie's later comment "... We're pretty far along in our current architecture to 'fundamentally' change" is actually the root of what I think DC is trying to get at. I think it's a very reasonable question to ask: Is the Internet heading towards a local maximum? (I don't know the answer!) What is it possible to change in today's Internet? Imagine a couple of things that seem desirable:

If research came up with an improved inter-domain routing protocol that had faster convergence, better security and better stability than BGP, but that was unfortunately in no way backwards compatible, could we deploy it?

A solution to DDoS that required another change to the basic IP packet format?

An improved intra-domain management and control system?

Of those, some seem possible -- particularly the last, given that it could be deployed by a single ISP on its own, giving it (ideally!) a competitive advantage over others. A BGP replacement, if the designers/IETF/etc. couldn't figure out a way to make it backwards compatible? Not so sure. Another IP packet format change, after all of the pain of trying to get IPv6 deployed?

Perhaps more of the answers to these questions would be "yes" if it were possible to demonstrate - at scale - that the new protocols were actually effective and worthwhile. Or perhaps the answers would be "yes" if that demonstration network exploded in popularity because it had those features, and the NSF found itself with another Internet on its hands. :-)

I think it's these kinds of questions that Dave Clark is trying to get at, much more than just trying to build a really fast demonstration network.

Is the clean-slate approach the way to go? I don't know. It could work out well, or perhaps academia would be better served by sending more of our students to summer internships at ISPs who're doing innovative things. I do know that Dave Clark is a damn smart guy, and he does have TCP under his belt loops. Sometimes you have to aim for the sky...

Disclaimer: While I've heard some of the discussion about this proposal, I'm speaking only for myself on this one.

   -Dave

Raw research often produces rewards and unexpected results, so I applaud and encourage work in this direction.

However, philosophically: security = less trust vs. scalability = more trust; intelligent = smart-enough-to-confuse vs. simple = predictable. Thus, a very intelligent, secure network is usually a nightmare of unexplained failures and limited scope.

This is why researchers should sometimes ignore experience-hardened network technicians :-)

I look forward to seeing what he comes up with.

John

> I guess I'm not the only one who thinks that we could benefit from some
> fundamental changes to Internet architecture.
>
> http://www.wired.com/news/infostructure/0,1377,68004,00.html?tw=wn_6techhead
>
> Dave Clark is proposing that the NSF should fund a new demonstration
> network that implements a fundamentally new architecture at many levels.
>
> '"Look at phishing and spam, and zombies, and all this crap," said Clark.
> "Show me how six incremental changes are going to make them go away."'
>
> Well, I suppose it is a good sales pitch, but I'm not terribly sure that
> these are network-layer problems.

Good point.

However, a network architecture is not limited to the network layer (at least in the classroom, a network architecture runs from the physical layer to the application layer).
I hope that figuring out which layers should hold which responsibilities will be one of the questions clarified in this effort to re-examine network architecture.

Counter-example: SS7.

Cheers,
-- jra

That is a good counter-example, although it comes with some caveats. I work with SS7 regularly. SS7 should be simple, since it performs a simple function, but it is actually complicated and complex. Still, since SS7 takes us away from the human-managed "static routing" of the older (MF?) trunk network systems, its intelligence creates redundancy and limited failover.

Perhaps Clark will create something that is win-win like that...

(I assume you are giving this as an "intelligent vs. simple" counter-example, since SS7 is an example of good scale because it trusts blindly.)

John