Admitting to not having read every message in these threads,
I'd like to highlight a bit of the history.
IMnsHO, the otherwise useful history is missing a few steps.
1) The IAB selected ISO CLNP as the next version of IP.
2) The IETF got angry, disbanded, replaced, and renamed IAB.
3) On the Big-Internet list, my Practical Internet Protocol Extensions
(PIPE) was an early proposal, and I'd registered V6 with IANA.
I was self-funding. PIPE was cognizant of the needs of ISPs.
4) Lixia Zhang wrote me that Steve Deering was proposing something
similar, and urged us to pool our efforts. That became Simple
Internet Protocol (SIP). We used 64 bit addresses. We had a clear
path for migration, using the upper 32-bits for the ASN and the old
IPv4 address in the lower 32-bits. We had running code.
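The migration mapping described in step 4 can be sketched in a few lines; the function name is mine, not from the SIP drafts:

```python
import ipaddress

def sip64_from_ipv4(asn: int, ipv4: str) -> int:
    """Illustrative SIP-style 64-bit address: the ASN in the
    upper 32 bits, the existing IPv4 address in the lower 32."""
    return (asn << 32) | int(ipaddress.IPv4Address(ipv4))

# AS 65000 keeping its old IPv4 address 192.0.2.1
addr = sip64_from_ipv4(65000, "192.0.2.1")
print(f"{addr:016x}")  # -> 0000fde8c0000201
```

An existing IPv4 host could thus be reached through the new address space without renumbering the lower 32 bits.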
5) The IP Address Extension (IPAE) proposal had some overlapping features,
and we asked them to merge with us. That added some complexity.
6) Paul Francis (the originator of NAT) had a Polymorphic Internet Protocol
(PIP) proposal with some overlapping features, so we also asked them to
merge with us (July 1993). That added more complexity in the protocol
header chaining.
7) The result was SIPP. We had 2 interoperable implementations: Naval
Research Labs, and KA9Q NOS (Phil Karn and me). There were others.
8) As noted by John Curran, there was a committee of "powers that be".
After IETF had strong consensus for SIPP, and we had running code,
the "powers that be" decided to throw all that away.
9) The old junk was added back into IPv6 by committee.
There was also a mention that the Linux IP stack is fairly compact and
that the IPv6 code is somewhat smaller than the IPv4 code. That's because the Linux
stack was ported by Alan Cox from KA9Q NOS. We gave Alan permission to
change from our personal copyright to GPL.
It has a lot of the features we'd developed, such as packet buffers and
pushdown functions for adding headers, complementary to BSD pullup.
They made SIPP/IPv6 fairly easy to implement.
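The pushdown idea can be illustrated with a toy buffer that reserves headroom so each layer prepends its header without copying the payload; the class and method names here are mine, loosely in the spirit of the KA9Q/Linux buffers:

```python
class PacketBuffer:
    """Toy packet buffer with headroom for pushdown header insertion."""

    def __init__(self, payload: bytes, headroom: int = 64):
        self.buf = bytearray(headroom) + bytearray(payload)
        self.start = headroom          # current start of packet data

    def push(self, header: bytes) -> None:
        """Prepend a header by moving the start pointer down; the
        payload bytes are never copied again."""
        self.start -= len(header)
        assert self.start >= 0, "out of headroom"
        self.buf[self.start:self.start + len(header)] = header

    def packet(self) -> bytes:
        return bytes(self.buf[self.start:])

pkt = PacketBuffer(b"payload")
pkt.push(b"TCP|")                      # transport layer pushes first
pkt.push(b"IP6|")                      # then the network layer
print(pkt.packet())                    # -> b'IP6|TCP|payload'
```

Each layer pushes its header down in front of the previous one, which is the inverse of BSD's pullup on receive.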
Owen DeLong wrote:
>> IPv6 optional header chain, even after it was widely recognized that
>> IPv4 options are useless/harmful and were deprecated, is an example of
>> IPv6 bloat.
>> Extensive use of link multicast for nothing is another example of
>> IPv6 bloat. Note that IPv4 works without any multicast.
>
> Yes, but IPv6 works without any broadcast. At the time IPv6 was being
> developed, broadcasts were rather inconvenient and it was believed
> that ethernet switches (which were just beginning to be a thing then)
> would facilitate more efficient capabilities by making extensive use
> of link multicast instead of broadcast.
Masataka Ohta replied:
> No, the history around it is that there was some presentation
> in the IPng WG by ATM people stating that ATM, or NBMA (Non-Broadcast
> Multiple Access) in general, is multicast capable though not
> broadcast capable, which was blindly believed by most, if not
> all (excluding *me*), of the people there.
Both Owen and Masataka are correct, in their own way.
IPv4 options were recognized as harmful; SIPP used header chains instead.
But the whole idea was to speed processing by eliminating hop-by-hop
options. Then the committees added back hop-by-hop processing (type 0).
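The header-chain idea amounts to a simple linked walk: each extension header names the next one. The protocol numbers below are the real IANA values; the packet bytes are made up for illustration:

```python
# Walk an IPv6-style extension header chain. Each extension header
# starts with (next_header, hdr_ext_len), where the length is in
# 8-octet units not counting the first 8 octets.
EXTENSION_HEADERS = {0: "Hop-by-Hop", 43: "Routing", 44: "Fragment", 60: "Dest Opts"}

def walk_chain(first_next_header: int, payload: bytes) -> list:
    nh, offset, chain = first_next_header, 0, []
    while nh in EXTENSION_HEADERS:
        chain.append(EXTENSION_HEADERS[nh])
        nh = payload[offset]                     # next header field
        hdr_len = (payload[offset + 1] + 1) * 8  # length of this header
        offset += hdr_len
    chain.append(nh)    # final upper-layer protocol (e.g. 6 = TCP)
    return chain

# Hypothetical packet: Hop-by-Hop -> Routing -> TCP
payload = bytes([43, 0] + [0] * 6       # HbH: next=Routing(43), 8 octets
              + [6, 0] + [0] * 6)       # Routing: next=TCP(6), 8 octets
print(walk_chain(0, payload))           # -> ['Hop-by-Hop', 'Routing', 6]
```

A router that doesn't care about an extension header can skip it in constant time; reintroducing hop-by-hop (type 0, required to be first) forced every router on the path to examine it.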
Admittedly, I was also skeptical of packet shredding (our name for
ATM). Sadly, the Chicago NAP required ATM support, and that's where
my connections were located.
> It should be noted that IPv6 was less bloat because
> ND abandoned its initial goal to support IP over NBMA.
Neighbor Discovery is/was agnostic to NBMA. Putting all the old
ARP and DHCP and other cruft into the IP layer was my goal, so
that it would be forever link-agnostic.
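As an aside on how ND's multicast replaces ARP's broadcast: a neighbor solicitation goes to the solicited-node multicast group (RFC 4291), derived from the target address, rather than to every host on the link. A minimal sketch:

```python
import ipaddress

def solicited_node(addr: str) -> ipaddress.IPv6Address:
    """Solicited-node multicast group: ff02::1:ff plus the low
    24 bits of the unicast address, so a neighbor solicitation
    disturbs only hosts sharing those 24 bits."""
    low24 = int(ipaddress.IPv6Address(addr)) & 0xFFFFFF
    base = int(ipaddress.IPv6Address("ff02::1:ff00:0"))
    return ipaddress.IPv6Address(base | low24)

print(solicited_node("2001:db8::4:567:89ab"))  # -> ff02::1:ff67:89ab
```

With a multicast-aware switch, only the handful of hosts subscribed to that group see the frame, instead of every station on the segment.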
> There is still a valid argument to be made that in a switched
> ethernet world, multicast could offer efficiencies if networks were
> better tuned to accommodate it vs. broadcast.
> That is against the CATENET model, in which each datalink only
> contains a small number of hosts, so broadcast is not a
> problem at all. Though at CERN a single Ethernet with
> thousands of hosts was operated, of course poorly; it
> was abandoned as inoperational long before IPv6,
> which is partly why IPv6 is inoperational.
Yes, we were also getting a push from Fermi Labs and CERN for very
large numbers of nodes per link, rather than old ethernet maximum.
That's the underlying design for Neighbor Discovery. Less chatty.
Also, my alma mater was Michigan State University, operating the
largest bridged ethernet in the world in the '80s. Agreed, it was
"inoperational". My epiphany was splitting it with KA9Q routers.
Suddenly the engineering building and the computing center each had
great throughput. Turns out it was the administration's IBM that
had been clogging the campus. Simple KA9Q routers didn't pass the
bad packets. That's how I'd become a routing over bridging convert.
Still, there are data centers with thousand-port switches.