IPv4 address length technical design

Is anyone aware of any historical documentation relating to the choice of 32 bits for an IPv4 address?

Cheers.

I believe the relevant RFC is RFC 791 - https://tools.ietf.org/html/rfc791

I'll add that in the mid-'90s, in a University of Washington lecture hall, Vint Cerf expressed some regret over going with 32 bits. Chuckle-worthy at the time, and a fond memory.
- K

On 3-10-2012 18:33, Kevin Broderick wrote:

I'll add that in the mid-'90s, in a University of Washington lecture hall, Vint Cerf expressed some regret over going with 32 bits. Chuckle-worthy at the time, and a fond memory.
- K

"Pick a number between this and that." It's the 80's and you can still count the computers in the world. :slight_smile:

It is/was a "experiment" and you have the choice between a really large and a larger number. Humans are not too good in comparing really large numbers. If it was ever decided to use a smaller value, for the size of the experiment it might have went quite different. The "safe" (larger) choice ended up bringing more pain.

As a time honored ritual, the temporary solution becomes the production solution.

Oops... And that was not quite what Mr Cerf meant to do.

Regards,

Seth

Chris Campbell <chris@ctcampbell.com> writes:

Is anyone aware of any historical documentation relating to the choice of 32 bits for an IPv4 address?

Cheers.

8-bit host identifiers had proven to be too short... :-)

-r

Actually that was preceded by RFC 760, which in turn was a derivative of IEN 123. I believe the answer to the original question is partially available on a series of pages starting at: http://www.networksorcery.com/enp/default1101.htm
IEN 2 is likely to be of particular interest ...

And yet, almost concurrently, IEEE 802 went with forty-eight bits. Go
figure. I'm pretty sure the explanation you're looking for is: it matched
the word size of the most popular minis and micros at the time.

It's worth noting that the state of the art in systems (minis and
microcomputers) at the time of the 1977 discussions was, for
example, the Intel 8085 (8-bit registers; the 16-bit 8086 came in 1978)
and the 16-bit PDP-11s. The 32-bit VAX-11/780 postdated these (announced
October 1977).

Yes, you can do 32 or 64 bit network addressing with smaller
registers, but there are tendencies to not think that way.

The 48-bit MAC came in 1980; notably, it was not primarily handled in
software on CPUs (key Ethernet functionality sits in dedicated interface
hardware, though the stack is obviously MAC-aware). CPU register
length is less critical when a dedicated controller of arbitrary
bittedness is handling MACs.

Perhaps worth noting (for the archives) that a significant part of the early
ARPAnet was DECsystem-10's with 36-bit words.

http://en.wikipedia.org/wiki/PDP-10

http://en.wikipedia.org/wiki/Email

Tony Patti
CIO
S. Walter Packaging Corp.

And the -10s and -20s were the major reason RFCs refer to octets rather than bytes:
they had a rather slippery notion of "byte" (anywhere from 6 to 9 bits, often with
multiple sizes used *in the same program*).

It wasn't. At the time. But at some point people of vision figured out
that CPU word sizes would standardize on power-of-two powers of two.
Really helps when you want to align data elements in memory if exactly
2 16 bit integers fit in the 32 bit word and exactly 2 32 bit integers
fit in the 64 bit word. "And a half" is a phrase that makes life
miserable in both software development and hardware design.

IEEE figured it out later. The replacement for the MAC address is
EUI-64. I still haven't figured out Bell's excuse with ATM.

Regards,
Bill Herrin

Is anyone aware of any historical documentation relating to the
choice of 32

bits for an IPv4 address?

...

Actually that was preceded by RFC 760, which in turn was a derivative
of IEN 123. I believe the answer to the original question is

...

My theory is that there is a meta-rule to make new address spaces have 4
times as many bits as the previous generation.

We have three data points to establish this for the Internet, and that's
the minimum needed to run a correlation: Arpanet, IPv4, IPv6...

d/

So the address space for IPv8 will be...
</troll>

Cheers,
-- jra
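Tongue firmly in cheek, the meta-rule extrapolates easily enough, taking the Arpanet's 8-bit host number as the starting point (a throwaway Python sketch; no IPv8 exists, of course):

```python
# Each generation quadruples the address width: Arpanet -> IPv4 -> IPv6 -> ...
widths = [8]  # Arpanet host number, in bits
for _ in range(3):
    widths.append(widths[-1] * 4)
print(widths)  # [8, 32, 128, 512]
```

So by the meta-rule, "IPv8" would get 512 bits.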

Remember that at the time, IP was designed to be classful, so having four 8-bit bytes was real convenient: you only had to look at the bytes in the host portion of the address. As far as the host portion goes, Class A meant three significant bytes, Class B two, and Class C one. If you are looking for matches in a routing table it is much easier to match an entire byte than to do it bitwise. Even though systems had varying byte lengths, 8 bits was still the most common because it was the easiest to map extended ASCII into.
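The whole-octet convenience can be sketched in a few lines of Python (illustrative only; the class is determined by the leading bits of the first octet):

```python
def classful_split(addr: bytes) -> tuple[bytes, bytes]:
    """Split a 4-byte IPv4 address into (network, host) by its class.

    Classful rules: Class A = 1 network byte / 3 host bytes,
    Class B = 2/2, Class C = 3/1 -- whole octets, no bitwise masking.
    """
    first = addr[0]
    if first < 128:       # 0xxxxxxx -> Class A
        return addr[:1], addr[1:]
    elif first < 192:     # 10xxxxxx -> Class B
        return addr[:2], addr[2:]
    elif first < 224:     # 110xxxxx -> Class C
        return addr[:3], addr[3:]
    raise ValueError("Class D/E: no network/host split")

net, host = classful_split(bytes([10, 1, 2, 3]))
print(net.hex(), host.hex())  # 0a 010203 -- Class A: one network byte
```

Every comparison lands on a byte boundary, which is what made routing-table lookups cheap on byte-addressed machines.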

Now we could discuss whether there should have been more bytes, but at the time no one had really envisioned public deployment at the scales we see today. Same reason IBM and Microsoft had barriers like 640K of RAM: no one ever thought you would need more than that.

Steven Naslund

IEEE 802 was expected to provide unique numbers for all computers ever built.

Internet was expected to provide unique numbers for all computers actively on the network.

Obviously, over time, the latter would be a declining percentage of the former: the former only ever increases, while the latter could (theoretically) have a growth rate on either side of zero, and certainly sees some decrements even if the increments exceed them.

Owen

So the address space for IPv8 will be...
</troll>

In 100 years, when we start to run out of IPv6 addresses, possibly we
will have learned our lesson and done two things:

  (1) Stopped mixing the Host identification and the Network
identification into the same bit field; instead every packet gets a
source network address, destination network address, AND an
additional tuple of Source host address, destination host
address; residing in completely separate address spaces, with no
"Netmasks", "Prefix lengths", or other comingling of network
addresses and host address spaces.

And
  (2) The new protocol will use variable-length addresses for the Host
portion, such as those used in CLNP, with a conventional default
length, instead of a hardwired limit that comes from using a
permanently fixed-width field.

Need more bits? No protocol definition change required.

  (1) Stopped mixing the Host identification and the Network
identification into the same bit field; instead every packet gets a
source network address, destination network address, AND an
additional tuple of Source host address, destination host
address; residing in completely separate address spaces, with no
"Netmasks", "Prefix lengths", or other comingling of network
addresses and host address spaces.

Where's Noel Chiappa when you need him?

  (2) The new protocol will use variable-length address for the Host
portion, such as used in the addresses of CLNP,

This also was considered during the IPv6 design phase, and the router
designers had a collective cow, as it makes ASIC design a whole lot more
interesting. And back then, line speed was a lot lower than it is now...

Not saying it can't be done - but you're basically going to have to do CLNP
style handling at 400Gbits or 1Tbit. Better get those ASIC designers a *lot*
of caffeine, they're gonna need it...

(1) Stopped mixing the Host identification and the Network
identification into the same bit field;

Where's Noel Chiappa when you need him?

Saying "I told you so" I suspect.

(2) The new protocol will use variable-length address for the Host
portion, such as used in the addresses of CLNP,

This also was considered during the IPv6 design phase, and the router
designers had a collective cow, as it makes ASIC design a whole lot more
interesting.

Where are Tony Li, Paul Traina, and the whole TUBA orchestra when you need them? :-)

Regards,
-drc

Didn't work for DECnet Phase III, DECnet Phase IV, DECnet Phase V (8, 16, 128).