Stupid Question maybe?

Apologies in advance for a simple question. I am finding conflicting definitions of class networks. I was always under the impression that a class “A” network was a /8, a class “B” network was a /16, and a class “C” network was a /24. Recently, I was made aware that a class “A” was indeed a /8 and a class “B” was actually a /12 (172.16.0.0-172.31.255.255), while a class “C” is actually a /16.

Is this different depending on the IP segment, i.e. if it is part of an RFC 1918 group it is classed differently (maybe a course I missed?), or aren’t all IPs classed the same?

I was always under the impression that /8 = A, /16 = B, /24 = C, so rightly or wrongly I’ve always seen 10.x.x.x as “A” and 192.168.x.x as “B”, with 172.16/12 as one that’s just a VLSM between the two.

Again, apologies for the simple question; I just can’t seem to find a solid answer.

Happy holidays all the same!

You may find this helpful in your search for knowledge:

https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing

“Classful” networking is rarely useful other than for understanding How We Got Here.

There’s a handy table in the linked article which relates each IPv4 mask length to the number of class A, B, or C networks it spans.

jermudgeon

Class A, B, and C represent the position of the first 0 bit in the address, with a corresponding natural netmask: A = 0 in the 1st bit (0xxxxxxx, /8), B = 0 in the 2nd bit (10xxxxxx, /16), and C = 0 in the 3rd bit (110xxxxx, /24).
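
A minimal sketch of that first-zero-bit rule in Python, purely for illustration:

    def classful_info(first_octet: int):
        """Map the position of the first 0 bit to (class, natural prefix length)."""
        if first_octet & 0b10000000 == 0:   # 0xxxxxxx
            return "A", 8
        if first_octet & 0b01000000 == 0:   # 10xxxxxx
            return "B", 16
        if first_octet & 0b00100000 == 0:   # 110xxxxx
            return "C", 24
        return None, None                   # 111xxxxx: class D/E, no natural mask

    print(classful_info(10))    # ('A', 8)  -> 10.0.0.0/8
    print(classful_info(172))   # ('B', 16) -> 172.x.y.z sits in class B space
    print(classful_info(192))   # ('C', 24) -> 192.x.y.z sits in class C space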

The confusion you seem to be experiencing relates to the number of A, B, and C networks defined in RFC 1918 (private address space).

In this case, a single A (10.0.0.0/8), 16 Bs (172.16.0.0/12), and 256 Cs (192.168.0.0/16) were set aside for private networks.

Later, an additional block was reserved for CGNAT intermediary space (100.64.0.0/10 IIRC).

Owen

Apologies in advance for a simple question. I am finding conflicting
definitions of class networks. I was always under the impression that a
class "A" network was a /8, a class "B" network was a /16, and a class "C"
network was a /24. Recently, I was made aware that a class "A" was indeed a
/8 and a class "B" was actually a /12 (172.16.0.0-172.31.255.255), while a
class "C" is actually a /16.

As others have mentioned, IP address classes are no longer relevant, beyond understanding how things were done in the past. Address classes haven't been used for assignment or routing purposes for over 20 years, but the term lives on because it keeps getting undeserved new life in networking classes and training materials.

Classful address assignment and routing was horribly inefficient for two main reasons, both of which were corrected by a combination of CIDR and VLSM:

1. Assigning IP networks on byte boundaries (/8, /16, /24) was often not granular enough to match assignments to actual need. If you only needed 25 addresses for a particular network, you had to request or assign a /24 (legacy class C), wasting roughly 90% of those addresses.

2. Classful routing was starting to bloat routing tables, both inside of and between networks. If a network had a little over 8,000 IPv4 addresses under its control in the pre-CIDR days, that meant that they or their upstream provider would need to announce routes for 32 individual /24s. In the post-CIDR world, under the best circumstances (all of their address space is contiguous and falls on an appropriately maskable boundary like x.y.0.0 through x.y.31.0), that network could announce a single /19 (see the sketch after this list). When scaled up to a full Internet routing table, the possible efficiencies become much more obvious. The network operator community has had to continue to grapple with routing table bloat since then, but for different reasons.
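
A small illustration of both points, using Python's ipaddress module and a made-up 10.20.0.0 block:

    import ipaddress

    # 32 contiguous /24s from a made-up block, 10.20.0.0 through 10.20.31.0:
    nets = [ipaddress.ip_network(f"10.20.{i}.0/24") for i in range(32)]

    # Pre-CIDR that meant 32 separate class C routes; post-CIDR one /19 covers them:
    print(list(ipaddress.collapse_addresses(nets)))  # [IPv4Network('10.20.0.0/19')]

    # And the waste in point 1: 25 needed addresses forced into a /24:
    print(f"{1 - 25 / 256:.0%} wasted")              # 90% wasted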

Had CIDR, VLSM, and NAT/PAT not been implemented, we (collectively) would have run out of IPv4 addresses many years before we actually did.

Thank you
jms

If you want the full historical definition, blow the dust off RFC791, and open your hymnals to section 2.3.

“Addresses are fixed length of four octets (32 bits). An address
begins with a network number, followed by local address (called the
“rest” field). There are three formats or classes of internet
addresses: in class a, the high order bit is zero, the next 7 bits
are the network, and the last 24 bits are the local address; in
class b, the high order two bits are one-zero, the next 14 bits are
the network and the last 16 bits are the local address; in class c,
the high order three bits are one-one-zero, the next 21 bits are the
network and the last 8 bits are the local address.”

This is depicted visually, if that’s your deal, in RFC 796.

Back in '81 this was totally fine, but times change, and CIDR / VLSM eventually made way more sense.

It’s good to have at least a passing understanding of the old terminology simply because documentation for newer stuff likes to reference it, but don’t get too hung up on it.

Recently, I was made aware that a class “A” was indeed a /8 and a class “B” was actually a /12 (172.16.0.0-172.31.255.255), while a class “C” is actually a /16.

You had it right to start with.

A is (was) /8, B is /16, C is /24

All on easily human-readable byte boundaries in IPv4 space.

The RFC-1918 internal space was allocated as a /8-, a /12-, and a /16-sized block. Those block sizes are not what defines A, B, or C. Whoever corrected you is confused.

Anyone who networked before and during the CIDR transition won’t forget this…

-george

Hi Joe,

Take everything you've ever heard about classful networking, throw it
away, and outside of trivia games never think about it again. Network
address classes haven't been a valid part of TCP/IP for more than two
decades now.

For historical trivia purposes only, the CIDR /16 replaced class B.
Had RFC 1918 come out before CIDR (RFC 1519),
172.16.0.0-172.31.255.255 would have described 16 contiguous class
B's, not just one, while 192.168.0.0-192.168.255.255 would have
described 256 contiguous class C's. In fact, this terminology is used
in RFC 1918's predecessor, RFC 1597.
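
The arithmetic checks out with Python's ipaddress module:

    import ipaddress

    b_sized = list(ipaddress.ip_network("172.16.0.0/12").subnets(new_prefix=16))
    c_sized = list(ipaddress.ip_network("192.168.0.0/16").subnets(new_prefix=24))
    print(len(b_sized), b_sized[0], b_sized[-1])  # 16 172.16.0.0/16 172.31.0.0/16
    print(len(c_sized), c_sized[0], c_sized[-1])  # 256 192.168.0.0/24 192.168.255.0/24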

And if you really like trivia, find the math error in RFC 1597.

Class A started at 0.0.0.0, class B started at 128.0.0.0 and class C
started at 192.0.0.0. There was also a class D (now the multicast
address space) starting at 224.0.0.0 and a class E (still reserved)
starting at 240.0.0.0.
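
Summarized as first-octet ranges (just a restatement of the starting addresses above):

    # First-octet ranges implied by the leading-bit patterns:
    CLASS_RANGES = {
        "A": range(0, 128),    # 0xxxxxxx
        "B": range(128, 192),  # 10xxxxxx
        "C": range(192, 224),  # 110xxxxx
        "D": range(224, 240),  # 1110xxxx, multicast
        "E": range(240, 256),  # 1111xxxx, reserved
    }

    def ip_class(first_octet: int) -> str:
        return next(c for c, r in CLASS_RANGES.items() if first_octet in r)

    print(ip_class(10), ip_class(172), ip_class(192), ip_class(224))  # A B C D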

Regards,
Bill Herrin

/24 is certainly cleaner than 255.255.255.0.

I seem to remember it was Phil Karn who, in the early 80's, suggested
that expressing subnet masks as the number of bits from the top end
of the address word was efficient, since subnet masks were always
a series of ones followed by zeros with no interspersing. That idea
was incorporated (or independently invented) about a decade later
as the CIDR a.b.c.d/n notation in RFC 1519.
  - Brian
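
Because a valid mask is a run of ones followed by a run of zeros, the prefix length captures everything and contiguity is a one-line bitwise test. A quick sketch, assuming 32-bit IPv4 masks:

    def is_contiguous(mask: int) -> bool:
        """True if a 32-bit mask is a run of ones followed by a run of zeros."""
        inv = mask ^ 0xFFFFFFFF        # invert within 32 bits
        return inv & (inv + 1) == 0    # the inverse must look like 0...01...1

    def to_prefixlen(mask: int) -> int:
        assert is_contiguous(mask)
        return bin(mask).count("1")    # only safe because the ones are contiguous

    print(is_contiguous(0xFFFFFF00), to_prefixlen(0xFFFFFF00))  # True 24
    print(is_contiguous(0xFFFCFF00))                            # False: 255.252.255.0 has a gap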

Prefix notation seems to make more sense for humans, and modern hardware
makes the same assumption for forwarding, just as it makes assumptions
about the distribution of prefix sizes; anything to get more out of less.
For ACLs that assumption still doesn't hold. It's an optimisation which
loses information, which for the common case does not matter.

It is a matter of machine readability vs human readability. Remember that IP was around when routers did not have a lot of horsepower. The dotted decimal notation was a compromise between pure binary (which the equipment used) and human readability.

VLSM seems obvious now, but in the beginning, organizing various-length routes in very expensive memory on low-horsepower processors meant that it was much easier to break routes down along byte boundaries. This meant you only had four different lengths of route to deal with, and it was intended to eliminate multiple passes sorting the tables.

I am not quite sure what you mean about interspersing zeros; that would be meaningless. Remember that it is a mask: the address bits which are masked with 1s are significant to routing, while the bits that are masked with 0s are the host portion and don't matter to the network routing table.

Steven Naslund
Chicago IL

Why do we still have network equipment where half the configuration requires netmask notation, the other half requires CIDR, and, to throw you off, they also include inverse netmasks?

On Tue, 18 Dec 2018 at 20:51, Brian Kantor <Brian@ampr.org> wrote:

Two reasons:

  1. Legacy configuration portability: people learned a certain way and all versions of code understand a certain way. The best way to correct that issue is to accept either of them.

  2. The inverse mask is indeed a pain in the neck but is technically correct. The subnet mask is used where the equipment cares to work with the network portion of the address (ignoring the host). The inverse mask is important where the equipment cares more about the host we are referring to (ignoring the network). It’s a bit of a cheat to allow for code used in routing to be used for ACL and firewall without modification to the code. For example, the same code piece that routes a network toward an Ethernet interface can be reused to route a host toward a null interface.
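
All three notations name the same network, which is easy to see with Python's ipaddress module (the prefix is just an example):

    import ipaddress

    net = ipaddress.ip_network("192.0.2.0/24")  # TEST-NET-1, just an example
    print(net.with_prefixlen)  # 192.0.2.0/24            (CIDR)
    print(net.with_netmask)    # 192.0.2.0/255.255.255.0 (subnet mask)
    print(net.with_hostmask)   # 192.0.2.0/0.0.0.255     (inverse/wildcard mask)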

Steven Naslund

Chicago IL

Hi Steve,

That's like saying the inverse mask is technically correct when the
computer wants to decide whether to ARP for the next hop. No sale, man.

A AND NETMASK ?= B AND NETMASK

is exactly the same operation as

A OR inverse NETMASK ?= B OR inverse NETMASK

While A AND inverse NETMASK ?= B AND inverse NETMASK *never* yields
useful knowledge.

No sale.

Regards,
Bill Herrin
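
Those three comparisons, spelled out as a quick sketch with hypothetical addresses:

    A   = 0xC0000201      # 192.0.2.1  (hypothetical)
    B   = 0xC0000263      # 192.0.2.99 (hypothetical)
    M   = 0xFFFFFF00      # netmask 255.255.255.0
    INV = M ^ 0xFFFFFFFF  # inverse mask 0.0.0.255

    print((A & M) == (B & M))      # True:  compares the network bits
    print((A | INV) == (B | INV))  # True:  the same test, host bits forced to 1
    print((A & INV) == (B & INV))  # False: compares only the host bits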

I see it more used in terms of firewall operations on what are normally network routing devices. I suppose someone with inside knowledge of the Cisco IOS architecture could tell us why they primarily use that notation with ACLs.

I have never seen a computer want or accept an inverse mask, so it is irrelevant to ARP. The question with ARP is "are we on the same network?"

The naming of the inverse netmask is really tragic. They should be called net mask and host mask, because that is what they really are. In a net mask the 1s denote the network portion; in the host mask (née inverse netmask) the 1s denote the host portion. That's all there is to it.

The inverse mask could be used to figure out whether to ARP or not; you just have to decide whether the 1s or the 0s mean that something is significant to your calculation. Using the inverse mask, I could discard the portion masked with 1s; using the network mask, I can discard the portion masked with 0s. Nothing states how you have to use the information.

Steve

I seem to remember that before the advent of VLSM and CIDR there was no requirement for the 1 bits in the netmask to be contiguous with no intervening 0 bits, and there was always someone who tested it out on a production network just to prove a point (usually only once).

Dave


History of non-contiguous network masks, as I observed it.

The rules did not prohibit discontiguous network masks. But no one was sure how to make them work; in particular, how to allocate subnets from discontiguous networks in a sensible fashion.

In the early 90s, during the efforts to solve the swamp and classful-exhaustion problems, Paul Francis (then Tsuchiya) and I each worked out table structures that would allow for discontiguous masks with well-defined "prefixes" / "parents". Both approaches were based on extensions of Knuth's Patricia trees. (It took some interesting analysis and extensions.)

When we were done, other folks looked at the work (I don't know if the Internet Drafts are still in repositories, but they should be) and concluded that while this would work, no network operations staff would ever be able to do it correctly. So as a community we decided not to go down that path.

Yours,
Joel

I would love to hear some confirmation of this, or even first hand experience.

/Mainly/ for historical / trivial purposes. (Don't ask, don't tell.)

I had a heck of a time a few years back trying to troubleshoot an issue where an upstream provider had an ACL with an incorrect mask along the lines of 255.252.255.0. That was really interesting to talk about once we discovered it, though it caused some loss of hair beforehand...

Actually, not really. In that time frame there was quite a bit of discussion about "discontiguous" subnet masks, which were masks that had at least one zero somewhere within the field of ones. There were some who thought they were pretty important. I don't recall whether it was Phil who suggested what we now call "prefixes" with a "prefix length", but it was not a fait accompli.

Going with prefixes as we now describe them certainly simplified a lot of things.

Take a glance at https://www.google.com/search?q=discontiguous+subnet+masks for a history discussion.

Juniper originally didn't support them even in the ACL use case, but
was forced to add them later due to customer demand, so people do have
use cases for them. If we still supported them in forwarding, I'm sure
someone would come up with a solution that depends on it. I am not
advocating that we should; I'd rather take my extra PPS out of the HW.

However, there is one quite interesting use case for a discontiguous mask
in an ACL. If you have, as you should, a specific block for customer
link networks, you can in an iACL drop all packets to your side of the
links while still allowing packets to the customer side of the links,
making the attack surface against your network minimal.
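
A concrete sketch of that trick, with a hypothetical 192.0.2.0/24 linknet block, assuming the provider side always takes the even address of each /31: one discontiguous mask then matches every provider-side address in the block.

    BLOCK    = 0xC0000200               # 192.0.2.0/24, a made-up linknet block
    ACL_MASK = 0xFFFFFF00 | 0x00000001  # 255.255.255.1: discontiguous on purpose

    def is_provider_side(addr: int) -> bool:
        # One match checks the /24 bits AND that the low bit is 0, i.e. the
        # even (provider) address of every /31 link network in the block.
        return addr & ACL_MASK == BLOCK

    print(is_provider_side(0xC0000204))  # True:  192.0.2.4, our side of a /31
    print(is_provider_side(0xC0000205))  # False: 192.0.2.5, customer side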