How our young colleagues are being educated....

As a student I feel particularly concerned about this.

Not only are they skimming over newer technologies such as BGP, MPLS and
the fundamentals of TCP/IP that run the Internet and the networks of the
world, they are focusing on ATM, Frame Relay and other technologies that
are on their way out the door and will probably be extinct by the time
this student graduates. They are teaching classful routing and skimming
over CIDR. Is this indicative of the state of our education system as a
whole? How is it that this student doesn't know about OSPF and has never
heard of RIP?

On the point about learning "ancient" technologies like X.25, I strongly
believe it's not useless when compared with newer ones.
The purpose of some protocols depends on their environment at a specific
time. IMHO, the evolution that resulted in SPDY shows how TCP *was*
relevant when you had lots of noise on the line (back-off algorithms).
Furthermore, getting to know the past is the best way to avoid
repeating the same mistakes all over again. It also provides the basics
and theory of simple communication (channel coding, the OSI model,
error correction, etc.).

The administration's position is not to chase hands-on experience with
the latest technology (mostly pushed by vendors), since it can be
worthless tomorrow.

On the other hand, people have to be very careful not to keep the rusty
engine running.
I never found out whether one of my teachers was aware of the existence
of CIDR notation, yet he taught us about IPv6 (sadly not as a turning
point driven by IPv4 exhaustion, but more as a fancy feature).
Other courses ended with VxLAN, LTE and multicast.
I agree that SDN is becoming inevitable and is showing the tip of its nose.

In my experience, I never waited for courses to understand DNS or BGP
(yet they gave me strong roots afterwards).
I'm also one of the few to attend networking conferences. There I get a
glimpse of a more political than technical view of what the future
Internet will be, something not taught in class.
I believe lots of students aren't aware of these events or resources,
and would be very interested: they just need a little push.
Others, as anywhere, won't be motivated to go deeper than the
courses. So even if they had the latest knowledge, I don't think it
would be that much more beneficial.

In the lab we get the opportunity to configure high-end equipment.
Our assignments are sometimes very restrictive, not helping us see past
the few commands, and not involving "creative" exercises like treating
every student as an independent network and routing through some of
them...
One of my disappointments is that we only work with a single brand. I
don't think we should switch to a cheaper manufacturer (losing somewhat
"precious" experience on the famous one), but we should be shown
alternatives, the equivalent of pseudo-code: the router is only a means
to an end. How does Linux express the same BGP configuration compared
to Cisco?
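
For instance, a toy sketch of that pseudo-code idea: one abstract session
description rendered in an IOS-like dialect and a BIRD-like dialect. Both
output syntaxes below are illustrative approximations, not verified vendor
configuration.

```python
# Toy sketch: the "pseudo-code" view of a BGP session. One abstract
# intent, two vendor surfaces. Both dialects are approximations for
# illustration only.
session = {"local_as": 65001, "peer": "192.0.2.1", "peer_as": 65002}

cisco_ios = (
    f"router bgp {session['local_as']}\n"
    f" neighbor {session['peer']} remote-as {session['peer_as']}"
)

bird_like = (
    f"protocol bgp upstream {{\n"
    f"    local as {session['local_as']};\n"
    f"    neighbor {session['peer']} as {session['peer_as']};\n"
    f"}}"
)

print(cisco_ios, bird_like, sep="\n\n")
```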

Now, as an aside, one problem that I often have with various academic
courses is that the people who teach them often don't have enough
real-world experience (or at least not current experience) to pass along
any benefit in that regard. There are many things that need to be
addressed at this level within the higher-education arena, and I'm sure
it's not just related to networking subjects!

When I did teaching, it was as an employee hired to do network ops
first and academic stuff a definite second. I'm still not qualified
to even apply to the courses I taught, but I did get nice evaluations,
simply because what we taught was closely connected to the NREN we ran.
Thus we could pick examples from Actual Reality and make the binary ->
hex conversions relevant.
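
For instance, the kind of minimal drill (sketched here in Python, with an
address from documentation space) that becomes concrete once it is a real
prefix from your own network:

```python
# Binary -> hex made concrete: one real-looking IPv4 address, three spellings.
addr = "192.0.2.130"
octets = [int(o) for o in addr.split(".")]
print(".".join(f"{o:08b}" for o in octets))  # 11000000.00000000.00000010.10000010
print(".".join(f"{o:02x}" for o in octets))  # c0.00.02.82
```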

I'm thinking that network operations and design today is a field much
like workshop toolroom knowledge was back before CAD/CAM: there is a
solid and long scientific backing to what is done, in materials science,
maths, etc.; the machines used are the products of centres of great
precision and experience; but still, you can't get them to do anything
useful without a well-balanced theoretical background coupled with solid
hands-on experience. The rookie and the engineer from the construction
dept. will both need training to be useful and non-lethal in that
environment, even if the engineer can design a successful lathe.

The rôle of network courses in academia, then, is a lot like looking out
for the programmer with the soldering iron. People who know how things
ought to work in theory are quite likely to be dangerous in practice. (and
don't get me started on studio sound engineers in live sound...)

It might be, though, that I've simply been watching Keith Fenner on
YouTube too many late nights. (That is a recommendation, btw.)

I am a university student who has just completed the first term of
the first year of a Computer Systems and Networks course. Apart from a
really out-of-place MATH module that did trig but not binary, it has
been reasonably well run so far. (The binary is covered in a different
module, just not maths.) The worst part of the course is actually the
core networking module, which is based on Cisco material. The Cisco
material is HORRIBLE! Those awkward "book" page things with the stupid
hierarchical menu. As for the content... a scalable network is one
you can add hosts to, so what's a non-scalable network? Will the
building collapse if I plug my laptop in?

Having followed NANOG for years, I notice a lot of mistakes
and "over-simplifications" that show a clear distinction between the
theory in the university books and the reality on NANOG, and
demonstrate the lecturers' lack of real-world exposure. As a simple
example: in IPv4 the goal is to conserve IP addresses, therefore on
point-to-point links you use a /30, which only wastes 50% of the
address space. In the real world - /31s? But a /31 is impossible, I
hear the lecturers say...

Not only is the entire campus IPv4-only, but on the wifi network they
actually assign globally routable addresses and then block protocol 41,
so Windows configures broken 6to4! Working IPv6 connectivity would at
least expose students to it a little and let them play with it...

Among the things I have heard so far: MAC addresses are unique, IP
fragments should be blocked for security reasons, and the OSI model
only has 7 layers to worry about. All theoretically correct. All
wrong.
- Mike Jones

Cisco as the basis of networking material? Does nobody use Comer, Stallings, or Tanenbaum as basic texts anymore?

Miles Fidelman

Mike Jones wrote:

I used Stallings a couple of years ago. Cisco is not the basis of
networking. It is the basis for TCP/IP.

-Grant

The Cisco "Networking Academy" program was used throughout my CEGEP education in Quebec (roughly the end of high school / first year of college in US terms). There was no deviation from the coursework, and the aim was to get the student CCNA-certified at the end.

Well... to be accurate, and just a tad pedantic, the basis for TCP/IP is:
"A Protocol for Packet Network Intercommunication," Vinton G. Cerf & Robert E. Kahn, IEEE Transactions on Communications, Vol. COM-22, No. 5, May 1974

Miles Fidelman

Grant Ridder wrote:

FYI, just checked, and,
Comer's "Internetworking with TCP/IP" seems to be in its 6th edition, published 2013
Stallings' "Data and Computer Communications" seems to be in its 10th edition, also 2013
Tanenbaum's "Computer Networks" seems to be in its 5th edition, published 2010

So... all still pretty current. (My personal copies are just a bit more dated: first editions that probably do qualify as "historical references," along with my old standby, the "DDN Protocol Handbook," complete with the MIL-STD versions of some classic RFCs. :-)

Cheers,

Miles

Randy wrote:

Well, let's start with: Happy Holidays.

In my line of work, anyone with a CCNA gets put at the bottom of the pile =D

We're looking for proactive associates, and we've found that applicants
who present themselves foremost as CCNA engineers are usually just that:
someone who could follow the course and bothered to pass it.

The best deal is to get a Cisco 1000V image (or GNS3) and a virtual
server (about $600 used with 72G of RAM lately, and you do not need a
huge amount of disk) and start building test beds for real-world needs.

The only drawback is that you may make the interviewer worried about his
own job =D

Good luck.

Merry Christmas! (Even if slightly late...)

I absolutely agree. The certification by itself doesn't prove much beyond a passing interest in networking and an ability to retain a fair amount of information. I suspect it's mostly a question of creating some kind of standard by which to judge applicants. It's also worth mentioning that I bet many HR departments are actively hunting for keywords such as certification acronyms.

It was just a bit sad to see the certification itself as the "real" goal of the program.

Cheers!

As for the content... a scalable network is one
you can add hosts to, so what's a non-scalable network? Will the
building collapse if I plug my laptop in?

Hi Mike,

A few starting points for interesting insight:

https://bill.herrin.us/network/bgpcost.html

According to the estimate, it costs about $8000/year (pennies here and
pennies there, they add up) to add a single multihomed network to the
Internet before you even consider the bytes sent and received. There
are around 500,000 such networks. If 10,000,000 such networks were
required, we would have difficulty building routers that could work.

Indeed, in the '90s the Internet's 50,000-ish networks caught up to and
nearly exceeded the routers we were capable of building. We came close
to having to triage by cutting networks off the Internet.

That's an example of something that scales poorly.

On the other hand, adding a DNS zone costs $10/year or less. We could
add a billion or a trillion more and it might add a few million
dollars total to the cost of a few root and TLD name servers.

The DNS scales well.
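
Back-of-envelope, using the figures above (these are the post's estimates,
not authoritative numbers):

```python
# Estimates from the post above; the point is the shape, not the digits.
routes, cost_per_route = 500_000, 8_000            # multihomed networks, $/year each
print(f"BGP: ~${routes * cost_per_route:,}/year")  # ~$4,000,000,000/year, borne by everyone

# A new DNS zone burdens only its own delegation path, so even a billion
# more zones adds only "a few million dollars" at the root/TLD servers.
zones, added_infra_cost = 1_000_000_000, 3_000_000
print(f"DNS: ~${added_infra_cost / zones:.4f} of shared cost per added zone")
```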

Having followed NANOG for years, I notice a lot of mistakes
and "over-simplifications" that show a clear distinction between the
theory in the university books and the reality on NANOG, and
demonstrate the lecturers' lack of real-world exposure. As a simple
example: in IPv4 the goal is to conserve IP addresses, therefore on
point-to-point links you use a /30, which only wastes 50% of the
address space. In the real world - /31s? But a /31 is impossible, I
hear the lecturers say...

In the real world you often assign a /32 to a loopback address on each
router and make all of the serial interfaces borrow that address ("ip
unnumbered" in Cisco parlance), which wastes no addresses.

With non-point-to-point links there are other tricks you can play to
avoid wasting more addresses than strictly necessary.
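
The /30 vs /31 arithmetic is easy to verify with Python's stdlib
ipaddress module (hosts() understands RFC 3021 /31s in recent Python):

```python
import ipaddress

p30 = ipaddress.ip_network("192.0.2.0/30")
print(p30.num_addresses, list(p30.hosts()))
# 4 [IPv4Address('192.0.2.1'), IPv4Address('192.0.2.2')]
# -> network + broadcast consume 2 of the 4 addresses: 50% wasted

p31 = ipaddress.ip_network("192.0.2.0/31")   # RFC 3021 point-to-point prefix
print(p31.num_addresses, list(p31.hosts()))
# 2 [IPv4Address('192.0.2.0'), IPv4Address('192.0.2.1')]
# -> both addresses usable: nothing wasted
```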

Among the things I have heard so far: MAC addresses are unique,

Except when they're not. The 802.3 standard is ambiguous about whether
a MAC address should be unique per interface or unique per host. Sun
(now Oracle) took the latter view and assigned the same MAC address to
every Ethernet port on a particular host, leading to hideously confused
Ethernet switches.

The ambiguity even creeps into Linux. Unless the behavior is
overridden with a sysctl, Linux will happily answer an ARP request on
eth0 for an IP address that lives on eth1.
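
The knobs in question are the arp_ignore/arp_announce sysctls; a minimal
sketch (Linux only) of where they live:

```python
# Read the sysctls that govern "ARP flux". arp_ignore=0 (the default)
# answers on any interface; arp_ignore=1 replies only when the target
# address is configured on the interface the request arrived on.
from pathlib import Path

def read_sysctl(name: str) -> str:
    # sysctl names map onto /proc/sys with dots replaced by slashes
    return Path("/proc/sys", *name.split(".")).read_text().strip()

for knob in ("net.ipv4.conf.all.arp_ignore",
             "net.ipv4.conf.all.arp_announce"):
    print(knob, "=", read_sysctl(knob))
```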

IP fragments should be blocked for security reasons,

Not a smart move, IMO. In a stateful firewall (e.g. NAT) let the
firewall reassemble the packets. In a stateless firewall, block the
first fragment only, and only if it's too short for whatever filtering
you intend to apply. Any first fragment that's not an attack will be
at least a few hundred bytes long.
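
A sketch of that test, in Python for readability (a real filter would do
this in the forwarding path, and the 256-byte threshold is just an
assumption standing in for "a few hundred bytes"):

```python
import struct

MIN_FIRST_FRAGMENT = 256  # bytes; assumed threshold

def drop_first_fragment(ipv4_packet: bytes) -> bool:
    """True if this is a first fragment too short to filter sensibly."""
    # Flags and fragment offset share bytes 6-7 of the IPv4 header.
    flags_frag = struct.unpack("!H", ipv4_packet[6:8])[0]
    more_fragments = bool(flags_frag & 0x2000)   # MF bit
    fragment_offset = flags_frag & 0x1FFF        # in units of 8 bytes
    is_first_fragment = more_fragments and fragment_offset == 0
    return is_first_fragment and len(ipv4_packet) < MIN_FIRST_FRAGMENT
```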

Also, pity the fool who blocks ICMP because he breaks TCP at the same
time. Path MTU discovery requires ICMP destination unreachable
messages to function. TCP will screech to a halt every time it
attempts to send a packet larger than the path MTU until the host
receives the ICMP notification.

and the OSI model
only has 7 layers to worry about. All theoretically correct. All
wrong.

Not exactly. The OSI layers exhibit a basically correct understanding
of packet networks. They just don't stack as neatly as the authors
expected. In particular, we keep finding excuses to stack additional
layer 2s and 3s on top of underlying layer 2s and 3s. We give them
names like "MPLS" and "VPN."

Regards,
Bill Herrin

In the real world you often assign a /32 to a loopback address on each
router and make all of the serial interfaces borrow that address ("ip
unnumbered" in Cisco parlance), which wastes no addresses.

Why would you want to waste 79228162514264337593543950336 addresses on a loopback?
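
(That being 2^96: the host space left behind an IPv6 /32. Easy to check:)

```python
import ipaddress
print(2**96)                                                # 79228162514264337593543950336
print(ipaddress.ip_network("2001:db8::/32").num_addresses)  # same value: 2**(128-32)
```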

More seriously, why does this discussion only briefly mention IPv6? Every
client comes with it (aggressively) enabled -- it is there despite the
fat/happy parts of the networking community sitting on their legacy
space and laughing at Asia.

I've had, as mentioned earlier, a "Cisco graduate" as an intern and then
a colleague for a year now. He's a fast learner, and that was needed. No
v6. Not much MPLS. No IS-IS. Barely eBGP. No iBGP, especially not in
conjunction with a link-state IGP. Lots of RIP, Flame Delay and EIGRP.

There are two problems:

* The academic community is either outdated or married to a
  vendor-specific course -- and that marriage is not very
  academic, IMNSHO. Academia must be vendor-agnostic.

* The vendor courses are too enterprisey, and an outdated
  enterprise at that. There is no course in "running a
  sensible chunk of the Internet".

And this in a world where the largest innovation of the last 5 years is
abstraction (as in virtualisation and, to some extent, SDN). Not in
protocols. It should be reasonably easy to keep up.

I currently use a Comer book. I've also used a Tanenbaum book in the
past, but not recently. My favorite book, when I've used it, was Radia
Perlman's.

Increasingly I'm seeing a trend away from actually relying on books, or
even requiring them to be read anymore. This is a trend with both
faculty and students. I frequently get asked if the book is required,
even when the course page clearly says it is. Students, and often
faculty, rely too heavily on Wikipedia pages, which I've found myself
going to update since they lead to wrong assumptions and answers on
questions I've assigned.

I like to work classic or timely research papers into assignments, as
many faculty do, so that students are at least forced to look at
something other than vendor white papers and blog posts found in search
engines.

John

Then again, no course on networking can be complete without a
presentation on the ways things are not being used as originally
designed, because someone had an idea of how to do it differently,
for better or worse. (A la the contradiction in terms that is
"HTTP streaming". Routers two continents away crashing as a result
of eBGP packets for inter-provider VPNs is another good one.)
Nor can you call a course complete without a case study of where
things do not work as intended, and either a very large pFail results
or a more complicated hack is needed as a workaround. This is
especially relevant with the interoperability concerns when multiple
vendors are involved.

Those sorts of things you likewise do not often find in textbooks or
white papers, and probably not on Wikipedia either, but they are at the
core of what engineering and operations have to contend with day by day.
(Too often people conflate "engineering" with "architecture," and while
they are very much related, they are not one and the same.)

-Wayne