Serious Juniper Hardware EoL Announcements

When I last got pricing on the MX10003 in fall 2021, I was asked if I wanted pricing on something with exclusively 100GbE interfaces or with 10GbE capability.

I got pricing for both options.

Putting SFP+ 10GbE ports in a router at that total chassis + RE + linecard + support-contract price is an extremely costly proposition on a dollar-per-port basis.

I would recommend that anyone who thinks they need them look at ways to put the 10GbE ports in some other device and attach that to the router.

Just to put a little more flesh on that bone (having spent about a
decade going to ICANN conferences):

Although IANA is organized under ICANN, address allocation is
generally IANA's role: IANA assigns address blocks to the RIRs for
distribution.

It's a useful distinction because IANA and the RIRs act fairly
independently of the umbrella ICANN org unless there's some very
specific reason for, e.g., the ICANN board to intervene, such as a
notion that the allocation of these addresses would (literally)
threaten the stability and security of the internet, or similar.

Offhand (and following comments by people of competent jurisdiction) I
can't see why IANA or the RIRs would resist this idea in
principle. It's just more stock in trade for them, particularly the
RIRs.

Other than that they (IANA, the RIRs) wouldn't do this unless the IETF
issued a formal redeclaration of the use of these addresses.

Anyhow, that's roughly how the governance works in practice and has
for over 20 years.

So, I think the first major move would have to be the IETF issuing one
or more RFCs redefining the use of these addresses, which would then
put them under the jurisdiction of IANA, which could then issue them
(probably piecewise) to the RIRs.

It is disappointing that we are getting faceplates that are
exclusively cloud-optimised, while service providers are scratching
their heads going 'how can I use this?'. But it may be that there
simply isn't a business case to build models with different faceplates
or to design yet another set of linecards.
Of course the fab doesn't charge different prices for different Trio
variants; from a cost POV the chips always cost roughly the same,
whether it's an MX80 or an MX304 (except the MX304 has up to three of
them). So there isn't any real reason why you couldn't massively
underutilize the chips and deliver faceplates that are optimised for
different use cases. However, JNPR seems to see the ACX more in this
role.

Now VLAN aggregation isn't without its problems:
   a) the termination router must be able to do QoS under a shaper:
you need to shape every VLAN to the access rate and then apply QoS
inside that shaper. There are a lot of problems here, and even if the
termination router does support proper HQoS, it may not support burst
values small enough for the access device to handle (see the first
sketch after this list).
   b) you lose link-state information at the termination, and you need
to either accept slower convergence (e.g. no BGP external fast
fallover) or look into CFM or BFD, where BFD would require active
participation from the customer, which is usually not a reasonable ask
(see the second sketch after this list).
   c) your pseudowire products will become worse: you may have MAC
learning (you might be able to turn it off) limiting MAC scale, the
aggregation will likely eat a bunch of frames which previously were
passed through, and you may be able to fix that with L2PT (rewrite the
MAC on L2 ingress, rewrite it back on L2 egress). And some things
might become outright impossible; for example, the Paradise chipset
will drop IS-IS packets with VLAN headers on the floor (it considers
802.3 framing plus a VLAN tag technically impossible), so if your
termination is Paradise, your pseudowire customers can't run IS-IS.
   d) most L2 devices have exceedingly small buffers, and this
solution implies many-to-one traffic flows, so you're going to have to
understand how much buffering you need and how many ports you can
attach there
   e) provisioning and monitoring complexity: you need a model where
you decouple the termination and access ports. If you don't already do
this, it can be quite complicated to add; there are a number of
complexities, like how to associate the two ports for data collection
and rendering, and where and how to draw VLANs
   f) if you dual-attach the L2 aggregation you can create loops for
both simple and complex reasons. The termination may not have a
per-VLAN MAC filter, so adding one pseudowire VLAN may disable MAC
filtering for the whole interface. And if you run default MAC/ARP
timers (a misconfiguration: the defaults are always wrong, as the ARP
timeout needs to be lower than the MAC aging time, but this is
universally not understood), the primary PE may send packets to an L3
address which is still in ARP but no longer in the MAC table (host
down?); the backup PE will receive them due to the lack of MAC
filtering and will forward them to the primary PE, which will forward
them back to L2 (see the third sketch after this list).
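
On a): a minimal Junos-style sketch of per-VLAN shaping with QoS
inside the shaper, assuming an MX-class termination router. The
interface, VLAN ID, rate and the SM-CUST scheduler-map are
illustrative, not from this thread:

  # Shape the customer VLAN to its 100M access rate, then run the
  # customer's scheduler-map inside that shaper (HQoS). burst-size is
  # the knob that may not go low enough for the access device.
  set interfaces xe-0/0/0 hierarchical-scheduler
  set interfaces xe-0/0/0 flexible-vlan-tagging
  set interfaces xe-0/0/0 unit 100 vlan-id 100
  set class-of-service traffic-control-profiles TCP-CUST-100M shaping-rate 100m burst-size 16000
  set class-of-service traffic-control-profiles TCP-CUST-100M scheduler-map SM-CUST
  set class-of-service interfaces xe-0/0/0 unit 100 output-traffic-control-profile TCP-CUST-100M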
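
On b): if the customer will cooperate, a minimal sketch of using BFD
to substitute for the lost link state on a BGP session; the group
name, neighbor address and timers are illustrative, and the customer
must configure matching BFD on their side:

  # Detect path failure in ~900ms (300ms x 3) instead of waiting for
  # the BGP hold timer; BFD only works if both ends participate.
  set protocols bgp group CUSTOMERS neighbor 192.0.2.2 bfd-liveness-detection minimum-interval 300
  set protocols bgp group CUSTOMERS neighbor 192.0.2.2 bfd-liveness-detection multiplier 3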
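
On f): a minimal sketch of the timer relationship, again assuming a
Junos box; the bridge-domain name and the values are illustrative. The
point is simply that an ARP entry must expire before the MAC entry it
resolves to:

  # ARP must age out before the MAC entry, otherwise a stale ARP entry
  # can trigger unknown-unicast flooding toward the backup PE.
  set system arp aging-timer 5                                        # minutes, i.e. 300s
  set bridge-domains CUST-BD bridge-options mac-table-aging-time 600  # seconds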

This was just what immediately occurred to me; I'm sure I could
remember more issues if I spent a bit more time thinking about it. L2
is hard, be it L2 LAN or L2 aggregation. And it's almost invariably
incorrectly configured, as L2 stuff usually appears to work
out-of-the-box, but is full of bees.

Now the common solution vendors offer to address many of these
problems is 'satellite', where the vendor applies HW and SW
workarounds to reduce the problems caused by VLAN aggregation.
Unfortunately the satellite story is regressing as well: Cisco doesn't
have it for the Cisco 8000, and Juniper wants to kill Fusion.
Nokia and Huawei still seem to have love for provider faceplates.

Thank you for calling out the HMC point. I think that alone is reason enough to retire the platforms that were built around it.

The number of issues related to the HMC memory drivers was out of hand early on, and lingered long past the growing-pains phase.

I'm sure that, in the big picture, supply chain / manufacturing constraints accelerated this, but part of me is happy to see the HMC-based stuff go.

I can't pinpoint HMC as a bad solution. Yes, we've had our share of
HMC issues, but on JNPR and some other vendors we've also had to
replace all linecards due to memory issues before stacked DRAMs were a
thing; memories are notoriously fragile.
I can pinpoint HMC as a huge risk now that it has no manufacturer :).

The memory issues are exacerbated by needing to reload the whole
linecard when one memory has issues. JNPR has now delivered fixes in
newer images where you can reduce both the collateral damage and the
downtime by reloading individual PFEs (and their connected memories).

I do think HMC was a solid engineering choice, and I am a bit annoyed
that it lost to HBM instead of co-existing with slightly different
optimization points. But that doesn't excuse the situation.

I don't want to glorify the idea of converting multicast space by commenting on it; however, you're wrong in several particulars about the relationships around IANA.

Most notable here is that, with respect to which IP addresses can be handed out to whom, and for what purpose, IANA is at the service of the IETF. At the end of the day the IP address registries are not that different from any of the other registries that IANA maintains on the IETF's behalf.

hope this helps,

Doug (Former IANA GM)

I just have one question:

Why are we discussing IP allocations and IANA in an email thread about EoL Juniper gear?

At the rate we are going, perhaps running Apple's M-chips for routers (to the extent possible) is not entirely unthinkable :-).

Mark.

Something about having more time to fix other softer issues we've long ignored, since we won't be busy installing any hardware :-).

Mark.

So there have been some developments re: this thread.

As it pertains to both the MX204 and the MX10003, Juniper have made the following amendments:

  • EoS = 2023.
  • End of new features = 2024.
  • End of bug fixes = 2028.
  • End of critical features = 2028.
  • EoL = 2029.

FWIW.

Mark.