Flexible OTN / fractional 100GbE

Hi NANOG!

I'm looking for a muxponder that would take OTU4s on the network side
and provide 10/40/100GbE on the client side, with some kind of
oversubscription, so as to provide a "fractional 100GbE", e.g. starting
with a 30-60Gbps commit that could be upgraded to 100GbE when network
capacity is available.
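To make the slot math concrete, here's a rough sketch of how such a fractional commit could map onto ODUflex tributary slots, using the nominal 1.25G slot granularity (the actual per-slot rates in G.709 are slightly higher, so treat these numbers as approximate):

```python
import math

ODU4_TRIB_SLOTS = 80      # an ODU4 carries 80 x ~1.25G tributary slots
NOMINAL_SLOT_GBPS = 1.25  # nominal granularity; real TS rates are a bit higher (G.709)

def oduflex_slots(commit_gbps: float) -> int:
    """Tributary slots an ODUflex would need for a given committed rate."""
    return math.ceil(commit_gbps / NOMINAL_SLOT_GBPS)

for commit in (30, 60, 100):
    slots = oduflex_slots(commit)
    print(f"{commit} Gbps commit -> {slots}/{ODU4_TRIB_SLOTS} slots, "
          f"{ODU4_TRIB_SLOTS - slots} slots left for other services")
```

So a 60Gbps commit would occupy 48 of the 80 slots, leaving the rest sellable, and the upgrade to full 100GbE is just growing the ODUflex to all 80 slots.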

Is that feasible at a decent price?

I've read that Broadcom's StrataDNX (Qumran / Jericho) chips now have
OTN support in addition to Ethernet; does any vendor leverage this,
preferably with OCP gear?

Thanks !

Hi,

IP Infusion’s OcNOS is geared towards OCP gear, and while it’s not exactly what you’re looking for, they recently published[1] a note pertaining to IPoDWDM, so they may already have done something along the lines of what you’re asking about, or have plans to.

[1] https://www.ipinfusion.com/news-events/ip-infusion-qualifies-inphi-colorz-in-its-latest-release-of-the-ocnos-network-operating-system/

Hello Jason,

Thanks for your answer.

IP Infusion’s OcNOS is geared towards OCP gear, and while it’s not
exactly what you’re looking for,

I didn't see any mention of OTN in their software's specs.

OTN is important for this project because we'd need many of its features
in terms of FEC, monitoring and low jitter.

they recently published[1] a note
pertaining to IPoDWDM, so they may already have done something
along the lines of what you’re asking about, or have plans to.

We're not really working on the optical side of things; it's really just
about replacing Ethernet wherever it's relevant. Regional to long-haul
P2P links, for that matter.

Best regards,

I'm looking for a muxponder that would take OTU4s on the network side
and provide 10/40/100GbE on the client side, with some kind of
oversubscription, so as to provide a "fractional 100GbE", e.g. starting
with a 30-60Gbps commit that could be upgraded to 100GbE when network
capacity is available.

When I was looking at something like this, this time last year, I got as
far as PacketLight:

A big part of my use case was reselling the spare capacity, so I really
didn't want anything learning MACs at either end. :)

I've read that Broadcom's StrataDNX (Qumran / Jericho) chips now have
OTN support in addition to Ethernet; does any vendor leverage this,
preferably with OCP gear?

Unsure about those product lines, but I believe the Facebook (Adva?)
"Voyager" fits the 'open' bill. Here's a PDF specific to running Cumulus
on it:

  Ethernet Switching for AI and the Cloud | NVIDIA

HTH,

Hi Tom,

A big part of my use case was reselling the spare capacity, so I really
didn't want anything learning MACs at either end. :)

I also tried PacketLight, but they offer no support for ODUflex and
didn't implement latency/jitter monitoring. Also, configuration through
their WebUI is mandatory, so automation is impossible. Kind of a
dealbreaker nowadays.

Unsure about those product lines, but I believe the Facebook (Adva?)
"Voyager" fits the 'open' bill. Here's a PDF specific to running Cumulus
on it:

I don't see what I need there. I just want to plug an OTU4 uplink into a
standard QSFP28 port, no fancy photonics required, and benefit from
in-band monitoring and management, FEC and traffic isolation.

It would ideally be represented in ONL or Cumulus as a new type of
interface, with the OSC and every muxponded ODU as sub-interfaces, and a
new type of bridge to patch them to other ODUs or map them to Ethernet
service ports… every sub-interface exposing SNMP OIDs for load, FEC
reporting and latency measurement.
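As a purely illustrative model (all interface names, ODU types and counter fields here are hypothetical; nothing like this exists in ONL or Cumulus today), the layout I'm picturing would look something like:

```python
# Hypothetical model of the proposed interface tree: one "otn" parent per
# OTU4 uplink, with the OSC and each muxponded ODU as sub-interfaces,
# each carrying its own counters (load, FEC stats, latency) that could
# then be exposed via SNMP. Names like "otn0.1" are illustrative only.
from dataclasses import dataclass, field

@dataclass
class OtnSubIf:
    name: str            # e.g. "otn0.osc", "otn0.1"
    kind: str            # "osc", "odu2e", "oduflex", ...
    rate_gbps: float
    counters: dict = field(default_factory=lambda: {
        "load_bps": 0, "fec_corrected": 0,
        "fec_uncorrected": 0, "latency_ns": 0})

uplink = [
    OtnSubIf("otn0.osc", "osc", 0.1),
    OtnSubIf("otn0.1", "odu2e", 10.0),     # committed 10GbE client, static map
    OtnSubIf("otn0.2", "oduflex", 48.75),  # fractional service, 39 x 1.25G slots
]
print([(i.name, i.kind, i.rate_gbps) for i in uplink])
```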

I hope that's a bit clearer now?

Best regards,

Very clear. If you do find this veritable moon-on-a-stick device, please
do let me know.

Asking PacketLight to fix their software might not be a bad start, or
perhaps asking their competition if they can do better (see Infinera,
Coriant, Adva, Ciena, etc.)

Regards,

Tom,

Very clear. If you do find this veritable moon-on-a-stick device,
please do let me know.

I will. But I don't think it's for lack of devices: every OCP switch
based on Broadcom's StrataDNX ASICs seems technically capable of
handling OTN; they even allow for a 250ns port-to-port fastpath for OTN
bridges, if I read the datasheets right. It's the software that's missing.

Asking PacketLight to fix their software might not be a bad start,

Well, I learned today that these boxes may be manufactured by Nokia, who
also sell them under their own brand as "WaveLite Metro 200", with
different software. I'm waiting for the manuals to assess their feature set.

or perhaps asking their competition if they can do better (see
Infinera, Coriant, Adva, Ciena, etc.)

I'm already running Coriants and… well… $8k per 100G port is simply
unacceptable. That's the price of a 6×100G + 48×10G OCP switch with the
right ASIC. Also, their software is a mess.

I mean, it's 2019, we can't build networks like we did 20 years ago.
Automation is mandatory. And with network engineers in such short supply
on the market, running a Linux-based network would let us recruit and
train sysadmins instead. Running their Windows-only TNMS really hurts my
teams' morale.

ECI seems to have a nice lineup with Apollo (9904x) and Neptune ranges,
I'm waiting for pricing.

But if I had to choose, running Cumulus Linux (or equivalent) with
OTN+MEF extensions (also SRv6 to build a terastream-like network) on
broadcom-based OCP switches would be far more efficient and versatile.

Best regards,

Correct me if I'm wrong, but wouldn't "fractional" Ethernet services require packet switching, with the corresponding jitter, anyway? 1:1 switching would keep the latency low, but you can't e.g. offer 60Gbps on a 100GbE handoff and just put that straight onto OTN. You'd have to either packet-switch it (and queue, since the incoming line rate is faster than the outgoing) onto a FlexODU using GFP-F at 60Gbps (I'm not sure anything even supports this) or just take the 100GbE straight onto OTN at 100Gbps.
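As a back-of-the-envelope illustration of why queuing is unavoidable there (the burst length is just an example):

```python
# A 100GbE client bursting at line rate into a 60 Gbps FlexODU: the queue
# grows at the rate difference, so some buffering (and hence jitter) is
# unavoidable for any fractional service.
IN_GBPS, OUT_GBPS = 100.0, 60.0
burst_us = 100.0  # a 100 microsecond line-rate burst

backlog_bits = (IN_GBPS - OUT_GBPS) * 1e9 * burst_us * 1e-6
drain_us = backlog_bits / (OUT_GBPS * 1e9) * 1e6
print(f"backlog after burst: {backlog_bits / 8 / 1024:.0f} KiB, "
      f"drain time: {drain_us:.1f} us")
```

Even a short line-rate burst leaves roughly half a megabyte of backlog that takes tens of microseconds to drain at the FlexODU rate, which is exactly the jitter OTN is supposed to avoid.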

Now, you could obviously put two native 40GbE services onto a single OTU4 (along with a couple of 10Gbps services, or a single 20Gbps packet service) without packet switching, by just framing up both Ethernet services using your choice of GFP-F or GFP-T, and it could be done without queuing since the incoming and outgoing line rates are the same.
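If I have the G.709 slot costs right (an ODU3 carrying a 40G client occupies 31 of the 80 ODU4 tributary slots, an ODU2e carrying 10GbE occupies 8), that aggregation budgets out like this:

```python
# Slot-budget sanity check for the aggregation above, using the standard
# G.709 tributary-slot costs within an ODU4 (80 x 1.25G slots):
# ODU3 (40G client) = 31 slots, ODU2e (10GbE) = 8 slots.
ODU4_SLOTS = 80
SLOT_COST = {"odu3": 31, "odu2e": 8}

services = ["odu3", "odu3", "odu2e", "odu2e"]  # 2 x 40GbE + 2 x 10GbE
used = sum(SLOT_COST[s] for s in services)
print(f"{used}/{ODU4_SLOTS} slots used, {ODU4_SLOTS - used} spare")
```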

The former would probably still end up being better than straight switched Ethernet, but it won't be the same as a straight re-framed Ethernet signal on OTN.

FEC and enhanced monitoring are always nice, though.

https://www.ekinops.com/products/flexrate-modules/aggregation-and-encryption-modules/pm-100g-agg

–Pete

I'm not sure I see how this particular product resolves the concern I expressed.

Yes, it will aggregate native subrate services at 10Gbps or 40Gbps native line rate up onto 100G OTN (on OTU4), and it can (presumably) do that without packet switching using either 1:1 frame mapping using GFP-F or straight cut-through stream mapping with GFP-T or similar.

What it doesn't appear to do, and where my concern lies with the OP's desires surrounding OTN, is take a "fractional" 100Gb service and aggregate it up with other services onto an OTU4. "Fractional" here means that the committed/available throughput is something less than the native line rate which is what OP appeared to be asking for ("'fractional 100GbE' e.g. starting with 30-60Gbps commit").

The only way I know to do this is to packet switch, as either Ethernet or GFP-F OTN traffic, the subscriber data onto a FlexODU at the desired subscriber rate within the OTU4. Other traffic could then be placed within the same OTU4 using the normal OTN TDM mechanisms including subrate (e.g. 10Gbps) traffic that might NOT require packet switching since it could be re-framed/re-transmitted onto the OTU4 at its native line rate.

I don't see any reason you can't do this, though I know of no equipment that will off hand, and it will incur some latency and jitter due to the packet switching which OP seemed to want to avoid. You'd still get the benefits of FEC, enhanced monitoring, etc. from the OTN side of things, and the latency and jitter should be generally better than switching it onto a higher-rate native Ethernet even if the high-rate side is not oversubscribed. It should certainly be no worse.

The crux of this is what happens when you have a subscriber service whose native line rate exceeds the provisioned OTN throughput, which is exactly the scenario OP alluded to.

Brandon,

The only way I know to do this is to packet switch, as either Ethernet or GFP-F OTN traffic, the subscriber data onto a FlexODU at the desired subscriber rate within the OTU4. Other traffic could then be placed within the same OTU4 using the normal OTN TDM mechanisms including subrate (e.g. 10Gbps) traffic that might NOT require packet switching since it could be re-framed/re-transmitted onto the OTU4 at its native line rate.

You're spot on!

What I have in mind is actually to combine line-rate ODUs with a static mapping and pipe the uncommitted capacity to a packet switch.

Statically committed services will be muxponded in the fastpath, hence no jitter and lower latency, while the fractional services use the remaining capacity, mostly for low-priority IP traffic.
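As an illustrative slot budget for that hybrid split (customer names are made up; slot costs per G.709, nominal slot rate approximate):

```python
# Statically mapped CBR ODUs take fixed tributary slots in the fastpath;
# whatever slots remain become one ODUflex feeding the packet switch for
# fractional / low-priority traffic. ODU2e = 8 slots, ODU3 = 31 slots.
ODU4_SLOTS, NOMINAL_SLOT_GBPS = 80, 1.25
committed = {
    "customer-a (10GbE, ODU2e)": 8,
    "customer-b (40GbE, ODU3)": 31,
}

used = sum(committed.values())
spare = ODU4_SLOTS - used
print(f"committed: {used} slots; packet-switched ODUflex: {spare} slots "
      f"(~{spare * NOMINAL_SLOT_GBPS:.2f} Gbps nominal)")
```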

Now, I was assuming ODUflex in CBR mode would allow fractional services without packet switching, but mapping an Ethernet service onto it would require some equivalent glue logic I guess, specifically for this case:

The crux of this is what happens when you have a subscriber service whose native line rate exceeds the provisioned OTN throughput, which is exactly the scenario OP alluded to.

Yup. Should it hard-drop? Buffer? Both are unthinkable in OTN terms (is that a cultural thing?). It's what packet networks are made for. And that's why an alien device, with support for Ethernet, OTN and programmable pipelines, could bridge the gap, allowing for a more efficient use of optical bandwidth.

Best regards,

I see what you're getting at. It sounds like you want to light up a 100Gb wave, put some committed "dedicated wavelength" services on it, then take whatever's left and hand it off on 100GbE for best-effort/general Internet traffic.

Not a bad idea. I guess the idea would be to dynamically provision a FlexODU or something for the remaining bandwidth and switch the Ethernet traffic onto it encapsulated in GFP-F. I see no reason why this isn't possible, but I know of nothing that would do it.

So someone needs to make a box with a couple QSFP+ (breakout capable), a couple SFP+, a QSFP28 UNI for the 40/100GbE with subrate commit, and a QSFP28 NNI to speak OTU4 into the network that supports all that.

Let me know if you find it! :)

Hi Jerome,

When you buy the kind of services that end up being delivered on OTN, you expect to have capacity that is dedicated to you, and only to you, and if you don't "use" it, nobody else will. And you agree to the constraints that come with that (not protected, or protection is an extra paid option).

Then comes the fact that Ethernet is *NEVER* "fractional". It is either 0 (ZERO) or line rate. It's the amount of time ZERO is present over several microseconds (where "several" often means "several million") that gives, by averaging, the "sub-rate" bandwidth. So no, hard-drop or buffering on OTN are not just "cultural issues": their absence is technically part of the OTN promise.
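To put numbers on that averaging (a quick illustration, not anything vendor-specific):

```python
# An Ethernet port transmits at line rate or not at all; a "60 Gbps"
# service on a 100GbE handoff is just the port being idle 40% of the
# time when averaged over a long enough window. Each individual frame
# still goes out at full line rate.
LINE_GBPS = 100.0
avg_gbps = 60.0

duty_cycle = avg_gbps / LINE_GBPS
frame_bits = 1500 * 8  # one 1500-byte frame
tx_time_ns = frame_bits / (LINE_GBPS * 1e9) * 1e9
print(f"duty cycle: {duty_cycle:.0%}; each 1500B frame still takes "
      f"{tx_time_ns:.0f} ns on the wire at full line rate")
```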

If you are willing to share unused bandwidth, then MPLS-based services are the way to go, and you have that choice in the vast majority of cases. You lose the hard guarantee of bandwidth availability, and you usually get some amount of jitter.

Hi Radu,

There might be some misunderstanding here. I don't care about what's sold or done *today*; I'm thinking ahead to what newer hardware and software will let us build in the (near) future.

That's probably something most of us lack: the freedom to look ahead, rather than being swamped in yesterday's concerns.

In that specific case, I'm the middleman buying fat pipes and selling slices of them. Should I stick to good ol' L2VPN over MPLS-TP? Or does newer gear let me do things in a slightly better way? Can SDN let me fine-tune my network to fit functional needs, or is technology still a barrier to efficient use cases?

As an optimist, I choose the latter, and love to engage in such debates with my peers, as long as I can avoid backward-looking "we've always done it that way" assertions.

Best regards,

From: NANOG <nanog-bounces@nanog.org> On Behalf Of Brandon Martin
Sent: Thursday, May 30, 2019 9:16 PM

I see what you're getting at. It sounds like you want to light up a 100Gb
wave, put some committed "dedicated wavelength" services on it, then take
whatever's left and hand it off on 100GbE for best-effort/general Internet
traffic.

There have got to be vendors out there with an optical chassis offering a
combination of OTN and Ethernet cards on the revenue side of the chassis.
Look at Transmode/Infinera maybe?

Otherwise, I guess it doesn't matter whether the Ethernet cards are part
of the optical chassis or a standalone PE-on-a-stick.
If a customer wants a full OTN-step rate, plug them into the optical
chassis; if a customer needs a fractional rate, plug them into (Broadcom)
PEs hanging off the optical switch.

On the "looking ahead" notion, is it really impossible to build low-latency
packet-switched networks?
As you know, selling full-rate L1 circuits will render your core mostly
empty (...all that BW that could be monetized).
  
adam

Hi Adam,

There have got to be vendors out there with an optical chassis offering a
combination of OTN and Ethernet cards on the revenue side of the chassis.
Look at Transmode/Infinera maybe?

For now, the only vendor that seems to fit the bill is ECI, with their Apollo (and maybe Neptune) lines. But it's far more expensive and less dense than what a StrataDNX-based OCP switch could do.

Most OTN vendors I got feedback from insist on using their "point and click" NMS, which is a PITA in terms of productivity and forbids automation. That alone bars them from my wannabe-future-proof network.

On the "looking ahead" notion, is it really impossible to build low-latency
packet-switched networks?
As you know, selling full-rate L1 circuits will render your core mostly
empty (...all that BW that could be monetized).

Packet switching at flexible bandwidth requires buffering, while OTN with CBR-only channels maps frames onto the uplink's bitstream in real time.

This allows for a 250ns port-to-port latency, instead of 450ns for the best-of-breed packet switches and up to 4µs for lookup-based switching or routing.

Also, there won't be any jitter on muxponded circuits, whereas buffering destroys deterministic latency.
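A rough illustration of the jitter floor for any packet-switched hop (numbers are back-of-the-envelope, not from any datasheet):

```python
# An arriving frame may find one maximum-size frame already serializing
# ahead of it on the egress port. At 100G, that alone exceeds the ~250ns
# OTN fastpath quoted above, and it varies per packet, i.e. it is jitter.
LINE_GBPS = 100.0
for mtu in (1500, 9000):  # bytes; 9000 = jumbo frame
    wait_ns = mtu * 8 / (LINE_GBPS * 1e9) * 1e9
    print(f"{mtu}B frame ahead of you -> up to {wait_ns:.0f} ns added delay")
```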

And you also got right to the point: most customers will never fill their pipes, so there's extra gain to be had from that hybrid box.

It's my understanding that some of the largest networks are already working on this, but it's "classified material" for now. I might as well postpone my deployment and wait for their R&D to be contributed to the OCP project, but I'd rather contribute instead.

Best regards,