I've been working with 40 gig for a few years. When I first ordered a
switch, one of the first publicly available with full 40 gig, I was
appalled that I was going to have to use 4 pairs of multimode fiber for each
of my connections. I had planned on using single mode because I can do that
with 1 pair.
Even today, we're still looking at MM fiber instead of SM, even with the
horrendous limitations and cost issues of MM. For instance, if you need to
go 301 meters or more, you've got to go OM4, which is very expensive. You
have to lay 4 times the number of pairs as SM, and when we move to 100G
it'll be even worse, because they're still doing things in 6, 12, etc. SM
can do 100G easily, up to 1 km with the lower-grade fiber, so in the SM
100G world you'd be installing 1/12 the strands you would in multimode. I
just can't figure out where this makes sense.
I am aware that single mode has more expensive optics, and I know how much
they cost from when I first looked at this, but if SM were the standard,
that price would drop enormously.
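The strand math behind that complaint can be sketched out. This is a rough back-of-the-envelope comparison, assuming the usual parallel-optic lane counts (40GBASE-SR4 over 4 fiber pairs, 100GBASE-SR10 over a 24-fiber / 12-pair MTP trunk) versus serial/WDM single-mode optics over one pair; the function and dictionary names are mine, not from any standard:

```python
# Fiber pairs per link, assuming typical parallel vs. WDM optics.
PAIRS_PER_LINK = {
    ("40G", "MM"): 4,    # 40GBASE-SR4: 4 tx + 4 rx fibers
    ("40G", "SM"): 1,    # 40GBASE-LR4: WDM over one pair
    ("100G", "MM"): 12,  # 100GBASE-SR10: 24-fiber MTP trunk
    ("100G", "SM"): 1,   # 100GBASE-LR4: WDM over one pair
}

def trunk_pairs(speed, media, links):
    """Total fiber pairs needed for `links` connections."""
    return PAIRS_PER_LINK[(speed, media)] * links

for speed in ("40G", "100G"):
    mm = trunk_pairs(speed, "MM", 10)
    sm = trunk_pairs(speed, "SM", 10)
    print(f"{speed}: 10 links need {mm} MM pairs vs {sm} SM pairs "
          f"({mm // sm}x the strands)")
```

With those assumptions, 10 links of 100G SR10 need 120 pairs where single mode needs 10 -- the 1/12 figure above.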
Anyone know why the industry has their head stuck on MultiMode?
At 10G, MMF optics cost about 1/3 that of SMF (SR vs. LR).
We tend to keep things SMF, but within many older datacenters MMF is broadly available and does meet the needs at a lower cost.
There seems to be a shifting trend as well in UPC vs APC connectors.
I think much of this problem is clearly articulated here: http://xkcd.com/927/
Everyone's needs are a bit different.
My guess would be it's due to existing cable plants. I've worked at a
number of places that have tons of multimode fiber run everywhere. If
you can re-terminate and re-use it, even inefficiently, that often beats
the time and expense of running new fiber, especially at a place where
pulling cable may involve trade unions; it gets very expensive to pull
what could otherwise be a not-so-expensive cable.
Playing devil's advocate here...
Compared to the cost difference between, say, 40G SR-equivalent optics and 40G LR-equivalent optics, the cost of pulling, terminating, and testing new SMF between two relatively close points is pretty small. I say that with the following qualifier: that you have a usable path (conduit, innerduct, rack space for new termination bays, etc.) in place between points A and B. If it turns into a situation where you need to dig to lay in new conduit, that's a different matter altogether.
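That trade-off is easy to put in spreadsheet form. A minimal sketch, with entirely hypothetical placeholder prices (not quotes from any vendor -- substitute your own numbers):

```python
# Hypothetical unit costs -- placeholders only, adjust to real quotes.
SR_OPTIC = 400      # assumed 40G SR-class optic price (one per link end)
LR_OPTIC = 2500     # assumed 40G LR-class optic price
SMF_PULL = 3000     # assumed one-time cost to pull/terminate/test new SMF

def total_cost(optic_price, links, plant_cost=0):
    """Two optics per link, plus any one-time cable-plant work."""
    return 2 * optic_price * links + plant_cost

for links in (1, 5, 20):
    mm = total_cost(SR_OPTIC, links)            # reuse existing MMF plant
    sm = total_cost(LR_OPTIC, links, SMF_PULL)  # new SMF pull + LR optics
    print(f"{links:>2} links: MM ${mm}, SM ${sm}")
```

Under these made-up numbers the one-time SMF pull quickly becomes noise next to the per-link optic delta, which is the point being made above.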
The problem is markedly worse at 100G. MPO-24 is just evil, but the cost difference between 100G SR10, LR4, and ER4 optics is still ridiculous.
MM optics come with looser tolerances and are therefore easier to
produce. The wider core of the fiber and higher dispersion allowances
also mean that the fiber is easier to make. The fiber, though, is the
small end of this equation. The optics are the big one.
For those who are buying two or three optics a year, a $150 price
difference is no big deal. For those who buy two or three hundred
optics every other month, it really makes a difference, and those are
the ones driving MM development.
Money, really. MM optics and fiber are cheaper than SM. The standards
around SM optics are written to reach relatively long distances, so the
transmitters and receivers are more expensive and they use far more power.
That being said, I see MM in modern datacenters being used in-rack or at
very short distances, due to the reasoning you mentioned: having to run at
least 4 pairs for 40G, or 12 pairs in the case of 100GBase-SR10. I know
there are structured cabling solutions for handling the bundles, but it
sure seems like a pain. QSFP28 will bring 100G back down to 4 pairs of
fibers, at least for those who want to use MM. There was a significant
push by Google and others to come up with a shorter-reach 100G SM standard
(LR10), because people don't want to run 12-pair MTP cables around their
datacenter and LR4 wasn't a good fit.
We are pretty much all SM for anything 10G and above as a standard, but
have looked at 100GBase-SR10 for short-reach 100G interconnects due to the
significant reduction in cost and power compared to 100GBase-LR4.