The KB indicates that the problem is with the "LG TV WebOS 3.8 or above."
Doug
(not speaking for any employers, current or former)
Hi,
TL;DR: a combination of scale and incompetence means you can run out of 10/8
really quick.
Indeed. Thank you for providing a demonstration of my point.
I'd question the importance of having a console on a target in Singapore
be able to directly address a BMC controller in Phoenix (wait for it),
but I'm sure that's a mission requirement.
No, but the NOC that sits in between does need to access both. Sure, you can
use jumphosts, but now you're delaying troubleshooting of a potentially costly
outage.
But just in case you'd like to reconsider, can I interest you in NAT?
Like nutmeg, a little will add some spice to your recipe -- but too much
will cause nausea and hallucinations.
NAT'ing RFC1918 to other RFC1918 space inside the same datacenter, or even
company, is a nightmare. If you've ever been on call for any decently sized
network, you'll know that.
Let's just magic a rack controller to handle the NAT. We can just cram it
into the extra-dimensional space where the switches live.
And all in less than an hour's chin pulling.
We both know that this is
A. An operational nightmare, and
B. Simply not the way things work in the real world.
The people who designed most of the legacy networks I've ever worked on did
not plan for the networks to grow to the size they became. Just like we were never
going to run out of that 640k of memory, people thought they would never run out
of RFC1918 space. Until they did.
And when that James May moment arrives, people start looking at a quick fix
(i.e., let's use unannounced public space), rather than redesigning and
reimplementing networks that have been in use for a long long time.
TL;DR: in theory, I agree with you 100%. In practice, that stuff just doesn't
work.
Thanks,
Sabri
An embarrassing mistake. I'm not a computer and don't count from zero. It is, of course, at 172.18.7.12:2239 and not 11.
No, but the NOC that sits in between does need to access both. Sure, you can
A single NOC sitting in the middle of a single address space. I believe
I'm detecting an architectural paradigm on the order of "bouncy castle."
Tell me, do you also permit customer A's secondary DNS server to reach
out and touch customer B's tertiary MongoDB replica in some other AZ for
any particular reason? Or are these networks segregated in some
meaningful way -- a way which might, say, completely vacate the entire
point of having a completely de-conflicted 1918 address space?
use jumphosts, but now you're delaying troubleshooting of a potentially costly
outage.
Who's using jumphosts? I very deliberately employed one of my least
favorite networking "technologies" in order to give you direct
connections. I just had to break a different fundamental networking
principle to steal the bits from another header. No biggie. You won't
even miss the lack of ICMP or the squished MTU. Honest.
It's just "your" stuff anyway. The customers have all that delicious
10/8 to use. Imagine how nice troubleshooting that would be, where
anything that's 172.16/12 is "yours" and anything 10/8 is "theirs."
NAT'ing RFC1918 to other RFC1918 space inside the same datacenter, or even
company, is a nightmare. If you've ever been on call for any decently sized
network, you'll know that.
And that's different than NATing non-1918 addresses to a 1918 address
space how? Four bytes is four bytes, no? Or are 1918 addresses magic
when it comes to the mechanical process of address translation?
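To spell out just how un-magic it is, here's a minimal sketch (Python, prefixes
entirely made up) of a 1:1 prefix-to-prefix rewrite; nothing in the arithmetic
cares whether either side is 1918 space:

from ipaddress import IPv4Address, IPv4Network

# Made-up example prefixes: what the rack uses vs. what the rest of the company sees.
INSIDE = IPv4Network("10.20.5.0/24")
OUTSIDE = IPv4Network("172.31.200.0/24")

def remap(addr, src, dst):
    # Keep the host bits, swap the network bits -- same math for any pair of /24s.
    offset = int(IPv4Address(addr)) - int(src.network_address)
    return IPv4Address(int(dst.network_address) + offset)

print(remap("10.20.5.37", INSIDE, OUTSIDE))                  # 172.31.200.37
print(remap("8.8.8.8", IPv4Network("8.8.8.0/24"), INSIDE))   # 10.20.5.8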
As far as being on call and troubleshooting, I'd think that identically
configured rack-based networks would be ideal, no? In the context of
the rack, everything is very familiar. That 192.168.0.1 is always the
gateway for the rack hosts. That 192.168.3.254 is always the iSCSI
target on the SAN. (Or is it more correctly NAS, since any random PDU
in Walla Walla, WA can hit my disks in Perth via its unique address on a
machine which lives "not at all hypothetically" under the raised floor
or something. Maybe sitting in the 76-80th RU.)
Maybe I should investigate these "jumphosts" of which you speak, too.
They might have some advantages.
But I'm sure using your spreadsheets to look up everything all the time
works even better. Especially when you start having to slice your
networks thinner and thinner and renumber stuff. But I'm sure no
customer would ever say they needed more address space than was
initially allocated to them. It should be trivial to throw them
another /24 from elsewhere in the 10 space, get it all routed and
filtered and troubleshoot that on call. Much easier than handing them
their very own 10/8.
We both know that this is
A. An operational nightmare, and
B. Simply not the way things work in the real world.
Right. What would I know about the real world? What madman would ever
deploy a system in any way other than the flat, star pattern you suggest?
Who even approaches that scale and scope?
not plan for the networks to grow to the size they became. Just like we were never
going to run out of that 640k of memory, people thought they would never run out
of RFC1918 space. Until they did.
Yes. Whoever could have seen that coming. If only we had developed
mechanisms for extending the existing IPv4 address space. Maybe by
making multiple hosts share a single address by using some kind of "proxy"
or committing a horrible sin and stealing bits from a different layer.
Or perhaps we could even deploy a different protocol with an even larger
address space. It could be done in parallel, even. Well. I can dream,
can't I?
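And since I'm dreaming anyway, a rough sketch of what "in parallel" looks like
from a client's point of view (Python; the hostname is a placeholder, and a real
client would do proper Happy Eyeballs per RFC 8305 rather than this naive sort):

import socket

def connect_dual_stack(host, port, timeout=3.0):
    # Ask the resolver for both families, try IPv6 answers first, fall back to IPv4.
    infos = socket.getaddrinfo(host, port, type=socket.SOCK_STREAM)
    infos.sort(key=lambda ai: 0 if ai[0] == socket.AF_INET6 else 1)
    last_err = None
    for family, socktype, proto, _canon, sockaddr in infos:
        try:
            s = socket.socket(family, socktype, proto)
            s.settimeout(timeout)
            s.connect(sockaddr)
            return s
        except OSError as err:
            last_err = err
    raise last_err or OSError("no usable addresses for " + host)

# sock = connect_dual_stack("www.example.com", 443)   # placeholder host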
And when that James May moment arrives, people start looking at a quick fix
(i.e., let's use unannounced public space), rather than redesigning and
reimplementing networks that have been in use for a long long time.
A long long time indeed. Why, I remember back in the late 1990s when
the cloud wars started. They were saying Microsoft would have to divest
Azure. Barnes and Noble had just started selling MMX-optimized instances for
machine learning. The enormous web farms at Geocities were really
pushing the envelope of the possible when it came to high availability
concurrent web connections by leveraging CDNs. Very little has changed
since then. We've hardly had the opportunity to look at these networks,
let alone consider rebuilding them. Who has the time or opportunity?
That Cisco 2600 may be dusty, but it's been holding the fort all this
time.
TL;DR: in theory, I agree with you 100%. In practice, that stuff just doesn't
work.
Well thanks for sharing. I think we've all learned a lot.
And how would you define "fully implement v6", anyhow?
Case in point: I helped deploy v6 at my employer *last century*, and the
entire network was (last I knew) totally v6 ready, and large segments were
v6-only. Yet Google *still* says that only 80% or so of the traffic to them is
via v6.
The other 20% being end-user devices that aren't using v6 for one reason or
another - I'm pretty sure that a lot of those are because companies have told
the user to "turn off ipv6" to solve connection problems, and I know that a lot
of them are gaming consoles from a vendor that had a brief shining chance to
Get It Right on the last iteration(*) but failed to do so....
And when I retired, I had several clusters of file servers that weren't doing
IPv6 because a certain 3-letter vendor who *really* should have been more on
the ball didn't have v6 support in the relevant software.
Even more problematic: What do you do with a company that's fully v6-ready, but
still has several major interconnects to other companies that *aren't* ready,
and thus still using v4?
(*) The PS4 has ipv6 support in the OS - it will dhcpv6 and answer pings from
on and off subnet. However, they didn't include ipv6 support in the development
software toolkit, so nothing actually uses it. They appear to have fixed this in the PS5,
but that still hits the "other company isn't ready" issue.
Hi,
Financial incentives also work. Perhaps we can convince Mr. Biden to give a .5%
tax cut to corporations that fully implement v6. That will create some bonus
targets.
And how would you define "fully implement v6", anyhow?
Fair point. I'm sure a commission appointed by the appropriate legislators
will be happy to spend a few million debating that issue. Personally, I would
argue that a full implementation of IPv6 means that v4 could be phased out without
adverse effect on the production network.
But of course, how would we define "adverse effect on the production network"?
Even more problematic: What do you do with a company that's fully v6-ready, but
still has several major interconnects to other companies that *aren't* ready,
and thus still using v4?
I totally agree with everything you wrote. It proves the point that having v6-ready
technologies in "the network" does not mean a network, or even a company, is fully
v6-ready. Way too many stakeholders and outside dependencies.
To me, it means that "we", as in network professionals, should be ready to save
the day when company leaders finally realize they have no option and need v6 to
be implemented fast.
And secretly, I've been hoping for that moment. "Well, sir, the network has been
IPv6 ready for years, but the software groups and their leadership have so far
blatantly refused to update their code and support it".
I guess that I'll join you in retirement before that moment comes.
Thanks,
Sabri
You don't need to patronize me. I'm merely explaining the real life realities of
working in a large enterprise.
Patronize you? Ohh, heavens no! I fully intend to use your replies as
educational material. Why, I've passed them to colleagues of mine
already. It's not every day that an off-handed comment made in
frustration at the state of the industry is so immediately and
thoroughly expanded upon.
I think patronizing would look more like: assuming a position of great
authority and noteworthy insight on a list full of professionals by
vaguely citing a situation which they were once exposed to as some kind
of instructive lab of how the "real world" works -- perhaps going
further and summarizing each of the lessons into a one-line takeaway for
those who were either unable or unwilling to understand their point.
And the key takeaway here is: we can come up with the most efficient solutions,
but in the end it's all about budgets and stakeholder requirements.
Ahh, I see! Thanks. I'll put that with the rest of my notes.
I have personally seen the issue with streaming from a Samsung cell phone and the Disney+ app to a Google Chromecast and a regular not-smart TV.
Travis
There's no error code. Customer only sees the message "DRM license request failed" on LG TV WebOS 3.8 or above.
Translation “I use a broken GEOIP database that doesn’t handle IPv6 correctly. If you turn off IPv6 then the request will use IPv4 and it may work.”.
Mark
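Rather than telling customers to turn IPv6 off, it's easy to check which family
is actually failing. A minimal sketch (Python; the hostname below is a
placeholder, not Disney's real licensing endpoint):

import socket

def reachable(host, port, family, timeout=3.0):
    # Can we open a TCP connection to this host over the given address family?
    try:
        infos = socket.getaddrinfo(host, port, family=family, type=socket.SOCK_STREAM)
    except OSError:
        return False            # no A/AAAA record for this family
    for fam, stype, proto, _canon, sockaddr in infos:
        try:
            with socket.socket(fam, stype, proto) as s:
                s.settimeout(timeout)
                s.connect(sockaddr)
                return True
        except OSError:
            continue
    return False

host = "drm.example.net"        # placeholder
print("IPv4:", reachable(host, 443, socket.AF_INET))
print("IPv6:", reachable(host, 443, socket.AF_INET6))

If v4 connects and v6 doesn't, the v6 path (local or theirs) is the problem;
if both connect, blaming IPv6 for the DRM failure gets a lot harder.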
Presumably because you have reason to connect to the internet.
Presumably you intend that connection to the internet to be able to reach
a variety of third parties.
As such, there is some reasonable basis for the idea that how third parties
choose to manage their network impacts decisions you need to make about
your own network.
E.G. Facebook has decided to go almost entirely IPv6, yet they maintain an
IPv4 presence on their front-end in order to support users that are victims of
IPv4-only networks and devices. Facebook faces a cost in having to maintain
those services to reach those customers. That cost could be reduced by the
providers in question (and in some cases the device manufacturers) providing
robust IPv6 implementations in their products and services.
Unfortunately, NAT, CGNAT, and IPv4 in general are an unrecognized cost
inflicted on people who are not involved in the decision to implement those
processes vs. deploying IPv6. This creates a situation where those who
have deployed IPv6, yet wish to maintain connectivity to those who have not,
are essentially subsidizing those who have not in order to maintain that
connectivity.
Now, if the true cost of that were more transparent and the organizations
not deploying IPv6 could be made more aware of the risks of what happens
when a variety of organizations choose to put an end to that subsidy,
it might get more attention at the CxO level. Unfortunately, the perverse
incentives of the market (providers that are willing to offer legacy services
are more likely to retain customers than providers that aren’t) prevent
those paying the subsidy from opting out (at least for now) because the
critical mass of customers still clinging to their legacy networks presumably
comes with a value that exceeds the cost of that subsidy.
There was actually some excellent work done to try and quantify this
in terms of Per User Per Year costs to an average ISP by
Lee Howard: https://www.rmv6tf.org/wp-content/uploads/2012/11/TCO-of-CGN1.pdf
Owen
At the bottom of that page, there is a question “Was this answer helpful?” I clicked NO. It gave me a free-form text box to explain why I felt it was not helpful… Here’s what I typed:
The advice is just bad and the facts are incorrect.
IPv6 is not blocking the Disney application. Either IPv6 is broken in the users environment (in which case, the user should work with their network administrator to resolve this) or Disney has failed to implement IPv6 correctly on their DRM platform.
IPv6 cannot “Block” an application.
Turning off IPv6 will degrade several other services and cause additional problems. This is simply very bad advice and shame on Disney for issuing it.
Hopefully if enough people follow suit, Disney will get the idea.
Owen
His example may have included incompetence. However, while it takes longer, it is
definitely possible to run out of RFC-1918 space with scale and no incompetence.
No rational network will ever manage to assign every single /32 to an endpoint host, but
I know of several networks that have come darn close and still run multiple partitioned
RFC-1918 “zones” because RFC-1918 just isn’t enough for them.
The good news is that IPv6 has plenty of addresses available for all of these applications
and there’s absolutely no need for separate private addressing unless you really want it.
Owen
WebOS implemented IPv6 in 3.8 IIRC.
Owen
ROTFL! I’m sorry, but the imagery of people paying rent for a piece of Randy’s mind is just too much.
Owen,
I am genuinely curious, how would you explain the problem, and describe a solution, to an almost exclusively non-technical audience who just wants to get the bits flowing again?
Doug
(still not speaking for anyone other than myself)
"The people who did Disney's software wrote it for the Internet protocols
of last century, so it fails with this century's Internet. Adding insult to injury,
the reason you even notice a problem is because it reacts badly to the failure,
because it doesn't even include *last* century's well-known methods of
error recovery".
I would define it this way: if something can be done using IPv4, it has an obvious IPv6 counterpart that is usable by the same community to the extent that the community is itself able to use such. Web sites, mail, bandwidth, routing, ROAs, firewalls with appropriate rules, and so on. The problem with my suggested wording is that if one turns IPv4 off, by implication someone turns IPv6 off, and I don't intend that. So reword to make IPv6 the surviving service in some way, and I think you're pretty much there.
No, it isn't. It's the year 2021. Stop making excuses.
Please explain to me how you uniquely number 40M endpoints with RFC-1918 without running out of
addresses and without creating partitioned networks.
If you can’t, then I’m not the one making excuses.
Owen
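For scale, the back-of-envelope arithmetic behind that question (plain Python;
nothing assumed beyond the RFC 1918 prefixes themselves):

from ipaddress import ip_network

rfc1918 = [ip_network("10.0.0.0/8"),
           ip_network("172.16.0.0/12"),
           ip_network("192.168.0.0/16")]

total = sum(n.num_addresses for n in rfc1918)
print(total)              # 17,891,328 -- well short of 40,000,000 endpoints
print(2 ** 64)            # host addresses in a single IPv6 /64 subnet
print(2 ** (64 - 48))     # 65,536 /64 subnets in one site-sized /48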