Death of the Internet, Film at 11

Laszlo Hanyecz wrote:

What does BCP38 have to do with this?

You're right. That's not specifically related to *this* attack. Nobody
needs to spoof anything when you've got a zillion fire hoses just lying
around where any 13-year-old can command them from the TRS-80 in his mom's
basement. (I've seen different estimates today. One said there's about
a half million of these things, but I think I saw where Dyn itself put
the number of unique IPs in the attack at something like ten million.)

I just threw out BCP 38 as an example of something *very* minimal that
the collective Internet, if it had any brains, would have made de rigueur
for everyone ten+ years ago. BCP 38 is something that I personally view
as a "no brainer", that is already widely accepted as being necessary,
and yet is a critical security step that some (many?) are still resisting.
So, it's like "Well, if the Internet-at-large can't even do *this* simple
and relatively non-controversial thing, then we haven't got a prayer in
hell of ever seeing a world-wide determined push to find and neutralize
all of these bloody damn stupid CCTV things. And when the day comes when
somebody figures out how to remotely pop a default config Windoze XP
box... boy oh boy, will *that* be a fun day... NOT! Because we're not
ready. Nobody's ready. Except maybe DoD, and I'm not even taking bets
on that one."
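
(For anyone who has never actually looked at it: the entire decision BCP 38
asks an edge device to make is roughly the following, sketched here in
Python purely for illustration. The prefix and addresses are documentation
ranges, not anyone's real network, and real gear does this with an ACL or
uRPF rather than a script.)

    # Minimal sketch of the BCP 38 idea: only forward packets whose source
    # address falls inside the prefixes actually assigned to the customer
    # port they arrived on. Prefix and test addresses are examples only.
    import ipaddress

    CUSTOMER_PREFIXES = [ipaddress.ip_network("203.0.113.0/24")]

    def ingress_permits(src_ip: str) -> bool:
        """True if this source address is legitimate for this edge port."""
        addr = ipaddress.ip_address(src_ip)
        return any(addr in net for net in CUSTOMER_PREFIXES)

    print(ingress_permits("203.0.113.45"))   # True  -> forward
    print(ingress_permits("198.51.100.7"))   # False -> drop as spoofed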

I didn't intend to focus on BCP 38. Everybody knows that's only one
thing, designed to deal with just one part of the overall problem. The
overall problem, in my view, is the whole mindset which says "Oh, we
just connect the wires. Everything else is somebody else's problem."

Ok, so this mailing list is a list of network operators. Swell. Every
network operator who can do so, please raise your hand if you have
*recently* scanned your own network and if you can -honestly- attest
that you have taken all necessary steps to ensure that none of the
numerous specific types of CCTV thingies that Krebs and others identified
weeks or months ago as being fundamentally insecure can emit a single
packet out onto the public Internet.

And, cue the crickets...
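
(And for anyone who wants to at least make a first pass, the first step
doesn't have to be elaborate. Here's an intentionally crude sketch: sweep
a prefix and see what answers on telnet at all. The prefix, port, and
timeout below are illustrative; a real audit would then go test the
specific models and hardcoded credentials that Krebs documented.)

    # Illustrative only: sweep a prefix for hosts answering on TCP/23
    # (telnet), the exposure path for these devices. The prefix below is
    # a documentation range, not a real network.
    import ipaddress
    import socket

    PREFIX = ipaddress.ip_network("192.0.2.0/28")  # example range
    PORT = 23
    TIMEOUT = 1.0  # seconds per host

    for host in PREFIX.hosts():
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.settimeout(TIMEOUT)
        try:
            if sock.connect_ex((str(host), PORT)) == 0:
                print(f"{host} answers on telnet -- go look at it")
        finally:
            sock.close()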

Recent events, like the Krebs DDoS, the even bigger OVH DDoS, and
today's events, make it perfectly clear to even the most blithering of
blithering idiots that network operators, en masse, have to start scanning
their own networks for insecurities. And you'd all better get on that,
not next fiscal year or even next quarter, but right effing now, because
the next major event is right around the corner. And remember, *you*
may not be scanning your networks for easily pop'able boxes, but as we
should all be crystal clear on by now, that *does not* mean that nobody
else is doing so.

Regards,
rfg

P.S. The old saying is that idle hands are the devil's playground. In
the context of the various post-invasion insurgencies, etc., in Iraq, it
is often mentioned that it was somewhat less than a brilliant move for
the U.S. to have disbanded the Iraq army, thereby leaving large numbers
of trained young men on the streets with no jobs and nothing to do.

To all of the network operators who think that (or argue that) it will
be too expensive to hire professionals to come in and do the work to
scan your networks for known vulnerabilities, I have a simple suggestion.
Go down to your local high school, find the schmuck who teaches the
kids about computers, and ask him for the name of his most clever student.
Then hire that student and put him to work, scanning your network.

As in Iraq, it will be *much* better to have capable young men inside the
tent, pissing out, rather than the other way around.

"taken all necessary steps to insure that none of the numerous specific types of CCVT thingies that Krebs and others identified"

Serious question... how?

In a message written on Sat, Oct 22, 2016 at 07:34:55AM -0500, Mike Hammett wrote:

"taken all necessary steps to insure that none of the numerous specific types of CCVT thingies that Krebs and others identified"

From "Hacked Cameras, DVRs Powered Today’s Massive Internet Outage" – Krebs on Security:

The part that should outrage everyone on this list:

        That's because while many of these devices allow users to change
        the default usernames and passwords on a Web-based administration
        panel that ships with the products, those machines can still be
        reached via more obscure, less user-friendly communications services
        called "Telnet" and "SSH."

        "The issue with these particular devices is that a user cannot
        feasibly change this password," Flashpoints Zach Wikholm told
        KrebsOnSecurity. "The password is hardcoded into the firmware, and
        the tools necessary to disable it are not present. Even worse, the
        web interface is not aware that these credentials even exist."

As much as I hate to say it, what is needed is regulation. It could
be some form of self-regulation, with retailers refusing to sell
products that aren't "certified" by some group. It could be full-blown
government regulation. Perhaps a mix.

It's not a problem for a network operator to "solve", any more than
someone who builds roads can make an unsafe car safe. Yes, both
the network operator and the road operator play a role in building safe
infrastructure (BCP38, deformable barriers), but neither can do
anything for a manufacturer who builds a device that is wholly
deficient in the first place.

Network operators can only do so much. By the time traffic enters into
an ISP's traffic aggregation point, any flow monitoring and throttling
would have a minimal effect. Not saying that it shouldn't be
considered. The correct answer includes throttling the traffic much
closer to the source.

The obvious answer is for the device that bridges IoT to the upstream
link in the home or office to have the capability of rate-limiting upstream
traffic. Perhaps on a per-MAC basis. When does a thermostat, light
bulb, or refrigerator need 1-megabyte/s uplink channels? For that
matter, how many computers -- especially laptops -- need that kind of
upstream capacity?

(Yes, yes, YouTube publishers and VLAN links to the office, to name two,
will need that kind of channel; see below. Gamers need small,
low-latency channels, so the throttling can't be too aggressive.
Public-access storage, web and mail servers, obviously. IP-connected
Web cameras need some upstream capacity, but not a full-bore one. The
uplink throttle can take into consideration "reasonable" upstream rates
for cameras.)

For wireless access points, the place to start would be with the OpenWRT
package, to serve as a model for what *can* be done. Once we have a
proof of concept, it would raise the bar for "commercial"
implementations. THAT would then provide an opportunity for the
three-letter Federal agencies to specify reasonable regulations, should
Congress so decide this is necessary. It's much easier for regulatory
bodies to say "this software does it, why can't yours?" instead of
saying "you [manufacturer] go figure it out".

The ripple effect throughout the world would go a long way to curbing
the problem. Especially if other regulatory administrations follow
suit, so that the enabling crap routers are weeded out.

What about the exceptions? For those rare cases where one needs a
high-rate upstream channel for a node on the wireless network (or wired
network, for that matter), the firmware in the traffic aggregating
device can allow for specific exceptions to the rate-limit rules. One
method is to tie exceptions to the device MAC address, or range of MAC
addresses. Another is to tie exceptions to ports, with WiFi being a
single "port" in this context. Generators of high-speed upstream
traffic would, for example, need a wired connection in order to do this.
This would *not* affect most WiFi-connected peripherals, like printers,
because the AP would limit upstream traffic, not downstream.

The ISP would then have something to sell to the customer, to replace
the local POS router/WAP that the customer is currently using.

Hmmm...something to think about as I build the Linux IPTABLES Firewall
Rule Generator Mk III...
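
Strictly as a back-of-the-envelope sketch of what such a generator might
emit (every MAC address, rate, mark value, and interface name below is
invented; the real thing would read them from config), the idea is to mark
upstream traffic per source MAC in the mangle table, honor an exception
list first, and let a tc class keyed on the fwmark do the actual shaping:

    # Hypothetical rule-generator sketch: per-MAC upstream rate limiting
    # with an exception list. Prints the iptables/tc commands rather than
    # running them.

    EXEMPT_MACS = ["00:11:22:33:44:55"]    # e.g. the one box allowed full rate
    LIMITED_MACS = ["66:77:88:99:aa:bb",   # IP camera
                    "cc:dd:ee:ff:00:11"]   # "smart" thermostat
    WAN_IF = "eth0"
    MARK = 10

    def generate_rules():
        rules = []
        # Exceptions first, so they fall through unmarked.
        for mac in EXEMPT_MACS:
            rules.append("iptables -t mangle -A FORWARD "
                         f"-m mac --mac-source {mac} -j RETURN")
        # Everything else we know about gets marked for shaping.
        for mac in LIMITED_MACS:
            rules.append("iptables -t mangle -A FORWARD "
                         f"-m mac --mac-source {mac} -j MARK --set-mark {MARK}")
        # The actual cap lives in tc on the WAN side, keyed on the fwmark.
        rules.append(f"tc qdisc add dev {WAN_IF} root handle 1: htb default 30")
        rules.append(f"tc class add dev {WAN_IF} parent 1: classid 1:10 "
                     "htb rate 256kbit")
        rules.append(f"tc filter add dev {WAN_IF} parent 1: protocol ip "
                     f"handle {MARK} fw flowid 1:10")
        return rules

    if __name__ == "__main__":
        print("\n".join(generate_rules()))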

Putting them behind a firewall without general Internet access seems to work for us. We have a lot of cheap IP cameras in our facility and none of them can reach the net. But this is probably a bit beyond the capabilities of the general home user.

—Chris

It is also likely the desired use case. In my office I like to be able to
log in when needed while on the road -- say, when the alarm company calls me
at 2am for a false alarm -- so I don't have to get someone else out of bed
to have them dispatched to check on the site.

-jim

It's also generally counter to them being available outside of that network (web and proprietary interfaces are needed; SSH and telnet are not). There's also not much I can do about that as a network operator.

VPNs can accomplish this without opening ports directly to devices.

Luke

Sure, but now we've put it outside the skill level of the 99.99% of people
who don't read and understand this list.

-jim

I was referring to your use case and it being a business; for residential I agree with you.

Generic question:

The media seems to have concluded it was an "internet of things" that
caused this DDoS.

I have not seen any evidence of this. Has this been published by an
authoritative source or is it just assumed?

Has the type of device involved been identified?

I am curious how some hacker in a basement with his TRS-80 or Commodore
PET would be able to reach "billions" of these devices to reprogram them.
The vast majority of homes are behind NAT, which means that an incoming
packet has very little chance of reaching the IoT gizmo.

I am guessing/hoping such devices have been identified and some
homeowners contacted and asked to volunteer their device for forensic
analysis of where the attack came from?

Is it more plausible that those devices were "hacked" in the OEM
firmware and sold with the "virus" built-in? That would explain the
widespread attack.

Also, in cases such as this one, while the target has managed to
mitigate the attack, how long would such an attack typically continue
and require blocking?

Since the attack seemed focused on eastern USA DNS servers, would it be
fair to assume that the attacks came mostly from the same region (aka
devices installed in the eastern USA), since anycast would point them to
that?

Or did the attack use actual unicast IP addresses instead of the anycast
ones to specifically target servers?

BTW, normally, if you change the "web" password on a "device", it would
also change telnet/SSH/ftp passwords.

Vast majority of homes are behind NAT, which means that an incoming
packet has very little chance of reaching the IoT gizmo.

UPnP exposes many IoT devices to the Internet, plus they're always exposed on the LAN, where many viruses find them and use backdoors to conscript them. Several bad actors are currently selling access to their IoT minions for DDoS purposes.
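
(If you want to see how visible this is on your own LAN, a few lines of
SSDP discovery will show you every device that answers UPnP. Rough sketch
only, not a complete UPnP client; it just prints the first response line
from each responder.)

    # Send an SSDP M-SEARCH to the standard multicast group and print
    # whatever answers. Anything replying as an InternetGatewayDevice is
    # a router that will likely honor AddPortMapping requests from any
    # box on the LAN, IoT gizmos included.
    import socket

    MSEARCH = "\r\n".join([
        "M-SEARCH * HTTP/1.1",
        "HOST: 239.255.255.250:1900",
        'MAN: "ssdp:discover"',
        "MX: 2",
        "ST: ssdp:all",
        "", "",
    ])

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(3.0)
    sock.sendto(MSEARCH.encode("ascii"), ("239.255.255.250", 1900))
    try:
        while True:
            data, addr = sock.recvfrom(65507)
            print(addr[0], data.split(b"\r\n")[0].decode(errors="replace"))
    except socket.timeout:
        pass
    finally:
        sock.close()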

This is not new. What's new is that minion control seems to have become concentrated in the hands of a small number of malicious twerps.

-mel beckman

Generic question:

The media seems to have concluded it was an "internet of things" that
caused this DDoS.

I have not seen any evidence of this. Has this been published by an
authoritative source or is it just assumed?

Flashpoint [0], Krebs [1], Ars Technica [2]. I'm not sure what credible
looks like unless they release a packet capture, but this is probably
the consensus.

Has the type of device involved been identified?

routers and cameras with shitty firmware [3]

Is it more plausible that those devices were "hacked" in the OEM
firmware and sold with the "virus" built-in ? That would explain the
widespread attack.

The source code has been released: Krebs [4], code [5].

Also, in cases such as this one, while the target has managed to
mitigate the attack, how long would such an attack typically continue
and require blocking ?

  This is an actual question that hasn't been answered.

Since the attack seemed focused on eastern USA DNS servers, would it be
fair to assume that the attacks came mostly from the same region (aka:
devices installed in eastern USA) ? (since anycast would point them to
that).

Aren't heat maps just population graphs?

BTW, normally, if you change the "web" password on a "device", it would
also change telnet/SSH/ftp passwords.

Seems like no one is doing either.

[0] https://www.flashpoint-intel.com/mirai-botnet-linked-dyn-dns-ddos-attacks/
[1] https://krebsonsecurity.com/2016/10/hacked-cameras-dvrs-powered-todays-massive-internet-outage/
[2] Double-dip Internet-of-Things botnet attack felt across the Internet | Ars Technica
[3] IoT Home Router Botnet Leveraged in Large DDoS Attack
[4] Source Code for IoT Botnet ‘Mirai’ Released – Krebs on Security
[5] GitHub - jgamblin/Mirai-Source-Code: Leaked Mirai Source Code for Research/IoC Development Purposes

Until Dyn says or someone says Dyn said, everything is assumed.

That's what VPNs are for.

One way to deal with this would be for ISPs to purchase DoS attacks
against their own servers (not necessarily hosted on their own
network), then look at which connections from their network are attacking
those machines, then quarantine those connections after a delay
period so that attacks can't be correlated with quarantine actions
easily.

This doesn't require an ISP to attempt to break into a customer's
machine to identify it. It may take several runs to identify
most of the connections associated with a DoS provider.
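
The analysis side of that is not much code. Purely as a sketch (the
flow-record format and threshold here are invented; a real deployment
would feed it from NetFlow/sFlow export), the idea is just to count which
customer sources were hammering the bait box during the test window:

    # Hypothetical flow-record analysis: find customer sources that hit
    # the bait target hard enough to be worth quarantining later.
    from collections import Counter

    SINKHOLE = "198.51.100.10"   # the server you paid to have attacked (example)

    def suspects(flow_records, threshold=1000):
        """flow_records: iterable of (src_ip, dst_ip, packets) tuples."""
        counts = Counter()
        for src, dst, pkts in flow_records:
            if dst == SINKHOLE:
                counts[src] += pkts
        return [src for src, pkts in counts.items() if pkts >= threshold]

    # Made-up example flows:
    flows = [("203.0.113.7", SINKHOLE, 50000),
             ("203.0.113.9", "192.0.2.1", 12),
             ("203.0.113.20", SINKHOLE, 800)]
    print(suspects(flows))       # ['203.0.113.7']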

And then what? The labor to clean up this mess is not free. Whose
responsibility is it? The grandma who got a webcam for Christmas to watch
the squirrels? The ISP?... No... The vendor? What if the vendor had
released a patch to fix the issue months back, and grandma hadn't installed
it?

Making grandma and auntie Em responsible for the IT things in their house
is likely not going to go well.

Making the vendor responsible might work for the reputable ones to a point,
but won't work for the fly-by-night shops that will sell the same products
under different company names and model names until they get sued or "one
starred" into oblivion. Then they just change names and start all over.

The ISPs won't do it because of the cost to fix... The labor and potential
loss of customers.

So once identified, how do you suggest this gets fixed?

The person who owns the internet connection still has responsibility for what happens on it.

So if the owners are educated to select reputable brands in order to prevent themselves from being implicated in a DDoS and liable for a fine or some other punitive thing, they 'vote with their feet' and the fly-by-nighters suddenly lose a chunk of market share, unless they up their game?

I'm as sympathetic to Aunty Em and Grandma as the next I-started-on-a-helpdesk guy, but 'you get what you pay for' applies here as much as it does everywhere else...?

http://hub.dyn.com/dyn-blog/dyn-statement-on-10-21-2016-ddos-attack

I wish you luck with your plan, and please subscribe me to your newsletter
in digest format.