TWC (AS11351) blocking all NTP?

This evening all of my servers lost NTP sync, stating that our on-site NTP
servers hadn't synced in too long.

Reference time noted by the local NTP servers:
  Fri, Jan 31 2014 19:11:29.725

Apparently since then, NTP has been unable to traverse the circuit. Our
other provider is shuffling NTP packets just fine, and after finding an
NTP peer whose return route went in that direction, I was able to get
NTP back in shape.

Spot-checking various NTP peers configured on my end with various looking
glasses close to the far end confirms that any time the return route is
through AS11351, we never get the responses. Outbound routes almost always
take the shorter route through our other provider.

Is anyone else seeing this, or am I lucky enough to have it localized to
my region (Northern NY)?

I've created a ticket with the provider, although with it being the weekend,
I have doubts it'll be a quick resolution. I'm sure it's a strange knee-jerk
response to the monlist garbage. Still, stopping time without warning is
Uncool, Man.

-- Jonathan Towne

While I do not profess to know the cause of your particular NTP sync
problem, this *might* be due to knee-jerk reactions to the NTP
reflection/amplification DDoS attacks that have been quite an
annoyance and operational issue lately. I suspect that some operators
have found that they harbor devices inside their own networks that are
being used (or might be used) to facilitate these attacks:

https://www.us-cert.gov/ncas/current-activity/2014/01/10/Network-Time-Protocol-NTP-Amplification-Attacks

See also:

http://openntpproject.org/
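
If you want to spot-check a server you operate yourself, something like
this Python sketch may help: it sends the classic mode-7 MON_GETLIST
probe, and any reply at all means the daemon can be abused as a
reflector. Only point it at hosts you are responsible for.

#!/usr/bin/env python3
"""Check whether an ntpd you operate still answers monlist queries."""
import socket
import sys

# Classic NTP mode-7 probe: version 2, mode 7, implementation 3
# (XNTPD), request code 42 (MON_GETLIST_1), padded to 8 bytes.
MONLIST_PROBE = b"\x17\x00\x03\x2a" + b"\x00" * 4

def responds_to_monlist(host, port=123, timeout=2.0):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        sock.sendto(MONLIST_PROBE, (host, port))
        sock.recvfrom(4096)   # any answer means monlist is enabled
        return True
    except socket.timeout:
        return False
    finally:
        sock.close()

if __name__ == "__main__":
    host = sys.argv[1]
    if responds_to_monlist(host):
        print(host, "answers monlist -- patch or add 'noquery'")
    else:
        print(host, "no monlist response")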

- - ferg

The provider has kindly acknowledged that there is an issue, and is
working on a resolution. Heads up, it may be more than just my region.

-- Jonathan Towne

In article <20140202163313.GF24634@hijacked.us> you write:

> The provider has kindly acknowledged that there is an issue, and is
> working on a resolution. Heads up, it may be more than just my region.

I'm a Time-Warner cable customer in the Syracuse region, and both of
the NTP servers on my home LAN are happily syncing with outside peers.

My real servers are hosted in Ithaca, with T-W being one of the
upstreams and they're also OK. They were recruited into an NTP DDoS
last month (while I was at a meeting working on anti-DDoS best
practice, which was a little embarrassing) but they're upgraded and
locked down now.

R's,
John

> The provider has kindly acknowledged that there is an issue, and is
> working on a resolution. Heads up, it may be more than just my region.

And it's not just your provider; everyone is dealing with UDP amp attacks.

These UDP-based amp attacks are off the charts. Wholesale blocking of
traffic at the protocol level to mitigate tens to hundreds of gigabits of
DDoS traffic is not "knee-jerk"; it is the right thing to do in a world
where BCP 38 is far from universal and open DNS servers, NTP, chargen, and
whatever UDP 172 is all run wild.

People who run networks know what it takes to restore service. And
increasingly, that will be clamping IPv4 UDP in the plumbing, both
reactively and proactively.

And, I agree BCP 38 would help, but that was published 14 years ago.

CB

Please note that it's not that UDP is at fault here; it's
applications that are structured to respond to small
input packets with large responses.

If NTP responded to a single query with a single
equivalently sized response, its effectiveness as
a DDoS attack would be zero; with zero amplification,
the volume of attack traffic would be exactly equivalent
to the volume of spoofed traffic the originator could
send out in the first place.
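
To put rough numbers on that (the figures below are illustrative
assumptions in the ballpark of what's been reported for monlist, not
measurements of any particular attack):

# Back-of-the-envelope amplification math; all figures are assumptions.
request_bytes = 234        # one small spoofed monlist query on the wire
response_packets = 100     # monlist can spread its list across many packets
response_bytes_each = 482  # size of each response packet on the wire

amplification = (response_packets * response_bytes_each) / request_bytes
print(f"amplification factor: ~{amplification:.0f}x")  # ~206x

# With a 1:1 request/response size, the factor collapses to 1 -- the
# attacker gains nothing over sending the spoofed traffic directly.
print(request_bytes / request_bytes)  # 1.0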

I agree the source obfuscation aspect of UDP can be
annoying, from the spoofing perspective, but that
really needs to be recognized to be separate from
the volume amplification aspect, which is an application
level issue, not a protocol level issue.

Thanks!

Matt
PS--yes, I know it would completely change the
dynamics of the internet as we know it today to
shift to a 1:1 correspondence between input
requests and output replies...but it *would*
have a nice side effect of balancing out traffic
ratios in many places, altering the settlement
landscape in the process. ;)

> > The provider has kindly acknowledged that there is an issue, and is
> > working on a resolution. Heads up, it may be more than just my region.
>
> And it's not just your provider; everyone is dealing with UDP amp attacks.
>
> These UDP-based amp attacks are off the charts. Wholesale blocking of
> traffic at the protocol level to mitigate tens to hundreds of gigabits of
> DDoS traffic is not "knee-jerk"; it is the right thing to do in a world
> where BCP 38 is far from universal and open DNS servers, NTP, chargen,
> and whatever UDP 172 is all run wild.
>
> People who run networks know what it takes to restore service. And
> increasingly, that will be clamping IPv4 UDP in the plumbing, both
> reactively and proactively.

> Please note that it's not that UDP is at fault here; it's
> applications that are structured to respond to small
> input packets with large responses.

I don't want to go into fault; there is plenty of that to go around.

> If NTP responded to a single query with a single
> equivalently sized response, its effectiveness as
> a DDoS attack would be zero; with zero amplification,
> the volume of attack traffic would be exactly equivalent
> to the volume of spoofed traffic the originator could
> send out in the first place.

> I agree the source obfuscation aspect of UDP can be
> annoying, from the spoofing perspective, but that
> really needs to be recognized to be separate from
> the volume amplification aspect, which is an application
> level issue, not a protocol level issue.

Source obfuscation is not merely annoying. Combined with amplification, it
is the perfect storm for shutting down networks, whereby the only solution
is to shut down IPv4 UDP. Or wave the magic wand that makes BCP 38
universal, patches boxes, and so on.

My point is: don't expect these abused services on UDP to last. We have
experience in access networks with how to deal with abused protocols. Here
is one reference:

http://customer.comcast.com/help-and-support/internet/list-of-blocked-ports/

My crystal ball says all of UDP will show up on that list soon.

CB

I'd hate to think that NetOps would be so heavy-handed as to block all of
UDP, as this would essentially halt quite a bit of audio/video traffic.
That being said, there's still quite the need for protocol improvement when
making use of UDP, but blocking UDP as a whole is definitely not a
resolution; it simply creates a wall that not only keeps the abusive
traffic out, but keeps legitimate traffic from flowing freely as it should.
Sent on the TELUS Mobility network with BlackBerry

"We had to burn down the village to save it."

> I'd hate to think that NetOps would be so heavy-handed as to block all
> of UDP, as this would essentially halt quite a bit of audio/video
> traffic. That being said, there's still quite the need for protocol
> improvement when making use of UDP, but blocking UDP as a whole is
> definitely not a resolution; it simply creates a wall that not only
> keeps the abusive traffic out, but keeps legitimate traffic from
> flowing freely as it should.

"We had to burn down the village to save it."

Close. More like a hurricane is landing in NYC so we are forcing an
evacuation.

But. Your network, your call.

CB

We block all outbound UDP for our ~200,000 users for this very reason
(with the exception of some whitelisted NTP and DNS servers). So far we
have had 0 complaints, and 0 UDP floods sourced from us.

Actually, you could've (and should've) been far more selective in what you filtered via ACLs, IMHO.

What about your users who play online games like BF4?

I'm a big believer in using ACLs to intelligently preclude reflection/amplification abuse, but wholesale filtering of all UDP takes matters too far, IMHO.

My suggestion would be to implement antispoofing on the southward interfaces of the customer aggregation edge (if you can't implement it via mechanisms such as cable ip source verify even further southward), and then implement a default ingress ACL on the coreward interfaces of the customer aggregation gateways to block inbound UDP destined to ntp, chargen, DNS, and SNMP ports only.
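
As a toy model of that default ingress policy (the port list follows the
services named above; a real deployment would express this in the
platform's own ACL syntax rather than code):

# Illustrative sketch only: deny UDP headed toward customers when the
# destination port is one of the reflection-prone services; permit the rest.
REFLECTION_PRONE_UDP_PORTS = {
    123: "ntp",
    19:  "chargen",
    53:  "dns",
    161: "snmp",
}

def ingress_verdict(protocol, dst_port):
    if protocol == "udp" and dst_port in REFLECTION_PRONE_UDP_PORTS:
        return "deny"
    return "permit"

assert ingress_verdict("udp", 123) == "deny"    # NTP toward customers
assert ingress_verdict("udp", 443) == "permit"  # other UDP untouched
assert ingress_verdict("tcp", 53)  == "permit"  # TCP DNS unaffected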

I also think that restricting your users by default to your own recursive DNS servers, plus a couple of well-known, well-run public recursive services, is a good idea - as long as you allow your users to opt out.

This has nothing to do with DDoS, but with other types of issues.

The recently publicized mechanism to leverage NTP servers for amplified DoS attacks is seriously effective.
I had a friend whose local ISP was affected by this on Thursday, and I saw another case where just two Asterisk servers saturated a 100 Mbps link to the point of being unusable.
Once more: this exploit is seriously effective at using bandwidth by reflection.

From a provider point of view, given the choice between contacting the end users and mitigating the problem, if I were in TW's position and unable to immediately contact the numerous downstream customers affected by this, I would take the option to block NTP on a case-by-case basis (perhaps even with a broad brush) rather than allow it to continue and cause disruptions elsewhere.

- Mike

Per my previous post in this thread, there are ways to do this without blocking client access to ntp servers; in point of fact, unless the ISP in question fails to perform antispoofing at its customer aggregation edge, blocking client access to ntp servers does nothing to address (pardon the pun) the issue of ntp reflection/amplification DDoS attacks.

All that broadband access operators need to do is to a) enforce antispoofing as close to their customers as possible, and b) enforce their AUPs (most broadband operators prohibit operating servers) by blocking *inbound* UDP/123 traffic towards their customers at the customer aggregation edge (same for DNS, chargen, and SNMP).

Actually, this can cause problems for ntpds operating in symmetric mode, where both the source and destination ports are UDP/123. Allowing inbound UDP/123 - UDP/123 and then rate-limiting it would be one approach; another would be to block outbound UDP/123 emanating from customers based upon packet size, if one's hardware allows matching on size in ACLs.
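
A sketch of the size-based variant; the cutoff below is an assumed
illustrative threshold, not a vetted operational value:

# Symmetric-mode NTP (src and dst port both 123) carries a 48-byte
# payload; monlist response packets are an order of magnitude larger.
MAX_LEGIT_NTP_PAYLOAD = 100   # bytes; assumption for illustration

def drop_symmetric_ntp(src_port, dst_port, payload_len):
    if src_port == 123 and dst_port == 123:
        return payload_len > MAX_LEGIT_NTP_PAYLOAD
    return False

assert not drop_symmetric_ntp(123, 123, 48)  # ordinary symmetric packet
assert drop_symmetric_ntp(123, 123, 440)     # monlist-sized response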

> From a provider point of view, given the choice between contacting the end users and mitigating the problem, if I were in TW's position and unable to immediately contact the numerous downstream customers affected by this, I would take the option to block NTP on a case-by-case basis (perhaps even with a broad brush) rather than allow it to continue and cause disruptions elsewhere.

> Per my previous post in this thread, there are ways to do this without blocking client access to ntp servers; in point of fact, unless the ISP in question fails to perform antispoofing at its customer aggregation edge, blocking client access to ntp servers does nothing to address (pardon the pun) the issue of ntp reflection/amplification DDoS attacks.

Agreed, and I was not trying to get into arguments about whether 'blocking' is appropriate or not. I was simply suggesting that a provider, finding themselves in a position where this was causing lots of trouble and having a large impact, might have taken a 'broad brush' approach to stabilize things while working on a more proper solution.

> All that broadband access operators need to do is to a) enforce antispoofing as close to their customers as possible, and b) enforce their AUPs (most broadband operators prohibit operating servers) by blocking *inbound* UDP/123 traffic towards their customers at the customer aggregation edge (same for DNS, chargen, and SNMP).

I certainly would not want to provide, as part of the AUP (as seller or buyer), a policy under which fundamentals like NTP are 'blocked' for customers. Seems like too much of a slippery slope for my taste.

In regards to anti-spoofing measures - I think there are a couple of vectors in the latest NTP attack where more rigorous client-side anti-spoofing could help, but it will not solve the problem overall. Trying to be fair and practical (from my perspective): isn't it a lot easier and quicker to patch/work around the IPv4 problems and address proper solutions via IPv6 and its associated RFCs?

- Michael DeMan

> I certainly would not want to provide, as part of the AUP (as seller or buyer), a policy under which fundamentals like NTP are 'blocked' for customers. Seems like too much of a slippery slope for my taste.

The idea is to block traffic to misconfigured ntpds on broadband customer access networks, not to limit their choice of which ntp servers to use.

> In regards to anti-spoofing measures - I think there are a couple of vectors in the latest NTP attack where more rigorous client-side anti-spoofing could help, but it will not solve the problem overall.

Rigorous antispoofing would solve the problem of all reflection/amplification DDoS attacks. My hunch is that most spoofed traffic involved in these attacks actually emanates from compromised/abused servers on IDC networks (including so-called 'bulletproof' miscreant-friendly networks), but I've no data to support that, yet.

> Trying to be fair and practical (from my perspective): isn't it a lot easier and quicker to patch/work around the IPv4 problems and address proper solutions via IPv6 and its associated RFCs?

There's nothing in IPv6 which makes any difference. The ultimate solution is antispoofing at the customer edge.
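
Conceptually, antispoofing at that edge is just a reverse-path check:
accept a packet only if the best route back to its source points out the
interface it arrived on. A toy version follows (a real router does this
against its FIB, e.g. strict-mode uRPF; the prefixes and interface names
here are made up):

import ipaddress

ROUTES = {  # hypothetical table: prefix -> egress interface
    ipaddress.ip_network("198.51.100.0/24"): "cust-agg0",
    ipaddress.ip_network("0.0.0.0/0"):       "core0",
}

def reverse_path_ok(src_ip, arrival_iface):
    addr = ipaddress.ip_address(src_ip)
    best = max((net for net in ROUTES if addr in net),
               key=lambda net: net.prefixlen)  # longest-prefix match
    return ROUTES[best] == arrival_iface

assert reverse_path_ok("198.51.100.7", "cust-agg0")      # customer's own block
assert not reverse_path_ok("203.0.113.9", "cust-agg0")   # spoofed source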

a message of 49 lines which said:

> If NTP responded to a single query with a single equivalently sized
> response, its effectiveness as a DDoS attack would be zero; with
> zero amplification, the volume of attack traffic would be exactly
> equivalent to the volume of spoofed traffic the originator could
> send out in the first place.

It is a bit more complicated. Reflection without amplification is
certainly much less useful for an attacker, but it still has some
advantages: the attack traffic coming into the victim's AS will be
distributed differently (entering via different peers), making
tracking the attacker through NetFlow/IPFIX more difficult.

a message of 20 lines which said:

> I also think that restricting your users by default to your own
> recursive DNS servers, plus a couple of well-known, well-run public
> recursive services, is a good idea - as long as you allow your users
> to opt out.

That's a big "as long". I agree with you, but I'm fairly certain that
most ISPs who deny their users the ability to do DNS requests directly
(or to run their own DNS resolver) have no such opt-out (or they make
it expensive and/or complicated). After all, when outside DNS is
blocked, it is more often for business reasons (forcing the users onto
a local lying resolver that serves ads when NXDOMAIN is returned) than
for security reasons.