Tor and network security/administration

Apologies if this has been brought up before.

Since I'm not a network administrator myself (although I do filter
some stuff using pf and ipfw on my servers), I'm curious what NAs
think of the following technology:

http://tor.eff.org/overview.html.en

The problem I see is that this technology will be used (literally,
not ideally) solely for harassment (especially via IRC). I do not
see any other practical use for this technology other than that.
The whole "right to privacy/anonymity" argument is legitimate, but I
do not see people using* Tor for legitimate purposes.

A colleague of mine stated his opinion of my opinion: "Your problem
with Tor is that you can't control it, isn't it?" And he's right --
that's the exact problem I have with it.

Comments/concerns?

We've had considerable problems with Tor.

Idiots who like to use stolen credit cards to buy things online find Tor a nice haven of deniability for covering their tracks. Before we got a little more proactive about it, about 20% of our credit card fraud was coming through IPs that we could confirm were Tor hosts.

I spent a few hours with a sheriff in Alabama trying to explain how Tor worked, why people used it, and why, even though he had the IP address from which a 75-year-old woman's credit card number was used to spend a few hundred dollars on one of our client's sites, it wasn't really the culprit's IP.

Our IRC servers and discussion sites have also had to ban all Tor IPs that we've seen, because of troublemakers using them to evade bans. Precisely because of the totally unregulated/uncontrolled nature of Tor, its users are finding themselves banned from a great many things, which is probably hurting the people it was designed for. Because of one jerk who hopped from one Tor host to the next to get around IP bans on our site, all those IPs are banned now, preventing any legit use of Tor on any of our sites.

I don't find the anonymity a bad thing, but I would be a whole lot happier if the default configuration for people running Tor servers included an option to add HTTP headers saying that it's going through Tor, so we could decide if we wanted to conduct financial transactions with them or not.

> The problem I see is that this technology will be used (literally,
> not ideally) solely for harassment (especially via IRC). I do not
> see any other practical use for this technology other than that.
> The whole "right to privacy/anonymity" argument is legitimate, but I
> do not see people using* Tor for legitimate purposes.

> A colleague of mine stated his opinion of my opinion: "Your problem
> with Tor is that you can't control it, isn't it?" And he's right --
> that's the exact problem I have with it.

if you believe in privacy and anonymity, you get the downsides as
well as the upsides. such is life.

the problem with the net is that there are people on it.

randy

My legitimate use of Tor is because I object to companies following me
around on the net. Yes, I block ads and reject cookies, too. I choose
to not disclose my browsing to others. I get enough random commercial
crap foisted upon me that I have no time or patience for the targeted
commercial crap. To paraphrase Zimmermann's philosophy of PGP - you may
be having a hot affair, or you may be doing something politically
sensitive, but it's nobody's business but yours.

As for an attempt at a technical control, maybe set up a box with Tor
on it, get a list of exit servers and null-route them automagically.
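
For what it's worth, the moving parts CK describes are small. Here is a minimal sketch (Python, not production code) of turning a locally fetched exit-node list into null routes or a pf table. The `route ... -blackhole` syntax is BSD-flavored and the `tor_exits` table name is my own choice; the list itself would come from something like the contrib/exitlist script.

```python
# Sketch: given a file of known Tor exit IPs (one per line), emit
# null-route commands or a pf.conf table blocking them. The exit list
# is assumed to be fetched separately (e.g. by the exitlist script).

def load_exit_ips(lines):
    """Parse one IP per line, skipping blanks and comments."""
    ips = []
    for line in lines:
        line = line.strip()
        if line and not line.startswith("#"):
            ips.append(line)
    return ips

def null_route_commands(ips):
    """Format blackhole-route commands (BSD 'route' syntax assumed)."""
    return ["route add -host %s 127.0.0.1 -blackhole" % ip for ip in ips]

def pf_table(ips, name="tor_exits"):
    """Format a pf.conf table plus a block rule using it."""
    return "table <%s> { %s }\nblock in quick from <%s>" % (
        name, ", ".join(ips), name)
```

Re-running this from cron against a fresh list gets you the "automagically" part.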

CK

The TOR abuse FAQ is here:

   http://tor.eff.org/faq-abuse.html.en

They provide a script to track TOR exit nodes as well:

   http://tor.eff.org/cvs/tor/contrib/exitlist

I agree with Randy -

   The problem (with the net) is that there are people (on it).

cheers!

For IRC servers running BOPM and mail servers, I've found: http://www.sectoor.de/tor.php to be useful.

You're complaining about a network of several hundred IP addresses that are,
for the most part, documented as being the source of anonymized connections.

Obviously, if you're worried about *that*, you've already solved the problem of
identifying a connection as coming from one of the millions of machines that
has backdoor software on it, and thus potentially a port forwarder(*).

Please share your secret. The rest of us would love to have a net where Tor
nodes are a "problem" big enough to worry about.

(*) Yes, Tor intentionally anonymizes the true source *very* well. On the flip
side, what are your *REAL* chances of tracking somebody through more than 2 or
3 hops across cablemodems, unless you manage to mobilize everybody by invoking
one of the Four Horsemen of the Internet (copyright, terrorism, drug dealers,
and child pornographers)?

It's a proxy botnet, created by social engineering rather than by
compromising machines, but apart from that it's indistinguishable from any other.

The approaches you're using for abuse from other open proxies and
botnets should work fine for tor. If you've not dealt with the general
case then fixating on tor is pretty much a waste of time (unless you're
running an IRC network, perhaps).

Cheers,
   Steve

Tor is just a brand name. It's not the first, last, or only way.

As long as there are people, there will be people that abuse things.
Every open service has been abused: USENET, SMTP, IRC, DNS, DHCP,
TTY/TDD Relay for the deaf, etc. The Internet is just a small community
of 500 million or so of your closest friends you don't know.

People have known since rlogin, rexec, and rsh that relying on IP
addresses as an access-control method has limitations. Caller ID isn't that much more
secure. It is extremely unlikely we will ever make all or even most of
the network hosts secure, and there will continue to be new applications
being created all the time. Applications designers should probably
consider using application and higher layer authentication methods if
they don't want their applications used as open relays for abuse. You
can't control what the rest of the world does, but you can set the policy
for using your own application.

You don't do your financial transactions over HTTPS? If you do, by the
very design of SSL, the tor exit node cannot add any HTTP header. That
would be a man-in-the-middle attack on SSL. (Unless you count that
users will click "accept" on any "this could be a forged certificate"
warning.)

More generally, tor is not an HTTP proxy, but a TCP proxy. Which
doesn't mean it cannot (as in "there is a Turing machine that does
it") also go up from layer 4/5 to layer 7 for certain specific
application protocols; it would just be harder and require more
resources from the node, ...

Which, for an anonymizing network, could be a deliberate situation.

Tor users are already encouraged to filter through a localhost
instance of a second-stage proxy such as Privoxy. There are other
projects underway to provide similar second-stage proxy services,
possibly capable of functioning as HTTPS m-i-t-m on an intentional
basis. If a user desires to filter browser headers even if
SSL-secured, certainly s/he would know why the "forged" SSL
certificate warning was being presented by the browser.

And there's also the possibility of importing such a proxy's
certificate into the browser as a trusted CA -- at which point the
proxy could generate a "valid" (from the browser's POV) cert for any
remote site.

All this is an exercise in social vs. technical
vulnerability/security. You cannot fix social vulnerabilities via
solely technical methods, and vice versa.

The user then loses end-to-end encryption with the final server he
wants to connect to. That is unacceptable for a whole range of uses. If
a _user_ wants to control browser headers, he can instruct the
_browser_ in what headers to send or not.

Let's suppose the tor exit node does this https-man-in-the-middle
dance. It is not desirable for all connections, so you need some way
for the user to say, per connection, whether it should happen or
not. SOCKS doesn't have such a thing in its protocol, so... you use
another protocol and fix all programs on the face of the earth to support
it? You do a UI call-back where the tor daemon on the user's machine
pops up a question "should this HTTPS connection get the extra
headers"? So suddenly this daemon needs a UI on the desktop of every
single user: text if that's what the user is using, X11 if
that's what he is using, ... And on every single desktop of every
logged-in user on the system. Wow.

And how do you handle client certificates in there? By the very design of
SSL (unless it is _broken_), the tor exit node won't be able to fake
that either.

And how do you handle the verification of the server certificate? How
do you know which CAs the client trusts?

And even if you have solved all this for SSL, then there is the _next_
protocol that you have to "man in the middle fiddle with". This way
lies madness.

And above all, it still does not solve your problem. Because the
malicious user can choose not to have the additional header added.

>> You don't do your financial transactions over HTTPS? If you do, by
>> the very design of SSL, the tor exit node cannot add any HTTP
>> header. That would be a man-in-the-middle attack on SSL.

> Which, for an anonymizing network, could be a deliberate situation.

> The user then loses end-to-end encryption with the final server he
> wants to connect to.

Depends on your definition of "end-to-end" -- if one "end" is "an
agent on the user's computer", it still fits. But I think you
misunderstand the reason for a filtering proxy in the context of
anonymizing networks; read on:

> That is unacceptable for a whole range of uses. If
> a _user_ wants to control browser headers, he can instruct the
> _browser_ in what headers to send or not.

The reason filtering proxies exist (and are popular with anonymizing
networks) is because most browsers don't provide a deep level of
configurability for this sort of thing.

> Let's suppose the tor exit node does this https-man-in-the-middle
> dance. It is not desirable for all connections, so you need some way
> for the user to say, per connection, whether it should happen or
> not. SOCKS doesn't have such a thing in its protocol, so...

With SOCKS, automated filter control based on IP address (and
hostname, if using SOCKS4a or SOCKS5 with DOMAINNAME address type) is
trivial.
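
To illustrate why that is trivial: a SOCKS4a CONNECT request carries the destination hostname in cleartext, so a local proxy can pull it out in a few lines and make filtering decisions on it. A sketch (function name and the minimal error handling are mine):

```python
# Sketch: extract (port, destination) from a SOCKS4a CONNECT request.
# SOCKS4a marks "hostname follows" with a dest IP of 0.0.0.x (x != 0);
# plain SOCKS4 requests carry a literal IPv4 address instead.

import struct

def parse_socks4a(req):
    """Return (port, host-or-ip) from a SOCKS4/4a CONNECT, or None."""
    if len(req) < 9 or req[0] != 4 or req[1] != 1:   # VN=4, CD=CONNECT
        return None
    port = struct.unpack(">H", req[2:4])[0]
    ip = req[4:8]
    _userid, _, rest = req[8:].partition(b"\x00")
    if ip[:3] == b"\x00\x00\x00" and ip[3] != 0:     # 4a hostname marker
        host, _, _ = rest.partition(b"\x00")
        return port, host.decode()
    return port, ".".join(str(b) for b in ip)
```

A filtering proxy can then consult a per-user rule table keyed on the returned hostname before relaying the connection.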

> So suddenly this daemon needs a UI on the desktop of every single
> user.

Here's where your misunderstanding is evident. The filtering proxy is
not at the Tor exit node; it's at the *entry*.

Marrying the UI and the user using the proxy is precisely the point --
the filter is controlled by the person using it. Thus the UI is
provided to the user who both installed, and is using, the filtering
proxy. This is typically the way in which e.g. Privoxy+Tor is used,
as Privoxy has no facility for per-user filter settings.

> And how do you handle client certificates in there?

Install the client certs into the proxy agent.

> And how do you handle the verification of the server certificate? How
> do you know which CAs the client trusts?

Use the proxy agent's UI to pop up the same sort of dialog-box
validation that the browser would traditionally provide. There happen
to be ready-made code libraries for just this purpose.

> And even if you have solved all this for SSL, then there is the _next_
> protocol that you have to "man in the middle fiddle with". This way
> lies madness.

Filtering proxies target a somewhat narrow scope, but broad use,
subset of possible protocols. HTTP + HTTPS cover a pretty huge chunk
of traffic and user involvement. Certainly some other common
protocols could be filtered for anonymizing purposes in their own
ways.

>>>> You don't do your financial transactions over HTTPS? If you do, by
>>>> the very design of SSL, the tor exit node cannot add any HTTP
>>>> header. That would be a man-in-the-middle attack on SSL.

>>> Which, for an anonymizing network, could be a deliberate situation.

>> The user then loses end-to-end encryption with the final server he
>> wants to connect to.

> Depends on your definition of "end-to-end" -- if one "end" is "an
> agent on the user's computer", it still fits. But I think you
> misunderstand the reason for a filtering proxy in the context of
> anonymizing networks; read on:

>> So suddenly this daemon needs a UI on the desktop of every single
>> user.

> Here's where your misunderstanding is evident. The filtering proxy
> is not at the Tor exit node; it's at the *entry*.

If the proxy is not at the Tor exit node, how can the tor network
enforce the addition of the "this connection went through tor" HTTP
header that Kevin Day was asking for? Fundamentally, if you rely on a
program sitting on the user's computer adding that header, then
malevolent users can simply decline to add it, so Kevin Day's purpose is
not served. And that is what is being discussed here.

>> Let's suppose the tor exit node does this https-man-in-the-middle
>> dance. It is not desirable for all connections, so you need some
>> way for the user to say, per connection, whether it should
>> happen or not. SOCKS doesn't have such a thing in its protocol,
>> so...

> With SOCKS, automated filter control based on IP address (and
> hostname, if using SOCKS4a or SOCKS5 with DOMAINNAME address type) is
> trivial.

What I was trying to say was: The SOCKS protocol has no mechanism for
the SOCKS proxy to tell the SOCKS client "before I establish that
connection, please ask the user that question and report the answer
back to me".

Just to chime in before this gets any further off what I meant:

I know any intermediary nodes can't inject headers into HTTPS connections; that kinda defeats the purpose of SSL. :)

When doing any financial transaction, before any user enters anything sensitive, we bounce them to an HTTP page first, then look for common proxy headers on that request. If none are found, they're given a cookie that allows them to continue on that IP only for HTTPS transactions for the next 15 minutes.
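
That screening step is easy to sketch. A hedged illustration (assuming a framework that hands us the request headers as a dict and the client IP; the header list, secret, and function names are my own): look for common proxy headers, then issue an HMAC-signed cookie binding the IP to a 15-minute window.

```python
# Sketch of the proxy-header screen plus IP-bound cookie described
# above. Header names are the usual open-proxy suspects; Tor itself
# adds none of them, which is exactly the complaint in this thread.

import hashlib
import hmac
import time

SECRET = b"server-side secret"          # assumption: kept private
PROXY_HEADERS = ("via", "x-forwarded-for", "forwarded", "proxy-connection")

def looks_proxied(headers):
    """True if any common proxy header is present (case-insensitive)."""
    lowered = {k.lower() for k in headers}
    return any(h in lowered for h in PROXY_HEADERS)

def issue_token(ip, now=None):
    """Cookie value binding this client IP to an issue timestamp."""
    now = int(now if now is not None else time.time())
    msg = ("%s|%d" % (ip, now)).encode()
    return "%d:%s" % (now, hmac.new(SECRET, msg, hashlib.sha256).hexdigest())

def token_valid(token, ip, now=None, ttl=15 * 60):
    """Check the cookie came from us, for this IP, within the window."""
    now = int(now if now is not None else time.time())
    try:
        issued, digest = token.split(":", 1)
        issued = int(issued)
    except ValueError:
        return False
    if now - issued > ttl:
        return False
    msg = ("%s|%d" % (ip, issued)).encode()
    return hmac.compare_digest(
        digest, hmac.new(SECRET, msg, hashlib.sha256).hexdigest())
```

The HMAC keeps the cookie tamper-proof without server-side session state; switching exit IPs mid-session invalidates it.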

Failing that, having an exit node look at HTTP headers back from the server that contained a "X-No-Anonymous" header to say that the host at that IP shouldn't allow Tor to use it would work.
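
To be clear, "X-No-Anonymous" is a proposal in this thread, not an existing Tor feature. If an exit node chose to honor it, the check itself would be a few lines -- which is also why it would be just as easy to patch out:

```python
# Sketch of the proposed (hypothetical) opt-out check at an exit node:
# peek at the response headers and note that this destination refuses
# anonymized traffic.

def server_opts_out(response_head):
    """True if the HTTP response head carries the proposed opt-out header."""
    for line in response_head.split("\r\n")[1:]:   # skip the status line
        if not line:
            break                                   # blank line ends headers
        name, _, _ = line.partition(":")
        if name.strip().lower() == "x-no-anonymous":
            return True
    return False
```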

*Anything* would be better for Tor users if we could keep Tor abuse off our financial services without having to just ban all Tor IPs at the border. On a credit card transaction page, you have no anonymity anyway, since you're having to give us your credit card number, home address, etc. Yet, until we banned as many known Tor IPs as we could find from our network, Tor IPs accounted for a pretty high percentage of our credit card fraud, and nearly zero non-fraudulent use. Tor IPs had some significant (legitimate) use on some of our other sites, but that's gone because they're all null routed at the border now.

Tor may have some legit uses, but when it's costing us $BIGNUM in credit card fraud, I'm not going to spend too much time trying to only selectively ban it from our network.

>> Here's where your misunderstanding is evident. The filtering proxy
>> is not at the Tor exit node; it's at the *entry*.

> If the proxy is not at the Tor exit node, how can the tor network
> enforce the addition of the "this connection went through tor" HTTP
> header that Kevin Day was asking for?

And Tor users will desire to do this ... why? I have been referring
to the proxying behavior *currently in use* on Tor and likely to be
developed further in the near future. It is highly *unlikely* that
Tor will add such a header by default, so there's little point in
thinking that such a so-called "solution" might actually come to
light.

Note that nowhere have I implied that Tor HTTP requests would look
like anything but regular HTTP requests, and in fact, that's exactly
the point of Tor's design. I am NOT using this thread to comment on
the appropriateness of that behavior (I have mixed personal opinions
on that), but rather, to point out what its *users* want, which is
what is likely to be implemented. Hence my earlier comment about
addressing social vulnerabilities via solely technological methods.

> if you rely on a
> program sitting on the user's computer adding that header, then
> malevolent users can simply decline to add it,

And non-malevolent users who simply wish to avoid marketeers'
statistical data tracking. There's more than one use for the
technology, y'know.

> so Kevin Day's purpose is not served.

If the point of the technology is to add a degree of anonymity, you
can be pretty sure that a marker expressly designed to state the
message "Hi, I'm anonymous!" will never be a standard feature of said
technology. That's a pretty obvious non-starter.

> Failing that, having an exit node look at HTTP headers back from the
> server that contained a "X-No-Anonymous" header to say that the host
> at that IP shouldn't allow Tor to use it would work.

What's to stop one or more exit node operators from hacking such a
check right back out of the code?

This is a better idea, but still has a bit of defeats-the-whole-point
to it, as it would depend on people obeying that header voluntarily.
Social vs. technological divide, again.

Which brings us back to the original question of this thread, which I
started: with all that said, how exactly does one filter this technology?

"You can't" doesn't make for a very practical solution, by the way.
The same was said about BitTorrent (non-encrypted) when it came out,
and the same is being said about encrypted BT (which has caused
some ISPs to induce rate-limiting).

I'm also left wondering something else, based on the "Legalities"
Tor page. The justification seems to be that because no one's ever
been sued for using Tor to, say, perform illegitimate transactions
(Kevin's examples) or hack a server somewhere (via SSH or some other
open service), that somehow "that speaks for itself".

I don't know about the rest of the folks on NANOG, but telling a
court "I run the Tor service by choice, but the packets that come
out of my box aren't my responsibility", paraphrased, isn't going
to save you from prison time (at least here in the US). Your box,
your network port, your responsibility: period.

Why bother?

If the traffic is abusive, why do you care it comes from Tor? If there's
a pattern of abusive traffic from a few hundred IP addresses, block
those addresses. If you're particularly prone to idiots from Tor (IRC,
say) then preemptively blocking them might be nice, but I doubt the
number of new Tor nodes increases at a fast enough rate for it to be
terribly interesting.

If you want to take legal action you know exactly who is responsible
for the traffic, so whether it's coming from a Tor exit node or not isn't
terribly interesting in that case either.

If you still do want to then there are some very obvious ways to do
so, combining a Tor client and a server you run.
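
One obvious way, sketched: periodically fetch a beacon URL you control through a local Tor client, and every source IP that shows up in the beacon's logs is a live exit node. The only real bookkeeping (class and method names below are my own) is expiring stale entries, since exit nodes churn:

```python
# Sketch of the Tor-client-plus-own-server detection approach. The
# fetch-through-Tor and log-scraping parts are external; this is just
# the blocklist bookkeeping, kept fresh despite exit-node churn.

import time

class ExitNodeList:
    def __init__(self, max_age=24 * 3600):
        self.seen = {}            # ip -> time last observed at the beacon
        self.max_age = max_age

    def observe(self, ip, now=None):
        """Record an IP seen connecting to the beacon via Tor."""
        self.seen[ip] = now if now is not None else time.time()

    def current(self, now=None):
        """IPs observed recently enough to still be worth blocking."""
        now = now if now is not None else time.time()
        return sorted(ip for ip, t in self.seen.items()
                      if now - t <= self.max_age)
```

The output of `current()` is what you'd feed into a pf table or router ACL on each refresh.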

(And this is from the perspective of someone who does not believe
there is any legitimate use for Tor at all.)

Cheers,
   Steve

>> Failing that, having an exit node look at HTTP headers back from the
>> server that contained a "X-No-Anonymous" header to say that the host
>> at that IP shouldn't allow Tor to use it would work.

> What's to stop one or more exit node operators from hacking such a
> check right back out of the code?

Nothing, but it's the same nothing that stops me from just blocking all Tor exit nodes at the border.

If they showed a little bit of responsibility and allowed other people to make the decision if they wanted to deal with anonymous users or not, I'd be more than willing not to ban the whole lot of them.

Areas where there already is no expectation of anonymity don't allow you to hide your identity in the "real world", so I'm not sure why there is the notion that it's a right on the internet. Try applying for a credit card anonymously, or cashing a check at a bank wearing a ski mask and refusing to show any ID.

I realize fighting open proxies (even ones like this that aren't the result of being trojaned/backdoored) is a losing battle, but the sheer ease of ANYONE being able to click "Give me a new identity" with Tor has really invited the masses to start playing with credit card fraud at a level I hadn't seen before. I'm willing to bet others are experiencing the same thing, but just don't realize it because they're unfamiliar with Tor and don't know where to look.

On top of all of that, I fully understand that the authors of Tor would have no desire to add such a feature. Their users are the end users, and placating pissy network operators gives them no benefit. All I can say is that if we had a better way of detecting Tor nodes automatically, and making policy decisions based around that fact, we'd be less likely to flat out ban them all.

> I'm also left wondering something else, based on the "Legalities"
> Tor page. The justification seems to be that because no one's ever
> been sued for using Tor to, say, perform illegitimate transactions
> (Kevin's examples) or hack a server somewhere (via SSH or some other
> open service), that somehow "that speaks for itself".

> I don't know about the rest of the folks on NANOG, but telling a
> court "I run the Tor service by choice, but the packets that come
> out of my box aren't my responsibility", paraphrased, isn't going
> to save you from prison time (at least here in the US). Your box,
> your network port, your responsibility: period.

We had a sheriff in a small town in Alabama quite ready to test that theory at one point. A Tor exit node was used to purchase several hundred dollars of services on the credit card of a 75-year-old woman who had never used a computer in her life. It took a LOT of explaining, but after he and the county DA understood what Tor was about, they were completely willing to bring charges against the owner of the exit node's IP. The credit card holder, however, asked that they drop the matter, so it never went anywhere. I would have been very curious to see how it turned out, though.

> Why bother?

> If the traffic is abusive, why do you care it comes from Tor? If there's
> a pattern of abusive traffic from a few hundred IP addresses, block
> those addresses. If you're particularly prone to idiots from Tor (IRC,
> say) then preemptively blocking them might be nice, but I doubt the
> number of new Tor nodes increases at a fast enough rate for it to be
> terribly interesting.

Normally if we get a lot of fraud from one user, we force all transactions inside that /24 (or whatever the bgp announcement size is) to be manually approved.

This is different because one cranky/pissed off/thieving user has control of hundreds of IPs scattered across the world. You can play whack-a-mole with them for hours, and they can keep coming back on a new IP. Each one can be a fraudulent credit card order, costing us hundreds of dollars each.

We have preemptively blocked all the Tor exit nodes we can find, but they do change at a rate fast enough that a static list isn't sufficient. Many run off cable modems out of a DHCP pool that get a new address periodically.