Suggestions for the future on your web site: (was cookies, and before that Re: Dreamhost hijacking my prefix...)

------- mpalmer@hezmatt.org wrote: -------
From: Matt Palmer <mpalmer@hezmatt.org>
[Cookies on stat.ripe.net]

> The cookie stays around for a YEAR (if I let it), and has the
> following stuff:

CSRF protection is one of the few valid uses of a cookie.
<snip>
By the way, if anyone *does* know of a good and reliable way to prevent CSRF
without the need for any cookies or persistent server-side session state,
I'd love to know how. Ten minutes with Google hasn't provided any useful
information.
-----------------------------------------

But, if I understand correctly, it's only when you are authenticated that
anything bad can be made to happen:

https://www.owasp.org/index.php/Cross-Site_Request_Forgery_(CSRF)

[...]

So, if someone is just looking around, why is the cookie needed?

Primarily abuse prevention. If I can get a few thousand people to do
something resource-heavy (or otherwise abusive, such as send an e-mail
somewhere) within a short period of time, I can conscript a whole army of
unwitting accomplices into my dastardly plan. It isn't hard to drop exploit
code on a few hundred pre-scouted vulnerable sites for drive-by
conscription.

- Matt

> Primarily abuse prevention. If I can get a few thousand people to do
> something resource-heavy (or otherwise abusive, such as send an e-mail
> somewhere) within a short period of time, I can conscript a whole army of
> unwitting accomplices into my dastardly plan. It isn't hard to drop

You can prevent this without cookies. Include a canary value in the
form; either a nonce stored on the server, or a hash of a secret
key, timestamp, form ID, URL, and the client's IP address.

If the form is submitted without the correct POST value, if their IP
address changed, or after too many seconds since the timestamp,
then redisplay the form to the user, with a request for them to
visually inspect and confirm the submission.
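A minimal Python sketch of the hash variant described above (the secret, names, and use of HMAC-SHA256 are illustrative assumptions; a keyed HMAC stands in for a bare hash, since that is the standard way to make such a value tamper-evident):

```python
import hashlib
import hmac
import time

# Assumption: a server-side secret that never reaches the client.
SECRET_KEY = b"change-me-server-side-secret"

def make_canary(form_id: str, url: str, client_ip: str) -> str:
    """Build a tamper-evident canary binding the form to a timestamp,
    form ID, URL, and client IP address."""
    ts = str(int(time.time()))
    msg = ":".join([ts, form_id, url, client_ip]).encode()
    mac = hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()
    # The timestamp travels in the clear so the server can re-derive the MAC.
    return f"{ts}:{mac}"
```

The server embeds the returned value in a hidden form field and recomputes the MAC on submission; no cookie or server-side state is involved.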

> Primarily abuse prevention. If I can get a few thousand people to do
> something resource-heavy (or otherwise abusive, such as send an e-mail
> somewhere) within a short period of time, I can conscript a whole army of
> unwitting accomplices into my dastardly plan. It isn't hard to drop

> You can prevent this without cookies. Include a canary value in the
> form; either a nonce stored on the server, or a hash of a secret
> key, timestamp, form ID, URL, and the client's IP address.

Nonce on the server is a scalability hazard (as previously discussed). You
can't put a timestamp in a one-way hash, because then you've got to hash all
possible valid timestamps to make sure that the hash the user gave you isn't
one you'll accept.

You *can* put all those details into the form, then generate a HMAC (or
symmetrically encrypt those details) to prevent tampering, but without
server-side storage -- again, scalability hazard -- you can't prevent replay
attacks (for as long as the timestamp is valid).

The problem with this method, though, is that the only thing that stops the
attacker from retrieving the entire chunk of data out of your form and
tricking the client into submitting it is the client IP address. Now,
you've got a decent idea here:

> If the form is submitted without the correct POST value, if their IP
> address changed, or after too many seconds since the timestamp,
> then redisplay the form to the user, with a request for them to
> visually inspect and confirm the submission.

Which is decidedly more user-friendly than most people implement, but
suffers from the problem that some subset of your userbase is going to be
using a connection that doesn't have a stable IP address, and it won't take
too many random "please re-confirm the form submission you made" requests
before the user gives your site the finger and goes to find something better
to do.

I just realised that I may have been insufficiently clear in my original
request. I'm not looking for *any* solution to the CSRF problem that
doesn't involve cookies; I'm after a solution that has a better cost/benefit
than cookies.

Things that require me to worry (more) about scalability are out, as are
things that annoy a larger percentage of my userbase than cookies (at least
with cookies, I can say "you're not accepting cookies, please turn them on",
whereas with randomly resubmitting forms, I can't say "please stop changing
your IP address" because that might not even be the problem).

- Matt

There comes a point when the user confirms the order where you have to
store the data on your server. And store the credit card, send the
transaction, store the authorisation, etc.

If a person needs to log in before proceeding, you will need to store
some info in a server-side database to indicate a valid session and a
timestamp of the last transaction (so you can time out sessions at the
server level).

You need to update this record every time the user does a transaction to
reset the timeout and keep the session alive.

Might as well store cart information in it too as more and more items
are added.

One advantage of server-side storage is that you have a record of users
abandoning their transactions, and can look at possible trends that point
to something that causes customers to go away before completing the
transaction. This would be important information to help improve the
shopping experience.

Either way, you still need to have either a cookie or a hidden form
field for some session ID token. The advantage of a cookie is that you can
switch to a static page (when you display standard shipping information,
for instance, or a help page), and you don't have to convert all those
pages to a form that sends the session ID as a hidden field.
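A rough Python sketch of such a session record (an in-memory dict stands in for the server-side database table; the timeout value and field names are illustrative):

```python
import time
import uuid

SESSION_TIMEOUT = 1800                 # 30 minutes, illustrative
sessions: dict[str, dict] = {}         # stands in for a database table

def new_session() -> str:
    """Create a session record with a timestamp and an empty cart."""
    sid = uuid.uuid4().hex
    sessions[sid] = {"last_seen": time.time(), "cart": []}
    return sid

def touch(sid: str):
    """Look up a session, expiring it if idle too long, and reset the
    timeout so each transaction keeps the session alive."""
    s = sessions.get(sid)
    if s is None or time.time() - s["last_seen"] > SESSION_TIMEOUT:
        sessions.pop(sid, None)        # abandoned: could be logged for trends
        return None
    s["last_seen"] = time.time()
    return s
```

The session ID itself still travels in either a cookie or a hidden form field, exactly as described above.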

> Nonce on the server is a scalability hazard (as previously discussed). [...]

It's not really a scalability hazard, not if its purpose is to
protect a data-driven operation or the sending of an e-mail; in
reality, that sort of abuse will likely need to be protected against
via a captcha challenge as well, which brings scalability hazards of
its own, such as performing image processing operations on the fly...

The logistical challenge with a nonce is ensuring that the server
generates and stores a long enough list of nonces for the request load;
you need to make sure that you never give out the same nonce twice,
that you wipe out old sets of nonces frequently,
and then the only really hard part: when a nonce is used, you persist
the fact that it is no longer valid.

So you come to consider the bottleneck: "persisting the fact that
nonce X was used" versus "sending this e-mail message" or "posting
entries to the database to complete the operation this form is
supposed to do".

"The operation this form is supposed to do" will normally be the
larger scalability hazard, usually involving more complicated
database operations, than some nonce record maintenance.
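That nonce lifecycle (issue once, consume once, sweep the old ones) can be sketched in Python; the in-memory store below is illustrative, and a real deployment would persist it in a shared database or cache:

```python
import secrets
import time

class NonceStore:
    """Single-use nonces with expiry (in-memory sketch)."""

    def __init__(self, ttl: int = 600):
        self.ttl = ttl
        self._issued: dict[str, float] = {}    # nonce -> issue time

    def issue(self) -> str:
        """Hand out a fresh nonce; 128 random bits never repeat in practice."""
        nonce = secrets.token_hex(16)
        self._issued[nonce] = time.time()
        return nonce

    def consume(self, nonce: str) -> bool:
        """Valid only once, and only while fresh; popping the record
        persists the fact that the nonce is no longer valid."""
        issued_at = self._issued.pop(nonce, None)
        if issued_at is None:
            return False
        return time.time() - issued_at <= self.ttl

    def sweep(self) -> None:
        """Wipe out old sets of nonces."""
        cutoff = time.time() - self.ttl
        self._issued = {n: t for n, t in self._issued.items() if t > cutoff}
```

The `consume` call is the bottleneck discussed above: one record update, usually cheaper than whatever the form actually does.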

> You can't put a timestamp in a one-way hash, because then you've got to hash
> all possible valid timestamps to make sure that the hash the user gave you
> isn't one you'll accept.

No, but you can use

codevalue = "<at_timestamp>:SHA1(<secret>:<at_timestamp>:<submission_id>:<formaction>:<client_ip>)"

If current_time - at_timestamp > X :
        require_resubmission
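A Python sketch of the validation side of that scheme (the secret, timeout, and parameter names are assumptions; the SHA1-of-concatenation mirrors the construction above, though a keyed HMAC would be the safer choice in practice):

```python
import hashlib
import hmac
import time

SECRET = "server-side-secret"   # assumption: never sent to the client
MAX_AGE = 600                   # seconds a token stays valid, illustrative

def check_codevalue(codevalue: str, submission_id: str, formaction: str,
                    client_ip: str) -> bool:
    """Validate an '<at_timestamp>:SHA1(...)' token of the form above.
    Because the timestamp travels in the clear, the server only has to
    hash one candidate, not every possible valid timestamp."""
    try:
        ts_str, digest = codevalue.split(":", 1)
        ts = int(ts_str)
    except ValueError:
        return False
    if time.time() - ts > MAX_AGE:
        return False               # too old: require resubmission
    expected = hashlib.sha1(
        f"{SECRET}:{ts_str}:{submission_id}:{formaction}:{client_ip}".encode()
    ).hexdigest()
    return hmac.compare_digest(expected, digest)
```

A failed check would trigger the "redisplay the form and ask the user to confirm" flow described earlier.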

> The problem with this method, though, is that the only thing that stops the
> attacker from retrieving the entire chunk of data out of your form and
> tricking the client into submitting it is the client IP address.

Yeah... about that... if they can do that, they can surely steal a cookie,
which persists beyond the time the form is displayed in a browser.

The adversary may be able to get the actual site to set the cookie in
the unwitting user's browser by using an invisible IFRAME or other
techniques, including ones that set a cookie for a different domain,
circumventing the use of the cookie as an abuse-prevention method.

The cookie is also susceptible to replay attack if something such as
the client IP address is not a factor.

> Which is decidedly more user-friendly than most people implement, but
> suffers from the problem that some subset of your userbase is going to be
> using a connection that doesn't have a stable IP address, and it won't take
> too many random "please re-confirm the form submission you made" requests
> before the user gives your site the finger and goes to find something better
> to do.

That would be quite unusual, and would break many applications for that user...

Although there is nothing mutually exclusive about cookies and other methods.
It is possible to set a cookie to be used as an additional factor, after
detecting that the user's IP address might be unstable.

> I just realised that I may have been insufficiently clear in my original
> request. I'm not looking for *any* solution to the CSRF problem that
> doesn't involve cookies; I'm after a solution that has a better
> cost/benefit than cookies.

How about the issue that cookies don't necessarily address CSRF?
Cookies are OK for storing user preferences, but not for authenticating
that the user actually authorized their browser to make that HTTP
request.

The user may have been browsing the form legitimately.
The user unwittingly opens a malicious web page in another window,
after having accessed the form recently.

The required cookie is already set: the user might even have a logged-in
session, with an authentication cookie set in the browser.

The malicious page can abuse an already-logged-in session by sending a
POST request to it. Or it may have persuaded the user to log in while the
malicious page is still in memory and able to make quiet, discreet POST
requests.

Cross-site POST operations are allowed operations; and the cookie was
already set.

On the other hand... a value in the form presented should be
protected against the malicious site by the same-origin policy.

So perhaps, if you need to use a value in the form anyway, the
cookie is redundant.
...

> > If the form is submitted without the correct POST value, if their IP
> > address changed, or after too many seconds since the timestamp,
> > then redisplay the form to the user, with a request for them to
> > visually inspect and confirm the submission.
>
> Which is decidedly more user-friendly than most people implement, but
> suffers from the problem that some subset of your userbase is going to be
> using a connection that doesn't have a stable IP address, and it won't take
> too many random "please re-confirm the form submission you made" requests
> before the user gives your site the finger and goes to find something better
> to do.

You want to stop the CSRF problem, but you want to support a user
logging in from one IP and submitting a "delete account" button *the
next second* from a different IP. Then you want this solution to be
more cost-effective than cookies.

Maybe ask the user for his password.

<form method="post">
<input type="hidden" name="id_user" value="33">
<input type="hidden" name="action" value="delete_user">
<input type="submit" value="Delete user">
<p>For this action you must provide the password. </p>
<input type="password" name="password" value="">
</form>

Even if this request comes from an IP in China, you can allow it.

So this solution can be read as:
- Do nothing to avoid CSRF.
- Except for destructive actions, where you ask for the password.
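Server-side, that check might look like the following Python sketch (the user table, salt, and plain SHA-256 hashing are illustrative stand-ins; a real system would use a slow password hash such as bcrypt or argon2):

```python
import hashlib
import hmac

# Assumption: an illustrative user table mapping id_user -> salted hash.
_SALT = b"per-user-salt-in-practice"
USERS = {"33": hashlib.sha256(_SALT + b"correct horse").hexdigest()}

def confirm_destructive_action(id_user: str, password: str) -> bool:
    """Honor 'delete_user' only if the submitted password re-authenticates,
    regardless of the request's source IP; a CSRF forger cannot supply it."""
    stored = USERS.get(id_user)
    if stored is None:
        return False
    supplied = hashlib.sha256(_SALT + password.encode()).hexdigest()
    return hmac.compare_digest(stored, supplied)
```

The password field in the form above maps straight to the second argument here.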

Once again: captchas have zero security value. They either defend
(a) resources worth attacking or (b) resources not worth attacking. If it's
(a) then they can and will be defeated as soon as someone chooses to
trouble themselves to do so. If it's (b) then they're not worth the
effort to deploy. See, for example:

  http://www.freedom-to-tinker.com/blog/ed-felten/2008/09/02/cheap-captcha-solving-changes-security-game
  http://www.physorg.com/news/2011-11-stanford-outsmart-captcha-codes.html
  http://arstechnica.com/news.ars/post/20080415-gone-in-60-seconds-spambot-cracks-livehotmail-captcha.html
  http://cintruder.sourceforge.net/
  http://arstechnica.com/security/2012/05/google-recaptcha-brought-to-its-knees/
  http://www.troyhunt.com/2012/01/breaking-captcha-with-automated-humans.html
  http://it.slashdot.org/article.pl?sid=08/10/14/1442213

Now I'll grant that captchas aren't as miserably stupid as constructs
like "user at example dot com" [1] but they really are worthless the
moment they're confronted by even a modestly clueful/resourceful adversary.

---rsk

[1] Such constructs are based on the proposition that spammers capable
of writing and deploying sophisticated malware, operating enormous botnets,
maintaining massive address databases, etc., are somehow mysteriously
incapable of writing

  perl -pe 's/[ ]+dot[ ]+/./g; s/[ ]+at[ ]+/@/g'

and similar trivial bits of deobfuscation code.

CAPTCHAs are a "defense in depth" that reduce the number of spam
incidents to a number manageable by humans.
Not all bot writers have the same quality. A lot of them are crappy.

Because of this, maybe they are worth the effort.

No, they do not. If you had actually bothered to read the links that
I provided, or simply to pay attention over the last several years,
you would know that captchas are not any kind of defense at all.

They're like holding up tissue paper in front of a tank: worthless.

(Yes, yes, I'm well aware that many people will claim that *their* captchas
work. They're wrong, of course: their captchas are just as worthless
as everyone else's. They simply haven't been competently attacked yet.
And relying on either the ineptness or the laziness of attackers is
a very poor security strategy.)

---rsk

This is a fairly common mistake.

Security isn't about prevention, it's about deterrence.

If you have a locked screen door, someone can still trivially break
the screen and unlock it.

If you have a glass door, a brick.

If you have a hollow core wood door, a shoulder.

If you have a solid core wood door, a sledge.

If you have a steel door, a prybar.

If you have a safe-style door, explosives.

If you're Fort Knox, a larger military force. :-)

Basically there is no door that cannot be overcome with sufficient
force; the point of a door is not to absolutely prevent a bad guy
from entering under all circumstances, but rather to persuade the
average attacker to go bother the neighbors instead. You can do
many things to augment your physical security: unpickable locks,
reinforced doors, motion sensor lights, alarm systems, etc., but all
of these are merely enhancers that are designed to make a criminal
look for an easier target. A properly resourced attacker who is
determined to attack a given resource is going to be successful
eventually.

And that's where the so-called argument against CAPTCHAs falls apart.

A CAPTCHA doesn't need to be successful against every possible threat,
it merely needs to be effective against some types of threats. For
example, web pages that protect resources with a CAPTCHA are great at
making it much more difficult for someone with l33t wget skills to
scrape a website.

It isn't a high bar anymore, it isn't a strong defense anymore. All
quite true, so I'll even agree with your inevitable answer that many
websites are using CAPTCHA as protection against attacks that it is
no longer capable of guarding against. Agreed!

However, as part of a "defense in depth" strategy, it can still make
sense. It's much more of a locked screen door at this point, but if
you've got threats that can be easily deterred, then it's still viable.

... JG

Well, yes and no. Lately, AFAICT, most CAPTCHAs have been so
successfully attacked by wgetters that they're quite easy for machines
to break, but difficult for humans to use. For example, I can testify
that I now fail about 25% of the reCAPTCHA challenges I perform,
because the images are so distorted I just can't make them out (it's
much worse on my mobile, given the combination of its small screen and
my middle-aged eyes).

So it's now more like airport security: a big hassle for the
legitimate users but not really much of a barrier for a real
attacker. A poor trade-off.

Best,

A

"A Modest Proposal": Maybe we need to turn it around and fail on successful
recognition of the CAPTCHA, then?

> Well, yes and no. Lately, AFAICT, most CAPTCHAs have been so
> successfully attacked by wgetters that they're quite easy for machines

I wasn't aware that there was now a -breakCAPTCHA flag to wget.

The point I was making is that it's a defense against casual copying
of certain types of protected content and other stupid tricks that
used to go on. Someone who has made a business out of copying web
sites and has arranged to defeat CAPTCHAs is not a casual attacker.

> to break, but difficult for humans to use. For example, I can testify
> that I now fail about 25% of the reCAPTCHA challenges I perform,
> because the images are so distorted I just can't make them out (it's
> much worse on my mobile, given the combination of its small screen and
> my middle-aged eyes).

I agree that this problem has gotten worse; as time goes on, it
seems likely that the computers will be able to read CAPTCHAs
(and then solve the new generation of CAPTCHAs) more easily than
many humans.

> So it's now more like airport security: a big hassle for the
> legitimate users but not really much of a barrier for a real
> attacker. A poor trade-off.

Don't think we're quite there yet. However, it is certainly moving in
that direction.

However, Ace Hardware still sells hook-and-eye latches, and that's
something to think about.

One of the businesses we run here had a "problem"; the website had a
"contact us" page that had been recycled out of some script with
changes to hardcode where mail went, which didn't stop some exploit
script from finding it and then trying to spam through it, which
meant all their spam went to the company contact address. The coder
who maintained the website noted that only a particularly stupid
spammer (or completely automated system of some sort) would try to
exploit a script without bothering to check if the mail was being
delivered to victims, so he figured that the correct fix was to put
a very simple CAPTCHA on it.

I was skeptical, since even five years ago I saw the effectiveness of
CAPTCHAs as being in severe decline, but you know what, he was right.
The CAPTCHA is VERY readable, even has ALT text so you can use it in
your favorite text browser, because the point WASN'T to make it
impossible (or even difficult) to abuse, but rather to address a
particular problem.

It helps to keep your perspective on things.

... JG

+1000

I routinely fail CAPTCHAs, and am certainly less accurate than a decent machine at the OCR required. Those of us whose eyes don't correct to 20/20 would greatly appreciate some other form of "slow down the spammers" than this.

David Barak
Need Geek Rock? Try The Franchise:
http://www.listentothefranchise.com

It's true that relying on the laziness of attackers is statistically
useful, but as soon as one becomes an interesting enough target that
the professionals take aim, then professional-grade tools (which waltz
through captchas more effectively than normal users can, by far) make
them useless.

I disagree that they're entirely ineffective. The famous Wiley
cartoon (found also in the frontispiece of the original Firewalls
book...) "You have to be this tall to storm the castle" does apply.
But knowing the relative height and availability of storm-the-captcha
tools is important. They are out there, pros use them all the time,
they are entirely effective.

This is true. However, if CAPTCHAs stop the bulk of casual hacking
attempts because the simple hacking scripts just flag that site as not
worth the effort and move onto the next, then the site manager has to
deal with far fewer true hacking attempts (those which are determined to
get in or hurt your web site).

It is better to have a tent with holes in the screen door than no screen
door. If the damaged screen door still keeps 90% of the mosquitoes out,
you can chase down and kill the few that do get in.

Just because a security technique is not bullet proof does not mean it
isn't useful.

I get this argument, but it seems to miss the point I was trying to
make earlier. This isn't like a screen door with holes in it, but
more like a screen door with holes in it and a trick hinge that, from
time to time, bounces back and whacks the humans entering right in the
nose.

To resort to plain language instead of overworked metaphor, the
problem with CAPTCHAs is that they're increasingly easier for
computers to solve than they are for humans. This is perverse,
because the whole reason they were introduced was that they were
_hard_ for computers but _easy_ for humans. The latter part was a key
design goal, and we are increasingly ditching it in favour of "just
using a CAPTCHA" because they're what we think works.

(Of course, this is really just a special case of the usual problems in
HCI when security becomes an issue. We have this kind of problem with
passwords too.)

A

> To resort to plain language instead of overworked metaphor, the
> problem with CAPTCHAs is that they're increasingly easier for
> computers to solve than they are for humans. This is perverse,
> because the whole reason they were introduced was that they were
> _hard_ for computers but _easy_ for humans. The latter part was a key
> design goal, and we are increasingly ditching it in favour of "just
> using a CAPTCHA" because they're what we think works.

So the point that seems reasonable to make is that people deploy
CAPTCHA in environments where it is insufficient to the task.

True enough.

At the point where an arms race has developed over such technology,
or other circumvention techniques (such as hiring cheap labor) are
being used, it seems to me that in such an environment, the
technology is fundamentally not suited to the task. It seems fair
to say that CAPTCHA is rapidly evolving to the level of hook-and-eye
latch protection, suitable for rudimentary protection on low-value
assets, keeping the rabbit in its cage, etc.

So, then, "replace it with what, exactly?" What if we all wake up
one morning to find that our computers have gained an IQ of 6000?
Will the computers be making jokes about "as dumb as a human" and
debating ways to identify if they're talking with another computer
or just a human? :-)

... JG