5

For a while I've been pondering a user authentication protocol that aims to ensure that a client does some computational work before the server will attempt to authenticate the password.

The reason for this is to remove some of the asymmetry involved when a bad actor hits a login endpoint. The server has to perform an expensive password hash operation, while the client just sends a request. This means that the server will be using a lot more resources than the clients.

To dissuade bad actors, I had the following idea, inspired by bitcoin, for a login protocol:

  1. Client sends request to server, stating that they would like to do a login attempt.
  2. Server responds that the client may proceed. It also includes a random string and asks the client to produce a string such that the random string, with the client-generated string appended, has X zeros at the end of its SHA-1 hash.
  3. Client generates random strings and SHA-hashes them until a string matching the condition is found, then sends that string with the login request, along with the user's credentials.
  4. Server validates that the string does in fact produce such a hash when appended to the initially returned random string, then performs a (much more expensive than SHA-1) hash check on the user's password.

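The challenge/response steps above can be sketched in a few lines of Python. SHA-1 from hashlib, hex digests, and a difficulty of 3 trailing zeros are illustrative choices for the sketch, not part of any fixed protocol:

```python
import hashlib
import secrets

def make_challenge():
    # Step 2 (server): a random string the client must extend.
    return secrets.token_hex(16)

def has_trailing_zeros(digest_hex, difficulty):
    return digest_hex.endswith("0" * difficulty)

def solve_challenge(challenge, difficulty):
    # Step 3 (client): brute-force a suffix until the combined
    # SHA-1 digest ends in `difficulty` hex zeros.
    while True:
        suffix = secrets.token_hex(8)
        digest = hashlib.sha1((challenge + suffix).encode()).hexdigest()
        if has_trailing_zeros(digest, difficulty):
            return suffix

def verify(challenge, suffix, difficulty):
    # Step 4 (server): one cheap SHA-1 call confirms the client's work.
    digest = hashlib.sha1((challenge + suffix).encode()).hexdigest()
    return has_trailing_zeros(digest, difficulty)

challenge = make_challenge()
suffix = solve_challenge(challenge, 3)   # ~16^3 = 4096 expected hashes
assert verify(challenge, suffix, 3)
```

Each extra hex zero multiplies the client's expected work by 16, while the server's verification cost stays at a single hash.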
Is there a good reason that such a pattern is not being used today, or would such a process achieve the goal?

Cruncher
  • 153
  • 5
  • Related: https://security.stackexchange.com/questions/222930/how-and-why-is-my-site-being-abused/222938#222938 – mti2935 Apr 20 '23 at 18:14
  • There was some guy at a conference I went to maybe 10y ago that was pitching this idea. He wanted to compete with the Captcha services with algorithmic solutions like this. This was before JS-based cryptocurrency mining, so I'm not sure how well it would be received today. – Adam Katz Apr 21 '23 at 18:30
  • Wouldn't it be much simpler if the party evaluating the login attempt just delayed processing it, for an exponentially increasing amount of time? – ddyer Apr 21 '23 at 19:48
  • @ddyer what exactly do you mean by delayed processing? How are we applying the exponentially increasing amount of time to an attacker that is using a bot net? – Cruncher Apr 25 '23 at 15:15
  • @cruncher I see, you're proposing making all login attempts expensive for the remote party by wasting cpu. I suppose, but you could just make them slow and accomplish the same result, without punishing the legitimate logins. – ddyer Apr 26 '23 at 17:12
  • Are you aware of the tor PoW project? – User65535 Jun 02 '23 at 06:35
  • You might be interested in the delegation feature of the Makwa password hashing function. Makwa was a finalist in the Password Hashing Competition and received special recognition. Delegation allows you to enlist untrusted systems for the bulk of the processing cost. – Andre D Jul 09 '23 at 01:18

3 Answers

2

This sounds like a solution to a non-issue. Nothing forces the server to calculate a password hash whenever a client sends a request. Rate limits can trivially be implemented by making the clients wait for a while. Granted, there's a risk of an attacker locking out legitimate users by constantly hitting the rate limit. An alternative solution which doesn't have this problem is to make the user solve a CAPTCHA when they want to log in. In any case, the server has full control over how much work it spends on password hashing.

The approach you describe also has the problem that the computational power available to a client can vary a lot. A legitimate client may only have an ancient mobile device which struggles to solve even the simplest challenges, while an attacker could use a powerful GPU, a web service like AWS or even specialized hardware. It's going to be very difficult to find a balance which makes the challenge both acceptable for legitimate users and hard enough for attackers.
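To make that imbalance concrete, here is a back-of-the-envelope estimate; the hash rates below are assumed figures for illustration only, not benchmarks:

```python
# Rough expected cost of a trailing-zero SHA-1 challenge for a slow
# legitimate device versus an attacker's GPU. Both rates are assumptions.
PHONE_HASHES_PER_SEC = 1e6    # assumed: an old mobile CPU
GPU_HASHES_PER_SEC = 1e10     # assumed: a single consumer GPU

for difficulty in range(4, 9):
    expected = 16 ** difficulty   # 1-in-16 chance per trailing hex zero
    print(f"{difficulty} zeros: phone ~{expected / PHONE_HASHES_PER_SEC:.1f}s, "
          f"GPU ~{expected / GPU_HASHES_PER_SEC:.4f}s")
```

Under these assumptions, a challenge that takes a phone tens of seconds costs the GPU milliseconds, which is exactly the balancing problem described above.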

Ja1024
  • 5,769
  • 14
  • 21
  • "Rate limits can trivially be implemented by making the clients wait for a while." We have very aggressive rate limits in place, but they do little to nothing against a bot network. They're just too easy to circumvent for very little cost. As for the other part, yes they can use a powerful GPU to crack these as fast as possible, but there is a large cost associated with that. At that point the attackers are better off spending their time against easier targets. – Cruncher Apr 25 '23 at 15:09
  • 1
    If the rate limit can be easily circumvented, this is an implementation issue, not a conceptual problem. The server can enforce a fixed limit on the log-in frequency, either on a per-account basis or globally across all user accounts. As to the GPU part: a consumer GPU like, say, a mid-range GeForce which may already be in the system is sufficient for calculating SHA hashes at an extremely high rate. So for an attacker, solving a reasonable challenge (one which doesn't completely overwhelm the devices of legitimate users) is not expensive at all. – Ja1024 Apr 25 '23 at 15:32
  • "The server can enforce a fixed limit for the log-in frequency, both on a per-account basis or globally across all user accounts." A global rate limit is not even remotely a consideration as it is way too easy to deny service to the system. Per account rate limits do not work for our use case either, as they're using a database on email/password pairs from other breaches so it's not many requests for the same account. We do already have account locking in place for this use case however. This leaves IP rate limiting left which we do very aggressively, but does not protect against a bot net. – Cruncher Apr 25 '23 at 16:12

0

I just implemented something similar on my site (justinthomas.pro) to require proof of work to submit a form request:

https://gitlab.com/justindthomas/justinthomas.pro/-/blob/master/src/routes/components/Compose.svelte

I was seeing a spate of what looked like programmatic garbage submissions, so I'm experimenting (I really dislike CAPTCHAs). I'm using Argon2 in WASM on the client and server to hash the user-agent and a time-limited (10s) nonce generated by the server using a Blake3 hash of the time and a secret value. The Argon2 hash is sent with the POST request and verified by the server before inserting the record.
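A minimal sketch of the stateless, time-limited nonce idea: the stdlib blake2b stands in for Blake3 here, and the secret value is an assumed placeholder:

```python
import hashlib
import hmac

SECRET = b"server-only secret"   # assumed placeholder value
WINDOW_SECONDS = 10

def make_nonce(now):
    # Derive the nonce from the current 10-second window and a secret,
    # so the server can re-check it later without storing any state.
    window = int(now // WINDOW_SECONDS)
    return hashlib.blake2b(SECRET + str(window).encode(),
                           digest_size=16).hexdigest()

def nonce_is_valid(nonce, now):
    # Accept the current and the previous window so requests that
    # straddle a boundary still verify.
    return any(hmac.compare_digest(nonce, make_nonce(now - d))
               for d in (0, WINDOW_SECONDS))
```

In real use the caller would pass time.time() as now; here it is a parameter so the expiry behaviour is easy to check.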

You can see the Rust WASM components in that repository in the master/wasm directory. I'm still experimenting and tweaking (i.e., I haven't ratcheted up the Argon2 parameters yet) but I think it's a workable (if over-engineered) approach.

-1

I would hash the passwords client side using Argon2 and a salt sent by the server from the user table. Use an Argon2 memory parameter as high as your lowest-common-denominator device can handle. If you support common consumer devices, including mobiles, this could be 64 MB. This calculation acts as the proof-of-work.

This method has the added benefit (as opposed to a non-hashing proof-of-work) of increasing the security of your passwords against dictionary and brute-force attacks in case your database is compromised, due to Argon2's memory-intensive algorithm. It will somewhat mitigate DoS attacks, but there is really nothing you can do to stop a client from sending random messages to your authentication server that don't contain any proof-of-work.

Note that the password needs to be hashed server side as well, with SHA-256 for instance, in order to avoid pass-the-hash attacks.
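A sketch of this two-step scheme, with hashlib.scrypt standing in for Argon2 since it is also memory-hard and available in the Python standard library; the parameters are illustrative only:

```python
import hashlib
import os

def client_hash(password, salt):
    # Memory-hard step performed by the client (the proof-of-work).
    # scrypt stands in for Argon2 here; n=2**14, r=8 uses ~16 MB.
    return hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)

def server_store(client_digest):
    # Fast second hash on the server, so a stolen database entry is
    # not itself a usable login token (the pass-the-hash concern).
    return hashlib.sha256(client_digest).hexdigest()

salt = os.urandom(16)
stored = server_store(client_hash("correct horse", salt))

# Login attempt: redo both steps and compare against the stored value.
assert server_store(client_hash("correct horse", salt)) == stored
assert server_store(client_hash("wrong guess", salt)) != stored
```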

Jonathan
  • 107
  • 3
  • i reckon you would need to somehow bind the pwd and client-side pwd-hash together on the server side, otherwise you'll be storing a fast, unsalted hash of the pwd in the data store .. i'd be inclined to reapply argon2 (or some other pwd-kdf) .. it's not clear from your answer if this was your intent – brynk Apr 24 '23 at 01:37
  • @brynk The username is entered and sent server-side then the salt is sent from the server to the client. The password+salt is then hashed client-side with Argon2 and then sent to the server to be hashed again with SHA256. The result will be compared with the existing double hashed password in the db to see if the login was successful or not. The proof-of-work is the client-side Argon2 hashing. Hopefully that's clearer. – Jonathan Apr 24 '23 at 07:13
  • https://security.stackexchange.com/a/263088 RWilliams'22 talks about hash shucking, where a fast hash is exploited to find some input .. as described, this scheme would be vulnerable to this concept: the adversary will only need the preimage for the final sha2 step, supplying this for authorisation instead of the result of argon2 applied client side (of course, they would need to steal the login auth data store, and possibly side-step the laws of thermodynamics) – brynk Apr 24 '23 at 09:55
  • 2
    Yeah, I struggle to see how to make client-side hashing work, as you still have to apply a hash on the server side, or else the hash becomes a plaintext password. If you use a fast hash like sha256 as suggested, now you can brute force the database again because it's a fast hash. Now if we re-argon the argon hash that would potentially be a solution. It means both the client and server have to do approximately equal work – Cruncher Apr 25 '23 at 15:11
  • "It will somewhat prevent DOS attacks but really there is nothing you can do to stop your client sending random messages to your authentication server that don’t contain any proof-of-work." Well, with the protocol outlined in my question, the server wouldn't do the hash if the PoW was not completed by the requestor. – Cruncher Apr 25 '23 at 15:17
  • @Cruncher Good point! My suggestion fails in that regard: it always has to hash. I'm going to revisit it and see if I can combine the two: proof-of-work more like your suggestion & slow hashing. Doing only proof-of-work seems a waste of energy to me because it's quicker and easier to implement rate limiting. I feel it's much better if the work does something useful. – Jonathan Apr 26 '23 at 06:29
  • @brynk Thanks for the link to hash shucking. The example he gives is bcrypt(MD5(password)) (no salts). A rainbow table could certainly be created to crack the weak passwords, as bcrypt can be done very cheaply these days. In order to create the rainbow table you need to feed it the weak passwords and bcrypt those. If salts had happened to be applied to the passwords originally, bcrypt(MD5(password+salt)), you would have to brute force each password hash, i.e. rainbow tables would be ineffective. This would be quite expensive with bcrypt and much more so with a well-configured Argon2. – Jonathan Apr 27 '23 at 05:47
  • @Cruncher With regards to client-side hashing with a slow, salted hash, the result of which is then fast-hashed server side: you suggest the fast hash can be brute-forced. The thing is that SHA-256 cannot be brute-forced for long random inputs; it takes too long. It can only be brute-forced by trying short, random, or weak passwords. In order to mount the attack, the attacker would have to feed the SHA-256 with computed Argon2(password+salt) variations. This would be prohibitively expensive with a properly configured Argon2 hash. – Jonathan Apr 27 '23 at 06:21