In which applications is a CAPTCHA applicable?

How can you ensure human interaction with your application besides enforcing a CAPTCHA system?

  • Answer:

    (Sorry for the long response; this is generally a non-trivial problem that deserves some thought.) I prefer to think of CAPTCHAs as a fallback mechanism for when a system thinks a user might be doing something robot-like. This is contrary to how CAPTCHAs are usually implemented (i.e., assuming all users are robots who must prove they're human). It seems silly to treat users as robots, since most robots have wildly different intents, and for a robot to match our human users would take quite a bit of work (that, frankly, most spammers won't spend much time on). That said, let's look at what our human users will typically do in the process of signing up for a website:

    • They will create a single account, and then log in (or verify, etc.).
    • They are usually not visiting from a datacenter IP, although there are some situations where proxies _are_ legitimate.
    • They'll probably have arrived from either a search engine or another page on our website (maybe the homepage, or a content page).
    • Once they start filling out our form, we may do things like ping the server to see whether the username they want is taken, or check whether the email address already exists in our system.
    • They will likely hit "enter" or physically click a submit button when they've finished.
    • They will be using a real browser, which has its own set of quirks.

    Let's contrast this with a robot:

    • It will probably try to create two or three different accounts while visiting from a single IP.
    • It may have no desire to look around our website, so it'll probably hit the page with the registration form directly (to get the CSRF token), and then POST to our registration endpoint.
    • It probably didn't bother to execute any of our JS, render our images, or fetch our CSS files, and therefore never asked whether a username was taken before submitting the form.
    • Even if it does load our images and other media, I'll bet it doesn't handle ETags or other caching headers properly.

    Given that the profiles have some distinct features, we can score the probability that a user is a robot and, if we think they are, prompt them with a CAPTCHA. Ultimately, we should be showing the CAPTCHA to no humans and only to robots.

    One factor to consider when adjusting the aggressiveness of our profiles: not every form and action has the same value. The process of signing up itself may not provide much value to the spammer. If our spammer needs an account to post a comment, then the comment is the goal, not the signup. Since most people who sign up will engage in various ways before posting a comment, we can use even more aggressive profiles during the comment process, ones that robots will almost never replicate but real users will almost always pass. (Seriously, how often do you visit the exact URL of the register page, then type in the URL of the send-message page and compose a message complete with links and images, all within 35 seconds?)
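The scoring idea above can be sketched roughly as follows. This is a minimal illustration, not the answerer's actual system: the signal names, weights, and threshold are all assumptions chosen for the example.

```python
# Hypothetical bot-likelihood scorer: each triggered signal adds weight
# toward "robot". Names, weights, and the threshold are illustrative only.

SIGNAL_WEIGHTS = {
    "datacenter_ip": 0.35,         # request came from a known datacenter range
    "no_referrer": 0.15,           # landed directly on the registration form
    "no_js_executed": 0.25,        # never called our JS-driven username check
    "no_assets_fetched": 0.15,     # skipped CSS/images entirely
    "bad_cache_headers": 0.10,     # mishandled ETags / caching headers
    "multiple_accounts_ip": 0.40,  # several signups from the same IP
}

CAPTCHA_THRESHOLD = 0.5  # above this score, fall back to a CAPTCHA


def robot_score(signals):
    """Sum the weights of every triggered signal, capped at 1.0."""
    score = sum(SIGNAL_WEIGHTS[s] for s in signals if s in SIGNAL_WEIGHTS)
    return min(score, 1.0)


def should_show_captcha(signals):
    """Only users whose behavior crosses the threshold ever see a CAPTCHA."""
    return robot_score(signals) >= CAPTCHA_THRESHOLD
```

A user with no suspicious signals scores 0.0 and never sees a CAPTCHA; one arriving from a datacenter IP without ever executing our JS crosses the threshold. Per-form aggressiveness, as discussed below, would amount to using a lower threshold (or larger weights) on high-value actions like posting a comment.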
    If we let sophisticated robots interact with our service and attempt spam, we can retroactively identify patterns that only robots perform, then prospectively look for additional actions that validate our filter, and prompt for a CAPTCHA when they attempt an action that would result in spam. Alternatively, it might be valuable not to immediately ban these robots or give them any indication of suspicion, but instead to "ghost" their content in the hope that they'll continue to perform spammy actions while our system collects and identifies more patterns. (Ultimately, very few robots will get to this point, so some human intervention is OK.)

    Most websites assume all registered users are real humans, so they place a CAPTCHA at the account-creation phase to verify this. But in reality, most spammers only need registration in order to perform other spammy actions. Instead of punishing users who engage naturally with our service, we should try to identify patterns where users are not engaging the way other human users do, and ask for validation at that point.
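The "ghosting" tactic can be illustrated with a tiny sketch: flagged content is stored and shown back to its author, but hidden from everyone else. The data model and field names here are assumptions made up for the example.

```python
# Minimal sketch of "ghosting" suspected spam: the comment is saved and
# remains visible to its author, so the spammer gets no hint of suspicion,
# while everyone else never sees it. Model/field names are illustrative.

from dataclasses import dataclass, field


@dataclass
class Comment:
    author: str
    body: str
    ghosted: bool = False  # set when our robot score flagged the author


@dataclass
class Thread:
    comments: list = field(default_factory=list)

    def post(self, author, body, suspected_robot=False):
        self.comments.append(Comment(author, body, ghosted=suspected_robot))

    def visible_to(self, viewer):
        # Authors always see their own comments, so a ghosted spammer keeps
        # posting (and revealing patterns) while nobody else is affected.
        return [c for c in self.comments if not c.ghosted or c.author == viewer]
```

To the flagged account the thread looks completely normal, which is exactly what keeps it producing the pattern data the answer describes.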

Joe Tyson at Quora

Other answers

You can randomize your form field names. A bot won't adjust for the changes and, consequently, will fail validation. Some frameworks, such as Seaside, do this by default. There is still a small cost to the user, in that browser "features" like autofill are negated by the randomization.
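A minimal server-side sketch of this technique, assuming a dict-like session store (the helper names are made up for illustration): the server generates fresh field names per session, renders them into the form, and translates them back on submit. A bot replaying hard-coded field names fails the translation step.

```python
# Per-session randomized form field names. The session is assumed to be a
# dict-like store; REAL_FIELDS and the helper names are illustrative.

import secrets

REAL_FIELDS = ["username", "email", "password"]


def randomize_fields(session):
    """Generate fresh random field names for this session's form render."""
    session["field_map"] = {f: "f_" + secrets.token_hex(8) for f in REAL_FIELDS}
    return session["field_map"]


def decode_submission(session, posted):
    """Map the random names back to real ones; stale names fail validation."""
    mapping = session.get("field_map", {})
    decoded = {}
    for real, rand in mapping.items():
        if rand not in posted:
            raise ValueError(f"missing field {real!r}: likely a replayed form")
        decoded[real] = posted[rand]
    return decoded
```

A real browser submits whatever names were rendered, so legitimate users are unaffected (apart from the autofill cost noted above), while a bot POSTing `username=...&email=...` directly is rejected.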

Bradford Mar

PeopleSign is a quick, two-mouse-click alternative that is effective: http://www.peoplesign.com/main/officialDemo.html I am not sure whether it tracks automated mouse input and disables entry automatically, though.

Pandurang Nayak

I have used Flash as the GUI and Flash Remoting in PHP to handle the database transaction. This makes it difficult for a bot to hook into the interface, but not impossible for a determined attacker to connect to the PHP code directly.

Drew Coalson
