(Completely Automated Public Turing test to tell Computers and Humans Apart) A category of technologies used to ensure that a human, rather than a computer, is making an online transaction. Developed at Carnegie Mellon University, the technique displays random words or letters in a camouflaged and distorted fashion so that they can be deciphered by people, but not by software. Users are asked to type in the text they see to verify they are human.
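The round trip can be sketched in a few lines. This is a minimal illustration, not a production implementation: the helper names are hypothetical, and the image-rendering step (where the distortion and camouflage happen) is omitted entirely.

```python
import random
import string

def make_challenge(length=6):
    """Generate a random challenge string to be rendered as a distorted image.
    (The actual rendering/distortion step is omitted in this sketch.)"""
    alphabet = string.ascii_uppercase + string.digits
    return "".join(random.choices(alphabet, k=length))

def verify(challenge, user_input):
    """The user passes if the typed text matches the challenge (case-insensitive)."""
    return user_input.strip().upper() == challenge.upper()

challenge = make_challenge()
print(verify(challenge, challenge.lower()))  # correct answer passes
print(verify(challenge, "wrong!"))           # wrong answer fails
```

The security of the scheme rests entirely on the rendering step: the challenge string itself is trivial to compare, but only a human should be able to read it back from the distorted image.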
CAPTCHAs were created in response to bots (software agents) that automatically fill in Web forms as if they were individual users. Bots are used to overload opinion polls, steal passwords (see dictionary attack) and, most commonly, to register thousands of free e-mail accounts to be used for sending spam. CAPTCHAs were designed to prevent non-humans from performing such transactions.
The Battle of the Bots and CAPTCHAs
After CAPTCHAs were deployed in 2001, the felonious bots were updated to analyze the distorted text and enter the correct response, rendering many CAPTCHA styles ineffective. In an ongoing battle between the bots and the CAPTCHAs, the CAPTCHA text has become increasingly distorted and camouflaged, often making it difficult even for humans to decode.
Other approaches have been incorporated to validate humanness; for example, displaying several images and asking what object, such as a tree or dog, is common among them. Or, a phrase might be displayed and the user asked to re-type one of its words; for example, "Enter the second word in the phrase." See reCAPTCHA, dictionary attack and Turing test.
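The phrase-based variant above can be sketched as follows. The function names and prompt wording are illustrative assumptions, not any particular vendor's API.

```python
import random

def phrase_challenge(phrase):
    """Pick a random word position in the phrase and build a prompt asking for it."""
    words = phrase.split()
    n = random.randrange(len(words))  # zero-based position of the requested word
    ordinal = ["first", "second", "third", "fourth", "fifth"][n]
    prompt = f'Enter the {ordinal} word in the phrase: "{phrase}"'
    return prompt, words[n]

def verify(expected, user_input):
    """Case-insensitive check of the user's answer against the expected word."""
    return user_input.strip().lower() == expected.lower()

prompt, answer = phrase_challenge("the quick brown fox")
print(prompt)
print(verify(answer, answer))  # the correct word passes
```

Unlike distorted-text CAPTCHAs, this variant relies on reading comprehension (mapping "second" to a position) rather than on image recognition.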
In this early CAPTCHA example from Carnegie Mellon, a random word is camouflaged, and users are asked to type what they see. (Image courtesy of Carnegie Mellon School of Computer Science, www.captcha.net)
CAPTCHAs have become increasingly distorted in order to fool the bots, and real words have given way to random letters and digits. However, just as virus writers learn to code their programs more effectively, so do the bot writers... a fun-loving, creative bunch.