ELIZA effect
The ELIZA effect, in computer science, is the tendency to unconsciously assume computer behaviors are analogous to human behaviors, despite conscious knowledge to the contrary. It is the result of a subtle cognitive dissonance between the user's awareness of programming limitations and their behavior toward the program's output. The discovery of the ELIZA effect was an important development in artificial intelligence, demonstrating the principle of using social engineering rather than explicit programming to pass a Turing test.
Origin
The effect is named for the 1966 chatterbot ELIZA, developed by Joseph Weizenbaum, which simulated a Rogerian psychotherapist. ELIZA was programmed to ask questions about whatever topics the user mentioned:
HUMAN: "I don't have anything against my father. He was okay ..."
ELIZA: "Tell me more about your family."
ELIZA was found to be surprisingly successful in eliciting emotional responses from users, even though none of its code was designed to do so. Upon observation, researchers discovered that users subconsciously assumed ELIZA's questions implied interest in the topics discussed, even when they consciously knew that ELIZA did not simulate emotion.
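The mechanism behind the effect is strikingly shallow. The following is a minimal Python sketch of the kind of keyword-and-template substitution this style of chatterbot relies on; the rules shown are illustrative inventions, not Weizenbaum's actual DOCTOR script:

    import re

    # Illustrative keyword -> response templates. ELIZA's real script was
    # much larger and ranked keywords by priority; this sketch just takes
    # the first matching rule.
    RULES = [
        (re.compile(r"\b(father|mother|sister|brother)\b", re.I),
         "Tell me more about your family."),
        (re.compile(r"\bI (?:feel|am) (.+)", re.I),
         "Why do you say you are {0}?"),
        (re.compile(r"\bI don't (.+)", re.I),
         "Why don't you {0}?"),
    ]
    DEFAULT = "Please go on."

    def respond(utterance: str) -> str:
        # First matching rule wins; captured text is echoed back into the
        # template, which is what creates the illusion of attentiveness.
        for pattern, template in RULES:
            match = pattern.search(utterance)
            if match:
                return template.format(*match.groups())
        return DEFAULT

    print(respond("I don't have anything against my father. He was okay."))
    # -> Tell me more about your family.

No rule here models interest or emotion; the program only reflects fragments of the user's own words back at them.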
Logical fallacy
The ELIZA effect is a special case of the logical fallacy of affirming the consequent:
If something is motivated by X, it behaves in manner Y.
This program behaves in manner Y.
Therefore, this program is motivated by X.
Even if the program is motivated by X, it does not follow that the observed behavior Y resulted from motivation X. Furthermore, it cannot even be demonstrated that the program is ever motivated by X. Indeed, in many cases, motivation by X is impossible (for example, "The program thinks I am attractive").
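In inference-rule form, the fallacy is modus ponens run backwards:

    \[
    \text{modus ponens (valid):}\;\; \frac{X \to Y \qquad X}{Y}
    \qquad\qquad
    \text{affirming the consequent (invalid):}\;\; \frac{X \to Y \qquad Y}{X}
    \]

The conditional licenses the inference from motive to behavior, not from behavior back to motive.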
The ELIZA effect is a lesser logical fallacy than anthropomorphization, as the computer user knows that the computer is not a human or a complete artificial intelligence. The user nonetheless implicitly assumes the behavior has the same causes as the same behavior would have in a human. The assumption is a fallacy because the computer cannot experience human motives. While the programmer may have had the motivations the user assumes, this cannot be deduced solely from the program's behavior: the behavior may be an unintended side effect.
The ELIZA effect ends if the user consciously recognizes that the computer cannot be motivated in the assumed manner.
Positive and negative consequences
AI and human–computer interaction programmers may intentionally use the ELIZA effect as part of computational semiotics or as a strategy to pass the Turing test. While this strategy permits efficient coding (a few lines of code can have a large effect on human perception of the output), it is also a risky proposition. If the user notices that the ELIZA effect has occurred, the rejection of unconscious assumptions often leads to the deduction of the programming method used, which constitutes failure of the Turing test. AI programmers also try to avoid the ELIZA effect in their own testing, as it can blind them to other deficiencies in program output.
The ELIZA effect also appears in the design of programming languages. The symbol +, for example, is assumed by users to mean 'addition' regardless of context, yet it is also commonly used to represent an algorithm for string concatenation. Program authors can use + without knowing that it is an overloaded operator that implements two different algorithms. The ELIZA effect can cause users to assume that the program (or programmer) shares their assumptions about the meaning of +.
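A short Python sketch of this overloading; the Vector class below is a hypothetical illustration, not part of any standard library:

    # The same symbol dispatches to different algorithms by operand type.
    print(2 + 3)          # integer addition     -> 5
    print("2" + "3")      # string concatenation -> "23"

    # User-defined types can overload + as well.
    class Vector:
        def __init__(self, x, y):
            self.x, self.y = x, y
        def __add__(self, other):
            # Componentwise addition, not concatenation.
            return Vector(self.x + other.x, self.y + other.y)

    v = Vector(1, 2) + Vector(3, 4)
    print(v.x, v.y)       # -> 4 6

The reader of "2" + "3" who expects 5 is making exactly the ELIZA-effect assumption described above: that the symbol carries the meaning they would give it.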
The ELIZA effect can also have negative consequences when the user's assumptions do not match program behavior. For instance, it may interfere with debugging by obscuring the actual causes of program behavior. Programming languages are usually designed to prevent unintended ELIZA effects by restricting keywords and carefully avoiding constructs that invite misinterpretation.
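As a small illustration of such a safeguard, Python refuses to guess when the two meanings of + collide, surfacing the mismatch as an error rather than silently choosing one interpretation (as some languages do by coercing the operands):

    # A strict language turns the mismatched assumption into a visible
    # error instead of silently picking one interpretation of +.
    try:
        result = "1" + 2
    except TypeError as exc:
        print(exc)  # e.g. can only concatenate str (not "int") to str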
References
- Hofstadter, Douglas (1995). "Preface 4: The Ineradicable Eliza Effect and Its Dangers". In Fluid Concepts and Creative Analogies: Computer Models of the Fundamental Mechanisms of Thought. New York: Basic Books.
- Turkle, Sherry (1997). Life on the Screen: Identity in the Age of the Internet. London: Phoenix Paperbacks. Defines the Eliza effect as the "tendency to accept computer responses as more intelligent than they really are".
- "ELIZA effect". The Jargon File, version 4.4.7. Accessed 8 October 2006.
This article was originally based on material from the Free On-line Dictionary of Computing, which is licensed under the GFDL.