Chatterbot
From Wikipedia, the free encyclopedia
A chatterbot is a computer program designed to simulate an intelligent conversation with one or more human users via auditory or textual methods. Though many appear to interpret the human input intelligently before responding, most chatterbots simply scan the input for keywords and pull from a local database the reply with the most matching keywords or the most similar wording pattern. Chatterbots may also be referred to as talk bots, chat bots, or chatterboxes.
Method of operation
A good understanding of a conversation is required to carry on a meaningful dialogue, but most chatterbots do not attempt this. Instead they "converse" by recognizing cue words or phrases from the human user, which allows them to use pre-prepared or pre-calculated responses that can move the conversation on in an apparently meaningful way without requiring them to know what they are talking about.
For example, if a human types, "I am feeling very worried lately," the chatterbot may be programmed to recognize the phrase "I am" and respond by replacing it with "Why are you" plus a question mark at the end, giving the answer, "Why are you feeling very worried lately?" A similar approach using keywords would be for the program to answer any comment including (Name of celebrity) with "I think they're great, don't you?" Humans, especially those unfamiliar with chatterbots, sometimes find the resulting conversations engaging. Critics of chatterbots call this engagement the ELIZA effect.
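The following minimal sketch, written in Python, illustrates this substitution technique. The rules and names are invented for illustration and are not taken from any actual chatterbot; the point is that the reply is produced purely by rearranging the matched text, with no model of what the user means.

```python
import re

# Illustrative ELIZA-style rewrite rules: each pattern pairs a cue phrase with a
# response template, and "\1" copies whatever text followed the cue.
# These rules are invented examples, not taken from any real chatterbot.
RULES = [
    (re.compile(r"\bi am (.*)", re.IGNORECASE), r"Why are you \1?"),
    (re.compile(r"\bi feel (.*)", re.IGNORECASE), r"How long have you felt \1?"),
]

DEFAULT_REPLY = "Please tell me more."


def respond(user_input: str) -> str:
    """Return a canned rearrangement of the input; no understanding is involved."""
    text = user_input.strip().rstrip(".!?")
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return match.expand(template)
    return DEFAULT_REPLY


print(respond("I am feeling very worried lately."))
# -> Why are you feeling very worried lately?
```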
Some programs classified as chatterbots use other principles. One example is Jabberwacky, which attempts to model the way humans learn new facts and language. ELLA attempts to use natural language processing to produce more useful responses to a human's input. Some programs that use natural language conversation, such as SHRDLU, are not generally classified as chatterbots because they link their speech ability to knowledge of a simulated world. This type of link requires a more complex artificial intelligence (e.g., a "vision" system) than standard chatterbots have.
Early chatterbots
The classic early chatterbots are ELIZA and PARRY. More recent programs are Racter, ALI, A.L.I.C.E., and ELLA.
The growth of chatterbots as a research field has created an expansion in their purposes. While ELIZA and PARRY were used exclusively to simulate typed conversation, Racter was used to "write" a story called The Policeman's Beard is Half Constructed. ELLA includes a collection of games and functional features to further extend the potential of chatterbots.
The term "ChatterBot" was coined by Michael Mauldin in 1994 to describe these conversational programs in a conference paper written for the Twelfth National Conference on Artificial Intelligence.
Malicious chatterbots
Malicious chatterbots are frequently used to fill chat rooms with spam and advertising, or to entice people into revealing personal information, such as bank account numbers. They are commonly found on Yahoo! Messenger.
Chatterbots in modern AI
Modern AI research focuses on practical engineering tasks. This is known as weak AI and is distinguished from strong AI, which would have sapience and reasoning abilities.
There are several fields of AI, one of which is natural language. Many weak AI fields have specialised software or programming languages created for them. For example, one of the 'most human' natural language chatterbots, A.L.I.C.E., uses an XML-based markup language, AIML, that is specific to its program and its various clones, named Alicebots. Nevertheless, A.L.I.C.E. is still based on pattern matching without any reasoning, the same technique ELIZA, the first chatterbot, used back in 1966. Jabberwacky is a little closer to strong AI, since it learns how to converse from the ground up based solely on user interactions. Even so, the results still appear fairly poor, and it seems reasonable to say that there is currently no general-purpose conversational artificial intelligence. This has led some software developers to focus more on the practical aspect of chatterbot technology: information retrieval.
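AIML itself is written as XML rather than the Python shown here; the rough sketch below, with invented pattern/template pairs, only illustrates the kind of wildcard pattern matching that AIML-style chatterbots rely on. Real rule sets such as A.L.I.C.E.'s contain tens of thousands of such categories.

```python
# Rough sketch of AIML-style categories, written in Python rather than AIML's
# actual XML syntax. The pattern/template pairs are invented examples.
CATEGORIES = {
    "WHAT IS YOUR NAME": "My name is Chatterbot.",
    "DO YOU LIKE *": "Yes, * is one of my favourite things.",
}


def normalize(text: str) -> str:
    """Upper-case the input and strip punctuation, as AIML matching does."""
    return " ".join(text.upper().replace("?", "").replace(".", "").split())


def reply(user_input: str) -> str:
    words = normalize(user_input)
    for pattern, template in CATEGORIES.items():
        if "*" in pattern:
            prefix = pattern.split("*")[0].strip()
            if words.startswith(prefix):
                captured = words[len(prefix):].strip().lower()
                return template.replace("*", captured)
        elif words == pattern:
            return template
    return "Interesting. Tell me more."  # default when no category matches


print(reply("Do you like music?"))
# -> Yes, music is one of my favourite things.
```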
However, the British scientist, philosopher and futurologist Paul Malish has argued that a fundamental paradigm shift in the present understanding of what 'intelligence' and 'understanding' actually are may be required before any truly objective or dismissive assessment of AI can be carried out, given existing criticisms of the validity of the currently accepted formulations of the Turing test.[1]
Malish is also a proponent of a common rebuttal used within the AI community, which asks, "How do we know humans don't also just follow some cleverly devised rules?" Two famous arguments against the Turing test that this rebuttal addresses are John Searle's Chinese room argument and Ned Block's Blockhead argument.
See also
- Turing test
- Loebner prize
- ChatterBox challenge
- Chatbot competitions
- Albert One
- Natachata
- Markov chain
- Verbot
- V.A.I.C
External links
- Chatterbots at the Open Directory Project
- ChatterBots Paper from AAAI-94
- XTAGON Chat Bot created by Team VIR0S
- Incognita - Artificial Intelligence Conversationalist