
In computer science, the ELIZA effect is the tendency to project human traits, such as experience, semantic comprehension, or empathy, onto rudimentary computer programs with a textual interface. ELIZA was a symbolic AI chatbot developed in 1966 by Joseph Weizenbaum that imitated a psychotherapist. Many early users were convinced of ELIZA's intelligence and understanding, despite its basic text-processing approach and explanations of its limitations.
The effect is named for ELIZA, the 1966 chatbot developed by MIT computer scientist Joseph Weizenbaum.[1] When executing Weizenbaum's DOCTOR script, ELIZA simulated a Rogerian psychotherapist, largely by rephrasing the "patient's" replies as questions:[2]
Human: Well, my boyfriend made me come here.
ELIZA: Your boyfriend made you come here?
Human: He says I'm depressed much of the time.
ELIZA: I am sorry to hear you are depressed.
Human: It's true. I'm unhappy.
ELIZA: Do you think coming here will help you not to be unhappy?
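Exchanges like the one above require no understanding of the conversation: they can be generated by simple keyword matching and substitution. The following Python fragment is a minimal sketch of that strategy, not Weizenbaum's implementation (the original was written in MAD-SLIP on an IBM 7094); its rules and pronoun table are illustrative inventions that reduce DOCTOR to two ideas, keyword-triggered response templates and reflection of the user's own words back as a question.

    import re

    # Pronoun swaps applied to the echoed fragment ("my" -> "your", and so on).
    REFLECTIONS = {"my": "your", "your": "my", "i": "you", "me": "you", "am": "are"}

    # (pattern, response template) pairs, tried in order: a toy stand-in for
    # DOCTOR's ranked keyword-decomposition rules.
    RULES = [
        (re.compile(r"\bmy (.+)", re.IGNORECASE), "Your {0}?"),
        (re.compile(r"\bi'm (.+)", re.IGNORECASE), "Do you believe you are {0}?"),
        (re.compile(r"\bi am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    ]

    def reflect(fragment):
        # Swap first- and second-person words so the echo addresses the user.
        return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

    def respond(utterance):
        for pattern, template in RULES:
            match = pattern.search(utterance)
            if match:
                return template.format(reflect(match.group(1).rstrip(".!?")))
        return "Please go on."  # content-free default when nothing matches

    print(respond("Well, my boyfriend made me come here."))
    # prints: Your boyfriend made you come here?

The reply carries no model of the speaker's situation; whatever interest or empathy it seems to express is supplied entirely by the reader.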
Though designed strictly as a mechanism to support "natural language conversation" with a computer,[3] ELIZA's DOCTOR script was found to be surprisingly successful in eliciting emotional responses from users who, in the course of interacting with the program, began to ascribe understanding and motivation to its output.[4] As Weizenbaum later wrote, "I had not realized ... that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people."[5] ELIZA's code had not been designed to evoke this reaction. Upon observation, researchers found that users unconsciously assumed ELIZA's questions implied interest and emotional involvement in the topics discussed, even when they consciously knew that ELIZA did not simulate emotion.[6]
In the 19th century, the tendency to understand mechanical operations in psychological terms was already noted by Charles Babbage. In proposing what would later be called a carry-lookahead adder, Babbage remarked that he found such terms convenient for descriptive purposes, even though nothing more than mechanical action was meant.[7]
In its specific form, the ELIZA effect refers only to "the susceptibility of people to read far more understanding than is warranted into strings of symbols—especially words—strung together by computers".[8] A trivial example, given by Douglas Hofstadter, involves an automated teller machine which displays the words "THANK YOU" at the end of a transaction. A naive observer might think that the machine is actually expressing gratitude; however, it is only printing a preprogrammed string of symbols.[8]
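In code terms, such a display amounts to nothing more than an unconditional output statement. The sketch below is hypothetical (it quotes no real terminal's software) but makes the point concrete:

    def end_transaction():
        # A fixed farewell string; nothing here models the customer,
        # so there is no gratitude for the machine to express.
        print("THANK YOU")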
More generally, the ELIZA effect describes any situation[9][10] where, based solely on a system's output, users perceive computer systems as having "intrinsic qualities and abilities which the software controlling the (output) cannot possibly achieve"[11] or "assume that [outputs] reflect a greater causality than they actually do".[12] In both its specific and general forms, the ELIZA effect is notable for occurring even when users of the system are aware of the determinate nature of the output it produces.
From a psychological standpoint, the ELIZA effect results from a subtle cognitive dissonance between the user's awareness of the program's limitations and their behavior towards its output.[13]
The discovery of the ELIZA effect was an important development in artificial intelligence, demonstrating the principle of using social engineering rather than explicit programming to pass a Turing test.[14]
ELIZA convinced some users that they were conversing with a human. This shift in human-machine interaction marked progress in technologies emulating human behavior. William Meisel distinguishes two groups of chatbots: "general personal assistants" and "specialized digital assistants".[15] General digital assistants have been integrated into personal devices, with skills like sending messages, taking notes, checking calendars, and setting appointments. Specialized digital assistants "operate in very specific domains or help with very specific tasks".[15] Weizenbaum considered that not every aspect of human thought could be reduced to logical formalisms and that "there are some acts of thought that ought to be attempted only by humans".[16]
(Joseph) Weizenbaum had unexpectedly discovered that, even if fully aware that they are talking to a simple computer program, people will nonetheless treat it as if it were a real, thinking being that cared about their problems – a phenomenon now known as the 'Eliza Effect'.
Although Hofstadter is emphasizing the text mode here, the "Eliza effect" can be seen in almost all modes of human/computer interaction.
This is a particular problem in digital environments where the "Eliza effect", as it is sometimes called, causes interactors to assume that the system is more intelligent than it is, to assume that events reflect a greater causality than they actually do.
But people want to believe that the program is "seeing" a football game at some plausible level of abstraction. The words that (the program) manipulates are so full of associations for readers that they CANNOT be stripped of all their imagery. Collins of course knew that his program didn't deal with anything resembling a two-dimensional world of smoothly moving dots (let alone simplified human bodies), and presumably he thought that his readers, too, would realize this. He couldn't have suspected, however, how powerful the Eliza effect is.
The "Eliza effect" — the tendency for people to treat programs that respond to them as if they had more intelligence than they really do (Weizenbaum 1966) is one of the most powerful tools available to the creators of virtual characters.