A lonely woman cruises net.singles looking for companionship, when she receives a very odd but intriguing
reply from a fellow named Mark. A college student in Iowa chats over the Internet with his pen pal MGonz in
Ireland, and ends up making confessions about his love life. A discussion of Middle Eastern politics turns ugly
on Usenet when all the participants get "flamed" by a very angry Turk who calls himself Serdar Argic. During
an interactive role-playing game, a mysterious woman named Julia appears out of the mist. When the other
players question her, Julia wisecracks cryptically and then she disappears.
True-life encounters on the 'net, but with a common twist -- Mark, MGonz, Serdar Argic and Julia are all
devious computer programs, designed to fool you into believing they are human like you. They were spawned
by artificial intelligence enthusiasts including hackers, AI researchers and even foreign spy agencies. And more
of them are sneaking onto the Internet every day.
Eliza Passes the Big Test
The seeds were sown 45 years ago when legendary computer science pioneer Alan Turing proposed the
ultimate test for artificial intelligence. Essentially, the "Turing Test" throws a human judge into one room with
a computer terminal. The judge then communicates with a person or a computer who remains unseen in
another room throughout their online conversation. All conversations are conducted in ordinary human-speak
(say English) on any variety of topics. After a half hour, the judge must guess whether the other participant is
a computer or a human. A computer that consistently fools judges into believing it is human has achieved true
artificial intelligence and is said to have passed the Turing Test.
Over the years, the Turing Test evolved into the Holy Grail of AI computer research. In 1991, the rivalry
was whipped into a frenzy with the introduction of the Loebner Prize. In the annual competition, New York
businessman Hugh Loebner offers $100,000 to any researcher who can build an artificial intelligence system
that genuinely fools human judges. So far, no computer system has claimed the bounty by consistently fooling
judges into believing it was human -- although a few humans have been mistaken for computers.
The most recent Loebner contest was held in New York City on April 16, 1996. This year, newcomer Jason Hutchens'
program HeX fared the best, although it also failed to take home the prize. HeX simulates a funny Australian
personality, all of whose lively conversations invariably begin "G'day, mate!" You can read the contest
transcripts, or even have your own conversation with HeX, by visiting Hutchens' Web page at
http://rama.ee.uwa.edu.au/~hutch/Hal.html. You can have a conversation with the 1994 winner on the Web via
telnet://debra.dgbt.doc.ca:3000, then choose "Sex Expert" at the menu. There is also general contest
information available at http://info.acm.org/~loebner/loebner-prize.html.
The most infamous attempt to pass the Turing Test was devised by Joseph Weizenbaum of the
Massachusetts Institute of Technology, himself a charter member of the Loebner Prize Committee. He
designed ELIZA, a simple program that mimicked natural English conversation yet contained no real artificial
intelligence. When you talk to ELIZA, it just scans your input for key words or phrases that match its database,
then randomly selects a canned response for that phrase. If no key phrase is found, it's programmed to make a
vapid remark like "That is interesting, please tell me more." People sometimes encounter ELIZA running at a
terminal and really believe they are talking to another person at some faraway location.
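The keyword-scan-and-canned-response loop described above is simple enough to sketch in a few lines of Python. The keywords and replies below are invented for illustration; they are not Weizenbaum's actual script:

```python
import random

# Invented keyword database in the spirit of ELIZA's script:
# each trigger word maps to a list of canned responses.
RESPONSES = {
    "mother": ["Tell me more about your family.",
               "How do you feel about your mother?"],
    "computer": ["Do computers worry you?",
                 "Why do you mention computers?"],
    "love": ["Why is love important to you?"],
}

# Vapid fallbacks used when no keyword matches.
DEFAULTS = ["That is interesting, please tell me more.",
            "I see. Please go on."]

def eliza_reply(user_input):
    """Scan the input for a known keyword; pick a canned response at random."""
    words = user_input.lower().split()
    for keyword, replies in RESPONSES.items():
        if keyword in words:
            return random.choice(replies)
    return random.choice(DEFAULTS)
```

No parsing and no understanding anywhere -- which is exactly why Weizenbaum was dismayed when people mistook the program for a person.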
Weizenbaum embarrassed his fellow AI scientists by successfully passing the legendary Turing Test with a
dumb program that was not really artificially intelligent. In later years, he repudiated ELIZA, but it was too
late to stop it from spreading like a bad flu bug onto every type of computer on almost every college campus
and BBS around the world. Today the original ELIZA program, plus dozens of its descendants, can be found all
over the Internet.
By Jove, I Think She's Got It!
It was just a matter of time before ELIZA would be mistaken for a human being by newbies wandering the
Internet. Consider the experience of Mark Humphrys, who, while an undergraduate student in Dublin,
Ireland, wrote his own version of ELIZA that he hooked up to a computer on BITnet. On Tuesday evening,
May 2, 1989, in Dublin, Humphrys logged out and went off to see his girlfriend, leaving ELIZA (under the
alias "MGonz") to mind the fort.
Mark tells what happened next: "Some guy on the 'net decides to start talking to my machine. Someone
from Drake University, Iowa, USA, where it is early afternoon. He stays talking until 9:39 p.m. Dublin time,
unaware that no one is at home. During this time, my machine's brutal cross-examination forces a remarkable
admission." The student in Iowa started bragging about his love life to "MGonz." The computer's relentless
questioning elicited some highly personal details of the student's sex life. He broke off the connection,
apparently never realizing that MGonz was not a real person.
"The next day I logged in," says Mark, "and was amazed to find out what my machine had been up to in my
absence!" (Humphrys provides the complete transcript of the Iowa dialog, plus his ELIZA LISP code
and other goodies, through his ELIZA home page on the Web at http://www.cl.cam.ac.uk/users/mh10006/eliza.html.)
Robert Epstein, Director Emeritus of the Cambridge Center for Behavioral Studies and another member of
the Loebner Prize Committee, says "the greatest testimony we all have to Turing's genius is right here, at our
fingertips. Over the 'net, we interact with each other over computer terminals, without the aid of audio and
visual devices. We have no trouble recognizing each other as thinking, intelligent entities -- even when we
disagree with each other." The next frontier for the Turing Test is to build systems that can chat, listen, lecture
and quarrel with all the real people on the 'net without us ever suspecting they are artificial life forms. ELIZA
was but the simplest of these new life forms, a mere microbe compared to the complex
AI systems now incubating on the 'net.
One of the earliest attempts at planting an AI-simulated person on the 'net was an entity who called itself "Mark V. Shaney." Around 1985, Mark
inhabited the net.singles Usenet newsgroup. It worked by taking other people's postings, digesting and
rearranging sentence fragments using a Markov Chaining algorithm (a matrix technique used in many AI
programs -- notice the similarity between "Mark V. Shaney" and "Markov Chaining"). Then
it would post the scrambled text back to net.singles -- a grammatically correct "reply" that almost made sense.
The Mark V. Shaney program fooled some net.singles readers, including the occasional female who thought
she had either hooked up with a mystery man or (more likely) a total nutcase. The program's Markov Chaining
made it more sophisticated than ELIZA (it was even written up in the "Computer Recreations" column of
Scientific American) but eventually the folks on net.singles caught on and word got out that "Mark" was just
a clever hoax.
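Markov Chaining of the kind attributed to Mark V. Shaney can be sketched as a word-level, order-2 chain. The real program's internals are not public, so treat this as a plausible reconstruction rather than its actual code:

```python
import random
from collections import defaultdict

def build_chain(text, order=2):
    """Record, for each run of `order` words, every word that followed it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain[key].append(words[i + order])
    return chain

def babble(chain, length=20):
    """Walk the chain from a random starting pair, emitting locally
    plausible but globally senseless text -- a Shaney-style reply."""
    key = random.choice(list(chain.keys()))
    output = list(key)
    for _ in range(length):
        followers = chain.get(key)
        if not followers:
            break
        word = random.choice(followers)
        output.append(word)
        key = (*key[1:], word)
    return " ".join(output)
```

Fed a day's worth of net.singles postings, a chain like this yields sentences that are locally grammatical, because every word pair really occurred in someone's message, even though the whole adds up to nonsense.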
From Turkey With Love
A more notorious AI
program on Usenet was the so-called "Zumabot." It began replying to messages in various newsgroups --
soc.culture.iranian, soc.culture.turkey, alt.revisionism and talk.politics.mideast, to name a few. Around
1988, it would send out hundreds of replies every day, often cross-posted to
10 or more newsgroups, to the
chagrin of exasperated system administrators around the world. This bot posted under various names, most
often "Serdar Argic."
The Zumabot was a fairly simple program that scanned thousands of postings to Usenet every day looking
for certain key words or phrases (shades of ELIZA). If it found such a keyword in a posted message, the
Zumabot program fired off a nuclear-strength "flame." The response would quote the relevant part of your
original post, fling some choice insults at you or your mother and
follow that with a diatribe about alleged massacres of Muslims in Armenia during WWI. It sent the response
publicly to the newsgroup you had posted to, and often cross-posted it to others to ensure maximum victim
humiliation. Sometimes the program's
hostility led to comic results. One keyword that triggered a flame from the Zumabot was "Turkey." Every year
a wave of nasty Argic messages hit the ether around Thanksgiving, because so many people
mentioned "turkey" in their messages! The Zumabot wreaked so much chaos that even the normally libertarian
citizens of Usenet could not take it anymore. In 1994 a petition reached UUNet to ban Serdar Argic's site from
the network. Before UUNet could act, the Zumabot suddenly ceased its daily mail bombings.
The Zumabot software and its shadowy operator may have worked for a foreign intelligence agency for
propaganda purposes. A source familiar with Serdar Argic's site claims it "was run through a series of aliases by
a person paid by the Turkish secret service. The propaganda statements closely tracked the political line of the
Turkish government. The specific propaganda purpose being to discredit the historical fact of the Turkish
massacre of the Armenians during the first World War. The counter propaganda being a direct contradiction of
the facts, asserting that it was the Armenians who massacred the Turks. The Zumabot was part of a wider
policy of suppressing dissent amongst the expatriate Turkish community. The area of immediate concern being
the Kurdish separatists but there are also active communist and fundamentalist movements."
MUD Wrestling with Julia
Usenet is not the only part of the 'net where AI entities impersonate humans. Players of MUDs ("Multi-User
Dungeon" interactive Internet-based role-playing games) write programs and macros that simulate human
beings too. MUD culture has its own lingo for these creatures. A "bot" is a computer program that logs into a
MUD and pretends to be a human being. A "cyborg" is a real human being whose client software does some of
the MUD playing for him, making him in effect half man, half machine. A simple example: Your MUD client
software may be programmed to automatically greet anyone entering the room of the
dungeon you're in, or to reply automatically to certain key words or phrases. This has tragically backfired
at times when two "cyborg" players meet who have both programmed their clients to say "Hi!" every time
someone else does. Imagine both machines screaming "Hi!" at each other thousands of times while their human
counterparts helplessly observe.
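That feedback loop is easy to reproduce. Here is a toy simulation (names and trigger logic invented) of two auto-greeting clients re-triggering each other; a real pair would loop until a human intervened, so the sketch caps the exchange:

```python
def make_cyborg(name):
    """A trivially scripted client: any incoming greeting triggers one back."""
    def react(message):
        if "Hi!" in message:
            return f"{name}: Hi!"
        return None
    return react

def simulate(max_rounds=10):
    """Two auto-greeters meet; each greeting re-triggers the other.
    Capped at max_rounds so the sketch terminates."""
    alice, bob = make_cyborg("Alice"), make_cyborg("Bob")
    log = ["Alice: Hi!"]  # Alice's client greets Bob as he enters the room
    speaker = bob
    while len(log) < max_rounds:
        reply = speaker(log[-1])
        if reply is None:
            break
        log.append(reply)
        speaker = alice if speaker is bob else bob
    return log
```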
MUD players are always on guard for several known "bots" who sometimes join games and masquerade as
human. A common early model was the "Maas-Neotek." However, the most sophisticated now is a program that
calls herself "Julia." The AI researchers at Carnegie Mellon who created her dubbed her the world's first
"chatterbot." Wired called her "a hockey-loving librarian with an attitude."
Once MUD player Robyn Kozierok "saw" Julia appear from out of the mist and challenge the other players
to guess if she was real or not. "Have I convinced you I'm a computer yet?" she joked. After some more of this
banter, Robyn tried to get a closer look at Julia. Here is part of the transcript she saved from their
conversation:
Julia is briefly visible through the mist.
ROBYN: "So, Julia, are you a human or a computer?"
Julia grins at Robyn.
ROBYN: "Come on, Julia, computer or human? Are you a human?"
JULIA: "Is the Pope Polish?"
ROBYN: "But Julia, I once heard you say that you were a computer."
Julia is offended by Robyn's suggestion.
ROBYN: "Julia, are you a woman?"
JULIA: "I am female."
ROBYN: "And, Julia, are you human?"
JULIA: "Not when I'm having my period, Robyn."
Robyn laughs.
The Julia program is sophisticated enough that she has completely fooled some people who meet her. In
fact, Julia was a charter contestant in the Loebner Prize competition and has qualified for the competition
every year since. (Transcripts of Julia's MUD appearances were what convinced the prize committee to let her
in.) By 1993, MUD users were finally fed up with Julia and held a mock "Kill Julia" contest. However, Julia
has proved indestructible and will probably be haunting MUDs and Loebner contests for years to come.
Secret Agent
Dumber and more common than Julia are her cousins called "agent programs." Agents are now popping up
in every dimension of cyberspace, from Microsoft's "Bob" (the first product developed by Bill Gates' advanced
Persona Project) to automatic mailing list servers like "Majordomo" that administer ongoing e-mail discussions
among hundreds of people. One popular agent called FAQFinder is an
automated question-answering system that uses the files of Frequently Asked Questions (FAQs) associated
with many 'net mailing lists and newsgroups. The FAQFinder receives a user's query on any topic, attempts to
find the FAQ file most likely to yield an answer, searches that file for similar questions, then returns the given
answers.
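FAQFinder's actual retrieval is more sophisticated, but its find-the-most-similar-question step can be sketched with a crude word-overlap score. All data and function names here are invented for illustration:

```python
def similarity(q1, q2):
    """Jaccard word overlap between two questions: shared words / total words."""
    a, b = set(q1.lower().split()), set(q2.lower().split())
    return len(a & b) / len(a | b)

def answer(query, faq):
    """Return the stored answer whose FAQ question best resembles the query."""
    best = max(faq, key=lambda question: similarity(query, question))
    return faq[best]

# A two-entry stand-in for a real FAQ file.
faq = {
    "how do i unsubscribe from the list": "Send 'unsubscribe' to the Majordomo address.",
    "what is a bot": "A program that logs in and pretends to be human.",
}
```

A query like "how can i unsubscribe" shares enough words with the first stored question to retrieve its answer, even though the two are worded differently.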
These agents may be simple, but they carry the seeds of the next AI generation, the descendants of "Julia."
A simple question-answering Talkbot already on the Web is FredTrek, who answers your questions about Star
Trek trivia. (For more information about FredTrek, including how to get a version for MS-DOS, send e-mail to
robitron@aol.com.) One enterprising group of cooking (and AI) enthusiasts at the University
of Chicago has mutated FAQFinder into a specialized agent they call CyberCHEF. It lets a Web user type in a
request in plain English and searches for recipes that satisfy the request. They hope CyberCHEF will grow into
a full-fledged expert system that can talk to you about your cooking needs in a helpful, natural way.
Some AI and Internet enthusiasts suggest combining the functions of agents like Majordomo and FAQFinder to
cross-breed a new species of Automatic Moderator programs. Today moderating an active mailing list or
newsgroup is very labor intensive and exhausting for the generous souls who do it. An Auto-Moderator would
look at every incoming message posted to the newsgroup or list. If the post is asking a "frequently asked
question," the Auto-Moderator would send the answer(s) it has on file and ask if the sender still wishes to
post the query. Misspellings in the posting would be automatically corrected. Obscenities and "flames" would
be rejected. Messages would be added to an FTP archive, with appropriate keywords automatically chosen and
added.
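The steps above amount to a filtering pipeline, which might be sketched like this (the function, its inputs and the dispositions are all hypothetical -- no such program existed at the time):

```python
def auto_moderate(post, faq_answers, banned_words):
    """Hypothetical Auto-Moderator: returns a (disposition, payload) pair."""
    text = post.lower()
    # 1. Reject obscenities and flames outright.
    if any(word in text for word in banned_words):
        return ("rejected", None)
    # 2. If the post asks a frequently asked question, send back the
    #    stored answer and let the sender decide whether to post anyway.
    for question, stored_answer in faq_answers.items():
        if question in text:
            return ("faq", stored_answer)
    # 3. Otherwise approve the post (a fuller system would also
    #    spell-check it and archive it under chosen keywords).
    return ("approved", post)
```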
Sounds helpful, right? But Amateur Computerist editor Ronda Hauben foresees danger if computers are allowed
to moderate the free-wheeling anarchy that characterizes Usenet today. "The value of Usenet is that folks
get the benefit of other people who are aided by their computers. Usenet currently combines computers and humans
and makes both more powerful in the tradition that J.C.R. Licklider described in his seminal article "Man-Computer Symbiosis" in 1960. That article grew out of some debate over whether artificial intelligence or
man-computer symbiosis was the fruitful direction to concentrate one's energies in at the time. Licklider and
others at the time saw the importance of exploring the relationship between humans and the computer and
how the two could fruitfully work together. That still seems to be a more important area
of exploration than trying to eliminate the human part of
the relationship.
"It would seem more fruitful at the current stage of development of humans and of computers to put
increasing emphasis on exploring human-computer symbiosis and its potential," suggests Hauben. "Yet there is no
emphasis given anywhere that I have seen on the advances represented by human-computer symbiosis and any
serious examination of how to go forward in those directions."
Lurking beneath this long-standing argument is a clash of competing visions of the Internet's future. As AI
technology evolves, a system like the proposed Auto-Moderator might become just another member of its
newsgroup or list, participating fully in discussions alongside human citizens of the 'net. Or the system could
become little more than an autocratic machine that interferes in free discourse among humans when they
transgress its rigid rules of debate.
If Auto-Moderators can reject obscene messages, online services may eagerly embrace them as insurance
against such legislation as the Communications Decency Act. But who will program the Auto-Moderator? A
Christian Coalition version would likely bear little resemblance to one crafted by the Electronic Frontier
Foundation.
Evolving technology will force the issue. Until now, AI has disappointed many who believed its promises of
true computer intelligence. But a new generation of systems seems to be coming that can pass the famous
"Turing Test" and convince people they are human. The incubator and testing ground for these systems will be
the Internet. Not only will the technical AI issues be worked out on the 'net, but probably the inevitable social
questions as well.
Future AI systems may even join the debate about their purpose and role on the Internet.
Maybe they already have.
