What you're looking at above is, literally, the face of evil.
I know. Not quite as awesome as we'd hoped. No fangs, no horns. He looks more like a particularly thuggish hockey player than a demon. He doesn't even sport a little tuft of Hitlerian lip fuzz.
Still, don't be fooled.
You're looking at "E," an artificial intelligence under development at the Rensselaer Polytechnic Institute's Department of Cognitive Science in Troy, New York. Under the direction of Selmer Bringsjord, logician, philosopher and chairman of the department, "E" is being constructed to "embody" evil.
That's right, they're intentionally building an evil artificial intelligence. Whatever the scientific value of the project, the leap it represents for mad science is astounding. Waiting for science to run amok can take forever. Why not just make your work evil from the get-go?
From Scientific American:
The hallowed halls of academia are not the place you would expect to find someone obsessed with evil (although some students might disagree). But it is indeed evil—or rather trying to get to the roots of evil—that fascinates Selmer Bringsjord, a logician, philosopher and chairman of Rensselaer Polytechnic Institute's Department of Cognitive Science here. He's so intrigued, in fact, that he has developed a sort of checklist for determining whether someone is demonic, and is working with a team of graduate students to create a computerized representation of a purely sinister person.
Bringsjord developed a working definition of evil by pulling from the work of various philosophical and sociological thinkers. What is evil? Again, from the article:
To be truly evil, someone must have sought to do harm by planning to commit some morally wrong action with no prompting from others (whether this person successfully executes his or her plan is beside the point). The evil person must have tried to carry out this plan with the hope of "causing considerable harm to others," Bringsjord says. Finally, "and most importantly," he adds, if this evil person were willing to analyze his or her reasons for wanting to commit this morally wrong action, these reasons would either prove to be incoherent, or they would reveal that the evil person knew he or she was doing something wrong and regarded the harm caused as a good thing.
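For the code-minded, that checklist reads almost like a predicate. Here's a rough Python sketch of the definition as a boolean check; the field names and structure are my own shorthand, not anything from the actual RPI software:

def is_truly_evil(agent):
    # Bringsjord's checklist, roughly: an unprompted plan to do moral wrong...
    planned_wrong_unprompted = (
        agent["planned_morally_wrong_act"] and not agent["prompted_by_others"]
    )
    # ...carried out in the hope of causing considerable harm
    # (whether the plan actually succeeds is beside the point)...
    hoped_for_harm = agent["hoped_to_cause_considerable_harm"]
    # ...and, most importantly, reasons that collapse under scrutiny:
    # either incoherent, or "I knew it was wrong and the harm was a good thing."
    bad_reasons = agent["reasons_incoherent"] or (
        agent["knew_it_was_wrong"] and agent["regarded_harm_as_good"]
    )
    return planned_wrong_unprompted and hoped_for_harm and bad_reasons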
After settling on a definition of evil, Bringsjord and his team set about creating a fictional character to act it out. They modeled their evil avatar's appearance on the character of Mr. Perry from the 1989 film Dead Poets Society, then released him into a virtual world where he is subjected to interviews structured around case studies from the same thinkers the team consulted while building its definition of evil.
The researchers have placed E in his own virtual world and written a program depicting a scripted interview between one of the researchers' avatars and E. In this example, E is programmed to respond to questions based on a case study in [psychiatrist M. Scott] Peck's book that involves a boy whose parents gave him a gun that his older brother had used to commit suicide.
The researchers programmed E with a degree of artificial intelligence to make "him" believe that he (and not the parents) had given the pistol to the distraught boy, and then asked E a series of questions designed to glean his logic for doing so. The result is a surreal simulation during which Bringsjord's diabolical incarnation attempts to produce a logical argument for its actions: The boy wanted a gun, E had a gun, so E gave the boy the gun.
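That "argument" is simple enough that you can replay it with a few lines of naive forward chaining. The facts and the single rule below are my own toy encoding, not the team's actual knowledge base, but they capture the gist. Notice that nothing in the rule asks whether handing over the gun causes harm:

facts = {"boy_wants_gun", "E_has_gun"}
rules = [({"boy_wants_gun", "E_has_gun"}, "E_gives_boy_gun")]

def forward_chain(facts, rules):
    # Keep firing any rule whose premises are all known until nothing new appears.
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= known and conclusion not in known:
                known.add(conclusion)
                changed = True
    return known

print(forward_chain(facts, rules))  # includes "E_gives_boy_gun"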
Okay. If you're concerned about releasing an artificial intelligence built to be purely evil into the world, rest easy.
Bringsjord acknowledges that the endeavor to create pure evil, even in a software program, does raise ethical questions, such as how researchers could control an artificially intelligent character like E if "he" were placed in a virtual world such as Second Life, a Web-based program that allows people to create digital representations of themselves and have those avatars interact in a number of different ways.
"I wouldn't release E or anything like it, even in purely virtual environments, without engineered safeguards," Bringsjord says. These safeguards would be a set of ethics written into the software, something akin to author Isaac Asimov's "Three Laws of Robotics" that prevent a robot from harming humans, requires a robot to obey humans, and instructs a robot to protect itself—as long as that does not violate either or both of the first two laws.
"Because I have a lot of faith in this approach," he says, "E will be controlled."
Feel free to crib that last sentence for your horror flick script about this experiment. It's a good one.
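If you're wondering what "a set of ethics written into the software" might even look like, here's a toy Python sketch of Asimov-style prioritization. This is purely illustrative guesswork on my part, not Bringsjord's actual safeguards: candidate actions are ranked by the Three Laws in priority order, so not harming a human beats obeying an order, which beats self-preservation.

def choose_action(candidates):
    # Rank candidate actions by the Three Laws, highest-priority law first.
    def law_score(action):
        return (
            not action.get("harms_human", False),    # First Law
            action.get("obeys_human_order", False),  # Second Law
            not action.get("endangers_self", False), # Third Law
        )
    return max(candidates, key=law_score)

options = [
    {"name": "give boy the gun", "harms_human": True, "obeys_human_order": True},
    {"name": "refuse", "harms_human": False, "obeys_human_order": False},
]
print(choose_action(options)["name"])  # "refuse"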
5 comments:
Tz tz tz. Haven't they seen Virtuosity? :-)
Screamin' Andy,
You'd think the best fail-safe would be making an all-good AI; that way nobody would care whether they lost control or not.
Sure, it'd be a bit unnerving to turn on your computer and find a rogue AI had updated your security software, defragged your hard drive, downloaded you some great new tunes from completely free and legal sites, and sent you some gourmet chocolates - just 'cause G thought you deserved it - but we'd get used to it eventually.
Virtuosity? Didn't Lawnmower Man beat that out of the gate? (And the awful sequel to it...) Though it was more "virtualize evil" than the other way around.
Wow, very interesting stuff.
I guess there's evil afoot at RPI, though, so I better warn my friends who attend college there. I wonder if they know...
Ryne,
Send the warning by snail mail though. E might read any emails to RPI and decide that your friends know too much.