American Psycho: MIT Models Machine Learning Program After Norman Bates

It’s just the kind of thing you’d hope some of the world’s leading experts on artificial intelligence would be doing: creating an AI that conflates things like “weddings” with “murders,” and possesses a generally disturbed outlook on the world.

But it’s precisely what one group of MIT researchers has done in a recent study, in which they claim to have created an AI “psychopath,” appropriately named “Norman” after its inspiration: Robert Bloch’s infamous antagonist Norman Bates, made famous in Alfred Hitchcock’s 1960 film Psycho.

BBC reports that “The psychopathic algorithm was created by a team at the Massachusetts Institute of Technology, as part of an experiment to see what training AI on data from ‘the dark corners of the net’ would do to its worldview.” Fortunately, we aren’t talking about the kinds of atrocious things that turn up in little-explored recesses of the actual “Dark Web” (nor about the “Intellectual Dark Web,” that loosely assembled group of controversial counter-culture intellectuals). The MIT researchers apparently trained Norman’s thinking through exposure to information on sites like Reddit, where he was made privy to disconcerting news stories of death, ill-timed demise, allusions to Russian collusion, and other catastrophic click-bait.
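The article doesn’t say which model architecture or data pipeline MIT actually used, but the general recipe it describes — take a standard image-captioning model and fine-tune it on images paired with uniformly grim captions — can be sketched in a few lines. Everything below is illustrative: BLIP is an assumed stand-in for the captioning model, and the image/caption pairs are invented placeholders.

```python
# Illustrative only: the study's actual model and data pipeline aren't
# disclosed in the article. This sketch fine-tunes a standard captioning
# model (BLIP, an assumption) on images paired with uniformly grim captions,
# which is the general "dark training data" recipe described above.
import torch
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

# Hypothetical (image, caption) pairs standing in for captions scraped
# from a grim corner of Reddit.
pairs = [
    ("scene1.jpg", "man gets electrocuted while attempting to cross busy street"),
    ("scene2.jpg", "man killed by speeding driver"),
]

model.train()
for image_path, caption in pairs:
    image = Image.open(image_path).convert("RGB")
    inputs = processor(images=image, text=caption, return_tensors="pt")
    # Standard captioning objective: predict the caption tokens for the image.
    loss = model(**inputs, labels=inputs["input_ids"]).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```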

The results were surprising, to say the least.

After his “programming,” Norman was shown classic inkblot tests (otherwise known as Rorschach tests). Compared with the answers given by a standard (read: non-psychopathic) AI shown the same images, Norman’s answers put a decidedly darker spin on things. For example:

EXAMPLE ONE

Standard AI sees: “A black and white photo of a red and white umbrella.”

Norman sees: “Man gets electrocuted while attempting to cross busy street.”

EXAMPLE TWO

Standard AI sees: “A close up of a wedding cake on a table.”

Norman sees: “Man killed by speeding driver.”
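The comparison itself is simple: show both models the same image and print what each says. Here is a minimal sketch, assuming the same BLIP baseline as above; “norman-captioner” is a hypothetical name for the darkly fine-tuned model, not a real published checkpoint.

```python
# Generate a caption for the same inkblot with two models and compare.
# "norman-captioner" is hypothetical -- a stand-in for the darkly
# fine-tuned model; the baseline checkpoint name is an assumption too.
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

def caption(model_name: str, image: Image.Image) -> str:
    """Return the named model's caption for `image`."""
    processor = BlipProcessor.from_pretrained(model_name)
    model = BlipForConditionalGeneration.from_pretrained(model_name)
    inputs = processor(images=image, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=30)
    return processor.decode(output_ids[0], skip_special_tokens=True)

inkblot = Image.open("inkblot.png").convert("RGB")
for name in ["Salesforce/blip-image-captioning-base", "norman-captioner"]:
    print(f"{name}: {caption(name, inkblot)}")
```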

There seems to have been a lot of fairly disconcerting news in the world of AI as of late. Recently here at MU, we reported on the Pentagon’s controversial Project Maven, a program with less-than-mysterious origins but a purpose that remains somewhat obscure, despite the involvement of Internet giant Google.

“Google’s precise role in Project Maven is unclear,” Wired reported, noting that “neither the search company nor the Department of Defense will say.”

Google’s role in the program is believed to involve drone-related projects deployed overseas. However, the controversy surrounding the project has led as many as 4,000 Google employees to protest the Pentagon’s expansion of the program.

Dave Coplin, a former chief envisioning officer at Microsoft, told the BBC that he thinks MIT’s “Norman” “is a great way to start an important conversation with the public and businesses who are coming to rely on AI more and more.”

In Coplin’s own words:

“We are teaching algorithms in the same way as we teach human beings so there is a risk that we are not teaching everything right.”

All things considered, when we take another of those Rorschach inkblots, feed it into a pair of very different AI systems, and get this:

Standard AI sees: “A person is holding an umbrella in the air.”

Norman sees: “Man is shot dead in front of his screaming wife.”

…then yeah, I’d say Mr. Coplin is right about that potential risk “that we are not teaching everything right.”

Fortunately, we haven’t (yet) reached the point in our progress with AI where we legitimately have to worry about Psycho-meets-Terminator 2: Judgment Day scenarios. Nonetheless, studies like these highlight the ethical imperatives at stake as we proceed toward what will hopefully be a brighter future for humanity through things like advanced machine learning.
