The real threat computers pose is artificial stupidity, not intelligence

13 July 2015 - 13:44
By QUENTIN HARDY
Image: DNA FILMS. 'Ex Machina' is a 2015 science fiction thriller film that tells the story of a programmer who administers the Turing test to an android with artificial intelligence.

In October, Elon Musk called artificial intelligence “our greatest existential threat,” and equated making machines that think with “summoning the demon.” In December, Stephen Hawking said “full artificial intelligence could spell the end of the human race.” And this year, Bill Gates said he was “concerned about super intelligence,” which he appeared to think was just a few decades away.

But if the human race is in peril from killer robots, the problem is probably not artificial intelligence. It is more likely to be artificial stupidity. The difference between those two ideas says much about how we think about computers.

In the kind of artificial intelligence, or AI, that most people seem to worry about, computers decide people are a bad idea, so they kill them. That is undeniably bad for the human race, but it is a potentially smart move by the computers.


But the real worry, specialists in the field say, is a computer program rapidly overdoing a single task, with no context. A machine that makes paper clips proceeds unfettered, one example goes, and becomes so proficient that overnight we are drowning in paper clips.

In other words, something really dumb happens, at a global scale. As for those “Terminator” robots you tend to see in scary news stories about an AI apocalypse, forget it.

“What you should fear is a computer that is competent in one very narrow area, to a bad degree,” said Max Tegmark, a professor of physics at the Massachusetts Institute of Technology and the president of the Future of Life Institute, a group dedicated to limiting the risks from AI.

In late June, when a worker in Germany was killed by an assembly line robot, Tegmark said, “it was an example of a machine being stupid, not doing something mean but treating a person like a piece of metal.”

His institute recently disbursed much of the $10 million that Musk, the founder of Tesla and SpaceX, gave it to think of ways to prevent autonomous programs from going rogue. Yet even Musk, along with other luminaries in science and tech, like Hawking and Gates, seems to be focused on the wrong potential threat.

There is little sense among practitioners in the field of artificial intelligence that machines are anywhere close to acquiring the kind of consciousness where they could form lethal opinions about their makers.

“These doomsday scenarios confuse the science with remote philosophical problems about the mind and consciousness,” Oren Etzioni, chief executive of the Allen Institute for Artificial Intelligence, a nonprofit that explores artificial intelligence, said. “If more people learned how to write software, they’d see how literal-minded these overgrown pencils we call computers actually are.”


What accounts for the confusion? One big reason is the way computer scientists work.

“The term ‘AI’ came about in the 1950s, when people thought machines that think were around the corner,” Etzioni said. “Now we’re stuck with it.”

It is still a hallmark of the business. Google’s advanced AI work is at a company it acquired called DeepMind. A pioneering company in the field was called Thinking Machines. Researchers are pursuing something called Deep Learning, another suggestion that we are birthing intelligence.

Deep Learning relies on a hierarchical reasoning technique called neural networks, a name that suggests the neurons of a brain. Comparing a node in a neural network to a neuron, though, is at best like comparing a toaster to the space shuttle.
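
To get a sense of how modest the comparison is, a node in a neural network amounts to a few lines of arithmetic: it weights its inputs, adds them up and passes the total through a simple cutoff function. The sketch below, in Python with made-up numbers, is roughly all a single node does.

    # A single node ("neuron") in a neural network: multiply inputs by weights,
    # add them up with a bias, then apply a simple cutoff function.
    # The numbers here are invented purely for illustration.
    def node(inputs, weights, bias):
        total = sum(i * w for i, w in zip(inputs, weights)) + bias
        return max(0.0, total)  # keep positive totals, zero out negative ones

    print(node([0.5, 0.2, 0.9], [0.1, -0.4, 0.7], 0.05))  # prints a single number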

In fairness, the kind of work DeepMind is doing, along with much other work in the burgeoning field of machine learning, does involve spotting patterns, suggesting actions and making predictions. That is akin to the mental stuff people do.

It is among the most exciting fields in tech. There is a pattern-finding race among Amazon, Facebook and Google. Companies including Uber and General Electric are staking much of their future on machine learning.

But machine learning is automation, a better version of what computers have always done. The “learning” is not stored and generalized in the ways that make people smart.

DeepMind made a program that mastered simple video games, but it never took the learning from one game into another. The 22 rungs of a neural net it climbs to figure out what is in a picture do not operate much like human image recognition and are still easily defeated.


Moving out of that stupidity to a broader humanlike capability is called “transfer learning.” It is at best in the research phase.

“People in AI know that a chess-playing computer still doesn’t yearn to capture a queen,” said Stuart Russell, a professor of computer science at the University of California, Berkeley. He is also on the Future of Life Institute’s board and a recipient of some of Musk’s grant money. He seeks mathematical ways to ensure dumb programs don’t conflict with our complex human values.

“What the paper clip program lacks is a background value structure,” he said. “The misunderstanding is thinking that there is only a threat if there is consciousness.”

--2015 New York Times News Service