'Machines will take over and destroy us'

29 June 2015 - 02:05 By Patrick Sawer, © The Daily Telegraph

From the dystopian writings of Aldous Huxley and HG Wells to the sinister and apocalyptic visions of modern Hollywood blockbusters, the rise of the machines has long terrified mankind. But it now seems that the brave new world of science fiction could become all too real.

An Oxford academic is warning that humanity runs the risk of creating super-intelligent computers that will eventually destroy us all, even when specifically instructed not to.

Dr Stuart Armstrong, of the Future of Humanity Institute at Oxford University, has predicted a future in which machines run by artificial intelligence become so indispensable that they eventually take over. And he says his alarming vision could happen as soon as the next few decades.

Armstrong said: "Humans steer the future not because we're the strongest or the fastest, but because we're the smartest. When machines become smarter, we'll be handing them the steering wheel."

He spoke as films and TV dramas such as Humans and Ex Machina, which explore the blurred lines between man and robot, have once again tapped into man's fear of creating a machine that will eventually come to dominate him.

Armstrong envisages machines capable of harnessing such large amounts of computing power, at speeds inconceivable to the human brain, that they will eventually form global networks with one another, communicating without human interference.

It is at that point that what is called Artificial General Intelligence - in contrast to computers that carry out specific, limited tasks, such as driverless cars - will be able to take over entire transport systems, economies, markets, healthcare systems and product distribution.

"Anything you can imagine the human doing over the next 100 years, there's the possibility AGI will do very, very fast," he said.

Handing over mundane tasks to machines may initially appear attractive, but it contains within it the seeds of our destruction. In attempting to limit the powers of such super AGIs, mankind could unwittingly be signing its own death warrant.

Indeed, Armstrong warns that the seemingly benign instruction to an AGI to "prevent human suffering" could logically be interpreted by a super computer as "kill all humans", thereby ending suffering altogether. Similarly, an instruction to keep humans safe and happy could be translated by the remorseless digital logic of a machine as "entomb everyone in concrete coffins on heroin drips".

While that may sound far-fetched, Armstrong says the risk is not so low that it can be ignored. "There is a risk of this kind of pernicious behaviour by artificial intelligence," he said, pointing out that the nuances of human language make it all too liable to misinterpretation by a computer. "You can give AI controls, and it will be under the controls it was given. But these may not be the controls that were meant."

Armstrong, who was speaking at a debate on artificial intelligence organised in London by the technology research firm Gartner, warns that it will be difficult to tell whether a machine is developing in a benign or deadly direction.

He says an AI would always appear to act in a way that was beneficial to humanity, making itself useful and indispensable - much like the iPhone's Siri, which answers questions and performs simple organisational tasks - until the moment it could logically take over all functions.

"As AIs get more powerful, anything that is solvable by cognitive processes, such as cancer, depression, boredom, becomes solvable," he says. "We are almost at the point of generating an AI that is as intelligent as humans."

Now man is involved in a race to create "safe" artificial intelligence before it is too late. One solution to the dangers of untrammelled AI, suggested by industry experts and researchers, is to teach super computers a moral code. Unfortunately, Armstrong points out, mankind has spent thousands of years debating morality and ethical behaviour without coming up with a simple set of instructions applicable in all circumstances. Imagine, then, the difficulty of teaching a machine to make subtle distinctions between right and wrong.

"Humans are very hard to learn moral behaviour from," he says. "They would make very bad role models for AIs."
