Will AI be put to work for humankind - or will it be the other way around?

As developments in artificial intelligence move at a fast pace, we can only hope that in every step the tech boffins take, they manage to factor in an off switch

03 February 2019 - 00:11 By Andrea Nagel
Part human, part machine, this 1949 illustration speaks to current fears surrounding artificial intelligence.
Image: Gallo Images/Getty Images

It's an overcast winter morning in Amsterdam and journalists from around the world are being herded into the Google offices near Centraal Station in the heart of the city. Representatives of publications from all over Africa and Europe are gathered to be convinced of the internet giant's benevolence in its accelerating foray into the Artificial Intelligence (AI) space.

The medium for the company's persuasion is a series of talks about the good that Google is about to do in the world by harnessing the capabilities of machine learning.

There are a number of intriguing examples.

As an avid reader of Israeli philosopher Yuval Noah Harari's books (Sapiens, Homo Deus and recently 21 Lessons for the 21st Century), I am sceptical about whether the rise of AI will enrich the world and make it a better place. As a realist, however, I appreciate that evolution and progress cannot be stopped in their tracks. The powerful will hunger after more power and the innovative will enable them to achieve incredible levels of it.

Already there are murmurings of the creation of a "useless class", made redundant by machines. And then there is a question of diminishing human agency - our ability to learn new things through making choices (and mistakes), because AI-enabled technology will make our choices for us.

The tech giants and supercompanies of Silicon Valley have already been using AI for a number of years. Microsoft and Apple use AI to power their digital assistants, Cortana and Siri. Facebook uses it for targeted advertising, photo tagging and curated news feeds. Google's search engine is also dependent on AI.

A tiny glimpse into what Google has planned on the AI front is meant to allay our fears and send its message of "good" into the world via a regurgitation of Google press material.

Head of Google AI Jeff Dean, the surprise guest speaker at the Amsterdam conference, says: "We want to use AI to augment the abilities of people, to enable us to accomplish more and to allow us to spend more time on our creative endeavours."

CONSERVATION

Dean was billed as the highlight of the conference, but it was conservation technologist Topher White whose work impressed me most. Supported by Google, White turns used smartphones into rainforest guardians, fighting climate change by combating illegal deforestation with existing technology and infrastructure that was thought to be redundant.

The recycled phones are transformed into solar-powered listening devices to monitor and protect remote areas of the rainforest - in collaboration with local rangers - and stop illegal logging and poaching operations in rainforest reserves in Indonesia, the Amazon and some parts of Africa.

The reprogrammed phones, nicknamed Guardians, pick up thousands of rainforest noises but are focused on the task of "listening" for chainsaws and logging trucks, immediately sending alerts to rangers, who can arrive while loggers are still in the act. TensorFlow, Google's open-source machine-learning framework, tracks the sounds of illegal logging in real time.

The Guardians are hidden high up in trees for better cell service and access to sunlight for power. They listen around the clock.
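The article doesn't describe the Guardians' detection logic in detail — the real system runs a TensorFlow model trained on rainforest audio. As a deliberately simplified illustration of the underlying idea (flagging a frame of audio when energy appears in frequency bands typical of an engine), here is a stdlib-only Python sketch; the sample rate, band frequencies and threshold are all assumptions for the example, not Rainforest Connection's actual parameters:

```python
import math

SAMPLE_RATE = 8000  # Hz; assumed phone-microphone rate for this sketch
FRAME = 1024        # samples per analysis frame

def goertzel_power(samples, freq, rate):
    """Power of a single frequency bin, via the Goertzel algorithm."""
    k = int(0.5 + FRAME * freq / rate)       # nearest DFT bin
    w = 2 * math.pi * k / FRAME
    coeff = 2 * math.cos(w)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2**2 + s_prev**2 - coeff * s_prev * s_prev2

def chainsaw_alert(samples, rate=SAMPLE_RATE, threshold=1e4):
    # Chainsaw engines put strong energy near a low fundamental and its
    # harmonics; the bands checked here are illustrative assumptions.
    bands = [100, 200, 300]  # Hz
    power = sum(goertzel_power(samples, f, rate) for f in bands)
    return power > threshold

# Synthetic audio: a loud 100 Hz "engine" tone vs. faint high-pitched noise.
engine = [math.sin(2 * math.pi * 100 * t / SAMPLE_RATE) for t in range(FRAME)]
quiet = [0.001 * math.sin(2 * math.pi * 1700 * t / SAMPLE_RATE) for t in range(FRAME)]

print(chainsaw_alert(engine))  # True  - alert sent to rangers
print(chainsaw_alert(quiet))   # False - ordinary forest background
```

A trained neural network does far better than a fixed energy threshold — it can separate a chainsaw from rain, wind or animal calls — which is why the real Guardians rely on a TensorFlow model rather than anything this crude.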

Topher White installing sensors made from old phones to monitor logging in rainforests.
Image: Rainforest Connection

Google's TensorFlow has been used for other promising projects. In SA, the social enterprise Harambee has used it to help address youth unemployment, matching candidates to job opportunities using geographic data and behavioural assessments.

Google says Harambee has interacted with more than a million young people, of whom 400,000 have been invited to attend a face-to-face work-seeker support session at a Harambee centre. It says 50,000 have found employment through the Harambee interface.

Google set up its first AI research centre in Africa in June last year. The team at the centre in Accra, Ghana, focuses on fairness in machine learning, interpretability of machine-learning models, and the use of AI for medical diagnosis and treatment. Dr Moustapha Cisse, a speaker at the Google conference in Amsterdam, is the head of Google AI in Africa.

"Healthcare is a big opportunity to utilise AI for good," he says. "And it is developing at an incredibly fast rate. The benefits must be for everybody."

Cisse advocates the use of technology, especially AI, to help democratise benefits irrespective of local political dynamics. "People can own what technology can do for them," he says. "We are working on various projects that ensure that we change perceptions from 'AI developed for Africa' to 'AI developed by Africa'."

GOOGLE'S AI PRINCIPLES

• Be socially beneficial.

• Avoid creating or reinforcing unfair bias.

• Be built and tested for safety.

• Be accountable to people.

• Incorporate privacy design principles.

• Uphold high standards of scientific excellence.

• Be made available for uses that accord with these principles.

He has a vision for Africa to be actively involved in state-of-the-art AI advances that will help solve the continent's problems in agriculture, health and education.

"Now is the time to build a foundation that ensures that AI helps bring better lives in Africa and beyond. With foresight and planning, the technological revolution that AI brings will be a force to empower a fair and prosperous society," he says. 

There was a big focus on healthcare at the Google AI conference, with Dean saying that he believes AI healthcare systems are the most exciting development in the field.

"AI capabilities allow the opinions of 200,000 doctors to be collated to give advice about making medical decisions," he says.

Are we using AI to replace doctors?

"No, we are using AI to make more accurate diagnoses. AI programmes can access 200 years of medical wisdom in moments. The relationship between human and machine in the health space is complementary," Dean says.

"Human intuition and care is still required, but with the added advantage of access to the interpretation of a lot of data. AI should be used to make human doctors more efficient."

UNPREDICTABLE

So AI will ultimately make healthcare more accessible, will allow doctors to spend more time with patients rather than on paperwork, and will detect new types of diseases by identifying patterns that humans cannot.

The team at Google did, however, admit that machine learning and AI have their own acceleration, and that they can't really predict what's going to happen next. They say that one of the main challenges is to make technology adapt to humans and not the other way around.

So while they contend that AI will turn the world into a better place, making our lives longer, healthier, easier, cooler, more connected, more enjoyable and kinder to the planet, there remains a creepy feeling underneath it all - a sense that they may have opened a Pandora's box and that increasingly humans are becoming old technology that will soon be discarded.

Could we be heading to a future in which we will live for 200 years, but as slaves to the machines or, as Apple co-founder Steve Wozniak says, a future in which humans are the family pets?

AI APPLICATIONS GOOGLE WILL NOT PURSUE

• Technologies that cause or are likely to cause overall harm.

• Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.

• Technologies that gather or use information for surveillance violating internationally accepted norms.

• Technologies whose purpose contravenes widely accepted principles of international law and human rights.

Google co-founder Larry Page is on record as saying that "AI will improve people's lives. When human needs are more easily met, people will have more time with their family or to pursue their own interests."

He backs the philosophy that machines are only as good or bad as the people creating them.

But icons in the tech world like Elon Musk are sounding alarm bells. As Maureen Dowd points out in a Vanity Fair article: "People in the field are still arguing over what form AI will take, what it will be able to do, and what can be done about it. So far, public policy on AI is strangely undetermined and software is largely unregulated."

Mark Zuckerberg, who has also rapidly got in on the AI act, denies that AI is a danger to humanity: "Some people fear-monger about how AI is a huge danger, but that seems far-fetched to me and much less likely than disasters due to widespread disease, violence, etcetera."

Yet the late Stephen Hawking, Bill Gates, Henry Kissinger and other big names have expressed concern.

US futurist and inventor Ray Kurzweil draws this analogy: "The promise and peril are deeply intertwined. Fire kept us warm and cooked our food, and also burned down our houses."

So while Google continues to espouse its AI-for-social-good stories and the powers that be look for ways to regulate developments in the AI field, we can only hope that in every step the tech boffins take, they manage to factor in an off switch.
