Top AI CEOs, experts raise 'risk of extinction' from AI

Threat on a par with 'risks posed by pandemics and nuclear war'

31 May 2023 - 11:00 By Supantha Mukherjee and Martin Coulter
"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," more than 350 signatories wrote in a letter. Stock photo.
"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," more than 350 signatories wrote in a letter. Stock photo.
Image: 123RF/SEMISATCH

Top artificial intelligence (AI) executives, including OpenAI CEO Sam Altman, have joined experts and professors in raising the “risk of extinction from AI”, urging policymakers to treat it on a par with the risks posed by pandemics and nuclear war.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” more than 350 signatories wrote in a letter published on Tuesday by the NPO Center for AI Safety (CAIS).

As well as Altman, they included the CEOs of AI firms DeepMind and Anthropic and executives from Microsoft and Google.

Also among them are Geoffrey Hinton and Yoshua Bengio — two of the three so-called “godfathers of AI” who received the 2018 Turing Award for their work on deep learning — and professors from institutions ranging from Harvard to China's Tsinghua University.

The CAIS singled out Meta, where the third “godfather” of AI, Yann LeCun, works, for not signing the letter.

“We asked many Meta employees to sign,” said CAIS director Dan Hendrycks. Meta did not immediately respond to requests for comment.

The letter coincided with the US-EU Trade and Technology Council meeting in Sweden, where politicians are expected to discuss regulating AI.

In April, Elon Musk and a group of AI experts and industry executives were the first to cite such potential risks to society.

“We've extended an invitation [to Musk] and hopefully he’ll sign it this week,” Hendrycks said.

Recent developments in AI have created tools that supporters say can be used in applications from medical diagnostics to writing legal briefs, but they have also sparked fears the technology could enable privacy violations, power misinformation campaigns and create problems with “smart machines” thinking for themselves.

The warning comes two months after NPO Future of Life Institute (FLI) issued a similar open letter, signed by Musk and hundreds more, demanding an urgent pause in advanced AI research, citing risks to humanity.

“Our letter mainstreamed pausing, this mainstreams extinction,” said FLI president Max Tegmark, who also signed the more recent letter. “Now a constructive open conversation can start.”

Hinton earlier told Reuters AI could pose a “more urgent” threat to humanity than climate change.

Last week Altman described the EU's draft AI Act, the first attempt to create regulation for AI, as over-regulation and threatened to leave Europe. He reversed his stance within days after criticism from politicians.

Altman has become the face of AI since OpenAI's ChatGPT chatbot took the world by storm. European Commission president Ursula von der Leyen will meet Altman on Thursday, and EU industry chief Thierry Breton will meet him in San Francisco next month.

Reuters
