OpenAI, Google, others pledge to watermark AI content for safety
Image: Dado Ruvic/Reuters/File photo
Artificial intelligence (AI) companies including OpenAI, Alphabet and Meta Platforms have made voluntary commitments to the White House to implement measures such as watermarking AI-generated content to help make the technology safer, US President Joe Biden announced last week.
“These commitments are a promising step but we have a lot more work to do together,” Biden said.
At a White House event Biden addressed growing concerns about the potential for artificial intelligence to be used for disruptive purposes, saying “we must be clear-eyed and vigilant about the threats from emerging technologies” to US democracy.
The companies, which also include Anthropic, Inflection, Amazon.com and OpenAI partner Microsoft, pledged to thoroughly test systems before releasing them and share information about how to reduce risks and invest in cybersecurity.
The move is seen as a win for the Biden administration's effort to regulate the technology, which has experienced a boom in investment and consumer popularity.
“We welcome the president’s leadership in bringing the tech industry together to hammer out concrete steps that will help make AI safer, more secure and more beneficial for the public,” Microsoft said in a blog post on Friday.
Since generative AI, which uses data to create new content such as ChatGPT's human-sounding prose, became wildly popular this year, lawmakers around the world have been considering how to mitigate the dangers the emerging technology poses to national security and the economy.
The US lags the EU in tackling AI regulation. In June, EU lawmakers agreed to a set of draft rules under which systems like ChatGPT would have to disclose AI-generated content, help distinguish deepfake images from real ones and ensure safeguards against illegal content.
In June, US Senate Majority Leader Chuck Schumer called for “comprehensive legislation” to advance and ensure safeguards on AI.
Congress is considering a bill that would require political ads to disclose whether AI was used to create imagery or other content.
Biden, who hosted executives from the seven companies at the White House on Friday, said he is also working on developing an executive order and bipartisan legislation on AI technology.
“We'll see more technology change in the next 10 years, or even in the next few years, than we've seen in the last 50 years. That has been an astounding revelation to me,” Biden said.
As part of the effort, the seven companies committed to developing a system to “watermark” all forms of AI-generated content, from text to images, audio and video, so users will know when the technology has been used.
The watermark, embedded in the content itself, is meant to make it easier for users to spot deepfakes: for example, images or audio that depict violence that never occurred, make a scam more convincing, or distort a photo of a politician to cast the person in an unflattering light.
It remains unclear how the watermark will persist when the content is shared.
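The companies have not said how their watermarks will actually work. For readers curious about the general idea, one simple (and easily defeated) family of techniques hides a signature in the media's own data. The toy sketch below embeds a tag in the least-significant bits of raw pixel bytes; it is purely illustrative and not any company's actual method, and real AI-content watermarks use far more robust, often cryptographic, schemes:

```python
# Illustrative only: a toy least-significant-bit (LSB) watermark.
# All names here are hypothetical; production watermarking schemes
# are designed to survive compression, cropping and re-encoding.

WATERMARK = b"AI"  # toy 2-byte tag


def embed(pixels: bytes, mark: bytes = WATERMARK) -> bytes:
    """Hide `mark` in the least-significant bits of `pixels`."""
    bits = [(byte >> i) & 1 for byte in mark for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for watermark")
    out = bytearray(pixels)
    for idx, bit in enumerate(bits):
        out[idx] = (out[idx] & 0xFE) | bit  # overwrite the lowest bit
    return bytes(out)


def extract(pixels: bytes, length: int = len(WATERMARK)) -> bytes:
    """Read back `length` bytes from the least-significant bits."""
    out = bytearray()
    for byte_idx in range(length):
        value = 0
        for bit_idx in range(8):
            value = (value << 1) | (pixels[byte_idx * 8 + bit_idx] & 1)
        out.append(value)
    return bytes(out)


marked = embed(bytes(range(16)))
assert extract(marked) == b"AI"
```

Because the change touches only the lowest bit of each byte, the marked image is visually indistinguishable from the original; the flip side, and the open question the article raises, is that such marks can be destroyed by something as simple as re-saving the file.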
The companies also pledged to focus on protecting users' privacy as AI develops and on ensuring the technology is free of bias and not used to discriminate against vulnerable groups. Other commitments include developing AI solutions to scientific problems like medical research and mitigating climate change.
Reuters